Introduction: Radiographer reporting is accepted practice in the UK. With a national shortage of radiographers and radiologists, artificial intelligence (AI) support in reporting may help to minimise the backlog of unreported images. Modern AI systems are not well understood by human end-users; this has ethical implications and can affect human trust in these systems through over- and under-reliance. This study investigates reporting radiographers' perceptions of AI, gathers information to explain how they may interact with AI in the future, and identifies features perceived as necessary for appropriate trust in these systems.
Methods: A Qualtrics® survey was designed and piloted by a team of UK AI expert radiographers. This paper reports the third part of the survey, which was open to reporting radiographers only.
Results: 86 responses were received. Most respondents (n = 53, 62%) were confident in how an AI reached its decision, but fewer than a third would be confident communicating the AI's decision to stakeholders. Affirmation from AI would improve confidence (n = 49, 57%), and disagreement would prompt respondents to seek a second opinion (n = 60, 70%). Trust in AI for image interpretation was moderate; system performance data and visual explanations from the AI would increase trust.
Conclusions: Responses indicate that AI will have a strong impact on reporting radiographers' decision making in the future. Respondents are confident in how an AI makes decisions but less confident in explaining this to others. Trust levels could be improved with explainable AI solutions.
Implications for practice: This survey clarifies UK reporting radiographers' perceptions of AI used for image interpretation, highlighting key issues with AI integration.
Keywords: AI; Artificial intelligence; Clinical imaging; Digital health; Education; Radiography; Workforce training.
Copyright © 2022 The Author(s). Published by Elsevier Ltd. All rights reserved.