Wednesday, May 22, 2024

Artificial Intelligence and the Clinician – balancing the potentials and the pitfalls of modern digital healthcare

By Clare Rainey, Course Director BSc (Hons) Diagnostic Radiography and Imaging, Ulster University 

Until relatively recently, vision had been considered an attribute of animate beings only. The ImageNet challenge provided a showcase for artificial intelligence (AI) tools for identifying everyday objects. This has led to the use of such systems in myriad ways, such as facial recognition software, self-driving cars and image interpretation in medicine. Many applications are used in day-to-day life, but use in higher-stakes situations is, needfully, attracting greater scrutiny. As computing power increases and proof-of-concept applications become mainstream, care should be taken regarding how best to adopt these systems into our lives – how can we leverage what computers are best at to optimise the things that humans are best at, without causing harm?

I am an academic and researcher at Ulster University in Northern Ireland and I work with a professionally diverse team currently researching human-computer interaction in radiology. We are conducting several studies investigating the impact of different forms of AI feedback on the clinical end-user and how this relates to diagnostic performance, trust, decision switching and automation bias.

As we emerge into a ‘new post-pandemic normal’ in healthcare, we are realising that to protect the health service we need to make our practice more sustainable. It has already been established that the NHS in the UK aims to include AI systems in healthcare workflows to increase efficiency and accomplish automatable tasks. There has been a realisation that the future of healthcare will be more data driven, with patients and service users more interested in monitoring their own health journey. For instance, the sale of wearable devices to manage and monitor health is expected to reach 440 million devices in 2024, generating a deluge of additional health-related data. Radiographic images produced from radiological examinations are also a form of digitised data. In the five years preceding the Care Quality Commission’s 2018 report, demand for radiology services had grown by more than 16%, and it has increased further during and following the COVID-19 pandemic. This additional data, created through increased use of services, may place significant pressure on an already depleted workforce. AI has been proposed to speed up clinical work by assisting with patient positioning, automating contrast media calculations and supporting image interpretation for reporting radiographers and radiologists.

Radiology professionals providing diagnoses from radiological images are aware of biases which may inherently exist, such as confirmation, anchoring and satisfaction of search, all of which lead to errors. Additional, unfamiliar biases and issues present themselves when considering the human-computer relationship in this setting. Lack of trust has been cited as a barrier to the implementation of AI in radiology departments; however, the professional community is now becoming aware of potential scenarios where the user becomes over-, rather than under-, reliant on the system to make their decision. Automation bias is the cognitive bias of relying on the system over one’s own decision. This has been investigated in other fields of medicine, such as prescribing and cardiology, and has been shown to affect inexperienced users more strongly.

With the multiple issues surrounding the use of AI in radiology, would it be wiser to exclude it from the radiology of the future? The ‘Godfather of AI’, Geoffrey Hinton, said in 2016 that advances in computer vision meant we should stop training radiologists, as they would be replaced by machines in the near future. This sparked great fear about job security in radiology and caused backlash from radiologists who felt that the depth of their role was oversimplified. Current thinking leads us to accept that those who embrace the technology critically, and work with developers to create systems which are useful for the human end user, may replace those who are more reticent. The professional community can begin to imagine a future where, due to AI assistance, they can spend more time with the patient and on non-automatable tasks. For this to happen, users need to be able to calibrate their trust in the system they are using. The high-performance systems used for computer vision tasks are usually built on a neural network architecture using machine and deep learning, and are therefore less interpretable than older, simpler systems, such as the human-programmed shape and pattern identification systems previously used in mammography. When the AI system makes errors, it is not always entirely clear why. Means of explaining where the AI directed its attention in making its decision have been proposed, with heatmaps being one of the most common. The preference of the end user should be taken into consideration when providing different forms of AI feedback, and consideration should be given to the impact this may have on their decision making and ultimate diagnostic accuracy.
Prior research by the Society and College of Radiographers’ AI Advisory Board (of which the author is a member) has found that radiographers prefer an overall performance measure of the system on a given task, rather than a form of visual feedback, although heatmap provision was also considered desirable. Recent research by Saporta et al. (2022) proposes caution with the use of heatmaps, as the pathology localisation of even the best-performing systems proved coarse at best. Upcoming publications from the team at Ulster University investigate and quantify how different forms of AI feedback impact decision switching (‘changing of mind’), automation bias and trust in a range of radiography professionals, and aim to shed light on the best way forward with AI – where we can create a healthy symbiotic relationship, in which we feel empowered to do what we do best as humans, with technological support to allow this to happen.
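To make the idea of a heatmap concrete, the sketch below illustrates occlusion sensitivity, one simple way such feedback can be generated: a patch of the image is blanked out at each position, and the drop in the model’s output score indicates how much the model relied on that region. This is a minimal, hypothetical illustration – the `toy_classifier` is a stand-in for a trained model, not any system discussed above.

```python
import numpy as np

def toy_classifier(image):
    # Hypothetical stand-in for a trained model: it scores an image by
    # the brightness of a fixed "region of interest" (rows/cols 8-16).
    return image[8:16, 8:16].sum()

def occlusion_heatmap(image, model, patch=4):
    """Slide a zeroed patch over the image and record how much the
    model's score drops at each position (larger drop = more relied on)."""
    base_score = model(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # blank this patch
            heat[i // patch, j // patch] = base_score - model(occluded)
    return heat

rng = np.random.default_rng(0)
image = rng.random((24, 24))        # toy 24x24 "radiograph"
heat = occlusion_heatmap(image, toy_classifier)
# The hottest cells of `heat` coincide with the region the toy model
# attends to; in a clinical system they would be overlaid on the image.
```

In practice, heatmaps for deep networks are usually produced with gradient-based methods rather than exhaustive occlusion, but the interpretive caveat from the text applies to both: the resulting localisation can be coarse.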
