Allan Tucker
Professor of Artificial Intelligence in the Department of Computer Science, Brunel University of London
"It looks like a solid piece of work that highlights what many researchers in AI fear - that of automation bias. It is only one study and the limitations of it are explicitly highlighted in the paper.
In terms of limitations, the authors looked at only one AI system; there are many different systems and technologies, and some may be better at supporting or explaining decisions than others. The healthcare professionals selected were clearly experienced and interested in taking part in the study; other less tech-savvy or less experienced professionals may behave differently. It is also worth noting that some major changes to the endoscopy department were undertaken in the middle of the study, and the authors make it clear that randomised crossover trials are needed to support more robust claims.
There have been other reported examples of automation bias that highlight some of the risks in healthcare more generally.
This is not unique to AI systems and is a risk with the introduction of any new technology, but the risk involved with AI systems is potentially more extreme. AI aims to imitate human decision-making, and this can place much more pressure on a human's own decision-making than other technologies do. For example, an expert could feel under pressure to agree with the new technology: imagine if a mistake is made and the human expert has to defend overruling an AI decision. They could see it as less risky simply to agree with the AI.
The paper is particularly interesting because it indicates that AI still spots more cancers overall. The ethical question then is whether we trust AI over humans. Often we expect a human to oversee all AI decision-making, but if human experts put less effort into their own decisions as a result of introducing AI systems, this could be problematic.
One side of the argument would be: ‘Who cares, if more cancer is identified?’
The other side may counter: ‘But if the AI is biased and making its own mistakes, then it could be making them at a massive scale if left unsupervised.’