The use of AI may lead to a loss of skills among those who perform colonoscopies

The introduction of artificial intelligence (AI) to assist in performing colonoscopies is associated with a reduction in the ability of healthcare professionals to detect precancerous growths (adenomas) in the colon without the aid of AI, according to an article published in The Lancet Gastroenterology & Hepatology.

13/08/2025 - 00:30 CEST
Expert reactions

Venet Osmani

Professor of Clinical AI and Machine Learning, Queen Mary University of London

Science Media Centre UK

There are several reasons to be cautious about concluding that AI alone is causing a deskilling effect in clinicians. The study's findings might be influenced by other factors.

For example, the number of colonoscopies performed nearly doubled after the AI tool was introduced, going from 795 to 1382. It's possible that this sharp increase in workload, rather than the AI itself, could have led to a lower detection rate. A more intense schedule might mean doctors have less time or are more fatigued, which could affect their performance.

Furthermore, the introduction of a new technology like AI often comes with other changes, such as new clinical workflows or a shift in how resources are used. These organisational changes, which the study did not measure, could also be affecting detection rates.

Finally, the study suggests a drop in skill over just three months. This is a very short period, especially for a clinician with over 27 years of experience. It raises the question of whether a true loss of skill could happen so quickly, or whether the doctors had simply changed their habits in a way that affected their performance when the AI was not available.

The author has declared that they have no conflicts of interest.

Allan Tucker

Professor of Artificial Intelligence in the Department of Computer Science, Brunel University of London

Science Media Centre UK

"It looks like a solid piece of work that highlights what many researchers in AI fear - that of automation bias. It is only one study and the limitations of it are explicitly highlighted in the paper.

In terms of limitations, the authors looked at only one AI system; there are many different systems and technologies, some of which may be better than others at supporting or explaining decisions. The healthcare professionals selected were clearly experienced and interested in taking part in the study; other less tech-savvy or less experienced professionals may behave differently. It is also worth noting that some major changes to the endoscopy department were made in the middle of the study, and the authors make it clear that randomised crossover trials are needed to support more robust claims.

There have been other reported examples of automation bias that highlight some of the risks in healthcare more generally.

This is not unique to AI systems and is a risk with the introduction of any new technology, but the risk involved with AI systems is potentially more extreme. AI aims to imitate human decision-making, which can place much more pressure on a human's own decision-making than other technologies do. For example, clinicians could feel under pressure to agree with the new technology. Imagine that a mistake is made and the human expert has to defend their overruling of an AI decision; they could see it as less risky to simply agree with the AI.

The paper is particularly interesting because it indicates that AI still spots more cancers overall. The ethical question then is whether we trust AI over humans. Often, we expect a human to oversee all AI decision-making, but if human experts put less effort into their own decisions as a result of introducing AI systems, this could be problematic.

One side of the argument would be ‘who cares, if more cancers are identified?’

The other side may counter: ‘but if the AI is biased and making its own mistakes, it could be making them at a massive scale if left unsupervised’.

Conflict of interest: "My only other commitments beyond academia are advising the MHRA on the use of AI in healthcare (currently at no cost)".

Publications
Journal
The Lancet Gastroenterology & Hepatology
Publication date
Authors

Budzyń et al.

Study types:
  • Research article
  • Peer reviewed
  • Observational study