Author reactions

Ewen Harrison

Professor of Surgery and Data Science and Co-Director, Centre for Medical Informatics, University of Edinburgh

This is an important study showing that modern AI systems can be good at one of the central tasks of doctors and nurses: taking the information available about a patient and suggesting which diagnoses should be considered.

This matters: these systems are no longer just passing medical exams or solving artificial test cases. They are starting to look like useful second-opinion tools for clinicians, particularly when it is important to consider a wider range of possible diagnoses and avoid missing something serious.

But this does not mean AI should be quickly ushered into clinical care without limits. Producing a good list of possible diagnoses is not the same as improving patient care. We still need studies showing that these tools help doctors and nurses make better decisions, reduce harm, avoid unnecessary tests, and work safely in busy hospitals and GP practices.

This study moves the field forward, but it does not by itself change clinical practice. The responsible route is not to ban these systems, but also not to let them drift into casual use. They should be tested in real clinical settings, used as second-opinion tools rather than replacements for clinicians, and monitored against the outcomes that actually matter to patients: better, safer, quicker care.
