Pere Castellvi Obiols
Associate professor in the Department of Medicine at the International University of Catalonia (UIC)
This article analyses how artificial intelligence (AI) can transform mental health research and care by moving beyond traditional diagnostic methods. The authors explore the use of complex biometric data, digital monitoring via wearable devices, and the implementation of therapeutic chatbots to personalise treatments. While pointing out these opportunities, the authors also highlight ethical challenges, such as privacy, algorithmic biases, and the preservation of the human doctor-patient relationship. Finally, they propose a patient-centred model that integrates these technologies into clinical practice in a safe and validated manner, encouraging the development of regulations and up-to-date training for clinicians.
What are the implications and how does this fit with existing evidence?
"The application of AI in mental health has profound implications for both research and clinical practice. Currently, diagnosis is made through subjective assessments guided by diagnostic manuals, such as the DSM or the ICD. AI allows these to be integrated with complex, multidimensional phenotypic data (voice, facial expression, body movements captured with digital devices, etc.) that can aid accuracy and personalisation. In addition, it allows for the evaluation of progress, from screening to treatment, relapse and recovery, reducing the administrative burden and improving clinical efficiency.
Existing evidence shows promising results in some fields, such as Alzheimer's disease, where AI models have proven superior to traditional biomarkers, although results in psychiatry remain modest. For example, in major depression, AI models have not outperformed traditional clinical variables, and predictive models of suicidal behaviour have not reliably identified who will go on to engage in such behaviour, although therapeutic chatbots are already being used with promising results.
However, the ethical risks implicit in the use of AI should be taken into account: violations of user privacy and misuse of data by companies or insurers in the absence of strict regulations; the biases and inequalities present in training data for minority and vulnerable populations; and hallucinations, that is, incorrect or false responses and inappropriate advice that can cause iatrogenic harm and even suicidal behaviour or psychotic symptoms. What matters most is the need for regulation and constant human supervision, with the focus on the patient rather than the technology."
Can anyone with mental health problems benefit from AI?
"A priori, we can say that anyone can benefit from AI. Even so, its benefits are not universal nor are they risk-free. Whether patients with mental disorders can benefit depends on many factors, such as the type of disorder, their level of empowerment, and access to technology, among others.
As mentioned above, the accuracy of early detection and prediction of cognitive decline in Alzheimer's patients has improved, and hybrid chatbots such as Therabot have shown clinically significant symptom reductions in trials. However, it is always highly advisable to seek the advice of a mental health professional throughout the process; AI should not replace the therapist.
Finally, it should also be mentioned that people with severe mental disorders and/or those admitted to psychiatric units may be excluded from these tools."
What should we be most careful about when applying AI to treat people with mental health problems?
"The application of AI to people with mental health problems presents several challenges that must be taken into account due to the sensitive and intimate nature of the information involved, which requires the highest standards of data protection. The main areas requiring special attention are privacy, associated stigma, and the use or misuse of data by companies or governments.
Furthermore, we must bear in mind that AI does not guarantee epistemological truth, although many users take this for granted. AI is subject to hallucinations, errors, biases, bad advice and discrimination, and we should critically analyse the responses it gives us rather than accept everything at face value. We should remember that, even though it is programmed to simulate empathy, AI does not feel it, and this can confuse the user.
Finally, adolescents should be considered a particularly vulnerable population because they are at a stage of brain, emotional and social development in which high reward seeking, a need for belonging and poor inhibitory control combine, while they spend many hours connected to digital environments such as AI. This increases the risk of exposure to harmful content, dependence on assistants and algorithms, and victimisation (e.g., sexual deepfakes and cyberbullying), all of which raise the risk of mental health problems."