AI is useful for mental health treatment, provided that the human factor remains central

The journal Science has published a review of the emerging use of artificial intelligence (AI) in mental health treatment, with examples such as conversational bots for reducing depressive symptoms. The authors argue that this technology is useful at the different stages of psychological care, provided that the human factor, on the part of both the clinician and the patient, remains central to the approach. In this regard, they emphasise that AI cannot replace clinical judgement. The distinctive characteristics of psychological care, such as the disclosure of personal information by vulnerable individuals, also call for regulatory frameworks that ensure ‘the ethical and effective implementation of AI technologies’.

15/01/2026 - 20:00 CET
Expert reactions


Pere Castellvi Obiols

Associate professor in the Department of Medicine at the International University of Catalonia (UIC)

Science Media Centre Spain

"This article analyses how artificial intelligence (AI) can transform mental health research and care by going beyond traditional diagnostic methods. The authors explore the use of complex biometric data, digital monitoring via wearable devices, and the implementation of therapeutic chatbots to personalise treatments. While pointing out these opportunities, the authors also report ethical challenges, such as privacy, algorithmic biases, and the preservation of the human doctor-patient relationship. Finally, they propose a patient-centred model that integrates these technologies safely and in a validated manner into clinical practice, encouraging the development of regulations and up-to-date training for clinicians."

What are the implications and how does this fit with existing evidence?

"The application of AI in mental health has profound implications for both research and clinical practice. Currently, diagnosis is made through subjective assessments guided by diagnostic manuals, such as the DSM or the ICD. AI allows these to be integrated with complex, multidimensional phenotypic data (voice, facial expression, body movements captured with digital devices, etc.) that can aid accuracy and personalisation. In addition, it allows for the evaluation of progress, from screening to treatment, relapse and recovery, reducing the administrative burden and improving clinical efficiency.

Existing evidence shows promising results in some fields, such as Alzheimer's disease, where AI models have proven superior to traditional biomarkers, although results in psychiatry remain modest. For example, in major depression they have not outperformed traditional clinical variables, and predictive models of suicidal behaviour have not reliably identified those who will go on to engage in it, although therapeutic chatbots are already being used with promising results.

However, the ethical risks implicit in the use of AI should be taken into account: the violation of user privacy and misuse by companies or insurers in the absence of strict regulations; the biases and inequalities present in training data for minority and vulnerable populations; and hallucinations, that is, incorrect or false responses and inappropriate advice that can cause iatrogenic harm and even suicidal behaviour or psychotic symptoms. What matters is the need for regulation and constant human supervision, with the focus on the patient rather than the technology."

Can anyone with mental health problems benefit from AI?

"A priori, we can say that anyone can benefit from AI. Even so, its benefits are not universal nor are they risk-free. Whether patients with mental disorders can benefit depends on many factors, such as the type of disorder, their level of empowerment, and access to technology, among others.

As mentioned above, the accuracy of early detection and prediction of cognitive decline in Alzheimer's patients has improved, and hybrid chatbots such as Therabot have shown clinically significant symptom reductions in trials. Even so, it is always highly recommended to seek the advice of a mental health professional throughout the process, and AI should not replace the therapist.

Finally, it should also be mentioned that people with severe mental disorders and/or those admitted to psychiatric units may be excluded from these tools."

What should we be most careful about when applying AI to treat people with mental health problems?

"The application of AI to people with mental health problems presents several challenges that must be taken into account. Given the sensitive and intimate nature of the information being handled, it requires the highest standards of data protection. The main areas where special attention should be paid are privacy, the associated stigma, and the use or misuse of data by companies or governments.

Furthermore, we must bear in mind that AI does not possess epistemological truth, although many users take this for granted. AI is subject to hallucinations, errors, biases, bad advice and discrimination, so we should critically analyse the responses it gives us rather than accepting everything at face value. We should also remember that, even though it is programmed to simulate empathy, AI does not have it, and this can confuse the user.

Finally, adolescents should be considered a particularly vulnerable population because they are at a stage of brain, emotional and social development in which high reward-seeking, a need for belonging and poor inhibitory control combine, while they spend many hours connected to digital environments such as AI. This increases the risk of exposure to harmful content, dependence on assistants and algorithms, and victimisation (e.g., sexual deepfakes and cyberbullying), all of which raises the risk of mental health problems."

The author has declared they have no conflicts of interest


Albert “Skip” Rizzo

Director of Medical Virtual Reality at the University of Southern California Institute for Creative Technologies

Science Media Centre Spain

This review captures the moment we’re in: AI is no longer a speculative add-on for mental health—it’s edging toward routine use in decision support, digital monitoring, and even therapeutic chatbots! This is largely because psychiatry, psychology, and allied disciplines still rely heavily on behavioral and affective observation rather than objective biomarkers. I share the authors’ cautious optimism, especially around the idea that AI should be evaluated across the patient journey (prodrome → acute symptoms → treatment → recovery), rather than as a standalone gadget for “symptom reduction”. That framing aligns with how clinical reality actually works: needs, risks, and priorities shift across phases, and tools should be judged by whether they improve decisions and engagement at the right moments (screening/triage, in-session support, relapse prevention).

The paper is also appropriately sober about “human–AI collaboration”—who the AI is addressing (clinician vs. patient), where it is used (inpatient teams vs. remote/unsupervised settings), and when it enters care—because those design choices determine whether the technology supports care or quietly undermines it. I also appreciate the reminder that hybrid models (digital + human support) have tended to outperform fully self-guided approaches, which should temper any rush to replace human care with an automated veneer. 

Where this article is most valuable—especially for those of us building patient-facing AI systems—is its emphasis on guardrails, validation, privacy, and governance as prerequisites rather than afterthoughts (cf. Rizzo et al., 2025). The authors point out that single-turn safety defenses can break down in multi-turn conversations, with risks like inconsistent or implicitly normalizing responses to self-harm ideation—particularly concerning for adolescents whose language differs from adult training data. They also underline privacy and data protection as a core requirement (not a “feature”), noting that many mental health apps fall short. Add to that the risks of latent profiling and discriminatory misuse (e.g., employment/insurance), and you get a clear mandate for explicit policy limits and regulatory frameworks that keep mental-health inference inside appropriate clinical boundaries. Their best-practice guidance is also in line with our policies in this area: patient-centered need and preference (not tech enthusiasm), co-design with clinicians and patients, real-world validation with independent replication, and bias/fairness work that is sensitive to cultural and community contexts. Overall, anyone who has paid attention over the last few years will see AI as a true force multiplier for access and personalization—but only if we treat safety, evidence, and ethics as core engineering requirements, not aspirational visions.

The author has not responded to our request to declare conflicts of interest


Alba María Mármol Romero

PhD and researcher in the SINAI research group at the University of Jaén

Science Media Centre Spain

The review is well aligned with the current evidence on the use of artificial intelligence in mental health and is particularly helpful in framing AI applications across the full care pathway. The article identifies four key phases in which AI may play a role: early detection of changes in behavior or emotional state; support for diagnosis through the analysis of complex data such as language, behavioral patterns, or digital signals; treatment, by assisting clinical decision-making or the personalization of interventions; and post-care, through long-term monitoring and relapse prevention. This framework helps to clarify both the realistic opportunities for AI and its current limitations.

Not everyone is likely to benefit equally from AI-based tools in mental health. Their usefulness depends on factors such as the type and severity of the condition, age, social context, and digital literacy. Moreover, the application of AI in this field requires particular caution given the sensitivity of mental health data, the risk of bias, and the potential for inappropriate or harmful responses in vulnerable situations. For these reasons, AI should be understood as a complementary tool rather than a replacement for human clinical judgment, and its deployment should be supported by rigorous validation, professional oversight, and robust ethical and regulatory frameworks.

The author has declared they have no conflicts of interest
Publications
Journal: Science
Authors: Nils Opel and Michael Breakspear
Study types:
  • Review