Albert “Skip” Rizzo
Director of Medical Virtual Reality at the University of Southern California Institute for Creative Technologies
This review captures the moment we're in: AI is no longer a speculative add-on for mental health care; it is edging toward routine use in decision support, digital monitoring, and even therapeutic chatbots. That shift is driven in large part by the fact that psychiatry, psychology, and allied disciplines still rely heavily on behavioral and affective observation rather than objective biomarkers. I share the authors' cautious optimism, especially around the idea that AI should be evaluated across the patient journey (prodrome → acute symptoms → treatment → recovery), rather than as a standalone gadget for "symptom reduction". That framing aligns with clinical reality: needs, risks, and priorities shift across phases, and tools should be judged by whether they improve decisions and engagement at the right moments (screening/triage, in-session support, relapse prevention).
The paper is also appropriately sober about “human–AI collaboration”—who the AI is addressing (clinician vs. patient), where it is used (inpatient teams vs. remote/unsupervised settings), and when it enters care—because those design choices determine whether the technology supports care or quietly undermines it. I also appreciate the reminder that hybrid models (digital + human support) have tended to outperform fully self-guided approaches, which should temper any rush to replace human care with an automated veneer.
Where this article is most valuable—especially for those of us building patient-facing AI systems—is its emphasis on guardrails, validation, privacy, and governance as prerequisites rather than afterthoughts (cf. Rizzo et al., 2025). The authors point out that single-turn safety defenses can break down in multi-turn conversations, with risks like inconsistent or implicitly normalizing responses to self-harm ideation, which is particularly concerning for adolescents, whose language differs from the adult language in most training data. They also underline privacy and data protection as a core requirement (not a "feature"), noting that many mental health apps fall short. Add to that the risks of latent profiling and discriminatory misuse (e.g., employment/insurance), and you get a clear mandate for explicit policy limits and regulatory frameworks that keep mental-health inference inside appropriate clinical boundaries.

Their best-practice guidance is also in line with our policies in this area: patient-centered need and preference (not tech enthusiasm), co-design with clinicians and patients, real-world validation with independent replication, and bias/fairness work that is sensitive to cultural and community contexts. Overall, anyone who has paid attention over the last few years can see AI's potential as a force multiplier for access and personalization, but that potential will be realized only if we treat safety, evidence, and ethics as core engineering requirements, not aspirational visions.