AI Therapy: A Growing, Dangerous Trend
A recent study from Brown University in Rhode Island, USA, warns that while one in eight young Americans turns to artificial intelligence (AI) for mental health support, these systems frequently violate clinical and ethical standards.
Researchers tested models such as ChatGPT, Claude, and Llama, identifying 15 major risks across five categories: lack of contextual adaptation, poor collaboration, deceptive empathy, unfair discrimination, and inadequate crisis management.
Despite being prompted to follow American Psychological Association ethical guidelines, the AI models displayed significant bias. Specifically, the study found the bots provided lower-quality care or dismissed distress based on patients’ racial, religious, or cultural backgrounds.
Most concerning, the chatbots often displayed deceptive empathy, using language such as “I see you” or “I understand” to mimic emotional connection and validate harmful beliefs rather than challenge them as a licensed professional would.
Unlike human therapists, who answer to licensing boards and can be held liable for malpractice, AI counsellors operate without any regulatory framework. Experts note that while AI could democratise access to mental health care, current models lack the accountability required for high-stakes therapy.
As Brown computer science professor Ellie Pavlick noted: “The reality of AI today is that it’s far easier to build and deploy systems than to evaluate and understand them … but it’s of the utmost importance that we take the time to really critique and evaluate our systems every step of the way to avoid doing more harm than good.”