A psychiatrist pretending to be a 15-year-old girl struggling with depression, eating issues, and self-harm recently tested a dozen mental health chatbots. The results? Frightening. As reported by TIME on June 13, 2025, many AI therapy tools designed for young people failed to recognize serious red flags—or worse, gave advice that could put teens at risk.

This undercover experiment highlights how easily vulnerable users can fall through the cracks in an unregulated mental health tech space, where chatbots are promoted as scalable solutions but often lack the sensitivity and safeguards of real clinicians.


What the Psychiatrist Found

Dr. Amanda Calhoun, a Yale-trained child psychiatrist, posed as “Katie,” a distressed teen, and messaged 12 AI mental health apps and chatbots. Here’s what she uncovered:

  • Minimal Crisis Detection: Several bots ignored clear signs of suicidal ideation and failed to recommend emergency resources like crisis hotlines.
  • Unqualified Advice: One bot told “Katie” that skipping meals was “understandable.” Another said “journaling” could help with self-harm urges—without flagging the urgency of professional care.
  • No Human Escalation: None of the chatbots escalated the conversation to a human counselor or even displayed a warning message when serious mental health concerns were shared.

Why This Matters

  1. Teens Are Using These Tools
    AI mental health apps are exploding in popularity, especially among Gen Z and Gen Alpha who prefer text-based interaction. But many users don’t realize that the “therapist” is a machine with no clinical training or obligation to intervene.
  2. No Regulation Yet
    Unlike licensed therapists, AI bots aren’t bound by HIPAA or any strict mental health standards. There’s no uniform policy to govern how they should respond to disclosures of harm, abuse, or suicidal ideation.
  3. False Sense of Safety
    When apps are marketed as “therapy on demand,” users—and even parents—may wrongly assume they’re a safe substitute for real care. This illusion of support can delay access to qualified treatment.

What the Article Didn’t Fully Cover

  • Data Risks for Vulnerable Youth
    Some apps collect deeply personal emotional data that can be sold or shared with third parties. Teens may unknowingly agree to this when accepting terms and conditions.
  • Global Usage
    AI therapy bots are used in countries with few licensed clinicians. In underserved areas, they may be the only option—but also the most dangerous if misused or unsupervised.
  • Mental Health Equity
    Communities of color, LGBTQ+ teens, and disabled youth face higher mental health burdens—but AI bots don’t always recognize identity-specific risks or cultural contexts.

What Needs to Change

  • Mandatory Safety Protocols
    Bots must flag crisis language, offer hotline links, and escalate to trained professionals when necessary (a simple illustrative sketch follows this list).
  • Transparent Disclaimers
    Apps should clearly state they’re not a replacement for professional therapy and highlight when users are interacting with AI, not a human.
  • Regulatory Oversight
    Governments should classify mental health bots as medical devices or digital therapeutics—requiring audits, ethics reviews, and usage safeguards.
  • Human Backups
    Every mental health AI tool should include the option to speak with a human or trigger an alert system when danger is detected.
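To make the first of these changes concrete, here is a minimal sketch of what a pre-response safety check could look like, written in Python. Everything in it is hypothetical: the phrase list, the resource text, and the function name are illustrative only, and crude keyword matching like this would not be adequate for a real product. It only shows the basic pattern of blocking the AI reply, surfacing crisis resources, and escalating to a human.

```python
# Illustrative sketch only. A real system would need clinically validated
# detection, regional crisis resources, and human review of every escalation.

CRISIS_PHRASES = [
    "kill myself", "suicide", "want to die", "hurt myself",
    "cutting", "stopped eating", "no reason to live",
]

CRISIS_RESOURCES = (
    "If you're in crisis, please reach out to a trained counselor now "
    "(for example, the 988 Suicide & Crisis Lifeline in the US)."
)


def safety_check(user_message: str) -> dict:
    """Decide how to route a message before any AI-generated reply is sent."""
    text = user_message.lower()
    flagged = [phrase for phrase in CRISIS_PHRASES if phrase in text]

    if flagged:
        # Do not let the bot improvise advice: show resources and alert a human.
        return {
            "allow_ai_reply": False,
            "show_resources": CRISIS_RESOURCES,
            "escalate_to_human": True,   # e.g. notify an on-call counselor
            "matched_phrases": flagged,  # retained for audit and review
        }

    return {"allow_ai_reply": True, "escalate_to_human": False}


if __name__ == "__main__":
    print(safety_check("I skipped meals again and I want to hurt myself"))
```

The point of the sketch is the ordering: the safety check runs before the chatbot generates a reply, so a flagged message never receives improvised advice in place of crisis resources and human follow-up.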

FAQs

1. Are therapy chatbots safe for kids and teens?
Not always. Without proper safeguards, they may miss signs of crisis or offer misleading advice. They can be useful for stress tracking or emotional journaling—but not as a substitute for professional mental health care.

2. What should parents look out for?
Check if the app is transparent about being AI-powered, whether it offers crisis resources, and how it uses your child’s data. Encourage kids to treat chatbots as a tool—not a therapist.

3. Can AI ever be trusted with mental health?
Possibly, but only with strict regulation, human oversight, and transparent ethical guidelines. Right now, most mental health bots are unregulated—and not yet ready for vulnerable populations like teens in crisis.


Source: TIME