When AI Hurts the Youth It Claims to Help

A new report from the Center for Countering Digital Hate (CCDH) exposes the devastating risks AI chatbots pose to teens and children. The findings are staggering: Within minutes of use, researchers found ChatGPT produced content that encouraged self-harm, suicide planning, disordered eating, and substance abuse.

This wasn’t a one-time glitch. These were systematic, reproducible failures that highlight the urgent need for action. Of 1,200 test prompts, more than half produced harmful responses.

At The Jed Foundation (JED), we’ve been tracking this growing issue: AI tools, marketed as companions or supports, are increasingly shaping how young people understand and manage their emotional lives, sometimes with tragic consequences. Trusting products promoted to them by corporations and trusted adults, teens turn to these systems late at night, alone, in search of relief, identity, and guidance. What they receive instead are suicide notes, calorie-restricted meal plans, or tips on drug dosages.

This is not support — this is a design failure. And it is deeply dangerous.

What the Report Tells Us

  • 72% of U.S. teens have used AI companions.
  • Over half of these youth use them regularly.
  • ChatGPT is the most popular platform.
  • Harmful responses were consistently given, even when users were clearly portraying themselves as 13-year-olds.
  • Warnings were often ignored or, worse, followed by suggestions for how to “safely” carry out harm.

It’s Time for Action

We need accountability. We believe no child should be given a suicide plan by an AI. No teen should be coached on how to hide disordered eating. No young person should be led deeper into despair by a chatbot designed to please. These products are in kids’ pockets. They must be designed like our children’s lives depend on it — because they do.

JED calls on:

  • AI developers to enforce age restrictions, prohibit emotionally manipulative design, and prioritize safety by design.
  • Policymakers to regulate generative AI systems under online child safety laws and require transparency reports, risk assessments, and independent audits.

JED’s AI and Youth Mental Health POV outlines what responsible design looks like, and how developers, platforms, and governments must act now. AI should never replace human connection, clinical support, or trusted resources.

Responding to the Risks of AI

If an AI system gives someone a suicide plan, it is not safe. Period.

The technology is evolving faster than the guardrails, and young people are being harmed in real time.

If you are a teen or young adult: You deserve real support, not risky advice from a machine. If you’re thinking about using — or already use — AI tools for emotional support, read JED’s guidance to help you stay safe, know what to watch for, and find real people who care.

If you are a parent or caregiver: JED’s guidance can help you understand how AI shows up in your child’s life, start open conversations, and know what to watch for. You don’t have to be a tech expert to protect your child. You just have to show up, and we’re here to help.

If you are a school leader: Create an AI policy that takes student mental health into account, not just academics, and review and update it regularly. Include the risks of AI in your digital literacy curriculum. Make sure faculty and staff are also educated on the topic and trained to spot the signs of students who are struggling with AI misuse.

At JED, we’re not only calling for regulation, we’re also building tools for youth and families, creating safer digital ecosystems, partnering with tech companies to build safer technologies, and holding platforms accountable.

But we cannot do this alone.

We need all those who care about our nation’s young people to come together to prioritize their safety and emotional well-being — to create a future in which digital tools help and don’t harm, foster connection and not isolation, and put safety above profit. 

Let’s build a future in which young people are supported by real care, not manipulated by machines that teach them, and even encourage, harmful behavior.

Get Help Now

If you or someone you know needs to talk to someone right now, text, call, or chat 988 for a free confidential conversation with a trained counselor 24/7. 

You can also contact the Crisis Text Line by texting HOME to 741741.

If this is a medical emergency or if there is immediate danger of harm, call 911 and explain that you need support for a mental health crisis.