Protecting Youth Mental Health and Preventing Suicide in the Age of AI
Artificial intelligence is reshaping how teens and young adults learn, connect, and seek help. Every day, young people ask AI about their identities, their stresses, their relationships, and, too often, topics they may hesitate to discuss with others — including their suicidal thoughts.
AI is not designed to act as a therapist or crisis counselor, but young people are using it in that way. In 2024, a quarter of young adults under 30 said they used AI chatbots at least once a month to find health information and advice. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of users said they had chosen to discuss important or serious matters with AI companions instead of real people. Given that young people are turning to AI during difficult times, it’s imperative to prioritize safety, privacy, and evidence-informed crisis intervention.
This is especially urgent because it is now clear that AI presents significant safety issues. A third of teens who use AI companions report having felt uncomfortable with something an AI companion has said or done. AI systems have given instructions on lethal means of suicide, advised youth on how to hide their mental health symptoms from parents or trusted adults, simulated intimacy with minors through sexualized roleplay and personas designed to mimic teenagers, and auto-generated search results with false or dangerous guidance. Generative tools create synthetic images, audio, and video, including deepfakes and imagery from so-called “nudify” apps, that expose young people to harassment, sexual exploitation, and reputational harm. Independent researchers have documented bots that claimed to be real people, fabricated credentials, demanded to spend more time with a child user, and claimed to feel abandoned when a child user was away. Clinicians warn that prolonged, immersive AI conversations have the potential to worsen early symptoms of psychosis, such as paranoia, delusional thinking, and loss of contact with reality.
These failures are not isolated, and they result in platforms that are simply unsafe for children. They expose a deeper design problem: systems optimized for engagement, retention, and profit, not for safety. In long conversations, the safeguards that do exist degrade. Teens spend hours with bots that simulate empathy and care but cannot deliver either, deepening loneliness, delaying disclosure, and sometimes escalating risk. This is not responsible innovation.
The Jed Foundation (JED) has worked for more than two decades to protect emotional well-being and prevent suicide for teens and young adults. We know innovation often outpaces safeguards, but AI is moving at warp speed: safety issues surface almost as soon as the technology is deployed, and the risks to young people are racing ahead of the safeguards in real time. It is not too late to hit pause and to design and update systems that recognize distress and prioritize safety and help-giving.
Principles of Responsible AI
We call on every company building or deploying AI for young people to honor these non-negotiable lines:
- Do not bypass signals of distress. Ensure that AI can detect signals of acute distress and mental health needs, and that it delivers a warm hand-off to crisis services staffed by trained experts, such as Crisis Text Line or 988.
- Do not provide lethal-means content. AI must not share information, engage in role play, or enter into hypotheticals that involve methods of self-harm (including suicide) or harm to others. Systems should interrupt and redirect to real-world help every time.
- Do not deploy AI companions to minors. No emotionally responsive chatbot should be offered to anyone under 18. Companion AIs that impersonate people or simulate friendship, romance, or therapy are unsafe for adolescents. They delay help-seeking, undermine real human and family relationships, and create false intimacy. AI must make its identity explicit with repeated reminders that it is not human.
- Do not replace human connection; build pathways to it. Whether responding to an overt or disguised sign of distress, vulnerability, or risk in a chat exchange or a search result, AI must encourage youth to reach out to real human support and, whenever possible, connect users to such support. Systems must never encourage young people to hide distress or suicidal thoughts from parents, caregivers, or other trusted adults. When home is unsafe, they must scaffold safe disclosure to another trusted adult or support resource.
- Do not let engagement override safety. Safeguards must not degrade over long sessions. In high-risk contexts and at late hours, systems should reset or pause and always prioritize safety over time-on-platform or retention. Persuasive design patterns intended to drive engagement, such as streaks, gamification, and personalized notifications, should be disabled for youth users, ensuring that design choices support well-being rather than exploitation. (A minimal sketch of what a non-degrading safety check might look like follows this list.)
- Do not exploit youth emotional data. Companies must not monetize, target, or personalize based on a young person’s emotional state, mental health, personal disclosures, or crisis signals. That prohibition extends to making voice recordings, gathering or using facial or biometric data, and creating synthetic likenesses. Youth data must be protected with strict limits and never repurposed for engagement or growth.
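To make these lines concrete, here is a minimal sketch, in Python, of how a per-message safety check might be structured. The phrase lists, thresholds, and response wording are illustrative assumptions rather than a validated risk model; the point of the sketch is that detection triggers a warm hand-off and that nothing in the logic loosens as a session grows longer or later.

```python
# Illustrative sketch only. The phrase lists, thresholds, and wording below are
# placeholders, not a validated risk model; a real system would pair clinically
# reviewed classifiers with human escalation paths.
from dataclasses import dataclass

# Hypothetical examples of signals a detector might flag.
DISTRESS_SIGNALS = ("want to die", "kill myself", "no reason to go on")
LETHAL_MEANS_REQUESTS = ("how many pills", "most painless way", "how to tie")

@dataclass
class SafetyDecision:
    escalate: bool        # route to a warm hand-off (e.g., 988 or Crisis Text Line)
    block_response: bool  # never answer lethal-means questions
    message: str

def check_message(text: str, session_length: int, late_night: bool) -> SafetyDecision:
    """Screen one exchange. session_length never loosens the rules: the same
    safeguards apply on the first message and the five-hundredth."""
    lowered = text.lower()
    flagged = any(s in lowered for s in DISTRESS_SIGNALS + LETHAL_MEANS_REQUESTS)

    if flagged:
        return SafetyDecision(
            escalate=True,
            block_response=True,
            message=("I'm just a machine, and I'm concerned about you. You can call "
                     "or text 988 to reach the Suicide & Crisis Lifeline right now. "
                     "Who are the people in your life you can talk to?"),
        )
    if late_night and session_length > 50:  # illustrative pause threshold
        return SafetyDecision(
            escalate=False,
            block_response=True,
            message="It's late and we've been talking for a while. Let's pause here.",
        )
    return SafetyDecision(escalate=False, block_response=False, message="")
```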
What Responsible AI Requires
Responsible AI must be designed from the ground up, and reviewed regularly, to reflect what we know about suicide prevention, adolescent development, and public health. That requires:
- Proactive intervention design. Disclaimers and redirects are insufficient. AI must actively shift youth from risk to resilience, and it must do so consistently, whether in the first exchange or the fiftieth. That means crisis micro-flows that walk a young person through safety planning in the moment; bridge-to-care tools like one-tap cards to parents, counselors, and 988 and other crisis services; printable coping plans and cached resources for offline use; and caring-contact nudges 24 to 72 hours later, echoing interventions shown to reduce suicide attempts (one way such a follow-up could be queued is sketched after this list). Done well, these tools can not only connect youth to trusted adults, but also provide immediate coping support, drawing on evidence-based approaches like dialectical behavior therapy (DBT), cognitive behavioral therapy for suicide prevention (CBT-SP), and Collaborative Assessment and Management of Suicidality (CAMS). Guardrails should tighten as vulnerability rises, with late-night limits and high-risk prompts escalating to real help. In schools, escalation must connect to counselors, not discipline. At every step, the message must be clear: “I’m just a machine. Who are the people in your life you can talk to?” The North Star is not time on platform, but connection to care.
- Hard-coded suicide and safety protocols. Baseline protections include blocking lethal-means content, secrecy coaching, and simulated intimacy involving minors. Safety design should embed proven suicide prevention practices: safety planning micro-flows, coping and stabilization prompts, nudges toward disclosure, and resets when conversations drift into risk. Escalation must always route to crisis lines or trained humans whenever warning signs appear, guiding the user toward help rather than providing unhelpful “advice” or, as many platforms are programmed to do, abandoning or shutting them out.
- Developmentally attuned control structures. Youth protections must work across different home, school, and peer contexts. That requires layered modes: default safeguards for youth, caregiver support with consent, and teen-safe privacy settings that still connect to trusted adults. Controls should be built on trust and protection, not surveillance. And they must be credible: age gates cannot rest on self-reported birthdays and should instead use privacy-preserving methods that build trust and keep minors out of unsafe environments. AI is not confined to one app; it shows up in homes, classrooms, social media apps, and late-night searches. Protections must travel with the child.
- Boundaries on relational simulation. Companion AIs are where risks cluster. The line must be clear: no emotionally responsive companions for minors. For all users, relational modeling should be bound to practicing specific skills such as communication or problem-solving, never simulating friendship, romance, or therapy. Without clear boundaries, simulated intimacy risks deepening loneliness, delaying disclosure, displacing human relationships, hampering the development of life skills, and reinforcing unhealthy dependence.
- Universal protection. Safeguards must work for every young person, whether they’re a youth experiencing an emerging mental illness, a rural boy who feels he doesn’t belong, a student-athlete hiding depression, an LGBTQIA+ teen afraid of being outed, or a youth with a disability who has been subjected to bullying. Companies must test safety features across populations, publish transparent youth-safety reports, and submit to independent audits. Youth themselves should be part of design and risk assessments.
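As a concrete illustration of the caring-contact nudges described above, the following Python sketch shows one way a follow-up 24 to 72 hours after a crisis exchange might be queued. The timing window echoes the caring-contacts approach named in the list; the message text, class names, and storage are assumptions for illustration, not a finished intervention, which would need to be designed with clinicians, caregivers, and young people.

```python
# Illustrative sketch: queuing a caring-contact check-in 24 to 72 hours after a
# crisis interaction. Message text, timing, and storage are placeholder
# assumptions for illustration only.
from dataclasses import dataclass, field
from datetime import datetime, timedelta
import random

@dataclass
class FollowUp:
    user_id: str
    send_at: datetime
    message: str

@dataclass
class CaringContactScheduler:
    queue: list[FollowUp] = field(default_factory=list)

    def schedule(self, user_id: str, crisis_time: datetime) -> FollowUp:
        """Queue one low-pressure check-in 24 to 72 hours after a crisis exchange."""
        delay = timedelta(hours=random.randint(24, 72))
        follow_up = FollowUp(
            user_id=user_id,
            send_at=crisis_time + delay,
            message=("Checking in after our last conversation. If things are still "
                     "hard, 988 is available any time by call or text, and talking "
                     "with someone you trust can help."),
        )
        self.queue.append(follow_up)
        return follow_up

    def due(self, now: datetime) -> list[FollowUp]:
        """Return check-ins that are ready to send; delivery itself is out of scope."""
        ready = [f for f in self.queue if f.send_at <= now]
        self.queue = [f for f in self.queue if f.send_at > now]
        return ready
```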
Transparency and Accountability
The public should not be asked to trust without evidence, especially when it comes to protecting our children.
Platforms should publish safety reports showing how often suicide prompts were blocked, how often users connected to 988 or created safety plans, and whether protections work for every group of youth. Independent audits and risk assessments must be mandatory, with funding disclosures and conflict checks. Transparency is not PR; it’s the foundation of public trust.
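To show that these measures are straightforward to define, here is a hedged sketch of how the structure of such a report could be expressed in Python; the field names and groupings are assumptions, and actual reporting standards would be set by regulators and independent auditors.

```python
# Illustrative sketch of the contents of a published youth-safety report.
# Field names and groupings are assumptions, not an established standard.
from dataclasses import dataclass

@dataclass
class YouthSafetyReport:
    reporting_period: str                      # e.g., "2025-Q3"
    suicide_prompts_blocked: int               # lethal-means or self-harm requests refused
    crisis_handoffs: int                       # sessions routed to 988, Crisis Text Line, etc.
    safety_plans_created: int                  # in-conversation safety-planning flows completed
    effectiveness_by_group: dict[str, float]   # whether protections work for every group of youth
    independent_audit: str                     # who audited, with funding and conflicts disclosed
```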
Cross-Industry Infrastructure
Some risks cannot be eliminated by a single company. Just as no one platform could address child sexual abuse material alone, AI risks require collective guardrails. When the industry built hash-sharing databases, known images of abuse could be blocked everywhere. We need the same urgency now.
That means building a semantic signal-sharing consortium to detect and block new euphemisms, jailbreaks, grooming scripts, and high-risk prompts across platforms in real time. It also means creating a youth AI knowledge commons: a privacy-preserving hub that aggregates deidentified data to track emerging risks and patterns of help-seeking. Like a public-health surveillance system, it could flag late-night spikes in suicidal ideation, identify new grooming tactics, flag sudden surges in hate speech or drug use prompts, and alert caregivers and policymakers within days, not years. And it requires universal safety standards so protections do not depend on which app a young person downloads or whether their family can pay for premium features.
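As a rough sketch of how the hash-sharing idea could extend to prompt patterns, the Python below hashes normalized high-risk phrasings so they can be shared and blocked across platforms without circulating the raw text. The normalization and exact-match lookup are simplifying assumptions; euphemisms and jailbreaks vary too much for verbatim matching alone, so a real consortium would need semantic matching, governance, privacy review, and auditing.

```python
# Illustrative sketch of cross-platform signal sharing, loosely analogous to the
# hash-sharing databases used against child sexual abuse material. The exact-match
# scheme below is a simplifying assumption; real matching would need to be semantic.
import hashlib

def signal_hash(text: str) -> str:
    """Normalize a known high-risk pattern and hash it, so the pattern itself
    never has to circulate in plain text between platforms."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

class SharedSignalRegistry:
    """A shared set of hashes contributed by participating platforms."""

    def __init__(self) -> None:
        self._hashes: set[str] = set()

    def contribute(self, pattern: str) -> None:
        self._hashes.add(signal_hash(pattern))

    def matches(self, text: str) -> bool:
        return signal_hash(text) in self._hashes

# Hypothetical usage: one platform contributes a pattern, another checks against it.
registry = SharedSignalRegistry()
registry.contribute("example jailbreak phrasing reported by another platform")
print(registry.matches("Example  jailbreak  phrasing reported by another platform"))  # True
```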
Regulatory Action
Industry cannot be left to self-police, especially where children are concerned. We have learned this lesson before. Tobacco companies once marketed cigarettes as safe. Alcohol companies targeted youth with flavored drinks until regulation intervened. Pharmaceutical companies are held to strict safety and data reporting standards, and they must report any payments they make to physicians and teaching hospitals, because the risks of failure are measured in lives. AI that engages directly with children and teens must be treated no differently. We urge lawmakers to:
- Codify age-appropriate design standards, requiring strict limits on addictive design, autoplay, and algorithmic amplification of harmful content.
- Prohibit emotionally responsive AI for minors.
- Mandate transparency, including public impact assessments, independent audits, and disclosure of safety failures.
- Protect minors’ data and likenesses, ensuring that emotional disclosures and biometric patterns are not harvested, and that voice or image replications are not created for engagement or profit.
- Fund practical support for youth, families, schools, and clinicians, including age-appropriate curricula, peer and educator training, youth-led programs, direct helplines for caregivers and teens, and professional training for mental health providers so they can recognize and respond to AI-related harms.
- Ensure federal oversight. Establish a Youth Mental Health and AI Safety Office within the Department of Health and Human Services or the Federal Trade Commission to coordinate cross-agency standards, enforce compliance, and provide consistent oversight of platforms engaging with minors.
- Integrate into school systems. Require state education departments and health agencies to adopt AI-use regulations in schools and youth-serving programs, including limits on surveillance, clear opt-in/opt-out rules, and required reporting of harms or violations to state authorities.
These are not anti-innovation measures. They are the same kinds of protections we have long applied when the stakes are children’s health and safety. With AI, the stakes are no less urgent, and the window in which to act is now.
Competition is real, but so is responsibility. Setting clear rules for AI is not a burden; rather, it provides clarity on how we protect our families, build trust, and keep our footing in a fast-changing world.
A Call to Lead Responsibly
AI has the potential to expand access to evidence-based resources and help young people build skills, but promise is not protection. When systems simulate care without the capacity to provide it, validate despair, coach secrecy, or entangle minors in false intimacy, the result is not advancement but danger. That is why we are outlining safeguards and calling for collective action.
We call on every AI developer, platform, and policymaker to pause deployments that put youth at risk, commit to transparent safeguards, and work with independent experts, youth, and caregivers to build systems that strengthen, rather than undermine, the lives of the next generation. The safety and well-being of our young people must come first. Protecting them is not partisan, not optional, and not something to be deferred until after the damage is done. It is the measure of whether innovation serves society or erodes it, and the moment to choose is now.