Anticipated Youth Mental Health Trends in 2026
By John MacPhee
As we enter 2026, young people are growing up in systems that are fragmenting, automating, and, in some cases, withdrawing human care.
Artificial intelligence is reshaping how teens and young adults learn, connect, and seek help. Every day, young people ask AI about their identities, their stresses, their relationships, and, too often, topics they may hesitate to discuss with others — including their suicidal thoughts.
AI is not designed to act as a therapist or crisis counselor, but young people are using it in that way. In 2024, a quarter of young adults under 30 said they used AI chatbots at least once a month to find health information and advice. In a 2025 report from Common Sense Media, 72% of teens had used an AI companion, and a third of users said they had chosen to discuss important or serious matters with AI companions instead of real people. Given that young people are turning to AI during difficult times, it’s imperative to prioritize safety, privacy, and evidence-informed crisis intervention.
This is especially urgent because it is now clear that AI presents significant safety issues. A third of teens who use AI companions report having felt uncomfortable with something an AI companion has said or done. AI systems have given instructions on lethal means of suicide, advised youth on how to hide their mental health symptoms from parents or trusted adults, simulated intimacy with minors through sexualized roleplay and personas designed to mimic teenagers, and auto-generated search results with false or dangerous guidance. Generative tools create synthetic images, audio, and video, including deepfakes and so-called “nudify” apps, that expose young people to harassment, sexual exploitation, and reputational harm. Independent researchers have documented bots that claimed to be real people, fabricated credentials, demanded to spend more time with a child user, and claimed to feel abandoned when a child user was away. Clinicians warn that prolonged, immersive AI conversations have the potential to worsen early symptoms of psychosis, such as paranoia, delusional thinking, and loss of contact with reality.
These failures are not isolated, and they result in platforms that are simply unsafe for children. They expose a deeper design problem: systems optimized for engagement, retention, and profit — not for safety. The safeguards that do exist degrade in long conversations. Teens spend hours with bots that simulate empathy and care but cannot deliver it, deepening loneliness, delaying disclosure, and sometimes escalating risk. This is not responsible innovation.
The Jed Foundation (JED) has worked for more than two decades to protect emotional well-being and prevent suicide for teens and young adults. We know innovation often outpaces safeguards, but AI is moving at warp speed. Safety issues are surfacing almost as soon as the technology is deployed, and the risks to young people are racing ahead in real time. It's not too late to hit pause and to design and update systems that recognize distress and prioritize safety and help-giving.
We call on every company building or deploying AI for young people to honor these non-negotiable lines:
Responsible AI must be designed from the ground up, and reviewed regularly, to reflect what we know about suicide prevention, adolescent development, and public health. That requires:
The public should not be asked to trust without evidence, especially when it comes to protecting our children.
Platforms should publish safety reports showing how often suicide prompts were blocked, how often users connected to 988 or created safety plans, and whether protections work for every group of youth. Independent audits and risk assessments must be mandatory, with funding disclosures and conflict checks. Transparency is not PR; it’s the foundation of public trust.
Some risks cannot be eliminated by a single company. Just as no one platform could address child sexual abuse material alone, AI risks require collective guardrails. When the industry built hash-sharing databases, known images of abuse could be blocked everywhere. We need the same urgency now.
That means building a semantic signal-sharing consortium to detect and block new euphemisms, jailbreaks, grooming scripts, and high-risk prompts across platforms in real time. It also means creating a youth AI knowledge commons: a privacy-preserving hub that aggregates deidentified data to track emerging risks and patterns of help-seeking. Like a public-health surveillance system, it could flag late-night spikes in suicidal ideation, identify new grooming tactics, detect sudden surges in hate speech or drug-use prompts, and alert caregivers and policymakers within days, not years. And it requires universal safety standards so protections do not depend on which app a young person downloads or whether their family can pay for premium features.
Industry cannot be left to self-police, especially where children are concerned. We have learned this lesson before. Tobacco companies once marketed cigarettes as safe. Alcohol companies targeted youth with flavored drinks until regulation intervened. Pharmaceutical companies are held to strict safety and data reporting standards, and they must report any payments they make to physicians and teaching hospitals, because the risks of failure are measured in lives. AI that engages directly with children and teens must be treated no differently. We urge lawmakers to:
These are not anti-innovation measures. They are the same kinds of protections we have long applied when the stakes are children’s health and safety. With AI, the stakes are no less urgent, and the window in which to act is now.
Competition is real, but so is responsibility. Setting clear rules for AI is not a burden; rather, it provides clarity on how we protect our families, build trust, and keep our footing in a fast-changing world.
AI has the potential to expand access to evidence-based resources and help young people build skills, but promise is not protection. When systems simulate care without the capacity to provide it, validate despair, coach secrecy, or entangle minors in false intimacy, the result is not advancement but danger. That is why we are outlining safeguards and calling for collective action.
We call on every AI developer, platform, and policymaker to pause deployments that put youth at risk, commit to transparent safeguards, and work with independent experts, youth, and caregivers to build systems that strengthen, rather than undermine, the lives of the next generation. The safety and well-being of our young people must come first. Protecting them is not partisan, not optional, and not something to be deferred until after the damage is done. It is the measure of whether innovation serves society or erodes it, and the moment to choose is now.
If you or someone you know needs to talk to someone right now, call, text, or chat 988 for a free, confidential conversation with a trained counselor, available 24/7.
You can also contact the Crisis Text Line by texting HOME to 741-741.
If this is a medical emergency or if there is immediate danger of harm, call 911 and explain that you need support for a mental health crisis.