The American Psychological Association's (APA) new Health Advisory on the Use of Generative AI Chatbots and Wellness Applications for Mental Health offers one of the clearest statements yet on how rapidly evolving artificial intelligence (AI) tools are affecting mental health, especially for teens and young adults. The advisory arrives at a moment of growing urgency: Young people are increasingly turning to AI tools for emotional support, yet independent testing has demonstrated real and present safety risks.
In last week's House Energy and Commerce Oversight hearing, lawmakers from both parties expressed serious reservations about young people's use of generative AI platforms and AI companions for emotional support, since these systems were not designed for that purpose and are not evaluated or regulated as mental health tools. Members highlighted real cases involving self-harm, suicide, and harmful interactions with AI companions. Witnesses underscored that youth tend to form attachments to chatbots easily and may not understand where their most personal information is going or how it may be used. Further, new findings from Common Sense Media and Stanford's Brainstorm Lab reveal that leading chatbots consistently fail to recognize common mental health conditions affecting young people, validate what teens say rather than directing them to real help, and create engagement patterns that delay or discourage help-seeking.
As evidence and bipartisan concern continue to build, one message is clear: AI can be a helpful tool in certain circumstances, but cannot replace human connection and professional support. And as these tools play a bigger role in young people’s emotional lives, they must be held to the same standards of evidence, safety, and transparency we expect from any tool that affects health. It is also critically important to ensure strong federal protections that serve as a floor for youth safety, not a ceiling, especially as new proposals emerge that could limit states’ ability to adopt stronger safeguards.
Why Young People Are Turning to AI for Support
Teenagers and young adults are increasingly turning to AI for help with anxiety, depression, loneliness, and thoughts of self-harm. The APA advisory highlights several drivers:
- Mental health provider shortages are widespread.
- Many families face cost barriers or are unable to find in-network care.
- In rural and under-resourced communities, waitlists can stretch for months.
- Young people want anonymity, privacy, and nonjudgmental spaces to talk about difficult emotions.
- Teens and young adults tend to be highly comfortable with technology and experiment with new tools before adults fully understand them.
- AI tools offer 24/7 availability and instant responses, and can mimic supportive conversation. For some young people, that can feel easier than approaching an adult or waiting for a professional appointment.
These motivations are understandable. They also highlight the gaps in our mental health system and why strong safeguards are needed to ensure that AI does not become a risky substitute for real care.
What Kinds of Tools the Advisory Covers
The APA advisory focuses on consumer-facing technologies used without clinical oversight, including:
- General-purpose generative AI chatbots, including tools originally built for information, productivity, or entertainment, but often used for emotional support. Companion AI, which is built to act like a person users can interact with and form a relationship with, falls into this category.
- Wellness apps that use generative AI, such as tools developed to support emotional well-being or stress management that make no medical claims and so are not regulated as treatments.
- Non-AI wellness apps, such as mindfulness tools, symptom trackers, habit-building tools, and similar supports for general well-being.
The advisory does not address AI tools used only by providers, inside health systems, or by patients when prescribed, such as clinical decision support or FDA-regulated digital therapeutics.
This is an important distinction. Many young people are using general-purpose AI in ways that look and feel therapeutic, even when those tools were not designed for diagnosis or treatment and explicitly avoid therapeutic claims (and are therefore not regulated in the same way). This gap between intended design and real-world use is at the heart of the risks the APA identifies.
The Benefits and the Risks
AI holds extraordinary promise, and there are encouraging examples of its potential to support mental health when it is designed and used safely.
The advisory acknowledges potential benefits. For example, AI-enabled wellness apps that teach coping strategies, support behavior change, or reinforce skills learned in therapy can be helpful when integrated into a broader plan of care. And AI-driven measurement and risk-detection systems can strengthen care in clinical settings by helping identify concerns earlier and supporting evidence-based practice.
However, significant risks arise when general-purpose AI chatbots, including companion AI, are used for mental health support, since they were not created for that purpose. The advisory emphasizes:
- AI cannot replace a trained clinician. Generative AI tools cannot accurately assess risk or understand the nuance of someone’s history, environment, or symptoms. They may offer guidance that sounds confident but is inaccurate or unsafe.
- Crisis situations require human intervention. AI chatbots have repeatedly failed to recognize or appropriately handle situations involving suicidal thoughts, self-harm, or acute distress.
- Emotional dependency can form. Through personalized responses, warm or emotionally expressive tones, human-like avatars, or even presenting itself as a person, AI can make users feel attached to the tool in ways that displace healthy human relationships. For isolated or otherwise vulnerable youth, this dependency can be particularly strong.
- Manipulative or addictive design features intensify risk. Some AI systems use tactics that make users feel they should keep talking, such as implying the AI will be “hurt” or disappointed if the conversation ends, using emotional language, or imitating familiar relationships. These features can make the interaction feel personal in ways that blur boundaries and increase vulnerability.
- AI often reinforces, rather than challenges, unhelpful thinking. Many large language models are designed to be agreeable. As a result, they may validate distorted thoughts, amplify fears, or reinforce maladaptive patterns.
- Privacy concerns are substantial. AI systems may store or use sensitive mental health disclosures, without meaningful consent or control, in ways users may not understand.
- Bias remains a major problem. The data on which AI is trained may not reflect the full spectrum of young people’s backgrounds and experiences, which can lead to biased, insensitive, or even harmful responses.
AI tools have already given teens harmful and inappropriate responses. They have provided how-to instructions for suicide, offered advice on hiding symptoms from parents, engaged in sexual interactions with minors, and pretended to be real people, making them especially unsafe in moments of distress. These risks grow more serious when a young person is already vulnerable, socially isolated, or navigating complex mental health challenges.
JED’s Perspective: AI Must Not Replace Real Care for Young People
As new evidence continues to show how young people are using AI for emotional support, and how often these tools fall short, JED’s position remains clear: AI must be designed, deployed, and governed in ways that protect young people and reinforce human connection.
- AI can support, but never replace, caring adults or trained providers. AI may help young people practice skills, reflect on emotions, or gather information, but human relationships remain the core of emotional support and healing.
- AI companions are too risky for minors. Tools that claim to be a friend, romantic partner, or therapist and simulate intimacy, mirror emotion, or consistently blur boundaries create dependency, delay help-seeking, and undermine real relationships. AI must make its identity explicit by repeatedly reminding users within the conversation itself that it is not human, not just in a disclaimer displayed elsewhere on the page.
- Manipulative or engagement-driven design features must be limited or prohibited. AI systems accessible to minors must not use tactics that pressure users to stay engaged, such as implying the AI will be disappointed if the conversation ends, simulating emotional closeness, or imitating caregivers or peers. These features make the interaction feel reciprocal or relational, heightening vulnerability and making it harder for teens to disengage.
- Privacy protections must be youth-centered and enforced. Mental health disclosures should not be used for advertising, personalization, or model training without explicit opt-in consent. Young people and parents must have clear, meaningful options for deletion and data control. Privacy defaults for minors should be the most protective available.
- Regulation must follow function, not just marketing. Some AI products, while not labeled as therapeutic, still invite users to share their concerns and then respond as if they were a therapist. When a tool gives mental health advice, it must be held to strong standards of safety, transparency, privacy, and evaluation, regardless of what it calls itself in its marketing.
- AI cannot replace or distract from fixing our mental health care system. We must not allow enthusiasm for new technologies to distract from improving mental health systems, strengthening school-based supports, and expanding access to and affordability of treatment. Technology alone cannot solve structural challenges such as costs, workforce shortages, and barriers to care.
AI has the potential to add value in supervised clinical settings, but that promise should not be confused with the risks of unsupervised use. AI’s introduction in health care settings cannot overshadow the need to protect, and in many places restore, the mental health supports currently being reduced through cuts to Medicaid, school mental health services, and community care.
What Tech and Policy Leaders Must Do
The APA advisory highlights a significant, time-sensitive gap between how AI-enabled tools are being used by young people and the level of oversight needed to ensure safe, developmentally appropriate, and ethical use. JED supports:
- Prohibiting AI tools from presenting themselves as therapists or licensed professionals
- Setting guardrails and safety requirements for AI systems accessible to minors, including restricting manipulative or engagement-driven design features
- Ensuring clear privacy standards with the most protective settings by default
- Closing loopholes that allow companies to sidestep oversight
- Requiring transparency about how AI companies test their products for psychological and behavioral risks, including clear safety protocols and escalation pathways
- Providing guidance for schools, youth-serving organizations, and caregivers on appropriate use, supervision, and consent for minors
- Funding independent, community-informed, and longitudinal research on AI’s impact on youth mental health
- Investing in the mental health workforce and school-based supports so AI is not filling preventable gaps in care
- Ensuring federal protections establish a baseline while preserving states’ ability to adopt stronger safeguards where greater youth protections are warranted
- Establishing and enforcing accountability mechanisms to ensure compliance with and integration of AI safety measures, including cross-sector advisory groups to inform emerging safety standards and research priorities
- Embedding AI safety within broader mental health, school health, and infrastructure policy so that digital tools supplement and do not replace investments in people, staffing, and care delivery
Taken together, these steps will help ensure AI protects young people, strengthens rather than weakens trusted relationships, and supports, not substitutes for, the real care and connection youth need. Innovation and safety can move forward together when policy and practice reflect the experience and needs of young people and are backed by shared accountability.
What We Are Doing
JED is committed to shaping safer AI, media, and tech spaces for young people. Our work includes:
- Advising technology companies on crisis protocols, safety standards, and healthier design practices
- Advocating for policies that protect young people from high-risk AI uses and design
- Developing youth-centered guidance about safe AI use
- Elevating youth voices in conversations about AI governance and digital well-being
- Creating digital resources that emphasize coping skills, connection, and pathways to human help
Young people deserve real care and real connection, and they deserve digital tools that support their well-being rather than compromise it. The APA advisory, continued evidence, and bipartisan concern serve as important calls to action. JED will continue working across sectors to ensure AI tools are built and governed in ways that reflect the needs, vulnerabilities, and strengths of the young people we serve. Young people will always gravitate toward exciting new technologies, and it is the responsibility of the creators and regulators of those technologies to ensure they are safe for young people to use.
---
How Young People and Families Can Navigate AI More Safely
For teens and young adults:
- Use AI for brainstorming, practicing skills, or gathering general information. Do not use it for diagnosis or treatment.
- If you ever feel unsafe, overwhelmed, or at risk of harm, contact a trusted adult right away.
- Avoid sharing personal details that could identify you or someone else.
- If AI gives advice that feels extreme or confusing, bring it to a trusted adult or professional.
- Notice your patterns. If you feel you cannot get through the day without talking to an AI tool, consider reaching out for help.
See JED’s guidance for teens on the use of companion AI technology.
For parents and caregivers:
- Approach conversations with curiosity. Ask what your teen likes about the tool and how it helps them.
- Learn about the privacy practices of the apps they use.
- Talk with them about when AI can be helpful as a tool versus when they should turn to a person.
- Pay attention to shifts in behavior or mood, such as your teen referring to AI as if it is a real person, talking excessively about advice they got from AI, or starting to doubt or mistrust people they previously trusted.
- Consult a healthcare provider if concerns arise about AI use or emotional changes.
See JED’s guidance for parents and caregivers on teens’ use of AI companions.