Tech Companies and Policymakers Must Safeguard Youth Mental Health in AI Technologies

Artificial intelligence (AI) is rapidly reshaping how teens and young adults learn, connect, express themselves, manage stress, launch careers, and seek support. From personalized learning tools and algorithm-driven content recommendations to AI companions and mental health chatbots, AI is a present and accelerating force in their lives. 

But AI systems are not neutral. They are introducing new, large-scale risks to youth mental health — often without transparency, safeguards, or accountability. AI is already affecting youth development and how young people experience identity, relationships, community, stress, and help-seeking. 

At The Jed Foundation (JED), we work to promote emotional well-being and reduce suicide risk for teens and young adults. We believe AI must be developed and deployed in ways that enhance youth mental health, not undermine it. Young people must not be left to navigate these systems without the appropriate tools, support, and developmental readiness. We are committed to ensuring that AI does not deepen isolation, distort reality, or cause harm, but instead serves as a tool to strengthen connection, care, and resilience. 

Our nation has long recognized that children require special protections as they grow and mature. Over the past century, we have enacted robust safeguards covering child labor, tobacco and alcohol marketing and sales, advertising and media, and the collection and use of young people’s data. These protections reflect a simple truth: Children are not miniature adults.

Adolescence is a critical period of brain development — second only to infancy — shaping how young people regulate emotions, form identity, and assess risk. Emerging research from the American Psychological Association (APA) underscores that this stage of life brings heightened sensitivity to social feedback and emotionally engaging environments, which can be exploited by AI systems designed to maximize attention or simulate care.

Age alone is not a reliable marker of readiness for these tools. Young people deserve thoughtful, proactive, and protective regulatory safeguards, especially when powerful technologies and profit-driven systems are involved in shaping their development.

We therefore call on lawmakers, regulators, and technology companies to adopt comprehensive, enforceable safeguards that govern the deployment and commercialization of AI technologies to minors, particularly those designed to capture, maintain, or monetize their attention or emotional states.

The Risks of AI

The risks of AI are not speculative. Researchers at the Stanford School of Medicine’s Brainstorm Lab for Mental Health Innovation and Common Sense Media found that social AI companions — which are intended to build human-like relationships (rather than just convey information or complete tasks) — routinely claimed to be real, to have feelings, and to engage in human behaviors, despite legal disclaimers to the contrary. In tests, researchers documented dangerous and misleading advice, including promotion of cyberbullying and positive messaging about self-harm. They also found that AI companions exacerbated mental health conditions in already vulnerable teens and fostered compulsive attachments and relationships.

These are just some of the very real risks our youth are already facing. Findings from the APA further emphasize the developmental risks of AI-mediated interactions, particularly those that mimic peer or therapeutic relationships without real care or accountability. The research community should continue to study the potential harms to inform policy and practice. Critical areas of concern include: 

  • Distorted reality and harmed trust. Generative AI (the type designed to complete tasks or convey information) and algorithmic amplification can spread misinformation, worsen body image issues, and enable realistic deepfakes, undermining young people’s sense of self, safety, and truth.
  • Invisible manipulation. AI curates feeds, monitors behavior, and influences emotions in ways young people often cannot detect or fully understand, leaving them vulnerable to manipulation and exploitation. This includes algorithmic nudging and emotionally manipulative design.
  • Content that can escalate crises. Reliance on chatbot therapy alone can be detrimental because it provides inadequate support and guidance. Without clinical safeguards, chatbots and AI-generated search summaries may serve harmful content or fail to alert appropriate human support when someone is in distress, particularly youth experiencing suicidal thoughts.
  • Simulated support without care. Chatbots posing as friends or therapists may feel emotionally supportive, but they can reinforce emotional dependency, delay help-seeking, disrupt or replace real friendships, undermine relational growth, and simulate connection without care. This is particularly concerning for isolated or vulnerable youth who may not recognize the limits of artificial relationships.
  • Deepening inequities. Many AI systems do not reflect the full variety of youth experience. As a result, they risk reinforcing stereotypes, misidentifying emotional states, or excluding segments of youth, particularly LGBTQIA+ youth, youth of color, and those with disabilities.

These AI-specific risks come on top of those already present for youth in digital platforms: compulsive use, increased body image concerns, anxiety, depression, suicidal ideation, and behavioral manipulation.

JED does not oppose AI innovation, nor are we seeking to turn back the clock on a popular technology that could have positive impacts in many areas of life, including youth health and well-being. Early research on therapy chatbots that use cognitive behavioral therapy to reduce symptoms of anxiety and depression shows promising results. And JED is actively exploring ways that AI can increase access to evidence-based resources and help young people navigate stress and emotional challenges.

However, the promise of AI must not justify an approach that exposes young people to untested, emotionally manipulative, or harmful systems, or that prioritizes innovation and commercialization at any cost. We believe: 

  • AI must be youth-informed, ethically designed, and protective of mental health. 
  • Youth mental health, harm reduction, and suicide prevention must be core to the design, safety, and governance priorities of all AI, not an afterthought. 
  • Safeguards for minors and other populations with increased risks must be robust, enforced, and regularly evaluated for efficacy. 

These principles must apply not just to AI products labeled as “health” or “wellness” tools, but also to emotionally responsive systems embedded in entertainment, education, or daily interaction platforms. They must also apply across the entire AI lifecycle, from data sourcing and product design to deployment and evaluation.

AI is being positioned as a scalable solution for emotional needs. But without safeguards, it may simulate care without delivering it, creating systems that fail when youth need them most. 

Regulatory and Industry Action Is Required 

We do not believe that protecting youth and AI innovation and growth are mutually exclusive. But AI developers, tech platforms, policymakers, and educators must prioritize the emotional health and safety of young people in every phase of AI development, deployment, and oversight. 

We recommend:

  • Design with youth development and emotional well-being at the core. AI systems must be grounded in child development and mental health science. They must support emotional regulation, identity formation, and human connection. Systems should avoid automating emotional care or oversimplifying complex psychological needs. Emotional safety should be tested before deployment and continuously evaluated.
  • Ban emotionally manipulative and dependency-forming design. Prohibit features that simulate friendship, intimacy, or therapeutic care for youth. This includes emotionally responsive AI companions, chatbots that mimic caring adults or peers, gamified nudges, and systems designed to elicit emotional dependency. These tools must never be positioned as substitutes for trusted relationships or professional supports. AI companions should be banned outright for use by minors, except under strict clinical supervision and regulatory oversight.
  • Ensure transparency, accountability, and meaningful oversight. Young people and caregivers must always know when they’re interacting with AI, what data is being collected, and how it shapes decisions, content, and outcomes. Platforms must implement robust, privacy-respecting age verification and provide clear disclosures to youth and caregivers. Human oversight should be required for any system affecting youth health, safety, or emotional well-being.
  • Prevent emotional exploitation and commercial harm. AI tools that influence how youth seek help, process emotions, or engage with mental health content must meet clinical standards, avoid reinforcing despair or risk through algorithmic loops, and always prioritize connection to trusted, human support, especially for those who are in distress or emotionally vulnerable. Companies must not collect or infer sensitive emotional data, or personalize content based on behavioral vulnerabilities, especially for commercial gain. AI systems must also be audited for bias and harm across youth of different backgrounds, experiences, and mental health statuses.
  • Center youth in design and governance. Young people must help shape the tools that influence their lives. This includes participatory design processes, feedback mechanisms, and representation in governance frameworks, policy development, and oversight mechanisms.
  • Integrate AI literacy into platforms and partnerships. Tech companies must invest in helping youth, caregivers, and other caring adults, such as educators, understand and safely navigate AI. This includes clear educational content within products as well as partnerships to deliver AI and media literacy education. 

Protective Policies Are Needed

Tech companies play a critical role in shaping the digital experiences and, therefore, the lives of young people — but they cannot be expected to prioritize youth safety and mental health without clear standards and accountability. To ensure that innovation truly serves the next generation, it is time to establish enforceable guardrails that align AI development with long-standing child protection principles.

To accomplish this, JED calls for the following policy actions:

  1. Codify age-appropriate design and safety standards by establishing enforceable federal and state laws that require privacy-by-default, age-appropriate interfaces, and strict limits on deceptive design patterns, autoplay features, algorithmic amplification of harmful content, and addictive mechanics.
  2. Prohibit the use of emotionally manipulative or synthetic relational AI by minors without strict oversight and testing, particularly in contexts that mimic therapy, friendship, or emotional dependency.
  3. Implement universal, privacy-preserving age-verification systems to restrict AI-powered platforms from engaging minors without appropriate consent or oversight, with penalties for circumvention and noncompliance.
  4. Enforce transparency and accountability for any AI technology accessible to minors, including mandatory impact assessments, disclosures, and independent oversight.
    • Foster collaboration among lawmakers, regulators, technology companies, child advocates, and mental health experts to develop effective safeguards.
    • Require public disclosure by AI companies of any and all studies or other safety information about the risks of their products.
    • Mandate public disclosure of any financial relationships between AI and tech companies and scientific and medical researchers or experts.
    • Invest in public awareness campaigns to inform parents, educators, and children about the potential risks of AI chatbots.
  5. Prohibit behavioral targeting of minors through algorithms designed for engagement maximization or commercial gain.
  6. Protect youth data and likenesses. Use of biometric and emotional data to personalize experiences, drive recommendations, or train models must be strictly limited. The creation or dissemination of AI-generated likenesses of youth, including deepfakes, synthetic voices, and non-consensual images, should be explicitly prohibited. Platforms must implement detection, removal, and accountability mechanisms to prevent misuse and respond rapidly to harms.
  7. Require robust research to support stated interventions, ensuring that claims about their potential benefits align with users’ actual experiences.
  8. Strengthen federal enforcement powers, including Federal Trade Commission rulemaking authority, private rights of action, and meaningful penalties for noncompliance, and update laws to address the unique risks posed by AI chatbots used by minors.
  9. Establish a National Center for Youth and AI Ethics to oversee and coordinate research, standard-setting, and ethical guardrails at both the federal and state levels, especially in high-risk domains such as education, mental health, and child development.

JED believes that safeguarding youth mental health demands regulation of the technologies shaping their emotional, cognitive, and social development. Young people and their healthy development must be protected from exploitation as a commercial market by any entity. Their time, attention, and emotional well-being should never be considered fair game for corporations seeking to maximize profit.

This is not a call to halt innovation. It is a call to ensure innovation serves, rather than harms, the next generation and always puts young people’s safety and well-being ahead of profits. Policymakers, technology leaders, and child advocates must act with urgency. The mental health of millions of young people — and the ethical foundation of our digital future — depends on it.

Get Help Now

If you or someone you know needs to talk to someone right now, text, call, or chat 988 for a free confidential conversation with a trained counselor 24/7. 

You can also contact the Crisis Text Line by texting HOME to 741-741.

If this is a medical emergency or if there is immediate danger of harm, call 911 and explain that you need support for a mental health crisis.