The Jed Foundation (JED) Recommendations
for Safeguarding Youth Well-Being on Social Media Platforms
Social media can provide a source of connection and identity-affirming support to young people, which can be particularly impactful for LGBTQIA+ youth and girls of color, who may struggle to find support elsewhere. However, social media can also cause significant harm to the well-being of children and adolescents, and may have a lasting impact on the developing brain. Insufficient access to data and a lack of transparency from technology companies have prevented the research community from understanding the magnitude of social media’s impact on the mental health and well-being of youth.
With these concerns in mind, The Jed Foundation (JED) advocates for a safety-first approach, in alignment with the U.S. Surgeon General's advisory Social Media and Youth Mental Health.
JED is also closely watching the lawsuit against Meta, the parent company of Facebook and Instagram, which presents an opportunity for the community and government to learn more about the practices these platforms use with young people. The case will highlight the effects of social media on young people, enabling us to better advocate for the implementation of more protective regulations and measures for young social media users. The case also emphasizes that we cannot rely on industry self-regulation, particularly when it comes to protecting children. Government regulation is necessary to develop safeguards that ensure online safety for young people.
JED issues the following recommendations:
- Support federal regulation designed to limit the harmful aspects of social media, such as by preventing online platforms from employing technologies that drive nonstop engagement, including:
- Video autoplay
- Platform-generated messages or alerts (e.g., push notifications)
- Engagement-based rewards
- Enticements and algorithmic nudges that encourage users to share personal information or spend more money on platforms
- Support federal regulation designed to maximize protective factors, including:
- Leveraging algorithms to surface supportive mental health content
- Regulating advertising targeted at individuals under the age of 18
- Building in time limits and digital breaks to support youth in putting their devices down and connecting to other people in person
- Implementing content blocks instead of easily dismissed pop-ups
- Using expertise to discover, downgrade, and ban content that encourages harmful behaviors, including suicide, self-injury, disordered eating, and cyberbullying
- Using expertise to uplift and advance content that supports help-seeking and offers positive solutions to the challenges youth face
- Promoting transparency in algorithms across social media platforms, ensuring that young social media users gain insight into the factors shaping their online experiences
- Giving young people clearer ways to choose the type of content that appears in their feeds
- Require social media companies to establish data transparency policies that include:
- Promptly sharing all relevant data with researchers so the information can be interpreted and further disseminated
- Publishing ongoing studies in a publicly available registry so that the field knows what studies are underway at social media companies
- Disclosing grants or payments to researchers so it is clear what studies were supported with funding from these companies
- Require social media companies to include experts such as psychologists, ethicists, and medical and public health professionals on industry advisory bodies to advocate for youth well-being. To anticipate and mitigate possible harms, companies should involve these professionals before and during the design and deployment of features and algorithms, so that their developmental and mental health impacts can be robustly and fairly evaluated.
- Invest in high-quality, large-scale research into interventions, protective policies, and the short- and long-term effects of social media on mental health. Researchers must be able to conduct longitudinal and real-time studies to understand and map the user-by-technology interactions that affect mental health vulnerability. This requires a new level of collaboration between technology companies and independent research groups.
- Establish a regulatory agency exclusively dedicated to safeguarding digital and online safety. The agency would ensure that industry complies with regulations and centers the well-being of users in practice and policy.
- Support the creation of a commission and advisory council, with regular reporting. The commission, composed of mental health experts, should center child well-being over profit by limiting disclosure and data-sharing to third parties and designating users between age 13 (the youngest age at which people are technically allowed on social media) and 17 (the last year of legal childhood) as a special class.
- Mandate development and deployment of software that accurately detects user age and governs graduated, age-appropriate affordances on platforms. Companies must do a better job of ensuring that users below the designated age of access (typically age 13) are barred from their platforms.
- Involve young people in decision-making and idea generation processes, inviting them to:
- Partner with government agencies to contribute to the creation of legislation and policies that will offer them substantive protection online and enhance the positive benefits of social media
- Collaborate with social media companies to design, create, and implement improvements intended to minimize harm and support their well-being