Meta Unveils New Teen Safety Tools, Removes 635K Accounts

- Meta is introducing new teen safety features, including a one-tap option to block and report accounts, and has removed 635,000 accounts for inappropriate interactions with children amid growing scrutiny over youth mental health.
Meta, the parent company of Instagram, has introduced several new safety features aimed at protecting its younger users. The company also announced the removal of hundreds of thousands of accounts that engaged in inappropriate behavior towards children. These measures come amid growing public and legal pressure over the impact of social media on youth mental health. One key update now provides teens with more information about accounts that message them, empowering them to make safer choices online.
The company stated it removed a total of 635,000 accounts for making sexualized comments or requesting explicit images from adult-run accounts of kids under 13. Of those, 135,000 accounts were removed for leaving sexualized comments, while another 500,000 were removed for associated inappropriate interactions. In addition, teen users now have a simple, one-tap option to block and report accounts that make them feel uncomfortable. The new features build on an existing system that has already prompted teens to block more than one million accounts and report another million after seeing a “safety notice.”
AI-Based Age Verification and Protections
Earlier this year, Meta began testing the use of artificial intelligence to verify the ages of its users on Instagram. This system is designed to identify individuals who may have lied about their age to bypass platform restrictions. If the AI determines a user is under 18 but has claimed to be an adult, their account will automatically be converted to a teen account. These specialized accounts come with stricter privacy settings, including being private by default. They also limit private messages to only those from people the teen already follows.
This follows an earlier policy, implemented in 2024, under which all new teen accounts were made private by default. The move to use AI for age verification and to add these new protections is a direct response to the ongoing legal and ethical debates surrounding youth safety online. The company’s efforts reflect a broader industry-wide acknowledgment of the need for stronger safeguards, and these technological measures are aimed at creating a safer environment for younger users across Meta’s platforms.
Broader Legal and Public Scrutiny
Meta’s new safety measures arrive as the company faces a wave of lawsuits from across the United States. Dozens of states have filed suits accusing the company of deliberately designing its platforms to be addictive and harmful to young people. The legal challenges claim that specific features on Instagram and Facebook contribute directly to a youth mental health crisis. These lawsuits underscore the immense scrutiny the company is under regarding its corporate responsibility to protect its youngest users.
The intense focus on youth safety is also driving changes across the social media landscape. Meta’s actions are part of a larger conversation about the ethical design of digital products and their societal impact. The company’s continued rollout of new features answers these ongoing legal and public pressures. It also reflects a growing industry trend in which companies must not only innovate but also prioritize user well-being, especially for vulnerable populations.
The Evolution of Online Safety
In recent years, the conversation around children’s online safety has moved beyond simple parental controls to focus on platform design itself. Research from the Pew Research Center in 2023 showed that 95% of teens aged 13-17 use a social media platform, with YouTube, TikTok, and Instagram being the most popular. This widespread use has put immense pressure on tech companies to design their services with a greater sense of responsibility.