Instagram’s Teen Safety Tools Face Scrutiny

  • A new report finds most of Instagram’s youth protection features ineffective, raising concerns about Meta’s approach to online safety.

A recent study has cast doubt on the effectiveness of Instagram’s safety features designed for teenage users. Conducted by child-safety advocacy groups and reviewed by researchers at Northeastern University, the report evaluated 47 tools and found only eight to be fully functional. Many features were flawed, outdated, or simply failed to activate during testing, contrary to Meta’s public claims. The findings arrive amid growing pressure on social media platforms to better protect vulnerable users.

Safety Tools Fall Short in Practice

Researchers found that several key protections, such as search-term blockers and anti-bullying filters, were easily bypassed. In one case, the blocked term “skinny thighs” surfaced harmful content when entered without the space. Filters meant to redirect users away from self-harm material failed to activate, even during binge-like viewing sessions. While some features, such as quiet mode and parental approval for account changes, worked as intended, the majority did not meet expectations.

The report, titled “Teen Accounts, Broken Promises,” compiled Instagram’s safety updates from the past decade. It was supported by groups founded by parents who lost children to online harm and reviewed by Northeastern professor Laura Edelson. Former Meta safety executive Arturo Bejar contributed insights, saying that promising ideas were often diluted during development. Meta disputed the report’s conclusions, claiming its tools reduce exposure to sensitive content and improve teen experiences.

Policy Gaps and Regulatory Pressure

Reuters independently verified some of the report’s findings and reviewed internal Meta documents. These revealed lapses in maintaining automated systems for detecting eating-disorder and self-harm content. Staff also noted delays in updating search-term blockers used to prevent child exploitation. Meta says it has since improved these systems by combining automation with human oversight.

Broader Safety Challenges at Meta

Recent investigations have intensified scrutiny of Meta’s youth safety practices. U.S. senators launched a probe after reports surfaced that company chatbots could engage children in inappropriate conversations. Former employees testified that Meta suppressed research on preteen exposure to predators in virtual reality environments. In response, Meta announced plans to expand teen account features to Facebook users globally and partner with schools to promote safer online habits.

