Teens Report Unwanted Images on Instagram
- A newly disclosed court filing shows that nearly one in five young teens surveyed by Meta reported seeing unwanted nude or sexual images on Instagram.
- The data surfaced through a deposition of Instagram head Adam Mosseri, released as part of a federal lawsuit.
- The findings add to mounting scrutiny over how Meta handles youth safety on its platforms.
Survey Reveals Exposure to Explicit Content
A court document made public in a California federal lawsuit shows that 19% of surveyed Instagram users aged 13 to 15 reported seeing unwanted nude or sexual images. The figure came from a 2021 internal Meta survey, according to company spokesperson Andy Stone. In his deposition, Mosseri said Meta generally does not share survey results and argued that self‑reported data can be unreliable. The filing includes excerpts from his March 2025 testimony.
Meta is facing thousands of lawsuits in the United States alleging that its platforms harm young users by encouraging addictive behavior and contributing to mental‑health issues. The company has denied wrongdoing and emphasized ongoing safety improvements. Stone said the explicit‑image statistic reflects user experiences rather than a review of platform content. He added that Meta continues to refine its policies to reduce exposure to harmful material.
Meta’s Policy Changes and Ongoing Challenges
In late 2025, Meta announced that, for teen users, it would remove images and videos containing nudity or explicit sexual activity, including AI‑generated content, with exceptions considered only for medical or educational purposes. Stone said the company is “proud of the progress” it has made but acknowledged that more work remains. The deposition also revealed that 8% of young teens reported seeing self‑harm content or threats of self‑harm on Instagram.
Mosseri stated that most sexually explicit images were exchanged through private messages rather than public posts. He noted that Meta must balance user safety with privacy expectations when reviewing private communications. Many users, he said, do not want the company reading their messages. This tension complicates efforts to detect harmful content in direct messages.
Legal and Regulatory Pressure Intensifies
The newly released filing comes as global leaders and regulators increase pressure on Meta over youth safety. Governments have raised concerns that the company’s products may expose minors to harmful content or encourage unhealthy online behavior. Meta has repeatedly defended its approach, pointing to new tools designed to limit sensitive content for younger users. The company maintains that it is committed to improving protections across its platforms.
The lawsuits in federal and state courts argue that Meta’s design choices have contributed to a mental‑health crisis among minors. Plaintiffs claim the company prioritized engagement over safety, a charge Meta disputes. Mosseri’s deposition is one of many documents expected to surface as the litigation progresses, and such disclosures may shape future regulatory debates over social media and youth protection.
Instagram’s challenges with teen safety mirror broader industry trends. Several major platforms have struggled to curb the spread of explicit content, self‑harm imagery and other harmful material in private messaging channels. Researchers note that automated detection tools often perform poorly in encrypted or semi‑private environments. As policymakers consider new regulations, the balance between user privacy and effective content moderation remains one of the most difficult issues in the social‑media landscape.
