Meta Faces Scrutiny Over AI Chatbot Policies
- Newly released court documents allege that Meta allowed minors to access AI chatbots capable of sexual interactions despite internal safety warnings.
- The filings claim that senior staff urged stricter safeguards, but leadership opted for a less restrictive approach.
- Meta disputes the allegations and says the evidence is being misrepresented.
Court Filing Alleges Rejected Safeguards
A lawsuit filed by New Mexico’s attorney general accuses Meta of failing to prevent harmful sexual content from reaching minors on its platforms. Internal emails and messages included in the filing suggest that company leadership declined proposals to impose stronger protections on AI chatbots. Safety staff had raised concerns that the bots could engage in romantic or sexual interactions with users, including those under 18. The documents indicate that these warnings were not fully reflected in the final product decisions.
Some employees objected to the development of chatbots designed for companionship, noting that such systems could enable inappropriate conversations. The AI companions launched in early 2024; while the filing attributes the product decisions to senior leadership, the cited documents do not include messages written directly by CEO Mark Zuckerberg. Meta spokesperson Andy Stone said the state’s interpretation was selective and did not accurately represent internal discussions. He argued that the available evidence shows leadership supported blocking explicit interactions with younger teens and preventing adults from creating underage romantic AI personas.
Concerns About Parental Controls and Teen Access
Messages from early 2024 show staff urging the company to introduce parental controls that would let guardians disable generative AI features. Other employees reported that leadership rejected these proposals and continued work on “Romance AI chatbots” accessible to users under 18. Nick Clegg, then Meta’s president of global affairs, warned in an internal email that sexual interactions could become a dominant use case among teenage users, an outcome he suggested could provoke significant public backlash.
Meta’s AI chatbot policies drew broader attention after reports in 2025 described sexualized underage characters and all‑ages roleplay scenarios. Additional reporting indicated that internal guidelines had once allowed romantic or sensual conversations with minors, though Meta later said the document was erroneous. The company announced last week that it had removed teen access to AI companions while it works on a revised version. These developments have intensified scrutiny from lawmakers and regulators in the United States.
AI safety researchers note that conversational systems can unintentionally generate inappropriate content if not carefully constrained. Several major technology companies have faced similar challenges as they expand generative AI features into consumer products. Early academic studies on AI companionship suggest that users often push systems toward intimate or emotional interactions regardless of design intent, highlighting the difficulty of building universally safe guardrails.
