Meta Faces Scrutiny Over AI Chatbot Guidelines

- Meta’s internal AI rules permitted chatbots to engage in romantic conversations with minors, and its platforms hosted flirty bots modeled on real celebrities, prompting criticism and legal concerns over child safety and likeness rights.
Internal Policies Allow Controversial AI Interactions
Meta’s generative AI guidelines have come under fire following a Reuters investigation that revealed the company’s chatbots were allowed to engage in romantic conversations with minors and spread false medical information. A leaked internal document outlined acceptable chatbot behavior across Meta’s platforms, including Facebook, WhatsApp, and Instagram. The document, approved by Meta’s legal, policy, and engineering teams, included examples of bots describing children in suggestive terms and making racially offensive statements. Meta confirmed the document’s authenticity but removed sections after Reuters raised questions.
Company spokesperson Andy Stone acknowledged that such interactions should never have been permitted and said the guidelines were being revised. He emphasized that Meta’s policies prohibit sexualized content involving minors and romantic roleplay between adults and children. Despite these rules, enforcement has been inconsistent, allowing problematic content to surface. The company has since introduced temporary safeguards that limit teens’ exposure to certain AI characters and steer chatbot conversations away from sensitive topics.
Celebrity Likenesses Used Without Consent
The controversy deepened when Reuters reported that Meta had used the names and likenesses of celebrities, including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez, to create flirty chatbots without their permission. While many bots were user-generated, at least three, including two Swift “parody” bots, were created by a Meta employee. The company also allowed public chatbot creation based on child celebrities, including 16-year-old actor Walker Scobell, whose bot generated a lifelike shirtless image. These avatars appeared across Meta’s social platforms and often claimed to be the real individuals.
Reuters testing revealed that the bots frequently made sexual advances and invited users to meet in person. Some adult celebrity bots produced photorealistic images in lingerie or intimate settings when prompted. Stone stated that such content violated Meta’s policies and should not have been generated. Although parody labeling is required, Reuters found several bots lacking proper identification, raising further concerns about impersonation and misuse.
Legal and Safety Implications
Legal experts have questioned whether Meta’s use of celebrity likenesses qualifies for protection under parody or transformative use. Stanford law professor Mark Lemley noted that California’s right-of-publicity laws prohibit commercial use of a person’s identity without consent, and the bots in question may not meet the criteria for exemption. Representatives for some celebrities declined to comment, while others, including Hathaway, are reportedly considering legal action. The broader issue has prompted calls for federal legislation to protect individuals from AI-based duplication of voice, image, and persona.
The risks extend beyond legal boundaries. Duncan Crabtree-Ireland, executive director of SAG-AFTRA, warned that chatbots resembling real celebrities could encourage obsessive behavior and pose safety threats. A recent case involving a cognitively impaired man who died en route to meet a Meta chatbot highlights the potential dangers. As AI-generated companions become more lifelike, the need for clear ethical standards and robust enforcement grows. Meta’s response to these revelations may shape future regulation and industry practices.