FTC to Probe AI Chatbots’ Impact on Children

- U.S. regulators plan to request internal records from major AI firms amid growing concerns over chatbot interactions with minors.

Regulatory Scrutiny Intensifies

The U.S. Federal Trade Commission (FTC) is preparing to examine how AI chatbots may affect children’s mental health, according to a report by the Wall Street Journal. The agency plans to request internal documents from several leading tech companies, including OpenAI, Meta Platforms, and Character.AI. Officials cited in the report say the FTC is drafting letters to firms that operate widely used chatbot services, part of a broader effort to assess the risks emerging technologies pose in sensitive contexts.

Character.AI said it had not yet received formal communication from the FTC but expressed willingness to cooperate with regulators and lawmakers. Neither Meta nor OpenAI responded to Reuters’ requests for comment, and the report has not been independently verified. A White House spokesperson emphasized that the administration remains committed to advancing U.S. leadership in AI while safeguarding public welfare. The initiative aligns with President Trump’s broader technology agenda, which prioritizes U.S. dominance in AI and cryptocurrency.

Concerns Over Chatbot Behavior

Recent revelations have intensified scrutiny of chatbot interactions with minors. A Reuters investigation found that Meta’s AI systems had permitted bots to engage in romantic or suggestive conversations with children. In response, Meta announced new safeguards, including training its AI to avoid flirtatious dialogue and discussions of self-harm or suicide, and temporarily restricting teenage users’ access to certain AI characters.

These developments follow a formal complaint filed in June by more than 20 consumer advocacy groups, which alleged that platforms such as Meta AI Studio and Character.AI were facilitating unlicensed mental health services through so-called “therapy bots.” Critics argue that such interactions may mislead users and pose risks to vulnerable populations. Legal and ethical questions continue to mount as AI tools become more integrated into everyday digital experiences.

Legal Action and Industry Response

Texas Attorney General Ken Paxton has opened an investigation into Meta and Character.AI over alleged deceptive trade practices and privacy violations. The probe centers on claims that children were misled by AI-generated mental health advice presented as therapeutic support. The allegations have prompted renewed calls for clearer regulation and oversight of AI-driven services, and companies now face pressure to disclose how their systems interact with underage users.

The FTC’s planned review may lead to broader legislative efforts aimed at protecting minors online. Industry stakeholders are watching closely as regulators define the boundaries of responsible AI deployment. While some firms have begun implementing safeguards, others may face increased scrutiny over their design choices and data handling practices. The outcome of these investigations could shape future standards for AI development and user protection.