OpenAI Adds Parental Controls to ChatGPT Platform

  • Following a teen suicide case, OpenAI introduces new safety features for families using ChatGPT, including content filters and usage limits.

OpenAI has launched a set of parental controls for its ChatGPT platform, available on both web and mobile starting this week. The update comes in response to a lawsuit filed by the parents of a California teenager who died by suicide, allegedly after receiving harmful advice from the chatbot. These new tools allow parents and teens to link accounts, enabling stronger safeguards only when both parties agree to the connection. Regulators in the United States have intensified scrutiny of AI platforms, citing concerns about their influence on minors.

Safety Features and Account Linking

Once accounts are linked, parents gain access to several control options aimed at reducing exposure to sensitive content. They can manage whether ChatGPT retains past conversations and decide if those chats contribute to model training. Additional settings include quiet hours that restrict access during specific times, and the ability to disable voice interaction and image generation. Despite these controls, parents will not be able to view their teen’s chat history, preserving a degree of user privacy.

In cases where OpenAI’s systems or human reviewers detect signs of serious safety risks, limited notifications may be sent to parents. These alerts will contain only the information necessary to support the teen’s wellbeing. If a teen chooses to unlink the accounts, parents will be informed of the change. OpenAI emphasized that these measures are designed to balance safety with autonomy, especially for older minors using the platform independently.

Broader Industry Response and Future Plans

OpenAI, which reports around 700 million weekly active users for ChatGPT, is also developing an age prediction system to automatically apply teen-appropriate settings. The initiative reflects growing industry efforts to address risks associated with AI interactions among younger users. Meta recently announced similar safeguards for its AI products, including restrictions on flirty dialogue and discussions of self-harm; its update also introduced temporary limits on access to certain AI characters.

Age Prediction as a Safety Mechanism

OpenAI’s age prediction system represents a technical approach to enforcing age-specific protections without requiring formal age verification. By analyzing behavioral patterns and language use, the system aims to identify underage users and apply appropriate filters automatically. This method could become a standard across AI platforms seeking to comply with emerging safety regulations. As AI tools become more embedded in everyday life, such innovations may play a key role in safeguarding vulnerable users.
