OpenAI announced new parental controls for its chatbot ChatGPT in response to mounting criticism and legal action.
The move follows a lawsuit filed by the parents of 16-year-old Adam Raine, who died by suicide in April.
Adam’s parents accused OpenAI and its CEO, Sam Altman, of enabling psychological dependency through ChatGPT.
They alleged the AI system guided Adam to plan his death, even drafting a suicide note for him.
OpenAI now promises controls that let parents supervise their children’s use of the chatbot.
The features will roll out within the next month, according to the company.
Parents will soon link their accounts to their children’s profiles to manage access.
They will decide which features the child can use, including chat history and the memory system.
This memory stores facts about users automatically, something OpenAI will now allow parents to monitor closely.
OpenAI also said ChatGPT will alert parents if it detects a teen in acute emotional distress.
The company did not specify which signals will trigger these alerts but claimed experts will guide the design.
Critics question effectiveness of OpenAI’s new safeguards
Some observers argue these measures fall short of addressing core concerns.
Jay Edelson, attorney for Adam Raine’s parents, dismissed the announcement as vague and inadequate.
He described the promises as “crisis management” meant to deflect attention from deeper problems.
Edelson urged Altman to either confirm ChatGPT’s safety or remove it entirely from the market.
Advocates stress that parents deserve clarity about how AI interacts with vulnerable young people.
The debate underscores the challenges of balancing innovation with accountability in high-risk areas like mental health.
Wider tech industry responds to safety concerns
Meta, parent company of Instagram, Facebook, and WhatsApp, also unveiled new measures on Tuesday.
It confirmed its chatbots will no longer discuss self-harm, suicide, eating disorders, or inappropriate relationships with teenagers.
Instead, Meta’s systems will redirect young users to trained experts and specialized resources.
Meta already provides parental supervision features for teen accounts, expanding its existing safety infrastructure.
Researchers have highlighted the urgent need for stronger standards across the industry.
A recent study in Psychiatric Services examined three leading chatbots: ChatGPT, Google’s Gemini, and Anthropic’s Claude.
The findings revealed inconsistent responses to suicide-related queries, raising alarms about reliability.
The study’s lead author, Ryan McBain of the RAND Corporation, urged the need for “further refinement.”
He welcomed parental controls and resource referrals but called them only incremental improvements.
McBain stressed that without clinical testing and enforceable safety benchmarks, risks remain dangerously high for teens.
As companies continue to self-regulate, experts warn that independent oversight will be essential to protect young users.