OpenAI says parents will soon be able to monitor and manage their children’s use of ChatGPT after mounting lawsuits and concerns that the popular AI chatbot may be putting young users at risk.

The company announced that it will roll out new parental controls for ChatGPT within the next month, allowing parents to link their accounts with those of their teens, set age-appropriate guidelines, disable memory and chat history, and receive alerts when the system detects that their teen is in “acute distress” while interacting with the bot.

“Many young people are already using AI. They are among the first ‘AI natives,’ growing up with these tools as part of daily life, much like earlier generations did with the internet or smartphones. That creates real opportunities for support, learning, and creativity, but it also means families and teens may need support in setting healthy guidelines that fit a teen’s unique stage of development,” read a September 2 statement posted by OpenAI.

OpenAI had previously built basic safeguards into ChatGPT, such as directing users to crisis hotlines, and says those safeguards will now be expanded to help reduce the risk of self-harm.


Beyond parental oversight, OpenAI stated that future updates will include new technical guardrails, such as routing sensitive conversations to advanced safety models, and a continued partnership with youth development and mental health experts. 

OpenAI did not reference the specific legal action in the announcement regarding child safety changes, but acknowledged “recent heartbreaking cases of people using ChatGPT in the midst of acute crises” as motivation for change in another post.

The move comes shortly after a high-profile lawsuit was filed in California, in which two parents allege ChatGPT contributed to their 16-year-old son’s suicide by offering harmful advice and validating self-destructive thoughts, per the BBC.

The lawsuit claims ChatGPT validated the “most harmful and self-destructive thoughts” expressed by their son, and it accuses OpenAI of negligence and wrongful death.

Robbie Torney, a director of AI programs at Common Sense Media, said the additional safety guardrails are a solid first step for OpenAI, but noted that young users can easily turn the features off if they choose.

“This is not really the solution that is going to keep kids safe with A.I. in the long term,” he wrote, per The New York Times. “It’s more like a Band-Aid.”

OpenAI stated that these changes are “only the beginning” for the development of safety features, adding that further enhancements will be introduced over the next few months.

“We will continue learning and strengthening our approach, guided by experts, with the goal of making ChatGPT as helpful as possible. We look forward to sharing our progress over the coming 120 days,” stated the company.