OpenAI launches age prediction for teen safety
The rollout of ChatGPT age prediction is now underway, according to an announcement OpenAI made Tuesday.
Last fall, the company indicated it would introduce age prediction as a safety measure for teens. OpenAI has been sued by parents of teens who died by suicide after ChatGPT allegedly coached them to end their lives or didn't respond appropriately to their discussions of psychological distress. OpenAI has denied the allegations in the first of those lawsuits.
In December, the company introduced an update to its Model Spec, which guides how its AI models should behave. The update focused on principles for responding to under-18 users in high-stakes situations.
According to the details made public on Tuesday, age prediction is launching on consumer plans.
ChatGPT's age prediction model estimates a user's age based on behavior and signals from their account, like when the person is active during the day, long-term usage patterns, how long the account has existed, and the user's stated age.
If a user's age is assessed as under 18, their account is shielded from graphic violence, depictions of self-harm, and sexual, romantic, or violent role play, among other types of potentially harmful content.
Teens who tell OpenAI they're under 18 upon opening a ChatGPT account are automatically subject to such safeguards. If OpenAI isn't confident about a user's age, their account will default to safer settings.
OpenAI said that adult users whose accounts are mistakenly placed in the under-18 experience can confirm their age by submitting a selfie to Persona, a third-party identity verification service. OpenAI did not provide additional information about how ID documents would be retained. In October 2025, a third-party vendor used by the messaging platform Discord was breached, exposing upwards of 70,000 government IDs.
OpenAI said it planned to improve age-prediction accuracy and make further improvements based on observations from the initial rollout.