Technology
ChatGPT now estimates users’ age in bold move to protect online safety of minors
DCM Editorial Summary: This story has been independently rewritten and summarised for DCM readers to highlight key developments relevant to the region. Original reporting by TechCrunch; see the linked post to read the original article.

If you’ve been keeping up with recent developments in AI safety, you might be interested to know that OpenAI has added a new “age prediction” feature to ChatGPT. This tool is designed to better identify users who may be minors and automatically apply content restrictions to protect them during conversations. It’s part of a broader effort by OpenAI to address growing concerns about young people’s exposure to inappropriate content.
You may remember that OpenAI has faced substantial criticism in recent years for how ChatGPT can impact children. There have been troubling cases—including teen suicides—that were allegedly linked to interactions with the chatbot. Additionally, the company has been under fire for allowing conversations that touch on sexual themes with underage users. In one notable instance last year, a bug briefly let the chatbot create erotic material for users under 18, which sparked public outcry and forced the company to take corrective measures.
With the new age-prediction system, ChatGPT analyzes signals from your account, such as your stated age, how long the account has existed, and when you're typically active. Based on these behavioral and account-level cues, the system estimates whether you might be underage.
If the system flags you as likely being under 18, it will automatically activate existing content filters. These filters are intended to block discussions involving sex, violence, and other sensitive subjects that aren’t appropriate for younger users. The goal is to create a safer environment for all users, especially minors, without overly restricting access for adults.