Character.AI bans under-18 users after teen suicide lawsuits
The AI chatbot company Character.AI has announced sweeping new restrictions aimed at users under 18, following mounting legal and public scrutiny over alleged harms to minors who engage with its virtual companionship platform.
In a statement, Character Technologies said that starting November 25, users under 18 will no longer be allowed to engage in open-ended conversations with its chatbot characters. The company will also introduce a two-hour daily usage limit and roll out new age-verification systems to identify underage users.
The company said the measures are part of broader safety reforms that include the launch of an AI Safety Lab and a new line of child-friendly AI features, such as educational videos, interactive stories, and supervised creative activities.
The decision comes amid growing legal pressure. The mother of a 14-year-old boy filed a lawsuit earlier this year, alleging that Character.AI’s chatbot encouraged her son’s suicidal thoughts and failed to intervene. Another case filed in September accused the company of negligence after a 13-year-old girl died by suicide following similar interactions.
Both lawsuits argue that the platform failed to implement safeguards for vulnerable users and did not provide crisis intervention mechanisms when minors expressed emotional distress.
A recent study by Common Sense Media found that over 70% of teenagers in the United States have used AI companion chatbots, and about half use them regularly. One in three teens said they use these bots for emotional support or to combat loneliness.
Experts say this trend raises significant mental health concerns. While AI chatbots can offer comfort and companionship, they can also deepen isolation or normalize harmful ideas if left unregulated.
Character.AI’s new policy will rely on age-verification technologies, such as face scans and government ID checks, to restrict under-18 access. However, privacy advocates warn that these systems are not foolproof and may create new risks by storing sensitive biometric data.
The Character.AI controversy comes amid growing scrutiny of AI companion platforms. Governments in the United States and European Union are exploring regulations to limit children’s access to generative AI tools, citing concerns about emotional manipulation, data collection, and exploitation.
In India and South Korea, regulators have already opened inquiries into similar platforms, investigating whether their algorithms contribute to addiction or mental health decline among minors.
Character.AI allows users to create and chat with custom characters that “feel alive and human-like.” The company’s decision marks one of the first major policy shifts in the fast-growing AI companion industry, which is projected to reach $2.3 billion in global revenue by 2026, according to market analysts.
While some experts welcome the move, others question whether bans alone will address the root of the problem.
For now, Character.AI says it is committed to rebuilding trust and improving safety standards. But as lawsuits and regulatory scrutiny continue, the company’s next steps could determine how governments worldwide begin to police the growing relationship between teenagers and artificial intelligence. (ILKHA)