Character.AI Is Right (Belatedly) To Bar Children From Using Its Chatbots
WASHINGTON, D.C. — Today, Character.AI announced that it will bar children under the age of 18 from using its chatbots, following several lawsuits from families alleging that their children died by suicide after being led to do so by the company's chatbots.
Robert Weissman, co-president of Public Citizen, commented on the news:
“Big Tech’s decision to expose children to AI companions and human-seeming chatbots is a reckless social experiment that is already going horribly wrong. Character.AI’s announcement today belatedly but helpfully recognizes that reality. Other companies that aim to monetize children’s data, like Meta, should follow suit and restrict children’s access to AI companions that were never designed with their safety in mind.
“But if we know anything, we know we can’t depend on Big Tech to exercise self-restraint. Now it’s time for federal and state legislatures to move quickly on legislation banning Big Tech from making AI companions and chatbots available to kids.”