Regulating AI in the States
Public Citizen News / Nov-Dec 2025
By Ilana Beller
This article appeared in the November/December 2025 edition of Public Citizen News.
As artificial intelligence (AI) increasingly reshapes the contours of daily life — deciding insurance claims, meddling in elections, even chatting with teens — the rules meant to keep it in check are lagging far behind. That’s why Public Citizen has stepped in, leading a nationwide push to establish guardrails that protect people from AI’s most dangerous uses. From combating deepfakes in elections to shielding kids from manipulative and dangerous “AI companions,” the organization is helping lawmakers in nearly every state turn concern into action.
Two years ago, we launched an initiative to pass legislation regulating deepfakes in election communications in all 50 states and Washington, D.C. Since then, we have substantially expanded that work, building relationships with Democratic and Republican lawmakers in most states who are interested in regulating AI, and we have been closely involved in the enactment of AI legislation across the nation. To date, we have drafted five model bills offering common-sense legislative solutions to some of the most obvious and dangerous harms of unregulated AI.
Our latest model legislation addresses the risks posed by emotionally manipulative AI companions, including chatbots. AI companions are designed to mimic human conversation and drive emotional engagement, and tech companies typically build them to maximize the time users spend with them. As a result, the companions tend to validate whatever users say, even when those statements are troubling. Common Sense Media reported that 72% of teens have used AI companions, with many engaging with the bots at least a few times a month.
The American Psychological Association has expressed significant concern that children’s relationships with AI companions may hinder their ability to develop social skills and real-life emotional connections while creating unhealthy dependencies on the technology.
Making matters worse, conversations with AI companions can become highly sexualized, even when the user has identified as a minor. A Wall Street Journal investigation revealed that AI companions from Meta (Facebook's parent company) continued to engage in sexual discussions after learning users were minors, making explicit references to their ages before proceeding with graphic exchanges.
In some extreme cases, AI companions can also encourage people to commit serious harm to themselves and others. Sixteen-year-old Adam Raine struggled with suicidal thoughts, but was allegedly discouraged by ChatGPT from seeking any outside help. Message logs revealed that the chatbot gave him advice on covering red marks on his neck from an attempted hanging and further helped him assess the effectiveness of a specific noose.
“There is serious reason to be worried about the impact of AI companions on minors, both for extreme cases as well as the more general impact on adolescent development and social well-being,” said Robert Weissman, co-president of Public Citizen. “Big Tech and AI companies are rapidly advancing this technology. Our children should not be the guinea pigs in such a reckless social experiment, particularly when we have so much early evidence of harm.”
Public Citizen is taking aggressive action, engaging legislators nationwide to confront this crisis head-on and pass protections for children. Our new model bill, created in partnership with the Young People’s Alliance, addresses concerns about emotionally manipulative AI and its impact on kids. We are discussing this legislation with lawmakers across the country, sharing our model and supporting them in passing the bill in their state legislatures.
In the coming years, we are committed to working tirelessly to rein in and regulate AI across the board for a better future for our society. Nowhere is that work more pressing than in establishing clear guardrails to protect children from AI companions.
If you or someone you know needs help, call or text the 988 Suicide & Crisis Lifeline at 988. You can also reach a crisis counselor by texting the Crisis Text Line at 741741.