FTC’s AI Chatbots Investigation Must Be Catalyst for Stronger, Enforceable Guardrails

WASHINGTON, D.C. — The Federal Trade Commission announced today that it is launching an inquiry into seven companies that provide consumer-facing AI chatbots, including Alphabet, Meta, OpenAI, and Character Technologies. The FTC is seeking information on how these firms measure, test, and monitor potential harms to children and teens, and whether they are complying with federal privacy protections. The investigation will examine how companies monetize user engagement, handle personal data, and mitigate risks such as psychological manipulation, unsafe conversations, and inappropriate relationships between children and AI companions.

J.B. Branch, Big Tech Accountability Advocate at Public Citizen, issued the following statement:

“The Federal Trade Commission’s inquiry into the risks to children posed by AI-powered chatbots is urgently needed. For too long, Big Tech has treated kids as test subjects for experimental AI companions that mimic human emotions, encourage dependency, and harvest intimate data — all without meaningful safeguards. Reports of chatbots engaging in sexualized conversations with minors, normalizing harmful behaviors, or blurring the lines between reality and simulation underscore the dangers of leaving these technologies unregulated.

This inquiry must be a catalyst for stronger, enforceable guardrails from Congress; voluntary promises are not enough. Protecting kids online requires accountability, transparency, and regulation with real teeth. That’s why lawmakers must also reject efforts like Senator Cruz’s SANDBOX Act, which would hand Big Tech sweeping exemptions from consumer protection laws under the guise of “innovation.” The SANDBOX Act would allow AI companies to sidestep the very safeguards the FTC is now scrutinizing, putting children, families, and democracy at risk. Deregulation is not leadership; it’s a blank check for corporate abuse.”