EU Investigation Into Grok Underscores Global Alarm Over Untested AI System
WASHINGTON, D.C. — Today, the European Commission announced that it has opened an investigation into the creation and dissemination of non-consensual sexually explicit material by Elon Musk’s AI chatbot Grok.
J.B. Branch, Big Tech accountability advocate at Public Citizen, issued the following statement:
“The European Commission’s decision to open a formal investigation into Grok’s artificial intelligence system underscores the growing global alarm over the dissemination of manipulated sexualized images of women and apparent minors generated by AI tools. The probe follows similar actions by regulators in the United Kingdom and Asia and reflects concerns that X failed to adequately assess and mitigate systemic risks associated with Grok’s deployment.
“Public Citizen has repeatedly warned the U.S. federal government about these risks, including in two formal letters joined by more than 30 civil society organizations, urging federal agencies to address the dangers posed by generative AI systems capable of producing exploitative and unlawful content.
“Europe is doing what the United States has so far failed to do: treating AI-enabled sexual exploitation as a systemic risk that demands immediate oversight. Grok’s failures are not isolated incidents—they are documented, foreseeable, and repeatedly flagged by AI safety experts around the world. U.S. state attorneys general and Congress must launch their own investigations into Grok’s development, deployment, and safeguards. In particular, the Senate Commerce Committee under Senator Ted Cruz and the Senate Homeland Security and Governmental Affairs Committee under Senator Rand Paul should exercise their oversight authority to determine whether federal institutions have failed in their duty to protect consumers, children, and fundamental rights. The federal government must also pause the use of Grok within federal agencies, as we have detailed in several letters. The U.S. cannot criticize European regulators for holding U.S. tech companies accountable while refusing to confront the same dangers at home. If the United States is serious about AI safety and democratic accountability, it cannot outsource the work of oversight to foreign regulators.”