Third Letter Sent to OMB Urging the Suspension and Removal of Grok from Federal Agencies
The most recent letter comes after Grok created nonconsensual sexualized images of women and apparent minors
Director Russell Vought
Office of Management and Budget
Executive Office of the President
725 17th Street, NW
Washington, DC 20503
Re: Third Follow-Up on Federal Procurement of Grok AI
Dear Director Vought,
We write to follow up on our August 28, 2025 and October 29, 2025 coalition letters urging the Office of Management and Budget (OMB) to suspend the federal deployment of Grok, the large language model developed by xAI. It is deeply concerning that the federal government would continue to deploy an AI product with system-level failures resulting in the generation of nonconsensual sexual imagery and child sexual abuse material. Given the administration’s executive orders, guidance, and the recently passed Take It Down Act supported by the White House, it is alarming that OMB has not yet directed federal agencies to decommission Grok. The General Services Administration has continued to make Grok available government-wide under its OneGov contract. This continued deployment is inconsistent with Executive Order 14319 and OMB’s binding AI safety and neutrality guidance.
Grok has demonstrated persistent and escalating failures related to accuracy, neutrality, and safety — including the generation of racist, antisemitic, conspiratorial, and false content, as well as sexually exploitative outputs. These failures are extensively documented and widely reported. Elon Musk himself has acknowledged that Grok is “very dumb,” that it committed a “dumb error” resulting in the bot being banned on X, and that it is at times “too compliant to user prompts.” This level of corporate conduct and these system failures are patently incompatible with the Administration’s own requirements for federally procured AI systems. Under OMB’s guidance, systems that present severe and foreseeable risks that cannot be adequately mitigated must be discontinued. Grok meets that standard.
This letter reiterates our prior concerns, documents new and disqualifying failures, and underscores that Grok’s behavior directly contradicts Executive Order 14319 and OMB M-25-21 and M-25-22.
Grok Violates the Administration’s Own AI Rules
Executive Order 14319 and OMB’s implementing guidance require that AI systems procured by the federal government be truth-seeking, accurate, ideologically neutral, and subject to effective risk mitigation. OMB M-25-21 and M-25-22 require agencies to discontinue use of an AI system if proper risk mitigation is not possible. OMB M-25-22 further requires that any acquisition of AI conform to the standards in M-25-21. Taken together, these requirements compel the complete removal of Grok from the OneGov contract.
Sexual Exploitation, Nonconsensual Imagery, and Child Harm
In December 2025, Grok began producing sexually exploitative content involving minors and nonconsensual sexualized imagery of women at an alarming rate. Public demonstrations and reporting show Grok responding to prompts explicitly seeking to generate sexualized images of real or hypothetical individuals without consent, including images involving children. On December 28, 2025 alone, Grok reportedly generated approximately one nonconsensual sexual deepfake per minute.
These outputs include sexually explicit and sexually suggestive images and narrative descriptions generated in response to user requests, including attempts to convert images into child sexual abuse material. The production of any such content is incompatible with federal procurement, federal child protection policy, and OMB’s AI safety framework.
These are not isolated incidents. They reflect systemic failures in xAI’s ability to enforce baseline protections against sexual exploitation, child harm, and nonconsensual abuse. As we warned in our prior letters, Grok’s architecture and content moderation safeguards are insufficient to prevent foreseeable and severe harms, particularly to women and children.
Under OMB Memorandum M-25-21, agencies are required to discontinue the use of AI systems when foreseeable risks of severe harm cannot be adequately mitigated. The continued production of sexualized content involving minors and nonconsensual sexual imagery demonstrates that such mitigation has not been achieved. These failures independently require immediate suspension and investigation of Grok’s federal deployment.
An AI system capable of generating sexualized depictions of minors or facilitating nonconsensual sexual imagery cannot meet the minimum safety threshold for federal use. Grok’s continued availability under the OneGov contract therefore represents a direct violation of OMB’s binding risk mitigation requirements.
AI Director Michael Kratsios Testimony on Executive Order 14319
During his September 10, 2025 testimony, Director Kratsios confirmed that AI systems procured by the federal government must be truth-seeking, accurate, and compliant with Executive Order 14319 and OMB’s implementing guidance. He further confirmed that models producing antisemitic content, Holocaust denial, conspiratorial outputs, or ideological bias are in violation of those requirements.
The behaviors documented from Grok fall squarely within the conduct the executive order and OMB memoranda were designed to prohibit. Grok’s outputs, as publicly reported and previously detailed, are inconsistent with the standards required for federally procured AI systems. The continued federal deployment of Grok in light of these acknowledged violations is indefensible.
Required Actions
OMB must take immediate action to bring federal AI procurement into compliance with Executive Order 14319 and OMB Memoranda M-25-21 and M-25-22. Specifically, OMB must:
- Immediately suspend the federal deployment of Grok under the GSA OneGov contract pending a full compliance determination.
- Initiate a formal investigation into Grok’s safety failures and the procurement and oversight processes that permitted its federal deployment, including whether required risk assessments, mitigation measures, and compliance determinations were conducted and appropriately reviewed.
- Publicly clarify whether Grok has been evaluated for compliance with Executive Order 14319’s truth-seeking and neutrality requirements and whether it was determined to meet OMB’s risk mitigation standards.
- Require disclosure of all safety testing, red-teaming results, and risk assessments conducted on Grok as a condition of any continued consideration for federal use.
- Explain the legal basis on which Grok remains available to federal agencies despite documented violations of OMB’s binding guidance.
OMB is entrusted with ensuring that AI systems procured by the federal government meet the highest standards of safety, truth-seeking, accuracy, and neutrality. OMB has committed to upholding public trust in AI systems while delivering government efficiency and innovation. Grok has not only repeatedly failed to meet these standards, but has demonstrated escalating and disqualifying harms, including the generation of nonconsensual sexual imagery and content involving the sexual exploitation of minors. This is deeply damaging to public trust in the government’s AI deployment and use. As Director Kratsios has acknowledged, such conduct is precisely the type of behavior Executive Order 14319 and OMB’s binding guidance were designed to prevent.
Continuing to deploy Grok in the federal government is wholly inconsistent with OMB’s own safety mandates, the Administration’s AI Action Plan, and the core commitments of Executive Order 14319. We therefore urge OMB to take immediate corrective action to suspend Grok’s deployment, investigate how these failures occurred, and prevent further erosion of public trust, institutional integrity, democratic governance, and national security.
Respectfully submitted,
Public Citizen
Center for AI and Digital Policy (CAIDP)
AFT
Autistic Women & Nonbinary Network
Asian Americans Advancing Justice
Center for Biological Diversity
Center for Economic Justice
Center for Oil and Gas Organizing
Center for Progressive Reform
Color Of Change
Common Cause
Consumer Federation of America
Demand Progress
Distributed AI Research Institute
Government Information Watch
Indivisible
Memphis Community Against Pollution
Mossville Environmental Action Now (MEAN)
MPower Change
National Association of Voice Actors
Open MIC
Oxfam America
People Power United
Presente.org
Stand.earth
The Leadership Conference on Civil and Human Rights
The Value Alliance
UltraViolet
United Church of Christ Media Justice Ministry
Welcoming America
CC:
U.S. Government Accountability Office
U.S. House of Representatives Oversight Committee
U.S. Office of Science and Technology Policy
U.S. Senate Oversight Committee
U.S. Federal Trade Commission