New Letter Calls on OMB to Block Grok Implementation

WASHINGTON, D.C. — In a letter organized by Public Citizen and Color of Change, more than 30 organizations called on the Office of Management and Budget (OMB) to block the adoption of Grok, the large language model (LLM) developed by xAI, across federal agencies. The letter cites concerns over Grok’s documented hate speech, lack of objectivity, and absence of safety testing.

The letter warns that “Grok’s recurring patterns of ideological bias, erratic behavior, and tolerance for hate speech render it wholly incompatible…” with the Administration’s own AI principles and points to OMB guidance that, “…expressly requires agencies to discontinue use of an AI system if proper risk-mitigation is not possible.” It argues that allowing such a system in federal use would violate OMB rules, erode public trust, and heighten risks of bias, misinformation, and governance failures.

“Grok is wildly ill-suited for government use. It has shown a reckless disregard for accuracy, a propensity for ideological-based meltdowns, and a documented record of spewing racist and antisemitic rhetoric,” said J.B. Branch, a Big Tech accountability advocate at Public Citizen. “xAI’s refusal to publish basic safety cards — an industry standard — only underscores the recklessness of pushing this tool into federal systems. The Trump administration claims to champion neutral, trustworthy AI, yet is fast-tracking a model that flagrantly violates its own principles. That hypocrisy, combined with Grok’s instability, would invite chaos, bias, and controversy into the very institutions that demand the highest standards of truth and fairness. No federal agency should be a proving ground for a product this unstable.”

“Elon Musk’s first foray into weaponizing the government against its people was using DOGE to access citizens’ private data, facilitate mass layoffs, shut down life-saving programs, and cost taxpayers billions,” said Portia Allen-Kyle, Interim Executive Director, Color Of Change. “If Grok is integrated into federal agencies, the results will be an even bigger disaster. xAI will be free to financially benefit from a model that will make Black people’s lives harder and more dangerous. We know the ways that the government’s use of AI has actively harmed us: innocent people being accused of crimes due to faulty facial recognition technology, Black taxpayers disproportionately audited, and data centers polluting the air and water in our communities. So the last thing we need is for our legal system to be influenced by an AI model steeped in racial prejudice. The idea that AI is neutral is not only false, but dangerous.”

The letter cites several issues that render LLMs such as Grok unfit for use at the federal level:

  • Ideological bias: Grok has demonstrated repeated instances of ideological bias in the form of racism and antisemitism, erroneous reasoning, and hate speech. Documented incidents include Holocaust denial, climate change denial, and the promotion of conspiratorial content. The lack of objectivity and ideological neutrality in LLMs such as Grok endangers institutional integrity.
  • Legal standard: The use of platforms such as Grok at the federal level is incompatible with the legal standard set by the Trump Administration regarding artificial intelligence (AI). Executive Order 14319, Preventing Woke AI in the Federal Government, states that all AI procured by the federal government must be truth-seeking and ideologically neutral. In addition, OMB Memorandum M-25-21 explicitly requires agencies to discontinue use of an AI system if proper risk mitigation is not possible.
  • Safety concerns: xAI has failed to do its due diligence regarding safety testing of Grok. There is also concern that Grok is vulnerable to external attacks, with AI experts calling it “easy to jailbreak.” Furthermore, xAI has not published safety information about Grok, a direct departure from AI industry norms and best practices.

###