Trump Administration’s Reckless AI Agenda—Just Another Corporate Giveaway

By Ilana Beller, JB Branch, Rick Claypool, Tyson Slocum, and Savannah Wooten

Today, the Trump Administration announced executive actions on artificial intelligence (AI) that prioritize corporate profits over public safety. The administration plans to give billions to Big Tech so it can burn even more dirty energy, release untested products, and rush into the AI era without accountability to the American public. Our petition calls on Congress to ensure that AI advancements deliver first for the American people, not for Big Tech profits.

At Public Citizen, we believe AI should serve the public—not exploit it. From our earliest investigations into the unchecked development of consumer-facing chatbots to our reports on AI weapons systems and AI-driven energy infrastructure, our work has consistently exposed the risks of unregulated artificial intelligence and the corporations pushing dangerous technologies with little regard for consequences. 

In our 2023 report, Chatbots Are Not People, we sounded the alarm on the human-like design features of generative AI systems that mislead users and invite manipulation. In 2024, our AI Joe report detailed the Pentagon’s race to deploy AI-enabled weaponry, a move we warned could have deadly, irreversible consequences. In 2025, we testified before Congress, underscoring the importance of ensuring the U.S. energy supply works in the interest of the public and doesn’t financially burden taxpayers and harm local communities.

We’ve backed it up with action. Public Citizen has championed multiple pieces of model legislation that would:

  • Regulate the use of deceptive and fraudulent deepfakes in election communications. Public Citizen has worked directly with legislators in more than 40 states to help bring this legislation forward, and 28 states have now enacted it, most with strong bipartisan support.
  • Outlaw nonconsensual intimate deepfake images. As a result of our work with legislators in dozens of states, 45 states now have some form of nonconsensual intimate deepfake regulation in place.
  • Require that consumers are informed when they are interacting with a chatbot. Legislation on this topic would ensure that a user is not led to believe, for example, that they are speaking with a licensed professional, such as a medical expert, when in fact they are communicating with an AI-driven system.

Additionally, when Big Tech tried to sneak a 10-year federal moratorium on state AI regulation into the federal budget bill, we fought back and won, exposing how this corporate impunity would have blocked local protections and left Americans vulnerable to untested, unsafe AI products.

Now, the Trump Administration is once again siding with powerful corporations over the public. Its latest announcement is a gift to Big Tech, one that guts oversight and fast-tracks risky AI products for public use. It's a move that ignores mounting evidence of AI harms, including discrimination, manipulation, exploitation, and privacy violations.

Public Citizen supports a future where AI is treated like any other industry: one where products must prove they are safe, truthful, and accountable before reaching the public. That’s not radical—it’s responsible. These are the same common-sense protections that govern cars, food, and medicine.

Public Citizen supports federal legislation that would:

  • Establish product liability, transparency, and safety requirements for AI systems.
  • Hold AI developers accountable when their technology causes public harm or violates basic safety standards.
  • Require clear labeling of all AI-generated content.
  • Mandate watermarking and traceability for synthetic media.
  • Ban surveillance-based advertising and manipulative personalization tactics.
  • Enact civil rights protections to guard against algorithmic bias in housing, hiring, education, and more.
  • Uphold worker rights through transparency about AI use in the workplace and the right to collectively bargain over automation.
  • Safeguard children, older adults, people with disabilities, those with mental health conditions, and other vulnerable communities from exploitative AI design.
  • Protect patients from adverse consequences of AI use in the healthcare system.
  • Require independent audits, public impact assessments, and disclosure of training data across the AI development lifecycle.
  • Disallow use of emergency authorities to overrule federal, state, and local public health and safety laws for the siting and construction of AI data centers and associated energy infrastructure.
  • Subject AI data centers to energy regulatory oversight to ensure they do not harm bulk power market reliability or impact just and reasonable rates. 
  • Strengthen antitrust enforcement to prevent corporate consolidation and control of generative artificial intelligence technologies.
  • Protect our financial markets from AI models that exploit investors. 
  • Codify “human in the loop” requirements for the development and deployment of autonomous weapons, emphasizing human control over weapons use by the U.S. military and police. 

The stakes are too high to let AI remain lawless. The time to act is now—before the next wave of reckless AI releases harms consumers, workers, and communities across the country.