Federal Preemption of State AI Laws Is Dangerous and Reckless
Chairman Gus Bilirakis and Ranking Member Jan Schakowsky
Subcommittee on Commerce, Manufacturing, and Trade
U.S. House of Representatives
2322 Rayburn House Office Building, Washington D.C. 20515
Re: Public Citizen’s Statement for the Record: AI Regulation and the Future of U.S. Leadership
Dear Chairman Bilirakis, Ranking Member Schakowsky, and Members of the Subcommittee,
Public Citizen welcomes the opportunity to submit this statement for the hearing on AI Regulation and the Future of U.S. Leadership. We appreciate the Subcommittee’s engagement on artificial intelligence (AI), a transformative technology that offers immense promise and poses profound risks. Public Citizen is a national public interest organization with more than 500,000 members and supporters. Since our founding in 1971, we have worked to ensure that government and corporate power are transparent, accountable, and responsive to the needs of the public. In the context of emerging technologies, Public Citizen advocates for policies that promote innovation while safeguarding the public from potential harms. We believe that technological advancements should serve the broader public good, including through the responsible development and deployment of artificial intelligence.
This Committee has rightly recognized that how we regulate AI today will shape our economy, democracy, and society for decades to come. Earlier this month, during deliberations on the budget reconciliation package, members engaged in a spirited debate over the future of AI oversight, weighing the need to remain globally competitive against the imperative to protect the public from preventable harm. That conversation continues today, and Public Citizen believes it is vital that innovation be guided by clear, enforceable rules that reflect the values of accountability, safety, and democratic governance.
Today’s hearing purports to explore how the United States can maintain leadership in AI. But true leadership requires more than economic dominance. It requires moral clarity, democratic oversight, and public accountability. That means rejecting a deregulatory race to the bottom disguised as innovation. It means prioritizing consumer protections for all of your constituents over the profits of a few tech billionaires. And most urgently, it means abandoning the dangerous and sweeping calls for federal preemption of state AI laws.
Preemption is a Decade-Long Deregulatory Blackout
Several representatives on this very committee have voiced a desire for Congress to preempt state laws on AI. The call is not incremental, and it is not a placeholder for better federal legislation. It is a permission slip for unchecked harm. Gutting existing state protections and preventing the creation of new ones, no matter how severe the resulting damage, is simply reckless. The historical record is clear: state legislatures have stepped up where Congress has stalled. It is a sobering fact that undoing these state laws would cause imminent harm to the very people committee members are tasked with representing.
Preemption of state AI laws would be an open invitation for Big Tech to operate without accountability in areas that include civil rights, mental health, data privacy, fraud, public safety, and child protection. At a time when generative AI is accelerating at breakneck speed, producing deepfake election material, AI-generated child sexual abuse material, and chatbots that encourage self-harm, Congress should be protecting the public, not shielding industry.
States are Leading Where Congress has Not
Federal lawmakers have had years to establish meaningful AI safeguards. And yet, Congress has failed to enact even the most basic protections. In the vacuum left by this inaction, states have done what federal leaders would not: they’ve protected their constituents from real-world harms. These state laws are bipartisan, pragmatic, and urgently needed.
To provide the Subcommittee with a more concrete understanding of the scope and significance of state action, I offer the following illustrative examples:
- Two-thirds of U.S. states have enacted bans on AI-generated deepfake pornography.
- Half of U.S. states have passed laws against deepfake election disinformation.
- Colorado passed a comprehensive AI Act establishing transparency and consumer protections.
- Tennessee’s ELVIS Act protects individuals from having their voices cloned and exploited for profit, an essential safeguard for artists, gig workers, and everyday users.
- North Dakota requires healthcare decisions to be made by doctors, not automated triage tools.
- New York has adopted an AI Bill of Rights that safeguards civil liberties.
- Utah protects users interacting with mental health AI tools from unsafe design.
- California, a global tech hub, has pioneered laws requiring content disclosures, regulating training data, and protecting children on social media.
- Kentucky has laws that protect citizens from AI discrimination by state agencies, mandating transparency and due process in AI-driven decision-making.
These are not theoretical harms. People have been run over by autonomous vehicles and dragged because the vehicle did not register the person trapped underneath it. Children have killed themselves after encouragement from AI chatbots. Parents have been physically threatened by teenagers whom chatbots encouraged to kill them. Children have been exposed to sexual conversations with AI chatbots. Stock markets have been rattled by AI trading agents. Workers have been surveilled. People have been wrongfully arrested because of faulty facial recognition matches. Members of Congress have been mistaken for criminals. Consumers have been defrauded by fake human avatars. Women have been killed after algorithms claimed an abusive spouse did not pose a threat. These are the harms that await American consumers if Congress pursues preemption of state AI laws.
Fear-Based Tactics are Not Sound Policy but Distractions from True Leadership
Two recurring themes have emerged from proponents of deregulation. First, they claim that state-level regulation is stifling AI innovation. Second, they argue that the only path to U.S. dominance in AI is through sweeping deregulation. Both assertions follow a familiar playbook of fear and false choices. The evidence shows that these claims are not only misleading but flatly untrue.
Current state regulations have not stifled innovation. They have coexisted with it. In fact, the soaring valuations of the leading U.S. AI companies make one thing clear: the industry is thriving under existing laws and policies:
- OpenAI’s most recent valuation at $300 billion.
- Scale AI’s most recent valuation at $25 billion.
- Anthropic’s most recent valuation at $61.5 billion.
- Palantir’s valuation at $281 billion.
- Or one of the newest AI companies, Perplexity, which entered the market earlier this year as the “AI search engine” and is already valued at $14 billion.
In short, AI companies are booming under existing state laws. Some of the most successful AI startups in the world operate in California, New York, and Colorado, the states with the most comprehensive AI and data privacy regulations. Let us be clear: America is leading the world in AI. If state regulations were truly unmanageable, the industry would not be surging.
When the alarmism about stifled innovation falls on deaf ears, opponents of state AI regulation fall back on a manufactured “AI arms race” with China. This constant “AI arms race” framing serves to justify policy decisions that would otherwise be indefensible. It is an excuse to silence dissent, dismiss scrutiny, and trade away civil rights. But the public should not have to sacrifice transparency, fairness, or the rule of law in the name of a manufactured rivalry.
The suggestion that, unless we deregulate AI, the U.S. will “fall behind China” is both false and offensive. There is no evidence that consumer protection and global competitiveness are mutually exclusive. In fact, leadership in the 21st century will require building safe, trustworthy systems that align with democratic values, not abandoning those values in pursuit of speed.
Rather than fueling unwarranted alarm, Congress should look to the states for guidance. Lawmakers have a clear opportunity to build on the thoughtful, bipartisan measures already enacted at the state level. By embracing these best practices and advancing comprehensive, responsible AI legislation, Congress can fulfill its obligation to serve the public interest — as each member pledged to do upon taking office.
Preemption Doesn’t Create a National Standard, It Creates a Vacuum
Some members of this Committee have suggested that preemption now is acceptable because Congress will “get to” a federal AI law later. This is baffling. The same members who have punted on passing responsible AI legislation are willing to dismantle the only AI consumer protections in place in exchange for what, exactly? There is no bill. There is no timeline. There is no plan. Taking away rights while offering only vague assurances is irresponsible at best and deceptive at worst.
What is offered instead is a regulatory black hole. Companies could avoid lawsuits. They could sidestep state attorneys general attempting to provide consumers with any semblance of oversight. Victims of deepfake pornography would have no meaningful path to accountability. Attorneys would be powerless to represent their clients.
The collective memory of Congress cannot be this short. For years, Congress deferred action on social media, and states were slow to respond. Now the public lives with the consequences: rampant disinformation, teen mental health crises, data privacy violations, and election interference. AI is exponentially more powerful. This Committee cannot afford to make the same mistake.
Public Citizen’s Recommendations
Public Citizen supports innovation. But we do not support pursuing innovation without integrity. Real AI leadership requires Congress to:
- Reject any bill that includes language aimed at preempting state AI regulation and instead pursue comprehensive federal AI regulation creating a nationwide floor of consumer rights and protections based on best practices.
- Require clear labeling of all AI-generated content, including deepfakes and synthetic media.
- Mandate watermarking and traceability mechanisms to preserve evidentiary integrity and support enforcement.
- Ban surveillance-based advertising and manipulative personalization that exploits user data and erodes autonomy.
- Enact civil rights protections to prevent algorithmic discrimination in housing, employment, education, and beyond.
- Uphold worker protections, including transparency around AI use in the workplace and the right to collectively bargain over automation and algorithmic decision-making.
- Safeguard vulnerable populations, including children, people with disabilities, older adults, and those with mental health conditions, from exploitative AI systems and unsafe chatbot design.
- Require independent audits, public impact assessments, and disclosure of training data sources, ensuring accountability throughout the AI development lifecycle.
These are not anti-innovation proposals. They are the foundation of a democratic, dignified, and equitable AI future. They protect the public, reinforce trust, and ensure that technological progress serves people — not the other way around.
Conclusion
Stripping states of their ability to protect their own residents would preempt bipartisan laws already in place. It would hand the future of AI to a handful of unaccountable corporations. This is not leadership. It is abdication, and it is reckless.
This Committee must resist the urge to sacrifice public protections on the altar of speculative growth. It must defend the right of states to protect their citizens, even as some in the halls of Congress would rather shield Big Tech. It must buttress the guardrails built by the states, not erase them.
America is not defined by lobbyists. It is defined by the values of the American people. State after state, the people have spoken: they want the AI protections they have in place. This Committee must have the courage to support a strong federal-state partnership.
We urge you to maintain protections for American consumers. The stakes could not be higher.
Respectfully submitted,
J.B. Branch
Technology Accountability Advocate
Public Citizen
JBranch@citizen.org