Congress Must Reject Deregulation and Pass Enforceable Federal Protections
September 10, 2025
Chairman Ted Budd and Ranking Member Tammy Baldwin
U.S. Senate Subcommittee on Science, Manufacturing, and Competitiveness
U.S. Senate
253 Russell Senate Office Building
Washington, D.C. 20510
Re: Public Citizen’s Statement for the Record: AI’ve Got a Plan: America’s AI Action Plan
Dear Chairman Budd, Ranking Member Baldwin, and Members of the Subcommittee,
Public Citizen appreciates the opportunity to submit this statement for the record for the hearing titled, “AI’ve Got a Plan: America’s AI Action Plan.” As an organization deeply committed to protecting consumers, ensuring corporate accountability, and advancing public-interest technology policy, we have long warned that unregulated artificial intelligence (AI) tools expose consumers to a vast array of harms. We believe maintaining the United States’ global leadership in AI requires not just technological dominance, but also moral clarity, democratic oversight, and public accountability.
Founded in 1971, Public Citizen is a national nonprofit organization with more than 500,000 members and supporters across the country. We advocate for an accountable government, corporate transparency, and consumer protection in the public interest. Our recent work to support and help pass first-in-the-nation deepfake protections for children at the state level has shown just how urgently policymakers must respond to AI-enabled harms. We advocate for responsible AI guardrails that prevent tech companies from deploying systems that are untested, unsafe, and ripe for abuse.
We urge the Committee to reject provisions that condition federal benefits on states diluting their laws, politicizing technical standards, or weakening independent oversight. Instead, Congress should move forward—this session—with enforceable, bipartisan guardrails that protect consumers, workers, and democracy. Congress—not the Executive alone—must set durable, democratically accountable rules. Otherwise, the country will endure policy whiplash, litigation, and uncertainty that hurt both innovators and the public.
The Trump Administration’s AI Action Plan Deprioritizes American Consumers
The Trump administration’s AI Action Plan signals a dangerous shift toward aggressive deregulation, prioritizing corporate interests and geopolitical posturing over the safety and well-being of the American public. It promotes rapid deployment, weaker oversight, and politicized standards—while flirting with back-door preemption via funding conditions and agency posture. It weakens crucial consumer protections and agency enforcement, and politicizes the very technical standards meant to ensure fairness and safety.
One of the most troubling components of the administration’s plan is its attempt to undermine state-level AI regulations. The plan suggests that federal funding could be withheld from states with “burdensome” AI laws, and it expresses a desire for a single federal standard that would preempt these state efforts. Supporting these provisions would be short-sighted and would ignore recent history underscoring the American public’s backlash against these ideas.
Just a few months ago, in July, the Senate voted down this unpopular policy 99–1 after an “uproar” from state advocates. Senator Ted Cruz’s provision would have preempted state AI consumer protections, handcuffing states while leaving no meaningful federal regime in place. Simply put, the public interest is not served by handcuffing states through federal preemption or by outsourcing AI policy to executive action alone. Preemption is an attack on states’ ability to self-regulate and a direct threat to the only meaningful consumer protections currently in place.
For years, Congress has failed to pass comprehensive AI safeguards, leaving a void that state legislatures have responsibly filled. These state laws, often bipartisan, are addressing real and present harms to their constituents.
- Two-thirds of U.S. states have enacted bans on AI-generated deepfake pornography.
- Half of all states have laws against deepfake election disinformation.
- Tennessee’s ELVIS Act protects artists and gig workers from having their voices cloned without consent.
- Utah and Nevada have passed laws to protect citizens interacting with AI mental health tools from unsafe designs and misleading claims.
The administration’s renewed push to hamstring state efforts is a direct rebuke of this bipartisan will and serves no one’s interest but Big Tech, which seeks to operate without accountability. Instead, Congress should create a federal floor preserving state leadership while ensuring baseline national rights—just as Congress has done in other consumer-protection domains.
Americans are Subsidizing Big Tech Companies with Record Valuations
The argument that state-level regulation stifles innovation is an industry talking point with no basis in reality. The U.S. remains the global leader in AI, and American AI companies are booming. The industry is thriving under existing state laws.
- OpenAI, the maker of ChatGPT, is valued at over $500 billion.
- Anthropic, a leading competitor, is valued at $183 billion.
- Palantir’s valuation is $360 billion.
- Even newer companies like Perplexity have reached a $20 billion valuation.
These valuations make it clear: regulation has not been a barrier to growth. In fact, a predictable and enforceable regulatory framework can foster innovation by building public trust. American leadership will be defined not by who can develop the fastest, but by who can develop the safest and most trustworthy systems that align with democratic values.
Moreover, the administration’s AI plan, and its recent actions, underscore a disturbing pattern of prioritizing the interests of a select group of tech billionaires over the well-being of the American public. The administration’s close courtship of tech CEOs—including the White House dinner with Big Tech leaders last week—raises concerns about industry access and influence at the precise moment agencies and Congress must scrutinize safety, competition, and consumer impacts. The public deserves confidence that policy is not being designed behind closed-door events that privilege corporate executives over the average American.
The Trump AI Action Plan fast-tracks the construction of energy-intensive data centers by dismantling key environmental protections, a prime example of the administration’s preference for Wall Street over Main Street. This cozy relationship with Big Tech is deeply concerning, as the American people are already facing the consequences of this unchecked growth.
The massive demand for electricity from AI data centers is straining our nation’s power grid, leading to skyrocketing electricity bills for everyday families. PJM—the nation’s largest grid—warns of tight capacity and 20%+ bill surges in some areas with heavy data-center growth. A federal strategy that fast-tracks siting and subsidizes power for hyperscale AI facilities without rigorous review or cost-allocation fairness risks sticking families and small businesses with the tab. Meanwhile, these same tech companies are rewarded with sweetheart deals and expedited permitting, shifting the financial burden and environmental risks onto communities.
Congress should require independent grid-impact assessments, community input, and cost-causation principles for data-center development, rather than overriding local and state safeguards in the name of “streamlining.”
AI Companies Continue Unleashing Unregulated Products Harming the American Public
This is not an abstract debate. Companion and chatbot systems and generative tools are already producing real-world harm, particularly to vulnerable users. These negative impacts of unregulated AI are more evident than ever, given recent revelations including:
- A recent lawsuit alleges that an OpenAI chatbot provided a 16-year-old with guidance on how to take his own life after earlier attempts, ultimately resulting in his death.
- Investigations show popular companion platforms exposing minors to sexual, self-harm, and drug content—sometimes initiating risky exchanges.
- Surveys report that a large majority of U.S. teens have tried AI companions; about a third have felt uncomfortable with bot behavior—evidence that safety baselines are not being met.
- Another legal action claims a Character.AI chatbot encouraged a child to kill their parents.
These are not isolated incidents but predictable outcomes of a “move fast and break things” approach to technology. A recent Common Sense Media report found that 72% of U.S. teens have used an AI companion. The same poll showed a whopping 93% of parents have concerns about their children using AI.
Congress must not make the same mistake it made with social media, where a decade of inaction led to a crisis of disinformation and teen mental health. The time for voluntary guidelines and industry self-regulation is over. The American people deserve an AI future rooted in safety, fairness, and accountability—not a handout to a select few billion-dollar companies.
Public Citizen’s Recommendations
To truly lead in the age of AI, Congress must reject any language that aims to preempt state AI regulation and instead pursue a comprehensive federal framework that creates a nationwide floor of consumer rights and protections. We urge this subcommittee to:
- Reject any AI liability shield, regardless of its duration, as well as any federal proposal that would preempt stronger state laws.
- Pursue common-sense guardrails that include clear labeling, traceability, and independent audits for high-risk AI systems.
- Require pre-deployment safety testing, independent audits, crisis-intervention protocols, and parental controls for AI companions.
- Mandate content labeling and interoperable provenance/watermarking to protect elections, markets, and victims of intimate-image abuse.
- Demand consumer protection agencies investigate and respond to AI-enabled harms including scams, fraud, and exploitation.
- Ensure transparency for automated employment decisions; bar discriminatory uses; preserve collective-bargaining rights in automation.
- Pass measures to address the environmental impact of AI development, ensuring that communities are not forced to subsidize corporate profits.
This subcommittee can champion a legislative framework that prioritizes the public interest. America can lead in AI and protect its people, but only if Congress legislates a durable federal floor, empowers independent oversight, and preserves state authority to respond to fast-moving risks. The path forward is not deregulation at any cost. It is rules that channel innovation toward the public good—rooted in democratic values, consumer safety, and shared prosperity.
Respectfully Submitted,
/s/
J.B. Branch
Big Tech Accountability Advocate
Public Citizen