
Comment on OSTP’s Request for Information: Identifying Federal Statutes and Regulations that “Hinder AI Innovation”


Docket No. OSTP-TECH-2025-0067
Submitted by: Public Citizen
Date: October 27, 2025

Introduction

Public Citizen appreciates the opportunity to submit comments in response to the Office of Science and Technology Policy’s Request for Information (RFI) on “identifying existing Federal statutes, regulations, agency rules, guidance, forms, and administrative processes that unnecessarily hinder the development, deployment, and adoption of artificial intelligence (AI) technologies.”

Founded in 1971, Public Citizen is a national nonprofit consumer advocacy organization with more than 1 million members and supporters. We work to ensure that the government serves the public—not corporate—interest. Our advocacy on AI spans consumer protection, civil rights, and democratic accountability. We have testified before Congress, submitted rulemaking comments across agencies, and published analyses warning that premature deregulation of AI would endanger consumers, workers, and democratic institutions.

I. The Trump Administration’s Deregulatory AI Agenda Harms Consumers and Tilts Power Toward Corporations

The current administration’s America’s AI Action Plan frames regulatory safeguards and democratic oversight as “barriers” to innovation. That framing is false and dangerous. What hinders innovation is public distrust born from unregulated harms, not the existence of commonsense guardrails. A recent Pew Research Center survey found that 62 percent of U.S. adults lack confidence in the government’s ability to regulate AI, while 59 percent are skeptical of industry efforts to do so responsibly. Together, these numbers reflect growing public distrust—both in government oversight and in Big Tech’s commitment to the public good. Without clear and enforceable safeguards, this distrust will only deepen, making consumers wary and slowing adoption. Strong regulations, by contrast, can build the public confidence needed to ensure AI is governed safely and transparently.

Instead of identifying unnecessary barriers, the Administration should recognize that the United States continues to pursue an AI-driven agenda without any federal AI governance structures in place to protect individuals, workers, communities, and the environment. There is no federal liability framework specific to AI, there are no enforceable AI safety standards, and some in Congress continue to push to preempt the few state AI regulations that protect consumers. It is misleading to claim that federal rules are holding AI back. Rather, the absence of clear, enforceable protections has allowed powerful firms to deploy untested, unsafe, and manipulative AI systems at scale.

By inviting agencies to suspend or waive statutory and regulatory obligations, the Administration’s approach serves corporate interests at the expense of the public. It undermines environmental reviews for data-center construction, weakens civil-rights enforcement, and attempts to preempt state laws that fill the current vacuum of federal oversight.

This deregulatory orientation reflects a pattern of policymaking that treats the American people as test subjects rather than rights-holders. OSTP should reject any proposals that (1) weaken agency rulemaking authority, (2) displace state consumer protections, or (3) conflate “innovation” with exemption from responsibility and accountability.

II. Existing Federal Regulations Should Be Preserved and Enforced

Because no comprehensive federal AI framework exists, existing consumer-protection, civil-rights, and environmental statutes are the only meaningful safeguards currently in force.

These statutes are foundational to the protection of the American public. Weakening their reach under the guise of “AI-specific flexibility” would expose the public to new and unmitigated harms. Moreover, the claim that regulation chills innovation is contradicted by the marketplace itself. The U.S. remains the world leader in AI, with OpenAI, Anthropic, Palantir, and others reaching record-breaking valuations under current regulatory structures.

Predictable, enforceable rules foster trust and investment by providing clarity for responsible companies and accountability for reckless ones. Public Citizen therefore urges OSTP to affirm that no existing consumer-protection, labor, environmental, or civil-rights law should be suspended or weakened for AI “experimentation.” Agencies should interpret, apply, and enforce these statutes robustly to ensure algorithmic accountability, transparency, and fairness.

III. New Federal Standards Are Needed to Protect Consumers, Workers, and Democracy

There is no doubt that the Trump administration and Congress want the United States to lead the world in AI innovation. However, the deregulatory agenda proposed in the AI Action Plan endangers the public. Evidence already shows that unregulated AI has caused measurable harm.

These harms underscore the need for new, enforceable federal standards, including:

  1. A Federal AI Safety Framework: requiring pre-deployment testing, independent audits, and ongoing risk assessments for high-impact systems.
  2. Transparency and Traceability: mandating clear labeling of AI-generated content and provenance metadata to protect elections, consumers, and artists.
  3. Protection for Workers from Harmful AI Use Cases: regulating the use of AI in the workplace to protect workers from invasive employer surveillance and to ensure worker safety.
  4. Protection for Minors and Vulnerable Users: including prohibitions on sexualized, manipulative, or emotionally exploitative AI companions.
  5. Environmental and Community Safeguards: requiring grid-impact assessments, community consultation, and equitable cost-allocation for data-center development.
  6. Preservation of State Authority: ensuring that federal standards establish a floor, not a ceiling, so states can address emergent harms swiftly.

These measures would not “hinder” innovation. They would sustain it by building the public trust necessary for widespread adoption: ensuring the technology is safe to use and that companies are held responsible when AI-related harms occur.


Conclusion

Public Citizen urges OSTP to pursue policies that champion the American people, not Big Tech and its lobbyists. America’s leadership in artificial intelligence must be measured not by the reckless deployment of technologies, but by how steadfastly we protect the people they affect. We therefore call on OSTP to reject any effort to waive or preempt existing consumer, civil-rights, labor, or environmental protections; to affirm that responsible AI governance requires enforceable guardrails at both the federal and state levels; and to establish new standards that promote transparency, accountability, and public trust. Progress should not be measured solely in profit margins or global rankings, but also by whether we build an AI future rooted in fairness, safety, and democracy.


Respectfully submitted,

J.B. Branch
Big Tech Accountability Advocate
Public Citizen