Public Citizen Urges Senate to Regulate Deepfakes and Reject AI Deregulation
Chairwoman Marsha Blackburn and Ranking Member Amy Klobuchar
Subcommittee on Privacy, Technology, and the Law
U.S. Senate
Dirksen Senate Office Building, Room 226, Washington, D.C. 20510
Re: Public Citizen’s Statement for the Record: The Good, the Bad, and the Ugly: AI-Generated Deepfakes in 2025
Dear Chairwoman Blackburn, Ranking Member Klobuchar, and Members of the Subcommittee,
Public Citizen welcomes the opportunity to submit this statement for the hearing on The Good, the Bad, and the Ugly: AI-Generated Deepfakes in 2025. We appreciate the Subcommittee’s leadership on artificial intelligence (AI), a transformative technology that offers immense promise and poses profound risks. Public Citizen is a national public interest organization with more than 500,000 members and supporters. Since our founding in 1971, we have worked to ensure that government and corporate power are transparent, accountable, and responsive to the needs of the public. In the context of emerging technologies, Public Citizen advocates for policies that promote innovation while safeguarding the public from potential harms. We believe that technological advancements should serve the broader public good, and that includes the responsible development and deployment of AI.
As AI-generated synthetic media becomes increasingly indistinguishable from authentic content, deepfakes pose a unique and urgent challenge to public trust, civil rights, consumer safety, and democratic institutions. While the United States leads in developing advanced technologies, especially within AI, we are woefully behind in federal AI protections for consumers. The U.S. must maintain its innovative leadership, but we must also lead in developing the laws and safeguards necessary to ensure those technologies are used ethically, transparently, and in service of the public good. Further, while federal AI regulations take shape, Congress must resist the calls, from within this very chamber, to preempt state AI laws—the only statutes currently protecting consumers.
Deepfakes: A Crisis of Authenticity, Safety, and Accountability
The rise of AI-generated deepfakes represents one of the most urgent and destabilizing threats posed by artificial intelligence. In fact, the Department of Homeland Security has highlighted deepfakes as one of the key threats to the United States. As synthetic media becomes more realistic, accessible, and rapidly disseminated, the very notion of what can be trusted is under assault. Deepfakes blur the line between fact and fabrication, eroding public trust, enabling fraud, facilitating abuse, and undermining the integrity of our democratic institutions.
As with many insidious technologies, women and children have already borne the brunt of the darkest consequences. Deepfake pornography has proliferated at alarming rates, weaponizing AI to create and distribute sexually explicit images and videos of real people without their consent. Earlier this month, Public Citizen warned that Elon Musk’s Grok was creating nonconsensual AI deepfakes of women through a “remove her clothes” feature, which turned any innocent photo of a woman into a sexualized, assaultive image. For minors, the risk is even more dire: AI-generated child sexual abuse material has surfaced on major platforms, a grim illustration of how existing laws are not keeping pace with generative tools.
The Senate must be applauded for its leadership in passing the Take It Down Act. The act originated in the Senate and moved quickly through in bipartisan fashion, showing how efficiently senators can pass commonsense, responsible AI legislation. However, it also bears noting that the Take It Down Act does not provide a private right of action for victims, instead leaving enforcement to the Federal Trade Commission. This means that when platform moderation fails, many victims must rely on state AI regulations for recourse and legal accountability.
The harms extend beyond individual abuse. Deepfakes have been used to impersonate public officials, sway voters with false election messaging, and defraud consumers with synthetic voices mimicking loved ones or trusted institutions. As the technology advances, malicious actors, both foreign and domestic, will find it easier and cheaper to disrupt elections, incite violence, and sow chaos in ways that are difficult to detect and nearly impossible to stop.
In the workplace, artists, musicians, journalists, and other creators are seeing their voices, faces, and likenesses cloned and exploited for profit without compensation or control. In courts, the gold standards of credibility, audio and video evidence, are now suspect, jeopardizing judicial outcomes. In healthcare and emergency services, the risk of synthetic impersonation can delay or distort life-saving responses.
This is not a problem we can afford to ignore or delay. Deepfakes are a threat to American democracy itself. The speed and scale at which they can be produced and spread means that misinformation can now be entirely fabricated by machines, indistinguishable from reality and immune to traditional content moderation strategies. Without a coordinated federal response, the deepfake crisis will deepen, and the victims will multiply.
Congress must act now to protect the public’s right to safety, dignity, and a shared reality anchored in truth. Public Citizen has proudly assisted several states in pursuing legislation regulating AI deepfakes in elections and in pornographic content, including AI-generated child sexual abuse material and intimate AI-generated content targeting people of all ages. We stand ready to assist this Subcommittee in its pursuit of deepfake regulation, including with model legislation we have drafted that has already assisted several state legislatures.
States are Leading Where Congress has Not
Several senators on this very committee have voiced a desire for Congress to preempt state laws on AI, including state consumer protections that apply to deepfakes. The historical record is clear: state legislatures have stepped up where Congress has stalled. It is a sobering fact that undoing these state laws would inflict imminent harm on the very people committee members are tasked with representing.
Preemption of state AI laws would be an open invitation for Big Tech to operate without accountability. To provide the Subcommittee with a more concrete understanding of how federal preemption would impact state deepfake protections, I offer the following illustrative examples:
- Two-thirds of U.S. states have enacted bans on AI-generated deepfake pornography, including a private right of action, which the Take It Down Act lacks.
- Half of U.S. states have passed laws against deepfake election disinformation.
- Colorado passed a comprehensive AI Act establishing transparency and consumer protections.
- Tennessee’s ELVIS Act protects against strangers cloning one’s voice and profiting off it, which is an essential safeguard for artists, gig workers, and everyday users.
- New York has adopted an AI Bill of Rights that safeguards civil liberties.
- Utah protects users interacting with mental health AI tools from unsafe design.
- California, a global tech hub, has pioneered laws requiring content disclosures, regulating training data, and protecting children on social media.
Senator Ted Cruz has been quite vocal about his desire to impose a “10-year pause” on state AI consumer protections, including those covering deepfakes. The above are just a few of the state protections that constituents would lose should Senator Cruz’s policy of preempting state AI protections take hold.
The False Choice Between Consumer Protection and American Innovation
One recurring claim from those who oppose any regulation of the AI industry is that it would stifle innovation. This is an industry talking point grounded in a fabricated reality. In short, the evidence shows it to be simply wrong.
Current state regulations have allowed American AI companies to flourish. Indeed, the U.S. is currently the global leader in the AI industry. The AI industry is thriving under existing laws, and current valuations of the leading U.S. AI companies underscore this truth:
- OpenAI’s most recent valuation at $300 billion.
- Scale AI’s most recent valuation at $25 billion.
- Anthropic’s most recent valuation at $61.5 billion.
- Palantir’s valuation at $281 billion.
- Perplexity, which entered the market earlier this year, valued at $14 billion.
AI companies are booming under existing state laws and America is leading the world in AI. If regulations were truly unmanageable, the industry would not be surging. Moreover, many of the best practices to protect our country on deepfakes have originated at the state level. Congress therefore should look to the states for guidance. By embracing these best practices and advancing responsible AI legislation, Congress can protect against the real threat that AI deepfakes pose for our democracy.
Public Citizen’s Recommendations
Public Citizen shares this Subcommittee’s concerns on AI deepfakes and strongly supports comprehensive regulation of AI-generated deepfakes. The harms are real, the threats are growing, and the tools to deceive are advancing faster than the laws meant to contain them. Deepfake regulation is an urgent matter requiring this Subcommittee’s ongoing leadership. Congress must act to ensure clear labeling, meaningful accountability, and robust protections against synthetic fraud, abuse, and manipulation. This is not a partisan issue. It is a matter of public safety, democratic integrity, and basic human dignity.
Public Citizen supports pursuing innovation with integrity. We believe this moment calls for leadership and regulation of the AI industry so that it can thrive alongside our communities. Therefore, we support comprehensive, responsible AI regulation and enforceable rules that:
- Reject any language aimed at preempting state AI regulation and instead pursue comprehensive federal AI regulation creating a nationwide floor of consumer rights and protections based on best practices.
- Require clear labeling of all AI-generated content, including deepfakes and synthetic media.
- Mandate watermarking and traceability mechanisms to preserve evidentiary integrity and support enforcement.
- Ban surveillance-based advertising and manipulative personalization that exploits user data and erodes autonomy.
- Enact civil rights protections to prevent algorithmic discrimination in housing, employment, education, and beyond.
- Uphold worker protections, including transparency around AI use in the workplace and the right to collectively bargain over automation and algorithmic decision-making.
- Safeguard vulnerable populations, including children, people with disabilities, older adults, and those with mental health conditions, from exploitative AI systems and unsafe chatbot design.
- Require independent audits, public impact assessments, and disclosure of training data sources, ensuring accountability throughout the AI development lifecycle.
These proposals strike the right balance between regulatory leadership and continued American innovation. They lay the groundwork for an AI future that is democratic, equitable, and rooted in human dignity. By prioritizing public protection and reinforcing trust, they ensure that technological advancement remains in service to the American people and not at their expense.
Conclusion
This Subcommittee must continue to be the steward of public accountability on AI deepfakes. When voters see fake videos of candidates, hear fabricated voices of loved ones, or fall victim to deepfake financial scams, they will not turn to tech CEOs for help. They will turn to their elected representatives. Congress cannot stand silent. The harms perpetrated by AI deepfakes are unfolding now, in real time, with real consequences. We urge this Subcommittee to pursue comprehensive deepfake protections and reject any proposal that would eliminate critical safeguards already in place.
Respectfully submitted,
J.B. Branch
Technology Accountability Advocate
Public Citizen
JBranch@citizen.org