EDITORIAL BOARD MEMO: FEC Must Protect Our Elections from Deepfakes 

Chair Cooksey has indicated the FEC will dismiss Public Citizen's petition to regulate political deepfakes — doing so would be catastrophic.

WASHINGTON, D.C. — The Federal Election Commission (FEC) is poised this week to block an effort to regulate political deepfakes before this fall’s elections, FEC Chair Sean Cooksey told Axios. The news comes over a year after Public Citizen petitioned the agency to issue a rule requiring that political deepfakes be labeled.
We encourage you to editorialize in favor of FEC action to protect our elections from deceptive and fraudulent deepfakes. The FEC should require that political deepfakes be labeled.

Deepfakes threaten election integrity.

Extraordinary advances in artificial intelligence now provide political operatives with the means to produce campaign ads and other communications containing computer-generated fake images, audio or video of candidates that appear lifelike, fraudulently misrepresenting what candidates say or do. Already, deepfake audio can be almost impossible to detect, images are extremely convincing and high-quality videos appear real to a casual viewer.

Political deepfakes can leave lasting impressions and are extremely difficult to rebut, because they require a victim to persuade people not to believe what they saw or heard with their own eyes and ears. A late-breaking deepfake — for example, falsely showing a candidate making a racist statement, or slurring their words, or accepting a bribe — could easily sway an election.

There is an easy solution to this problem: Require that deepfakes be labeled with language like: “This has been manipulated or generated by artificial intelligence.” This simple disclosure would inform viewers that what they are hearing or seeing is not real and would cure the deception.

Public Citizen petitioned the FEC to adopt a rule requiring such disclosures in July 2023. The FEC accepted comments on the rule, which overwhelmingly favored the proposal, through October. Public Citizen’s comments are available here.

Deepfakes are on the rise.

Deepfakes have already influenced elections around the world. A Wired Magazine database is here. Deepfakes are said to have impacted election results in Slovakia, damaged election integrity in Pakistan and spread disinformation in Argentina.

In the United States, the DeSantis campaign published a deepfake image showing former President Donald Trump embracing Dr. Anthony Fauci, a deepfake appeared to show Chicago mayoral candidate Paul Vallas condoning police brutality and a Super PAC posted deepfake videos falsely depicting then-congressional candidate Mark Walker saying his opponent is more qualified than he is.

A political consultant used a deepfake version of President Biden’s voice on robocalls discouraging New Hampshire voters from turning out for the state’s primary (and has now been indicted on felony charges). In July, Elon Musk posted a deepfake video of Vice President Harris, in violation of X/Twitter’s deepfake policy.

The FEC has authority to act and is failing to uphold its mandate.

While it would be ideal for Congress to pass new legislation to address deepfakes (and many bipartisan proposals are pending), the FEC has authority to act under current law. Federal law prohibits candidates for federal office, or their employees or agents, from fraudulently misrepresenting themselves as speaking or acting for or on behalf of another candidate or political party on a matter damaging to the other candidate or party.

The prohibition on fraudulent misrepresentation precisely describes deepfakes and provides ample authority for the FEC to act. Yet in its draft order declining to act on deepfakes, the FEC offers only a conclusory statement that it lacks authority.

The FEC also makes the bizarre claim that it should not act because, so far, there has been limited use of AI technology in elections. FEC Chair Sean Cooksey told Axios that the agency should “study how AI is actually used on the ground before considering any new rules.”

That’s equivalent to saying we should wait for a plane to crash before regulating jet engines.

Common sense says that those charged with protecting election integrity should act before foreseeable problems manifest, not afterwards. Twenty state legislatures have recognized this and acted to require labeling of political deepfakes.

It’s not too much to ask the nation’s election fairness agency to do the same.