Political Deepfakes Count as ‘Fraudulent Misrepresentation,’ Public Citizen Tells FEC
WASHINGTON, D.C. – The Federal Election Commission (FEC) should clarify in its regulations that political deepfakes are covered under the law against fraudulent misrepresentation, Public Citizen said in a comment submitted to the agency today.
“Deepfakes pose a significant threat to democracy as we know it,” said Robert Weissman, president of Public Citizen and co-author of the comment. “The FEC must use its authority to ban deepfakes or risk being complicit with an A.I.-driven wave of fraudulent misinformation and the destruction of basic norms of truth and falsity.”
In August, the FEC voted unanimously to advance a Public Citizen petition requesting rulemaking to address the anticipated onslaught of deepfakes in the 2024 election. After the comment period closes on Oct. 16, the FEC will determine whether to take up a final rule. Members of both chambers of Congress have circulated letters supporting the aims of Public Citizen’s petition and asking the FEC to begin the rulemaking process.
“An unregulated and undisclosed Wild West of A.I.-generated campaign ads will further erode the public’s confidence in the integrity of the electoral process,” said Craig Holman, government affairs lobbyist for Public Citizen and co-author of the comment. “If voters cannot discern fact from fiction in campaign messages, they will increasingly doubt the value of casting a ballot – or the value of ballots cast by others.”
Rapid advances in artificial intelligence (A.I.) have given political operatives the means to produce campaign ads with computer-generated fake images, audio, or video of candidates that appear genuine, fraudulently misrepresenting what candidates have said or done. When A.I.-generated content depicts a candidate saying or doing things they never did – for the explicit purpose of damaging the targeted candidate’s reputation – these ads are known as deepfakes. Deepfakes are currently legal in federal elections and in most states. They are not even subject to a disclaimer requirement noting that the depicted content is fabricated.
Campaigns are already using deepfakes in their communications. The presidential campaign of Florida Gov. Ron DeSantis, for example, posted deepfake images of former President Donald Trump hugging Dr. Anthony Fauci. The hug never happened. And in a recent mayoral election in Chicago, candidate Paul Vallas complained that A.I. technology was used by a fake news outlet on Twitter to clone his voice in a way that made him appear to condone police brutality. He never made the remarks, and Vallas lost the race.
Deceptive deepfakes fit squarely within the parameters of the FEC’s existing statutory authority, Public Citizen’s comment argues. The comment provides a detailed analysis of the FEC’s authority to prohibit “fraudulent misrepresentation” and how and why it should be applied to deepfakes.