Petition for Rulemaking to Clarify that the Law Against ‘Fraudulent Misrepresentation’ Applies to Deceptive AI Campaign Ads
Petition for Rulemaking: 52 U.S.C. §30124
Lisa J. Stevenson
Office of General Counsel
Federal Election Commission
1050 First Street NE
Washington, DC 20463
Dear Ms. Stevenson:
Public Citizen respectfully submits this petition for rulemaking pursuant to 11 C.F.R. §200.1 et seq. Extraordinary advances in artificial intelligence (AI) now give political operatives the means to produce campaign ads featuring computer-generated fake images of candidates that appear authentic, fraudulently misrepresenting those candidates. Public Citizen requests that the Federal Election Commission clarify when and how 52 U.S.C. §30124 ("Fraudulent misrepresentation of campaign authority") applies to deliberately deceptive AI-generated campaign ads.
Background
Generative artificial intelligence (AI) and deepfake technology, a type of AI used to create convincing image, audio and video hoaxes, are evolving very rapidly. Every day, it seems, new and increasingly convincing deepfake audio and video clips are disseminated, including, for example, an audio fake of President Biden, a video fake of the actor Morgan Freeman and an audio fake of the actress Emma Watson reading Mein Kampf.
The quality of deepfakes is impressive and capable of fooling listeners and viewers, though careful examination can generally still reveal flaws that expose them as fake.
But as the technology continues to improve, it will become increasingly difficult and, perhaps, nearly impossible for an average person to distinguish deepfake videos and audio clips from authentic media. It is an open question how well digital technology experts will be able to distinguish fakes from real media.
The technology will almost certainly give political actors the opportunity to deceive voters in ways that extend well beyond any First Amendment protections for political expression, opinion or satire. A political actor may well be able to use AI technology to create a video that purports to show an opponent making an offensive statement or accepting a bribe. That video may then be disseminated with the intent and effect of persuading voters that the opponent said or did something they did not say or do. The crucial point is that the video would not purport to characterize how an opponent might speak or behave, but would deceptively convey that the opponent actually did so, when they did not.
A blockbuster deepfake video released shortly before an election could go “viral” on social media and be widely disseminated, with no ability for voters to determine whether it is making fraudulent claims.
Legislation has been introduced in Congress to require clear and obvious disclaimers on political ads whenever AI-generated content is used. For the time being, however, there are no such disclaimer requirements.
Request for Rulemaking
Federal law prohibits candidates for federal office, and their employees or agents, from fraudulently misrepresenting themselves as speaking or acting for or on behalf of another candidate or political party on a matter damaging to that candidate or party. [52 U.S.C. §30124] Specifically, that section reads:
§30124. Fraudulent misrepresentation of campaign authority
(a) In general
No person who is a candidate for Federal office or an employee or agent of such a candidate shall-
(1) fraudulently misrepresent himself or any committee or organization under his control as speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof; or
(2) willfully and knowingly participate in or conspire to participate in any plan, scheme, or design to violate paragraph (1).
(b) Fraudulent solicitation of funds
No person shall-
(1) fraudulently misrepresent the person as speaking, writing, or otherwise acting for or on behalf of any candidate or political party or employee or agent thereof for the purpose of soliciting contributions or donations; or
(2) willfully and knowingly participate in or conspire to participate in any plan, scheme, or design to violate paragraph (1).
A deepfake audio clip or video created by a candidate or their agent that purports to show an opponent saying or doing something the opponent did not say or do would violate this provision of the law. It would constitute a candidate or their agent "fraudulently misrepresent[ing]" themselves "as speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof."
In view of the novelty of deepfake technology and the speed with which it is improving, Public Citizen encourages the Commission to specify, by regulation or guidance, that if candidates or their agents fraudulently misrepresent other candidates or political parties through deliberately false AI-generated content in campaign ads, the restrictions and penalties of 52 U.S.C. §30124 apply.
Sincerely,
Robert Weissman
President
Public Citizen
Craig Holman, Ph.D.
Government Affairs Lobbyist
Public Citizen