Second Submission: Petition for Rulemaking to Clarify that the Law Against “Fraudulent Misrepresentation” (52 U.S.C. §30124) Applies to Deceptive AI Campaign Communications

July 13, 2023

Federal Election Commission
Lisa J. Stevenson, Office of General Counsel
1050 First Street, NE
Washington, D.C. 20463

Dear Ms. Stevenson:

Public Citizen respectfully submits this second petition for rulemaking, pursuant to 11 C.F.R. §200.1 et seq., concerning the application of the prohibition on “fraudulent misrepresentation” to deliberately misleading campaign communications generated through the use of artificial intelligence (AI). This petition requests that the Federal Election Commission conduct a rulemaking to clarify the meaning of “fraudulent misrepresentation” at 11 C.F.R. §110.16.

The first petition by Public Citizen on this matter was debated by the Commission on June 22, 2023. The Commission declined to issue a Notice of Availability (NOA) on a 3-3 vote, depriving the public of any opportunity to comment on the proposal and halting consideration of the petition. It is highly irregular for the Commission to decline to issue an NOA.

Commissioners posited two key reasons for voting to reject the petition. The first was doubt about whether the Commission has statutory authority to regulate deliberately deceptive AI-produced content in campaign ads and other communications under the federal law against “fraudulent misrepresentation” (52 U.S.C. §30124). The second was that the petition failed to cite the specific regulation it wished to amend.

These issues are addressed in this second submission of a petition for rulemaking to clarify that the law against “fraudulent misrepresentation” (52 U.S.C. §30124) applies to deliberately deceptive AI-produced content in campaign communications.

BACKGROUND

Extraordinary advances in artificial intelligence now provide political operatives with the means to produce campaign ads and other communications containing computer-generated fake images, audio or video of candidates that appear authentic, fraudulently misrepresenting what those candidates say or do. Generative artificial intelligence and deepfake technology – a type of artificial intelligence used to create convincing image, audio and video hoaxes1 – is evolving very rapidly. Every day, it seems, new and increasingly convincing deepfake audio and video clips are disseminated, including, for example, an audio fake of President Biden,2 a video fake of the actor Morgan Freeman3 and an audio fake of the actress Emma Watson reading Mein Kampf.4

Deceptive deepfakes are already appearing in elections, and it is a near certainty that this trend will intensify absent action from the Federal Election Commission and other policymakers:

  • In Chicago, a mayoral candidate in this year’s city elections complained that AI technology was used to clone his voice in a post by a fake news outlet on Twitter, making him appear to condone police brutality.5
  • As the 2024 presidential election heats up, some campaigns are already testing AI technology to shape their campaign ads. The presidential campaign of Gov. Ron DeSantis, for example, posted deepfake images of former President Donald Trump hugging Dr. Anthony Fauci.6

The quality of deepfakes is impressive, and they are already able to fool listeners and viewers. Generally, however, careful examination can still reveal flaws that show them to be fake.

But as the technology continues to improve, it will become increasingly difficult and, perhaps, nearly impossible for an average person to distinguish deepfake videos and audio clips from authentic media. It is an open question how well digital technology experts will be able to distinguish fakes from real media.

The technology will almost certainly give political actors the opportunity to deceive voters in ways that extend well beyond any First Amendment protections for political expression, opinion, or satire. A political actor may well be able to use AI technology to create a video that purports to show an opponent making an offensive statement or accepting a bribe.

That video may then be disseminated with the intent and effect of persuading voters that the opponent said or did something they did not say or do. The crucial point is that the video would not purport to characterize how an opponent might speak or behave, but to convey deceptively that they actually did so, when they did not.

A blockbuster deepfake video with this kind of fraudulent misrepresentation could be released shortly before an election, go “viral” on social media, and be widely disseminated, with no ability for voters to determine that its claims are fraudulent.

REQUEST FOR RULEMAKING

Federal law prohibits candidates for federal office, and their employees or agents, from fraudulently misrepresenting themselves as speaking or acting for or on behalf of another candidate or political party on a matter damaging to that other candidate or party (52 U.S.C. §30124). Specifically, that section reads:

§30124. Fraudulent misrepresentation of campaign authority
(a) In general
No person who is a candidate for Federal office or an employee or agent of such a candidate shall—
(1) fraudulently misrepresent himself or any committee or organization under his control as speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof; or
(2) willfully and knowingly participate in or conspire to participate in any plan, scheme, or design to violate paragraph (1).
(b) Fraudulent solicitation of funds
No person shall—
(1) fraudulently misrepresent the person as speaking, writing, or otherwise acting for or on behalf of any candidate or political party or employee or agent thereof for the purpose of soliciting contributions or donations; or
(2) willfully and knowingly participate in or conspire to participate in any plan, scheme, or design to violate paragraph (1).

A deepfake audio clip or video by a candidate or their agent that purports to show an opponent saying or doing something they did not do would violate this provision of the law. It would constitute a candidate or their agent “fraudulently misrepresent[ing]” themselves “as speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof on a matter which is damaging to such other candidate or political party or employee or agent thereof.”

Specifically, by falsely putting words into another candidate’s mouth, or showing the candidate taking an action they did not take, the deepfake fraudulently speaks or acts “for” that candidate in a way deliberately intended to damage him or her. This is precisely what the statute aims to proscribe. The key point is that the deepfake purports to show a candidate speaking or acting in a way they did not. The deepfake misrepresents the identity of the true speaker, which is an opposing candidate or campaign. The deepfaker misrepresents themselves as speaking for the deepfaked candidate. The deepfake is fraudulent because the deepfaked candidate did not in fact say or do what is depicted and because the deepfake aims to deceive the public. And this fraudulent misrepresentation aims to damage the campaign of the deepfaked candidate.

It is important to distinguish how deceptive deepfakes violate the prohibition on fraudulent misrepresentation compared to other practices:

• The prohibition on fraudulent misrepresentation does not apply generally to the use of artificial intelligence in campaign communications, but only to deepfakes or similar communications.
• The prohibition on fraudulent misrepresentation would not apply to cases of parody, where an opposing candidate is shown doing or saying something they did not, but where the purpose and effect are not to deceive voters and, therefore, where there is no fraud.
• The prohibition on fraudulent misrepresentation would not apply in cases where there is a sufficiently prominent disclosure that the image, audio or video was generated by artificial intelligence and portrays fictitious statements and actions; the fact of a sufficiently prominent disclosure would eliminate the element of deception and fraud.

1. The Commission has already recognized its statutory authority to regulate under the law against “fraudulent misrepresentation”

In 2018, former Commissioner Lee Goodman explained how the law against “fraudulent misrepresentation” is part and parcel of the Federal Election Campaign Act (FECA), subject to regulation by the FEC. As Goodman observed:

“The Act and Commission regulations set forth two prohibitions with respect to fraudulent misrepresentation. The first prohibits a candidate or his or her employees or agents from speaking, writing or otherwise acting on behalf of another candidate or political party committee on a matter which is damaging to such other candidate or political party. The second prohibits other persons from misrepresenting themselves as speaking, writing, or otherwise acting for or on behalf of any candidate or political party for the purpose of soliciting contributions. The Act further provides that no person shall willfully and knowingly participate in or conspire to participate in any plan or scheme to engage in such behavior.”7

Former Commissioner Goodman’s full statement, which is attached as Appendix A, also provides useful guidance on disclosure that can inform the current Commission as it develops further regulations implementing the law against “fraudulent misrepresentation.”

2. 11 C.F.R. §110.16 is the specific regulation implementing the statutory prohibition on “fraudulent misrepresentation” and is the regulatory provision that the Commission should modify

The FEC has implemented the law against “fraudulent misrepresentation” in 11 C.F.R. §110.16, which reads:

§ 110.16 Prohibitions on fraudulent misrepresentations.
(a) In general. No person who is a candidate for Federal office or an employee or agent of such a candidate shall—
(1) Fraudulently misrepresent the person or any committee or organization under the person’s control as speaking or writing or otherwise acting for or on behalf of any other candidate or political party or employee or agent thereof in a matter which is damaging to such other candidate or political party or employee or agent thereof; or
(2) Willfully and knowingly participate in or conspire to participate in any plan, scheme, or design to violate paragraph (a)(1) of this section.
(b) Fraudulent solicitation of funds. No person shall—
(1) Fraudulently misrepresent the person as speaking, writing, or otherwise acting for or on behalf of any candidate or political party or employee or agent thereof for the purpose of soliciting contributions or donations; or
(2) Willfully and knowingly participate in or conspire to participate in any plan, scheme, or design to violate paragraph (b)(1) of this section.

The Commission expanded this regulation following passage of the Bipartisan Campaign Reform Act of 2002, which added the provision on “fraudulent solicitation of funds” to 52 U.S.C. §30124.

In April 2021, Commissioner Allen Dickerson joined Commissioner James Trainor in a statement of reasons, issued in the enforcement matter of Americans for Sensible Solutions PAC and David Garrett (MUR 7140), that specifically addressed the law against “fraudulent misrepresentation” and its implementing regulation. That statement is attached as Appendix B, “Statement of Reasons of Vice Chair Allen Dickerson and Commissioner James Trainor.”

In view of the novelty of deepfake technology and the speed with which it is improving, Public Citizen encourages the Commission to specify, in guidance as well as in an amendment to 11 C.F.R. §110.16(a), that if candidates or their agents fraudulently misrepresent other candidates or political parties through deliberately false AI-generated content in campaign ads or other communications – absent clear and conspicuous disclosure in the communication itself that the content is generated by artificial intelligence and does not represent real events – then the restrictions and penalties of the law and the Code of Federal Regulations apply.

Sincerely,

Public Citizen, by Robert Weissman, President
1600 20th Street, N.W.
Washington, D.C. 20009
(202) 588-1000

Public Citizen, by Craig Holman, Ph.D.
Government affairs lobbyist
215 Pennsylvania Avenue, S.E.
Washington, D.C. 20003
(202) 454-5182