
Louisiana Petition for Rulemaking on Deepfakes

Public Citizen and Common Cause Seek Transparency of AI-content in Electioneering Messages

By Craig Holman, Ph.D.


Louisiana Board of Ethics
Mrs. Carolyn Abadie Landry, Program Director
P.O. Box 4368
Baton Rouge, LA 70821

Submitted via email

Petition for Rulemaking to Clarify that R.S. 18:1463(C)(4) of the Elections Code Prohibiting Misrepresentations in Political Materials Applies to Deceptive AI Campaign Communications

Dear Louisiana Board of Ethics:

Public Citizen and Common Cause respectfully submit this petition for rulemaking pursuant to R.S. 49:9564 on the subject of deceptive misrepresentations in political materials. R.S. 18:1463(C)(4) of the Louisiana Elections Code expressly prohibits any person from falsely misrepresenting themselves as speaking, writing, or acting on behalf of a candidate or political committee. This petition requests that the Louisiana Board of Ethics conduct rulemaking to clarify that deliberately deceptive Artificial Intelligence (AI)-content in campaign communications targeting a candidate, disseminated with the intent to knowingly deceive voters, violates the state law prohibiting false misrepresentations affecting elections, unless the communication includes adequate disclosure that the content is in fact fabricated.

Public Citizen is a nonprofit organization based in Washington, D.C., advocating for open and honest campaigns on behalf of its 3,641 activists who reside in Louisiana. Common Cause represents the same interests on behalf of its members and supporters in Louisiana.

The Louisiana Board of Ethics has the responsibility and statutory authority to regulate deliberately deceptive AI-content in campaign communications in which a person pretends to be speaking or acting on behalf of a candidate or committee in order to falsely influence voters. R.S. 18:1463(C)(4)(a) – “Political material; ethics; prohibitions” – reads as follows:

(4)(a) No person shall misrepresent himself or any committee or organization under his control as speaking, writing, or otherwise acting for or on behalf of any candidate, political committee, or political party, or any employee or agent thereof.

Background

Extraordinary advances in artificial intelligence now provide political operatives with the means to produce campaign ads and other communications with computer-generated fake images, audio, or video of candidates that appear lifelike, fraudulently misrepresenting what candidates say or do. Generative artificial intelligence and deepfake technology – a special type of artificial intelligence used to create convincing image, audio, and video hoaxes[1] – is evolving very rapidly. Every day, it seems, new and increasingly convincing deepfake audio and video clips are disseminated, including, for example, videos fabricating sexual acts involving children or nonconsenting adults, the production of which was recently outlawed in Louisiana, an audio fake of President Biden,[2] a video fake of the actor Morgan Freeman,[3] and an audio fake of the actress Emma Watson reading Mein Kampf.[4]

Heading into the 2024 election, Louisiana, like all other states, faces an acute risk of deepfake communications attempting to misinform if not deceptively sway voters. Candidates as well as other electioneering operatives are capable of producing real-looking, but entirely fabricated, videos and voices of candidates saying or doing something that they never in fact said or did – essentially falsely misrepresenting themselves as speaking or acting on behalf of the candidates. Louisiana lawmakers have not yet taken official steps to regulate the use of AI-content in campaigns or to implement guardrails for political communications.

Dietram Scheufele, who studies science communication and technology policy at the University of Wisconsin-Madison, noted “we’re definitely entering a new world.” The technology, he added, “gets real creepy real fast.”[5]

Deceptive deepfakes are already appearing in this election cycle, and it is a near certainty that this trend will intensify absent reasonable regulations from state and federal officials:

  • In Chicago, a mayoral candidate in this year’s city elections complained that AI technology was used to clone his voice by a fake news outlet on Twitter in a way that made him appear to condone police brutality.[6]
  • As the 2024 presidential election heats up, some campaigns are already testing AI technology to shape their campaign ads. The presidential campaign of Gov. Ron DeSantis, for example, posted deepfake images of former President Donald Trump hugging Dr. Anthony Fauci.[7]

As the technology continues to improve, it will become increasingly difficult and, perhaps, nearly impossible for an average person to distinguish deepfake videos and audio clips from authentic media. It is an open question how well even digital technology experts will be able to distinguish fakes from real media.

The technology will almost certainly create the opportunity for political actors to deploy it to deceive voters in ways that extend well beyond any First Amendment protections for political expression, opinion, or satire. A political actor may well be able to use AI technology to create a video that purports to show an opponent making an offensive statement or accepting a bribe.

That video may then be disseminated with the intent and effect of persuading voters that the opponent said or did something they did not say or do. The crucial point is that the video would not purport to characterize how an opponent might speak or behave, but to convey deceptively that they actually did so, when they did not.

A blockbuster deepfake video with this kind of fraudulent misrepresentation could be released shortly before an election, go “viral” on social media, and be widely disseminated, with little ability for voters to determine that its claims are fraudulent.

Request for Rulemaking

While the Louisiana state legislature may eventually address the problem of deliberately deceptive deepfakes in campaign communications through legislation, and notably has already done so when it comes to deepfake pornography, that goal is not likely to be achieved in time for the critical 2024 elections. A handful of states so far have taken legislative action to address the problem of deepfakes: California,[8] Michigan, Minnesota,[9] Texas[10] and Washington.[11] Several other state legislatures, including Louisiana's, have granted election agencies the statutory authority to rein in egregious abuses through regulations and enforcement – at least until the legislature has the time to tackle the issue itself.

Section 1463(C)(4) of the Louisiana Elections Code is a straightforward and unequivocal law prohibiting any person from “misrepresent[ing] himself or any committee or organization under his control as speaking, writing, or otherwise acting for or on behalf of any candidate, political committee, or political party, or any employee or agent thereof.” When a deepfake puts words or actions into a candidate's mouth, it is the person producing the deepfake who is doing the speaking and acting on behalf of the candidate. The highly deceptive potential of deepfake technology is so new to elections, however, that it is unclear whether it falls within the scope of this law against false misrepresentations.

Public Citizen and Common Cause request that the Louisiana Board of Ethics clarify the conditions under which deliberately deceptive deepfake content in campaign communications, absent adequate disclosure that the content is entirely fabricated, falls within the four corners of Title 18, §1463(C)(4), which prohibits knowingly false misrepresentations by persons pretending to speak or act on behalf of candidates with the intent to affect vote choices.

Sincerely,

Public Citizen, by
Robert Weissman
President
1600 20th Street, N.W.
Washington, D.C.  20009
(202) 588-1000
Public Citizen, by
Craig Holman, Ph.D.
Government affairs lobbyist
215 Pennsylvania Avenue, S.E.
Washington, D.C. 20003
(202) 454-5182
Common Cause, by
Ishan Mehta
Director, Media and Democracy Program
805 15th Street, N.W., 8th Floor
Washington, D.C.  20005
(202) 833-1200

 

Appendix A.

Public Citizen Model State Law on Deceptive and Fraudulent Deepfakes in Election Communications

(a) For purposes of this section, “synthetic media” means an image, an audio recording, or a video recording of an individual’s appearance, speech, or conduct that has been created or intentionally manipulated with the use of generative adversarial network techniques or other digital technology in a manner to create a realistic but false image, audio, or video.

(b) For purposes of this section, “a deceptive and fraudulent deepfake” is synthetic media that depicts a candidate or political party with the intent to injure the reputation of the candidate or party or otherwise deceive a voter that:

(1) Appears to a reasonable person to depict a real individual saying or doing something that did not actually occur in reality; or

(2) Provides a reasonable person a fundamentally different understanding or impression of the appearance, action, or speech than a reasonable person would have from the unaltered, original version of the image, audio recording, or video recording.

(c) Except as provided in subdivision (d), a person, corporation, committee, or other entity shall not, within 90 days of an election at which a candidate for elective office will appear on the ballot, distribute a synthetic media message that the person, corporation, committee, or other entity knows or should have known is a deceptive and fraudulent deepfake, as defined in subdivision (b), of a candidate or party on the state or local ballot.

(d) (1) The prohibition in subdivision (c) does not apply if the audio or visual media includes a disclosure stating: “This ______ has been manipulated or generated by artificial intelligence.”

(2) The blank in the disclosure required by sub-paragraph (1) shall be filled with whichever of the following terms most accurately describes the media:

(A) Image.

(B) Video.

(C) Audio.

(3) For visual media, the text of the disclosure shall appear in a size that is easily readable by the average viewer and no smaller than the largest font size of other text appearing in the visual media. If the visual media does not include any other text, the disclosure shall appear in a size that is easily readable by the average viewer. For visual media that is video, the disclosure shall appear for the duration of the video.

(4) If the media consists of audio only, the disclosure shall be read in a clearly spoken manner and in a pitch that can be easily heard by the average listener, at the beginning of the audio, at the end of the audio, and, if the audio is greater than two minutes in length, interspersed within the audio at intervals of not greater than two minutes each.

(e) (1) Use of a deceptive and fraudulent deepfake to influence an election; penalty. A candidate whose appearance, action, or speech is depicted through the use of a deceptive and fraudulent deepfake in violation of subdivision (c) may seek injunctive or other equitable relief prohibiting the publication of such deceptive and fraudulent deepfake.

(2) A person who violates subdivision (c) without the appropriate disclosures may also be held liable by the election enforcement agency for civil penalties and fined as follows:

(A) if the person commits the violation within five years of one or more prior convictions under this section, to payment of a fine of not more than $10,000;

(B) if the person commits the violation with the intent to cause violence or bodily harm, to payment of a fine of not more than $5,000; or

(C) in other cases, to payment of a fine of not more than $1,000.

(3) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, that broadcasts a deceptive and fraudulent deepfake prohibited by this section as part of a bona fide newscast, news interview, news documentary, or on-the-spot coverage of bona fide news events, if the broadcast clearly acknowledges through content or a disclosure, in a manner that can be easily heard or read by the average listener or viewer, that there are questions about the authenticity of the materially deceptive audio or visual media.

(4) This section does not apply to a radio or television broadcasting station, including a cable or satellite television operator, programmer, or producer, when it is paid to broadcast a deceptive and fraudulent deepfake and has made a good faith effort to establish the depiction is not a deceptive and fraudulent deepfake.

(5) This section does not apply to an internet website, or a regularly published newspaper, magazine, or other periodical of general circulation, including an internet or electronic publication, that routinely carries news and commentary of general interest, and that publishes materially deceptive audio or visual media prohibited by this section, if the publication clearly states that the materially deceptive audio or visual media does not accurately represent the speech or conduct of the candidate.

(6) This section does not apply to materially deceptive audio or visual media that constitutes satire or parody.

(f) The provisions of this section are severable. If any provision of this section or its application is held invalid, that invalidity shall not affect other provisions or applications that can be given effect without the invalid provision or application.

 

[1] https://www.techtarget.com/whatis/definition/deepfake 

[2] https://twitter.com/zachsilberberg/status/1627438454756835329

[3] https://www.youtube.com/watch?v=oxXpB9pSETo&t=9s

[4] https://www.thetimes.co.uk/article/ai-4chan-emma-watson-mein-kampf-elevenlabs-9wghsmt9c

[5] https://wisconsinwatch.org/2023/07/ai-elections-wisconsin-artificial-intelligence/

[6] https://www.nytimes.com/2023/06/25/technology/ai-elections-disinformation-guardrails.html#:~:text=Gaps%20in%20campaign%20rules%20allow,increasingly%20powerful%20artificial%20intelligence%20technology

[7] https://www.theverge.com/2023/6/8/23753626/deepfake-political-attack-ad-ron-desantis-donald-trump-anthony-fauci

[8] §20010 of the California elections code.

[9] §609.771 of the Minnesota elections code.

[10] §255.004 of the Texas elections code.

[11] §42.17a of the Washington elections code.