Letter to Elon Musk and X: Take Down Harris Deepfakes Now
Linda Yaccarino and Elon Musk
X
1355 Market Street
San Francisco, CA 94103
Dear Linda Yaccarino and Elon Musk,
I am writing to urge you to take down or robustly label a deepfake video recently posted by an X user with the account name “Mr. Reagan” and reposted by Mr. Musk, and to reaffirm that X will robustly and even-handedly enforce its policy on synthetic and manipulated media.
On July 26 at 9:23 AM, an X account owned by user “Mr. Reagan” posted a deepfake video of Vice President Kamala Harris. The video manipulates a campaign ad from the Harris Campaign, employing what appears to be an AI-generated version of Harris’s voice to say things that she did not say. The AI-generated voice is of high quality and indistinguishable, or nearly indistinguishable, from Harris’s actual voice. Mr. Reagan posted the video with accompanying text stating, “Kamala Harris Campaign Ad PARODY,” although the video itself does not state that it is a parody or digitally altered media.
On the evening of July 26, at 7:11 PM, Mr. Musk reposted the video, but without the accompanying text stating that the video is a parody. Mr. Musk’s accompanying text instead stated, “This is amazing,” followed by a laughing emoji. As of around noon Eastern Time on July 29, the video has received more than 129 million views.
The deepfake video clearly violates X’s policies on the posting of synthetic media. The core of the policy is defined simply in its opening lines: “You may not share synthetic, manipulated, or out-of-context media that may deceive or confuse people and lead to harm (‘misleading media’).”
The X policy establishes two separate grounds upon which content may be determined to be misleading. The Mr. Reagan/Musk video violates both:
- “Media that is significantly and deceptively altered, manipulated, or fabricated:” Vice President Harris’s voice is deceptively fabricated in the video.
- “Media that is shared in a deceptive manner or with false context:” The video is deceptive in showing actual footage of Harris – using an actual ad from the Harris campaign – and deceptively presenting Harris saying things that she never said.
To violate the policy, the content must also be “likely to result in widespread confusion on public issues, impact public safety, or cause serious harm.” The video is likely to cause widespread confusion and deceive many viewers into believing that Harris actually stated the things the AI-generated voice says. This is particularly true because the video’s more exaggerated statements come later in the video, when relatively fewer viewers will be watching than during the initial 15 seconds. The confusion sown by the video will be consequential. The AI-generated voice statements ratify themes that Harris’s opponents are trying to connect to her candidacy.
Importantly, the policy takes note of and respects satire, but makes clear that memes and satire are permissible “provided these do not cause significant confusion about the authenticity of the media.” Simply stating after the fact that content is parody, as Mr. Musk appears to have done, does not rescue it from violating the X policy. Nor does stating that something is a parody in text accompanying a posted video – but not in the video itself – adequately cure the deception or bring otherwise misleading content into conformity with X policies.
The X policy calls for removing “high-severity” violations of its synthetic media policy. We strongly believe this is such a case, given the video’s potential effect on a presidential election and its broad circulation, and that X should take down all posts of the video immediately. At minimum, according to the policy, the video should be labeled, its visibility should be reduced, and likes and reposts should be turned off.
The stakes are high in this case. This is one of the most prominent political deepfakes to circulate in the United States. It concerns a major presidential candidate. And the misleading synthetic media was reposted by the owner of the platform, who also happens to have the largest following on the platform.
If X’s policies are to have meaning, they must be applied in the most important cases and to all users, no matter their power, influence, or even ownership of the platform. In fact, to have meaning, they must especially be applied in those cases. We look forward to your doing so.
Sincerely,
Robert Weissman,
Co-President of Public Citizen