The Nature and Risks of AI-Generated Deepfakes in Political Communications
Paper Presented Before the 2025 Conference of the American Political Science Association
By Craig Holman, Ph.D.
ABSTRACT
Generative “Artificial Intelligence” (AI), in the form of computerized deep-learning models that take raw data and produce high-quality images, video, text and voice content, has been around for quite some time. Only in recent years, however, has the technology made such startling advances that its output is often indistinguishable in appearance and sound from recordings of actual events. When such content is used in political communications to depict a candidate or party representative saying or doing something that never happened, it is called a “deepfake.” The 2024 election witnessed the first serious onslaught of realistic-appearing deepfakes in campaign communications, and, as this study shows, 2024 is only the beginning. Drawing on a unique dataset, the “Political Deepfakes Incidents Database,” this study measures the impact deepfakes may have on (i) misinforming voters; (ii) affecting election outcomes; and (iii) lowering trust in democratic institutions. The author finds that deepfakes are likely to have a significant impact on future elections and the electoral process, and proposes a regulatory model for addressing these potential harms.
Request the full paper from the author at cholman@citizen.org