FCC’s Enforcement Action Exemplifies How Agencies Should React to Deepfakes
Washington, D.C. — Today, the Federal Communications Commission (FCC) announced a historic settlement with the company that transmitted AI-generated deepfake robocalls during the New Hampshire presidential primary election. Under the settlement, Lingo Telecom will pay a $1 million civil penalty.
Robert Weissman, co-president of Public Citizen, reacted to the news:
“AI deepfakes’ ability to spread disinformation poses an existential threat to our democracy, as the New Hampshire robocalls illustrated. The FCC is demonstrating how government enforcers must act to protect our democracy against that threat.
“FCC Chair Jessica Rosenworcel gets it exactly right when she says that people have a right to know when they are receiving authentic content and when they are receiving AI-generated deepfakes — and that no one should ever be fooled into thinking they are seeing or hearing authentic content that is actually an AI fake.
“This enforcement action highlights the importance of the FCC’s proposed rules requiring disclosure of AI-generated content in TV and radio political ads. Equally, it highlights the regulatory malpractice of the Federal Election Commission refusing to adopt rules requiring disclosure of political deepfakes.”