Technology

Fake Biden robocall ‘tip of the iceberg’ for AI election misinformation

President Biden arrives at the White House in Washington, D.C., on Monday, January 22, 2024, after spending the weekend in Delaware. (Greg Nash)

A digitally altered message created to sound like President Biden urging New Hampshire residents not to vote in Tuesday’s primary added fuel to calls for regulation of artificial intelligence (AI) as the 2024 campaign heats up.

The robocall is the latest example of how AI is being used in political races while the U.S. lacks fundamental guardrails to curtail threats posed by the technology, which can make it appear that a candidate said or did something that never happened.

As the technology becomes harder to detect and easier for anyone to use, experts said more AI election content will likely emerge, which could sow confusion and distrust among voters.  

“This is kind of just the tip of the iceberg in what could be done with respect to voter suppression or attacks on election workers,” said Kathleen Carley, a professor at Carnegie Mellon University.  

“It was almost a harbinger of what all kinds of things we should be expecting over the next few months,” she added.

Samir Jain, vice president of policy at the Center for Democracy and Technology (CDT), said the robocall highlighted two risks from deceptive AI election content: It made a candidate appear to say something they did not, and it spread false information about voting.  

A spokesperson for the campaign to write in Biden’s name on the New Hampshire primary ballot slammed the robocall, which was first reported by NBC News, as a form of “deepfake disinformation designed to harm Joe Biden, suppress votes, and damage our democracy.”  

The robocall campaign is under investigation by New Hampshire regulators and triggered a call from a New York Democrat for an investigation by the Justice Department.  

In addition to the risks of replicating a candidate’s likeness, AI advances also make it easier to target specific groups with misinformation campaigns. The large language models that power AI systems could be used to generate a model of a potential audience, Carley said.  

Just like an AI system could produce a song in the style of a certain genre or artist, Carley said, AI “could take the same misleading story” and “tell it in different ways” that are tailored to the style a particular audience would like.

The advances also raise concerns about attacks on election workers and trust in the process. For example, AI could be used to generate videos that falsely appear to show election workers tampering with votes, Carley said.  

“It’s 100 percent certainty in the absence of laws making deepfakes illegal or requiring disclosure that political operatives of all stripes and all levels of government will use the technology to advance their interests, irrespective of the impact on the broader democracy,” said Robert Weissman, president of nonprofit watchdog group Public Citizen.

The advocacy group led a petition urging the Federal Election Commission (FEC) to amend its rule against fraudulently misrepresenting candidates or political parties to make clear that it applies to deliberately deceptive AI in campaigns.

The FEC voted unanimously in August to open a public comment period on updating the rule but has yet to act since the comment period closed in October.

Public Citizen has slammed the FEC for moving slowly on the matter.  

A spokesperson for the FEC said Tuesday the agency had no update on timing to share. The spokesperson also directed The Hill to a statement FEC Chairman Sean J. Cooksey (R) gave The Washington Post pushing back on criticism that the commission is moving slowly.

“Any suggestion that the FEC is not working on the pending AI rulemaking petition is false. The Commission and its staff are diligently reviewing the thousands of public comments submitted. When that process is complete, I expect the Commission will resolve the AI rulemaking by early summer,” Cooksey said.  

Weissman said the New Hampshire robocall underscores the need for swift action.  

“The political deepfake era is here and regulators are either going to step up or we’re going to descend into election chaos,” he said.  

Weissman said the FEC’s action is crucial not just because of the rule it can issue, but also to establish the “social and legal norm that political deepfakes are not okay.”

At the same time, he said, the FEC’s authority extends only to candidates and political parties, meaning a rule wouldn’t address content from outside political spenders or from individuals posting on social media. Targeting those concerns, he said, would require more comprehensive congressional action.

Although there is bipartisan interest, Weissman said it is “unwise to bet on Congress to do anything on any topic at this point.”  

Sen. Amy Klobuchar (D-Minn.) pushed her bipartisan bill, co-sponsored by Sens. Josh Hawley (R-Mo.), Chris Coons (D-Del.) and Susan Collins (R-Maine), which seeks to prohibit the distribution of materially deceptive AI-generated audio, images or video in political ads, as a way to mitigate risks like the New Hampshire robocall.

“These AI-created deepfakes of candidates are dangerous to our democracy, and we’re already starting to see this happen with the reported fake robocall using the President’s voice to tell people not to vote in New Hampshire. Whether you are a Democrat or a Republican, no one wants to see fake ads or robocalls where you cannot even tell if it’s your candidate or not,” Klobuchar said in a statement Monday.  

States are also pushing ahead with action in the absence of federal guidelines. California, Michigan, Minnesota, Texas and Washington have enacted laws to regulate deepfakes in elections, and about two dozen more have introduced or discussed such regulations, according to a tracker compiled by Public Citizen.

Jain said the risks posed by AI election content are an “ecosystem-wide problem” that will require action by government, political campaigns and private companies to mitigate the threats.

Major social media and AI companies have rolled out policies that require the disclosure of generative AI in political ads. OpenAI, the company behind the popular ChatGPT tool, also released a plan to prevent the use of its tools to spread election misinformation, including banning people from using its tech to create chatbots that impersonate real candidates.  

OpenAI has already enforced the policy, banning a bot that imitated Democratic presidential candidate Rep. Dean Phillips (Minn.).

Jain said it’s crucial that private companies don’t just have policies in place, but also back them up with resources and enforcement.  

“Without those two steps, it really doesn’t matter what the policies are,” he said.  


