California has adopted some of the strictest measures in the country to combat the spread of deepfakes in elections ahead of the 2024 vote.
Governor Gavin Newsom signed a series of bills into law at an AI conference in San Francisco.
The new policies include a law targeting fake AI-generated political ads and materials that could mislead the electorate.
The law, which came into effect immediately, allows individuals to sue for damages if they have been harmed by deepfake content.
It also allows courts to order the removal of misleading AI-generated materials that misrepresent candidates, election processes or even election workers.
Governor Newsom said these measures are essential to preserving public confidence in elections at a time when AI technologies are rapidly advancing.
They will also place the state at the forefront of combating the potential impact of artificial intelligence on the integrity of elections.
“This is about protecting democracy, about ensuring that Californians get the truth, not manipulated fabrications that could influence how people vote,” he said.
However, the new legislation is already facing legal opposition.
A lawsuit has been filed in Sacramento by a political activist who created parody videos containing altered audio clips of Vice President Kamala Harris.
The individual, whose work has been shared by Elon Musk, claims the new laws infringe on First Amendment rights.
His complaint argues that the laws are overly broad and could be used to censor free speech under the guise of regulating AI-generated content.
“The Governor of California just made this parody video illegal in violation of the United States Constitution,” Musk wrote on X, formerly known as Twitter, referring to one of the parody videos shared on his platform.
Elon Musk has been one of Newsom’s most vocal critics, notably mocking the governor’s AI policies in a tweet that referred to the satirical character “Professor Suggon Deeznutz.”
State officials argue that the legislation does not target satire or parody, but rather deceptive content that misleads voters without clearly disclosing that AI was involved.
“This new election misinformation disclosure law is no more burdensome than laws already passed in other states,” Newsom spokeswoman Izzy Gardon said in response to the lawsuit.
Experts on both sides of the debate are watching California closely, as the state’s approach could set a national precedent.
Theodore Frank, the attorney representing the plaintiff, warned that the law could open the door for social media companies to “censor and harass people” based on subjective interpretations of AI-created content.
Public Citizen, a consumer advocacy organization, tracks state laws on election deepfakes.
Its representative, Ilana Beller, said that while California’s new law has the potential to act as a deterrent, its actual effectiveness will depend on how quickly courts can act to stop the spread of misleading content.
“In an ideal world, we would be able to remove content as soon as it is posted,” she said.
“Because the sooner you can remove content, the fewer people will see it, the fewer will spread it through reposts and so on, and the sooner you can dispel it.”
This article includes reporting from The Associated Press