California takes on deepfakes fight ahead of election

Days after Vice President Kamala Harris launched her presidential bid, a video created with the help of artificial intelligence went viral.

“I am your Democratic nominee for president because Joe Biden finally exposed his senility in the debate,” a voice that sounded like Harris’ said in the fake audio used to edit one of her campaign ads. “I was selected because I am the ideal diversity candidate.”

Billionaire Elon Musk, who endorsed Harris’ Republican opponent, former President Trump, shared the video on X, then clarified two days later that it was a parody. His original post was viewed 136 million times; the follow-up calling the video a parody was viewed 26 million times.

For Democrats, including California Gov. Gavin Newsom, the incident was no laughing matter. It fueled calls for more regulation of AI-generated videos with political messaging and renewed debate over the government’s appropriate role in trying to rein in emerging technology.

California lawmakers on Friday gave final approval to a bill that would ban the distribution of deceptive campaign ads, or “electioneering communications,” within 120 days of an election. Assembly Bill 2839 targets manipulated content that could harm a candidate’s reputation or electoral prospects, as well as confidence in an election’s outcome. It is meant to address videos like the one of Harris that Musk shared, though it includes an exception for parody and satire.

“We are witnessing the first election in California history where misinformation fueled by generative AI will pollute our information ecosystems like never before and millions of voters will not know what images, audio or video they can trust,” said Assemblymember Gail Pellerin (D-Santa Cruz). “So we have to do something.”

Newsom has indicated he will sign the bill, which would take effect immediately, in time for the November election.

The bill updates a California law that prohibits the dissemination of misleading audio or visual content intended to harm a candidate’s reputation or mislead a voter in the 60 days before an election. State lawmakers say the law needs to be strengthened in an election cycle where people are already flooding social media with digitally altered videos and photos, known as deepfakes.

The use of deepfakes to spread misinformation has concerned lawmakers and regulators in previous election cycles. Those concerns have grown since the release of new AI-powered tools, such as chatbots that can quickly generate images and video. From fake robocalls to phony celebrity endorsements of candidates, AI-generated content is testing tech platforms and lawmakers alike.

Under AB 2839, a candidate, election committee, or election official can seek a court order to have deepfakes removed. They can also sue for damages against the person who distributed or republished the misleading material.

The legislation also applies to deceptive media published in the 60 days after the election, including content that falsely portrays a voting machine, ballot, voting site or other election-related property in a way that is likely to undermine confidence in the outcome.

The law does not apply to satire or parody that is labeled as such, or to broadcast stations that tell viewers what is depicted does not accurately represent a speech or event.

Tech industry groups oppose AB 2839, along with other bills that take aim at online platforms for failing to moderate deceptive election content or label AI-generated content.

“This will effectively cripple and block constitutionally protected free speech,” said Carl Szabo, vice president and general counsel for NetChoice. The group’s members include Google, X and Snap, as well as Facebook’s parent company Meta and other tech giants.

Online platforms have their own rules regarding manipulated media and political ads, but their policies may differ.

Unlike Meta and X, TikTok does not allow political ads, and it says it may remove even labeled AI-generated content if it depicts a public figure such as a celebrity and is “used for political or commercial endorsements.” Truth Social, the platform founded by Trump, does not address manipulated media in its rules about what is not allowed.

Federal and state regulators are already cracking down on AI-generated content.

In May, the Federal Communications Commission proposed a $6-million fine against Steve Kramer, a Democratic political consultant who sent a robocall that used AI to impersonate President Biden’s voice. The fake call discouraged turnout in New Hampshire’s Democratic presidential primary in January. Kramer, who told NBC News he planned the call to draw attention to the dangers of AI in politics, also faces criminal charges of felony voter suppression and misdemeanor candidate impersonation.

Szabo said current laws are sufficient to address concerns about election deepfakes. NetChoice has sued several states to stop some laws designed to protect children on social media, alleging they violate free speech protections under the First Amendment.

“Just creating a new law doesn’t do anything to stop bad behavior. You actually have to enforce the laws,” Szabo said.

According to the consumer advocacy nonprofit Public Citizen, more than two dozen states, including Washington, Arizona and Oregon, have enacted, passed or are working on legislation to regulate deepfakes.

In 2019, California passed a law aimed at combating manipulated media after a doctored video that made House Speaker Nancy Pelosi appear drunk circulated on social media. Enforcing the law has proved a challenge.

“We had to water it down,” said Assemblymember Marc Berman (D-Menlo Park), the bill’s author. “It brought a lot of attention to the potential risks of this technology, but I was worried that ultimately it wouldn’t have any major effects.”

Rather than taking legal action, political candidates might choose to debunk a deepfake, or even ignore it to limit its spread, said Danielle Citron, a professor at the University of Virginia School of Law. By the time a case makes its way through the legal system, the content may already have gone viral.

“These laws are important because of the message they send. They teach us something,” she said, adding that they inform people who share deepfakes that there is a cost to doing so.

This year, lawmakers worked with the California Initiative for Technology and Democracy, a project of the nonprofit California Common Cause, on several bills aimed at combating political deepfakes.

Some of the bills target online platforms, which under federal law generally cannot be held liable for content posted by users.

Berman introduced a bill that would require online platforms with at least 1 million California users to remove or label certain deceptive election-related content within 120 days of an election. Platforms would have to act no later than 72 hours after a user reports a post. Under AB 2655, which passed the Legislature on Wednesday, platforms would also need procedures for identifying, removing and labeling false content. Like AB 2839, it does not apply to parody or satire, or to news outlets that meet certain requirements.

Another bill, co-authored by Assemblymember Buffy Wicks (D-Oakland), would require online platforms to label AI-generated content. While NetChoice and TechNet, another industry group, oppose the bill, ChatGPT maker OpenAI supports AB 3211, Reuters reported.

Neither bill, however, would take effect until after the November election, underscoring how hard it is for new laws to keep pace with rapidly advancing technology.

“Part of my hope in introducing this bill is the attention it brings and, hopefully, the pressure it puts on social media platforms to behave properly now,” Berman said.