Scott Wiener’s AI Bill Moves Forward With Major Changes

A controversial bill aimed at protecting Californians from disasters caused by artificial intelligence has sparked an uproar in the tech industry. This week, the bill passed a key committee, but with amendments intended to make it more palatable to Silicon Valley.

SB 1047, proposed by Sen. Scott Wiener (D-San Francisco), is expected to face a final vote in the state Assembly later this month. If the Legislature passes it, Gov. Gavin Newsom will have to decide whether to sign or veto the groundbreaking legislation.

Supporters of the bill say it will create safeguards to prevent rapidly advancing AI models from causing disastrous incidents, such as shutting down the power grid without warning. They worry that the technology is developing faster than its human creators can control it.

Lawmakers want to incentivize developers to handle the technology responsibly and give the state attorney general the power to impose penalties if there is a threat or imminent harm. The legislation also requires developers to be able to turn off AI models they directly control if things go wrong.

But some tech companies, like Facebook owner Meta Platforms, and politicians, including influential U.S. Rep. Ro Khanna (D-Fremont), say the bill would stifle innovation. Some critics say it focuses on distant, apocalyptic scenarios, rather than more immediate concerns like privacy and misinformation, though there are other bills that address those issues.

SB 1047 is one of about 50 AI-related bills that have been introduced in the state Legislature as concerns about the technology’s impacts on jobs, misinformation and public safety have grown. As politicians scramble to create new laws to put barriers on the growing industry, some companies and talent are suing AI companies in hopes that courts can establish ground rules.

Wiener, who represents San Francisco — home to AI startups OpenAI and Anthropic — is at the center of the debate.

On Thursday, he made significant changes to his bill that some say weaken the legislation while increasing its chances of passing the Assembly.

The amendments removed the penalty for perjury from the bill and changed the legal standard for developers regarding the security of their advanced AI models.

Additionally, the proposed creation of a new government entity, which would have been called the Frontier Model Division, is no longer under consideration. Under the original text, the bill would have required developers to submit their security measures to the newly created division. Under the new version, developers would submit those security measures to the attorney general.

“I think some of these changes could increase its chances of adoption,” said Christian Grose, a professor of political science and public policy at USC.

Some in the tech industry support the bill, including the Center for AI Safety and Geoffrey Hinton, considered the “godfather of AI.” Others, however, worry it could hurt a booming industry in California.

Eight California members of the U.S. House of Representatives — Khanna, Zoe Lofgren (D-San Jose), Anna G. Eshoo (D-Menlo Park), Scott Peters (D-San Diego), Tony Cárdenas (D-Pacoima), Ami Bera (D-Elk Grove), Nanette Diaz Barragán (D-San Pedro) and Lou Correa (D-Santa Ana) — wrote a letter to Newsom on Thursday urging him to veto the bill if it passes the state Assembly.

“There’s definitely a cross-pressure in San Francisco between the experts in this field, who have told [Wiener] and others in California that AI can be dangerous if we don’t regulate it, and those whose salaries, whose cutting-edge research, comes from AI,” Grose said. “This could be a real flashpoint for him, both for and against, for his career.”

Some tech giants say they are open to regulation but disagree with Wiener’s approach.

“We agree with how (Wiener) describes the bill and the goals that it has, but we remain concerned about the impact of the bill on AI innovation, particularly in California, and particularly on open source innovation,” Meta State Policy Director Kevin McKinley said in a meeting with LA Times editorial board members last week.

Meta is one of the companies offering an open-source collection of AI models, called Llama, which developers can build on for their own products. Meta released Llama 3 in April, and the tech giant says the model has already been downloaded 20 million times.

Meta declined to discuss the new amendments. Last week, McKinley said SB 1047 was “actually a very difficult bill to amend and fix.”

A Newsom spokesperson said his office does not typically comment on pending bills.

“The governor will evaluate this bill on its merits if it reaches his desk,” spokesperson Izzy Gardon wrote in an email.

Anthropic, a San Francisco startup known for its AI assistant Claude, has indicated it might support the bill if it were amended. In a July 23 letter to Assemblymember Buffy Wicks (D-Oakland), Hank Dempsey, Anthropic’s head of state and local policy, proposed changes, including shifting the bill’s focus to holding companies accountable for causing disasters rather than enforcing safety requirements before harm occurs.

Wiener said the amendments addressed Anthropic’s concerns.

“We can advance both innovation and safety,” Wiener said in a statement. “The two are not mutually exclusive.”

It is not yet clear whether these amendments will change Anthropic’s position on the bill. On Thursday, Anthropic said in a statement that it would review the new “text of the bill as soon as it becomes available.”

Russell Wald, deputy director of Stanford University’s HAI, which aims to advance AI research and policy, said he still opposes the bill.

“The recent amendments appear to be more about appearance than substance,” Wald said in a statement. “They make the bill seem less controversial by appeasing a few large AI companies, but do little to address the real concerns of academic institutions and open-source communities.”

It’s a delicate balance for lawmakers trying to weigh concerns about AI while supporting the state’s tech sector.

“What many of us are trying to do is find a regulatory environment that allows some of these safety barriers to exist without stifling the innovation and economic growth that comes with AI,” Wicks said after Thursday’s committee meeting.

Times staff writer Anabel Sosa contributed to this report.