
A controversial bill aiming to protect Californians from potential AI-driven disasters has stirred significant debate within the tech sector. Recently, the legislation passed a crucial committee with amendments designed to make it more acceptable to Silicon Valley.

SB 1047, introduced by state Sen. Scott Wiener (D-San Francisco), is scheduled for a vote in the state Assembly later this month. If it receives approval, Governor Gavin Newsom will face the decision of signing or vetoing this groundbreaking legislation.

Proponents of the bill argue that it will establish necessary safeguards to prevent advanced AI models from causing severe incidents, such as unexpected power grid failures. There are concerns that AI technology is evolving at a pace that outstrips human oversight.

The legislation aims to encourage developers to manage AI responsibly and grants the state attorney general the authority to impose penalties in cases of imminent danger or harm. Furthermore, it mandates that developers must be able to deactivate their AI systems if issues arise.

However, some tech giants, including Meta Platforms, and influential politicians like U.S. Rep. Ro Khanna (D-Fremont), contend that the bill could hinder innovation. Critics argue that it focuses too heavily on distant, apocalyptic scenarios rather than pressing issues like privacy and misinformation, although there are other bills addressing these concerns.

SB 1047 is one of approximately 50 AI-related bills currently under consideration in the California Legislature, reflecting growing worries about the technology’s impact on employment, misinformation, and public safety. As lawmakers strive to create new regulations for the rapidly evolving industry, some companies are resorting to legal action against AI firms in hopes that the courts will establish foundational rules.

Wiener, representing the heart of AI innovation in San Francisco, has found himself at the center of this debate.

On Thursday, he introduced significant amendments to his bill, which some believe may dilute its effectiveness while increasing its chances of passing in the Assembly.

Among the changes, a perjury penalty was removed, and the legal standards for developers regarding the safety of their advanced AI systems were altered. Additionally, the proposal for a new government body, the Frontier Model Division, has been scrapped. Previously, developers would have needed to submit their safety protocols to this new division; now, they will report directly to the attorney general.

“I do think some of those changes might make it more likely to pass,” noted Christian Grose, a political science and public policy professor at USC.

Some in the AI field, including the Center for AI Safety and Geoffrey Hinton, a pioneering AI researcher, support the bill. However, others fear it could negatively impact California’s thriving tech sector.

Eight California members of the U.S. House of Representatives, including Khanna and several other Democrats, sent a letter to Newsom urging him to veto the bill if it passes the Assembly.

“[Wiener] is caught between experts warning of AI’s dangers and those whose livelihoods depend on AI advancements,” Grose remarked. “This could become a pivotal moment for his career.”

While some tech giants express openness to regulation, they disagree with Wiener’s approach. Kevin McKinley, Meta’s state policy manager, stated, “We align with the goals of the bill but are concerned about its impact on AI innovation, especially in California and open-source projects.”

Meta’s Llama, a collection of open-source AI models, has seen significant adoption, with 20 million downloads of Llama 3 since its release in April.

Meta has refrained from commenting on the recent amendments. McKinley previously described SB 1047 as “a challenging bill to amend.”

Newsom’s office typically does not comment on pending legislation, but spokesperson Izzy Gardon indicated, “The Governor will evaluate this bill on its merits should it reach his desk.”

San Francisco-based AI startup Anthropic, known for its AI assistant Claude, has indicated it could support the bill if further amendments are made. In a letter to Assemblymember Buffy Wicks, Anthropic state policy lead Hank Dempsey suggested shifting the bill’s focus to holding companies accountable for causing harm rather than penalizing them before any harm occurs.

Wiener stated that the amendments took Anthropic’s feedback into account, asserting, “We can advance both innovation and safety. The two are not mutually exclusive.”

It remains uncertain whether the amendments will change Anthropic’s position on the bill. On Thursday, the company said it would review the new language once it became available.

Russell Wald, deputy director at Stanford HAI, the university’s Institute for Human-Centered Artificial Intelligence, which promotes AI research and policy, expressed continued opposition to the bill. He remarked, “Recent amendments seem more about optics than substance, appeasing a few leading AI companies while failing to address genuine concerns from academia and open-source communities.”

Lawmakers face the delicate task of balancing AI concerns with the need to support California’s tech sector.

“Our goal is to establish a regulatory framework that allows for necessary safeguards while fostering innovation and economic growth in the AI sector,” Wicks stated after the committee meeting.

Times staff writer Anabel Sosa contributed to this report.
