
Introduction

SAN FRANCISCO — Republican delegates gathered in Milwaukee this week have committed to reversing federal restrictions on artificial intelligence (AI), while allies of former President Donald Trump are planning initiatives to expand the military’s AI capabilities.

California’s Legislative Response

In contrast, California’s Democratic-controlled legislature is considering a proposal that would require major tech companies to assess their AI for potential “catastrophic” risks before public release.

Scott Wiener, a Democratic state senator from San Francisco, authored the measure, which has faced backlash from tech industry leaders who argue it could deter innovation and create unnecessary bureaucratic hurdles.

Concerns and Opposition

Critics of the bill have suggested it could expose developers to severe penalties if their technology is misused, a claim Wiener has strongly disputed.

After the bill’s approval by a California Senate committee, Google’s head of AI policy, Alice Friend, expressed concerns in a letter to the chairman, stating that the bill’s requirements are “not technically feasible” and could unjustly penalize responsible developers.

The Urgency of Action

Wiener argues that the legislation is crucial to mitigating extreme AI risks and building public trust in the technology, especially in light of Republican efforts to dismantle President Biden’s 2023 executive order, which requires AI companies to share safety-testing information with the government.

Shifting the Regulatory Landscape

The bill positions Sacramento as a focal point in the debate over AI regulation, highlighting the limits of Silicon Valley’s enthusiasm for government oversight, even as leaders like OpenAI’s CEO, Sam Altman, advocate for regulatory measures.

By transforming voluntary commitments into mandatory requirements, Wiener’s proposal has sparked significant pushback from the tech sector. Nicol Turner Lee of the Brookings Institution noted that backlash while emphasizing the need for greater accountability from Big Tech.

Industry Reactions

Dylan Hoffman, TechNet’s executive director for California, remarked on the significance of the letters from major companies, indicating a shift in their approach to the legislation.

Despite the controversy, companies like Google, OpenAI, and Meta have refrained from commenting publicly, while Microsoft has stated its preference for federal regulation over state-level initiatives.

California’s Role in Tech Legislation

California has long been recognized as a leader in tech legislation, having enacted the nation’s most comprehensive digital privacy law in 2018. The state’s Department of Motor Vehicles also regulates autonomous vehicles.

Future of AI Regulation

With over 450 AI-related bills introduced across the country this year, California is at the forefront, with more than 45 bills pending, though many have stalled.

Wiener’s bill stands out as the most contentious, requiring companies whose AI models are trained with significant computing power to evaluate those models for risks related to chemical or biological weapon development, hacking, and power grid disruptions. Companies would need to submit safety reports to a new government office, the Frontier Model Division (FMD), which could adjust the scope of the law.

Addressing Safety and Security

The legislation would also mandate the creation of a cloud computing resource for researchers and startups, reducing their reliance on expensive services from major tech firms.

Dan Hendrycks, founder of the Center for AI Safety, has been involved in the bill’s development, advocating for the recognition of AI’s potential dangers, akin to nuclear threats.

Criticism and Skepticism

However, some experts argue that the risks associated with AI are overstated and that there is currently no standardized method to assess them.

Oren Etzioni, an AI researcher, criticized the bill’s use of computing power as a proxy for risk, suggesting that smaller but more dangerous models could be overlooked.

Broader Implications

The bill’s emphasis on catastrophic risks has drawn criticism from AI researchers who believe that more immediate issues, such as bias in AI systems and data privacy, require attention.

Meta’s AI head, Yann LeCun, even labeled Hendrycks an “apocalyptic cult guru” for his views.

Conclusion

As the debate continues, it is clear that the path forward for AI regulation will be complex, with various stakeholders advocating for different approaches to ensure safety and innovation in this rapidly evolving field.