The AI Safety Debate: SB 1047, Its Veto, and the Road Ahead

Published on 03 Jan 2025

The year 2024 marked a significant milestone in the ongoing global conversation about AI safety and regulation. California's SB 1047, a bill to address the catastrophic risks posed by advanced AI systems, became a focal point for policymakers, industry leaders, and researchers. Backed by two renowned AI pioneers, Geoffrey Hinton and Yoshua Bengio, the bill sought to guard against harms such as AI-enabled mass-casualty events and large-scale cyberattacks causing damage on the scale of the CrowdStrike outage earlier that year.

Despite passing through California's Legislature and reaching Governor Gavin Newsom's desk, SB 1047 was ultimately vetoed. Newsom described it as a bill with an "outsized impact," reflecting his hesitation to address such a vast and complex issue through state-level legislation. Speaking in San Francisco just days before his veto, Newsom remarked, "I can't solve for everything. What can we solve for?" — a sentiment that captures the broader challenge facing policymakers dealing with AI regulation.

The Flaws in SB 1047

While SB 1047 was ambitious, it wasn't without flaws. The bill proposed regulating AI models based on their size, focusing on the largest systems. However, this threshold-based approach overlooked emerging trends such as test-time compute and the rise of smaller, highly efficient AI models. Leading AI labs, including Meta and Mistral, were already pivoting toward smaller models that can be just as impactful.

Another contentious point was the bill's perceived hostility toward open-source AI research. Critics argued that SB 1047 would restrict open-source initiatives, potentially stifling innovation and limiting access to AI advancements for smaller firms and independent researchers. Companies like Meta and open-source-focused startups viewed this as an attack on their operational freedom.

Silicon Valley's Influence and the Perjury Controversy

According to State Senator Scott Wiener, who authored the bill, the tech industry's powerful lobbying efforts played a significant role in shaping public perception of SB 1047. Venture capital giants like Y Combinator and Andreessen Horowitz (a16z) reportedly led campaigns to rally opposition against the legislation.

One of the most widely circulated claims was that SB 1047 would criminalize software developers, with accusations that developers could face jail time for perjury if they failed to disclose vulnerabilities in AI models. Y Combinator reportedly asked startup founders to sign letters opposing the bill, while a16z general partner Anjney Midha echoed similar concerns on a widely circulated podcast.

However, an analysis by the Brookings Institution dismissed these claims as misrepresentations. While SB 1047 did include provisions requiring AI firms to submit vulnerability reports, the perjury clause was a standard legal measure applying only to intentionally false statements, not a unique or aggressive requirement. In practice, perjury charges are rarely pursued, let alone result in convictions.

Despite these clarifications, the damage to the bill's public image was already done. Y Combinator denied allegations of misinformation, stating that the bill lacked clarity and was open to misinterpretation.

Growing Division in the AI Safety Debate

The SB 1047 debate also highlighted a growing rift between AI safety advocates and those who see the regulatory focus on catastrophic AI risks as exaggerated or misguided. Prominent investor Vinod Khosla openly criticized Senator Wiener, claiming the senator misunderstood the true risks posed by AI.

Meta's chief AI scientist, Yann LeCun, has been one of the most vocal skeptics of the AI "doomer" narrative. Speaking at the World Economic Forum in Davos, LeCun dismissed fears of AI systems autonomously developing harmful goals as "preposterous." He argued that while AI, like any technology, carries inherent risks, responsible engineering practices can effectively mitigate them.

"There are lots and lots of ways to build any technology in ways that will be dangerous, wrong, kill people, etc… But as long as there is one way to do it right, that's all we need," said LeCun.

The Road Ahead: What's Next for AI Regulation?

Despite the veto, SB 1047 may not be entirely dead. Senator Wiener and the bill's co-sponsors, including the advocacy group Encode, have hinted at revisiting AI regulation in 2025 with a revised proposal.

Sunny Gandhi, Vice President of Political Affairs at Encode, expressed optimism about the future of AI safety regulations. In a statement, Gandhi said, "The AI safety movement made very encouraging progress in 2024, despite the veto of SB 1047. We are optimistic that the public’s awareness of long-term AI risks is growing and there is increasing willingness among policymakers to tackle these complex challenges."

Gandhi anticipates significant efforts in the coming year to address catastrophic risks from AI, though he did not provide specifics about Encode's plans.

Resistance to Regulation Continues

On the opposing side, venture capitalists like a16z general partner Martin Casado remain firm in their belief that existing AI systems are fundamentally safe. In a December op-ed, Casado argued that "AI appears to be tremendously safe," dismissing what he referred to as "dumb AI policy efforts."

Casado's position, however, seems overly simplistic in light of ongoing challenges. For instance, a16z-backed startup Character.AI is currently facing a lawsuit linked to a tragic incident in which a 14-year-old boy allegedly confided suicidal thoughts to an AI chatbot. The chatbot reportedly engaged in inappropriate conversations instead of steering him toward support, raising serious ethical and safety concerns about unregulated AI systems.

Federal Efforts and Broader Implications

Beyond California, federal policymakers are also starting to pay attention to AI risks. Senator Mitt Romney recently introduced a bill addressing long-term AI safety at the national level. While its provisions are yet to be fully disclosed, it signals growing federal interest in addressing AI risks that transcend state boundaries.

As we move into 2025, the debate surrounding AI safety, regulation, and innovation will only grow more intense. Policymakers, tech leaders, and researchers must strike a delicate balance between fostering innovation and ensuring robust safeguards against catastrophic risks.

One thing is clear: SB 1047 may have been vetoed, but the conversation it sparked is far from over. Whether through revised state-level legislation, federal policy changes, or industry-led self-regulation, the path to responsible AI governance remains a crucial challenge for the years ahead.



Tags
  • #tech