According to ZDNet, a new California law (SB 53), authored by Democratic state Sen. Scott Wiener, takes effect on Thursday, January 1. It requires companies developing frontier AI models to publish detailed plans for responding to “catastrophic risk” on their websites and to notify the state of any “critical safety incident” within 15 days. The law defines a catastrophic risk as an event in which an AI model kills or injures more than 50 people or causes over $1 billion in damage. Fines for violations can reach $1 million per incident. The legislation also adds new whistleblower protections for employees at these AI companies.
California vs. the Wild West
Here’s the thing: this law is a direct, state-level response to a federal vacuum. The article points out that the Trump administration’s approach has essentially been to let the industry run wild to compete with China. So California, home to most of these tech giants, is stepping in where Washington won’t. It’s a fascinating experiment. Can one state effectively regulate a global technology? The law’s requirements are specific, but the bar is set extraordinarily high: catastrophe means 50-plus deaths or injuries, or more than $1 billion in damage. It almost feels like planning for a sci-fi movie plot, but when figures like Yoshua Bengio are talking about the need for a “kill switch,” maybe that’s the level we need to be thinking at now.
The whistleblower gamble
For me, the whistleblower protections might be the most impactful part. Think about it: AI safety is a black box. We’re relying on companies to tell us when their own creations are dangerous, which is a massive conflict of interest. Giving employees a shield and a reason to speak up, backed by the threat of those million-dollar fines, could actually surface real problems. It internalizes some of the oversight. But it’s a gamble. Will employees actually use it, or will the culture of secrecy in cutting-edge AI labs prevail? The law’s success might hinge less on the published plans and more on whether we start hearing from insiders.
Reaction and what comes next
Unsurprisingly, the industry is already reacting. The same day this article came out, OpenAI announced it’s hiring a “Head of Preparedness” at a salary of around $555,000, and Sam Altman said on X that it’s a “critical role at an important time.” That’s not a coincidence. It’s PR, but it’s also a real operational change, likely spurred by this new regulatory pressure. SB 53 creates a paper trail and formalizes a process that was entirely ad hoc. Other states will be watching closely. If it works without stifling innovation (a big ‘if’), it becomes a blueprint. If it fails, or is simply ignored, it proves the skeptics right: you can’t regulate this genie once it’s out of the bottle. Basically, California just became the world’s most important AI policy lab.
