California Governor Gavin Newsom has signed into law a groundbreaking measure aimed at regulating advanced artificial intelligence models. The new legislation, Senate Bill 53 (SB 53), requires major AI developers—including OpenAI, Google, Meta, Nvidia, and Anthropic—to publicly disclose how they plan to mitigate catastrophic risks from their cutting-edge AI systems.
Newsom emphasized that California, home to the nation’s largest cluster of AI companies, is taking the lead in setting rules for a technology that will shape the economy and society. He stated the law balances innovation with public safety, positioning the state as a model for potential federal legislation.
The law applies to companies with over $500 million in annual revenue. These firms must conduct assessments on risks such as AI models escaping human control or being misused to develop bioweapons. They are required to release those assessments to the public, and violations may result in fines of up to $1 million.
This marks a shift from California’s earlier attempt at AI regulation, which Newsom vetoed in 2024 amid industry backlash. That proposal demanded costly third-party audits and imposed heavy financial penalties. SB 53, however, adopts a more balanced framework, which Jack Clark, co-founder of Anthropic, praised as promoting both safety and innovation.
Still, some in the tech sector are concerned. Critics, including Andreessen Horowitz’s head of government affairs, warned that state-led laws risk creating a fragmented regulatory environment across the U.S. With similar AI laws passed in Colorado and New York, industry leaders fear compliance challenges for startups and smaller firms.
Federal lawmakers are now under pressure to act. Representative Ted Lieu highlighted the need for national standards, asking whether Americans prefer “17 states” regulating AI independently or a unified approach from Congress. Meanwhile, Representative Jay Obernolte is drafting federal AI legislation that could preempt state-level rules.
The California law is widely seen as a catalyst that may accelerate Washington’s efforts to establish a nationwide AI regulatory framework.

