California Gov. Gavin Newsom Signs Bill Creating AI Safety Measures

Posted on September 30, 2025 By CWS

California Gov. Gavin Newsom on Monday signed a law that aims to prevent people from using powerful artificial intelligence models for potentially catastrophic activities such as building a bioweapon or shutting down a bank's systems.

The move comes as Newsom touted California as a leader in AI regulation and criticized inaction at the federal level in a recent conversation with former President Bill Clinton. The new law will establish some of the first-in-the-nation regulations on large-scale AI models without hurting the state’s homegrown industry, Newsom said. Many of the world’s top AI companies are located in California and will have to follow the requirements.

“California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance,” Newsom said in a statement.

The legislation requires AI companies to implement and publicly disclose safety protocols to prevent their most advanced models from being used to cause major harm. The rules are designed to cover AI systems if they meet a “frontier” threshold indicating that they run on a huge amount of computing power.

Such thresholds are based on how many calculations the computers are performing. Those who crafted the rules have acknowledged that the numerical thresholds are an imperfect starting point for distinguishing today’s highest-performing generative AI systems from the next generation, which could be even more powerful. The current systems are largely made by California-based companies like Anthropic, Google, Meta Platforms and OpenAI.

The legislation defines a catastrophic risk as something that could cause at least $1 billion in damage or more than 50 injuries or deaths. It is designed to guard against AI being used for activities that could cause mass disruption, such as hacking into a power grid.

Companies also must report any critical safety incidents to the state within 15 days. The law creates whistleblower protections for AI workers and establishes a public cloud for researchers. It includes a fine of $1 million per violation.

The law drew opposition from some tech companies, which argued that AI regulation should be handled at the federal level. But Anthropic said the rules are “practical safeguards” that formalize the safety practices many companies are already following voluntarily.

“While federal standards remain essential to avoid a patchwork of state regulations, California has created a strong framework that balances public safety with continued innovation,” Jack Clark, co-founder and head of policy at Anthropic, said in a statement.

The signing comes after Newsom last year vetoed a broader version of the legislation, siding with tech companies that said the requirements were too rigid and would have hampered innovation. Newsom instead asked a group of industry experts, including AI pioneer Fei-Fei Li, to develop recommendations on guardrails around powerful AI models.

The new law incorporates recommendations and feedback from Newsom’s group of AI experts and from the industry, supporters said. The legislation also does not place the same level of reporting requirements on startups, in order to avoid hurting innovation, said state Sen. Scott Wiener of San Francisco, the bill’s author.

“With this law, California is stepping up, once again, as a global leader on both technology innovation and safety,” Wiener said in a statement.

Newsom’s decision comes as President Donald Trump in July announced a plan to eliminate what his administration sees as “onerous” regulations in order to speed up AI innovation and cement the U.S.’ position as the global AI leader. Republicans in Congress earlier this year unsuccessfully tried to bar states and localities from regulating AI for a decade.

Without stronger federal regulations, states across the country have spent the past few years trying to rein in the technology, tackling everything from deepfakes in elections to AI “therapy.” In California, the Legislature this year passed a number of bills to address safety concerns around AI chatbots for children and the use of AI in the workplace.

California has also been an early adopter of AI technologies. The state has deployed generative AI tools to spot wildfires and to address highway congestion and road safety, among other things.

