AI Innovation Meets Regulation: Inside California’s Artificial Intelligence Act

Continuing California’s long history of proactive technology legislation

Willy Leichter

September 4, 2024

As part of an ongoing series, we are looking at emerging global regulations around artificial intelligence that will likely have a significant effect on AI development, governance, compliance, and innovation. California has a long history of proactively enacting technology legislation: in 2002, for example, the state passed SB 1386, one of the first data breach notification laws, which was widely copied by most US states and many foreign governments. Given California’s size (the fifth-largest economy in the world) and its continuing status as a nexus for tech innovation, these regulatory initiatives are worth monitoring closely. What follows is a summary of the California AI Act.

Formally known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), the California AI Act is groundbreaking legislation aimed at regulating the development of powerful AI models, particularly those with the potential to cause significant societal harm. Introduced in 2024, the bill targets the largest and most advanced AI models—referred to as "covered AI models"—which require massive computational resources to train, typically exceeding 10^26 integer or floating-point operations or costing over $100 million to develop.
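To make the covered-model definition concrete, here is a minimal sketch of the threshold test as summarized above. The function and variable names are hypothetical, and the "or" between the compute and cost thresholds follows this post’s summary rather than the statutory text.

```python
# Illustrative sketch only -- thresholds as summarized in this post, not statutory language.
COMPUTE_THRESHOLD_OPS = 1e26        # integer or floating-point operations used in training
COST_THRESHOLD_USD = 100_000_000    # approximate training cost in US dollars

def is_covered_model(training_ops: float, training_cost_usd: float) -> bool:
    """Return True if a model exceeds either threshold described above (hypothetical check)."""
    return training_ops > COMPUTE_THRESHOLD_OPS or training_cost_usd > COST_THRESHOLD_USD

# Example: a model trained with 3e26 operations at a cost of $150 million would be covered.
print(is_covered_model(3e26, 150_000_000))  # True
```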

Key Provisions

  1. Scope and Definition: The Act applies primarily to AI models with significant computational requirements, focusing on frontier models capable of influencing real-world environments or making autonomous decisions. These include models whose computational needs exceed set thresholds or those that match the performance of cutting-edge foundation models.
  2. Pre-training Safety Assessments: Developers of covered AI models are required to conduct thorough safety assessments before model training. These assessments must ensure that models do not present an "unreasonable risk" of enabling catastrophic harms, such as the creation of chemical or biological weapons, cyberattacks on critical infrastructure, or incidents leading to mass casualties or property damage (Wikipedia; DLA Piper).
  3. Third-party Testing and Shutdown Capabilities: Developers must establish robust safety protocols, including third-party testing to verify that the AI model does not pose critical risks. Additionally, they must implement the capability to promptly shut down models that have not passed safety checks. This ensures that dangerous or unvetted AI systems can be disabled before causing harm (DLA Piper).
  4. Incident Reporting and Whistleblower Protections: The Act mandates that developers report any AI-related safety incidents to the newly established Frontier Model Division within 72 hours. This reporting mechanism is intended to alert regulators to any misuse, accidental release, or failures in the safety controls of these models. Additionally, whistleblower protections are in place to ensure that individuals within AI development companies can report violations without fear of retaliation (DLA Piper; ASIS International). A minimal sketch of the 72-hour reporting window appears after this list.
  5. Annual Compliance and Third-party Audits: AI developers are required to certify compliance annually, with senior officers attesting that their models adhere to safety standards. Beginning in 2028, third-party auditors must verify compliance with the Act’s stringent requirements (ASIS International).
  6. Customer Verification for Computing Resources: The Act also imposes obligations on organizations operating computing clusters used for AI training. These organizations must verify the identity and purpose of customers seeking to use substantial computational resources, akin to Know Your Customer (KYC) practices in financial sectors (ASIS International).
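As a small illustration of the 72-hour reporting window described in item 4, the sketch below computes the latest permissible reporting time for a hypothetical incident. The 72-hour figure comes from the summary above; the function and variable names are purely illustrative.

```python
# Minimal sketch of the 72-hour incident reporting window -- names are illustrative.
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

def reporting_deadline(incident_time: datetime) -> datetime:
    """Latest time by which a safety incident would need to be reported."""
    return incident_time + REPORTING_WINDOW

incident = datetime(2024, 9, 4, 9, 0, tzinfo=timezone.utc)
print(reporting_deadline(incident))  # 2024-09-07 09:00:00+00:00
```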

Penalties and Enforcement

Non-compliance with the Act could result in significant penalties, including injunctions, fines, and potential legal action by the state’s attorney general. The Act allows recovery of up to 10% of the computing cost used to train the AI model for a violation, escalating to 30% for repeat offenses. It also voids any contract provisions that attempt to shift liability away from developers (ASIS International).
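As rough arithmetic on those penalty ceilings, the sketch below computes the maximum recoverable amount for a given training compute cost. The 10% and 30% rates come from the summary above; the function name and structure are illustrative, not drawn from the statute.

```python
# Rough sketch of the penalty ceilings described above -- illustrative only.
def max_penalty(training_compute_cost_usd: float, repeat_offense: bool = False) -> float:
    """Upper bound on recoverable penalties tied to training compute cost."""
    rate = 0.30 if repeat_offense else 0.10
    return rate * training_compute_cost_usd

# Example: for a model whose training compute cost $120 million,
# a first violation caps at $12 million and a repeat violation at $36 million.
print(max_penalty(120_000_000))        # 12000000.0
print(max_penalty(120_000_000, True))  # 36000000.0
```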

Impact and Controversy

The bill has garnered bipartisan support in California's legislature but has also faced strong opposition from industry leaders and AI developers. Companies such as Meta and OpenAI have raised concerns that the legislation may stifle innovation and overburden developers with regulatory requirements. Critics argue that the bill's stringent focus on large-scale models may deter open-source AI development and raise barriers for smaller companies in the AI space. However, supporters, including some lawmakers and tech safety advocates, argue that such regulations are necessary to prevent catastrophic AI-related risks (Wikipedia; ASIS International).

Conclusion

California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act represents one of the most ambitious attempts at regulating the rapidly advancing field of AI. By focusing on the most powerful models and imposing stringent safety, security, and compliance measures, the legislation aims to mitigate the risks associated with AI while promoting responsible innovation. While the Act has sparked debate, it sets a precedent for future AI regulations both in the U.S. and globally (DLA Piper; ASIS International).