
California Senate Bill 1047: A Comprehensive Overview

What are the consequences of introducing California Senate Bill 1047 to the existing law?

Dorota Jasińska

Content Specialist

With the rising use of AI in technology and everyday life, the question of regulation comes up regularly. In California, Senate Bill 1047 passed the state Senate in May and is scheduled for a vote in the California Assembly in August; if it passes, it may become part of California law. This article looks at the content of the bill, its implications, and its reception in the AI community.

Senate Bill 1047 – Key Takeaways

Senate Bill 1047 was introduced by Senator Scott Wiener and co-authored by Senators Richard Roth, Susan Rubio, and Henry Stern. Also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, it was created to ensure the safety and security of large AI model development. The new regulations aim to reduce the chances of AI misuse.

What does Senate Bill 1047 include?

The Act is still a work in progress, but several key points are worth noting at this stage. These are the bill’s major requirements.

Covered models

Senate Bill 1047 sets out several requirements that developers must address. The Act does not cover all AI systems: “covered models” are those trained with a quantity of computing power greater than 10^26 integer or floating-point operations, along with models whose performance is comparable to a state-of-the-art foundation model. The bill applies restrictions to models trained above these thresholds and holds developers liable for the downstream use or modification of their models.
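As an illustration, the compute side of the “covered model” test can be sketched as a simple threshold check. This is a minimal sketch with assumed names (`COMPUTE_THRESHOLD_OPS`, `exceeds_compute_threshold` are illustrative, not from the bill); note that the second criterion, comparability to a state-of-the-art foundation model, is a capability judgment that cannot be reduced to a numeric check like this.

```python
# Compute threshold described in the bill: more than 10^26 integer or
# floating-point operations used in training. Name and structure here
# are illustrative assumptions, not statutory text.
COMPUTE_THRESHOLD_OPS = 1e26


def exceeds_compute_threshold(training_ops: float) -> bool:
    """Return True if training compute alone would put a model over the
    bill's threshold. Illustrative only: the bill also covers models
    comparable to state-of-the-art foundation models, which this
    function cannot capture."""
    return training_ops > COMPUTE_THRESHOLD_OPS


print(exceeds_compute_threshold(3e26))  # True: above the threshold
print(exceeds_compute_threshold(5e24))  # False: well below it
```

The check is deliberately strict (“greater than”), so a model trained with exactly 10^26 operations would not trip it.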

Safety assessment

Developers would also be required to assess a model’s safety. AI companies and specialists would have to certify, before training begins, that the model has safeguards against hazardous capabilities and other misuse, and the certification would have to be renewed every year.

Safety protocols

The bill includes safety protocols meant to prevent harm from AI models and ensure public safety. It mandates a risk assessment process aimed at identifying potential safety vulnerabilities or biases. Developers would also be required to make a full shutdown of a model possible if necessary, and to implement reporting procedures for artificial intelligence safety incidents.


Penalties

Companies or developers that breach any requirement of the Senate Bill would be held accountable for the outcomes of the technology they work on. Penalties range from 10% to 30% of the cost of training the model, depending on how many times the regulation has been violated: the first violation would cost 10%, and the penalty would rise to 30% for each subsequent one.
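The tiered penalty described above amounts to simple arithmetic, sketched below in Python. The function name and structure are assumptions for illustration; the 10%/30% rates are those described in this article, not a precise reading of the statute.

```python
def sb1047_penalty(training_cost: float, violation_number: int) -> float:
    """Estimate a penalty under the tiered scheme described in the
    article: 10% of training cost for the first violation, 30% for
    each subsequent one. Illustrative sketch only."""
    if violation_number < 1:
        raise ValueError("violation_number must be >= 1")
    rate = 0.10 if violation_number == 1 else 0.30
    return training_cost * rate


# Example: a model that cost $100M to train
print(sb1047_penalty(100_000_000, 1))  # first violation:      10000000.0
print(sb1047_penalty(100_000_000, 2))  # subsequent violation: 30000000.0
```

The jump from 10% to 30% on the second violation is what makes the scheme so consequential for smaller companies, as discussed below.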

Controversy around Senate Bill 1047

The greatest objections to the bill concern its vague definitions, its stiff legal liability, and the economic risk it creates for AI developers. For now, the restrictions would apply only in California, but more states may follow, as the need for AI regulation remains an open question.

In general, the bill establishes mandatory reporting for models trained above specific computing-power and cost thresholds, which opponents argue are not defined in enough detail. Such changes to the law may significantly limit AI model innovation. The penalties, which can reach 10–30% of training cost, would also put startups and small businesses at a systemic disadvantage, since a fee of that size could be devastating for a small company. Opponents claim that SB 1047 would disincentivize AI research in the US.

The legislation targets the model layer, which does not guarantee that malicious uses or applications will be limited, and critics argue it hinders innovation in AI. The bill’s impact is widely debated: on the one hand, supporters see it as a step toward safe AI development; on the other, there is a risk it will visibly burden small businesses with compliance costs and stifle innovation.

Possible risks of Senate Bill 1047

Due to the vague nature of the Act’s requirements, its consequences carry risks of their own. Forbes analyzed the AI risks that may arise from implementing the Senate Bill. The author notes that safety assessments do not guarantee that AI-related malicious uses will be predicted or prevented: AI systems may behave in unexpected ways that even the most rigorous law cannot address.

The full-shutdown requirement and incident reporting are other examples. A complete shutdown capability may itself become a target for attackers, creating new risks and vulnerabilities rather than removing them. Reporting measures, too, can fail, and thereby contribute to decreased safety of the model.

Moreover, the Act does not address bad actors, especially those outside California’s jurisdiction. The Act would be unenforceable in other states or countries, so bad actors could simply ignore its restrictions and continue to use AI for malicious purposes.


The government’s effort to protect people against possible AI misuse is a step toward regulating this technology to ensure safety and security. However, Senate Bill 1047 may not be the solution that works in this case. Legislative changes of this kind should be made with the understanding that AI regulation is an ongoing challenge, and no single Act will cover every issue. Moreover, the vagueness of the law has prompted parts of the community to mobilize against the Senate Bill.
