Dorota Jasińska
Marcin Dobosz
With the rising use of AI in technology and everyday life, the question of regulation has come up repeatedly. In California, Senate Bill 1047 passed the State Senate in May and is scheduled for a vote in the California Assembly in August. The bill focuses on regulating frontier artificial intelligence models, aiming to ensure safety and security in the development and deployment of advanced AI systems. This article covers the content of the bill, its implications, and its reception by the community. What would introducing California Senate Bill 1047 mean for existing law?
Senate Bill 1047 – Key Takeaways
Senate Bill 1047 was introduced by Senator Scott Wiener and co-authored by Senators Richard Roth, Susan Rubio, and Henry Stern. Also known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, it was created to ensure the safety and security of AI model development. The new regulations are intended to reduce the chances of AI misuse.
What does Senate Bill 1047 include?
The Act is still in the works, but there are some key points worth mentioning at this moment. These are the major requirements of the Senate Bill.
Covered models
Senate Bill 1047 includes a few points that need to be addressed by developers. The Act doesn't cover all AI systems: "covered models" are those trained using a quantity of computing power greater than 10^26 integer or floating-point operations, as well as models with performance similar to a state-of-the-art foundation model. The bill applies restrictions to models trained above these thresholds and holds developers liable for the downstream use or modification of their models.
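To make the threshold concrete, the sketch below estimates a training run's total operations and compares it against the bill's 10^26 figure. The 6 × parameters × tokens estimate is a common heuristic from the scaling-law literature, not something the bill itself specifies, and the model sizes used are purely illustrative.

```python
# Hypothetical sketch: checking a training run against SB 1047's
# 10^26-operation threshold. The "6 FLOPs per parameter per token"
# estimate is a scaling-law heuristic, NOT part of the bill's text.

COVERED_MODEL_THRESHOLD = 10**26  # integer or floating-point operations

def estimated_training_ops(num_parameters: float, num_tokens: float) -> float:
    """Approximate total training compute as ~6 ops per parameter per token."""
    return 6 * num_parameters * num_tokens

def exceeds_compute_threshold(num_parameters: float, num_tokens: float) -> bool:
    """True if the estimated training compute exceeds the bill's threshold."""
    return estimated_training_ops(num_parameters, num_tokens) > COVERED_MODEL_THRESHOLD

# Illustration: a 70B-parameter model trained on 15 trillion tokens
# comes out around 6.3e24 operations, well below the 1e26 threshold.
print(exceeds_compute_threshold(70e9, 15e12))
```

Note that compute is only one prong of the definition; a model below the threshold could still be covered if its performance matches a state-of-the-art foundation model.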
Safety assessment
Developers would also be required to assess a model's safety. AI companies and specialists would have to certify, before training begins, that the model has safeguards against hazardous capabilities and other misuse. The certification would then have to be repeated every year.
Safety protocols
The bill includes safety protocols to prevent harm from AI models and protect public safety. It mandates a risk assessment process to identify potential safety vulnerabilities or biases, requires developers to retain the ability to perform a full shutdown of a model if necessary, and obliges them to report AI safety incidents.
Accountability
Companies or developers that breach any of the Senate Bill's requirements would be held accountable for the outcomes of the technology they work on. Penalties would reach 10-30% of the model's training cost, depending on how many times the regulation was violated: 10% for a first violation, rising to 30% for each subsequent one.
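The penalty schedule above can be sketched as simple arithmetic. The 10% and 30% rates come from the bill as described here; the $100M training cost is a hypothetical figure chosen only for illustration.

```python
# Illustrative penalty calculation for the schedule described above:
# 10% of training cost for a first violation, 30% for each subsequent
# one. The training-cost figure below is hypothetical.

def penalty(training_cost: float, violation_number: int) -> float:
    """Return the penalty for the nth violation (1-indexed)."""
    rate = 0.10 if violation_number == 1 else 0.30
    return training_cost * rate

training_cost = 100_000_000  # hypothetical $100M training run

print(penalty(training_cost, 1))  # 10% of training cost
print(penalty(training_cost, 2))  # 30% of training cost
```

For a training run at that hypothetical scale, a first violation would cost on the order of $10M and each repeat $30M, which is the magnitude critics point to when arguing the bill disadvantages smaller companies.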
Controversy around Senate Bill 1047
The greatest problems with the bill appear to be its vague definitions, its strict legal liability, and the economic risk it creates for AI developers. For now, the limitations would affect only California, but other states may follow, as the need for AI regulation remains an open question.
In general, the bill establishes mandatory reporting for models trained above specific computing-power and cost thresholds, which are not defined in detail. Such changes to the law may significantly limit AI model innovation. The bill would also put startups and small businesses at a systemic disadvantage: penalties of 10-30% of training cost could be a devastating fee for a small company. Opponents claim that SB 1047 will disincentivize AI research in the US.
The legislation targets the model layer, which doesn't guarantee limiting malicious uses or applications, while still hindering innovation in AI. Its impact is widely debated: on the one hand, supporters call it a step towards safe AI development; on the other, it risks visibly burdening small businesses with compliance costs and stifling innovation.
Possible risks of Senate Bill 1047
Because the Act's requirements are vague, its consequences carry risks of their own. Forbes analyzed the AI risks that may arise from implementing the Senate Bill. The article notes that safety assessments don't guarantee that AI-related malicious uses will be predicted or prevented: AI systems may behave in unexpected ways that even the most rigorous law cannot address.
The full-shutdown and incident-reporting requirements are another example. A mandatory shutdown mechanism may itself become a target for hackers, creating new risks and vulnerabilities rather than removing them. Reporting measures can likewise fail, further reducing the safety of the AI model.
Moreover, the Act doesn't address bad actors, especially those outside California's jurisdiction. The Act would be unenforceable in other states or countries, where bad actors could ignore its restrictions and continue to use AI for malicious purposes.
FAQ
What penalties or sanctions are included in California Senate Bill 1047 for non-compliance?
For now, the exact penalty for non-compliance has not been determined by the Attorney General. However, penalties would increase for repeated offences, ranging from 10% to 30% of the cost of training the AI model. Another possible sanction is the deletion of non-compliant models.
What are the ethical considerations addressed in California Senate Bill 1047?
The Act was created mainly to address safety, accountability, and transparency. It aims to minimize the risk of any threats posed by AI models, which is why developers would be held accountable for the use of their models. The bill also underlines the need for transparency in AI model development.
Conclusion
The government's efforts to protect people against possible AI misuse are a step towards regulating this technology to ensure safety and security. However, Senate Bill 1047 may not be the solution that works here. Legislative changes of this kind should be made with the understanding that AI safety is an ongoing challenge and no single Act will cover every issue. Moreover, the vagueness of the law has turned much of the community against the Senate Bill.