California’s new AI safety bill, SB 1047, is currently a major topic of discussion in the tech industry. The bill has drawn attention from high-profile figures including Elon Musk, the CEO of Tesla, and Vitalik Buterin, the co-creator of Ethereum. Both have weighed in on the legislation, which seeks to regulate the development of artificial intelligence in the state.
Musk expressed reserved support for the bill, acknowledging that it may prove controversial. He noted that it is a tough call and will likely upset some people, but that, all things considered, California should probably pass the SB 1047 AI safety bill. This is not a new stance for Musk, who has called for the regulation of artificial intelligence for over two decades. He has stressed the need to oversee technology that could pose a risk to the public, much as other potentially dangerous products and technologies are regulated.
Critical Harm in the AI Bill
Vitalik Buterin also offered his insights into the bill’s provisions. He highlighted the introduction of a new category of “critical harm” as one of the most important yet overlooked aspects of the bill. Buterin said the bill’s distinction between critical harm and other risks matters, given that the term “safety” can otherwise be applied to a very wide range of situations. He also stressed the need to distinguish between different levels of damage in discussions of artificial intelligence risks.
A major point of concern is the bill’s “reasonable care” standard, which some opponents argue is too broad and likely to produce uncertain legal outcomes. Buterin acknowledged that the standard is not precisely defined, but stressed that this does not translate into unlimited liability. He noted that the bill does not hold companies liable without limit for the actions of downstream users of their AI models, which he considers an important safeguard.
AI Bill Standards and Safety
Among the concerns raised is whether small companies developing artificial intelligence can meet the standards set in the bill. Buterin argued, however, that the entities running expensive AI training processes are not actually small, and that larger players can comply with the bill’s requirements while still working with open-weight models.
Concerns that the bill could restrict open-weight models, which are AI models released publicly for others to build on, have also been somewhat assuaged: earlier drafts of the bill were stricter, but those measures have since been softened. Buterin said he hopes the bill will focus on safety testing, particularly for AI models that could cause serious harm in the world.
As California lawmakers continue to debate SB 1047, the voices of tech figures such as Musk and Buterin will inevitably shape the future of AI regulation. The outcome could have significant ramifications for artificial intelligence and its oversight, both within the state and internationally.