California’s SB 53, a groundbreaking AI safety and transparency bill, demonstrates that state-level regulation can coexist with technological progress. As Adam Billen, vice president of public policy at Encode AI, highlights, the legislation requires large AI labs to disclose their safety and security protocols, aimed at ensuring that AI models do not pose catastrophic risks, such as enabling cyberattacks or bioweapon creation.
The bill also requires companies to adhere to these protocols, backed by enforcement from the Office of Emergency Services. According to Billen, many companies already conduct safety testing and release model cards. However, regulatory measures like SB 53 are crucial to prevent firms from compromising safety under competitive pressures.
Although public opposition has been muted, resistance persists within Silicon Valley, where some argue that AI regulation hinders the U.S. in its competitive race against China. This has fueled efforts to influence AI policy, including funding super PACs and pushing for federal preemption of state laws.
Senator Ted Cruz has introduced a proposal that would allow AI companies to temporarily bypass certain federal regulations, and there are discussions of a federal AI standard that could override state laws. Billen warns against such federal interventions, advocating instead for state bills that address deepfakes, transparency, algorithmic discrimination, and children’s safety.
While the AI race with China remains a priority, Billen suggests that policies such as export controls are more effective at maintaining U.S. competitiveness than preempting state safety laws. Initiatives such as the Chip Security Act aim to regulate the export of advanced AI chips to China, though some companies, including Nvidia, have raised concerns about competitiveness and security.
SB 53 exemplifies a collaborative approach between industry and policymakers, showcasing democracy in action. As Billen notes, it is essential to continue this process of federalism to balance innovation with necessary safeguards, ensuring a secure technological future.