As I have mentioned before on this Substack, we are in the brief window in history when regulators will be able to shape the future of generative AI — its market, applications, and development — before the technology proliferates into the economy and society at large. Decisions we make today will impact how our society uses generative AI one hundred years from now.
Because technology is by its nature global, regulations passed in one jurisdiction have the potential to ripple across the entire industry. For example, every American technology company is aware of the European Union’s passage of the General Data Protection Regulation (GDPR) in 2016; many, if not most, of them choose to adhere to its data privacy requirements even if they do not operate in Europe. The same will be true of the EU’s AI Act, which came out earlier this year.
Which leads us to the latest fight over mitigating AI’s risks: a bill working its way through the California State Senate, SB 1047. The outcome of this debate will directly impact AI companies operating in California (read: all the big ones). Also notable: the debate is playing out entirely among Democrats, the party that, at the national level, is most closely associated with AI pessimism and with concerns about the safety of AI development.
The bill, the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, seeks to mitigate existential risks posed by generative AI. It mandates that models that cost over a certain financial threshold to develop and/or use over a certain amount of computing power (the bill sets these at roughly $100 million in training costs and 10^26 operations of computing power) be subject to reporting requirements and independent auditing, adopt a “kill switch” that can shut the models off, and provide whistleblower protections. Perhaps most importantly, the bill proposes a regulatory regime in which the initial developers of a model would be held liable for harms caused by the models they build, even if a model is modified as part of open-source development.
Proponents of the law, including its progressive sponsor, State Sen. Scott Wiener, and the conservative billionaire Elon Musk, argue that it forces AI companies to consider safety when developing large language models and to mitigate the technology’s worst potential harms. They believe the kill-switch provision will ensure that we can “turn the machine off” if humans lose control, and that documentation requirements and audits will make the development of generative AI models more transparent. Holding developers liable for the harms their models cause will, they argue, force them to put safety first when building new AI technology.
Opponents, including the AI industry and many prominent members of the California Congressional delegation (Nancy Pelosi among them), argue that the bill hurts American competitiveness. They say that the reporting requirements are too onerous for small firms, that holding the initial AI developers liable for downstream harms will kill the open-source movement, and that the law is designed to prevent unlikely forms of harm (robots taking over the world) rather than harms we can already imagine today, such as algorithmic discrimination and deepfakes. They also argue that the federal government has not yet regulated generative AI, which leaves unclear many of the bill’s requirements around red-teaming and adherence to various federal guidelines (e.g., the NIST standards from the Department of Commerce).
Beyond outlining the two sides and bringing the bill to your attention, I want to make three points about this debate.
The first is that this is the type of healthy, honest debate we should be having about AI: a debate about what kinds of regulation to pursue rather than about whether there should be any regulation at all. As I outlined in my last post, the stakes of generative AI are too high for the unserious position of those who are simply against government regulation in every case. Regardless of which side you identify with more, both sides of this debate are acting in good faith and making reasoned arguments for why their approach better serves American competitiveness and protects society.
The second is that although I sympathize with some of the critiques of SB 1047, whatever regulations come from Washington or Sacramento will be criticized for undermining competitiveness and overstating risk. It is in the interest of AI companies and their lobbyists to make that argument to prevent any regulation at all. Smart regulation, however, should attempt to balance the promise and peril of generative AI appropriately. The alternative to no regulation isn’t too much regulation; it’s the right amount of regulation.
The third is that no regulation is a 100% solution from the get-go. Regulators will inevitably overestimate some risks and underestimate others, especially for a technology that is advancing so rapidly and especially in a democracy that requires compromise. When it comes to AI regulation, the perfect may be the enemy of the good.
If SB 1047 makes it through the Democratic-controlled California Legislature, I’m curious to see whether Governor Gavin Newsom, another Democrat, signs it into law. If he does, it will be the first serious American attempt to regulate the most revolutionary technology of my lifetime.