There is no question that future generations of high school students will study the events of July 2024 in History class. Between Joe Biden’s disastrous debate, the attempted assassination of Donald Trump, and Biden’s decision not to seek reelection, the last month has been one of the most turbulent in modern political history.
But now that the dust has somewhat settled and we know who is on the Democratic and Republican tickets in 2024, we can dive deeper into the issues to understand where the candidates stand. Most relevant to this newsletter, the Harris and Trump campaigns have vastly different positions on how to regulate artificial intelligence. When you step into the voting booth this November, it is important to bear these differences in mind.
Vice President Harris has largely been the face of the Biden administration’s approach to generative AI regulation. She has worked closely with the White House’s Office of Science and Technology Policy to draft the Blueprint for an AI Bill of Rights and to draft and execute the Biden administration’s AI Executive Order. She has also met repeatedly with Silicon Valley executives to secure “voluntary commitments” on AI safety and announced the Office of Management and Budget’s guidance on the use of AI within the government.
Throughout all of these initiatives, Harris, as a representative of President Biden’s agenda, has attempted to strike a balance between fostering innovation and protecting the most vulnerable from AI’s potential harms. At a November 2023 summit in London, the Vice President spoke predominantly about mitigating these harms through a human rights lens:
There are additional threats that also demand our action — threats that are currently causing harm and which, to many people, also feel existential.
Consider, for example: When a senior is kicked off his healthcare plan because of a faulty AI algorithm, is that not existential for him?
When a woman is threatened by an abusive partner with explicit, deep-fake photographs, is that not existential for her?
When a young father is wrongfully imprisoned because of biased AI facial recognition, is that not existential for his family?
And when people around the world cannot discern fact from fiction because of a flood of AI-enabled mis- and disinformation, I ask, is that not existential for democracy?
Accordingly, to define AI safety, I offer that we must consider and address the full spectrum of AI risk — threats to humanity as a whole, as well as threats to individuals, communities, to our institutions, and to our most vulnerable populations.
Conversely, although former President Trump has not spoken publicly about generative AI in any significant way, his platform directly attacks Harris’s and the Biden Administration’s concern for AI’s potential risks. His platform is a conventional GOP mix of anti-regulatory conservatism and resistance to extending protections to disadvantaged communities:
“We will repeal Joe Biden’s dangerous Executive Order that hinders AI Innovation, and imposes Radical Leftwing ideas on the development of this technology. In its place, Republicans support AI Development rooted in Free Speech and Human Flourishing.”
Trump’s nonspecific position on the regulation of generative AI leaves potential gaps for his running mate, Senator JD Vance, to fill. Vance, a former venture capitalist himself, has close relationships with many in the industry, including Peter Thiel, a renowned Silicon Valley investor and GOP donor.
Like Trump, Vance seems to be against a human rights-based AI regulatory regime and against government intervention of most kinds. However, he has broken with his party to support the Biden administration’s desire to break up large tech monopolies and is on the record supporting the open source movement, which believes that large language models (LLMs) should be developed in the open so that all firms have access to them. Vance has also expressed concern regarding Silicon Valley’s left-leaning political bias and the control a few CEOs ostensibly have over AI systems.
With regard to regulation, Vance’s primary worry has been regulatory capture: the fear that regulation will benefit incumbent large firms at the expense of smaller AI startups. In essence, his ideal AI marketplace is in the Jeffersonian mold: many small firms building technology without government interference.
So, if you’re an AI policy-driven voter, the question on the ballot is pretty stark: are you in favor of an administration that is interested in balancing the need for innovation with human rights concerns? Or are you more aligned with an administration whose position, albeit unclear at the moment, is disinclined to protect human rights or build out a regulatory regime, but may be in favor of an open-source, “small firms” approach?
Although I don’t want to tip my hand too much on which approach I would prefer, the consequences of generative AI are too large to ignore. How the US government regulates this technology over the next two to three years will determine the shape of our society for the next one hundred. The intellectually lazy approach of “government is bad” is insufficient when confronting the importance of this narrow window of time.