Generative AI is Out of the Bag
The Lesson from OpenAI: One Company's Governance Structure Cannot Regulate AI
To my frequent readers: apologies for the increasing distance between posts. I have been preparing for my Darden course and dealing with a fast-paced end of year at SmartNews.
All of that said, I couldn’t resist commenting on one of the biggest, if not the biggest, tech ethics stories of the year.
Bottom line: I think the coverage of Sam Altman’s ouster and return at OpenAI has missed the overall narrative.
Although we still don’t know all the details of why Altman was ousted from OpenAI, the reporting is that it had to do with the board’s ethical concerns about how Altman wanted to monetize and democratize generative AI despite certain risks, both short and long term, that AI poses. The Altman drama is being interpreted as a proxy war between AI optimists and pessimists, between capitalism and human rights. Upon Altman’s reinstatement at OpenAI and the reshuffling of its board, we’re seeing headlines like “The OpenAI Drama Has a Clear Winner: The Capitalists” and “The Money Always Wins.”
These headlines miss the point. Free markets “won” the battle over AI long before the OpenAI boardroom drama. They won when the US government decided not to develop generative AI on its own or partner with companies on development.
At the outset of OpenAI, Altman approached the US government seeking a partnership in developing generative AI. Much of the technology and capital underpinning Silicon Valley is due to government-funded innovation. The idea that the government, in a controlled environment, could hire the best and brightest thinkers to build generative AI technology and control its monetization and deployment is not without precedent. Although development would likely have moved at a slower pace and less economic value would have been created from the technology, government-sponsored LLMs could have been, in theory, a stronger safeguard of consumer safety. Time will tell whether a government approach would have been the right one as generative AI proceeds and we begin to understand its consequences.
For various reasons, however, that partnership never materialized, which effectively started an arms race among the firms with enough talent, financial capital, and computing power to create generative AI and monetize it. Although OpenAI’s nonprofit structure was intended as a check on these free market incentives within the firm, Altman and OpenAI learned that they needed private capital to compete in that arms race. Although vestiges of the nonprofit structure remained, OpenAI started operating, for the most part, as a for-profit corporation, partnered with Microsoft, and took in billions of dollars of private investment.
Implicit in the privatization of LLM development across the industry was the assumption that firms would compete for the talent necessary to build these LLMs. In other words, private sector employees working on generative AI are free to work at whichever firm, be it OpenAI, Google, or Anthropic, compensates them best financially, intellectually, or otherwise. If one firm develops ethical scruples and decides to curtail development in a way that would significantly reduce financial returns, its most talented employees might simply jump ship and build the models they want to build elsewhere. If there is demand for unethical algorithms, a free labor market will supply them. Capitalism 1. Controlled development 0.
Case in point: the Altman drama at OpenAI. The board, whether driven by effective altruism or some other motivation, decided to pump the brakes by firing Altman. Fine, Altman said: I’ll just go to Microsoft and take all my people with me. Had Altman landed at Microsoft, he and his team could have continued building whatever models they wanted to build and monetizing them as they saw fit, unencumbered by the OpenAI board’s ethical concerns.
Put differently, OpenAI’s technical existence as a nonprofit and the scruples of the nonprofit’s board members cannot stop generative AI technology from moving forward according to the dictates of capitalism. Although the Altman story made for a good corporate drama, it is largely irrelevant. Capitalism won when Uncle Sam decided to cede the development of LLMs to private firms.
So what does OpenAI’s board intervention mean for the development of ethical technology? It means that, short of effective government regulation, which is arguably starting now with President Biden’s executive order and some movement in the European Union, the way to ensure that privately developed technology is developed and deployed ethically is for managers and leaders at these firms to incorporate ethics into how they build products.
Based on what we know, it is clear that leaders at OpenAI, Google, and elsewhere are saying the right things in this regard. They collectively employ hundreds of trust and safety employees and are calling for government regulation. Whether these firms are actually taking the appropriate precautions, only time will tell. As we learn more about what happened at OpenAI, I’ll be curious to know what the OpenAI board was trying to tell us about Altman’s decision-making, and whether that message signals that OpenAI, and perhaps the LLM market at large, is not taking appropriate care to mitigate downside ethical risk.
However, as this drama unfolded, we learned that the OpenAI board was not a systemic guarantor of the collective welfare, but simply a whistleblower: one that could sound an alarm but could not stop the forward progress of an industry with the potential to alter the lives of millions for generations to come.