OpenAI’s board may have been right to fire Sam Altman — and to rehire him, too
The seismic shake-up at OpenAI, involving the firing and, ultimately, the reinstatement of CEO Sam Altman, came as a shock to almost everyone. But the truth is, the company was probably always going to reach a breaking point. It was built on a fault line so deep and unstable that, sooner or later, stability was bound to give way to chaos.
That fault line was OpenAI’s dual mission: to build AI that’s smarter than humanity, while also making sure that AI would be safe and beneficial to humanity. There’s an inherent tension between those goals because advanced AI could harm humans in a variety of ways, from entrenching bias to enabling bioterrorism. Now, the tension in OpenAI’s mandate appears to have helped precipitate the tech industry’s biggest earthquake in decades.
On Friday, the board fired Altman over an alleged lack of transparency, and company president Greg Brockman then quit in protest. On Saturday, the pair tried to get the board to reinstate them, but negotiations didn’t go their way. By Sunday, both had accepted jobs with major OpenAI investor Microsoft, where they would continue their work on cutting-edge AI. By Monday, 95 percent of OpenAI employees were threatening to leave for Microsoft, too.
Late Tuesday night, OpenAI announced, “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board.”
As chaotic as all this was, the aftershocks for the AI ecosystem might have been scarier if the shake-up had ended with a mass exodus of OpenAI employees, as it appeared poised to do a few days ago. A flow of talent from OpenAI to Microsoft would have meant a flow from a company that had been founded on worries about AI safety to a company that can barely be bothered to pay lip service to the concept.
So at the end of the day, did OpenAI’s board make the right decision when it fired Altman? Or did it make the right decision when it rehired him?
The answer may well be “yes” to both.
OpenAI’s board did exactly what it was supposed to do: Protect the company’s integrity
OpenAI is not a typical tech company. It has a unique structure, and that structure is key to understanding the current shake-up.
The company was founded in 2015 as a nonprofit focused on AI research. But in 2019, hungry for the resources it would need to create AGI (artificial general intelligence, a hypothetical system that can match or exceed human abilities), OpenAI created a for-profit entity. Under the rules of the new setup, investors could pour money into OpenAI and potentially earn a return, though their profits would be capped and anything above the cap would revert to the nonprofit. Crucially, the nonprofit board retained the power to govern the for-profit entity, including the power to hire and fire its leadership.
The board’s job was to make sure OpenAI stuck to its mission, as expressed in its charter, which states clearly, “Our primary fiduciary duty is to humanity.” Not to investors. Not to employees. To humanity.
The charter also states, “We are concerned about late-stage AGI development becoming a competitive race without time for adequate safety precautions.” Yet it adds, paradoxically, “To be effective at addressing AGI’s impact on society, OpenAI must be on the cutting edge of AI capabilities.”
This reads a lot like: We’re worried about a race where everyone’s pushing to be at the front of the pack. But we’ve got to be at the front of the pack.
Each of those two impulses found an avatar in one of OpenAI’s leaders. Ilya Sutskever, an OpenAI co-founder and top AI researcher, reportedly worried that the company was moving too fast, trying to make a splash and a profit at the expense of safety. Since July, he’s co-led OpenAI’s “Superalignment” team, which aims to figure out how to manage the risk of superintelligent AI.
Altman, meanwhile, was moving full steam ahead. During his tenure, OpenAI did more than any other company to catalyze an arms race dynamic, most notably with the launch of ChatGPT last November. More recently, Altman was reportedly fundraising with autocratic regimes in the Middle East, such as Saudi Arabia, to spin up a new AI chip-making company. That in itself could raise safety concerns, since such regimes might use AI to supercharge digital surveillance or human rights abuses.
We still don’t know exactly why the OpenAI board fired Altman. The board has said that he was “not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.” Sutskever, who spearheaded Altman’s ouster, initially defended the move in similar terms: “This was the board doing its duty to the mission of the nonprofit, which is to make sure that OpenAI builds AGI that benefits all of humanity,” he told employees at an all-hands meeting hours after the firing. (Sutskever later flipped sides, however, and said he regretted participating in the ouster.)
“Sam Altman and Greg Brockman seem to be of the view that accelerating AI can achieve the most good for humanity. The plurality of the [old] board, however, appears to be of a different view that the pace of advancement is too fast and could compromise safety and trust,” said Sarah Kreps, director of the Tech Policy Institute at Cornell University.
“I think that the board made the only decision they felt like they could make” in firing Altman, AI expert Gary Marcus told me. “I think they saw something from Sam that they thought they could not live with and stay true to their mission. So in their eyes, they made the right choice.”
Before OpenAI agreed to reinstate Altman, Kreps worried that “the board may have won the battle but lost the war.”
In other words, if the board fired Altman in part over concerns that his accelerationist impulse was jeopardizing the safety part of OpenAI’s mission, it won the battle, in that it did what it could to keep the company true to the mission.
But had the saga ended with the coup pushing OpenAI’s top talent straight into the arms of Microsoft, the board would have lost the larger war — the effort to keep AI safe for humankind. Which brings us to …
The AI risk landscape would probably be worse if Altman had stayed fired
Altman’s firing caused an unbelievable amount of chaos. According to futurist Amy Webb, the CEO of the Future Today Institute, OpenAI’s board had failed to practice “strategic foresight” — to understand how its sudden dismissal of Altman might cause the company to implode and might reverberate across the larger AI ecosystem. “You have to think through the next-order implications of your actions,” she told me.
It’s certainly possible that Sutskever did not predict the threat of a mass exodus that could have ended OpenAI altogether. But another board member behind the ouster, Helen Toner — whom Altman had castigated over a paper she co-wrote that appeared to criticize OpenAI’s approach to safety — did understand that was a possibility. And it was a possibility she was prepared to stomach, if that was what would best safeguard humanity’s interests — which, remember, was the board’s job. She said that if the company was destroyed as a result of Altman’s firing, that could be consistent with its mission, the New York Times reported.
However, once Altman and Brockman announced they were joining Microsoft and OpenAI’s staff threatened to follow them out the door, the board’s calculation may have changed: Keeping them in house was arguably the better option, since sending the field’s top talent straight into Microsoft’s arms would probably not bode well for AI safety.
After all, Microsoft laid off its entire AI ethics team earlier this year. When Microsoft CEO Satya Nadella teamed up with OpenAI to embed its GPT-4 into Bing search in February, he taunted competitor Google: “We made them dance.” And upon hiring Altman, Nadella tweeted that he was excited for the ousted leader to set “a new pace for innovation.”
Pushing out Altman and OpenAI’s top talent would have meant that “OpenAI can wash its hands of any responsibility for any possible future missteps on AI development but can’t stop it from happening,” Kreps said. “The developments show just how dynamic and high-stakes the AI space has become, and that it’s impossible either to stop or contain the progress.”
“Impossible” may be too strong a word. But containing the progress would require changing the underlying incentive structure in the AI industry, and that has proven extremely difficult in the context of hyper-capitalist, hyper-competitive, move-fast-and-break-things Silicon Valley. Being at the cutting edge of tech development is what earns profit and prestige, but that does not lend itself to slowing down, even when slowing down is strongly warranted.
Under Altman, OpenAI tried to square this circle by arguing that researchers need to play with advanced AI to figure out how to make advanced AI safe, so accelerating development is actually helpful. That logic was tenuous even a decade ago, and it doesn’t hold up today, when we’ve got AI systems so advanced and so opaque (think: GPT-4) that many experts say we need to figure out how they work before we build more black boxes that are even less explainable.
OpenAI had also run into a more prosaic problem that made it susceptible to taking a profit-seeking path: It needed money. To run large-scale AI experiments these days, you need a ton of computing power (more than 300,000 times what you needed a decade ago), and that’s incredibly expensive. So to stay at the cutting edge, OpenAI had to create a for-profit arm and partner with Microsoft. It wasn’t alone in this: The rival company Anthropic, which former OpenAI employees spun up because they wanted to focus more on safety, started out by arguing that the industry’s underlying incentive structure needs to change, but it ended up joining forces with Amazon.
Given all this, is it even possible to build an AI company that advances the state of the art while also truly prioritizing ethics and safety?
“It’s looking like maybe not,” Marcus said.
Webb was even more direct, saying, “I don’t think it’s possible.” Instead, she emphasized that the government needs to change the underlying incentive structure within which all these companies operate. That would include a mix of carrots and sticks: positive incentives, like tax breaks for companies that prove they’re upholding the highest safety standards; and negative incentives, like regulation.
In the meantime, the AI industry is a Wild West, where each company plays by its own rules. OpenAI lives to play another day.
Update, November 22, 11:30 am ET: This story was originally published on November 21 and has been updated to reflect Altman’s reinstatement at OpenAI.