[ In case you don’t want to read a serious take and instead just want memes, go check out this post from earlier in the week ]
[EDIT: ironically, less than an hour before this post was supposed to go live, the OpenAI board decided to resign and bring back Sam. Please mentally adjust the appropriate past tense verbs to present tense and so on. If more things change, idk, go eat some turkey I guess?]
Maybe it's time I finally dive into this whole AI safety debate. Fair warning, this subject may take multiple posts to fully flesh out.
For those who somehow are unaware, the last week in Silicon Valley was nothing short of a circus. OpenAI, the creator of ChatGPT, was essentially torn apart by its board members. Sam Altman, the CEO and the public face of the company, was fired with almost no notice, then almost rehired, then ended up at Microsoft, and threatened to poach approximately 90% of OpenAI's staff. And as of this writing, no one knows what will happen next.
Amidst all of the speculation, amidst the conspiracies and the theory-crafting, one thing has emerged: there is a deep divide within OpenAI, and indeed within the entire industry, over whether AI and its associated products should be treated with caution and fear or with excitement and joy. This debate over "AI Safety" or "AI Ethics" cuts shockingly deep.
I've been studying AI in some form for about 10 years now. I think I generally lean libertarian, politically — I've never met a bureaucracy that I liked. I tend to be against regulation, especially of technology, especially of internet speech.
But, cards on the table, I am pretty firmly in the "we should be very careful about all of this" camp when it comes to AI. This position has become an exceedingly unpopular one, especially in the last 48 hours. So I feel at least some desire to write out why I think I agree with AI Safety.
That's not this post.
This post is about why, regardless of how I feel about AI Safety overall, I think the board behaved roughly the way it should have. Also an unpopular position! But one that I think is fairly straightforward.
Very broadly speaking:
1. Through a pretty convoluted legal setup, OpenAI is controlled by a non-profit board that explicitly does not have any fiduciary duty to the company or to shareholders. Rather, the board has a moral duty to 'humanity' to ensure the safe development of AI;
2. Regardless of how you specifically feel about those ethical values, that is how the company is set up. This ethical focus is reinforced mechanically (board members hold no equity, and outside investors cannot place members on the board), legally (the company's charter says as much), and through PR (the name of the company is OpenAI, and leadership was especially vocal about AI safety at the time of the company's founding);
3. The board correctly identified that the company was drifting in a consumer/techno-capitalist direction, which runs against its stated purpose. Put more bluntly: the company was dependent on money, decisions were being made explicitly for money, and it turns out it's hard to figure out what the "right" decision is when someone is putting millions of dollars on one side of the scale;
4. The person most responsible for the current situation is Sam Altman. He set up the original "deal with the devil" partnership with Microsoft. He set up the 'for profit' arm of OpenAI. He has been publicly pushing for the 'productization' of OpenAI's research.
Everything else falls out from there.
Tools don't have ethics. A tool is good or bad only insofar as it accomplishes the job that we humans set out for it. A clock that can't tell time is a bad clock. A dull knife is a bad knife, regardless of whether it's being used to stab people or chop celery. And a bomb that fails to go off is a bad bomb, regardless of how the bomb is being used.
A non-profit is just another tool. This tool happens to be made of people, but it could just as easily be made of code (hello crypto smart-contract people!). All tools exist to fulfill the goals we set out; this non-profit's goal was the safe development of AI. We may not like the concept of AI Safety, in the same way pacifists may not like the concept of bombs. Doesn't matter. The board is operating under its own rules.
So, in a nitpicky technical sense, when I say I think the board did the right thing, what I mean is: the board did the thing that was consistent with the charter of the company, and (as a result) with the obligations of their job. The only thing to evaluate is: was Sam encouraging the safe, ethical development of AI?
And the answer is like, obviously no, right?
After a lot of scrolling through Hacker News and Twitter, I didn't see a single person who argued that Sam is the standard-bearer for AI Safety. Because it's obvious that he isn't.
ChatGPT, API access, dev day — these are products, useful for building a platform. These are tools for generating and securing capital and revenue. They are not about research. They are not safe deployments of AI. They are definitely not about supporting safe AGI development. The events of the last few days further hammer this point home. Sam credibly threatened to raid OpenAI's entire workforce and bring it to Microsoft, a company that is about as close to techno-capital control as you can get. This is obviously the opposite of the OAI charter, and anyone who truly believed in the x-risk potential of AI would never consider handing Microsoft everything.
You don't really have to take it from me. There have been many high-profile departures from OpenAI over its safety policy decisions, including Elon and the entire team over at Anthropic.
So the board identified that its non-profit, which was about reaching AGI safely without being beholden to moneyed interests, was:
1. Not focused on reaching AGI safely, and
2. Beholden to moneyed interests.
Right, well.
Maybe the board could have done a better job in its communications, maybe the timing was off, maybe maybe maybe. But at the end of the day, the board was accomplishing its goals. People seem dumbfounded that the board would rather destroy itself than surrender to Sam's demands, but this makes perfect sense if the non-profit's interests are best served through its own destruction. It's the ethical equivalent of declaring bankruptcy — sometimes selling everything and shutting down really is in the best interest of the shareholders!
By the way, the fact that the OpenAI board was put under immense pressure by a machine worth trillions of dollars is itself worthy of inspection. Sam, together with Satya and a large portion of the OpenAI staff, mounted a pretty strong revolt against the board's decision. To paraphrase Matt Levine, there's control and then there's Control. The board has control over the company: it is legally in charge of everything and can do things like fire the CEO, and nothing can be done about it. But Microsoft has Control over the company: it provides the money and the infrastructure, and without Microsoft the company cannot do most of the things it is currently doing (up to and including paying salaries).
All the legalese in the world couldn't stop Microsoft from throwing its (multi-trillion dollar) weight around to get what it wanted. It almost succeeded. It still might. [EDIT: it has.]
The fact that Microsoft came so close to simply overriding the board's judgment by force and fiat is actually very strong evidence that the board was correct in its analysis about Sam. That is:
1. OpenAI was so dependent on Microsoft that Microsoft was (almost) able to override the board's decision;
2. Failing that, Microsoft may still be able to poach many OpenAI employees;
3. Sam was the person in charge of steering the company into this precarious position.
It also lends a sense of urgency to the board's decision-making. Given how the weekend played out, last week may have been one of the last opportunities for the board to pull the trigger the way it did. Apparently there was an $80B tender offer on the horizon — more investors were going to pour money into the company, while employees were going to be able to sell shares and make fortunes. While the terms of that offer are not public, I would not be surprised if the board was simply against the idea that the company was going to become even more beholden to capital interests, or dismayed that its staff were so motivated by financial gain.
Speaking of the staff, I think it's worth diving into their role in all of this.
In some sense, the value of OpenAI is in its IP — that is, the code that's been written, the weights of the GPT models, that sort of thing. But in a much more real sense, the value of OpenAI is in its team. After all, anything that can be built once can be built again (with the right resources). So the staff as a collective has the ability to play kingmaker — the board's actions and Sam/Satya's counter-actions are entirely dependent on how the staff will respond. Or, to quote approximately every OpenAI employee on Twitter:
"OpenAI is nothing without its people."
Well, OpenAI is a Safety-First company, working on Safety-First projects. Clearly, then, it follows that OpenAI staff support Safety-First outcomes, right?
Obviously not!
OpenAI staff are there because they get to work on AI and get paid a fortune doing it.
Look, I was a STEM student; I remember being in college. While the liberal arts folks were taking classes on the ethics of social media, mass surveillance, and environmental damage, the vast majority of STEM folks were taking classes about graph theory. I'm not saying engineering students are unethical, but…well, they aren't exactly trained to think about ethics. At the end of the day, the average STEM student is motivated by getting paid to geek out about whatever they're into. This is, by the way, why SpaceX consistently gets great talent, even though Elon is a tough master to please. Quoting ACX:
The cliche answer - that they believe in the mission - is mostly true. But many employees also talked about their past jobs at Boeing or GM or wherever. They would have some cool idea, and tell it to their boss, and their boss would say they weren’t in the cool idea business and were already getting plenty of government contracts. If they pushed, they would get told to file it with the Vice President of Employee Feedback, who might hold a meeting to determine a process to summon an exploratory committee to add it to the queue of things to consider for the 2030 version of the product.
Meanwhile, if someone told Elon about a cool idea, he would think about it for fifteen seconds, give them a million dollars, and tell them to have it ready within a month - no, two weeks! - no, three days! For some people, the increased freedom and the feeling of getting to reach their full potential was worth the cost.
I get this. This is how I am, too. I love working on the things I work on, and if I could just…do that, instead of being bogged down in mind-numbing shit that I don't care about — like figuring out contracts or filling out safety forms or whatever — I would skip that extra stuff every time.[1]
Maybe the early hires at OpenAI really believed in the Safety mission. But I think over time, the company was filled out by people who were really excited about AI and just tolerated the Safety stuff because OpenAI was the best place to do AI. On top of that, Sam, for all of his flaws, took academics who were making at most $100k per year in universities and gave them millions of dollars. That will buy a lot of fierce loyalty, as evidenced by the list going around asking the board to resign.[2] And, by the same token, the board telling all those people that they actually aren’t going to be making millions from the tender offer will create a lot of animosity.
All this to say, the board may have been in a losing position for a long time. The most valuable parts of the company — the employees — already cared about money too much. In this environment, I think it's possible to craft a narrative in which financial incentives crept in and embedded themselves so deep that the board's only real option was to hit the self-destruct button.
Anyway, to bring this to a close: in response to the news of Sam’s ousting, many people are arguing that AI safety is itself a harmful ideology that deserves to be stamped out. Maybe so. But this is the value system the company has enshrined. Arguing that the company should have different ethics misses the point. That fight happened a decade ago, and the AI safety side won. Now the question is much deeper: should OpenAI — an organization dedicated to AI Safety — exist at all? Or should it abandon its name and its past and go full-on accelerationist?
More thoughts for later, more twitter scrolling for now.
[1] This is actually the opposite of what you personally get to do as the founder of a company. Rather, your job as a founder is to ensure that all of your employees get to work on cool stuff without distraction. You deal with the distraction! Basically, if your main goal in founding a company is to do whatever you want, I have bad news for you.
[2] Note that the folks at OAI aren't stupid. Anyone who might disagree with the way Sam was running the show will still sign whatever document, just because it's public, and because of the public reaction overall. At that point, signing is more about future job security than anything else. So maybe we shouldn’t necessarily read “95% of OpenAI Employees want the Board Gone” as fully accurate. But it’s still directionally and emotionally correct.