Back in the early 2000s, Google was making waves as a fundamentally new kind of company. It wasn't just trying to make boatloads of money. It was trying to do the right thing. And in an era where Enron was busy filing for bankruptcy and the general corporate vibe was more 'Office Space' than 'Iron Man', Google felt like a breath of fresh air!
To really show how much Google was all about improving things for humanity, its execs decided to encode these new-age beliefs into a short, pithy motto: "Don't be evil".
The motto became a huge part of Google culture. It was taught in onboardings and cited in product meetings. It even made its way into the 2004 founders' letter before Google's IPO:
DON’T BE EVIL
Don’t be evil. We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains. This is an important aspect of our culture and is broadly shared within the company.
Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a well-run newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see.
"Don't be evil" was a lot of things, and there were a lot of disagreeing interpretations about what it meant. One thing that no one disagreed about: it was hard to get rid of. Execs at Google ended up regretting the "Don't be evil" motto, because no matter what Google did they would get raked over the coals for doing it. "I thought you said you wouldn't be evil", internet commenters would snidely say. They even got sued over it!
On 29 November 2021, three former Google employees filed a lawsuit alleging that the motto "Don't be evil" amounted to a contractual obligation, and that Google had broken its own moral code by firing them in retaliation for their efforts against "evil". Acting, as they saw it, in accordance with the motto, the trio had drawn attention to and organized employees against controversial projects, such as work for U.S. Customs and Border Protection (CBP) during the Trump administration, which they claimed amounted to "doing evil". For that, they argued, they deserved monetary damages.
Starting in 2018, after being slammed in the press yet again for its government contracts and its attempts to reenter the Chinese market, Google quietly distanced itself from its once-famous motto. In 2024, it's easy to forget that Google once had these lofty ethical dreams. They're a company. They make money. They create value for shareholders. They're evil now.
Anyway, here's OpenAI:
As we enter 2025, we will have to become more than a lab and a startup — we have to become an enduring company. The Board’s objectives as it considers, in consultation with outside legal and financial advisors, how to best structure OpenAI to advance the mission of ensuring AGI benefits all of humanity have been:
Choose a non-profit / for-profit structure that is best for the long-term success of the mission. Our plan is to transform our existing for-profit into a Delaware Public Benefit Corporation (PBC) with ordinary shares of stock and the OpenAI mission as its public benefit interest. The PBC is a structure used by many others that requires the company to balance shareholder interests, stakeholder interests, and a public benefit interest in its decisionmaking. It will enable us to raise the necessary capital with conventional terms like others in this space.
Make the non-profit sustainable. Our plan would result in one of the best resourced non-profits in history. The non-profit’s significant interest in the existing for-profit would take the form of shares in the PBC at a fair valuation determined by independent financial advisors. This will multiply the resources that our donors gave manyfold.
Equip each arm to do its part. Our current structure does not allow the Board to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit. The PBC will run and control OpenAI’s operations and business, while the non-profit will hire a leadership team and staff to pursue charitable initiatives in sectors such as health care, education, and science.
We’ve learned to think of the mission as a continuous objective rather than just building any single system. The world is moving to build out a new infrastructure of energy, land use, chips, datacenters, data, AI models, and AI systems for the 21st century economy. We seek to evolve in order to take the next step in our mission, helping to build the AGI economy and ensuring it benefits humanity.
As expected, the internet commentariat is not having it. Here's the most generous take from Hacker News:
I don't understand why they are spending so much time and effort trying to put a positive spin on this whole for-profit thing. No one is buying it. We all know what's going on. Just say "we want to make lots of money" and move on with your lives.
Right, well, that makes sense. OpenAI promised people that they were going to be open and free, and now they are neither open nor free. The public has a right to be pissed, and to be fair, they have an easy case to make! They just have to cite OpenAI's own founding documents!
Our current structure does not allow the Board to directly consider the interests of those who would finance the mission and does not enable the non-profit to easily do more than control the for-profit.
I kind of thought that was the point of the current structure.
Says internet anon @jrmg. And he's right! That was the point of the current structure!
The best argument for Altman's case is that a whole bunch of other companies offer the same tools as OpenAI. Claude, Gemini, Llama — these are very powerful systems, and they're being developed by companies that don't even pretend to be aligned with human flourishing. We just spent several hundred words on how Google is more willing to be evil! OpenAI's non-profit structure is an albatross around its neck, and if we believe that OpenAI is the best protector of human interests, we want to make sure they can compete with all of the unethical, evil organizations out there! There's a reason the OpenAI blog post links out to so many other companies that are doing the same thing.
But the title of this article is "OpenAI is an unaligned agent", and I want to keep coming back to that.
When AI researchers fret about AI, they don't care about AI that is evil per se. They care about AI that becomes evil. They care about AI that manages to break out of its boundaries. Scott Alexander talks about AI alignment:
Into this morass, we add alignment training. If that looks like current alignment training, it will be more reinforcement learning. Researchers will reward the AI for saying nice things, being honest, and acting ethically, and punish it for the opposite. How does that affect its labyrinth of task-completion-related goals?
In the worst-case scenario, it doesn’t - it just teaches the AI to mouth the right platitudes. Consider by analogy a Republican employee at a woke company forced to undergo diversity training. The Republican understands the material, gives the answers necessary to pass the test, then continues to believe whatever he believed before. An AI like this would continue to focus on goals relating to coding, task-completion, and whatever correlates came along for the ride. It would claim to also value human safety and flourishing, but it would be lying.
When I say OpenAI is an unaligned agent, I mean that as an organization it seems like it learned how to mouth the right platitudes just long enough to pass the test. It claimed to be all about human flourishing, but SURPRISE! It was actually just trying to amass power and wealth all along!
The problem with Altman's version of this story is that we shouldn't trust OpenAI to be the best protector of human interests. When Google got rid of its "Don't be evil" motto, a lot of people read it as "now we're going to be more evil", and as a result were much more wary when Google kept claiming to operate in society's best interests. Same here. While we shouldn't necessarily trust Google or Meta or Anthropic, we definitely shouldn't trust OpenAI. Of all the companies listed, only OpenAI has attempted to break out of its boundaries, and largely succeeded. The smartest minds in the world got together to try to ensure that OpenAI would not have a financial conflict of interest. And now those same people are saying things like "OpenAI's Structure Must Evolve To Advance Our Mission". They may be right, but you can't blame someone for being suspicious.