Before diving into the meat of this, I want to briefly introduce Scott and Rux, two other writers on Substack. If you already know who they are, skip the next few paragraphs. But I assume most of my readers don't, since I primarily write about, like, civ and startups.
I don't follow very many people. I find that the average self-styled thinker is capable of great insight now and then, but it is rare that their full body of work is of equal value. I joke with friends that I do not like non-fiction books, because I can rarely justify giving a single author my attention for a novel-length treatment of a subject; I much prefer to read papers and primary sources directly.
But every rule has an exception. And Scott Alexander is, in my opinion, exceptional.
As a quick summary, Scott Alexander is a psychiatrist and author who has been blogging since possibly the stone age, but at least since I first encountered his work in 2013. From the beginning, I found that his writing just resonated. There is something about the way he presents arguments. He is an expert at using analogy and metaphor to help his audience reach surprising or unintuitive conclusions, and over the years he has built up an impressive body of work around 'how to think better'. Not many people can claim that their personal blog has a Wikipedia page.
I don't think Scott's work is for everyone — a good friend of mine thinks Scott is way too verbose and is just retreading ground already covered by philosophers decades prior — but he definitely appeals to a particular kind of center-left vaguely-libertarian tech-adjacent possibly-neurodivergent individual. In other words: your average Bay Area techie. And as a result, he has become something of a figurehead for both the 'rationalist' and 'effective altruist' movements,1 which are both centered in and around San Francisco. Many of the individuals involved in the rationalist community have gone on to become extremely influential in the fields of AI interpretability, AI safety, and AI alignment.
As for Rux…I don't really know her. Sorry! I only recently read one of her posts, because Scott mentioned it on his blog. She seems cool, though.
Anyway, Scott recently posted a subscriber-only piece that has been making the rounds. The piece is written as a dialogue between two characters, though these are clearly meant to represent Scott's internal monologue. In it, Scott muses on whether the average centrist writer was right to criticize the broader establishment in the leadup to Trump's election and the rise of MAGA populism. In critiquing the worst excesses of academia, media, and government, did these writers end up giving intellectual cover to a populist movement that has open disdain for the foundational ideals of Western liberal democracies? To quote from the article:
Adraste: Do you feel guilty?
Beroe: About what?
Adraste: We’ve spent the past ten years pushing, you know, based heterodox edgy opinions. I still think our opinions were true and good. But…
We wanted people to question p-hacked psychology studies and TED talk speakers telling us the Nine Ways That Science Proves Merit Is Fake. So we punctured some windbag experts, then woke up one day with an anti-vaxxer in HHS and half the country thinking insulin is a globohomo conspiracy - or whatever it is they’re saying on X now.
We wanted a swift, lean government that stopped strangling innovation and infrastructure. Instead we got chainsaw-style firings, total devastation of state capacity in exactly the way most likely to strangle innovation more than ever, and the worst and dumbest people in the world gloating about how they solved the “grift” of sending life-saving medications to dying babies.
We wanted to believe in Silicon Valley, in the power of smart techno-optimists to do good and change the world. Instead, those people turned on us and helped elect a lunatic in exchange for his using taxpayer money to pump their crypto bags. The guy I thought of as a hero and an inspiration through my twenties, reduced to a Catturd-retweet-bot.
Beroe: You can’t blame thinkers for what other, worse people do with their ideas. Wasn’t it Kipling who said you needed to be prepared to “hear the truth you’ve spoken twisted by knaves to make a trap for fools”?
Adraste: He said you needed to do that in order to be a man. I’m a woman, so I can tell it to you straight: hearing the truth you’ve spoken twisted by knaves to make a trap for fools fucking sucks.
Scott is clearly feeling a lot of guilt here. As a centrist writer who spent a lot of the 2010s critiquing the establishment, he's looking around at the flames of our society and wondering if he personally could have done something different to prevent this outcome. And he's explicitly framing this in utilitarian terms — "If I knew that my writing would lead to MAGA, was I wrong to have written, even if it was truthful?"
Ruxandra responds that Scott is being way too hard on himself. His critiques were right; it's the elites' fault for not listening!
It’s clear to me that if anyone is to actually be assigned indirect responsibility for future negative consequences, those would be previously miscalibrated “establishment elites”, who should have listened to the likes of Scott Alexander, Nate Silver and other [centrist writers] sooner.
My specialty is not politics/the kind of policies Derek and Ezra discuss in their book, but I have been writing about elite culture, especially as reflected in academia a lot over the past year or so. My overall opinion is that the elite consensus pre 2024 had gone wrong in important ways and that this is what is to be blamed for populism.
I simply believe elites have a higher responsibility than the rest of people and I think the health of a society depends in large part on their behaviour. I guess in some sense, I have a vision of old-school honour, where people who have any sort of influence or hold power also have more responsibility, because their decisions are more consequential.
Rux's larger point is straightforward: we want to improve things, improving things requires understanding what can be improved, and critique is the best way to do that. Hiding your critiques because you're afraid of the consequences is a bit like being shaken down by the mob — but it's worse, because the bad outcomes may still happen anyway. Thus, Rux argues, Scott should feel proud of his work, and she advocates that he and others continue calling out bullshit wherever they see it.
On that last part, I do agree. But obviously, given the title of this piece, I think both Scott and Rux are missing the point, and I want to propose a third angle on the rise of populism that neither of them has considered.
Recently I've been thinking a lot about epistemology, credentials, and mass communication.
Epistemology is, very roughly, the study of how people incorporate new information into their models of the world. If you squint, you can kinda separate the ways in which this occurs into two broad buckets: first-principles reasoning, and faith in credentials.
The former is pretty straightforward to understand. If I'm trying to cook an egg, I don't need someone to tell me to put it on a skillet. I can observe for myself that turning on the gas stove makes the skillet hot, and I can reason that cracking an egg on a hot surface will cook it. QED, time for an omelet.
The latter is a bit less obvious, but in some ways significantly more important. I've written about the importance of faith in credentials before:
You could imagine a world where the only way people gathered new information was directly through what they could experience themselves. I imagine primitive humans more or less did just that. But relying on yourself is extremely limiting. If you had to personally verify whether every plant was poisonous or every animal edible, you'd probably end up dying pretty quick. Much better to go ask a trusted friend in the village, maybe someone who spends a lot of time only thinking about flora and fauna, whether these new berries you found are going to be delicious or give you a horrible bout of dysentery. This is the basic idea behind credentials — the village gets together and agrees that Bob is the medicine guy, you go to Bob for any medicine questions, and anyone else who has opinions about medicine can take a hike. Bob's word carries weight, because he's spent a lot of time thinking about things related to medicine, or whatever, and you presumably haven't, so if Bob says don't eat the berries you don't eat the berries.
Credentials are the bedrock of civilization. Our ability to delegate trust is what separates us from animals.
I can't emphasize enough how critical it is that we have good, trusted credentialing systems. Credentials allow you to outsource learning itself. I don't need to become a world famous chef, I can just follow a recipe guide. I don't need to learn how to fix my car, I can just go to a mechanic. I don't need to become an expert in random skin diseases, I can just go to a doctor. In each of these cases, I am effectively massively increasing the skills available to me without having to put in the work to gain those skills. I get to piggyback off all of human society, and so does everyone else. This is amazing!
In my opinion, when Scott and Rux talk about 'elites', they are referring to the expert class — those people with credentials who are trusted to do science / economics / healthcare / governance due to their special training and personal investment.2 'Elites' are upstream of everyone else's epistemological systems. When Bob says "vaccines are good", Bob can point to his diploma that says he is an expert in immunology as the reason why the average Joe Schmoe should believe him. Joe doesn't have to trust Bob specifically. Joe just has to trust the diploma.
What are the things that may cause Joe to trust the diploma less? A few things come to mind.
He may find other sources that he trusts more, like family members or friends.
He may discover that Bob has made a mistake, which causes him to be more skeptical of Bob AND the diploma-granting institution in the future.
He may feel that Bob is abusing his credentials to get support for takes that have nothing to do with immunology.
Right, well. There has been a concerted effort over the last, say, 30 years to fundamentally decentralize knowledge dissemination and communication, and to open up visibility and transparency into previously obscure systems. Recording devices, telecom, the Internet, social media, ubiquitous mobile devices — these are, at their core, populist technologies. Each of these innovations lowers the barrier to entry for people to just spout random misinformation, while simultaneously making it much harder for the elite class to "fail gracefully". And at the same time, we have dramatically increased the number of people holding the very credentials that were originally meant to distinguish elites from everyone else. Something like 38% of the US population has a Bachelor's degree! The degree doesn't signify anything anymore! In that environment, any individual contribution — from Scott to Hanania to Catturd — is just part of a larger, inevitable macro trend.
Scott notices that elites fail because of transparency created by technological change; he writes about those failures publicly thanks to technological change; any reach he gets is because of technological change. In this framing, Scott is wrong because he is simply being far too self-centered! To paraphrase, "You're so vain, you probably think this [populist political movement] is about you".
And Rux, on the other hand, is way too harsh on elites. The 'elites' are not a coordinated group. They are a huge mass of unaffiliated people! At least one random citation on the web suggested that almost 2% of the US population has a PhD. That is about 6 million people. Even if 99% of the most elite 'elites' were perfect all the time, you would still have 60,000 people who are going to go viral for saying something totally stupid.
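To spell out the arithmetic (the ~300 million population figure comes from Scott's quote below):

$$0.02 \times 300{,}000{,}000 = 6{,}000{,}000 \ \text{PhDs}, \qquad 0.01 \times 6{,}000{,}000 = 60{,}000 \ \text{imperfect 'elites'}.$$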
This is simply a version of the Chinese Robber fallacy. Quoting Scott himself:
There are over a billion Chinese people. If even one in a thousand is a robber, you can provide one million examples of Chinese robbers to appease the doubters. Most people think of stereotyping as “Here’s one example I heard of where the out-group does something bad,” and then you correct it with “But we can’t generalize about an entire group just from one example!” It’s less obvious that you may be able to provide literally one million examples of your false stereotype and still have it be a false stereotype. If you spend twelve hours a day on the task and can describe one crime every ten seconds, you can spend four months doing nothing but providing examples of burglarous Chinese – and still have absolutely no point.
If we’re really concerned about media bias, we need to think about Chinese Robber Fallacy as one of the media’s strongest weapons. There are lots of people – 300 million in America alone. No matter what point the media wants to make, there will be hundreds of salient examples. No matter how low-probability their outcome of interest is, they will never have to stop covering it if they don’t want to.
Of course elite institutions are under attack — we've built a society where our faith in institutions is grounded in the belief that experts don't make mistakes, and then superpowered everyone with the ability to amplify every mistake ever made by any elite person anywhere! And at the same time, we've given everyone else a megaphone, so now you have a million thought-influencer grifters who personally benefit from taking down the expert of the day, regardless of whether their 'takedown' is true or not. We expect individuals to keep trusting the NYT when "their friend's cousin" swears by raw milk, times a billion? We saw the destabilizing effects of these technologies during the Arab Spring and cheered, because they were destabilizing the 'bad' institutions. And we thought our media and government were safe. But the ability to destabilize turned out to be institution-agnostic: social media attacks ALL institutions, regardless of their 'values'.
We've landed in such an extremely adversarial environment for credentialing that it's a miracle the expert class is able to function at all. Scott and Rux debating the impact of their individual writing on the decline of trust in elites and elite institutions is a bit like debating whether my whistling a tune had any impact on a cat 5 hurricane.3 Move fast and break things moved fast and broke things. The hurricane was coming no matter what Scott — or anyone else — did.
The more pressing question on my mind is: what now? It's obvious to me that our prior way of being, and possibly the way we were built evolutionarily, wasn't equipped to handle the engagement economy. But also, the engagement economy is here to stay. Our project needs to be something like "how do we rebuild trust in institutions, given Twitter?"
I don't think I have an answer, but I have scattered thoughts.
I think it's necessary to really discredit 'elites' who are obviously and shamelessly grifting. One of the most important jobs of an elite, on par with being a steward for society or maximizing progress, is ensuring the stability of, and faith in, elite institutions. You cannot easily solve problems in a democratic society without ensuring that there is alignment between what people think the problems are and what the actual problems are. If some small cohort of elites is intentionally exaggerating or misrepresenting information in order to gain political power through a riled-up populist base, it is extremely difficult to, like, actually get around to making society better. In that light, Murdoch and his various press organs have probably done more explicit damage to elite institutions than anyone or anything else in the world. I have no idea how you fix this.
We probably need some government intervention and regulation around social media, especially around misinformation. The term 'snake oil salesman' comes from an era when medicine was unregulated, no one knew whom to trust, and random grifters were making cash selling…well, snake oil. The government stepped in, explicitly said "no, stop that", and slapped massive penalties on anyone who pretended to be an expert. You could imagine something similar for social media. Right now, everyone is following the natural incentives of the engagement economy. The platforms are incentivized to drive clicks, and so reward people for saying and doing outrageous things; the people are incentivized to say and do outrageous things to be rewarded. This seems like the perfect place for government regulation to come in and internalize the negative externalities. Of course, the trick is figuring out how to balance free speech norms with the need to combat obvious bullshit. A few kinds of regulation that I think would be good:
Require that any social media platform above a certain daily user count have some kind of built-in credential verification system. Maybe Twitter could have these little blue check marks next to people's names that indicate they are people of import, that sort of thing.4
Require that personalized feeds be opt-in rather than on by default. Right now, engagement-optimization algorithms ensure that people only see the things they maximally agree with and the things that are maximally enraging. This is, bluntly, not a good environment for learning new things. Having personalization off by default would go a long way towards piercing people's epistemological bubbles (and would probably reduce everyone's screen addiction by a healthy amount).
Prohibit social media platforms from delisting or downranking posts that link to external sites. Platforms are incentivized to keep people on-platform, and delisting external links is a way to do that. But it also significantly decreases the quality of information, because it removes the ability to provide additional sources.
Require everyone to adopt Community Notes style moderation. Community Notes is a fantastic innovation in decentralized fact checking, and though it is silly to assume it can hold all of the weight, it is clearly net good (see the sketch of the core idea just after this list).
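As an aside, the scoring idea behind Community Notes is simple enough to sketch. What follows is a minimal, hypothetical toy (my own names and numbers, not X's actual implementation): fit the ratings with a small matrix factorization, and score each note by its intercept term, i.e. the helpfulness left over after the raters' viewpoint factors are accounted for. A note then scores well only if people who usually disagree both rate it helpful.

```python
import numpy as np

def fit_note_scores(ratings, n_users, n_notes, dim=1, lam=0.03, lr=0.05,
                    epochs=2000, seed=0):
    """Fit r_hat = mu + b_user + b_note + f_user . f_note by SGD.

    ratings: list of (user_id, note_id, rating), with rating 1.0 = "helpful".
    Returns b_note: helpfulness NOT explained by the viewpoint factors f,
    so a note scores high only if raters on both "sides" call it helpful.
    """
    rng = np.random.default_rng(seed)
    mu = 0.0
    b_u = np.zeros(n_users)
    b_n = np.zeros(n_notes)
    f_u = rng.normal(0, 0.1, (n_users, dim))
    f_n = rng.normal(0, 0.1, (n_notes, dim))
    for _ in range(epochs):
        for u, n, r in ratings:
            err = r - (mu + b_u[u] + b_n[n] + f_u[u] @ f_n[n])
            # One SGD step on squared error with L2 regularization.
            mu += lr * err
            b_u[u] += lr * (err - lam * b_u[u])
            b_n[n] += lr * (err - lam * b_n[n])
            # Tuple assignment so both updates use the pre-step values.
            f_u[u], f_n[n] = (f_u[u] + lr * (err * f_n[n] - lam * f_u[u]),
                              f_n[n] + lr * (err * f_u[u] - lam * f_n[n]))
    return b_n

# Toy data: users 0-2 are one "camp", users 3-5 the other. Note 0 is rated
# helpful by only one camp; note 1 is rated helpful by everyone.
ratings = [(0, 0, 1.0), (1, 0, 1.0), (2, 0, 1.0),
           (3, 0, 0.0), (4, 0, 0.0), (5, 0, 0.0),
           (0, 1, 1.0), (1, 1, 1.0), (2, 1, 1.0),
           (3, 1, 1.0), (4, 1, 1.0), (5, 1, 1.0)]
print(fit_note_scores(ratings, n_users=6, n_notes=2))
```

In this toy data, note 1's fitted intercept comes out clearly higher than note 0's, which is the viewpoint-agnostic behavior you want from decentralized fact checking: partisan agreement alone doesn't move the score.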
On that last one, Community Notes itself seems like a really interesting path forward for making the entire internet more usable. You could imagine a form of decentralized fact checking, like community notes, but for the whole web. This is, to a first approximation, Wikipedia, which has done an incredible job of being a fairly neutral source of truth for even extremely controversial topics.5 All of this is likely thankless work, but it does a lot to help the misinformation problem, and we should advocate for more projects and funding in that direction.
I think AI is going to make / has already made all of these problems much worse. In my Meditations on AI, I wrote about how reducing the barrier to entry in the games industry resulted in an increased need for curation:
Games follow a power law distribution in terms of quality. Most games are crap, and a very small percent of them are good. The average person who plays video games only really cares about the small percent that is actually good.
Tools that lower the barrier to entry don't change the shape of this distribution. Rather, they raise the entire distribution up. The best games get, on average, even better, and there are more of them to go around; but there are also way more bad games, and most of the time the absolute increase in bad games is much larger than the increase in good games. Put bluntly, the average indie game is terrible. Sixth grade Amol was not producing greatness. In modern terminology, Unreal and Unity and even Flash enabled a lot of "slop", as random people without expertise started producing and releasing video games.
You could imagine an environment where the overwhelming increase in bad games saturates the market, resulting in the games industry dying. But that's not what happened, not even close. For the most part, everyone was and is able to ignore the slop thanks to market innovations.
As the quantity of games increased it became increasingly important to identify quality. Taste began to matter a lot more; the market demanded curators who could guarantee some level of consistency for the broader consumer audience. In the early days, quality came out of brand familiarity. Many gamers like Nintendo because Nintendo games are generally really high quality, and for a while Nintendo actually capitalized on this by stamping games they liked with an 'official seal of approval'.
As the games industry got bigger, the market grew capacity for individual niche reviewers — videogamedunkey makes a living off playing, reviewing, and recommending games to his audience, while publications like IGN exist entirely to review videogames.
And finally, at a certain size, the industry developed marketplaces to serve as a single point of entry for customers looking to buy while also providing decentralized community review services. This started with brick and mortar stores — think GameStop — but eventually moved online. Steam is now the marketplace for games. It has a catalog of some 85k games, each one reviewed and curated by the larger community. Steam has systematized and decentralized rating mechanics that allow good games to float to the top, rewarding people who make popular and well liked games while ensuring consumers do not have to wade through piles of trash to find something they might enjoy. And Steam acts as a central watering-hole of sorts — publishers and consumers alike benefit from having a one stop meeting ground to discover and buy games.
In that piece, the games industry was an analogy for the software industry. I argued that because AI will dramatically reduce the barriers to software creation, more software will be made, most of it will be bad, and there will be an increased need for curation and taste-making. But you can also see the games industry as an analogy for information in general. AI makes it easy for people to just send random spam posts → more random posts get made, most of them bad → there is an increased need for curation.
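To make the "raise the entire distribution up" point concrete, here is a toy simulation; the Pareto shape and the quality thresholds are arbitrary assumptions of mine, not numbers from the original piece:

```python
# Toy model: quality is heavy-tailed, so multiplying the number of releases
# multiplies BOTH tails, and the flood of bad releases dwarfs the good ones
# in absolute terms even though the good ones also grow.
import numpy as np

rng = np.random.default_rng(42)
GOOD = 9.0  # arbitrary "actually good" quality threshold
BAD = 1.5   # arbitrary "slop" threshold

def simulate(n_releases):
    # Pareto-distributed quality: most mass at the bottom, long top tail.
    quality = rng.pareto(2.0, size=n_releases) + 1.0
    return (quality >= GOOD).sum(), (quality <= BAD).sum()

for n in (1_000, 100_000):  # before and after the barrier to entry drops
    good, bad = simulate(n)
    print(f"{n:>7} releases -> {good:>5} good, {bad:>6} slop")
```

The proportions never change, but going from a thousand releases to a hundred thousand takes you from roughly a dozen good games amid ~550 slop to over a thousand good games buried under ~55,000 slop. The absolute flood of bad output is what makes curation the binding constraint.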
This is, really, nothing more than a request for even more stringent forms of credentialing. I don't fully know what this would look like. I've always personally felt that standardized testing is underrated, and that having a wider range of tests, ranging from easy to really, really hard, could be a good way to show that you understand complicated subjects without having to do a 4-year degree of semi-questionable value.6 But this is all very half-baked; I'm more confident in the problem and the general shape of a solution than I am in this particular path forward.
My last thought is directed at Scott, personally. You're an extremely conscientious person, and it's good that you think deeply about these things. Regret is a symptom of good epistemology. But to the extent that the words of some random guy on the internet matter, you're being too hard on yourself. Keep on writing. You're noticing that the things around us right now are not good, that the current administration is out of control and riding a wave of extremely dangerous anti-intellectualism. The past is the past; the worst thing that could happen now is if you take yourself out of the game. On the flip side, if all of this inspires you to put effort towards fixing that, it will have been a net benefit for the rest of us.
You may have heard of effective altruism before from the Sam Bankman-Fried case, which is extremely unfortunate. I think they generally do good work, and they shouldn't be tarred by association with one particularly malicious guy.
I think this is actually very explicit for Rux, who writes primarily about academia.
No chaos theory people please.
Yes this is tongue in cheek.
Related: for the love of god, stop trying to politicize Wikipedia! People who go out of their way to discredit wiki are going to doom us all to endless streams of misinformation.
Like, we have a bar exam; passing it is supposed to show that you understand the law well enough to practice as a lawyer. Various engineering industries have guild-like accreditation processes which serve the same purpose. Etc. Maybe we could just get rid of the degree requirement entirely and instead just replace all of our credentialing with publicly available exams of various kinds?