01 March 2024


I need to write something proper about Timnit Gebru & Émile P. Torres’ coinage “TESCREAL” to describe the weirdnerd ideological cluster of transhumanism, Extropianism, singularitarianism, cosmism, Rationalism, Effective Altruism and longtermism. I find the acronym very useful in registering how these ideologies are entangled and reflect a shared lineage, though I think Torres’ conclusion that these are Just All One Thing is a misleading overstatement. If we are going to confront this and neighboring movements like neoreaction and Gray Tribe et cetera, it is important to make distinctions.

In the meantime, a few commentaries I find useful:

21 February 2024

Cheap oil killed the Soviet Union, not Reagan

I was just reminded of how conservatives like to claim that Ronald Reagan’s vast expansion of the US military budget and general “toughness” caused the collapse of the Soviet Union in 1991, during the presidency of his successor George H. W. Bush.

This is just not true.

Obviously it is impossible to attribute any turn so large and consequential to any single cause. It was complex and contingent on historical particulars. But every informed analysis I have seen points to an overwhelming proximate cause.

The Soviets long had been unable to produce enough grain to feed their people. For decades, they had made up the difference by buying grain on the world market. They raised the foreign currency necessary to buy grain by selling oil. The price of oil fell sharply over the 1980s. Flat broke, the Soviet state simply could no longer function.

I have a breakdown of the mechanics from Russian economist Yegor Gaidar, published by the center-right American Enterprise Institute in 2007:

[In 1985] the Saudis stopped protecting oil prices, and Saudi Arabia quickly regained its share in the world market. During the next six months, oil production in Saudi Arabia increased fourfold, while oil prices collapsed by approximately the same amount in real terms.

As a result, the Soviet Union lost approximately $20 billion per year, money without which the country simply could not survive.


The only option left for the Soviet elites was to begin immediate negotiations about the conditions of surrender. Gorbachev did not have to inform President George H. W. Bush at the Malta Summit in 1989 that the threat of force to support the communist regimes in Eastern Europe would not be employed. This was already evident at the time. Six weeks after the talks, no communist regime in Eastern Europe remained.


What were Gorbachev’s options at the time? He could not easily dissolve the Soviet empire; the conservative elements inside the Soviet leadership were strongly against this notion. Yet he could not prevent the dissolution of the empire without a massive use of force. But if force was employed, the Soviet state would not get the necessary funds from the West, without which Gorbachev had no chance of staying in power.


Even if they found one division able to crush all the people who demonstrated against the GKChP [the movement attempting a coup against Gorbachev in 1991], would the grain appear? Where would they find the food necessary to feed the larger cities? Would the West rapidly give the $100 billion? Their case, like the Soviet state itself, was entirely lost.

On August 22, 1991, the story of the Soviet Union came to an end. A state that does not control its borders or military forces and has no revenue simply cannot exist. The document which effectively concluded the history of the Soviet Union was a letter from the Vneshekonombank in November 1991 to the Soviet leadership, informing them that the Soviet state had not a cent in its coffers.

Any telling of the Soviet collapse which does not rest on their need for grain and the price of oil is nonsense.

One might credit Reagan indirectly, for getting the US cozy with the Saudis. I do give Reagan credit, sort of, in the sense that as the Vulcan proverb teaches, only Reagan could have allowed Gorbachev to attempt a soft landing for the shattered Soviet empire without suffering criticism for being “soft” on the Russians.

But Reagan boosters do not want to tell those stories; they want to talk about Reagan being “tough”. He was not. And “toughness” had nothing to do with the Soviet collapse.

08 February 2024

Star Trek as liberal propaganda

The root beer scene from the Star Trek: Deep Space Nine episode “The Way of the Warrior” is my single favorite work of liberal-as-in-Isaiah-Berlin-and-liberal-democracy propaganda.

If you don’t know Trek, a little grounding:

  • The guy with the ears is Quark; his society are quasi-libertarian space capitalists
  • The guy with the neck is Garak; his society are space fascists
  • The Federation are our space heroes, the liberal society that Kirk, Spock, and Picard come from, space rivals to the other two space societies
  • Over the next season of the show, the Federation will in fact decisively win the space war they are talking about

I have watched this dozens of times and find it moving every time. (Evidently actors Andrew Robinson & Armin Shimerman deserve credit for it!)

It has become common to mock conservatives grumbling about Star Trek “going woke”, since original Trek was very deliberately and transparently liberal-as-in-not-conservative propaganda in countless ways. But Trek is also structurally about liberalism in the deeper sense. A Star Trek story goes like this:

  1. Our protagonists encounter Something Strange.
  2. The Something Strange seems hostile.
  3. Our protagonists assume that they do not understand why the Something Strange is presenting the threat.
  4. Our protagonists work together as a team, combining their different knowledge, talents, and perspectives to figure out what is up with the Something Strange.
  5. Through a blend of heart and reason, they figure it out. Yep, this was a big misunderstanding.
  6. Armed with this knowledge, the protagonists do something difficult — even risky — to help the Something Strange.
  7. Aiding the Something Strange makes it no longer a threat.
  8. Now our protagonists have a new friend.

Star Trek is the dream of liberalism always working, just as Superman is the dream of refusing to accept the terms of the Trolley Problem and saving everyone.

Is this formulaic? Heck yes. This post started as a Twitter thread, where I noted a conservative wag mocking the then-forthcoming Marvels movie:

Wait, wait … hang on … just a wild guess here, but I bet Carol Danvers discovers that only by Working As A Team and by Utilizing Diverse Strengths can she defeat the Big Baddie. Also, quips! And multiverse!

Calling MCU formulaic is an insult to formulas.

That guy does not hate that Marvel movies are formulaic. He hates that they are a formula for liberalism.

Not that the formula always works, either in Marvel or Trek. I need to follow up this post with a look at the ways pop liberal propaganda fails at its own project, often revealing cracks in the foundations. I am particularly grumpy at how recent Trek so often misfires.

But I remain committed to libdem. When Trek delivers formulaic boosterism for liberalism, that is the good comfort food. I count root beer as the sacred libation of Aphrodítē Pandēmos — goddess of love for all in the shared space of the ἀγορά, and thus for me goddess of liberal democracy — and my thirst for it is bottomless.

Garak: Might I trouble you for a glass of kanar?

Quark: Help yourself. It’s on the house.

Garak: How uncharacteristically generous of you.

Quark: I’m in an uncharacteristic mood. Besides, I’ve got eighty cases of this stuff sitting in my stockroom. And the way things are going, I’ll never unload another bottle unless it’s to you.

Garak: How thoughtless of me not to consider the effect the destruction of my homeworld would have on your business. These must be trying times for you. Be brave.

Quark: I should’ve listened to my cousin Gaila. He said to me, “Quark, I’ve got one word for you: weapons. No one ever went broke selling weapons.” But did I take his advice? No. And why not? Because I’m a people person. I like interacting with my customers. Like you and I are doing right now. Talking to each other, getting to know one another.

Garak: I can see the attraction. For you.

Quark: But when you’re dealing in weapons, buyers aren’t interested in casual conversation. They just want their merchandise, no questions asked. It’s so impersonal.

Garak: Your charms would be wasted.

Quark: Exactly. So now Gaila owns his own moon, and I’m staring into the abyss. And the worst part is, my only hope for salvation is the Federation.

Garak: I know precisely how you feel.

Quark: I want you to try something for me. Take a sip of this.

Garak: What is it?

Quark: A human drink. It’s called “root beer”.

Garak: I don’t know.

Quark: Come on. Aren’t you just a little bit curious?

[Garak drinks, looks disgusted]

Quark: What do you think?

Garak: It’s vile.

Quark: I know. It’s so bubbly and cloying and happy.

Garak: Just like the Federation.

Quark: But you know what’s really frightening? If you drink enough of it … you begin to like it.

Garak: It’s insidious.

Quark: Just like the Federation.

Garak: Do you think they’ll be able to save us?

Quark: I hope so.

07 February 2024

Decision-based evidence-making

Once upon a time, I had an epistemological crisis at my day job.

When I was working at a consulting studio, I did a lot of presenting with PowerPoint decks, communicating research and design work. I devoted a lot of effort to my presentation materials and felt pride when clients admired how clearly they presented complex ideas. Occasionally this resulted in clients asking for my help in assembling PowerPoint decks of their own, which was usually fun, easy, good for the studio’s client relationships, and made the world a tiny bit better by spreading wisdom about making non-gawdawful presentations.

Several years ago, a client in a large organization asked me to take a day or three to help a colleague from another part of their company put together a presentation about some research they had done. This person came to my office with a huge stack of binders full of research reports. “We need to figure out how to explain to people in Department X how this research shows that we need to pursue Policy Y.”

“Great!” I said. “Just walk me through it. Tell me in your words how you came to your conclusions, we will talk until I have a good grasp of it, then we can figure out together how best to explain it clearly, succinctly, and convincingly.”

This surprised them. “I don’t understand.”

“What don’t you understand?”

“Why are you asking me how this research shows that we should do Policy Y?”

Huh? “Is this not your research?”

“Well, some of this research we did ourselves, some of it we paid for. But what does that have to do with anything?”

I shrugged. “I’m sorry, I did not want to get sidetracked on how the research was done. I was asking because I want to understand what from the research led you to choose Policy Y.”

“I don’t understand. Policy Y is my team’s plan.”

“Yes, how did you decide that?”

“What does that have to do with the presentation?”

They were visibly exasperated. I am sure I failed at concealing my irritation at how weird this conversation was.

Obviously I am abbreviating a longer, twistier conversation in which we were talking past each other. After a few rounds, I did the Shelf Sweep with my hands and tried to take a conceptual step back.

“Okay. You did the research. You looked at it, you thought about it. Then you realized that you needed to do Policy Y. What in the research pointed you there?”

“That’s what I want you to figure out for the presentation!”

I tried another step back. “I think the folks who set this up confused me about what you need. Let’s start over. Can you tell me what you need me to do?”

“I need you to find the data in this research which shows that we need to do Policy Y, so I can convince Department X to do it.”


“Now I think I am following you. You have already decided to do Policy Y, for your own reasons. You want to persuade the people in Department X. So you want to sift through all this research to find things which support Policy Y.”


“I apologize for the misunderstanding.”

“It’s okay. Let’s get to it!”

“Um. No. We have had a misunderstanding about what I do for a living.”

“Aren’t you a research consultant?”

“Yes, I am, among other things.”

“So what is the misunderstanding?”

“I do research, and analyze it, to help clients make good decisions about products and services.”

“Right. I need help analyzing this research.”

“You are neither asking me to analyze that research nor to communicate an analysis of it.”

“Huh? I am asking for an analysis of how this research supports Policy Y.”

“Analysis is looking at research and figuring out what policies and solutions it suggests. Starting with a policy and working backward through the research to justify it is a different thing.”

“I don’t get the difference.”

“Yes, I see that. I am very sorry to have wasted your time. Obviously our studio will not bill you for this meeting, because of the misunderstanding.”

Again, this was a much longer and clumsier conversation than I could hope to reproduce. They were not just astonished that I considered myself professionally obligated to refuse this work, they could not see what my objection was. Wasn’t this what researchers did? Wasn’t this what consultants did? My presumption that one should examine evidence before reaching a conclusion, rather than using it to support a conclusion, was not even an idea they could understand well enough to reject.

I am not against trying to be persuasive. That is a necessary art. But I was shaken they could not conceive of any use of information other than persuasion.

I have reluctantly come to believe that there are more people like them than there are like me.

This post was born as a Twitter thread, and over time I threaded related observations from others.

In tech / design / business

Maia Bittner sums up my story succinctly:

I have finally learned how “data-driven decision-making” works in business:
  • First you make whatever decision you want to about your product or business.
  • Then you go spelunking to cherry-pick data to back up it being a good decision! 🍒

Nils Gilman describes the kind of consultant that client of mine thought they were hiring:

Consultancies claim that what they’re providing is access to “best practices” — which is sometimes true. But much of the time what they’re actually providing is CYA for unpopular decisions and/or ammo for one side in an intra-firm political struggle.

The Economist asks, “Do McKinsey and other consultants do anything useful?”, concluding that, though hated, they often provide a valuable service to the economy.

Pavel A. Samsonov feels misunderstood like I did:

Was just introduced by a colleague as someone who will “validate business requirements” and a chill ran up my spine.

Samsonov describes the underlying politics well:

No amount of research will make stakeholders as confident as following all of their biases.

Sell stakeholders on the idea of doing research in the first place, or they will demand impossible bulletproof evidence for anything that doesn’t align with their existing assumptions.

In one article, Jared Spool discusses how you must always ask executives for what they expect to happen before user testing. Otherwise they will pretend like they knew the outcome ahead of time, and it will not sway their thinking. This is an analogous concept.

If you produce results without creating the expectation that the stakeholder’s assumptions may have been wrong, they will find a way to reconcile the results with their existing mental model. Often that just means being dismissed: “you asked the wrong people/questions”.

As Alan Cooper says - user experience is a power struggle. In low-maturity product orgs, stakeholders gather political power through being Experts who are Always Right. Realigning attitudes away from assumptions is a political game, not a delivery one.

These orgs create a vicious cycle between assumption-driven design and brittle development. The cost of being wrong is massive (politically and financially). Rather than reducing the cost of being wrong, stakeholders find it safer to obfuscate how “right” is determined.

Introducing the idea of experimentation - controlled, rapid failing - is anathema to this environment. So you have to sell the whole thing at once: small, quick, cheap tests aimed at learning rather than “winning”. The only real failure is if we didn't learn anything new.

This is very scary because it requires stakeholders to commit to a real “definition of good” that’s not just “the team delivered the requirements.” They have to get buy-in for goals and hypotheses (you might recognize these as OKRs) and that takes work!

But more importantly, they give up control. They are no longer the Knower and Decider (instead, research determines the way forward). They no longer set the team's outputs (the team determines their own experiments). But they are still on the hook for results.

This is what you’re struggling against: convincing stakeholders to relinquish absolute power. If they do not, then regardless of the title on your business cards you are stuck as a feature designer or usability tester, not a UX designer or user researcher.

Samsonov also talks about the organizational dynamics these deceits produce in the kinds of product development I do:

Product teams fall apart when they start lying to themselves. This breaks the core mechanism:

  1. Define problem
  2. Prioritize problem
  3. Define appropriately scaled solution for problem

Teams that lie to themselves can’t do 2, and therefore can’t do 3.

When teams lie to themselves, it usually looks something like this:

  • 🧑🏻: It would be cool and innovative to build Widget
  • 🧔🏻: I’ll create the user story! “As user, I want to have Widget”
  • 🧑🏻: Great! I’ll add it to the roadmap.

Did you spot the lie? It’s when the team invents a problem simply to justify the solution. But because the problem is made-up, it can’t be scoped or prioritized. It’s impossible to determine the right amount of resources to dedicate to it, or to define success criteria.

Being honest with themselves doesn’t mean that the team can’t say “we want to build Widget.” But it does mean that the problem is defined as “we want to build Widget” and is prioritized accordingly (probably very low on the list). Due to the low prioritization, Widget receives the appropriate amount of resourcing (probably not much). This naturally creates scaled-down success criteria, from “we build Widget” to perhaps “we proved building Widget was possible” or “we proved Widget is desirable to users.”

Why do I say that teams who lie to themselves fall apart?

Because without being able to establish clear success criteria, the team lacks one of the two key aspects of what actually makes a team: shared goals (or in other words, the “definition of good”).

The issue is not that the outcome the team is working towards isn’t a user-facing outcome. It’s pretending that it is. Because real user-facing outcomes have real definitions of good: users being able to reach their goals better than before. Fake user stories don’t have goals. So trust begins to break down. PMs challenge designers on “what users would want.” Devs challenge PMs on what is doable. The guiding principle for each role shifts towards outputs - we will follow our process and produce an output. “Shipped” over “good.”

The team that lies to itself loses trust in itself and starts to focus more and more on project management, chasing deadlines, and treating responsibility for outcomes like a hot potato. “We followed the Scrum process, it’s not our fault.”

Teams that are honest with themselves can identify an appropriate definition of good, even if it’s not user-facing (which is not inherently bad, a lot of valuable work isn’t easy to trace to user outcomes). But crucially, everyone can align on what the desired team outcome is. A team that is honest about its problems, and therefore is able to align on desired outcomes, can work backwards from those outcomes to make decisions in a real Agile way. There is no need for appeal to authority or “I’ll know it when I see it” type feedback/”acceptance testing”.

A team that is aligned on desired outcomes can say “we learned that solution X will not work to achieve goal Y” and can try something else.

A team that lies to itself cannot admit this, because building solution X is their problem. Without solution X, the team has no mandate.

The members of the team that lies to itself are not stupid, however. Individually, eventually, they realize that “build solution X” is not a valuable problem to solve. But without public recognition of this fact, they feel lost & bitter: why can’t anyone else see what I see?

This is the point at which the team starts to fall apart. Enthusiasm evaporates, work that used to energize the team becomes draining. People start leaving. But the team promised solution X to leadership. Stopping would be a crushing blow to credibility.

So now you have a zombie, shambling forward without any objective beyond devouring as many brains as it can. All because the team was initially organized around a lie.

Samsonov comes directly at how yes, this is about epistemology:

Software orgs are shaped by their source of truth.

If the source of truth is users, Agile is great for helping the team continuously develop understanding of fixed user needs. If the source of truth is executives, needs are not fixed. Execs constantly change their mind.

To an outside observer, these may look the same. The team starts out with some understanding of what needs to be built, and that understanding changes over time. But understanding of users can only improve. Exec-driven development is always at risk of being flipped upside-down.

When the goal is to understand users, experimentation is the norm. We don’t build features - we make bets, manage the risks, and learn.

Exec-driven development is extremely hostile to experiments. The HiPPO told you what to build, just build that. Don't ask questions.

So we can’t simply say, “let’s switch to Agile” without effecting change outside the dev team. Agility under HiPPO rule doesn’t look anything like agility for user-facing development. Teams are only thinking about making it easiest to keep up with exec flip-flopping.

An “agile” team in a feature factory must first build expensive, highly flexible architecture, so that pivoting along with exec whims is easy. Story point estimates are punitive to slow down the rate of requests for changes.

But execs don’t want architecture, they want UI! So one or two Java developers draw the short straw, spin up a web app, and start making some kind of website. This is when the org expects designers to contribute. We don’t need testing or research since the exec decided that already. Here is the BRD, draw us some mockups.

You will never be able to justify the value of design in an environment where design is the rendering of executive fiat. The designer can’t change their role simply by making better outputs, or challenging the PM. After all, it's not the PM’s decision either - it’s the exec’s. Rocking that boat is a huge risk.

The only way forward is to take on that risk.

Just because your outputs change from wireframes to processes doesn’t mean that the “make a bet and reduce the risk” model stops applying. Your users: management. Their goals: make money. Unknown: their mental model around converting features into value into money.

There are different schools of thought on how to do that. But it all boils down to: how can you change management’s mental model that experiments are risks, failure, and wasted work? How do you convince them that “just do what I tell you” is actually more risky?

Roja tells a tale of this breakdown happening at Twitter:

Years ago when I was a new PhD grad working at Twitter, there was a director of product who had strong ideas about how to solve the abuse and harassment problem on Twitter.

I worked as a new Data scientist, building a model to detect abuse and then was responsible for A/B testing the solution. His product solution, which he was pretty sure was going to change abuser behavior.

His idea was that if victims did not see the abuse and did not respond to trolls, the trolls would stop the harassment (facepalm). I did the analysis and no change was detectable in abuser behavior.

I was told this guy had already drafted a press release ready for my numbers to be added, claiming that abuser behavior had changed. My results were not good news. There was real pressure on me to keep looking at different cuts of data. Something not super kosher in stats.

I kept getting the same results. And if anyone asked me, I’d have said his solution wasn’t going to change abuser behavior. Anyone who has ever been harassed online or offline knows that “not engaging” the trolls does not protect you from abuse.

I didn’t have a manager at the time (my new manager was on parental leave). There was no one to push back and be on my side. I remember a Google doc where I kept adding comments saying “no evidence of behavior change” wherever there was such a claim. It killed the press release.

But I felt so powerless, in a team of all men, who had a lot of confidence but very little understanding of the problem and even less empathy for the victims of abuse on Twitter (the Sr eng on the team made some disgusting comments about sexism and racism online).

The team’s director of engineering thought the abuse issue on Twitter was a matter of optics (he said this in a meeting I was in). That it wasn’t that bad and if we quickly shipped a few solutions for the supposed abuse problem the media would move on.

I also remember the director of product sitting in a meeting with people from rights groups and folks who were harassed on Twitter and had a very condescending conversation with them. The idea was that he knew what he was doing and these people needed to be handled.

It felt like I was a failure for not getting the “desired” results. What’s worse is that I was part of a wave of layoffs that happened right then, where 30% of employees were let go. I stayed strong but there was a voice inside me that told me I was bad at my job.

I’ve been managing teams of engineers and data scientists for a few years now. But the remnants of that first experience have stayed with me. The feeling that I am an outsider. That these arrogant men owned the place even when they did not understand the problem.

Those eng and product leaders stayed a while and then moved on to the next hot startup and accrued connections and wealth, stabilizing their position in our industry. They didn’t really face any consequences for getting the Twitter abuse problem wrong back then.

I had to get this story off my chest today. But there is a silver lining too. During that same period I met so many amazing people who worked at Twitter. Coworkers who made me feel connected and whom I enjoyed their company greatly and whom I am friends with to this day ♥️.

You all know who you are :) thank you for being great. I appreciate you.

(P.S. I think Twitter has come a long way in terms of cleaning up the timeline from the days of gamergate. Still would love to see it do better, especially in non-English languages)
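An aside on the statistics in Roja’s story: the pressure to “keep looking at different cuts of data” is the classic multiple-comparisons trap. Even when there is no effect at all, slicing null data into enough arbitrary subgroups will eventually produce a cut that looks “significant” by chance. A minimal simulation sketch (all numbers invented for illustration):

```python
# Sketch: why "keep looking at different cuts of data" is not kosher.
# With no real treatment effect, re-testing many arbitrary subgroups
# tends to yield some spuriously "significant" results at the 5% level.
import random

random.seed(1)

def abuse_events(n, p=0.10):
    """Simulate n users; each independently has probability p of an abuse event."""
    return sum(random.random() < p for _ in range(n))

def two_proportion_z(x1, n1, x2, n2):
    """Plain two-proportion z statistic (no continuity correction)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = (pooled * (1 - pooled) * (1 / n1 + 1 / n2)) ** 0.5
    return (p1 - p2) / se

# One honest, pre-registered test: control vs. treatment, identical by construction.
n = 2000
overall_z = two_proportion_z(abuse_events(n), n, abuse_events(n), n)

# The "different cuts" approach: re-test 40 arbitrary subgroups of null data,
# counting how many clear the conventional |z| > 1.96 significance bar.
spurious = sum(
    abs(two_proportion_z(abuse_events(500), 500, abuse_events(500), 500)) > 1.96
    for _ in range(40)
)

print(f"overall |z| = {abs(overall_z):.2f}")
print(f"spurious 'significant' cuts: {spurious} of 40")
```

At a 5% significance threshold, roughly two of the forty null cuts will look “significant” on average, which is exactly the kind of result Roja was being pressured to go hunting for.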

Arvind Venkataramani has encountered a weird self-awareness about this:

A belief that exposure to human-centered design can overcome inherent power differences is a great way to be blind to the ways in which people are misusing power.

I’ve been told to my face (by someone solidly entrenched in the organisation) after presenting research insights and the raw data and analysis process that led me there, that I was making it all up.

At some point it is a useful heuristic to suspect people in power 🤷‍♂️

Cat Hicks offers Five Things I’ve Learned in Ten Years of being a Social Scientist in Tech

  1. We keep saying data when we mean measurement
  2. Specificity is ok, qualitative is ok
  3. Ships of Theseus & Zombie Luggage
  4. Is Social Science any good?
  5. Evidence Science

Several Twitter gems from Erika Just Enough Research Hall:

I also turn down every job which can be described as “help us do research so we can use the data to convince the person in charge of the value of research”. If that person doesn't already value a particular source of evidence, it's not going to happen.

Never say “validate” unless you are talking about parking. Evaluate!

I get asked “How do I use data to convince someone of a thing?” all the time. And my answer is always, “Go back in time and build a relationship with that person before trying to change their mind.” In the current moment, you can listen to their reasoning and ask questions.

Too often, the reason research findings are ignored by the business, is that the business model itself is based on a combination of fiction and extraction.

Other domains

A comparable example comes from an article about picking a host for Jeopardy!:

For example, much has been made of Sony’s use of analytics to identify a front-runner. “We want to go at this with real analytics and real testing and not just go, ‘Hey, how about this guy?’” Richards told a Wall Street Journal podcast in April. Indeed, the studio called out this element in its announcement of Richards and Bialik, who Sony TV chairman Ravi Ahuja said “were both at the top of our research and analysis.” On Saturday, The New York Times reported that Richards alone selected the episodes that were sent to focus groups for review; the show’s two supervising producers, Lisa Broffman and Rocky Schmidt, who are both in their fourth decade working on the show, were excluded from the process. When The Ringer asked about the Times focus group report, neither Sony nor Richards offered comment.

Dorian Taylor gets philosophical about decision-based evidence-making maybe being the root of evidence-based decision-making:

over the last two years i have thought about this paper more than ever: “Why do humans reason? Arguments for an argumentative theory”

and i still will make the effort to point out that they missed the most amazing opportunity for a title: an argument for argument as the reason for reason


  • we take the positions that align with our agendas; if people agree or disagree with us then okay, but what about the people on the fence?
  • that's where reasoning comes in
  • the fact that reasoning also yields logic and math and science and problem-solving is pure side effect

i love this idea. i think it is absolutely delicious

is it true? i don’t know. i don't care. i’m doing what they said i would

but holy crap it explains so much

the way i have come to treat theories in the social sciences is like a bet: they are often too spongy to be conclusively true but what if you behaved as if it were true? what kinds of outcomes would you get? i find that’s pretty easy to game out a lot of the time

also it exposes a lot of bad theories because you just ask “what evidence would it take for this theory to be false” and i mean that’s like elementary my dear popper

but in the same vein it doesn’t matter if a theory is “true”, just if it predicts outcomes reliably enough

and being true and predicting outcomes aren’t the exact same thing

like take physics: are quarks real? they are “real” in the sense that you can make calculations with them and predict real outcomes, but a physicist will tell you a quark is really just a prosthesis for thinking

is there a thing there For Real?™

at that scale what does that even mean

so, back to the paper’s hypothesis: people reason in order to persuade people on the fence to come over to their side first, and to solve problems a distant second

what would falsify that hypothesis? dunno man, it’s psychology. falsifying their hypotheses can actually get pretty tough, i’d have to think about it

okay, what if it were true? what bets could you make and could you attribute any payoff to the hypothesis being true?

The implications of decision-based evidence-making in politics are immense. Consider the question Why Did We Invade Iraq?:

But the intelligence analysts who were most expert in the region and in the technology for making and handling WMDs couldn’t find persuasive evidence to make the case that Saddam had any, and Tenet did what he could to suppress their skepticism. A holdover from the previous administration, he had been frustrated by Bill Clinton’s lack of interest in what the CIA had to offer. For any CIA director, the president is the “First Customer”—the sole source of the agency’s power—and under Clinton that power had dissipated. By contrast Bush, especially after September 11, was riveted by the agency’s reports; he had Tenet personally deliver its Presidential Daily Briefing at 8 AM, six days a week. At last, the CIA had a seat at the big table, and Tenet wasn’t going to blow it.


Senior officials throughout the national security bureaucracy—Tenet very much among them—inferred from these and other incidents that the decision to invade was a fait accompli and made sure to hop aboard, lest they lose their influence. This “fevered swamp of fear and genuine threat” particularly pervaded Cheney’s office, which Draper calls “the Bush administration’s think tank of the unthinkable.” Tenet went so far as to supply Team Cheney with a “Red Cell”—a group whose job was to invent the scariest scenarios and draw the most far-fetched connect-the-dot conspiracies imaginable in “punchy” three-page memos. (“Our goal,” one of its members said, “was plausibility, not anybody’s notion of truth.”) Cheney, Libby, and Wolfowitz loved its work. (The one question the Red Cell did not ask, Draper notes, was “What if Saddam Hussein did not possess WMD?”)

AI as a bullshit machine

One of my very favorite essays on the internet, Teresa Nielsen Hayden’s Common Fraud, ends with the chilling observation about faux research that “deceiving us has become an industrial process”. Pookleblinky speculates on the political implications of it becoming fully automated:

In Dirk Gently’s Holistic Detective Agency, a programmer is working on an AI system that proves theorems. He fucks up, and makes it run backwards instead: given a statement, it constructs a proof for it. The Pentagon immediately falls in love with it.

At the time, this was considered a joke. The idea that you could ask a computer how to justify why you need to go to war, or make something illegal, or kill some group of people, and it’d pop out a whole argument for it.

Now consider DALLE and similar systems. The exact same generative model can generate text, code, data, etc. Not just images. With only a bit of tweaking, a DALLE-alike could generate persuasive arguments given a prompt. A wall of text, with accompanying generated stock footage, that you could text-to-speech (TTS) and upload to youtube.

Youtube has a lot of channels which simply read news articles and reddit posts aloud, accompanied by stock images. Boomers spend hours watching this like babies watching pregnant elsa foot torture on autoplay. There is a huge youtube genre that consists of a TTS voice, reading text, while stock footage is shown. When boomers say “I do my research” they mean they watch 5+ hours of these videos each day.

With a tiny bit of tweaking, you could make a DALLE-alike that effortlessly produces basically infinite such videos. Endless content. “Dalle, make a 5 minute video about how george soros and fauci are lizard people, stock footage of scary doctors”. The mean cost of producing a truly staggering amount of propaganda, drops to basically nothing.

“DALLE, give me a 3 hour long compilation of urban decay and a narrator arguing against immigration.”

“DALLE, give me 10 30-minute videos of angry amber heard caricatures with upbeat ragtime piano music”

“DALLE, give me a 20 minute video of ben shapiro's voice talking negatively about trans people, over stock footage of monty python wearing dresses”

Basically: infinite, endless propaganda, in a form effortlessly consumed.

You already know how autoplay traps vulnerable people into radicalization rabbit holes. How much more effective do you think the pipeline to blood libel will be, when the cost of saturating the infosphere drops to basically zero? Picture a boomer, watching 10 hours of virulent hate speech on youtube, not a second of which was made by a human.

“Kermit as tony soprano” and “kermit as comic book count dracula,” except it’s mutually contradictory versions of blood libel or false flag conspiracies. Picture layers of algorithms that target an audience, generate content to show to that audience that ensures high engagement metrics, and not one of those algorithms understands what hate speech is.

Imagine a boomer, trapped in an infinite conspiracy theory rabbit hole, made just for them.

This brings us back to tech & design by way of Pavel Samsonov again, who observes No, AI user research is not “better than nothing” — it’s much worse.

Another epistemic break

I cannot resist transcribing into this post a Twitter thread from Emmett Shear about a similarly astonishing dialogue revealing a different epistemic disjoint.

I attended Hereticon in 2022, and I had an experience there that made a truly deep impact on me. Getting to the conclusion was a trek for me, so you’re going to have to listen to the whole story.

The conference itself was effectively TED from the Nega-Earth dimension: only talks the NYT would not approve of, lots of interesting young people, low prep talks, great parties.

While it is true that, as they say, reversed stupidity is not Intelligence, I had a great time. Only a couple of the talks grabbed me but I had some incredible conversations. Did you know you can use nicotine gum as a reinforcement tool to condition yourself to exercise?

At the end of the weekend I was lounging by the pool with a few friends, talking about the future. The sky was clear and the moon was up, and I said something like, “it still fills me with wonder and joy that we managed to actually land someone on the moon”.

Another conference attendee pipes up from the next seat over. “You know that was faked, right?”

It’s Hereticon. Of course someone thinks the moon landing was fake. I’m barely surprised.

“I did not know that. I’ve seen a video … tell me, how do you know?”

“Oh. Well, that’s pretty straightforward. Space doesn’t exist.”

I have to admit, this caught me a little flat-footed. It is true, if space isn’t real then the moon landing was definitely faked.

Me: “That’s a new one for me! Ok, I’ll bite. What happens if I keep going up?”

Guy: “You hit the crystal sphere.”

Me: “Ok, cool. So I have a question about what it’s made of, but let’s just say I drill through the sphere and keep going. Then what?”

Guy: “Water. Lots of water.”

Me: “Ok, we bring a submarine. What happens if I keep going?”

Guy: “You know I’m not sure … no one’s ever gone to see.”

Me: “Ok fair enough. How does the crystal sphere stay up? Why doesn’t the massive weight of water crush it and fall in on us?”

Guy: “I don’t think it works that way, but I’m not an expert. I can send you some YouTube videos.”

Me: “That’s ok. Let’s talk about another part of it. How do satellites work?”

Guy: “High altitude weather balloons”

Me: “But they go really fast, wouldn’t they burn up?”

Guy: “I’m not sure but I think the atmosphere is really thin.”

Me: “No problem, how about the ISS? I’ve seen phone calls to there in zero gravity.”

Guy: “Sound stage, it’s all faked.”

Me: “I’m not sure how they fake the gravity, but you know you can literally see the ISS from earth right? Like we could go look up where it is and see it tonight w a telescope. And it’s moving really fast … if it was in the atmosphere it would burn up.”

Guy: “Well I should go check that out, I haven’t heard that before. But it’s probably faked somehow, I just don’t know how.”

At this point, I basically gave up on the space thing. I assume he’d gotten brainwormed by flat earther YouTube videos. Let’s move on. But it’s Jan 2022 … COVID was still very much on everyone’s mind … so I have to ask.

“So, I have to ask. The vaccines, what’s up with them?”

Guy: “They don’t work, obviously.”

Me: “obviously, I mean that’s what I expected. But how do you know?”

Guy: “that’s easy. Viruses aren’t real. I mean, infectious diseases in general aren’t real.”

At this point I have to understand.

He seems so reasonable and well spoken, but his epistemology is completely deranged. What the hell is going on???

I dig. I dig. I ask every question, I offer mental models, I give challenges he can’t answer but nothing works.

Finally, I hit on the right question. “What is the underlying nature of knowing truth to you? What’s the gold standard for evidence? Everyone has to take a lot on faith, there’s too much to figure it all out. But ultimately, what form of evidence trumps all others?”

Interregnum: I encourage you to actually consider this question. What is it that grounds out truth for you? This isn’t a trick question, go with the obvious and naive answer.

Srsly think about your answer (and what other people might answer) before you keep reading. It’s more fun that way and you learn more.

So eventually he comes to his answer. When someone presents an idea, he connects with them. He looks in their eyes. He considers their motivations. And if their argument is internally consistent with itself. And if all the signs are green, he trusts them and that’s truth.

This just blew me away. It’s a fundamentally social theory of truth. Truth is based on credible arguments presented by trustworthy people. That is not what truth is to me, at all.

For me, the gold standard of truth is what I can see, hear, taste, smell, feel for myself. What is “true” is what my current conceptual model of the world can infer from that data.

(Actually truth is whatever reflects the underlying nature of reality, we are talking here only about our current best guess at truth, but this is an unavoidable distinction for finite beings who cannot know Truth)

I’d never heard someone actually say the quiet part out loud like that before. I think perhaps many people have his relationship to the truth, but with more normal choices for people to trust, and therefore more standard beliefs (and more correct ones IMO).

At this point I completely lost interest in discussing space and viruses and etc w him any further. Because I’d understood what had been driving me to ask and ask … I’d figured out how this seemingly smart man had come to conclusions that seemed so insane.

I’ve come up with names for these two sides (tho I’d be interested in better ones, if anyone has them): team words-mean-things, and team alliances.

The sense-data theory of truth leads to team words-mean-things. Because the goal of words, at least outside of art, is to convey sense data impressions and models inferred from them. And to do this well, one must care fanatically that your use of the word is the same as theirs.

The canonical sign you are talking to a words-mean-things person is the exasperated sigh followed by “wait but i thought you said X was Y, but now it’s not Y?”

The credible-talk-from-trusted-sources theory of truth by contrast leads to team alliances. Because if you get the words a little wrong, it’s fine, you can trust the ultimate conclusions regardless as long as you can trust them. As long as they’re on your team.

The canonical sign you’re talking to a team alliances person is when they seem more interested in figuring out whether you’re on their team than whether the details of your words make sense. “Is he a good guy?”

Alliances : Words-Mean-Things
Mimesis : First Principles

Given my team, my take feels anti-alliances. But learning-via-trust has a lot to recommend it. I don’t think one is “better”. But know yourself, know who you’re talking to, and connect at their level.

Israel, Gaza, war, and genocide

In several discussions of the ongoing situation in Gaza I refer to it as a genocidal attack by Israel rather than a “war”. My language reflects some deliberate and technical choices which merit unpacking, not least because I have ambivalences about those choices.


As someone with an unwholesome interest in genocide, I consider it vital to recognize violences short of eliminationist mass murder as nonetheless genocidal when they direct harm to a people as a people. This matters for dignifying that range of violences as serious, for understanding how genocide works, and for preëmpting mass murders before they start.

The 13 October Jewish Currents article A Textbook Case Of Genocide persuaded me early on that Israel’s response to Hamas’ horrifying 7 October attack qualified.

The UN Genocide Convention lists five acts that fall under its definition. Israel is currently perpetrating three of these in Gaza.

And of course Israel has escalated much further in the months since.

We need to register this as a phase transition. Israel had a longstanding program of brutal apartheid military policing of Gaza & the West Bank; wrong, but not genocidal. Deliberately attacking the Gazan people as a people is categorically different.


And I feel very uneasy with the term “genocide” allowing — even inviting — bonehead critics to misunderstand what is happening so badly that they manage to overstate even wrongs as immense as those Israel is committing.

Yes, there are Kahanist maniacs in the Likud government like Ben-Gvir, eager to purge Gaza of Arab Palestinians even before 10/7, thinking that their moment has come. Their frank admissions of that intent are a key part of why we must see Israel’s attack as already genocidal and threatening a spiral into even greater horror. But they are not the mainspring of events; reading the situation as nothing other than them realizing their dreams of genocide is absurd.

Most galling are the inevitable comparisons to Nazi genocide. It should be obvious why it is offensive to compare Jews to Nazis, period, and the comparison is also offensive in its stupidity. The Nazis are a metonym for evil because they ran death factories where they carefully calculated the right rations of stale bread to ensure that people died at precisely the fastest rate at which they could dispose of the bodies. Deadly as the attack on Gaza has been, they are not that. Israel obviously could kill far more civilians than they have.

Fantasies that Israel has long plotted this purge of Arab Palestinians from Gaza, even engineered the Hamas attack to justify it, are equally preposterous. Why would the Likudniks withdraw settlers and the IDF from Gaza for twenty years before finally getting around to executing this plan?

Callousness explains more than bloodthirstiness. Netanyahu and his Likudnik political coalition are in a panic to retain power, cynically trying to rally shocked Israelis to them, after 10/7 demonstrated the catastrophic failure of Likudniks’ promises to ensure Israelis’ security. Americans who remember the Bush administration’s reaction to 9/11 should recognize the pattern. And many experts foresaw (or even advocated) Israel trying to break Hamas using the bloody military tactics Sri Lanka used to destroy the Tamil Tigers at the cost of immense civilian casualties. It is no defense of Israel to register these evils as different evils from pure malice.

I consider it better to correct these misunderstandings of the implications of the word “genocide” by insisting on digging in to how we understand genocide rather than by backing off from the word. We need sophistication about genocide to understand what is happening in Gaza … and, alas, the world.


The IDF attacking Gaza is not a military conflict between national belligerents with clear stakes; though we need to recognize the genocidal escalation it represents, I think we also need to register its continuity with the long process of military policing by Israel which began in the occupation era and has continued through Israel’s efforts to control the quasi-sovereign Palestinian Authority. Calling this a “war” thus has misleading implications, so I refer to it as an “attack”, and refer to “crimes against humanity” rather than “war crimes” on the part of both Israel and Hamas.

And my stubbornness over this language presents problems.

Some refer to the situation not as an “Israel-Gaza war” but an “Israel-Hamas war”, and I envy how that naming frames Israel as driven to destroy Hamas specifically.

And there is a rhetorical turn I wish I could make. With Gaza extraordinary in such important ways, there is something deeply strange in horrors which are tragically ordinary in fighting around the world — destroyed buildings, captured fighters on their knees, dead civilians — shared with a note of such intense shock and fascination that I want to ask, “Why is this so special in Israel’s case? Are you only now noticing that war is bad?”

05 February 2024


Ursula K. Le Guin’s short story The Ones Who Walk Away From Omelas is rightly celebrated.

How can I tell you about the people of Omelas? They were not naive and happy children--though their children were, in fact, happy. They were mature, intelligent, passionate adults whose lives were not wretched. O miracle! But I wish I could describe it better. I wish I could convince you.

Some bracing responses:

02 February 2024

The weapon hand

A reflection on a theme in movies.

In the Evil Dead movies, Ash lives in a world of literally monstrous violence.

When this costs him his hand, he learns to accept that he will never be able to escape the violence of his world, so he must embrace how this has transformed him into someone capable of responding to the world he inhabits.

But this makes him incapable of returning to an ordinary life. His hero’s journey does not bring him home.

In Videodrome, Max Renn’s exposure to violent propaganda created by a corporation intent on brutal control makes him so obsessed with his gun that he experiences it becoming a part of him.

Eventually this destroys Renn — either by detaching him from reality or by driving him to literally destroy himself, we never get to be sure which.

To ensure that we do not misunderstand the horror of Videodrome as reflecting the intrusion of the mechanical into the organic, Cronenberg gives us a repulsively organic gun in eXistenZ.

The horror comes from holding a weapon, not from holding a machine.

Luke Skywalker confronts violence with violence and it costs him his weapon hand.

Palpatine urges Luke to embrace violence. But Luke sees how that has transformed Vader. Contemplating his weapon hand, seeing the same process happening to him, he chooses to renounce violence.

A much older Luke has stopped hiding his weapon hand under a glove. Maybe he needs the reminder of the corrupting power of violence in front of him. He has renounced the entire world.

In Luke’s final moments he returns to the world, having finally learned to serve it without violence, as his mentors taught him. Weapons cannot stop him.

The Iron Giant is not tempted to pick up a weapon in his hand; someone made him to be a weapon before he had any choice.

Then when the Iron Giant does have a choice, he knows it. He is not a gun.

He knows who he is.

31 January 2024

Apple Vision Pro, eye tracking, and the cursors of the future

I am fascinated by how the Apple Vision Pro identifies where the user is looking, treating that locus of attention much like the cursor used on the Mac and other desktop computers; one “clicks” with hand gestures. This is a cunning way to make desktop software usable on this very different platform, and discerning this by watching eye movements is an astonishing technological feat. It is not just a matter of precisely detecting where the eye is pointed, which would be hard enough; our eyes constantly jitter around in saccades, so the Vision Pro has to deduce from this complex unconscious movement where the user has their attention in their subjective experience.

Modifying desktop computer interfaces

It is fun to think about exotic alternatives to the conventional mouse/trackpad & cursor combination. The big gestural interfaces seen in science fiction movies mostly turn out to be a bad idea — Tom Cruise was exhausted after fifteen minutes of just pretending to use them in Minority Report — but I believe that there are opportunities for innovation. Clayton Miller’s 10/gui considers ways we might take advantage of a huge multi-touch surface instead of a little trackpad. Bruce Tognazzini’s Starfire from 1994 is still ahead of available technology, bursting with both good & bad ideas for combining direct manipulation with touchscreens & styluses together with indirect manipulation using a mouse or trackpad. Devices like the iPad have begun to unlock the promise of distinguishing fingers from styluses to create more graceful, complex interaction idioms by combining the two; a few specialists use stylus input tools like Wacom tablets at the desktop, and I feel an itch that more people might benefit from integration of stylus input into their desktop experience.

So might we just replace the mouse/trackpad & cursor with eye tracking? No. I cannot imagine that it could ever provide the fine precision of the mouse/trackpad (or a stylus). But I think eye tracking could combine well with those input tools to make some things more graceful. It would not require fine precision, just the ability to register which window the user is currently looking at.

Discussion with Raghav Agrawal underlines that I am proposing something I hope would deliver a fundamentally different experience than the Apple Vision Pro. A user of the Vision Pro feels that they control the system with their gaze. A user of the desktop should still feel that they control the system with the mouse, with the system aware of their gaze and using that to ensure that it Just Does The Right Thing.

Solving some multi-monitor challenges

I think this will prove especially valuable if one has multiple big screens, which I expect more and more people to do as they get better and cheaper. I am a lunatic who uses a big wide monitor, a big tall monitor, my laptop’s 16" display, and a little teleprompter display at my desk. I love being able to look around at several open windows, and expect that many people will discover how good this is for work.

But using existing mouse-cursor-window interfaces with multiple big screens does come with disadvantages. Dragging-and-dropping across expansive screens gets clumsy. One can lose track of the cursor in all that space; even wiggling the cursor does not always make it easy to find. With a lot of windows open, one can easily lose track of which one is currently selected.

A radical proposal for multiple cursors

Rather than drive the cursor to appear at the point of one’s visual focus — one does not want the cursor racing back and forth across the screen every time one glances at information on another screen — I think it would work to have a cursor in each window, with mouse/trackpad actions affecting only the window one is currently looking at. When one looks away from a window, its cursor stays where one left it.

This puts a cursor within view wherever one looks, easy to find. Maybe on a big window, if one has not looked at it in a while the cursor returns to the center or gets a little momentary flash of emphasis when one looks back at that window.

The Mac puts the Menu Bar at the top of the screen because the edge preventing overshooting makes it easier to decisively mouse to an element there. Keeping the cursor confined to the boundaries of each window makes all four window edges this kind of convenient interface territory.

Integrating eye tracking also eliminates the need to have a selected window to keep track of. In existing systems, actions like using the mouse scroll wheel can produce awkward surprises when they do not affect the document in view, instead disrupting the content of a window which one has forgotten remained selected. With eye tracking, user actions can always just affect the thing they have in view, eliminating that problem. (I will get to one important exception to this pattern in a moment.)
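The routing rule described above can be sketched in code. This is a minimal illustration only: every name here (GazeRouter, AppWindow, and so on) is invented for the sketch and corresponds to no real windowing API; it assumes the eye tracker can already report which window holds the user’s attention.

```typescript
// Hypothetical sketch of gaze-directed input routing with per-window cursors.
// All types and names are invented for illustration; this is not a real API.

type WindowId = string;

interface Point { x: number; y: number; }

interface AppWindow {
  id: WindowId;
  // Position and size of the window on the desktop.
  bounds: { x: number; y: number; width: number; height: number };
  // Each window keeps its own cursor, in window-local coordinates.
  cursor: Point;
}

class GazeRouter {
  private windows = new Map<WindowId, AppWindow>();
  private gazedWindow: WindowId | null = null;

  addWindow(w: AppWindow): void {
    this.windows.set(w.id, w);
  }

  // Called by the (assumed) eye tracker. It only records which window
  // has the user's attention; it never moves any cursor, so glancing
  // at another screen does not send the cursor racing across it.
  onGaze(windowId: WindowId): void {
    this.gazedWindow = windowId;
  }

  // Mouse deltas affect only the gazed window's cursor, clamped to that
  // window's bounds — so all four window edges become easy targets, the
  // way the screen edge makes the Mac Menu Bar easy to hit.
  onMouseMove(dx: number, dy: number): AppWindow | null {
    const w = this.gazedWindow ? this.windows.get(this.gazedWindow) ?? null : null;
    if (!w) return null;
    w.cursor.x = Math.min(Math.max(w.cursor.x + dx, 0), w.bounds.width);
    w.cursor.y = Math.min(Math.max(w.cursor.y + dy, 0), w.bounds.height);
    return w;
  }

  // Scrolls (and clicks) go to the window in view, so there is no
  // "selected window" for the user to lose track of.
  onScroll(amount: number): WindowId | null {
    return this.gazedWindow;
  }
}
```

When the user looks away from a window, nothing happens to its cursor at all; it simply stops receiving mouse deltas, which is exactly the stays-where-you-left-it behavior the proposal calls for.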

Acting across multiple windows

Confining input effects to within windows seems like it would break a lot of interaction gestures which require moving across multiple windows, but I think everything one must do that way now can work at least as well in my proposal.

Again, we do not need to move the cursor across windows to select one; attention tracking eliminates the need for a selected window.

One need not move the cursor across windows to do window management. The edges of windows remain drag handles for resizing them and moving them around, and as I said above, with the cursor confined to the window, these become easier targets. One can combine this with the buttons and other controls I envision putting at those edges: drag to affect the window, click to use the control. I am a crank who prefers a tiled display to overlapping windows, but handling overlapping windows is fine: look at the protruding bit and click to pop it to the front.

Drag-and-drop across windows would require a bit of an adjustment, but eye tracking enables an improvement. One starts dragging an object in one window — turns to the other window — and that window’s cursor is right there with the object attached, responding to mouse movements. This will be more graceful, with less mouse movement and less risk of dropping onto the wrong window when dragging between windows on separate screens.

Imagine working with two text documents, referencing an old one while authoring a new one, bouncing back-and-forth between the two. Turning from the new document to the old one briefly, one might scroll to the next page in the old document, use the cursor in that document to select something one wants to quote, copy it, then turn back to the new document to find the cursor waiting right where one left it, ready to paste in the quote.

Plain text as the input exception

Keyboard shortcuts would act on the window one is looking at, just like mouse movement and clicks. But plain text is a little trickier.

It should be obvious how, in the new-and-old document example above, one may want to type into the new document while looking at the old one. There are a lot of situations like that. Text input boxes need a standard interface element allowing one to lock the box as the target of plaintext input from the keyboard; when that is active, other text input boxes show as unavailable. So that one need not hunt down the locked text input box to unlock it, any text box showing as unavailable would offer a control to unlock the old locked box, either allowing text input to again go where one is looking ... or immediately making that text box the new locked input target.
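The lock behaves like a simple override on the gaze-follows-input rule. A minimal sketch, with all names invented for illustration:

```typescript
// Hypothetical sketch of the text-input lock. Names are invented for
// illustration; this corresponds to no real toolkit API.

type BoxId = string;

class TextInputRouter {
  private locked: BoxId | null = null;
  private gazedBox: BoxId | null = null;

  // Called by the (assumed) eye tracker when gaze lands on a text box,
  // or with null when the user is looking at no text box at all.
  onGaze(boxId: BoxId | null): void {
    this.gazedBox = boxId;
  }

  lock(boxId: BoxId): void {
    this.locked = boxId;
  }

  unlock(): void {
    this.locked = null;
  }

  // A locked box captures keystrokes even while one looks elsewhere —
  // typing into the new document while reading the old one. With no
  // lock held, typing simply goes to the box in view.
  targetForKeystroke(): BoxId | null {
    return this.locked ?? this.gazedBox;
  }

  // While a lock is held, every other box shows as unavailable; each
  // unavailable box is where the unlock / steal-the-lock control lives.
  isAvailable(boxId: BoxId): boolean {
    return this.locked === null || this.locked === boxId;
  }
}
```

The unavailable boxes are what make the lock discoverable: the control to release or steal the lock is always under the user’s gaze, on whichever box they are trying to type into.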

Having proposed this interface idiom, I am tempted to want this ability to lock the text input target, overriding the selected window, in the conventional systems we have now!

29 January 2024

Scott Alexander Siskind & Slate Star Codex

Scott Alexander Siskind — of the long-running blog Slate Star Codex and newer Substack newsletter Astral Codex Ten — cannot be trusted as a source or a commentator because he carries water for far right ideas which are transparently wrong both factually and morally.

The core critique

Elizabeth Sandifer says:

Here’s what any good overview of Scott Alexander is going to need to say.
  1. He emerged out of Eliezer Yudkowsky’s LessWrong community. This community consists almost entirely of hobbyists attempting to reinvent academic disciplines from scratch. The results occasionally have moments where they frame interesting questions or perspectives that might not occur to an expert. The signal to noise ratio, however, is extremely low. Most of the time they just fall blindly into well-documented errors and refuse to admit it.
  2. Alexander belongs to a particular subgroup of that community who has gotten seduced by pseudoscientific ideas about the genetics of race. These ideas should be dismissed the same way climate denial is. Alexander is good at looking like the wide-eyed, innocent speculator who’s merely asking questions. Maybe he actually is. In practice, that community’s tolerance for racist pseudoscience is routinely exploited by white supremacists.
  3. This is extra specifically true for Alexander himself, whose blog has created a community that is extremely fertile grounds for white nationalist recruitment. Alexander cannot plausibly claim ignorance of this. If you’re writing any sort of overview that does not have a very clear view of those three facts, you’re probably going to end up directing attention and curiosity towards Alexander’s community and in doing so aiding white nationalist recruitment.

Sandifer wrote a book on that circle of people, Neoreaction: A Basilisk, which I recommend as both entertaining and insightful in understanding neo-reactionaries (“NRx”). I maintain my own index of resources about NRx; they are a far right movement distinct from fascism but no less evil in their opposition to democracy and equality.

Alexander’s writing is dangerously credulous about far right ideas. One need not know more than that.

But a decent respect to the opinions of mankind requires that I should declare the causes for concluding that Alexander’s thinking is so dangerously bad, I should examine why Sandifer and I think he might be a useful idiot for reactionaries rather than really one of them, and I should examine the implications.

Rhetorical & intellectual sloppiness

Alexander admits that he does not write carefully.

It takes me a couple of hours to write a post.

I work a forty hour week, so having a couple of hours each week to write posts isn’t really a problem. In my own life, I’ve noticed that time is almost never a real constraint on anything, and whenever I think it is, what I mean is “I have really low energy and I want some time to rest before doing the next thing”. But writing posts doesn’t really take that much energy so I am okay with it.

Also, I have no social life and pretty much avoid all my friends and never talk to anybody, which is helpful.

I guess I don’t really understand why it takes so many people so long to write. They seem to be able to talk instantaneously, and writing isn’t that different from speech. Why can’t they just say what they want to say, but instead of speaking it aloud, write it down?

Sandifer does a deep dive into the implications of his resulting rhetorical style in her essay The Beigeness, or How to Kill People with Bad Writing: The Scott Alexander Method:

My contention is that Siskind’s prose — which I view as representative of a larger style — works through a sort of logorrheic beigeness. Siskind is good at giving readers the sense that they are being intelligent — that they are thinking about serious issues at considerable length. In practice, he says… not quite nothing, but very little, at least on a moment to moment basis. Instead he engages in a litany of small bullshits — shoddy arguments that at their best compound into banality, but at their worst compound into something deeply destructive, all made over such length that smoking guns are hard to find, which is of course the point.

I have quibbles with some of the particular criticisms Sandifer makes in the full post, but her central point is correct: Alexander’s writing demonstrates a kind of anti-rigor in a form which obscures its worst implications.

In a telling example, Kevin Drum looks closely at Alexander criticizing the FDA and discovers that he is intellectually incoherent:

I mentioned in a section of my recent post, “Sympathy For The Devil”, that I think the FDA as an agency is often quite good. They’re smart, caring people, and they usually carry out their mandate well — so well that the few exceptions, like aducanumab, are highly newsworthy. I have no objection to Dr. Gura’s mostly-positive portrayal of them.

This bears no resemblance — none — to [Alexander’s] diatribe in Part 1:

Every single hour of every single day the FDA does things exactly this stupid and destructive....I am a doctor and sometimes I have to deal with the Schmoe’s Syndromes of the world and every f@$king time there is some story about the FDA doing something exactly this awful and counterproductive.

I have no idea how you can write “they usually carry out their mandate well” in one place and then, in your main post, just go ahead and repeat your original belief — backed by an example you know is wrong — that the FDA does stupid and destructive things on practically a daily basis.

That shows Alexander to be confused. But he is also wrong about easy questions — logically, factually, and morally.

Very bad ideas

Credulity about NRx

Consider a relatively mild example from Reddit in 2019. For the uninitiated, “The Cathedral” is NRx’ers’ name for the conspiracy of leftists which supposedly holds an iron grip on all meaningful institutions.

When the nrxers talk about the Cathedral, I find it tempting — sure, they flirt with conspiracy theory, but it seems they’re at least good conspiracy theories, in the sense that they explain a real phenomenon. Kennedy assassination conspiracy theories have their flaws, but one of their strong points is that Kennedy is in fact dead. If you’re coming up with a conspiracy theory to explain why people are biased in favor of capitalism, that seems almost like coming up with an Obama assassination conspiracy theory — not only are conspiracy theories bad, but this one doesn’t even explain a real fact.

Trump got elected after promising tariffs and immigration restrictions that no business or plutocrat wanted. Bernie Sanders was on top of the prediction market for next Dem nominee as of last week (today it’s Biden, but Sanders is close behind). The richest people in the world spend most of their time constantly apologizing to everyone for trumped up charges, and loudly (one might say fearfully) confessing they don’t deserve their wealth. This just really doesn’t seem like the world where capitalism is in control of the narrative, unless it’s doing some weird judo I’ve never heard communists try to explain.


Alexander grants that, though it is silly of reactionaries to see a conspiracy, capitalism has lost control of society, just as the NRx’ers say. And where does he find the evidence? In critiques of capitalism? No. In discussion of alternatives? No. Merely in the fact that some policies do not perfectly suit the liking of rich people!

This is detached from logic and reality. Alexander doubles down and gets weirder.

The latest studies suggest that the rich do not get their policy preferences enacted any more than any other class (a study came out before showing the opposite, but seems to have been wrong). I’m not sure what else you mean by “capital really is in power”, other than that rich people can buy yachts or something.

I’m tempted to take an extreme contrarian position that everything interesting happens in a parallel status economy. The money economy isn’t “in power”, it’s a (weak) brake on power, or a force orthogonal to power which is helpful in not concentrating power 100%. That's why overthrowing capitalism keeps producing authoritarians.

Where is the “yes let’s overturn capitalism” side of the debate represented? Certainly not in the editorial line of any major newspaper, TV station or radio station.

I mean, it’s better represented than libertarianism. Yes, the Overton Window goes between “slightly more capitalism” and “slightly less capitalism”, but the “slightly less capitalism" side always seems to have the upper hand. I agree the war of ideas isn't yet a total massacre, I’m just saying the anti-capitalist side always seems to be winning, and pro-capitalist on the defensive. Propaganda victory exerts a weak pressure on reality, it doesn’t shift it all at once.


Since capitalism is not a locus of actual power, calls for Slightly Less Capitalism will eventually develop into a massacre.


On SSC, he offers other insights from NRx to harness:

I’ve said many times that I think the Reactionaries have some good ideas, but the narrative in which they place them turns me off (I feel the same way about Communists, feminists, libertarians, et al). Even though I like both basic income guarantees and eugenics, I don’t think these are two things that go well together — making the income conditional upon sterilization is a little too close to coercion for my purposes. Still, probably better than what we have right now.

We must skip over how ripe it is to put neo-reactionaries, libertarians, feminists, and communists in the same category, to pay attention to …


Among the good ideas to draw from reactionaries Alexander finds … eugenics? Yikes.

With his rejection of “coercion”, Alexander gives us the cold comfort that he would not march “undesirables” into death factories at rifle-point. Cold comfort indeed, while we untangle what he does mean.

He brings up eugenics in more than just this bizarre aside, so we have material for trying to make sense of things. On his LiveJournal back in 2012 he asked:

So if you had to design a eugenics program, how would you do it? Be creative.

I’m asking because I’m working on writing about a fictional society that practices eugenics. I want them to be interesting and sympathetic, and not to immediately pattern-match to a dystopia that kills everyone who doesn’t look exactly alike.

To be generous: contrarian science-fictional world-building as a whetstone for thinking about principles has a noble tradition which includes satire, cautionary thought experiments, and visualizing dystopian outcomes. But one must be wary with topics like eugenics where bad actors speaking in bad faith do a lot of Just Asking Questions as a veil over the monstrous answers they have in mind. Alexander is not treading nearly carefully enough.

The discussion community does not respond with the mortified “whut?!?” it should. Instead, one commenter replies with …

Paying undesirables to be sterilised is happening! There’s a charity that pays drug addicts £200 to be snipped: Project Prevention. Seems like a good idea to me.

How about a benign eugenics regime that is about preserving diversity of human mental types while minimising disease? Everyone is profiled, and nerdy Aspergers types are encouraged to mate iwth empathisers rather than other nerds, ensuring that they don’t make full on autistic babies. Some of the funniest, most creative people I know are definitely touched by the spectrum and have fully autistic relatives in some cases, so old-fashioned eugenics response of sterilising everyone who is even vaguely autistic would destroy a lot of human capital.

In general, a eugenics regime that isn’t pushing toward a single human ideal, but is aware of the value of diversity, could be sympathetic. Maybe go the other way and have them maintain castes of specially bred ultra-systematisers, ultra-empathisers, synaesthetes, etc. The key to avoiding a retread of Brave New World or Morlocks/Eloi is that the castes are not ranked, and everything is done to make each caste happy. There would have to be safeguards to stop the empathisers manipulating everythgin for their own benefit — what would those be? At some point, are the castes reproductively isolated? What if there is some slow-motion catastrophe where humans will have to be very different a few generations hence — maybe it becomes obvious that climate change will collapse advanced civlisation and humans have to rebuild from hunter-gatherer level, so it becomes necessary to breed robust humans who’ll survive a population bottleneck ...

Alexander, as squid314, responds to this with none of the pointed questions one should ask. (“Castes? Human capital?! Arranged mating?!? Undesirables!?!”) Instead, he is enthusiastic about this creepy real-world organization:

I ... actually think I am probably going to donate to that charity next time I get money. Though I’d feel better if it was something more reversible.


And. Of course. This does go where one expects eugenics to go …

Racist pseudoscience about IQ

On SSC in 2016, Alexander praised Charles Murray, author of the notoriously racist bullshit The Bell Curve.

The only public figure I can think of in the southeast quadrant [of an imagined political compass for poverty policy] with me is Charles Murray. Neither he nor I would dare reduce all class differences to heredity, and he in particular has some very sophisticated theories about class and culture.

That is not the only time:

my impression of Murray is positive [⋯] One hopes Charles Murray pursues what he thinks is true, and any offense caused is unintentional

Neither post directly supports Murray’s racism. But both are stupidly credulous about what he pretends to advocate. His arguments are overwhelmingly nonsense. Even if Alexander is not knowingly promoting Murray’s most evil and disingenuous arguments, taking Murray as credible demonstrates hopelessly bad judgement.

In the course of addressing current “scientific racists” offering evil crackpot pseudoscience about intelligence, Kiera Havens’ Medium post Oroborous racks up pointers to these and other examples of how Alexander is deeply entangled with that movement.

Siskind chose to deliberately hide his affinity for race science for writings on his popular blogs, SlateStarCodex and AstralCodexTen. In 2014 emails where he detailed his strategy for mainstreaming hereditarianism came to light and Siskind (with all the confidence of a toddler emphatically declaring through crumbs and chocolate that THEY did not eat that cookie) posted a categorical denial on another one of his websites, raikoth.net [a reference the utopian society Siskind spent years developing]. The same website linked to an account used six months prior to solicit resources on Ashkenazi IQ to improve the arguments on “my side”. The recommended items later emerge as a lengthy post on SlateStarCodex where he finds the same discredited theory Pinker promoted “really compelling”.


Siskind defends the genetic basis of IQ in 2016 and 2021, often citing Plomin (who was wrong in many different ways). In 2017 he sighs to his audience that the hereditarian left “seems like as close to a useful self-identifier as I’m going to get”. In 2012 he solicits thought experiments on eugenics, graduating to “I like both basic income and eugenics” in 2013. A now deleted 2017 comment has him argue that the science isn’t settled and skull-measuring is actually a scientifically rigorous way to determine cognitive ability. When challenged on the welcoming atmosphere he is creating for ‘race science’ and its proponents (also in 2017), Siskind says that people on Twitter seem to think Emil Kirkegaard is okay, a claim that Kirkegaard later uses to convince himself he’s not a crackpot. To put a finer point on this one — Kirkegaard is *also* installed at [self-described scientific racist] Richard Lynn’s publishing mill [the Ulster Institute for Social Research], started his own self-published, self-reviewed journal to publish folks like Willoughby and Fuerst (and host their conversations), and as part of his truly enormous body of work to promote scientific racism, spent years seeding Wikipedia with hereditarian talking points.

Alexander avoiding this stuff in his SSC essays explains how I missed it, but it does sometimes turn up. On SSC in 2020 Alexander said:

Normally this would be a hidden thread, but I wanted to signal boost this request for help by Professor Steve Hsu, vice president of research at Michigan State University. Hsu is a friend of the blog and was a guest speaker at one of our recent online meetups – some of you might also have gotten a chance to meet him at a Berkeley meetup last year. He and his blog Information Processing have also been instrumental in helping me and thousands of other people better understand genetics and neuroscience. If you’ve met him, you know he is incredibly kind, patient, and willing to go to great lengths to help improve people’s scientific understanding.

Hsu is unmistakably aligned with Holocaust deniers, white nationalists, and racist pseudoscientists. Not maybe kinda. Not by coy implication. Directly. He is not someone one should support, have in one’s community, or point to for help “understanding genetics and neuroscience”. Alexander’s support for Hsu is unforgivable.

Alexander also financially sponsors Quillette, another bad actor so pernicious that I keep a page about them.

Quillette is an instrument for credibility-washing evil far right pseudo-intellectual bullshit.

I am sympathetic to people who get fooled by an article from them. They publish a lot of genuinely intriguing contrarian articles, often by left-leaning commentators, to create an impression that they are a venue for smart, serious, adventurous ideas. But this is a ploy, to create a good impression so that one becomes open to entertaining one of their articles arguing Oh So Reasonably for [⋯] racist & sexist pseudoscience, nonsense about “censorship” by the Left, and even doxxing “antifa” journalists knowing that the violent fascist cult Atomwaffen used their article as a “Kill List”.

It would be bad enough if Alexander shared an article from them, or a pointer to them. But he gives them money. Unforgivable.

Alexander justifies himself

In 2021, Topher T Brennan shared a 2014 email Alexander sent him defending his thinking and motives in addressing reactionaries’ ideas.

I’ve decided to say “screw it” and post receipts showing that Scott Siskind (the guy behind Slate Star Codex) isn’t being honest about his history with the far-right.

The context is that I’d been publicly critical of the rationalist community’s relationship with a branch of the online far right that called themselves “neoreactionaries”, and Scott (a vague internet acquaintance at the time) basically responded by saying, “oh, I agree the people you’re thinking of don't have much of value to say” but offered to point me to supposedly “better” examples of neoreactionary thought. This is what he sent me—something I was very much not expecting. (And no, he did not first say “can I tell you something in confidence?” or anything like that.)

Posting this now because Scott is accusing Cade Metz [author of the NYT article Silicon Valley’s Safe Space: Slate Star Codex] of dishonesty and a lot of people are jumping on that to smear Metz and the NYT. The thing is, Metz never said Scott endorsed the far-right or anything like that — just that the Slate Star Codex community was far more welcoming to the far-right than to so-called “SJWs”. That’s a simple fact that has been a matter of public record for years. Scott and his defenders say it’s dishonest to point that out because it might lead people to infer Scott is far more sympathetic to the far-right than he’s admitted publicly. But the inference is correct.

I feel a certain hesitation about re-sharing a message Alexander sent in confidence, but many Alexander critics reference it so the cat is already out of the bag … and it is too illuminating to ignore.

Some context for the uninitiated:

  • “HBD” stands for “human biodiversity”, a term used by people promoting intellectually dishonest racist pseudoscience about how different “subgroups” of humanity are different from each other, focusing of course on differences in intelligence & temperament
  • “LW” is short for LessWrong, the Rationalist forum focused on the ideas of the weird crank Eliezer Yudkowsky whom Sandifer criticizes in the references at the top of this post
  • Robin Hanson is a creepy crank prominent in the Rationalist community
  • “Moldbug” is the nom de guerre of NRx leader Curtis Yarvin
  • RationalWiki is a wiki maintained by the skeptic community — a useful place to start when looking for resources debunking bad ideas & bad actors; you’ll notice that this post points to their article on Alexander early on

I said a while ago I would collect lists of importantly correct neoreactionary stuff to convince you I’m not wrong to waste time with neoreactionaries. I would have preferred to collect stuff for a little longer, but since it's blown up now, let me make the strongest argument I can at this point:

1. HBD is probably partially correct or at least very non-provably not-correct.


This then spreads into a vast variety of interesting but less-well-supported HBD-type hypotheses which should probably be more strongly investigated if we accept some of the bigger ones are correct. See eg http://hbdchick.wordpress.com/2012/11/08/theorie/ or http://en.wikipedia.org/wiki/Albion%27s_Seed.

(I will appreciate if you NEVER TELL ANYONE I SAID THIS, not even in confidence. And by “appreciate”, I mean that if you ever do, I’ll probably either leave the Internet forever or seek some sort of horrible revenge.)

2. The public response to this is abysmally horrible.

See for example Konk’s comment http://lesswrong.com/r/discussion/lw/jpj/open_thread_for_february_1824_2014/ala7 which I downvoted because I don’t want it on LW, but which is nevertheless correct and important.

See also http://radishmag.wordpress.com/2014/02/02/crazy-talk/

3. Reactionaries are almost the only people discussing the object-level problem AND the only people discussing the meta-level problem.

Many of their insights seem important. At the risk (well, certainty) of confusing reactionary insights with insights I learned about through Reactionaries, see:


4. These things are actually important

I suspect that race issues helped lead to the discrediting of IQ tests which helped lead to college degrees as the sole determinant of worth which helped lead to everyone having to go to a four-year college which helped lead to massive debt crises, poverty, and social immobility (I am assuming you can fill in the holes in this argument).

I think they’re correct that “you are racist and sexist” is a very strong club used to bludgeon any group that strays too far from the mainstream — like Silicon Valley tech culture, libertarians, computer scientists, atheists, rationalists, et cetera. For complicated reasons these groups are disproportionately white and male, meaning that they have to spend an annoying amount of time and energy apologizing for this. I’m not sure how much this retards their growth, but my highball estimate is “a lot”.

5. They are correct about a bunch of scattered other things

the superiority of corporal punishment to our current punishment system (google "all too humane" in http://slatestarcodex.com/2013/03/03/reactionary-philosophy-in-an-enormous-planet-sized-nutshell/ ). Robin Hanson also noted this, but there’s no shame in independently rediscovering a point made by Robin Hanson. I think the Reactionaries are also correct that it is very worrying that our society can’t amalgamate or discuss this belief. various scattered historical events which they seem able to parse much better than anyone else. See for example http://foseti.wordpress.com/2013/10/01/review-of-the-last-lion-by-paul-reid/ Moldbug’s theory of why modern poetry is so atrocious, which I will not bore you by asking you to read.

Michael successfully alerted me to the fact that crime has risen by a factor of ten over the past century, which seems REALLY IMPORTANT and nobody else is talking about it and it seems like the sort of thing that more people than just Michael should be paying attention to.

6. A general theory of who is worth paying attention to.

Compare RationalWiki and the neoreactionaries. RationalWiki provides a steady stream of mediocrity. Almost nothing they say is outrageously wrong, but almost nothing they say is especially educational to someone who is smart enough to have already figured out that homeopathy doesn't work. Even things of theirs I didn’t know — let’s say some particular study proving homeopathy doesn't work that I had never read before — doesn’t provide me with real value, since they fit exactly into my existing worldview without teaching me anything new (ie I so strongly assume such studies should exist that learning they actually exist changes nothing for me).

The Neoreactionaries provide a vast stream of garbage with occasional nuggets of absolute gold in them. Despite considering myself pretty smart and clueful, I constantly learn new and important things (like the crime stuff, or the WWII history, or the HBD) from the Reactionaries. Anything that gives you a constant stream of very important new insights is something you grab as tight as you can and never let go of.

The garbage doesn’t matter because I can tune it out.

7. My behavior is the most appropriate response to these facts

I am monitoring Reactionaries to try to take advantage of their insight and learn from them. I am also strongly criticizing Reactionaries for several reasons.

First is a purely selfish reason — my blog gets about 5x more hits and new followers when I write about Reaction or gender than it does when I write about anything else, and writing about gender is horrible. Blog followers are useful to me because they expand my ability to spread important ideas and network with important people.

Second is goodwill to the Reactionary community. I want to improve their thinking so that they become stronger and keep what is correct while throwing out the garbage. A reactionary movement that kept the high intellectual standard (which you seem to admit they have), the correct criticisms of class and of social justice, and few other things while dropping the monarchy-talk and the cathedral-talk and the traditional gender-talk and the feudalism-talk — would be really useful people to have around. So I criticize the monarchy-talk etc, and this seems to be working — as far as I can tell a lot of Reactionaries have quietly started talking about monarchy and feudalism a lot less (still haven't gotten many results about the Cathedral or traditional gender).

Third is that I want to spread the good parts of Reactionary thought. Becoming a Reactionary would both be stupid and decrease my ability to spread things to non-Reactionary readers. Criticizing the stupid parts of Reaction while also mentioning my appreciation for the good parts of their thought seems like the optimal way to inform people of them. And in fact I think it’s possible (though I can't prove) that my FAQ inspired some of the recent media interest in Reactionaries.

Finally, there’s a social aspect. They tend to be extremely unusual and very smart people who have a lot of stuff to offer me. I am happy to have some of them (not Jim!) as blog commenters who are constantly informing me of cool new things (like nydwracu linking me to the McDonalds article yesterday)

8. SERIOUSLY SERIOUSLY, the absurdity heuristic doesn’t work

You’re into cryonics, so you’ve kind of lost the right to say “These people, even though they’re smart, are saying something obviously stupid, so we don’t have to listen to them”

Drew has even less of a right to say that — he seems to be criticizing the Reactionaries on the grounds of “you wouldn’t pay attention to creationists, would you?” even while he discovered Catholic philosophy and got so into it that he has now either converted to Catholicism or is strongly considering doing so.

If there is a movement consisting of very smart people — not pseudointellectual people, like the type who write really clever-looking defenses of creationism — then in my opinion it's almost always a bad idea to dismiss it completely.

Also, I should have mentioned this on your steelmanning creationism thread, but although I feel no particular urge to steelman young earth creationism, it is actually pretty useful to read some of their stuff. You never realize how LITTLE you know about evolution until you read some Behe and are like “I know that can’t be correct...but why not?” Even if it turned out there was zero value to anything any Reactionary ever said, by challenging beliefs of mine that would otherwise never be challenged they have forced me to up my game and clarify my thinking. That alone is worth a thousand hours reading things I already agree with on RationalWiki.

Some call this peek into Alexander’s thinking & motives a smoking gun which demonstrates that he is a crypto-reactionary. I want to chew on that …

So what is it with Alexander?

We can conclude that we must shun Alexander and his work for carrying water for dangerous nonsense, without needing to understand his motives and thought processes.

But I have reasons to want to dig for an understanding of him.

Sandifer and I suggest that we might read Alexander as foolish rather than just a crypto-reactionary. Why?

Despite knowing the worst from him, I confess that I still find Alexander’s long 2014 poetic evocation of rigorous liberalism In Favor Of Niceness, Community, And Civilization moving.

Liberalism does not conquer by fire and sword. Liberalism conquers by communities of people who agree to play by the rules, slowly growing until eventually an equilibrium is disturbed. Its battle cry is not “Death to the unbelievers!” but “If you’re nice, you can join our cuddle pile!”

But some people, through lack of imagination, fail to find this battle cry sufficiently fear-inspiring.

Alexander was early to take a hard and critical look at NRx on SSC in 2013, predating most other critiques I know about. In those posts he steelmans NRx ideas … and finds them badly wanting. His anti-Reaction essay concludes:

Some Reactionaries are saying things about society that need to be said. A few even have good policy proposals. But couching them in a narrative that talks about the wonders of feudalism and the evils of the Cathedral and how we should replace democracy with an absolute monarch just discredits them entirely.

Recall how in the leaked email, Alexander called NRx:

a vast stream of garbage with occasional nuggets of absolute gold

I find it impossible to imagine that Alexander concocted these as nothing more than a smokescreen over a true reactionary agenda. Yet Alexander unmistakably supports some of the worst reactionary ideas and actors. How to reconcile that?

Jeff Eaton, one of my favorite commentators on far right ideology in the US, distils the leaked email and finds too much sympathy for NRx:

In the context of what he’s writing (i.e., the whole message rather than an isolated phrase or two) it seems straightforward that:

  1. He believes the NRx movement / thinkers are tackling critical questions few other people are
  2. They don’t get everything right, but that is better than not trying
  3. He takes information from them credulously and considers them a unique pool of insights
    1. He cites specific ideas NRx folks have offered to him that critics have debunked but Scott accepted and went on to consider important ingredients in his thinking
    2. He believes smart people should listen to them, because of the positives
  4. He avoided publicly associating himself with NRx in part because he felt it would affect his credibility with non-NRx people, not because he condemned the movement’s priors or conclusions
  5. He considers almost everything to be a stream of garbage that intelligent people must sort through to find the valuable elements
  6. He believes that NRx is on the balance better than other “dismissable” ideas like homeopathy, and should be listened to
  7. He believes that smart people like himself will not be affected by whatever poor conclusions or bad priors the NRx movement brings to the table

In that context the “stream of garbage” phrase doesn’t carry a lot of weight.

I disagree with Eaton a bit on that last point — I find it important to distinguish between Alexander simply supporting the movement versus finding it wrong-but-instructive — but agree with his conclusion about Alexander’s failure.

The “gold” Alexander finds is not merely worthless. It is poisonous.

So — again — what is going on with him?

Consider how Alexander also wrote a long Non-Libertarian FAQ which, like the Anti-Reactionary FAQ, steelmans libertarian ideas then rejects them.

The main reason I’m writing this is that I encounter many libertarians, and I need a single document I can point to explaining why I don’t agree with them.

I intend the post you are reading now to do something similar: accumulating particulars about Alexander and examining generous readings of him, to criticize him thoroughly rather than just dismissively. I have done similar posts before.

Given the parallel between posts he and I have both done, I recognize Alexander as a species of nerd who has a high tolerance for intellectual disgust, has a taste for examining evil ideas and picking them apart, and lives in the sphere of California nerds who talk about this stuff. That recognition comes of our similarities. It would comfort me to believe that we fundamentally differ because his liberal protestations are just an act, to believe that he only feigns his commitment to the deep egalitarian and democratic values at the core of my own thinking. I might then feel confident that I could not go as wrong as he has.

Instead, I take his liberal side as sincere, which inspires my discomfort. If we are so alike, I must dread the possibility that I could make comparable mistakes. After all, I failed by recognizing Alexander’s worst too slowly. I have an obligation to examine what brought him to where he is, to learn to avoid his failings.

An anonymous commenter offers a reading of Alexander’s driving psychology, in response to another example of his mortifying moral tone-deafness.

(For the uninitiated, MIRI is an artificial intelligence “research” project entangled with the Rationalist community.)

Many of Scott’s house-mates from the rationalist community are extremely weird and awkward (I guess I can’t name them without sharing personal info so you’ll have to take my word for it) and are often sad about their lack of status. They are very wealthy by worldwide standards if not by the absurd local-regional standards which is still enough to at least feel obligated to feel guilty by community standards. (Think: people who are making donations to MIRI well over the US median household income)

If you combine this with the frequent inability of people to perceive their own privilege and the high levels of narcissist-like traits exhibited in the rationalist community you end up with people around you saying “I have all this money and yet no one respects me for the Gift to the world that I am and instead keeps treating me like a weirdo…” and maybe you start thinking money doesn’t matter much.

Some of this likely stems from conflating status and power as a result of overvaluing what other people think of you as a result of living in a group house (similar to how high-schoolers are stereotyped as thinking their life is over at every bump in their social lives).

Let me offer an alternative explanation (in pseudo mathy terms so the rationalists can pretend that its deeply insightful): Power is a normalized product of many factors:

     P = ( F1you * F2you * F3you * … * Fnyou ) / sum over everyone of ( F1 * F2 * … * Fn )

and many of these factors are highly correlated with wealth: education, connections to other people with high power: things like free time, safety from starvation, good health, affiliation with socially powerful groups, level of control over the time of others (e.g. owning a business), freedom from biological/social persecution…

Some of these factors could rightfully be considered latent forms of wealth in themselves (in that they inevitably result from or lead to wealth). As a result, P changes with wealth raised to some high power but weakness in a non-wealth respect can still handicap you.

So yes, you can have some modicum of wealth and still have low power by being very weak in other respects, such as not having enough EQ to realize when your “just asking” has ventured into extremely offensive and impolitic waters or too much selfishness to cut it out if you do realize. This does not change the fact that wealth is a universal solvent able to radically simplify many concerns and a nearly impassable barrier for many goals.

Over time, you become your friends in many respects. Choosing who you spend time with is one of the biggest things someone can do to influence their future personality. Comparing the Scott of today to the one who wrote the anti-libertarian FAQ feels to me like looking at someone who hasn’t made the best decisions of this kind.

Is it the people or the philosophy? asks about the Rationalist community reflected in Alexander and SSC, describing a certain white-guy-nerd hubris. I too am a white guy nerd. I should beware.

For the uninitiated:

  • The Sequences is a web book of eccentric essays about thinking by crank Rationalist star Eliezer Yudkowsky
  • Bayes’ Theorem is a way of approaching probability problems which Yudkowsky and many other Rationalists tend to emphasize as inspiring useful intuitions

Every once in a while, someone with real credentials in a trendy domain like genetics or economics will drop in to mention how jarring it is to see so many people talking so enthusiastically about their academic discipline with such esoteric vocabulary, when the vast majority of it is utter horseshit that wouldn’t pass a 101-level class. When I did this, one response I got was someone, with no apparent irony, accusing me of “LARPing”, because the scientific establishment is clearly just pretending to epistemological “prestige” that can only be truly earned by studying the Sequences.

PhD < Bayes’ theorem (< IQ).

This is, of course, the perfect description of what the Rationalist community is up to. Instead of labs they do their research in armchairs; instead of peer-reviewed journals they publish their findings in blogs (whose posts still get actual citations years later). But they’re creating a parallel world of alt-academia in fields that are already well trod by the genuine article, like philosophy and economics and quantum mechanics and oh-so-much genetics. They do happily admit real-world publications into their body of knowledge, but then they also admit pseudoscientists like that Kirkegaard guy or the crackpot virologist whom Peter Thiel paid to give people herpes in order to prove we don’t need the FDA. I think this is where Rationalists are the most cultlike and earn their capital R: not the abundance of unnecessary jargon/shibboleths, nor the alarming tendency to handle everything in their daily lives (even their health) through the community, but the whole ecosystem they have of theories and thought-leaders that are constantly discussed inside the community but that no one outside has ever heard of.

Maybe this comes back to the evasion of empathy, the reluctance to give any weight to other people’s experience — a doctor’s opinions about health are just as irrelevant as an African American's opinions about racism. In that sense it could just be one more battleground in the eternal conflict between rationalism and empiricism.

I take comfort that I have a view of power much more heavily informed by a social justice analysis and a view of expertise much more skeptical of fringe figures. But most of all, I find myself dwelling on Alexander saying in the leaked email:

The garbage doesn’t matter because I can tune it out

Alexander can be very smart. His essay Meditations on Moloch remains a marvel I recommend to anyone. His two essays on NRx are necessary reading if one wants to understand the movement. And my fascination with his Niceness, Community, and Civilization essay is not dulled by his failings at the values it describes but rather sharpened by the cautionary example of how thinking his principles protected him from bad actors failed, may even have made him dangerously overconfident.

But he is also very stupid. No, he cannot tune out the garbage.

This presents a problem, because someone has to debunk false and evil ideas and present the critiques to a candid world. This post is among my attempts to do that work. I have a sober sense of how that is difficult and important.

When I read something like The Turner Diaries, the men in the sub-micron filtration Chemturion suits emerge from the clean room with their vacuum traps. They hoover up the memes and peel back the protein sheath. The virus gets spun down in the centrifuge, cut to pieces with enzymes, run through the sequencer. And the white-coated man emerges from the lab, with a clipboard, and announces, “You know what? This looks very similar to something we indexed in 1995…” And they begin to formulate a vaccine — for widespread distribution, to stop the spread.

Alexander, for all his genuine wit at his best, is just far too intellectually reckless to be trusted with the delicate work of sifting through bad ideas. He demonstrates how garbage sources are bad for you even if you go in knowing they are garbage. His wit is a handicap. It makes him overconfident. It presents an illusion that he is a rigorous thinker.


This post emerged from a Twitter thread accumulating particulars, inspired by a nightmarishly bushy Twitter discussion with critics of Alexander’s. Twitter is of course both an inconvenient format and a dying platform serving the far right in Elon Musk’s hands, so I have made an attempt to capture the essentials here. I have refined this post since originally posting it and intend to keep refining and expanding it, so I encourage readers to drop me a line pointing to anything important I have missed.