Why Rationality?

I’ve identified as a rationalist for about five years now. The dictionary definitions are a bit off from what I mean, so here are the definitions I use:

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory.  The art of obtaining beliefs that correspond to reality as closely as possible.  This correspondence is commonly termed “truth” or “accuracy”, and we’re happy to call it that.

Instrumental rationality: achieving your values.  Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about.  The art of choosing actions that steer the future toward outcomes ranked higher in your preferences.  On LW we sometimes refer to this as “winning”.

Eliezer Yudkowsky, “What Do We Mean By ‘Rationality’?”, LessWrong

Of course, these two definitions are really subsets of the same general concept, and they intertwine considerably. It’s somewhat difficult to achieve your values without believing true things, and similarly, it’s difficult (for a human, at least) to search for truth in the absence of wanting to actually do anything with it. Still, it’s useful to keep the two apart, since they pick out distinct clusters in concept-space.

So if that’s what I mean by rationality, then why am I a rationalist? Because I like believing true things and achieving my values. The better question here would be “why isn’t everyone a rationalist?”, and the answer is that, if it were both easy to do and widely known about, I think everyone would be.

Answering why it isn’t well-known is more complicated than answering why it isn’t easy, so here are a handful of the reasons for the latter. (Written in the first person, because identifying as a rationalist doesn’t make me magically exempt from any of these things; it just means I know what they are and I do my best to fix them.)

  • I’m running on corrupted hardware. Looking at any list of cognitive biases will confirm this. And since I’m not a self-improving agent—I can’t reach into my brain and rearrange my neurons; I can’t rewrite my source code—I can only really make surface-level fixes to these extremely fundamental bugs. This is both difficult and frustrating, and to some extent scary, because it’s incredibly easy to break things irreparably if you go messing around without knowing what you’re doing, and you would be the thing you’re breaking.
  • I’m running on severely limited computing power. “One of the single greatest puzzles about the human brain,” Eliezer Yudkowsky wrote, “is how the damn thing works at all when most neurons fire 10-20 times per second, or 200Hz tops. […] Can you imagine having to program using 100Hz CPUs, no matter how many of them you had?  You’d also need a hundred billion processors just to get anything done in realtime. If you did need to write realtime programs for a hundred billion 100Hz processors, one trick you’d use as heavily as possible is caching. That’s when you store the results of previous operations and look them up next time, instead of recomputing them from scratch. […] It’s a good guess that the actual majority of human cognition consists of cache lookups.” Since most of my thoughts are cached, when I get new information, I need to resist my brain’s tendency to rely on those cached thoughts (which can end up in my head by accident and come from anywhere), and actually recompute my beliefs from scratch. Else, I end up with a lot of junk. (The caching idea itself is simple; there’s a toy sketch of it just after this list.)
  • I can’t see the consequences of the things I believe. Now, on some level, being able to do this (with infinite computing power) would be a superpower: in that circumstance, all you’d need is a solid grasp of quantum physics and the rest would just follow from there. But humans don’t just lack the computing power; we can believe, or at least feel like we believe, two inherently contradictory things. Psychologists call the tension this produces “cognitive dissonance”.
  • As a smart human starting from irrationality, I can easily be hurt by knowing more information. Smart humans naturally become very good at clever arguing—arguing for a predetermined position with propositions convoluted enough to confuse and confound any human arguer, even one who is right—and can thus use their intelligence to defeat itself with great efficiency. They argue against the truth convincingly, and can still feel like they’re winning while running away from the goal at top speed. Therefore, in any argument, I have to dissect my own position just as carefully as, if not more carefully than, I dissect those of my opponents. Otherwise, I come away more secure in my potentially-faulty beliefs, and more able to argue those beliefs against the truth.
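
To make the caching idea concrete, here’s a minimal Python sketch. It’s my own illustration, not anything from the quoted post, and the names in it (slow_square, stale_cache, cached_answer) are made up for the example: memoization answers a repeated question with a lookup instead of a recomputation, and a cache that is never invalidated keeps returning stale answers, which is exactly the “junk” problem.

```python
from functools import lru_cache
import time

@lru_cache(maxsize=None)          # memoize: repeat calls become cache lookups
def slow_square(n):
    time.sleep(0.1)               # stand-in for an expensive computation
    return n * n

slow_square(12)                   # computed from scratch (~0.1 s)
slow_square(12)                   # answered instantly from the cache

# The failure mode described above: a cache that is never invalidated.
stale_cache = {"is_college_worth_it": True}    # an answer cached years ago

def cached_answer(question):
    # Returns the stored entry without recomputing, even if the evidence has changed.
    return stale_cache.get(question)

print(cached_answer("is_college_worth_it"))    # still True, whatever I've learned since
```

The fix, in code as in cognition, is to invalidate the cache and recompute when the inputs change, rather than trusting whatever happens to be stored.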

The list above is short and incomplete, covering only some of the problems that are easiest to explain. It’s by no means the entire list, or the list that would lend the most emotional weight to the statement “it’s incredibly difficult to believe true things”. But I do hope it sheds at least a little light on the problem.

If rationality is really so difficult, then, why bother?

In my case, I say “because my goal is important enough to be worth the hassle”. In general, I think that if you have a goal that’s worth spending thirty years on, that goal is also worth trying to be as rational as humanly possible about. However, I’d go a step further. Even if the goal is worth spending a few years or even months on, it’s still worth being rational about, because not being rational about it won’t just waste those years or months; it may waste your whole career.

Why? Because the universe rarely arrives at your doorstep to speak in grave tones, “this is an Important Decision, make it Wisely”. Instead, small decisions build to larger ones, and if those small decisions are made irrationally, you may never get the chance to make a big mistake; the small ones may have already sealed your doom. Here’s a personal example.

From a very young age, I wanted to go to Stanford. When I was about six, I learned that my parents had met there, and I decided that I was going to go too. Like most decisions made by six-year-olds, this one wasn’t based on any meaningful intelligence, let alone the full cost-benefit analysis that such a major life decision should have required. But I was young, and I let myself believe the very convenient thought that following the standard path would work for me. That was not, itself, the problem. The problem was that I kept on thinking this simplified six-year-old thought well into my young adulthood.

As I grew up, I piled all sorts of convincing arguments around that immature thought, rationalizing reasons I didn’t actually have to do anything difficult and change my beliefs. I would make all sorts of great connections with smart, interesting people at Stanford, I thought, as if I couldn’t do the same in the workforce. I would get a prestigious degree that would open up many doors, I thought, as if working for Google weren’t just as prestigious, and didn’t pay you for the trouble. It will be worth the investment, the cached thoughts of society thought for me, and I didn’t question them.

I continued to fail at questioning them every year after, until the beginning of my senior year. At that point I was pretty sick of school, so this wasn’t rationality so much as a motivated search. But it was a search nonetheless, and I did reject the cached thoughts I’d built up in my head for so long. As I took the first step outside my bubble of predetermined cognition, I instantly saw a good number of arguments against attending Stanford. I realized that it had a huge opportunity cost, in both time and money: four years and hundreds of thousands of dollars are not things to part with that lightly.

And yet, even after I realized this, I was not done. It would have been incredibly easy to reject the conclusion I’d reached, just because I didn’t want all that work to have been a waste. I was so close: I had a high SAT score, I’d gotten good scores on six AP tests, including the only two computer science APs (the area I’d been intending to major in), and I’d gotten National Merit Commended Scholar status. All that was left was to complete my application, which I’m moderately confident I would have done well on, since I’m a good writer.

That bitterness could have cost me my life. Not in the sense that it would have killed me on the spot, but in the sense that everyone pays for the things they spend significant time on with pieces of their life, because everyone is dying. And it was here that rationality was my saving grace. I knew about the sunk cost fallacy. I knew that at this point I should scream “OOPS” and give up. I knew that at this point I should lose.

I bit my tongue, and lost.

I don’t know where I would have ended up if I hadn’t been able to lose here. The optimistic estimate is that I would have wasted four years, but gotten some form of financial aid or scholarship to lower the financial cost; that in the process of attending college I wouldn’t have picked up any more bad habits; that I wouldn’t have gone stir-crazy from the practical inapplicability of the material (most of what had frustrated me about school before); and that I would have come out the other end with a degree, not too much debt, and a non-zero number of new skills and connections. That’s a very optimistic estimate, though, as you can probably tell from the way I wrote out the details. (Writing out all the details that make the optimistic scenario implausible is one of my favorite ways of combating the planning fallacy.) There are a lot more pessimistic estimates, and it’s much more likely that one of those would have happened.

Just by looking at the decision itself, you wouldn’t think of it as a particularly major one. Go to college, don’t go to college. How bad could it be, you may be tempted to ask. And my answer is, very bad. The universe is not fair. It’s not necessarily going to create a big cause for a big event: World War I was caused by some dude having a pity sandwich. Just because you feel like you’re making a minor life choice doesn’t mean you are, and just because you feel like you should be allowed to make an irrational choice just this once doesn’t mean the universe isn’t allowed to kill you anyway.

I don’t mean to make this excessively dramatic. It’s possible that being irrational here wouldn’t have messed me up; I don’t know, I didn’t live that outcome. But I highly doubt this will be the only opportunity I get to be stupid. Actually, given my goals, I think it’s likely I’ll get a lot more, and that the next ones will have much higher stakes. In the near future, I can see people—possibly including me—making decisions where being stupid sounds like “oops” followed by the dull thuds of seven billion bodies hitting the floor.

This is genuinely the direction the future is headed. We are becoming more and more able to craft our destinies, but we are flawed architects, and we must double- and triple-check our work, else the whole world collapses around us like a house on a poor foundation. If that scares you, irrationality should scare you. It sure terrifies the fuck out of me.
