Thoughts from Planet Earth

Sometimes I look at the sky and I think, how high can I jump? Like, half a foot or so. And then I think, how high can I get in an airplane? That’s the kind of delta that human ingenuity gives us. And it’s not even the limit: how high can you get in a rocket ship?

But even so, the stars that I’m looking at are so much further away. If Earth were a single word, the nearest star would be ten times every word you’ll ever say in your life. And yet, we humans went from the half a foot we can jump to the upper stratosphere, where the tallest mountains are vague outlines of purple and white.

It took a lot of work. Not with our muscles, which can only get us about five feet up if we train relentlessly for years, but with our brains. Thousands of ordinary human brains, no smarter or greater than you or me, made the airplanes and the rocket ships. In fact, human brains did a whole lot more than that. They created the entire modern world.

I wake up in the morning to an alarm app that comes installed on my smartphone, which gets its data from a clock which runs on humanity’s collective knowledge of quantum physics and syncs with all the other phones via a network which runs on humanity’s collective knowledge of binary logic. I get dressed in clothes that came directly from somebody I’ve never met who donated them to a Goodwill on the opposite side of a continent from my current residence, and that person got them from a store which got them from a country I’ve never been to. I take a train to work which moves through a huge tunnel under the ocean at upwards of ten times the speed I can run. Humans made all of this! Some of it is pretty suboptimal—the fact that I have to get up at 6am certainly comes to mind—but you can’t deny that it’s incredibly cool.

If thousands of ordinary human minds were able to make all these things, I think that with a few thousand more, we’ll be able to make it all the way to those stars and to the planets that might orbit them.

What might we find there, on those distant worlds? Maybe nothing more than we minimally expect. Some interesting places, both habitable and hostile, to which we can add the beauty that comes with perception by intelligent life. And this isn’t a loss! Improving the diversity and span of human experience across the galaxy is one of the best futures I can imagine for us.

But maybe, just maybe, we might find someone else out there. Not humans in funny suits, like in sci-fi movies, but things which are more different from us than we are from petunias—because we and petunias both evolved on Earth, though our evolutionary branches separated aeons ago. Sentient things made of complex configurations of silicon, crystal, liquid, metal, or things even stranger. Sentient things which are further from humans than anything we know, but that we can still call people, because though they may not have human thoughts, emotions, or biases, they have goals and they don’t want to die and this is all we really need.

Think how similar we are to each other, compared to the others we might find out there. All humans look the same—bipedal ape-like things made of flesh and built from DNA. We think the same—simple animals which evolved higher thought, planning, and consciousness by a constant competition with one another that produced predictable patterns and errors in reasoning. We act the same—pack animals inclined to organize into social groups and gatherings who are practically mandated by our development process to use syntactic combinatorial language along with specific nonverbal gestures and facial expressions to communicate. We feel the same—emotional creatures motivated primarily by fear and secondarily by joy, sadness, anger, and love. We’re as alike as peas from the same pod.

And maybe this is the part of my mind that grew up on Star Trek talking, but it doesn’t make any sense that, if we could find and cooperate with these others—and I think we would, judging from how badly we want to not be alone in the universe—we would still cling to such silly things as the various -isms and -phobias, which are manifestations of inter-human hate and insecurity. How could we hate a fellow language-using, emotion-feeling, hairless ape built by DNA, when we can get along with electrically-charged systems of fractalized crystal and gaseous blobs of pulsating color?

If our distant ancestors could gradually build our modern world and our distant descendants could get along with these impossibly different aliens, how could it make sense for me—operating in a flawed modern world but made of the same stuff that made up the humans before me and will make up those after—to hate another human?

These are the things I think of, looking up into the sky at night.

Why Rationality?

I’ve identified as a rationalist for about five years now. The dictionary definitions are a bit off from what I mean, so here’s my definition.

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory.  The art of obtaining beliefs that correspond to reality as closely as possible.  This correspondence is commonly termed “truth” or “accuracy”, and we’re happy to call it that.

Instrumental rationality: achieving your values.  Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about.  The art of choosing actions that steer the future toward outcomes ranked higher in your preferences.  On LW we sometimes refer to this as “winning”.

Eliezer Yudkowsky, “What Do We Mean By ‘Rationality’?”, LessWrong

Of course, these two definitions are really subsets of the same general concept, and they intertwine considerably. It’s somewhat difficult to achieve your values without believing true things, and similarly, it’s difficult (for a human, at least) to search for truth in absence of wanting to actually do anything with it. Still, it’s useful to distinguish the two subsets, since they form distinct clusters in concept-space.

So if that’s what I mean by rationality, then why am I a rationalist? Because I like believing true things and achieving my values. The better question here would be “why is everyone not a rationalist?”, and the answer is that, if it were both easy to do and widely known about, I think everyone would be.

Answering why it isn’t well-known is more complicated than answering why it isn’t easy, so here are a handful of the reasons for the latter. (Written in the first person, because identifying as a rationalist doesn’t make me magically exempt from any of these things; it just means I know what they are and I do my best to fix them.)

  • I’m running on corrupted hardware. Looking at any list of cognitive biases will confirm this. And since I’m not a self-improving agent—I can’t reach into my brain and rearrange my neurons; I can’t rewrite my source code—I can only really make surface-level fixes to these extremely fundamental bugs. This is both difficult and frustrating, and to some extent scary, because it’s incredibly easy to break things irreparably if you go messing around without knowing what you’re doing, and you would be the thing you’re breaking.
  • I’m running on severely limited computing power. “One of the single greatest puzzles about the human brain,” Eliezer Yudkowsky wrote, “is how the damn thing works at all when most neurons fire 10-20 times per second, or 200Hz tops. […] Can you imagine having to program using 100Hz CPUs, no matter how many of them you had?  You’d also need a hundred billion processors just to get anything done in realtime. If you did need to write realtime programs for a hundred billion 100Hz processors, one trick you’d use as heavily as possible is caching. That’s when you store the results of previous operations and look them up next time, instead of recomputing them from scratch. […] It’s a good guess that the actual majority of human cognition consists of cache lookups.” Since most of my thoughts are cached, when I get new information, I need to resist my brain’s tendency to rely on those cached thoughts (which can end up in my head by accident and come from anywhere), and actually recompute my beliefs from scratch. Else, I end up with a lot of junk.
  • I can’t see the consequences of the things I believe. Now, on some level being able to do this (with infinite computing power) would be a superpower: in that circumstance all you’d need is a solid grasp of quantum physics and the rest would just follow from there. But humans don’t just lack the computing power; we can believe, or at least feel like we believe, two inherently contradictory things. In psychology, this is called “cognitive dissonance”.
  • As a smart human starting from irrationality, knowing more information can easily hurt me. Smart humans naturally become very good at clever arguing—arguing for a predetermined position with propositions convoluted enough to confuse and confound any human arguer, even one who is right—and can thus use their intelligence to defeat itself with great efficiency. They argue against the truth convincingly, and can still feel like they’re winning while running away from the goal at top speed. Therefore, in any argument, I have to dissect my own position just as carefully as, if not more carefully than, I dissect those of my opponents. Otherwise, I come away more secure in my potentially-faulty beliefs, and more able to argue those beliefs against the truth.
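The caching trick Yudkowsky describes in that quote is what programmers call memoization: store the answer the first time you compute it, and serve the stored copy ever after. A minimal sketch in Python (the names here are illustrative, not from any particular library) shows both the speedup and the failure mode the bullet warns about, since the cache never notices if the underlying facts change:

```python
def make_cached(fn):
    """Wrap a one-argument function with a lookup table (memoization)."""
    cache = {}
    def wrapper(x):
        if x not in cache:       # cache miss: actually recompute
            cache[x] = fn(x)
        return cache[x]          # cache hit: reuse the stored answer
    return wrapper

calls = []
def slow_square(x):
    calls.append(x)              # record every *real* computation
    return x * x

fast_square = make_cached(slow_square)
assert fast_square(4) == 16      # first call: computed from scratch
assert fast_square(4) == 16      # second call: pure cache lookup
assert calls == [4]              # the real work happened exactly once
```

The analogy to cached thoughts is direct: the lookup is nearly free, which is why the brain leans on it, but nothing in `wrapper` ever re-checks whether the stored answer is still right. Recomputing a belief from scratch means deliberately bypassing that lookup.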

This is a short and incomplete list of some of the problems that are easiest to explain. It’s by no means the entire list, or the list which would lend the most emotional weight to the statement “it’s incredibly difficult to believe true things”. But I do hope that it sheds at least a little light on the problem.

If rationality is really so difficult, then, why bother?

In my case, I say “because my goal is important enough to be worth the hassle”. In general, I think that if you have a goal that’s worth spending thirty years on, that goal is also worth trying to be as rational as humanly possible about. However, I’d go a step further. Even if the goal is worth spending a few years or even months on, it’s still worth being rational about, because not being rational about it won’t just waste those years or months; it may waste your whole career.

Why? Because the universe rarely arrives at your doorstep to speak in grave tones, “this is an Important Decision, make it Wisely”. Instead, small decisions build to larger ones, and if those small decisions are made irrationally, you may never get the chance to make a big mistake; the small ones may have already sealed your doom. Here’s a personal example.

From a very young age, I wanted to go to Stanford. When I was about six, I learned that my parents had met there, and I decided that I was going to go too. Like most decisions made by six-year-olds, this wasn’t based on any meaningful intelligence, let alone the full cost-benefit analysis that such a major life decision should have required. But I was young, and I let myself believe the very convenient thought that following the standard path would work for me. This was not, itself, the problem. The problem was that I kept on thinking this simplified six-year-old thought well into my young adulthood.

As I grew up, I piled all sorts of convincing arguments around that immature thought, rationalizing reasons I didn’t actually have to do anything difficult and change my beliefs. I would make all sorts of great connections with smart interesting people at Stanford, I thought, as if I couldn’t do the same in the workforce. I would get a prestigious degree that would open up many doors, I thought, as if working for Google weren’t just as prestigious, and it even pays you for the trouble. It will be worth the investment, the cached thoughts of society thought for me, and I didn’t question them.

I continued to fail at questioning them every year after, until the beginning of my senior year. At that point, I was pretty sick of school, so this wasn’t rationality but a motivated search. Still, it was a search nonetheless, and I did reject the cached thoughts I’d built up in my head for so long. As I took the first step outside my bubble of predetermined cognition, I instantly saw a good number of arguments against attending Stanford. I realized that it had a huge opportunity cost, in both time and money. Four years and hundreds of thousands of dollars are not things to part with so lightly.

And yet, even after I realized this, I was not done. It would have been incredibly easy to reject the conclusion I’d made because I didn’t want all that work to have been a waste. I was so close: I had a high SAT, I’d gotten good scores on 6 AP tests, including the only two computer science APs (the area I’d been intending to major in), and I’d gotten National Merit Commended Scholar status. All that would have been left was to complete my application, which I’m moderately confident I would have done well on, since I’m a good writer.

That bitterness could have cost me my life. Not in the sense that I would die for it immediately, but in the sense that everyone is dying for anything they spend significant time on, because everyone is dying. And it was here that rationality was my saving grace. I knew about the sunk cost fallacy. I knew that at this point I should scream “OOPS” and give up. I knew that at this point I should lose.

I bit my tongue, and lost.

I don’t know where I would end up if I hadn’t been able to lose here. The optimistic estimate is that I would have wasted four years, but gotten some form of financial aid or scholarship such that the financial cost was lower, and further, that in the process of attending college, I wouldn’t gain any more bad habits, I wouldn’t go stir-crazy from the practical inapplicability of the material (this was most of what had frustrated me about school before), and I would come out the other end with a degree but not too much debt and a non-zero number of gained skills and connections. That’s a very optimistic estimate, though, as you can probably tell given the way I wrote out the details. (Writing out all the details that make the optimistic scenario implausible is one of my favorite ways of combatting the planning fallacy.) There are a lot more pessimistic estimates, and it’s much more likely that one of those would happen.

Just by looking at the decision itself, you wouldn’t think of it as a particularly major one. Go to college, don’t go to college. How bad could it be, you may be tempted to ask. And my answer is: very bad. The universe is not fair. It’s not necessarily going to create a big cause for a big event: as one likely-apocryphal story goes, World War I was set off by some dude stopping for a sandwich. Just because you feel like you’re making a minor life choice doesn’t mean you are, and just because you feel like you should be allowed to make an irrational choice just this once doesn’t mean the universe isn’t allowed to kill you anyway.

I don’t mean to make this excessively dramatic. It’s possible that being irrational here wouldn’t have messed me up. I don’t know, I didn’t live that outcome. But I highly doubt that this was the only opportunity I’ll get to be stupid. Actually, given my goals, I think it’s likely I’ll get a lot more, and that the next ones will have much higher stakes. In the near future, I can see people—possibly including me—making decisions where being stupid sounds like “oops” followed by the dull thuds of seven billion bodies hitting the floor.

This is genuinely the direction the future is headed. We are becoming more and more able to craft our destinies, but we are flawed architects, and we must double- and triple-check our work, else the whole world collapses around us like a house on a poor foundation. If that scares you, irrationality should scare you. It sure terrifies the fuck out of me.