Why Rationality?

I’ve identified as a rationalist for about five years now. The dictionary definitions are a bit off from what I mean, so here are the definitions I actually use.

Epistemic rationality: believing, and updating on evidence, so as to systematically improve the correspondence between your map and the territory.  The art of obtaining beliefs that correspond to reality as closely as possible.  This correspondence is commonly termed “truth” or “accuracy”, and we’re happy to call it that.

Instrumental rationality: achieving your values.  Not necessarily “your values” in the sense of being selfish values or unshared values: “your values” means anything you care about.  The art of choosing actions that steer the future toward outcomes ranked higher in your preferences.  On LW we sometimes refer to this as “winning”.

Eliezer Yudkowsky, “What Do We Mean By ‘Rationality’?”, LessWrong

Of course, these two definitions are really subsets of the same general concept, and they intertwine considerably. It’s somewhat difficult to achieve your values without believing true things, and similarly, it’s difficult (for a human, at least) to search for truth without wanting to actually do anything with it. Still, it’s useful to keep the two subsets distinct, since they pick out different clusters in concept-space.

So if that’s what I mean by rationality, then why am I a rationalist? Because I like believing true things and achieving my values. The better question here would be “why isn’t everyone a rationalist?”, and the answer is that, if it were both easy to do and widely known about, I think everyone would be.

Answering why it isn’t well-known is more complicated than answering why it isn’t easy, so here are a handful of the reasons for the latter. (Written in the first person, because identifying as a rationalist doesn’t make me magically exempt from any of these things; it just means I know what they are and I do my best to fix them.)

  • I’m running on corrupted hardware. Looking at any list of cognitive biases will confirm this. And since I’m not a self-improving agent—I can’t reach into my brain and rearrange my neurons; I can’t rewrite my source code—I can only really make surface-level fixes to these extremely fundamental bugs. This is both difficult and frustrating, and to some extent scary, because it’s incredibly easy to break things irreparably if you go messing around without knowing what you’re doing, and in this case the thing I’d be breaking would be me.
  • I’m running on severely limited computing power. “One of the single greatest puzzles about the human brain,” Eliezer Yudkowsky wrote, “is how the damn thing works at all when most neurons fire 10-20 times per second, or 200Hz tops. […] Can you imagine having to program using 100Hz CPUs, no matter how many of them you had? You’d also need a hundred billion processors just to get anything done in realtime. If you did need to write realtime programs for a hundred billion 100Hz processors, one trick you’d use as heavily as possible is caching. That’s when you store the results of previous operations and look them up next time, instead of recomputing them from scratch. […] It’s a good guess that the actual majority of human cognition consists of cache lookups.” Since most of my thoughts are cached, when I get new information, I need to resist my brain’s tendency to rely on those cached thoughts (which can end up in my head by accident and come from anywhere), and actually recompute my beliefs from scratch. Otherwise, I end up with a lot of junk. (There’s a quick code sketch of this caching idea just after this list.)
  • I can’t see the consequences of the things I believe. Now, on some level, being able to do this (with infinite computing power) would be a superpower: in that circumstance, all you’d need is a solid grasp of quantum physics and the rest would just follow from there. But humans don’t just lack the computing power; we can believe, or at least feel like we believe, two inherently contradictory things at once. In psychology, this is called “cognitive dissonance”.
  • As a smart human starting from irrationality, I can easily be hurt by knowing more information. Smart humans naturally become very good at clever arguing—arguing for a predetermined position with propositions convoluted enough to confuse and confound any human arguer, even one who is right—and can thus use their intelligence to defeat itself with great efficiency. They argue against the truth convincingly, and can still feel like they’re winning while running away from the goal at top speed. Therefore, in any argument, I have to dissect my own position just as carefully as, if not more carefully than, I dissect those of my opponents. Otherwise, I come away more secure in my potentially-faulty beliefs, and more able to argue those beliefs against the truth.
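
To make the caching analogy concrete, here’s a tiny Python sketch (the names and the “evidence” are made up purely for illustration): once a question’s answer is in the cache, the evidence never gets consulted again.

```python
# A crude illustration of cached thoughts: once an answer is stored, it gets
# returned on every later lookup, even if the evidence has changed since.
cache = {}
evidence = {"is_the_stove_hot": True}

def cached_answer(question):
    if question not in cache:            # only computed the first time
        cache[question] = evidence[question]
    return cache[question]               # afterwards: pure lookup

print(cached_answer("is_the_stove_hot"))  # True (computed from evidence)
evidence["is_the_stove_hot"] = False      # the world changes...
print(cached_answer("is_the_stove_hot"))  # still True (stale cache lookup)
```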

This is a short and incomplete list of some of the problems that are easiest to explain. It’s by no means the entire list, or the list that would lend the most emotional weight to the statement “it’s incredibly difficult to believe true things”. But I do hope that it sheds at least a little light on the problem.

If rationality is really so difficult, then, why bother?

In my case, I say “because my goal is important enough to be worth the hassle”. In general, I think that if you have a goal that’s worth spending thirty years on, that goal is also worth trying to be as rational as humanly possible about. However, I’d go a step further. Even if the goal is worth spending a few years or even months on, it’s still worth being rational about, because not being rational about it won’t just waste those years or months; it may waste your whole career.

Why? Because the universe rarely arrives at your doorstep to speak in grave tones, “this is an Important Decision, make it Wisely”. Instead, small decisions build to larger ones, and if those small decisions are made irrationally, you may never get the chance to make a big mistake; the small ones may have already sealed your doom. Here’s a personal example.

From a very young age, I wanted to go to Stanford. When I was about six, I learned that my parents had met there, and I decided that I was going to go too. Like most decisions made by six-year-olds, this wasn’t based on any meaningful intelligence, let alone the full cost-benefit analysis that such a major life decision should have required. But I was young, and I let myself believe the very convenient thought that following the standard path would work for me. That was not, itself, the problem. The problem was that I kept on thinking this simplified six-year-old thought well into my young adulthood.

As I grew up, I piled all sorts of convincing arguments around that immature thought, rationalizing reasons why I didn’t actually have to do anything difficult and change my beliefs. I would make all sorts of great connections with smart, interesting people at Stanford, I thought, as if I couldn’t do the same in the workforce. I would get a prestigious degree that would open up many doors, I thought, as if working for Google weren’t just as prestigious, with the added benefit of paying you for the trouble. It will be worth the investment, the cached thoughts of society thought for me, and I didn’t question them.

I continued to fail at questioning them every year after, until the beginning of my senior year. At that point, I was pretty sick of school, so this wasn’t rationality, but a motivated search. But it was a search nonetheless, and I did reject the cached thoughts which I’d built up in my head for so long, and as I took the first step outside my bubble of predetermined cognition, I instantly saw a good number of arguments against attending Stanford. I realized that it had a huge opportunity cost, in both time and money. Four years and hundreds of thousands of dollars should not have been parted with that lightly.

And yet, even after I realized this, I was not done. It would have been incredibly easy to reject the conclusion I’d reached because I didn’t want all that work to have been a waste. I was so close: I had a high SAT score, I’d gotten good scores on six AP tests, including the only two computer science APs (the area I’d been intending to major in), and I’d earned National Merit Commended Scholar status. All that would have been left was to complete my application, which I’m moderately confident I would have done well on, since I’m a good writer.

That bitterness could have cost me my life. Not in the sense that I would die for it immediately, but in the sense that everyone is dying for anything they spend significant time on, because everyone is dying. And it was here that rationality was my saving grace. I knew about the sunk cost fallacy. I knew that at this point I should scream “OOPS” and give up. I knew that at this point I should lose.

I bit my tongue, and lost.

I don’t know where I would have ended up if I hadn’t been able to lose here. The optimistic estimate is that I would have wasted four years, but gotten some form of financial aid or scholarship such that the financial cost was lower; that in the process of attending college I wouldn’t have picked up any more bad habits; that I wouldn’t have gone stir-crazy from the practical inapplicability of the material (this was most of what had frustrated me about school before); and that I would have come out the other end with a degree, not too much debt, and a non-zero number of gained skills and connections. That’s a very optimistic estimate, though, as you can probably tell from the way I wrote out the details. (Writing out all the details that make the optimistic scenario implausible is one of my favorite ways of combating the planning fallacy.) There are a lot more pessimistic estimates, and it’s much more likely that one of those would have happened.

Just by looking at the decision itself, you wouldn’t think of it as a particularly major one. Go to college, don’t go to college. How bad could it be, you may be tempted to ask. And my answer is, very bad. The universe is not fair. It’s not necessarily going to create a big cause for a big event: World War I was caused by some dude having a pity sandwich. Just because you feel like you’re making a minor life choice doesn’t mean you are, and just because you feel like you should be allowed to make an irrational choice just this once doesn’t mean the universe isn’t allowed to kill you anyway.

I don’t mean to make this excessively dramatic. It’s possible that being irrational here wouldn’t have messed me up. I don’t know, I didn’t live that outcome. But I highly doubt that this was the only opportunity I’ll get to be stupid. Actually, given my goals, I think it’s likely I’ll get a lot more, and that the next ones will have much higher stakes. In the near future, I can see people—possibly including me—making decisions where being stupid sounds like “oops” followed by the dull thuds of seven billion bodies hitting the floor.

This is genuinely the direction the future is headed. We are becoming more and more able to craft our destinies, but we are flawed architects, and we must double- and triple-check our work, lest the whole world collapse around us like a house on a poor foundation. If that scares you, irrationality should scare you. It sure terrifies the fuck out of me.

Book Review: The Humans

Matt Haig’s “The Humans” gains the dubious title of “most frustrating book I’ve ever read all the way through”.

Before reading this review, please read the book yourself and come up with your own ideas about it. I very much don’t want this review to spoil it for you, and I’m about to lay out and thoroughly dissect the plot. Despite the fact that some of its meta-elements frustrate me in particular, the book is immensely well-written and beautiful, and I don’t want to diminish anyone’s enjoyment of it before they’ve even gotten the chance to read the original.

That being said…

I’ve found a number of books frustrating. The overwhelming majority, I didn’t bother to finish. Some of these books were badly-written, some espoused ideologies I strongly disagree with, some were internally inconsistent. I won’t name the specific books on this so-frustrating-I-didn’t-finish-them list, because you’ll probably think I’m making a value judgement against those books, or that I want to make you feel bad if you enjoy them. I’m not, and I don’t: my frustration with these books is an attribute of me, not of the books. Likewise, my frustration with “The Humans”.

Here’s a quick plot synopsis – as a refresher for the bits I’ll be talking about; if you haven’t read the book, read it.

There is a highly advanced alien species that finds out that a particular human has found out a thing they don’t want him to find out. As such, they kill him and send one of their own to impersonate him, to delete the evidence, including that which happens to be represented within human brains. The aliens are not concerned with the fact that humans tend to call this “murder”. The one they send has a difficult time adjusting to life as a human for a number of reasons, but gets out of some tough scrapes using magi- I mean, alien technology. In the process, he gets attached to the family of the man he’s impersonating, whom he was sent to kill, and also somewhat to humanity in general. He has an existential crisis over it all, and ends up relinquishing his life in his hyper-advanced home civilization to spend the rest of his life as a human mortal.

Here are my two specific points of frustration with that.

#1: The author is so focused on the main character’s journey to the end state which he understands (poetic sympathy with the modern human condition) that he doesn’t adequately demonstrate the beginning state, and the whole journey is cheapened as a result. Essentially, he writes a story from the perspective of someone who comes from a society whose entire purpose in existence is math, and yet there isn’t much actual math in it. Not even for the purpose of making decisions. I know from experience that when you really care about the math, you sort of become the math. It isn’t just a tool you use; it takes over your thoughts. Part of the beauty of stories like HPMOR is that they’re really, honestly about science – you couldn’t remove the science without removing the story.

There is a fundamental disconnect when you try to write a book from the perspective of someone in love with math, without yourself actually being in love with math. Really being in love with math doesn’t look like having a favorite prime number. It doesn’t even look like recognizing the importance of math to the structure of the universe, though this is in fact a piece of insight more people could stand to have. Really being in love with math looks like having the thoroughly amazing realization that the question “what should I believe?” has an empirically proven correct answer. It looks like finding beauty in a proof like an artist finds beauty in a flower. It looks like loving the universe more because of its mathematical roots; finding more joy, not less, in a rainbow once it has been explained.

In short, I’d like to see this book’s premise rewritten by a mathematician.

#2: The ending of this book generally makes the transhumanist in me want to scream.

I don’t think it’s terribly hard to see why death is a bad thing. A decent portion of humanity has already come to that conclusion. It would be even easier to decide that death is bad if you came from a society which didn’t have any such thing: the only reason that many humans think it’s okay is rationalization, anyway. You could make people rationalize reasons why getting hit on the head with a truncheon every week was actually a good thing, if they thought it was inevitable. (It makes your head stronger! And makes you happier on the days you’re not getting hit on the head! No, really!) But if I asked you, dear reader, who are presumably not subject to such a weekly annoyance, whether you’d like to start, for all the amazing benefits, I think you’d say no.

And yet this alien, who comes from a society which has no such thing as death, and furthermore no such thing as permanent physical injury, accepts mortality in exchange for becoming one of The Humans.

I mean, I get it, humans are cool. That’s the whole “humanist” bit. I love humans too. I think we’re capable of greatness. But exchanging immortality for us? Without so much as putting up a fight?

I think I’d at least try to apply my superior intelligence to figure out exactly how the relevant bits of alien technology worked, and find out how to apply them in humans. Yet he fails to take a trip down that line of discovery. Further, the alien is small-scale altruistic without ever considering the concept of large-scale altruism. He spends a lot of time agonizing over the fact that he can’t help the humans, since they’d realize he wasn’t one of them, and yet he spends a non-negligible portion of the book helping the family of the man he’s impersonating. I think if I had a magic left hand that I didn’t want anyone to know about, I would still go around using it to cure people. Just, when I got asked how it worked, I’d say “Science!” – it’s a curiosity-stopper for a lot of people. On the whole, if I were really intent on abandoning my home planet for Earth, I would at least try to steal as much useful stuff as possible before I left, and use it to the best of my ability.

So why didn’t the alien do this? Simply, because he was written by a human who had not thought of it. The writer must encompass his characters, and so no character can go beyond the knowledge of the writer. If you consider what an immortal alien would do, that doesn’t let you magically climb outside your own brain to generalize from knowledge that isn’t yours. If you accept death as the natural order, who says that an immortal alien wouldn’t accept it too?

I do. It doesn’t make any sense. I wouldn’t do that, and I grew up with death. Within the past year, two of my relatives have died, along with hundreds of thousands of strangers, and I find that completely unacceptable. I have reason to believe that an immortal alien would probably think a bit more like me than like Matt Haig – assuming the alien were capable of thinking like a human at all.

So, I suppose, this book is frustrating because it accepts what, to me, is unacceptable, without putting up a fight at all. It’s one long exercise in the Mind Projection Fallacy, and a demonstration of the fact that to write true science fiction you need to actually know science. I read it all the way through anyway, because it’s beautifully written and incredibly interesting.

PDP 4

This week, I learned about deep learning and neural networks, and I wrote a handful of blog posts relating to concepts I learned last week.

The most notable of these posts was Language: A Cluster Analysis of Reality. Taking inspiration from Eliezer Yudkowsky’s essay series A Human’s Guide To Words, and pieces of what I learned last week about cluster analyses, I created an abstract comparison between human language and cluster analyses done on n-dimensional reality-space.

Besides this, I started learning in depth about machine learning. I learned about the common loss functions, L2-norm and cross-entropy. I learned about the concept of deep neural nets: not just the theory, but the practice, all the way down to the math. I figured out what gradient descent is, and I’m getting started with TensorFlow. I’ll have more detail on all of this next week: there’s a lot I still don’t understand, and I don’t want to give a partially misinformed synopsis.
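
To pin down what I mean by gradient descent, here’s a minimal sketch (the data, learning rate, and one-weight “model” are made up for illustration; a real network has many weights, but the loop is the same idea): compute the loss, compute its derivative with respect to the weight, and take a step downhill.

```python
import numpy as np

# Minimal gradient descent on an L2 (squared-error) loss for a one-parameter model y = w * x.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, 4.0, 6.0, 8.0])   # the true relationship is y = 2x

w = 0.0              # initial guess for the weight
learning_rate = 0.01

for step in range(200):
    predictions = w * x
    loss = np.mean((predictions - y) ** 2)         # L2 loss
    gradient = np.mean(2 * (predictions - y) * x)  # d(loss)/dw
    w -= learning_rate * gradient                  # step downhill

print(w)  # converges toward 2.0
```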

The most unfortunate part of this week was certainly that in order to fully understand deep neural networks, you need calculus, because a decent portion of the math relies on partial derivatives. I did statistics instead of calculus in high school, since I dramatically prefer probability theory to differential equations, so I don’t actually have all that much in the way of calculus, and there was an upper bound on how much of the math I actually got. I think that I’ll give myself a bit of remedial calculus in the next week.

The most fortunate part of this week was the discovery of how legitimately useful my favorite book is. Around four or five years ago, I read Rationality: From AI to Zombies. It’s written by a dude who’s big on AI, so obviously it contains rather a lot of material referencing that subject. When I first read it, I knew absolutely nothing about AI, so I just kind of skimmed over those parts, except to the extent that I was able to absorb the fundamental theory by osmosis. However, I’ve been recently rereading Rationality for completely unrelated reasons, and the sections on AI are making a lot more sense to me now. The sections on AI are scattered through books 3, 4, and 5: The Machine in the Ghost, Mere Reality, and Mere Goodness.

And the most unexpected part of this week was that I had a pretty neat idea for a project, entirely unrelated to any of this other stuff I’ve been learning. I think I’ll program it in JavaScript over the next week, on top of this current project. It’s not complicated, so it shouldn’t get in the way of any of my higher-priority goals, but I had the idea because I would personally find it very useful. (Needless to say, I’ll be documenting that project on this blog, too.)

Language: A Cluster Analysis of Reality

Cluster analysis is the process of quantitatively grouping data in such a way that observations in the same group are more similar to each other than to those in other groups. This image should clear it up.

Whenever you do a cluster analysis, you do it on a specific set of variables: for example, I could cluster a set of customers against the two variables of satisfaction and brand loyalty. In that analysis, I might identify four clusters: (loyalty:high, satisfaction:low), (loyalty:low, satisfaction:low), (loyalty:high, satisfaction:high), and (loyalty:low, satisfaction:high). I might then label these four clusters to identify their characteristics for easy reference: “supporters”, “alienated”, “fans” and “roamers”, respectively.
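
Here’s roughly what that looks like in code, as a sketch with made-up customer data and scikit-learn’s KMeans (the numbers, and which cluster ends up with which label, are invented for illustration):

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up customer data: columns are (loyalty, satisfaction), both scaled 0-1.
customers = np.array([
    [0.9, 0.2], [0.8, 0.1],   # loyal but unsatisfied -> "supporters"
    [0.1, 0.2], [0.2, 0.1],   # neither               -> "alienated"
    [0.9, 0.9], [0.8, 0.8],   # both                  -> "fans"
    [0.1, 0.9], [0.2, 0.8],   # satisfied, not loyal  -> "roamers"
])

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(customers)
print(kmeans.labels_)           # which cluster each customer landed in
print(kmeans.cluster_centers_)  # the center of each cluster
```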

What does that have to do with language?

Let’s take a word, “human”. If I define “human” as “featherless biped”, I’m effectively doing three things. One, I’m clustering an n-dimensional “reality-space”, which contains all the things in the universe graphed according to their properties, against the two variables ‘feathered’ and ‘bipedal’. Two, I’m pointing to the cluster of things which are (feathered:false, bipedal:true). Three, I’m labeling that cluster “human”.

This, the Aristotelian definition of “human”, isn’t very specific. It’s only clustering reality-space on two variables, so it ends up including some things that shouldn’t actually belong in the cluster, like apes and plucked chickens. Still, it’s good enough for most practical purposes, and assuming there aren’t any apes or plucked chickens around, it’ll help you to identify humans as separate from other things, like houses, vases, sandwiches, cats, colors, and mathematical theorems.

If we wanted to be more specific with our “human” definition, we could add a few more dimensions to our cluster analysis—add a few more attributes to our definition—and remove those outliers. For example, we might define “human” as “featherless bipedal mammals with red blood and 23 pairs of chromosomes, who reproduce sexually and use syntactical combinatorial language”. Now, we’re clustering reality-space against seven dimensions, instead of just two, and we get a more accurate analysis.

Despite this, we really can’t create a complete list of all the things that most real categories have in common. Our generalizations are always a bit leaky around the edges: our analyses aren’t perfect. (This is absolutely the case with every other cluster analysis, too.) There are always observations at the edges that might belong to any number of clusters. Take a look at the graph above in this post. Those blue points at the top left edge: should they really be blue, or red or green instead? Are there really three clusters, or would it be more useful to say there are two, or four, or seven?

We make these decisions when we define words, too. Deciding which cluster to place an observation in happens all the time with colors: is it red or orange, blue or green? Splitting one cluster into many happens when we need to split a word in order to convey more specific meaning: for example, “person” trisects into “human”, “alien”, and “AI”. Maybe you could split the “person” cluster even further than that. On the other end, you combine two categories into one when sub-cluster distinctions don’t matter for a certain purpose. The base-level category “table” stands in for more specific terms like “dining table” and “kotatsu” when the specifics don’t matter.

You can do a cluster analysis objectively wrong. There is math, and if the math says you’re wrong, you’re wrong. If your WCSS (within-cluster sum of squares) is so high that you have a cluster you can’t label more distinctly than “everything else”, or so low that you’ve segregated your clusters beyond the point of usefulness, then you’ve done it wrong.
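
As a sketch of what “the math says you’re wrong” looks like in practice: here’s the WCSS (scikit-learn calls it inertia) for different numbers of clusters, on made-up data with three obvious blobs. It drops sharply up to three clusters and then flattens out, which is the usual “elbow” heuristic for picking a cluster count.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Made-up data: three obvious blobs in two dimensions.
data = np.vstack([rng.normal(center, 0.3, size=(50, 2))
                  for center in ([0, 0], [5, 5], [0, 5])])

# WCSS for different numbers of clusters.
for k in range(1, 8):
    wcss = KMeans(n_clusters=k, n_init=10, random_state=0).fit(data).inertia_
    print(k, round(wcss, 1))
# WCSS drops sharply until k=3, then flattens out: one cluster is "too high",
# seven clusters is segregation beyond the point of usefulness.
```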

Many people think “you can define a word any way you like”, but this doesn’t make sense. Words are cluster analyses of reality-space, and if cluster analyses can be wrong, words can also be wrong.


This post is a summary of / is based on Eliezer Yudkowsky’s essay sequence, “A Human’s Guide to Words”.

I Want To Cure Mortality.

Do you want to live forever?

No? Okay, let me phrase it another way. Do you want to live tomorrow?

Most people answer yes to this second question, even if they said no to the first. (If you didn’t say yes to the second, that’s typically called suicidal ideation, and there are hotlines for that.)

This doesn’t quite make sense to me. If I came to you tomorrow, and I asked the same question, “Do you want to live tomorrow?”, you’d probably still say yes; likewise with the day after that, and the day after that, and the day after that. Under normal circumstances, you’ll probably keep saying yes to that question forever. So why don’t you want to live forever?

Maybe, you think that the question “do you want to live forever” implies “do you want to be completely incapable of dying, and also, do you want to be the only immortal person around”. Not being able to die, ever, could be kind of sucky, especially if you continued to age. (There was a Greek myth about that: Tithonus, who was granted immortality but not eternal youth.) Further, being the only person among those you care about who can’t die would also suck, since you’d witness the inevitable end of every meaningful relationship you had.

But these sorts of arbitrary constraints are the realm of fiction. First, if a scientist invented immortality, there would be no justifiable reason that it wouldn’t be as available to those you care about as it would be to you. Second, it’s a heck of a lot easier to just stop people from aging than it is to altogether make a human completely impervious to anything which might be lethal. When I say “yes” to “do you want to live forever”, it’s induction on the positive integers, not a specific vision whose desire spans infinity.

Even after I’ve made sure we’re on the same page as to what exactly real immortality might look like, some people still aren’t convinced it would be a good idea. A decent number of the arguments are some variant of “death gives meaning to life”.

To this, I’ll borrow Eliezer Yudkowsky’s allegory: if everybody got hit on the head with a truncheon once a week, soon enough people would start coming up with all sorts of benefits associated with it, like, it makes your head stronger, or it makes you appreciate the days you’re not getting hit with a truncheon. But if I took a given person who was not being hit on the head with a truncheon every week, and asked them if they’d like to start, for all these amazing benefits, I think they’d say no. Wouldn’t you?

People make a virtue of necessity. They’d accept getting hit on the head with a truncheon once a week, just as they now accept the gradual process of becoming more and more unable to do things they enjoy, being in pain more often than not, and eventually ceasing to exist entirely. That doesn’t make it a good thing; it just demonstrates people’s capacity for cognitive dissonance.

These are the reasons I’ve made it my goal to cure mortality. The motivation is extremely similar to anyone’s motivation to cure any deadly disease. Senescence is a terminal illness, which I would like to cure.

It disrupts the natural order, but so does curing any other disease. Cholera was the natural order for thousands of years, but we’ve since decided it’s bad, and nowadays nobody is considering the idea of mixing sewage with drinking water to bring it back. There were tons of diseases that were part of the natural order right up until we eradicated them. We don’t seem to have any trouble, as a society, deciding that cancer is bad. But death itself—the very thing we’re trying to prevent by curing all these diseases—is somehow not okay to attack directly.

Here’s the bottom line. I know for a fact I’m not the only one with this goal. Some of the people at MIRI come to mind, as well as João Pedro de Magalhães. I’d personally love to contribute to any of these causes. If you know someone, or are someone, who’s working towards this goal, I’d love to join you.

Intuition Is Adaptable

Or, why “X is counterintuitive” is just a way of saying “I haven’t seen any sufficiently intuitive explanations of X”.


As a kid, I was a huge science nerd. In particular, I loved Stephen Hawking’s work. He had a TV show at some point, which I watched whenever I got the chance. One of the first books I remember reading was “A Brief History of Time”.

The main reason I loved reading his writing is that it didn’t seem very complicated to me. Not in the way Richard Feynman’s work seems un-complicated—Feynman just seems like he’s dicking around all the time and happens to love dicking around in physics specifically, so much so that he got a Nobel Prize in it. (I’m aware this isn’t remotely what happened, but that’s the sense you get from reading his writing.) Instead, I found Stephen Hawking’s writing to be un-complicated in the way that I later found Eliezer Yudkowsky’s and Daniel Kahneman’s writings to be un-complicated: it just makes sense.

The best physical example of Stephen Hawking’s influence on kid-me is a sheet of sticky paper that’s still stuck to the wall of the library in my parents’ house, on which I wrote an explanation of the Many Worlds interpretation of quantum mechanics.

You’d think I was joking, but…

The easy way to explain this is just to say that I was a genius, or if you don’t feel like giving me that much credit, you can say that I was able to do a lot of book-learning because I had no social life. (This assessment isn’t wrong, by the way.)

But I’m actually going to give myself even less credit than that. I don’t think I’m a genius, and I also don’t think that the extra time I gained by skipping recess to read the encyclopedia (I already told you I was a nerd, get off my back) actually contributed in any meaningful way to my comprehension of “A Brief History of Time”. I don’t think it had anything to do with me at all; rather, it was almost entirely a property of the authors.

Which authors particularly resonate with you has a decent amount to do with you: I know a girl who thinks that Jen Sincero’s book You Are A Badass is the best book on the planet; I read the first paragraph and immediately put it back down again. But all other things being equal, a human brain is a human brain, and an intuitive explanation is an intuitive explanation.

Assuming you’ve got some very basic algebra, Eliezer Yudkowsky’s An Intuitive Explanation of Bayes’s Theorem will almost certainly make sense to you. (Whether or not you care is a function of your preferences in reading material, separate from whether or not you could understand it if you did care.) Even if you suck at math. I know, because at the time that I read the Explanation, I sucked at math. (I’ve since gotten much better through deliberate effort.) There are books in every discipline that make things make sense to people, that clarify cloudy issues, that provide intuitive explanations.

This is very good news for those of us who think we are just bad at something and have no way to get better. I grew up thinking I was bad at math, since I hated algebra, I hated trig, I’d always give up before I finished a problem, and it was generally just the dullest drek on the planet. And yet, I have no difficulty calculating conditional probabilities with Bayes’ Theorem. All I had to do was read a good enough explanation. If you think you’re irreparably bad at something, don’t give up on it, just keep reading.
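
For the curious, here’s the kind of conditional-probability calculation I mean, as a tiny sketch with made-up numbers:

```python
# A toy Bayes' Theorem calculation: P(disease | positive test), with made-up numbers.
p_disease = 0.01           # prior: 1% of people have the disease
p_pos_given_disease = 0.9  # test sensitivity
p_pos_given_healthy = 0.1  # false positive rate

p_positive = (p_pos_given_disease * p_disease
              + p_pos_given_healthy * (1 - p_disease))

p_disease_given_positive = p_pos_given_disease * p_disease / p_positive
print(round(p_disease_given_positive, 3))  # ~0.083: weaker evidence than it feels like
```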

This is moderately poor news for those of us who are in the habit of writing explanations, though, because we can’t blame our readers’ lack of comprehension on the difficulty of the subject matter. It may partially be about the subject matter—neither all subjects nor all readers are created equal—but there is always some way that we could be better writers, better explainers, and thus have our explanations make more sense.

Personally, I choose to take this as a challenge. If no subject is inherently counterintuitive, then no subject is outside my domain, if I’m good enough. I just need to get stronger.

PDP 3

This week, I went even further in depth into doing statistical analyses in Python. I learned how to do logistic regressions and cluster analyses using k-means. I got a refresher on linear algebra, then used it to learn about the NumPy data type “ndarray”.

Logistic regressions are a bit complicated. The course I used explains it in a kind of strange way, which probably didn’t help. Fortunately, my mom knows a decent amount about statistical analyses (she used to be a researcher), so she was able to clear things up for me.

You do a logistic regression on a binary dependent variable. It ends up looking like a stretched-out S, either forwards or backwards. Data points are graphed on one of two lines, either y=0 or y=1. The regression line basically demonstrates a probability: how likely is it that you’ll pass an exam, given a certain number of study hours? How likely is it that you’ll get admitted to a college, given a certain SAT score? Practically, we care most about the tipping point, 50% probability, or y=0.5, and what values fall above and below that tipping point.

This can be slightly confusing since regression lines (or curves, for nonlinear regressions) usually predict values, but since there are only two possible values for a binary variable, the logistic regression line predicts a probability that a certain value will occur.
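
Here’s a minimal sketch of that, using scikit-learn and made-up exam data (the point is just that the model predicts a probability, with the interesting boundary at 0.5):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up data: hours studied vs. whether the exam was passed (0 or 1).
hours = np.array([[1], [2], [3], [4], [5], [6], [7], [8]])
passed = np.array([0, 0, 0, 0, 1, 1, 1, 1])

model = LogisticRegression().fit(hours, passed)

# The regression predicts a probability of passing, not a value.
print(model.predict_proba([[3.0]])[0, 1])  # well below 0.5
print(model.predict_proba([[6.0]])[0, 1])  # well above 0.5
```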

After I finished that, I moved on to k-means clustering, which is actually surprisingly easy. You randomly generate a number of centroids (the generic term for the center of something, be it a line, polygon, cluster, etc.) corresponding to the number of clusters you want, assign each point to its nearest centroid by Euclidean distance, move each centroid to the center of its newly assigned cluster, and then repeat the assign-and-move steps until the clusters stop changing.
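
Here’s a from-scratch sketch of those steps in NumPy (simplified: it runs a fixed number of iterations instead of checking whether the clusters have stopped changing, and it doesn’t handle the edge case where a cluster ends up empty):

```python
import numpy as np

def k_means(points, k, iterations=10, seed=0):
    """Minimal k-means: pick random starting centroids, then alternately assign
    points to the nearest centroid and move each centroid to the mean of its points."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iterations):
        # Euclidean distance from every point to every centroid.
        distances = np.linalg.norm(points[:, None, :] - centroids[None, :, :], axis=2)
        labels = distances.argmin(axis=1)   # assign each point to its nearest centroid
        centroids = np.array([points[labels == i].mean(axis=0) for i in range(k)])
    return labels, centroids

labels, centers = k_means(np.random.default_rng(1).random((100, 2)), k=3)
```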

Linear algebra is a little harder to understand, especially if your intuition isn’t visual like mine is. In essence, the basic object of linear algebra is the “tensor”, of which all the other objects are special cases. A “scalar” is just an ordinary number; a “vector” is a one-dimensional list of numbers; and a “matrix” is a two-dimensional grid of numbers, or a list of lists. These are tensors of rank 0, 1, and 2, respectively. There are also tensors of rank 3, which have no special name, as well as tensors of even higher rank.

I learned some basic linear algebra in school, but I figured it was a bit pointless. As it turns out, though, linear algebra is incredibly useful for writing fast algorithms for multivariate models, with many variables, many weights, and many constants. If you use plain numbers (scalars) only, you need a separate equation for every output, each with its own weights and constant:
y1 = w11x1 + w12x2 + … + w1kxk + b1,  …,  ym = wm1x1 + wm2x2 + … + wmkxk + bm.
But if you let all the relevant variables be tensors (x a vector of inputs, w a matrix of weights, b a vector of constants), that whole pile of equations collapses into one:
y = wx + b
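
In NumPy, that simplification looks something like this (the sizes and values are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
k, m = 4, 3               # 4 inputs, 3 outputs (made-up sizes)
x = rng.random(k)         # input vector
w = rng.random((m, k))    # weight matrix
b = rng.random(m)         # bias vector

# Scalar version: one equation per output, written out weight by weight.
y_scalar = np.empty(m)
for i in range(m):
    total = 0.0
    for j in range(k):
        total += w[i, j] * x[j]
    y_scalar[i] = total + b[i]

# Tensor version: the whole thing collapses to one line.
y_tensor = w @ x + b

print(np.allclose(y_scalar, y_tensor))  # True
```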

There are a handful of other awesome, useful ways to apply tensors. For example, image recognition. In order to represent an image as something the computer can do stuff with, we have to turn it into numbers. A rank-3 tensor of shape 3×A×B, where A×B is the pixel dimension of the image in question, works perfectly. (Why the extra dimension of 3? Because images are commonly represented using the RGB, or Red/Green/Blue, color schema: every color is represented by different values of R, G, and B, each between 0 and 255.)

In NumPy, tensors are implemented using “ndarray”, the n-dimensional array, an object type designed specifically to handle them. They’re not difficult to work with, and the notation is for once pretty straightforward. (It’s square brackets, similar to the mathematical notation.)
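
For example, here’s a made-up 2×2-pixel RGB image as a rank-3 ndarray:

```python
import numpy as np

# A made-up 2x2-pixel RGB image as a rank-3 tensor of shape (3, 2, 2):
# one 2x2 plane each for the red, green, and blue channels, values 0-255.
image = np.array([
    [[255,   0], [  0, 255]],   # red channel
    [[  0, 255], [  0,   0]],   # green channel
    [[  0,   0], [255,   0]],   # blue channel
], dtype=np.uint8)

print(image.ndim)      # 3  (a rank-3 tensor)
print(image.shape)     # (3, 2, 2)
print(image[:, 0, 0])  # the RGB values of the top-left pixel: [255, 0, 0]
```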

This should teach me to think of mathematical concepts as “pointless”. Computers think in math, so no matter how esoteric or silly the math seems, it’s part of how the computer thinks and I should probably learn it, for the same reasons I’ve devoted a lot of time to learning about all humans’ miscellaneous cognitive biases.

I’ve asked a handful of the statisticians I know if they wouldn’t mind providing some data for me to do some analyses of, since that would be a neat thing to do. But if I don’t do that, this coming week I’ll be learning in depth about AI, which my brain is already teeming with ideas for projects on. I’ve loved AI for a long time, and I’ve known how it works in theory for ages, but now I get to actually make one myself! I’m excited!

The Last Enemy That Shall Be Destroyed Is Death

His wand rose into the starting position for the Patronus Charm.
Harry thought of the stars, the image that had almost held off the Dementor even without a Patronus. Only this time, Harry added the missing ingredient, he’d never truly seen it but he’d seen the pictures and the video. The Earth, blazing blue and white with reflected sunlight as it hung in space, amid the black void and the brilliant points of light. It belonged there, within that image, because it was what gave everything else its meaning. The Earth was what made the stars significant, made them more than uncontrolled fusion reactions, because it was Earth that would someday colonize the galaxy, and fulfill the promise of the night sky.

Would they still be plagued by Dementors, the children’s children’s children, the distant descendants of humankind as they strode from star to star? No. Of course not. The Dementors were only little nuisances, paling into nothingness in the light of that promise; not unkillable, not invincible, not even close. You had to put up with little nuisances, if you were one of the lucky and unlucky few to be born on Earth; on Ancient Earth, as it would be remembered someday. That too was part of what it meant to be alive, if you were one of the tiny handful of sentient beings born into the beginning of all things, before intelligent life had come fully into its power. That the much vaster future depended on what you did here, now, in the earliest days of dawn, when there was still so much darkness to be fought, and temporary nuisances like Dementors.

On the wand, Harry’s fingers moved into their starting positions; he was ready, now, to think the right sort of warm and happy thought. And Harry’s eyes stared directly at that which lay beneath the tattered cloak, looked straight at that which had been named Dementor. The void, the emptiness, the hole in the universe, the absence of color and space, the open drain through which warmth poured out of the world. The fear it exuded stole away all happy thoughts, its closeness drained your power and strength, its kiss would destroy everything that you were.

I know you now, Harry thought as his wand twitched once, twice, thrice and four times, as his fingers slid exactly the right distances, I comprehend your nature, you symbolize Death, through some law of magic you are a shadow that Death casts into the world.
And Death is not something I will ever embrace.
It is only a childish thing, that the human species has not yet outgrown.

– Eliezer Yudkowsky, “Harry Potter and the Methods of Rationality”; Chapter 45, “Humanism, Part III”


I already talked about why HPMOR is my favorite book on the planet. (Which is why I tried very hard not to spoil anything terribly important with the above excerpt, while still having it convey the intended meaning. I would love it if you’d read it.) Now, here’s an oil painting inspired by it. The title of this post, and of the painting itself, are inspired by a thematically similar section later in the book.

I had a bit of trouble finding decent reference pictures for ‘earth from space’, funnily enough. It’s difficult to distinguish high-quality photographs from digital art. The references (yes, plural) I ended up settling on were taken from NASA and the ISS. Even so, maybe it doesn’t matter, since I ended up using a pretty impressionistic style anyway.

I’ve actually never drawn space or planets before. I would absolutely not trust myself to be non-detail-focused enough to do this in markers, hence the painting. (Also, since this is my warm happy thought as well as Harry’s, I want to hang this on my wall, and oil paintings are better for that.) It was an interesting experiment to try and loosen up enough to draw something from such a high level, especially when my brain was busy making me think thoughts like “okay just remember that if you move your brush in slightly the wrong way you’ve erased the entire state of Texas”. As you zoom out more and more, you have to suggest more and more stuff with subtle brush techniques, and when the things you’re suggesting are on the order of entire states or countries… it gets moderately stressful.

Still, I think it came out alright. I’m pretty happy with the color of the ocean, and the general texture of the clouds. The space was both the easiest and the most fun part, starting with a black gesso and painting over it with blues and purples. I may touch this up later, but it’s good for now.

As a final note: Unlike the rest of my paintings on this blog, this one is not for sale. I’m happy to make a copy if you’d like one (which includes making modified versions, e.g., with the U.S.S. Enterprise in the foreground); for details on commissioning a piece, visit my Commission Me page.

The Limits of the Argument from Incredulity

Of late, I’ve heard a lot of arguments of a general form “X is immoral, unacceptable, unreasonable, unpleasant, and otherwise should really just not be the way it is, so therefore, we must make it stop being this way”. Inherently, there is absolutely no problem with this sort of ethical/moral argument: it has the ability to highlight areas in which the world could be fixed. But of late, I’ve seen this argument used in ways that make very little logical sense, and it occurred to me that people who make this argument may not realize its limitations.

Let me borrow Eliezer Yudkowsky’s example, and argue that I should be able to run my car without needing to put gas in it. “It would be so much more fun, and so much less expensive, if we just decided to repeal the law that cars need fuel.” Owning a car would become more accessible to lower-income households, if you remove the gas expense. There is less pollution if nobody is burning fossil fuels anymore. “Isn’t it just obviously better for everyone?”

Well, that would be very nice if it were possible, but given that cars, like all things in the universe, must obey the Law of Conservation of Energy, it isn’t. Being angry about it will not change that.

When people use these, as I’m calling them, “arguments from incredulity”, the biggest problem is that they are so caught up in being angry at how bad the thing is, they fail to realize that any possible solution is dramatically more complicated than just “abolish the thing”. It’s obvious with something simple, like putting fuel in cars, but when you get to something more complicated, like the minimum wage or housing the poor, it’s less so.

I’ve seen arguments about the minimum wage. They start with something about how the current minimum wage is insufficient to cover what the arguer considers a minimally adequate cost of living, then there’s some tear-jerking personal anecdote tossed in, and it ends with the conclusion that the minimum wage needs to be raised (to 12 or 15 or whatever dollars an hour). Do they take into account the problem that raising the minimum wage, without doing anything about low-income housing/student loans/etc, would simply end up increasing inflation, without doing jack shit about the actual problem they’re so incredulous about? No.

I’ve seen arguments about housing the poor. They start with some statistics about how many houses in America are currently vacant and some other statistics about how many Americans are homeless, then they go on some diatribe about how if rich people weren’t such assholes we could just fill the vacant houses with homeless people and be done with it. Do they bother to consider the fact that what counts as a vacant house for the purposes of those statistics includes a rental house that so happens to not currently be tenanted, but will become tenanted within the next month, as well as many other circumstances that would not fit well with the five-word “stick homeless people in them” solution? No.

Complex problems rarely have simple solutions, but getting angry tends to make people’s brains think in simple terms: therein lies the weakness of the argument from incredulity.

So how do we fix this problem? How do we individually stop ourselves from making angry arguments which aren’t useful?

The obvious solution is to think about the problem (preferably for five whole minutes by a physical clock) without getting angry about any aspect of it. Consider the problem’s complexity, consider the possible reasons why this problem might exist, and consider the way this problem came to be in the first place. Remember that the problem is most likely caused and perpetuated by normal, not abnormal, psychology: homelessness does not exist because all landlords are psychopaths. And remember that the solution to the problem will take into account the fact that human beings are not always perfect all the time, not make a clever argument for how humans would all live in perfect harmony if only we would implement communism.

A problem cannot be solved by getting angry at it, the way a human might be persuaded to do something by someone getting angry at them. Complex problems are solved by solutions which encompass their every aspect, break them down into manageable pieces, and tackle each piece as thoroughly as necessary. A good solution does not leave the hearer with a lingering sense of doubt; instead, it should make the problem feel solvable. If your solution doesn’t do that, it’s probably a good idea to keep looking.

Book Review: Methods of Rationality

It’s high time I did a real review of my favorite book in the universe. I read it for the first time at the age of 13, and it triggered an utter obsession with cognitive science, rationality, and artificial intelligence that has not disappeared to this day. (It has, however, become more mature: I no longer write shitty romantic poetry about cognitive science.)

I will attempt to describe this masterpiece of literature more than once, since I will absolutely fail several times.

Harry Potter and the Methods of Rationality is a 122-chapter parallel-universe Harry Potter fanfic in which Petunia Evans married an Oxford professor, Michael Verres, and Harry was adopted and raised in a loving home filled to the brim with books. It is written by one Eliezer Yudkowsky, co-founder of the Machine Intelligence Research Institute, who writes frequently for the blog LessWrong, which I’ve cited here before, and is best known for popularizing the idea of Friendly AI.

Harry Potter and the Methods of Rationality is a book about an eleven-year-old who knows both magic and calculus and wants to take over the world using Science so he can get more books.

Harry Potter and the Methods of Rationality is a book that successfully taught a 13-year-old girl—who wasn’t and still isn’t a genius—the underlying fundamentals of cognitive psychology, quantum physics, artificial intelligence, and Bayesian probability theory. If you read it, you will also learn these things, without ever realizing you have learned them. It will simply make sense, in a way that makes you wonder how you ever didn’t understand it.

While reading Harry Potter and the Methods of Rationality, you will frequently have absolutely no idea whether Harry is the villain or the hero. You will frequently have absolutely no idea whether Draco Malfoy is the villain or the hero, either. This goes for most of the characters, with the exception of Hermione and McGonagall. It does not exclude Voldemort.

This book will make you laugh, cry, learn, and question human existence. It will make you very aware of the sound of snapping fingers, and the shape of the night sky. It will show you the best and worst of humanity, and make both understandable. If you let it, it will teach you some of the most valuable life lessons you might ever learn.

Find the completed book at hpmor.com. You can read it in however much time you like, but given the length, it takes a fast reader about three or four days to binge straight through, so you probably can’t read it any faster than that. In any case, when you do finish it, please leave a comment telling me what you thought! And of course, give the author some feedback and leave reviews on however many chapters you like.

As an end note, in case you might not have believed me, here is only one of the shitty romantic poems I wrote about rationality. Please be nice to the author, she was a little girl who fell in love with science, not a poet, and she was doing her best.

Be skeptical, not cynical;
be open, but not gullible.
Be curious, not clever;
no rationalization, ever.

Accept the truth for what it is;
and look for contradictions
in all arguments, yours included;
you’re more confused by fiction.

A word is just a label
before you know the referent;
a lie gets told a long time,
if someone’s to protect it.

Certain kinds of people
truth they wrongly construe,
but they’ll do it in the name of
who they think is watching you.

Humans tend to think
they could predict things in advance
but that’s some hindsight bias
when really there’s low chance.

Don’t explain all this all at once,
mind inferential distance,
plus the illusion of transparency,
and all peoples’ heuristics.

People don’t like weird ideas,
or saying they don’t know;
but even with our biases,
There’s a long way we shall go.