Why I Don’t Care If YOU Want to Cure Mortality

I’m fairly vocal on this blog about my desire to prevent humans from dying involuntarily. But while I’ve made many attempts to rebut poor arguments and explain why I care about this, I’ve never attempted to convince anyone else to adopt the same goal.

This isn’t because I have any desire to avoid persuading people; I’ve written rather a lot of persuasive posts on this blog, inviting people to abolish everything from gender to public schools. Instead, it’s because I don’t think everyone should be trying to cure mortality.

I chose the particular goal I did when I was very young, but I’ve stuck with it because it seems like a reasonable first step. If I can extend my life, and the lives of others, then we’ll all have more time to do other things.

But death from senescence is not the only threat, or even the single biggest one (though it is up there). Pretty much everything on the Wikipedia page for Global Catastrophic Risks is a notable candidate for an Important Problem that somebody should be working on.

“Most people all the time, and all people most of the time, should stick to the possible.” Even those of us working on one impossible (read: very difficult or potentially unsolvable) problem cannot work on several at once. Therefore, it is absolutely critical that different smart ambitious people work on different impossible problems, eliminating or preventing many Global Catastrophic Risks in parallel, so that we can make the world better as efficiently and effectively as possible.

So, if you’re a smart ambitious person not currently working on curing mortality, because you’ve deemed it more important to work on Friendly AI or biotechnology or global warming or FTL travel, then that is exactly how it should be.

“But,” some of you might protest, “I’m not working on any impossible problems. Do you look down on me, or think I should be doing something different?”

Not at all. “Most people all the time, and all people most of the time, should stick to the possible.” The first phrase is just as critical as the second: most people should always do things they know they can do, work on normal goals, and live normal lives. We weirdos working on impossible problems need the world to keep running while we do it. We need accountants and restaurant owners and librarians and politicians and auto mechanics.

There is absolutely no reason that anybody who doesn’t already feel some compulsion to work on an impossible problem should do so. If you have an idea for a startup that could change the world, you have no obligation to follow through with it. If you hear about a Global Catastrophic Risk, you have no obligation to do anything about it (other than, perhaps, to help a little bit however you can). There are those of us who can tolerate the prospect of spending our whole lives on a potentially fruitless endeavor, and who are willing to do so in exchange for decreasing the risk of something that genuinely terrifies us: the serious crippling, or permanent extinction, of the human race.

That’s our own prerogative, not yours. It doesn’t matter to me what you choose to do with your life: that’s dependent on your utility function, not mine. The only thing that matters to me is my own work. If we each focus on our own work, and sphere of influence, that’s enough.

The Incredibles 2, and How the Universe is Allowed to Just Kill You Anyway

The Incredibles 2 is a movie about superheroes, which is the sequel to another movie about superheroes. Both are centrally themed around the idea that “no man is an island” – as in, you aren’t alone, you don’t need to be alone, and in fact, you do better when you let others help you – to the point that “Nomanisan Island” is an actual location in the films.

I watched the first Incredibles movie when I was a child. It was good, but it didn’t leave any lasting impression in my young brain beyond “Elastigirl cool”. I thought this might be because I’d seen it before I was sentient, so I watched it again later. I liked it more than I had when I was young, but it still didn’t hammer its central theme into my brain nearly as effectively as its sequel.

There are three main things that made the sequel much better than the original; at least, three that are particularly poignant to me. First, the stakes are meaningful. At the end of the first movie, if the good guys didn’t win, Syndrome would have “made everybody super, so no one is”, whatever that means. At the end of the sequel, if the good guys didn’t win, a gigantic cruise ship would have crashed into a coastal city, killing hundreds or thousands of people. You can imagine which of these is more emotionally moving to me.

Second: partway through the second act, Helen meets a number of other superheroes who have been in hiding. This introduces an important element the first movie lacked: a supporting cast. Seeing more than six supers fleshes out “superheroes” as a group, and it shows us the sheer number of people whose lives were negatively impacted by the outlawing of supers.

This, and a few other scenes, make it clear that we, the audience, are expected to care about people besides the main cast. A lot of movies just take collateral damage in stride, telling the audience not to think about the fact that the good guys let hundreds of unnamed pedestrians die when they crash a bad guy’s helicopter into a building. In the Incredibles movies, this attention comes from a plot focus on minimizing the collateral damage superheroes cause, but it resonates nicely with the humanist in me.

Third: Because the Incredibles movies feature a family, critical parts of the plot feature the children as central characters. But while in the first movie they mostly just held their own, in the sequel they independently advance the plot. In fact, there is a point in the climax where all the adults have been hypnotized by the villain, and it’s up to the children – who have so far been bickering, uncoordinated, and generally unqualified to accomplish this necessary task – to save the day.

They succeed, of course, because this is a family-friendly movie. But that sequence of events produced a genuine feeling of uncertainty about the outcome that most movies struggle for. It captured, for me at least, the precise feeling I have when thinking about global catastrophic risks.

A global catastrophic risk (GCR) is “a hypothetical future event which could damage human well-being on a global scale”. A sub-type of GCR is the existential risk: one that would cause outright human extinction, such as a non-value-aligned artificial superintelligence.

There are tangible, obvious, salient stakes when talking about GCRs. We discuss them because we care about our fellow humans, and we want them not to suffer or die. But, at the same time, we are in a horribly unfortunate position in terms of actually preventing any of these risks, because we’re all bickering, uncoordinated, and generally unqualified to accomplish this necessary task.

At the same time that some people are still working on developing ASI, still pumping massive amounts of greenhouse gases into the atmosphere, still stockpiling nuclear weapons, the rest of us are going about our petty lives, not realizing that the actions of these few might imminently kill us all. We can’t affect the actions of these groups, just like we can’t affect the orbit of the sun – as in, it’s strictly-speaking possible, but extraordinarily difficult.

So we’re stuck between extinction and impossibility. Either accept the greater-than-50% likelihood of a universe tiled with paperclips, or move the sun.

Unlike in The Incredibles movies, real life is not family-friendly. There is no plot armor protecting us from extinction, no reason that the squabbling children should be able to defeat the villain. If we are going to survive, we need to become much better.

What “better” looks like depends on the risk, and there are a lot of them. We all should educate ourselves on what GCRs are, what they look like, how bad each one could be, and what preventative measures could be taken to make us safer. In instances where the powers that be are likely to listen to us, we should rally, and scream loudly enough that they’re forced to listen. And, lastly, in the specific situations where we ourselves are the powers that be – if we are AI programmers or molecular nanotechnology developers or biotechnologists – we need to think long and hard about our decisions.

Do not underestimate the likelihood of a future where someone says “oops”, and seven billion bodies hit the ground.