The Incredibles 2, and How the Universe is Allowed to Just Kill You Anyway

The Incredibles 2 is a movie about superheroes, which is the sequel to another movie about superheroes. Both are centrally themed around the idea that “no man is an island” – as in, you aren’t alone, you don’t need to be alone, and in fact, you do better when you let others help you – to the point that “Nomanisan Island” is an actual location in the films.

I watched the first Incredibles movie when I was a child. It was good, but it didn’t leave any lasting impression in my young brain beyond “Elastigirl cool”. I thought this might be because I’d seen it before I was sentient, so I watched it again later. I liked it more than I had when I was young, but it still didn’t hammer its central theme into my brain nearly as effectively as its sequel.

There are three main things that made the sequel much better than the original; at least, three that are particularly poignant to me. First, the stakes are meaningful. At the end of the first movie, if the good guys didn’t win, Syndrome would have “made everybody super, so no one is”, whatever that means. At the end of the sequel, if the good guys didn’t win, a gigantic cruise ship would have crashed into a coastal city, killing hundreds or thousands of people. You can imagine which of these is more emotionally moving to me.

Second: partway through the second act, Helen meets a number of other superheroes who have been in hiding. This introduces an important element the first movie lacks: a supporting cast. It fleshes out the group “superheroes” to see more than six, and it shows us the sheer number of people whose lives have been negatively impacted by the outlawing of supers.

This, and a few other scenes, make it clear that we, the audience, are expected to care about people besides the main cast. A lot of movies just take collateral damage in stride, telling the audience not to think about the fact that the good guys let hundreds of unnamed pedestrians die when they crash a bad guy’s helicopter into a building. In Incredibles movies, this comes from a plot focus on minimization of collateral damage from superheroes, but it resonates nicely with the humanist in me.

Third: because the Incredibles movies feature a family, critical parts of the plot put the children front and center. But while in the first movie they mostly just held their own, in the sequel they independently drive the plot forward. In fact, there is a point in the climax where all the adults have been hypnotized by the villain, and it’s up to the children – who have so far been bickering, uncoordinated, and generally unqualified to accomplish this necessary task – to save the day.

They succeed, of course, because this is a family-friendly movie. But that sequence of events produced a genuine feeling of uncertainty about the outcome that most movies struggle to achieve. It captured, for me at least, the precise feeling I have when thinking about global catastrophic risks.

A global catastrophic risk (GCR) is defined as “a hypothetical future event which could damage human well-being on a global scale”. One sub-type of GCR is the existential risk: a risk severe enough to cause human extinction, such as a non-value-aligned artificial superintelligence.

There are tangible, obvious, salient stakes when talking about GCRs. We discuss them because we care about our fellow humans, and we want them not to suffer or die. But, at the same time, we are in a horribly unfortunate position in terms of actually preventing any of these risks, because we’re all bickering, uncoordinated, and generally unqualified to accomplish this necessary task.

While some people are still working on developing ASI, still pumping massive amounts of greenhouse gases into the atmosphere, still stockpiling nuclear weapons, the rest of us go about our petty lives, not realizing that the actions of these few might imminently kill us all. We can’t affect the actions of these groups, just as we can’t affect the orbit of the sun – as in, it’s strictly-speaking possible, but extraordinarily difficult.

So we’re stuck between extinction and impossibility. Either accept the greater-than-50% likelihood of a universe tiled with paperclips, or move the sun.

Unlike in The Incredibles movies, real life is not family-friendly. There is no plot armor protecting us from extinction, no reason that the squabbling children should be able to defeat the villain. If we are going to survive, we need to become much better.

What “better” looks like depends on the risk, and there are a lot of them. We all should educate ourselves on what GCRs are, what they look like, how bad each one could be, and what preventative measures could be taken to make us safer. In instances where the powers that be are likely to listen to us, we should rally, and scream loudly enough that they’re forced to listen. And, lastly, in the specific situations where we ourselves are the powers that be – if we are AI programmers or molecular nanotechnology developers or biotechnologists – we need to think long and hard about our decisions.

Do not underestimate the likelihood of a future where someone says “oops”, and seven billion bodies hit the ground.


This week, I learned about deep learning and neural networks, and I wrote a handful of blog posts relating to concepts I learned last week.

The most poignant of these posts was Language: A Cluster Analysis of Reality. Taking inspiration from Eliezer Yudkowsky’s essay series A Human’s Guide To Words, and pieces of what I learned last week about cluster analyses, I created an abstract comparison between human language and cluster analyses done on n-dimensional reality-space.

Besides this, I started learning in depth about machine learning. I learned about common loss functions such as the L2 norm and cross-entropy. I learned about the concept of deep neural nets: not just the theory, but the practice, all the way down to the math. I figured out what gradient descent is, and I’m getting started with TensorFlow. I’ll have more detail on all of this next week: there’s a lot I still don’t understand, and I don’t want to give a partially misinformed synopsis.
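To make those two concepts concrete, here’s a minimal sketch in plain Python: gradient descent minimizing an L2 loss on a one-weight linear model, plus a bare-bones cross-entropy function. The toy data, learning rate, and function names here are my own illustrative choices, not anything from TensorFlow or a particular course.

```python
import math

# Toy data for illustration: try to learn y = 2x from four points.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * x for x in xs]

w = 0.0    # the single weight we're fitting
lr = 0.05  # learning rate

for _ in range(200):
    # L2 loss: mean((w*x - y)^2). Its partial derivative with respect
    # to w is mean(2 * (w*x - y) * x) -- this is where the calculus
    # (partial derivatives) comes in.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # gradient descent: step against the gradient

print(round(w, 3))  # converges toward 2.0

def cross_entropy(p_true, p_pred):
    """Cross-entropy between a true distribution and a predicted one."""
    return -sum(t * math.log(q) for t, q in zip(p_true, p_pred) if t > 0)

# A confident correct prediction has low cross-entropy...
print(cross_entropy([1.0, 0.0], [0.9, 0.1]))  # ~0.105
# ...and a confident wrong one has high cross-entropy.
print(cross_entropy([1.0, 0.0], [0.1, 0.9]))  # ~2.303
```

In a real deep net the same idea applies, just with millions of weights: the loss’s partial derivative with respect to each weight is computed (via backpropagation), and every weight takes a small step against its gradient.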

The most unfortunate part of this week was discovering that to fully understand deep neural networks, you need calculus: a decent portion of the math relies on partial derivatives. I did statistics instead of calculus in high school, since I dramatically prefer probability theory to differential equations, so I don’t have much calculus under my belt, and there was an upper bound on how much of the math I actually got. I think I’ll give myself a bit of remedial calculus in the next week.

The most fortunate part of this week was the discovery of how legitimately useful my favorite book is. Around four or five years ago, I read Rationality: From AI to Zombies. It’s written by a dude who’s big on AI, so naturally it contains rather a lot of material on that subject. When I first read it, I knew absolutely nothing about AI, so I just kind of skimmed over those parts, except to the extent that I absorbed the fundamental theory by osmosis. However, I’ve recently been rereading Rationality for completely unrelated reasons, and the sections on AI are making a lot more sense to me now. They’re scattered through books 3, 4, and 5: The Machine in the Ghost, Mere Reality, and Mere Goodness.

And the most unexpected part of this week was that I had a pretty neat idea for a project, entirely unrelated to any of this other stuff I’ve been learning. I think I’ll program it in JavaScript over the next week, on top of this current project. It’s not complicated, so it shouldn’t get in the way of any of my higher-priority goals, but I had the idea because I would personally find it very useful. (Needless to say, I’ll be documenting that project on this blog, too.)