Ditch Pros and Cons: Use a Utility Function

If you’ve ever met me in person, you know I talk a lot about utility. My close friends are used to answering questions like “does this have net positive utility to you?” and “is that a strongly weighted component of your utility function?”. Every time I make a decision – what to do with my evening, or what to do with my life – I think about it in terms of utility.

I didn’t always do this, but I’ve adopted this way of thinking because it forces me to clarify everything that’s going on in my head and weight all my thoughts appropriately before making a decision. When I decide things this way, I genuinely feel like I’ve made the best possible choice I could, given everything I knew at the time.

What on earth is utility?

Utility is just a fancy word for “value”. If you enjoy chocolate ice cream, then eating chocolate ice cream has positive utility. If you don’t enjoy carrot cake, then eating carrot cake has negative utility.

One action can have multiple kinds of utility. You can add together all the utility types to get the action’s net utility. For example, if I assign positive utility to eating ice cream but a negative utility to gaining weight, there will be a certain optimal point where I eat as much ice cream as I can without gaining weight. Maybe, if I assign eating ice cream +5 utility, not gaining weight +5 utility, and exercising -5 utility, then it would make sense for me to hit the gym more often, since the -5 from exercising is more than paid for by the extra ice cream I can eat without gaining weight.

The set of utilities I assign to all outcomes also tells me the optimal possible outcomes, with the highest net utility. In this example, that would be either modifying ice cream so that it doesn’t make me gain weight, or modifying my body’s energy processing system to get it to process ice cream without storing any of it as fat.
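
To make the arithmetic concrete, here’s a toy sketch of how those made-up numbers could be compared. The per-serving scoring, the -5 for gaining weight, and the rule that one gym session offsets two extra servings are all illustrative assumptions of mine, layered on top of the numbers above:

```python
# Toy net-utility comparison using the made-up numbers from the ice cream example.
# Assumptions not in the post: ice cream is scored per serving, gaining weight is -5
# (the post only says it's negative), and one gym session offsets two extra servings.

EAT_ICE_CREAM = 5       # per serving
NOT_GAINING_WEIGHT = 5
GAINING_WEIGHT = -5
EXERCISING = -5         # per gym session

def net_utility(servings: int, gym_sessions: int) -> int:
    gains_weight = servings > 1 + 2 * gym_sessions  # made-up "calorie budget"
    weight_term = GAINING_WEIGHT if gains_weight else NOT_GAINING_WEIGHT
    return servings * EAT_ICE_CREAM + gym_sessions * EXERCISING + weight_term

print(net_utility(servings=1, gym_sessions=0))  # 10
print(net_utility(servings=3, gym_sessions=0))  # 10 (more ice cream, but I gained weight)
print(net_utility(servings=3, gym_sessions=1))  # 15 (hit the gym, eat more ice cream)
```

The exact numbers don’t matter much; the point is that writing them down makes the “gym buys me more ice cream” tradeoff explicit.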

Having numbers that are consistent is helpful sometimes, but isn’t strictly necessary. When I need to make quick, relatively straightforward decisions, I typically just make up some utility numbers. Utility calculations in a small isolated system are basically a matter of relative comparison: it doesn’t matter exactly how much utility I assign to something; what matters is that if outcome X has 5 more utility points than outcome Y, X is preferable.

Forcing yourself to make up numbers and compare them to each other reveals what you care about. If you initially thought you didn’t care much about something, but then realize that a net-utility calculation with a low number assigned to that thing leaves you unsatisfied with the result, you care more than you thought you did.

It might be somewhat unclear, with my super-simple examples so far, what you can assign utility to. So, here are some examples of things that I assign positive utility to:

  • Reading good books
  • Doing new things
  • Increasing utility according to the utility functions of people I care about
  • Building neat software
  • Drawing and painting
  • Writing stories and blog posts
  • Improving/maintaining my mental and physical health
  • Having interesting conversations
  • Improving the quality of life of all sentient beings
  • Running
  • Riding my bike
  • Taking walks with my girlfriend
  • Eating ice cream

If you enjoy doing it, if you think you should do it, if it makes you happy, if it’s a goal you have for some reason, or anything else like that, you assign it some amount of positive utility.

If you’d like to figure out how much relative utility you assign to different options, compare them: if I had to either give up on improving the quality of life for all sentient beings, or give up on ice cream, the ice cream has gotta go.

You can even assign positive utility to things you don’t end up doing. That’s because the net utility, after accounting for circumstances or mutually exclusive alternate actions, can still come out negative. Knowing that you would do something, barring XYZ condition, is useful for dissecting your own thoughts, feelings, goals, and motivations. The converse is true, too: you can assign negative utility to things you end up doing anyway, because the net utility is positive.

So if that’s utility, what’s a utility function?

A utility function is a comprehensive set of everything that an agent assigns any utility to. “An agent” is anything capable of making decisions: a human, an AI, a sentient nonhuman alien, etc. Your utility function is the set of everything you care about.

The inputs to a utility function are quantities of certain outcomes; each quantity is multiplied by its assigned utility value, and the products are added together to get the total expected utility of a given course of action. In an equation, this is:

Ax + By + Cz + ...

Where A, B, C, and so on are the quantities of individual facets of an outcome, and x, y, z, and so on are the utilities assigned to them.
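
If it helps to see that as code, here’s a minimal sketch of the same weighted sum. The facet-to-number dictionaries and the function name are my own choices, not anything from the formula itself; they’re just one convenient way to hold a slice of a utility function:

```python
# A minimal sketch of the weighted sum above: each facet of an outcome
# (A, B, C, ...) multiplied by the utility assigned to it (x, y, z, ...).

def expected_utility(outcome: dict[str, float], utilities: dict[str, float]) -> float:
    """outcome: facet -> quantity; utilities: facet -> utility per unit of that facet."""
    return sum(qty * utilities.get(facet, 0.0) for facet, qty in outcome.items())
```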

Say I’m with my sister and we’re going to get food. I personally assign a strong net positive to getting burgers and a weak net negative for anything else. I also assign a positive utility to making my sister happy, regardless of where we go for food. If she has a strong net negative for getting burgers, and a weak net positive for sushi, I can evaluate that situation in my utility function and decide that my desire to make her happy overpowers the weak negative I have for anything besides burgers, so we go get sushi.
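
Plugging made-up numbers into the sketch above reproduces that decision. The post only says “strong” and “weak”, so the specific values (strong = 5, weak = 1, my sister’s happiness = 8) are illustrative guesses:

```python
my_utilities = {"burgers": 5, "not_burgers": -1, "sister_happy": 8}

# Facets of each option: what I end up eating, and whether my sister ends up happy.
burgers = {"burgers": 1, "sister_happy": 0}      # she has a strong net negative for burgers
sushi   = {"not_burgers": 1, "sister_happy": 1}  # a weak net positive for her, so she's happy

print(expected_utility(burgers, my_utilities))  # 5
print(expected_utility(sushi, my_utilities))    # 7, so we go get sushi
```

Shrink the weight on my sister’s happiness and the answer flips, which is exactly the point: the numbers make the tradeoff visible instead of leaving it as a vague feeling.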

When evaluating more complex situations (such as moving to a job across the country, where positives include career advancement and increased income, and negatives include having to leave your home and make new friends), modeling your own utility function is an excellent way to parse out all the feelings that come from a choice like that. It’s better than a simple list of pros and cons because you have (numeric, if you like) weights for all the relevant actions.

How to use your utility function

I don’t keep my entire utility function in my head at one time. I’ve never even written it down. But I make sure I understand large swaths of it, compartmentalized to situations I often find myself in. However, if you want to write your utility values down, make them all consistent, and actually calculate utility when you make decisions, there’s nothing stopping you.

In terms of the optimal way to think about utility calculations, I have one piece of advice. If you come out of a utility calculation thinking “gotcha, I can do this”, “alright, this seems reasonable”, or even “ugh, okay, I don’t like it but this is the best option”, then that’s good. That’s the utility function doing its job. But, if you come out of one thinking “hmmm… I guess, but what about XYZ contingency? I really don’t want to do ABC…”, or otherwise lingering on the point of decision, then you’ve forgotten something.

Go back and ask “what’s wrong with the ‘optimal’ outcome?”. It might be something you don’t want to admit to yourself, but you don’t gain anything by having an inaccurate perception of your own utility function. Remember that, in the absence of a verbal reason, “I don’t wanna” is still a perfectly valid justification for assigning negative utility to an action or outcome. In order for this process to work, you need to parse out your desires/feelings/goals from your actions, without beating yourself up for it. Your utility function already is what it is, and owning up to it doesn’t make it worse.

Once you have a pretty good handle on your own utility function, you can go ahead and mentally model other people’s. Humans are calculating utility all the time in the form of preferences and vague intuitions, so even if other people don’t know their utility functions, you can learn them by a combination of watching their actions and listening to their words.

The discrepancy between those two, by the way, is indicative of one of two things: either the person is choosing an action with suboptimal utility, or they don’t actually assign utility to the things they claim aloud to value (perhaps for social reasons). You can point out this discrepancy politely, and perhaps help them to make better decisions in the future.

Once you begin to use utility functions for both yourself and others, you might be surprised at how much easier it is to make decisions. When considering possible courses of action for yourself, you’ll be able to choose the best option and know it was the best. And, in a group, having an accurate model of other people’s utility functions can let you account for their preferences, perhaps even better than they themselves do.

Book Review: The Humans

Matt Haig’s “The Humans” gains the dubious title of “most frustrating book I’ve ever read all the way through”.

Before reading this review, please read the book yourself and come up with your own ideas about it. I very much don’t want this review to spoil it for you, and I’m about to lay out and thoroughly dissect the plot. Despite the fact that some of its meta-elements frustrate me in particular, the book is immensely well-written and beautiful, and I don’t want to diminish anyone’s enjoyment of it before they’ve even gotten the chance to read the original.

That being said…

I’ve found a number of books frustrating. The overwhelming majority, I didn’t bother to finish. Some of these books were badly written, some espoused ideologies I strongly disagree with, some were internally inconsistent. I won’t name the specific books on this so-frustrating-I-didn’t-finish-them list, because you’ll probably think I’m making a value judgement against those books, or that I want to make you feel bad if you enjoy them. I’m not, and I don’t: my frustration with these books is an attribute of me, not of the books. Likewise, my frustration with “The Humans”.

Here’s a quick plot synopsis – as a refresher for the bits I’ll be talking about; if you haven’t read the book, read it.

There is a highly advanced alien species who finds out that a particular human has found out a thing they don’t want him to find out. As such, they kill him and send one of their own to impersonate him, to delete the evidence, including that which happens to be represented within human brains. The aliens are not concerned with the fact that humans tend to call this “murder”. The one they send has a difficult time adjusting to life as a human for a number of reasons, but gets out of some tough scrapes using magi- I mean alien technology. In the process, he gets attached to the family of the man he’s impersonating, who he was sent to kill, and also somewhat to humanity in general. He has an existential crisis over it all, and ends up relinquishing his life in his hyper-advanced home civilization to spend the rest of his life as a human mortal.

Here are my two specific points of frustration with that.

#1: The author is so focused on the main character’s journey to the end state which he understands (poetic sympathy with the modern human condition) that he doesn’t adequately demonstrate the beginning state, and the whole journey is cheapened as a result. Essentially, he writes a story from the perspective of someone who comes from a society whose entire purpose in existence is math, and yet there isn’t much actual math in it. Not even for the purpose of making decisions. I know from experience that when you really care about the math, you sort of become the math. It isn’t just a tool you use; it takes over your thoughts. Part of the beauty of stories like HPMOR is that they’re really, honestly about science – you couldn’t remove the science without removing the story.

There is a fundamental disconnect when you try to write a book from the perspective of someone in love with math, without yourself actually being in love with math. Really being in love with math doesn’t look like having a favorite prime number. It doesn’t even look like recognizing the importance of math to the structure of the universe, though this is in fact a piece of insight more people would do well to have. Really being in love with math looks like having the thoroughly amazing realization that the question “what should I believe?” has an empirically proven correct answer. It looks like finding beauty in a proof like an artist finds beauty in a flower. It looks like loving the universe more because of its mathematical roots; finding more joy, not less, in a rainbow once it has been explained.

In short, I’d like to see this book’s premise rewritten by a mathematician.

#2: The ending of this book generally makes the transhumanist in me want to scream.

I don’t think it’s terribly hard to see why death is a bad thing. A decent portion of humans have already reached that conclusion. It would be even easier to decide that death is bad if you came from a society which didn’t have any such thing: the only reason that many humans think it’s okay is rationalization, anyway. You could make people rationalize reasons why getting hit on the head with a truncheon every week was actually a good thing, if they thought it was inevitable. (It makes your head stronger! And makes you happier on the days you’re not getting hit on the head! No, really!) But if I asked you, dear reader, who are presumably not subject to such a weekly annoyance, whether you’d like to start, for all the amazing benefits, I think you’d say no.

And yet this alien, who comes from a society which has no such thing as death, and furthermore no such thing as permanent physical injury, accepts mortality in exchange for becoming one of The Humans.

I mean, I get it, humans are cool. That’s the whole “humanist” bit. I love humans too. I think we’re capable of greatness. But exchanging immortality for us? Without so much as putting up a fight?

I think I’d at least try to apply my superior intelligence to figure out exactly how the relevant bits of alien technology worked, and find out how to apply them in humans. Yet he fails to take a trip down that line of discovery. Further, the alien is small-scale altruistic without ever considering the concept of large-scale altruism. He spends a lot of time agonizing over the fact that he can’t help the humans since they’d realize he wasn’t one of them, and yet he spends a non-negligible portion of the book helping the family of the man he’s impersonating. I think if I had a magic left hand that I didn’t want anyone to know about, I would still go around using it to cure people. Just, when I got asked how it worked, I’d say “Science!” – it’s a curiosity-stopper for a lot of people. On the whole, if I were really intent on abandoning my home planet for Earth, I would at least try to steal as much useful stuff as possible before I left, and use it to the best of my ability.

So why didn’t the alien do this? Simply, because he was written by a human who had not thought of it. The writer must encompass his characters, and so no character can go beyond the knowledge of the writer. If you consider what an immortal alien would do, that doesn’t let you magically climb outside your own brain to generalize from knowledge that isn’t yours. If you accept death as the natural order, who says that an immortal alien wouldn’t accept it too?

I do. It doesn’t make any sense. I wouldn’t do that, and I grew up with death. Within the past year, two of my relatives have died, along with hundreds of thousands of strangers, and I find that completely unacceptable. I have reason to believe that an immortal alien would probably think a bit more like me than like Matt Haig – assuming the alien were capable of thinking like a human at all.

So, I suppose, this book is frustrating because it accepts what, to me, is unacceptable, without putting up a fight at all. It’s one long exercise in the Mind Projection Fallacy, and a demonstration of the fact that to write true science fiction you need to actually know science. I read it all the way through anyway because it’s beautifully written and incredibly interesting.