Ditch Pros and Cons: Use a Utility Function

If you’ve ever met me in person, you know I talk a lot about utility. My close friends are used to answering questions like “does this have net positive utility to you?” and “is that a strongly weighted component of your utility function?”. Every time I make a decision – what to do with my evening, or what to do with my life – I think about it in terms of utility.

I didn’t always do this, but I’ve adopted this way of thinking because it forces me to clarify everything that’s going on in my head and weight all my thoughts appropriately before making a decision. When I decide things this way, I genuinely feel like I’ve made the best possible choice I could, given everything I knew at the time.

What on earth is utility?

Utility is just a fancy word for “value”. If you enjoy chocolate ice cream, then eating chocolate ice cream has positive utility. If you don’t enjoy carrot cake, then eating carrot cake has negative utility.

One action can have multiple kinds of utility. You can add together all the utility types to get the action’s net utility. For example, if I assign positive utility to eating ice cream but negative utility to gaining weight, there will be a certain optimal point where I eat as much ice cream as I can without gaining weight. Maybe, if I assign eating ice cream +5 utility, not gaining weight +5 utility, and exercising -5 utility, then it would make sense for me to hit the gym more often so that I can eat more ice cream without gaining weight: the -5 from exercising is outweighed by the combined +10 from the other two.

The set of utilities I assign to all outcomes also tells me which possible outcomes would be optimal: the ones with the highest net utility. In this example, that would be either modifying ice cream so that it doesn’t make me gain weight, or modifying my body’s energy processing system to get it to process ice cream without storing any of it as fat.

Having consistent numbers is sometimes helpful, but it isn’t strictly necessary. When I need to make quick, relatively straightforward decisions, I typically just make up some utility numbers. Utility calculations in a small, isolated system are basically a matter of relative comparison: it doesn’t matter exactly how much utility I assign to something, only that if outcome X has 5 more utility points than outcome Y, X is preferable.

Forcing yourself to make up numbers and compare them reveals what you care about. If you initially thought you didn’t care much about something, but assigning it a low number leaves you unsatisfied with the resulting net utility, then you care more than you thought you did.

It might be somewhat unclear, with my super-simple examples so far, what you can assign utility to. So, here are some examples of things that I assign positive utility to:

  • Reading good books
  • Doing new things
  • Increasing utility according to the utility functions of people I care about
  • Building neat software
  • Drawing and painting
  • Writing stories and blog posts
  • Improving/maintaining my mental and physical health
  • Having interesting conversations
  • Improving the quality of life of all sentient beings
  • Running
  • Riding my bike
  • Taking walks with my girlfriend
  • Eating ice cream

If you enjoy doing it, if you think you should do it, if it makes you happy, if it’s a goal you have for some reason, or anything else like that, then you assign it some amount of positive utility.

If you’d like to figure out how much relative utility you assign to different options, compare them: if I had to either give up on improving the quality of life for all sentient beings, or give up on ice cream, the ice cream has gotta go.

You can even assign positive utility to things you don’t end up doing. That’s because the net utility, after accounting for circumstances or mutually exclusive alternate actions, can still come out negative. Knowing that you would do something, barring XYZ condition, is useful for dissecting your own thoughts, feelings, goals, and motivations. The converse is true, too: you can assign negative utility to things you end up doing anyway, because the net utility is positive.

So if that’s utility, what’s a utility function?

A utility function is a comprehensive set of everything that an agent assigns any utility to. “An agent” is anything capable of making decisions: a human, an AI, a sentient nonhuman alien, etc. Your utility function is the set of everything you care about.

The inputs to a utility function are quantities of certain outcomes, each of which is multiplied by its assigned utility value; the results are then added together to get the total expected utility of a given course of action. In an equation, this is:

Ax + By + Cz + ...

Where A, B, C, and so on are quantities of individual facets of outcomes, and x, y, z, and so on are their assigned utilities.

Say I’m with my sister and we’re going to get food. I personally assign a strong net positive to getting burgers and a weak net negative for anything else. I also assign a positive utility to making my sister happy, regardless of where we go for food. If she has a strong net negative for getting burgers, and a weak net positive for sushi, I can evaluate that situation in my utility function and decide that my desire to make her happy overpowers the weak negative I have for anything besides burgers, so we go get sushi.
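
To make the arithmetic concrete, here’s a minimal Python sketch of that dinner decision. The specific numbers and the care_weight knob are made up purely for illustration:

```python
# A minimal sketch of the dinner decision, with made-up numbers.
my_food_utility = {"burgers": +4, "sushi": -1}       # weak negative for non-burgers
sister_food_utility = {"burgers": -5, "sushi": +2}   # her preferences, as I model them
care_weight = 1.0  # how much each point of her utility counts in my function

def net_utility(option):
    # My own enjoyment, plus the utility I get from her being happy.
    return my_food_utility[option] + care_weight * sister_food_utility[option]

best = max(my_food_utility, key=net_utility)
print({option: net_utility(option) for option in my_food_utility}, "->", best)
# burgers: 4 - 5 = -1; sushi: -1 + 2 = +1, so we go get sushi.
```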

When evaluating more complex situations (such as moving to a job across the country, where positives include career advancement and increased income, and negatives include having to leave your home and make new friends), modeling your own utility function is an excellent way to parse out all the feelings that come from a choice like that. It’s better than a simple list of pros and cons because you have (numeric, if you like) weights for all the relevant actions.

How to use your utility function

I don’t keep my entire utility function in my head at one time. I’ve never even written it down. But I make sure I understand large swaths of it, compartmentalized to situations I often find myself in. However, if you decide to actually write down your utility values, proceed to make them all consistent, and actually calculate utility when you make decisions, there’s nothing stopping you.

In terms of the optimal way to think about utility calculations, I have one piece of advice. If you come out of a utility calculation thinking “gotcha, I can do this”, “alright, this seems reasonable”, or even “ugh, okay, I don’t like it but this is the best option”, then that’s good. That’s the utility function doing its job. But, if you come out of one thinking “hmmm… I guess, but what about XYZ contingency? I really don’t want to do ABC…”, or otherwise lingering on the point of decision, then you’ve forgotten something.

Go back and ask “what’s wrong with the ‘optimal’ outcome?”. It might be something you don’t want to admit to yourself, but you don’t gain anything by having an inaccurate perception of your own utility function. Remember that, in absence of a verbal reason, “I don’t wanna” is still a perfectly valid justification for assigning negative utility to an action or outcome. In order for this process to work, you need to parse out your desires/feelings/goals from your actions, without beating yourself up for it. Your utility function already is what it is, and owning up to it doesn’t make it worse.

Once you have a pretty good handle on your own utility function, you can go ahead and mentally model other people’s. Humans are calculating utility all the time in the form of preferences and vague intuitions, so even if other people don’t know their own utility functions, you can learn them through a combination of watching their actions and listening to their words.

The discrepancy between those two, by the way, indicates one of two things: either the person is choosing an action with suboptimal utility, or they don’t actually assign utility to the things they claim aloud to value (perhaps for social reasons). You can point out this discrepancy politely, and perhaps help them make better decisions in the future.

Once you begin to use utility functions for both yourself and others, you might be surprised at how much easier it is to make decisions. When considering possible courses of action for yourself, you’ll be able to choose the best option and know it was the best. And, in a group, having an accurate model of other people’s utility functions can let you account for their preferences, perhaps even better than they themselves do.

PDP 5

Hey all! This update is coming in a little late, since I’ve been working through a bunch of interviews and projects for companies recently. After I get the definitive yes/no from those companies, I’ll see if I can post the projects here, but until then, here’s this week’s update.

This week, I learned in depth how to combat overfitting and faulty initialization, preprocess data, and a few state-of-the-art learning rate and gradient descent rules (including AdaGrad, RMSProp, and Adam). I also read some original ML research, and got started on doing my ML “Hello World”: the MNIST problem.

The section on overfitting was complete and explained the subject well, but the bit on initialization left me questioning a few things. For example, why do we use a sigmoid activation function if so much of its range is unhelpful: nearly flat (saturated) around outputs of 0 and 1, and practically linear around 0.5? Well, the answer, judging from the cutting-edge research, seems to be “we shouldn’t”. Xavier Glorot and Yoshua Bengio’s paper, Understanding the difficulty of training deep feedforward neural networks, explored a number of activation functions, and found that the sigmoid was one of the least useful, compared to the hyperbolic tangent and the softsign. To quote the paper, “We find that the logistic sigmoid activation is unsuited for deep networks with random initialization because of its mean value, which can drive especially the top hidden layer into saturation. […] We find that a new non-linearity that saturates less can often be beneficial.”
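
To see the saturation problem numerically, here’s a quick NumPy comparison of the three activations. This is my own toy check, not something from the course or the paper:

```python
import numpy as np

# The three activations discussed in the Glorot & Bengio paper.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softsign(x):
    return x / (1.0 + np.abs(x))

x = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])
for name, f in [("sigmoid", sigmoid), ("tanh", np.tanh), ("softsign", softsign)]:
    print(f"{name:>8}:", np.round(f(x), 3))

# The training problem is the gradient, not the value: the sigmoid's
# derivative sigmoid(x) * (1 - sigmoid(x)) is nearly zero for large |x|,
# which is the saturation the paper describes.
s = sigmoid(x)
print("sig grad:", np.round(s * (1 - s), 4))
```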

Within the course I’m using, section 41 deals with the state-of-the-art gradient descent rules. It’s exceedingly math-heavy, and took me a while to get through and understand. I found it helpful to copy down on paper all the relevant formulas, label the variables, and explain in short words what the different rules were for. Here’s part of a page of my notes.
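
To make the most involved of those rules concrete, here’s the standard Adam update written out in plain NumPy; seeing it as code helped me more than staring at the formulas did. The hyperparameter defaults are the ones from the original Adam paper; the toy usage at the bottom is my own:

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.001, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update.

    m tracks a moving average of gradients (the momentum idea), v a moving
    average of squared gradients (the per-weight scaling idea from RMSProp);
    t is the 1-based step count, used to correct the zero-initialization bias.
    """
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage: minimize f(w) = w^2, whose gradient is 2w.
w, m, v = np.array([5.0]), np.zeros(1), np.zeros(1)
for t in range(1, 1001):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.05)
print(w)  # close to 0
```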

I did teach myself enough calculus to understand the concepts of instantaneous rate of change and the partial derivative, which is all I’ve needed so far for ML. Here was the PDF I learned from, and which I will return to if I need to learn more.

The sections on preprocessing weren’t difficult to understand, but they did gloss over a decent amount of the detailed process, and I anticipate a few minor difficulties when I start actually trying to preprocess real data. The part I don’t anticipate any trouble with is deciding when to use binary versus one-hot encoding: they explain that bit relatively well. (Binary encoding involves numbering the categories sequentially, converting each index to binary, and storing each bit in its own variable. One-hot encoding gives each item a 1 in a specific spot along a vector whose length corresponds to the number of categories. You’d use binary encoding for large numbers of categories, but one-hot encoding for smaller numbers, as in the sketch below.)
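
Here’s a small illustration of the difference, using a made-up five-category example of my own:

```python
import numpy as np

categories = ["red", "green", "blue", "yellow", "purple"]

# One-hot: one slot per category, with a single 1 in the matching spot.
def one_hot(item):
    vec = np.zeros(len(categories), dtype=int)
    vec[categories.index(item)] = 1
    return vec

# Binary: number the categories, then write the index out in binary,
# one bit per variable.
n_bits = int(np.ceil(np.log2(len(categories))))
def binary_encode(item):
    index = categories.index(item)
    return [(index >> bit) & 1 for bit in reversed(range(n_bits))]

print(one_hot("blue"))        # [0 0 1 0 0]  -> 5 slots for 5 categories
print(binary_encode("blue"))  # [0, 1, 0]    -> only ceil(log2(5)) = 3 slots
```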

The last thing I did was get started on MNIST. For anyone who hasn’t heard of it before, the MNIST data set is a large, preprocessed set of handwritten digits which an ML algorithm can sort into ten categories (the digits 0–9). I don’t have a lot to say about my process as of this week, but I’ll have an in-depth update on it next week when I finish it.
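
For anyone wanting to follow along, here’s roughly the shape of a first attempt in tf.keras. The layer sizes and activation are my own guesses at a reasonable starting point, not the course’s recipe:

```python
import tensorflow as tf

# Load the dataset and scale the pixel values to [0, 1].
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # 28x28 image -> 784 vector
    tf.keras.layers.Dense(128, activation="tanh"),    # saturates less than sigmoid
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per digit
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```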

PDP 4

This week, I learned about deep learning and neural networks, and I wrote a handful of blog posts relating to concepts I learned last week.

The most poignant of these posts was Language: A Cluster Analysis of Reality. Taking inspiration from Eliezer Yudkowsky’s essay series A Human’s Guide To Words, and pieces of what I learned last week about cluster analyses, I created an abstract comparison between human language and cluster analyses done on n-dimensional reality-space.

Besides this, I started learning in depth about machine learning. I learned about the common loss functions, L2-norm and cross-entropy. I learned about the concept of deep neural nets: not just the theory, but the practice, all the way down to the math. I figured out what gradient descent is, and I’m getting started with TensorFlow. I’ll have more detail on all of this next week: there’s a lot I still don’t understand, and I don’t want to give a partially misinformed synopsis.
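
For reference, here’s what those two loss functions look like computed directly. This is my own NumPy sketch of the standard definitions, not code from the course:

```python
import numpy as np

def l2_loss(predictions, targets):
    # L2-norm loss: sum of squared differences (common for regression).
    return np.sum((predictions - targets) ** 2)

def cross_entropy(predicted_probs, one_hot_targets, eps=1e-12):
    # Cross-entropy: -sum(t * log(p)) (common for classification).
    p = np.clip(predicted_probs, eps, 1.0)  # avoid log(0)
    return -np.sum(one_hot_targets * np.log(p))

print(l2_loss(np.array([0.9, 2.1]), np.array([1.0, 2.0])))            # 0.02
print(cross_entropy(np.array([0.7, 0.2, 0.1]), np.array([1, 0, 0])))  # ~0.357
```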

The most unfortunate part of this week was certainly that in order to fully understand deep neural networks, you need calculus, because a decent portion of the math relies on partial derivatives. I did statistics instead of calculus in high school, since I dramatically prefer probability theory to differential equations, so I don’t actually have all that much in the way of calculus, and there was an upper bound on how much of the math I actually got. I think that I’ll give myself a bit of remedial calculus in the next week.

The most fortunate part of this week was discovering how legitimately useful my favorite book is. Around four or five years ago, I read Rationality: From AI to Zombies. It’s written by a dude who’s big on AI, so it naturally contains rather a lot of material on that subject. When I first read it, I knew absolutely nothing about AI, so I mostly skimmed those parts, except to the extent that I absorbed the fundamental theory by osmosis. However, I’ve recently been rereading Rationality for completely unrelated reasons, and the sections on AI are making a lot more sense to me now. They’re scattered through books 3, 4, and 5: The Machine in the Ghost, Mere Reality, and Mere Goodness.

And the most unexpected part of this week was that I had a pretty neat idea for a project, entirely unrelated to any of this other stuff I’ve been learning. I think I’ll program it in JavaScript over the next week, on top of this current project. It’s not complicated, so it shouldn’t get in the way of any of my higher-priority goals, but I had the idea because I would personally find it very useful. (Needless to say, I’ll be documenting that project on this blog, too.)