The Myth of the 100-Hour Work Week

“Startup founders work 100-hour weeks.” I forget when I first heard this, but I believe it was around the same time I heard about technology startups.

At the time, I was very impressed with the tremendous passion and work ethic of these founders, who could spend 6+ days out of every week doing nothing but eating, sleeping, and working. And that really was exactly what I thought they were doing: getting 100 full hours, every week, of laser-focused productive work time, without taking breaks to chat about non-work things, or stare into space letting their minds drift, or take walks, or exercise, or anything.

Recently (for some complicated reasons I’ll post later), I’ve decided to impose on myself a work week containing as many productive hours as possible. In making my schedule, I took into account all the psychology of learning I had researched over years of interest in such things: efficient thinking happens on 8 hours of sleep, taking frequent short breaks helps brains remember things by dint of primacy and recency effects, the human circadian rhythm includes a mid-afternoon dip in alertness and therefore napping through the “afternoon slump” is more effective than trying to work through it, etc. When I was done filling up all my time with little blue boxes in Google Calendar, I tallied up all my productive working time and found that I had only 52 hours.

Now, to be clear, my schedule did not contain quite as much work as it theoretically could have. I had allotted myself an hour to make lunch, and two hours to exercise in the morning, and an hour and a half to socialize in the evenings. The purpose was to make the plan sustainable, so that executing against it wouldn’t burn me out.

But even if I didn’t care about that, I didn’t see how I could have gotten that productive-hours-per-week number up to 100. It just seemed inefficient, based on everything I had read about human brains, for someone to work nose-to-the-grindstone at a task for that many hours. Taking breaks to exercise and eat healthy food and sleep eight hours a night would make their thinking more efficient than just working longer hours.

I had a hypothesis: people might say they worked a hundred hours a week, but perhaps they only spent 60-80 of those hours actually being productive. When I’ve worked full-time at an office, a significant amount of my time went to looking productive regardless of whether I actually was, to inefficient uses of work time like chatting with coworkers, and to the corporate busywork that comes with a salaried job. This is a very common situation: one oft-cited survey found that the average office worker gets only 2 hours and 53 minutes of productive work out of an 8-hour day.

But I wasn’t sure about this, and not having worked in a technology startup myself, I wanted to ask someone who had. So, I asked my mom. This was what she told me.

People who say they work 100-hour weeks may be at the office for a hundred hours, but they are actually productive for around 60. The remaining 40 hours are spent taking breaks of various sorts. But the reason they stay at the office for that time is so that they can take breaks in the same space as a bunch of other smart people working on the same set of projects they are. That way, discussions about not-work meld into discussions about work, and the group ends up with more productive time overall.

As it turns out, this is close to the optimal workflow for any group of people trying to put as many productive hours as possible into a shared project. And it’s used not only by technology startups but in every other field where such a thing is needed. For example, before she got into tech, my mom worked as a researcher for NASA. They used the same system: people would work for several hours, then take a break and play foosball and talk about something unrelated, then get talking about their work at the foosball table, and then somebody would have an idea and run off to the office to go work on it.

Learning this, I was even more impressed than I had initially been. Apparently, highly productive groups have been independently reinventing this same style of working for a very long time. And now I get to use it, too.

Rationality is Generalizable

In order to make some extra money during this pandemic, I’ve been doing a bit of work fixing up one of my mom’s rental houses. All the work is within my preexisting skillset, and it’s pretty nice to have a physical job to give me a break from my mental work.

When fixing up a house between tenants, the first item on the to-do list is to create the to-do list, called a “punch list”. In order to create it, I walked through the house and noted down the problems: this railing is loose, this stair tread is broken, these tiles are cracked, etc. Once I was finished, I’d made a list that would probably take me about a week to complete.

And then I tripled that time, and sent it to my mom as my time estimate.

I did this to combat something known as the planning fallacy, one of many endemic flaws in the human brain.

When humans make plans, they envision a scenario where nothing goes unexpectedly. But reality doesn’t work the way human-brains-making-plans do. Fixing those cracked tiles ends up requiring ripping out eight layers of rotted wood underneath, filling in the resulting two-inch-deep gap with concrete, then leveling it out with plywood before laying the new tiles. Repairing the crack in the living room wall ends up requiring replacing the whole gutter system, which was causing water to run through the bricks on the outside and into the drywall. When we just see some cracked tiles or some chipping paint, we don’t imagine the root problem that might need to be fixed: we just consider replacing a few tiles or repairing a bit of cracked wall.

This generalizes far beyond fixing houses. When a group of students was asked to estimate when they would complete their personal academic projects:

  • 13% of subjects finished their project by the time they had assigned a 50% probability level;
  • 19% finished by the time assigned a 75% probability level;
  • and only 45% finished by the time of their 99% probability level.

As Buehler et al. wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”

Humans planning things envision everything going according to plan, with no unforeseen delays: their “realistic” estimates come out essentially the same as if you had asked for a best-case estimate. And in real life, what actually happens tends to be somewhat worse than the worst-case estimate.

There are some useful debiasing techniques for combating the planning fallacy. The most useful of these is to utilize the outside view instead of the inside view: to consider how similar projects have gone in the past, instead of considering all the specific details of this particular project. (Considering specific details drives up inaccuracy.)

I’ve used this technique often, but in this particular circumstance, I couldn’t. While I’d done every individual thing that would need to be done to finish this house before, I had never done all of them in sequence.

Considering this problem, you might advise me to ask someone who had done whole houses before. I have easy access to such a person, in fact. The issue with this solution is that this person always makes overly optimistic estimates for how long it’s going to take to complete projects.

You would think her experience would make it easier for her to take the outside view, to consider this house in the context of all the other houses she had fixed. This doesn’t happen, so what gives?

Roy et al. propose a reason: “People base predictions of future duration on their memories of how long past events have taken, but these memories are systematic underestimates of past duration. People appear to underestimate future event duration because they underestimate past event duration.”

In light of all this, my best course of action, whenever I cannot take an outside view myself, is to take my normal (= optimistic) estimate and triple it.
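
A minimal sketch of that heuristic in Python (the function and parameter names are hypothetical, and the numbers are purely illustrative): use the outside view when records of similar past projects exist, and fall back to multiplying the optimistic inside-view estimate when they don’t.

    from statistics import median

    def estimate_duration(inside_view_days, past_similar_projects=None, fudge_factor=3):
        """Estimate a project's duration while guarding against the planning fallacy.

        inside_view_days:      the optimistic estimate from imagining the plan going smoothly
        past_similar_projects: actual durations (in days) of comparable past projects, if any
        fudge_factor:          multiplier applied when no outside view is available
        """
        if past_similar_projects:
            # Outside view: ignore this project's details and ask how long
            # similar projects actually took.
            return median(past_similar_projects)
        # No reference class available: inflate the optimistic estimate instead.
        return inside_view_days * fudge_factor

    # The punch list looked like about a week of work, and I had no record of
    # past whole-house fix-ups to draw on, so:
    print(estimate_duration(inside_view_days=7))  # -> 21 days, about three weeks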

I made that three-week time estimate around the first of June. Today is the 16th, and I’m finishing the last of the work.

This whole situation, like the planning fallacy itself, generalizes. Learning about cognitive psychology, and the persistent flaws shared by all human brains, is oftentimes more useful for coming up with correct answers than experience is.

You might not think that the tactics for “how to not be stupid” would generalize to every field as well as they do. Each field has its own tips and tricks, its own separate toolbox; carpentry is not like neurosurgery is not like musical composition… But we are all human brains, we carpenters and neurosurgeons and composers, and we are all using the same flawed circuitry generated by the same optimization process. Here, as in planning, the special reasons why each field is different detract from an accurate conclusion.

Knowing about one cognitive bias produced a better time estimate than thirty years of experience in the field. Rationality does not always produce such a large improvement, but it does prevent humans from making the types of stupid mistakes we are prone to. In my personal opinion, if a decision is worth making, it is worth making rationally.

What Is the “Wage Gap”, Anyway?

“For every dollar a man makes, a woman makes 77¢.” You’ve probably heard this statistic thrown around before. But what, really, does it mean? For a statistic we use to benchmark the “wage gap”, it’s a shockingly broad statement. Well, as it turns out, this statistic is legitimate, and it does demonstrate a significant problem with sexism in the modern world – but it doesn’t mean what you think it does.

Growing up, I always knew I wanted to do something big with my life. From fairly early, I was clear on what, though I wasn’t always clear on how (my idea of how to accomplish my goal at age 6 was to take over the country of Australia – how I thought this would help, I no longer remember). Over time, as I grew up, my ideas crystallized into an actual goal: I would become a successful technology startup founder.

Even so, there was always this discontent looming in my head. Being assigned female at birth, I had heard all the horror stories about the ways the patriarchy made women’s lives hell, in and out of the workplace. And as a person who wanted in particular to make a ton of money, that “77¢ on the dollar” statistic haunted me. If only I’d been born male, I thought, I would be able to make almost 30% more money! For several months in my teens, I seriously considered making a medical transition in order to up my earning potential.

Except, no. Because that isn’t actually how it works.

According to my previous model of the world, the 77¢ thing applied across the board: a woman working any job would make 23% less than a man working the same job, always, in every industry. So a female teacher would make less than a male one, and a female software engineer would also make less than a male one.

But in actuality, the reason that women make on average 23% less than men is that women take jobs that pay on average 23% less. A female software engineer doesn’t get paid less than a male one – in fact, although women do make up a statistical minority of the programming/technology world, those who are a part of that space tend to make more than men, since women tend toward leadership roles. However, most women are not software engineers; most women are teachers, nurses, secretaries, cashiers, and retail workers. And these jobs pay much less than male-dominated jobs.
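
To make that composition effect concrete, here is a toy Python calculation with entirely hypothetical salaries and gender ratios: even if every job paid men and women identically, differences in which jobs each group tends to hold would still show up as a sizeable gap in the raw averages.

    jobs = {
        # job: (average salary, fraction of this job's workforce that is women)
        # All numbers below are made up purely for illustration.
        "software engineer": (100_000, 0.25),
        "teacher":           (55_000, 0.75),
        "nurse":             (65_000, 0.85),
        "retail worker":     (35_000, 0.60),
    }

    # Assume each job employs the same number of people, and that men and women
    # in the same job are paid exactly the same.
    total_women = sum(frac for _, frac in jobs.values())
    total_men = sum(1 - frac for _, frac in jobs.values())

    avg_woman = sum(pay * frac for pay, frac in jobs.values()) / total_women
    avg_man = sum(pay * (1 - frac) for pay, frac in jobs.values()) / total_men

    print(f"average woman's salary: ${avg_woman:,.0f}")
    print(f"average man's salary:   ${avg_man:,.0f}")
    print(f"cents on the dollar:    {100 * avg_woman / avg_man:.0f}")

With these made-up numbers the within-job gap is zero, yet the overall averages still come out to roughly 80¢ on the dollar.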

So, the source of the gender wage gap is not endemic sexism? Well, not quite. There is a reason that women on average choose jobs that pay less.

When my grandmother was young, there was a common saying at her college that the women were only there to get their “Mrs. degree”. After she graduated, employers refused to take her on because she “was just going to get married and leave the workforce”.

When my mother was young, many of the other girls in her Catholic school made excuses for their lack of willingness to attempt difficult intellectual pursuits because they were “just girls”. (Her father never let her make these excuses, which is a decent part of what made my mother how she is.)

When I was young, I attended a series of all-girls STEM bootcamps that were designed to encourage girls to go into technical fields. I never much understood the point, because it had never occurred to me that gender had any relation at all to career choices.

Historically, women have been told that their being female limited their career options, or that certain careers were “less feminine”. Women who were told this type of thing comprise a significant portion of the women alive today. And hence, the 23% wage gap.

If you account for differences in college majors, occupations, working hours, and parental leave, the difference between women and men across the board is more like 3-6%.

But hang on, 3-6% is still significant. Where does that come from?

That small (but present) difference likely arises from a large variety of factors, including some amount of (real!) sex discrimination in the workplace. Still, my best guess on the biggest reason for the remaining gap is this:

Women are not systematically taught career skills.

I know a lot of men who were taught how to do business by their fathers, who own businesses. I can attest to the usefulness of learning business at a young age: I worked in my mom’s businesses most of my youth. It is possible to learn how to negotiate a salary, or interview effectively, or manage a team, without the ready-made mentor of a business-savvy parent, but it’s much more difficult. Most of the women in the workforce today don’t have that advantage.

Just working for a while doesn’t magically bestow upon you the skills you need to get paid what you’re worth. You aren’t going to learn how to interview well just by doing it a whole bunch – at a minimum, to understand the whole process you’ll need to come at it from both ends. And you aren’t going to learn how to be an effective manager without getting advice from someone who is.

The easiest way to learn these things is, obviously, to have a parent who will teach them to you. In the absence of such a parent, many women are left without critical career skills, and make less money as a result.

This means a number of things. First, we as a society need to stop gendering careers. That girls-only STEM program I went to should be abolished, because it should not be a novel concept to anyone that girls can be technicians (though the general concept of allowing young people to shadow technical professionals was a great thing to have; my brother should just have been allowed to attend).

Second, any individual women who are busy making excuses for their poor work ethic and poor salaries, blaming their gender, should get ahold of their bootstraps and start pulling themselves up.

Third, if any parent cannot provide their children with satisfactory training in business, that is a critical failure, and they should do their best to outsource the training they cannot provide themselves (i.e., they should send their children to a program that can provide it).

And fourth, most obviously, sexism in general should be eradicated.

Lastly, we all need accurate information on what the wage gaps endemic to our workplaces are, and what causes them. If we have an inaccurate picture of the reason for such statistics as “for every dollar a man makes, a woman makes 77¢”, we are doomed to waste our efforts on ineffective solutions. And given how bloody slowly change happens in modern politics, inefficiency is not an available option.

What I Would Do With Immortality

I’ve previously discussed that I didn’t like reading fiction growing up, because I knew that if I thought about it too hard, it would break down. In real life, you can do experiments to answer your questions, instead of needing to rely on authority or source material, but in a story, this isn’t true. The real universe runs on quarks; stories run on plots.

The fact that real life is based on universally consistent laws is a nearly endless source of intrigue, entertainment, and general fun, at least for me. Whenever I ask a question about reality, I know that it has an answer, somewhere. If I don’t know it, I can learn about it from someone who does, and if nobody knows it, I can find the answer myself. The existence of a consistent reality that I can do experiments on means that I am not limited in my ability to learn stuff by anything besides my willingness to do so.

The primary reason that I haven’t gone on a quest to rediscover every single insight ever made by the human race – which seems to approximate the Maximum Fun Plan – is that doing so would take orders of magnitude more years than I am presently expected to live, barring major advances in medical science. So, I’ve got to solve the pesky mortality problem first. But once I do… I certainly plan to spend a lot of time rediscovering things.

It might seem a bit odd that I would want to spend decades and centuries rediscovering things that other people already know. A waste of effort, isn’t it? It would be more efficient to ask somebody who knows about the thing already.

More efficient it may be, indeed – which is why I don’t do it right now – but more fun it is not. I’m certain that Isaac Newton had way more fun inventing calculus than I had learning about it in school, and that isn’t just because our modern school system is a train-wreck. The joy of discovering something for myself is substantially greater than that of hearing the solution from somewhere else before I’ve even tried my hand at the problem. (I do prefer that the solution be printed somewhere, especially if the experiments to confirm my solution are difficult to create. It would be nice to hear somebody else’s solution to the AI-box problem, for example.)

Not only is the joy of the knowledge-acquisition inherently less, but the quality of the knowledge itself is also lower. When you discover something for yourself, you don’t have the problem of storing as “knowledge” what is actually just a referentless pointer (i.e., a physicist tells you that “light is waves”, and you store the phrase “light is waves”, but you don’t have the background knowledge to really know what it means, and you couldn’t regenerate the “knowledge” if it were deleted from your brain). You also won’t have the potential pitfall of taking the solution for granted. People often don’t properly contextualize beliefs that they themselves didn’t generate: it feels to them like things which are now understood by Science, like rocks and stars and brains, have always been that way, instead of having been a mystery to the human species for the many millennia until they suddenly weren’t anymore.

There is a more abstract objection to the idea of reinventing old discoveries, coming from a less efficiency-focused mental place. The idea seems to be that, if somebody already knows, the problem is for some reason no longer interesting. It’s the position taken by everyone who is enraptured by the latest breaking scientific controversies, but is not the slightest bit interested in the proved-correct equations of General Relativity.

But in my book, it doesn’t much matter what somebody knows, if I don’t. When I was young, I wanted to know how my body worked. Why did my hands move when I willed them to, but a glass of water wouldn’t slide across the table to me with a similar mental effort? Why did eating a whole bunch of candy make me feel ill, but eating a whole bunch of salad didn’t? I didn’t know, and I wanted to know; if you had told young-me that somebody knew, I would have replied “okay, can they tell me, please?”.

So far as I was, and am, concerned, if somebody else happens to know the answer to a question, that doesn’t cheapen the discovery for me. In fact, it wouldn’t make any sense for me to have a term in my utility function for being the first person in the universe to make a discovery, because for all I know, super-advanced aliens on the other side of the galaxy have already discovered everything I could possibly want to learn. If my choices are between never taking joy in a discovery, because somewhere else, someone else might already know the answer, and having fun, I’ll pick the fun.

It might be impossible for me to do anything about this until I (or somebody else) find a feasible solution to the imminent mortality problem, but once that happens, you can bet you’ll find me in a remote field, trying to find the optimal way to rub two sticks together.