Why I Don’t Care If YOU Want to Cure Mortality

I’m fairly ardent on this blog about my desire to prevent humans from dying involuntarily. But, while I’ve made many attempts to rebut poor arguments and explain the reasons I care about this, I haven’t ever attempted to convince anyone else that they should have the same goal I do.

This isn’t because I have any desire to avoid persuading people; I’ve written rather a lot of persuasive posts on this blog, inviting people to abolish everything from gender to public schools. Instead, it’s because I don’t think everyone should be trying to cure mortality.

I chose the particular goal I did when I was very young, but I’ve stuck with it because it seems like a reasonable first step. If I can extend my life, and the lives of others, then we’ll all have more time to do other things.

But death from senescence is not the only threat, or even the single biggest one (though it is up there). Pretty much everything on the Wikipedia page for Global Catastrophic Risks is a notable candidate for an Important Problem that somebody should be working on.

“Most people all the time, and all people most of the time, should stick to the possible.” Even those of us working on one impossible (read: very difficult or potentially unsolvable) problem cannot work on several at once. Therefore, it is absolutely critical that smart, ambitious people divide themselves among many different impossible problems, working to eliminate or prevent multiple global catastrophic risks at once, so that we can make the world better as efficiently and effectively as possible.

So, if you’re a smart ambitious person not currently working on curing mortality, because you’ve deemed it more important to work on Friendly AI or biotechnology or global warming or FTL travel, then that is exactly how it should be.

“But,” some of you might protest, “I’m not working on any impossible problems. Do you look down on me, or think I should be doing something different?”

Not at all. “Most people all the time, and all people most of the time, should stick to the possible.” The first phrase is just as critical as the second: most people should always do things they know they can do, work on normal goals, and have normal lives. We weirdos working on impossible problems need the world to keep running while we do it. We need accountants and restaurant owners and librarians and politicians and auto mechanics.

There is absolutely no reason that anybody who doesn’t already have some compulsion to work on an impossible problem should do so. If you have an idea for a startup that could change the world, you have no obligation to follow through with it. If you hear about a Global Catastrophic Risk, you have no obligation to do anything about it (other than, perhaps, try to help a little bit however you can). There are those of us who are indifferent to the idea of spending our whole lives on a potentially fruitless endeavor, who are willing to do so in exchange for decreasing the risk of something about which we are genuinely terrified: the serious crippling, or permanent extinction, of the human race.

That’s our own prerogative, not yours. It doesn’t matter to me what you choose to do with your life: that’s dependent on your utility function, not mine. The only thing that matters to me is my own work. If we each focus on our own work, and sphere of influence, that’s enough.

The Incredibles 2, and How the Universe is Allowed to Just Kill You Anyway

The Incredibles 2 is a movie about superheroes, which is the sequel to another movie about superheroes. Both are centrally themed around the idea that “no man is an island” – as in, you aren’t alone, you don’t need to be alone, and in fact, you do better when you let others help you – to the point that “Nomanisan Island” is an actual location in the films.

I watched the first Incredibles movie when I was a child. It was good, but it didn’t leave any lasting impression in my young brain beyond “Elastigirl cool”. I thought this might be because I’d seen it before I was sentient, so I watched it again later. I liked it more than I had when I was young, but it still didn’t hammer its central theme into my brain nearly as effectively as its sequel.

There are three main things that made the sequel much better than the original; at least, three that are particularly poignant to me. First, the stakes are meaningful. At the end of the first movie, if the good guys didn’t win, Syndrome would have “made everybody super, so no one is”, whatever that means. At the end of the sequel, if the good guys didn’t win, a gigantic cruise ship would have crashed into a coastal city, killing hundreds or thousands of people. You can imagine which of these is more emotionally moving to me.

Second: partway through the second act, Helen meets a number of other superheroes who have been in hiding. This introduces an important element the first movie lacks: a supporting cast. Seeing more than six supers fleshes out “superheroes” as a group, and it shows us the sheer number of people whose lives have been negatively impacted by the outlawing of supers.

This, and a few other scenes, make it clear that we, the audience, are expected to care about people besides the main cast. A lot of movies just take collateral damage in stride, telling the audience not to think about the fact that the good guys let hundreds of unnamed pedestrians die when they crash a bad guy’s helicopter into a building. In Incredibles movies, this comes from a plot focus on minimization of collateral damage from superheroes, but it resonates nicely with the humanist in me.

Third: because the Incredibles movies feature a family, critical parts of the plot feature the children as central characters. But while in the first movie they mostly just held their own, in the sequel they were able to independently advance the plot. In fact, there is a point in the climax where all the adults have been hypnotized by the villain, and it’s up to the children – who have so far been bickering, uncoordinated, and generally unqualified to accomplish this necessary task – to save the day.

They succeed, of course, because this is a family-friendly movie. But that sequence of events produced a genuine feeling of uncertainty about the outcome that most movies struggle for. It captured, for me at least, the precise feeling I have when thinking about global catastrophic risks.

The definition of a global catastrophic risk (GCR) is “a hypothetical future event which could damage human well-being on a global scale”. A sub-type of GCR is the existential risk: one which would cause human extinction. These include things like non-value-aligned artificial superintelligence.

There are tangible, obvious, salient stakes when talking about GCRs. We discuss them because we care about our fellow humans, and we want them not to suffer or die. But, at the same time, we are in a horribly unfortunate position in terms of actually preventing any of these risks, because we’re all bickering, uncoordinated, and generally unqualified to accomplish this necessary task.

At the same time that people are still working on developing ASI, still pumping massive amounts of greenhouse gases into the atmosphere, still stockpiling nuclear weapons, the rest of us are all going about our petty lives, not realizing that the actions of these few people might imminently kill us all. We can’t affect the actions of these groups, just like we can’t affect the orbit of the sun – as in, it’s strictly-speaking possible, but extraordinarily difficult.

So we’re stuck between extinction and impossibility. Either accept the greater-than-50% likelihood of a universe tiled with paperclips, or move the sun.

Unlike in The Incredibles movies, real life is not family-friendly. There is no plot armor protecting us from extinction, no reason that the squabbling children should be able to defeat the villain. If we are going to survive, we need to become much better.

What “better” looks like depends on the risk, and there are a lot of them. We all should educate ourselves on what GCRs are, what they look like, how bad each one could be, and what preventative measures could be taken to make us safer. In instances where the powers that be are likely to listen to us, we should rally, and scream loudly enough that they’re forced to listen. And, lastly, in the specific situations where we ourselves are the powers that be – if we are AI programmers or molecular nanotechnology developers or biotechnologists – we need to think long and hard about our decisions.

Do not underestimate the likelihood of a future where someone says “oops”, and seven billion bodies hit the ground.

A Letter to my Cousin Rose

I wrote this for unrelated reasons, but I’m posting it here as an update to “I am a 4-Year College Opt-Out. Here’s Why.” It’s not necessary for you to read the original post in order to understand this one; in fact, I’ve restated most of the original post here. Still, I’m leaving the original post alone, because I believe it’s important for people to see my progression over time.


Dear Rose,

It’s been a long time since we’ve seen each other; my memory of you is frozen at the age of 5. But I know you’re 14 now and starting high school. Your mother told me that you were considering your future: if and where you’ll go to college, what you’ll do for a career, and all those major life-determining questions we’re expected to answer in our adolescence. These answers are more complex and nuanced than most people realize, and since I’m closer to you in age than your parents are, I thought I would share my experiences in this area with you.

For me, awareness of college started the summer I turned 8. My siblings were debating with my parents about where to go out to dinner, and as with most families trying to decide on things, we voted on it. The kids initially had the majority for one option, but then the parents threw a wrench into the rules: “adults get five votes.”

I didn’t mind, but I was curious about the reasoning. “So we get five votes when we turn 13?” I asked, being a Jew who becomes culturally an adult at 13. (By the way, Rose: congratulations on your bat mitzvah; I’m sorry I couldn’t be there!) “No,” said my mom, “it would be silly if you could age into it. For our purposes, an adult is someone who’s graduated from college.”

From this and other similar conversations, I decided I was going to college. But when a child decides to pursue something, it’s not because they’ve done a cost-benefit analysis and found it’s the logical conclusion based on their knowledge and past experience. A child thinks something is worth pursuing if it sounds impressive, fun, or cool.

In and of itself, this wasn’t a problem. Children choose to pursue plenty of silly ideas: when I was 8 I also wanted to make a career out of inventing a time machine. 

But then society perpetuated the problem by leading me away from ever questioning my belief. “Of course college is the right choice for you,” spoke the voice of the populace. “You’re smart, capable, and a hard worker. And you want a good job, right? You need college to get a good job.” I didn’t question these comments: they came from people I knew and trusted, people I believed understood the world better than I did.

So as I was entering my senior year of high school, I was just assuming I would go to college. That’s what you do, right? But despite this, I had gotten really sick of taking classes. The things I worked on in my courses seldom related to the real world, and if they did at all, they reflected real-life work through a funhouse mirror. Due to dual-enrollment, I was close to graduating high school with my Associate’s in computer science. At last, I thought, I could start doing meaningful work and creating value for real people! Wait, no, I couldn’t. I had to go to college. Didn’t I?

Finally, I realized that I had pursued college with partially mistaken and mostly absent reasoning. This was not the right way to go about making a major life choice. When I began genuinely considering my options with a fully sentient brain, I came to the conclusion that I did not have nearly enough evidence on which to base a decision that would cost me years, and tens to hundreds of thousands of dollars, if I chose wrong.

This terrified me, but there was no quick way to remedy it. I didn’t have enough data to decide whether or not college was the right choice. The only way to gather that data would be to get a job in or near the area in which I wanted to work, figure out what types of degrees the people working in my desired field had, and then go to a 4-year college or not based on that.

So, at the end of high school, that’s exactly what I did. I went through a selective program that matches young people with startups, and chose to move to San Francisco and work as a digital marketing consultant. 

While living in SF, I met a lot of technology professionals: programmers, business analysts, technical writers. Before I moved, I’d never thought about the differences between these professions, nor had I made any effort to choose one. Now, because I understood what they were and knew people who did them, I could find out which I would be good at, which I would like, and which paid the best; the combination of which I used to choose a target career. (I decided on business systems analysis.)

Now that I had an idea of what career I wanted, I could work backwards. Do people working in that career have college degrees? What types of degrees do the best new hires in similar roles at their companies have? If they have degrees, where are they from, and what are their majors?

Based on all the data I gathered, pure programmers often didn’t have degrees at all, or had Associate’s or Bachelor’s degrees in unrelated things from schools I’d never heard of. Pure writers were the same way. Consultants and analysts were much more likely to have Bachelor’s degrees. Finally, data scientists, especially those in research-intensive roles, often had Master’s degrees or PhDs.

From a year of living as a self-supporting, independent adult and working full time for a technology company, I decided, with input from friends and associates, that the best fit for me was to be a business systems analyst, most of whom have Bachelor’s degrees. Therefore, I decided to get a Bachelor’s degree in Computer Information Systems, which, as you may have heard, I’m now working on.

I have three points of advice for you, Rose, from my experience.

First, if you haven’t already looked into dual-enrollment during high school, I recommend it. It’s much more cost-effective in terms of both time and money to get as much of your college work done as possible while you’re still in high school, and I know you’re smart and hardworking enough to do it.

Second, and perhaps most importantly, don’t go to college just because everyone does it. Even if you get a full-ride scholarship, it will still cost you 3-4 years of your life which might be better spent working. 

Third, you may have heard from various adults that you start by choosing what you want to study, then where you want to study it, then what career to pursue. That was the way our parents approached college, but it’s backwards. Begin by researching what career you want, then use that to determine whether or not you need a college degree, and if so, which type. From there, you can choose the major that best suits the career you’ve chosen, and use that to decide between colleges.

I know this has been a long letter and is probably a lot to absorb, but I hope it has been useful. If you have any questions, or just want to chat – I would love to catch up – just let me know.

Love,
Your cousin,
Jenya

Subway Advertising and the Illusion of Choice

I lived in San Francisco for almost exactly a year. I didn’t own a car: I took the subway (the Bay Area Rapid Transit, or BART).

Going from Berkeley, there were only two available trains, and I could time my arrival at the station to catch the one that would take me to downtown SF, where my office was. But coming back from SF to Berkeley in the evenings, there were a lot more trains, and timing my arrival was effectively impossible. The net result was that I spent an awful lot of time standing around at San Francisco BART stops.

At these stations, they have advertisements plastered on the walls. So, as you stand behind the bumpy yellow line and periodically glance at the LED displays overhead, you’re looking in the general direction of the ads.

Often, I would catch myself reading and rereading the same ads, because they weren’t changed very often. But, strangely, I wasn’t getting frustrated.

Historically, on most platforms that sold advertising space, the ads got in the way of the content. The ads before YouTube videos are still a prime example: you have to sit there and endure at least 5 seconds of some advertiser yammering before you can get on with the content you actually want to watch.

But in the San Francisco subway, I didn’t get annoyed at the ads. And I think the sole reason is, they weren’t forced on me. They didn’t get in the way of any content I was trying to view, they were just available as things-to-look-at. Not that there was anything else in the things-to-look-at category, but I could still choose to stare at the floor or a section of wall without ads on it, if I wanted to. The fact that the ads were the most visually interesting things in the vicinity didn’t produce any of the same anger I felt at the ads before YouTube videos.

This is a marketing tactic I call the illusion of choice. The decision between “look at an ad” and “look at a blank wall” isn’t really a decision, but it feels that way to you. Your eyes moved to the ad all on their own. The ad wasn’t even moving or flashing or anything, it was just sitting there. So, if you end up rereading an ad over and over, the advertiser’s message is still getting into your brain, but you aren’t annoyed about it, because it felt like your own choice. Contrast this, again, with YouTube ads: getting the same advertisement over and over again when it’s forced on you is tedious, boring, and infuriating.

More and more marketers are learning that providing prospects with the illusion of choice is the key element that transforms annoying ads into tolerable ones. And, over time through repeated exposure, anything that is tolerable becomes likable via the Mere-Exposure Effect.

As people existing in an advertising-saturated space, we need to keep this tactic in mind. Just because it felt like your choice to look at a subway ad, or watch a sponsored skit from your favorite content creator, doesn’t mean it was. And, once something has gotten into your brain as a non-offensive stimulus, repetition will push your opinion of it higher and higher until the company paying for the advertising gets your money.

Before buying something, always consider where the association between the thing and the company you’re planning to buy from originated. Because, if it came from a subway ad, it was probably the illusion of choice.

A Novel Novel

I feel a little bit late to the party in terms of book-writing. Everyone else in my family has either written, or co-authored, at least one book. My father wrote a book on his experience with competitive swimming. My sister co-authored several books with my mother in a series that teaches children to improve their handwriting using quotes from influential historical figures. My mother has also independently authored a series of manuals for several versions of Microsoft Dynamics Navision. Heck, even my youngest sister is a published ghostwriter.

I’ve been publishing my writing, including these posts, for about ten years, but all of it has been online. I’ve never experienced the feeling of holding in my hands a book I wrote.

That’s about to change.

The content of my new book is something I didn’t think I would ever write about, though it had been sitting in the back of my mind for years. This is because, until a few weeks ago, I didn’t think there would be a market for it.

I was born lacking the machinery that lets people learn social skills naturally. (In short, I’m autistic.) But I was determined to prevent that fact from hindering me, so I spent a decade of my life through my teens working tirelessly on improving my social abilities, until I was not only adequate, but significantly above average. I’ve spoken with Congressional lobbyists and spokespeople, received awards from the Toastmasters Club for public speaking, and worked as a salesperson for a year.

Back then, I wished there were a book of all the social rules, so that I could just pick it up and learn. It would have made my life so much easier: most of the non-autistic people I talked to couldn’t actually explain what they were doing in social situations, or why what I was doing was so wrong. There was no such book, so I learned it all myself. Logged it all in my “mental Excel spreadsheet”, as my sister (an accountant) calls it. When I got nearer to the end of my journey, I considered actually writing up all this declarative knowledge about social rules that I’d acquired, but I decided against it: I’m not an expert, not a psychologist, I’m only a lowly autistic person, how could I know enough to write something useful?

(I should have recognized these arguments as what they are – Resistance – but I didn’t, because I hadn’t read The War of Art yet.)

Until recently, all this was just a kooky detail about how I got where I am today. I told friends about it, in conversations about how I see the world, but I didn’t explain it to anyone I didn’t know very well, and I certainly didn’t consider writing any books about it. I didn’t think anybody would bother buying such a book.

But then I moved to San Francisco, where I met dozens of other autistic people. I learned that the wish I’d had as a young person, that somebody would just tell me The Rules, wasn’t unusual at all. Before I left for SF, I figured on some level that people would teach themselves the rules to whatever level they felt was necessary, and so even if I decided to write The Rules, nobody would read it. But in making that assessment, I underestimated the amount I’d been helped by factors outside my control, like having parents who are well-versed in developmental psychology. I failed to consider the idea that maybe other autistic people wouldn’t have spent the time I had on gathering all the data, interpreting and analyzing it, and condensing years of experience in social environments into cohesive explanations of social rules. I failed to consider that they might not know how, or might not know it was even possible.

The thing that made me reconsider was a blog, Autistic Not Weird, run by Chris Bonnello. His posts about the struggles faced by autistic adults made me realize that a book on The Rules wouldn’t be redundant; instead, it could be the first of its kind in a niche market with huge demand and no supply.

I wrote the first draft in 8 days, then tossed it to a group of my friends, who made a slew of edits that added nearly fifteen thousand words. (A good thing, because I felt the first draft was too short at 25k words.) As they were making edits, I did an oil painting for the front cover, plugged it into Adobe Illustrator as soon as it was dry enough to photograph, and designed the remainder of the cover digitally, adding a back-cover blurb and an “About the Author” section.

In all, this first version was finished in two weeks. I worked on this short time frame because I had six classes starting at the end of those two weeks, and I knew I would have much less time to write while I was busy taking them.

When I started writing this book, I had considered the idea of getting it from a blank page to publication within the two weeks I had available. I decided, ultimately, that waiting six months to finish the book would improve the end product. The most creative people are not those who precrastinate (rush to complete a task as soon as it’s available), nor those who procrastinate (don’t even start a task until the last minute). There is a sweet spot for creativity where you “procrastinate” just a little bit.

So, now a week into my classes, I’m allowing the passage of time to help my creative process generate new ideas. I’m keeping a list of everything I want to add, and I’ve been adding them gradually as I’ve had time.

I’m setting my goal for publication for December of this year (2020). It feels too far away, like a more reasonable time frame would be two months, but I need to remind myself that the quality of the book is not going to be judged by how long it took me to write it. And, the difference between two weeks and six months of writing time is not going to matter to the people I’m trying to help.

I hope to update this post in December, when I’ve successfully published the book. Here’s to all the work it has taken, and will take, to make that happen!

Motivation Does Not Come from Mortality

I’ve often heard the hypothesis that motivation originates from mortality. That is, if we didn’t know that our time was limited, we would have no reason to do anything.

As a person in ardent pursuit of immortality, I clearly don’t hold this worldview. If I found out today that my immortality was guaranteed, I would still be publishing a book. Not because I want to create something that will outlive me, obviously. If I were immortal, anything I ever created would be guaranteed to eventually fall into obscurity, probably preceded by a lot of parody, misquoting, and bastardization.

But that has never been my motivation to create. I create because I think it will be useful to people, even if only temporarily. I often create even if the only person it will help is me. The book I’m working on is not some moonshot at a legacy, it’s an attempt to help people. If it only succeeds at that goal for a few years, decades, centuries, and then stops being useful, then I will be glad it succeeded at all.

Actually, if I knew I would never die, all the more reason to create! I currently spend much less time on creative pursuits (writing, painting, music, etc.) than I would like, because I have higher-priority tasks, mostly centered around trying not to die (i.e., eating, exercising, curing mortality). But if I didn’t have to worry about cramming everything I might want to do into a mere eighty-year lifespan, I could spend so much more time on art.

“You might say that now,” says the cynic, “but just you wait; you’ll get tired of living eventually.”

I’ve heard this said a lot, so let me provide a counterpoint. For several years, I felt that I had already experienced the full spectrum of human emotion, and had nothing else left. I’d seen stunning beauty: the Great Wall of China, Niagara Falls, my girlfriend’s eyes. I’d seen despair and desolation: the aftermaths of hurricanes, the trauma of child abuse, the city of Detroit. And, I thought, no matter what other events might trigger the same feelings, it would be the same old feelings, on endless loop in various combinations for the rest of my life. The utter pointlessness of it all made me think: if I was already so tired after sixteen years, then why linger another sixty?

This is typically called being suicidal, and most people think it’s bad. So my question to the cynic is, “Why is it bad to want to die at the age of sixteen, but it’s okay at the age of six hundred?”

(If the cynic replies that it isn’t bad to die at sixteen, then I have nothing more to say at the moment: we have a difference of opinion, but there is no logical inconsistency in their position.)

Outside the context of fiction, where an immortal person can become alone and isolated after everyone they ever cared about has died, there is no inherent difference between a mortal person and an immortal one. Because, in real life, where immortality is created by science, everyone who wants to can become immortal.

The only reason real-life immortality might become bad – in and of itself, leaving aside any potential negative ramifications of particular implementations – is if living itself becomes bad after some time. The question of “Is immortality worthwhile?” becomes, “Is the day-to-day experience of living worth it, or not?”

My answering “no” was what made me suicidal in the past. The concept of a bucket list had never appealed to me, because the actual day-to-day experience of my life was not a highlight reel. Your actual life is not made up of vacations and magical evenings and jaw-dropping scenery; your actual life is whatever you do today. I couldn’t stay alive because one day, eventually, I wanted to see X or do Y. Enduring a whole ocean of boredom in order to get to a little island of potential happiness didn’t seem like a worthwhile trade.

So, if it’s worth being alive, then it’s worth it even though there’s no finite list of items to tick off. It has to be the everyday mundane experience of living and loving and continuing to exist that’s precious, not any one specific experience or set of experiences. It follows that, if this is the case, I would want to keep on doing it forever, because I would never run out of living to do. There will always be new books to read and conversations to have and people to meet and things to do.

In the end, this was what saved me. I realized that I could make each day worthwhile, and enjoy it just as much if it were my first day of eternity or my last day on earth.

And so, if I found out today that I would get to be immortal, my motivation would not all evaporate. As a matter of fact, not much would change. Even if I stopped pursuing immortality, I would start pursuing something else. Probably, I would even maintain the basic life-path of “obtain as much money and power as possible in order to improve the state of the world”. And further, with my increased lifespan, I would have a lot more time to create and discover.

My Problem with “Women”

I have no trouble with being a woman, any more than I have trouble with being a human in general. I do dislike some parts of that, to be fair: continuous sensory experiences I can’t turn off, a body that needs ongoing maintenance, and the nonexistence of any owner’s manual for the whole system. But these would be the same if I were a man instead, and so short of being uploaded, there’s not a lot I can do about it.

However, being called a “woman” – being referred to as such in any social context – is uncomfortable. I’m aware that the biological definition of the word “woman” does apply to me: so far as I know, I can bear children. But if all people meant by the word “woman” was “an adult human of the type which ovulates and can bear children”, I wouldn’t be uncomfortable with being called that. (I also have no clue why anyone would bother, outside the context of asking to have children with me.)

Humans love to sneak things into words that are not actually part of the best definition. We basically never use “woman” to mean what it should actually mean: a set of commonly co-occurring biological traits. Instead, it conveys a whole host of mostly inaccurate, often outdated, sometimes demeaning stereotypes that have been thrust upon us childbearing-capable humans by various cultures throughout history.

What about ovulating produces poor driving skills? What about a uterus produces a lack of self-confidence? Does a vagina induce a desire to cook, clean, and take care of children? Does chromosome set XX create gracefulness, thinness, or a preference for dresses?

But, of course, these are only stereotypes. While they are annoying, they alone shouldn’t make me uncomfortable with being called a “woman” just because some idiot thinks women can’t drive.

The problem is, it isn’t only the stereotypes. As social psychologists learn to better isolate relevant variables, studies have been published about the differences between the sexes. Evidently, women are more likely to prioritize work flexibility and job stability over earnings growth. They prefer socially interesting, rather than mechanically interesting, jobs. They’re physically weaker than men.

The inherent trouble comes from segmenting an entire population into two groups and letting the definitions of those groups be anything more complex than a single trait. If you assume all women are graceful housewives with poor driving skills and self-confidence issues, or if you assume all women are family-focused social butterflies with a chronic avoidance of lifting anything heavy, then you’ll be wrong no matter whether your sources are stereotypes or science.

Nobody is an average. There is no such thing as “the average woman” or “the average man”. These aren’t people; they don’t exist. When a social psychologist says “the average woman prefers work flexibility to higher salary, prefers socially interesting over mechanically interesting work, and is weaker than the average man”, they aren’t saying that every woman is like this, or that there even is a single woman who is all these things. They are merely pointing out a pattern.

The trouble with pointing out patterns is that, in any sufficiently large population, there are always patterns. It is always possible to find traits that are more likely to occur in one group than another, even with statistical significance. If I analyzed a large enough population, I’m sure I could find statistically significant correlations between someone’s personality and their hair color, skin tone, or preference for window coverings.
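To make that concrete, here is a minimal sketch in Python of why this happens – a toy simulation, not real data. The survey size, the number of traits, and the threshold are all arbitrary assumptions on my part; the only point is that if you test enough pairs of completely unrelated “traits” at the usual p < 0.05 cutoff, roughly one pair in twenty will come out “significant” by chance alone.

```python
# Toy demonstration: purely random "traits" still produce "statistically
# significant" correlations if you run enough tests. All numbers are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_people, n_traits = 2000, 40                  # hypothetical survey: 2000 people, 40 unrelated traits
data = rng.normal(size=(n_people, n_traits))   # every trait drawn independently, so no real patterns exist

significant = 0
tests = 0
for i in range(n_traits):
    for j in range(i + 1, n_traits):
        r, p = stats.pearsonr(data[:, i], data[:, j])
        tests += 1
        if p < 0.05:
            significant += 1

# With a 5% threshold, roughly 5% of the 780 trait pairs come out "significant"
# even though none of them are actually related.
print(f"{significant} of {tests} trait pairs are 'significant' at p < 0.05")
```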

And even if the patterns do reflect real differences, they still don’t tell you anything about any specific individual. There are statistical differences in lifespan, for example – women tend to live longer than men – but that doesn’t mean that any given woman will certainly outlive any same-aged man.

The implication of all this pattern-finding, though, is that people who fall outside the pattern often don’t get to do outside-the-pattern things they enjoy. For example, I like helping people by doing physically-demanding tasks for them (lifting or carrying heavy objects, opening tight jar lids, etc.), but nobody ever asks me to do this because they don’t think of me as the sort of person who would enjoy that type of thing.

And then everyone – especially intellectual and “progressive” folks – decides to bring my womanhood into every conversation. I can’t just be strong, I have to be a “strong woman”. I can’t just be an engineer, I have to be a “woman engineer”. I can’t just be primarily attracted to women, I have to be a “lesbian”.

Keep in mind: I wouldn’t feel any less strongly about any of this if I had been born a man. I could write a whole list of arguments rebutting the stereotypes and tendencies associated with men that I don’t follow, if those stereotypes and tendencies had been the ones to shape my social life.

None of this has anything to do with womanhood; it has to do with gender.

I can understand why gender would be a useful thing to consider, in the primitive societies where language was being developed. I can understand why it would continue to be useful throughout the majority of human history to have gendered words like “woman” and “man” used more frequently than “person”: when your gender determines everything from your clothing choices to your career options, differentiating between genders while talking is sensible, perhaps even necessary. But in modern society, where we’re trying to do away with gender roles, I cannot imagine a reason to continue using gendered words.

It isn’t only the fact that everyone’s gender doesn’t necessarily match their birth sex, although this is true. It isn’t only the fact that not everyone’s gender fits into the Western binary system, although this is true also. It isn’t even about the fact that, if you assume someone’s gender and you’re wrong, it can be anywhere from a minor inconvenience to an immensely traumatizing experience, and you have no idea which.

It’s about the fact that assuming people’s gender based on their gender presentation actively entrenches gender roles and stereotypes.

If, in trying to accept transgender people, we go from an attitude of “anyone with breasts is a woman and should be called by she/her pronouns” to one of “anyone wearing stereotypically feminine clothing is a woman and should be called by she/her pronouns”, that isn’t progress. It’s moving backwards. Because, in the act of moving from the one to the other, we get the worst possible mix of both. In order to “pass” as a woman, a trans woman needs to embrace every bit of stereotypical femininity she can. She puts on makeup. She shaves her legs. She paints her nails. Even her actions aren’t free from scrutiny: if she doesn’t keep her knees together while sitting down, speak in a high-pitched quiet vocal register, and apologize constantly, how will people know she’s a woman?

The appropriate thing to do here is definitely not to get upset with transgender people. Passing is necessary for their survival in a cisgender-dominated world: it isn’t their fault that this necessity perpetuates gender roles. The best thing to do, instead, is to abolish gendered words altogether.

This might seem daunting, but you can implement it yourself easily. Just refer to anybody whose gender you don’t know – including hypothetical people and strangers – with they/them pronouns. When describing someone, say “person” instead of “woman” or “man”.

Of course, in the modern world, some conversations are explicitly about gender, in which case, gendered words are inevitable. You can’t restate phrases like “women tend to make less money than men” into gender-neutral words and have them still make sense. But the majority of conversations, containing phrases like “I saw a lady at the bus stop today, she was wearing this gorgeous purple suit”, or “when I was driving home yesterday this old guy was tailgating me the whole time”, can be restated with gender-neutral words easily.

In conclusion: I don’t want to be called a woman because gendered words carry uncomfortable stereotypes and then perpetuate them by their continued use. Instead, I prefer to be referred to with they/them pronouns and gender-neutral words.

In the interest of moving society past the pervasiveness of gender, I invite you to consider the same.

Driving Barefoot

A personal and biased account of the year 2018, written at the end of that year. I have no grand reasons for failing to publish it here until now.


As an American, I legally became a person at the age of eighteen. As a person born in the year 2000, that happened this year. As a result, I figure, this is the first year that counts. It may only make sense to start at the beginning, but I find the middle, while also more nonsensical, to be more fun.

There is a thing that seems strange to me, although there’s no other way that things could be. A human life is so long, and lived in such detail, but people don’t care about the overwhelming majority of it. I could fill twenty novels with my experiences so far, but nobody would read them. There are things we do that aren’t interesting to anyone but ourselves. We can’t make our lives growing up into interesting anecdotes unless something weird happened “one day”. But that doesn’t mean that our lives weren’t important to us. It means they aren’t important to anyone but us.

That’s why younger people roll their eyes and sigh when older people reminisce about where they grew up, where they went to school, where they blah blah blah. We haven’t yet had occasion to experience our daily lives changing radically. And we will understand, at some point, when that does happen. Then somebody else is going to be rolling their eyes at us.

But there’s another point. Even supposing I wanted to read about someone’s whole life, end-to-end, it would take my whole life, end-to-end, to do so. It’s not just because we don’t care, though of necessity we don’t: we are incapable of caring. In the most literal sense, I only have time to care about snippets of other peoples’ lives, because it’s the only way that I, too, get to live.

The only person who can care about a life is the person who lived it. It feels like a bit of a gyp, but there it is.

My grandmother broke both her hips this year. The first one in January, when she slipped and fell on ice while walking home from her neighbor’s house, where she was delivering a homemade cake for New Year’s. The second one in June, when in a less stereotypically grandmotherly way, she tripped on a sidewalk crack walking to her car. Both occasions took up several straight weeks of my life after they happened—staying with her overnight in hospitals, sitting through her combination of childlike weeping and emotionally manipulative complaining, staying with her every third day at the rehabilitation center, and all of that.

I wish I could tell you that I did it out of love, or something. And in some twisted way, I suppose I did – I did it out of love for my mother, so I could spare her the burden of having to deal with her mother. My mother is a kind, if occasionally frustrating, fully capable adult. My grandmother is an overgrown baby who needs to be cared for and coddled twenty-four-seven, or else, like a younger sibling who knows it will get you in trouble, she’ll throw a tantrum. Except, instead of getting my mother in trouble, my grandmother gets her into debt. It’s like if you took that troublesome younger sibling and gave them your credit card.

I think grandmothers are more valuable to children, anyways. I recall when I was a child and, in exchange for the simple busywork of trimming her hedges and crushing some bottles and cans to take to the recycling place, my siblings and I got to play badminton in her backyard (without the net, because what’s the fun in a game if you can lose?) and raid her pantry for delicious snacks. We didn’t know that anything was wrong with her spending habits back then; we hardly knew what spending habits were. It’s easy to enjoy going out to dinner every day if you don’t understand that it costs money.

And there’s a lot more a grandmother can offer a child, too. My grandmother had nearly no experience in the real world, having been cared for by her parents till college, then by her husband until he left her, then by her then-fourteen-year-old daughter who faked her age and went on to work for a living on top of putting herself through college. But that kind of thing doesn’t matter to children: you have something to offer a child if you have good food and/or fun games. On the other hand, in order to have something to offer an adult, you need to have some knowledge or experience that they find valuable.

After my grandmother moved into a nursing home, where there are no icy or uneven sidewalks, I was tasked with the job of cleaning out her house so that we could rent it out. I’d like to say I was all sentimental about this, but in fact it was just… weird. You get used to a specific set of circumstances that accompany a place, and whenever you go to that place, you expect those circumstances to happen. And when they don’t, it feels weird.

I drive into my grandmother’s driveway and the lights are on; I walk up to the back door and I shove it open, since it tends to stick. The room is bright and warm, and a covered pan smells of my grandmother’s patented delicious concoction of orange juice and baby carrots. My grandmother is sitting on the heavily padded couch, tapping away at her computer that sits on a plastic tray table. The TV is quietly playing a news channel.

I open the door, for real this time. The lights are off. The room is nearly as freezing as the winter air outside. The living room is bereft of furniture and everything is in disarray. I wander around the vacant house, trying to find the thermostat so I can turn the heat on. Most of the light switches don’t work, so I open a handful of blinds so I can see around. The light in the spare room works.

Laughter spills from the room. My two sisters are piled into a small loveseat, sitting slightly on top of each other. My brother is slowly walking backwards on the treadmill. My grandma is standing beside a closet, asking everyone if they’d like to take home some of the clothes that she doesn’t wear anymore. There is a coat that used to be my grandfather’s that looks like it might fit my brother, so he stops walking backwards and glides to the end of the treadmill, jumping off and walking over to try the coat on.

I blink. The only thing that remains of the scene is the inviting orange light from the overhead lamp. It looks out of place in a frigid room filled with boxes. I turn away and walk back down the hall, still looking for the thermostat. The bedroom not only has no working light switch, it looks like it never did. There is a large empty space where the bed used to sit. I turn away and walk into the bathroom, flipping the light switch on.

In the living room, one of my sisters is sitting at the piano, tinkling out a melody that she’s trying to make sound creepy. My other sister is sitting in the comfy recliner by the couch, the coveted seat that she monopolized the instant we all walked in the door, telling her that she’s playing the notes wrong. My brother is contentedly munching out of grandma’s candy bowl as he waits for the rest of us to get ready. He decided to dress up as a Viking this year, so his entire costume is a heavy Icelandic fur-trimmed robe. I, on the other hand, decided to be a cat, so I needed to head to the bathroom to do my eyeliner whiskers.

The bathroom cabinets are open and empty; I cleaned out all my grandmother’s fancy collector’s perfumes the previous week. The sink is bare, no green toothbrush or little mouthwash cups. There is no silver hair in the bathtub drain.

I found the thermostat as I walked back to the living room. “Replace battery”, it said. I took the batteries out to see what kind they were. Triple-A. I’d have to make sure I brought some when I came back next. I shoved my hands into the pockets of my overcoat and walked over to one of the only remaining pieces of furniture in the living room: one of those old writing desks that folds down and reveals little cubbies inside. Next to it is a matching breakfront, empty of the hundreds of VCR tapes it used to contain. Two weeks ago, I’d boxed those all up and donated them to Goodwill; the employees there informed me that they were going to stop taking them soon, since people rarely bought them.

I opened up the writing desk and brought over the piano bench so I’d have something to sit on while I sorted through the piles and piles of paperwork in the desk. I put on “Bright and Clear” by Sou, since it seemed like it had the right slightly melancholy feel to it. At some point, I took a break and hit a McDonald’s.

After I’d cleaned out the desk, sorted everything, and stuck it all in my car, I was too damn cold to do anything else. I decided to drive home. I’d left my car on while I was packing stuff into it, so it was warm, but my clothes were cold. I tossed off my gloves and opened my coat, turning up the fan. As I drove down the main drag in my grandmother’s old neighborhood, I realized my feet were still cold. I kicked off my boots.

The pedals weren’t cold or wet; they were dry and warm from all the air the heater had been blowing on them. Unbeknownst to my previously-shod feet, the gas pedal was skinnier than the brake, and was rooted to the floor. My first and fifth toes wrapped a little bit around the sides of it. The brake pedal, on the other hand, was a bit higher up, attached to the roof of the foot-compartment. It was wider and more deeply grooved, and it moved along the curve of a clock pendulum. With shoes on, I’d operated both pedals exactly the same way, but I came to realize that they were very different. It felt strangely intimate.

Up until that moment, I’d hated that car. It wasn’t my first car, and I didn’t buy it. Legally, it wasn’t even in my name yet. It reeked of cigarette smoke from the two months that some of the roofers that work for my mom had used it, after my grandmother was in the nursing home and couldn’t drive anymore, but before my sister had wrecked my old car and left me car-less. It accelerated like an itchy trigger finger and braked the same way. Nearly every surface was caked in dirt and cigarette ashes and the ones that weren’t were full of pill bottles.

My old car had the most comfortable seats you’d ever sit in. It had the most perfect heat and air conditioning: the hottest setting was just hot enough to be toasty but not so hot that you felt you’d burn your face off, and the coolest setting made me feel like I was in an ice rink, not like I was about to get frostbite. The space right beside the emergency brake lever was a perfect place to put my wallet, since it was wide and flat enough. I kept it meticulously clean for the most part, as had the guy I’d bought it from, so the worst stain it had was from an energy drink that had spilled all over the cupholders when I’d had to brake from 70 to 20 on a highway after some idiot pulled out in front of me. All the windows except the front windshield were tinted drug-dealer-style, so that I could confidently do whatever the fuck I wanted in a parking lot and nobody would be able to see. The trunk had a ton of space: I fit a whole gigantic lawnmower in there once. On the whole, I knew that car up and down, front and back, and it knew me.

Then my sister, who I love more than life itself, wrecked it.

This was the sister who I’d called my soulmate more than once, who I drove to South Dakota and back with (that’s four days total of driving, seventeen hours each way) and somehow managed to never get bored with talking to, who I’d talked to about everything and nothing, who had read every single one of my shitty attempts at fiction writing and managed not to laugh except at the jokes, who I have never had a bad argument with, who frequently consoled me after I’d had bad arguments with other people, who I would gladly follow to the ends of the earth.

This was my first car, the car that I’d bought with every penny of my life savings, that carried my sister and me to South Dakota and back, that carried us on so many other trips, that I drove back and forth to my sales job that I couldn’t have gotten without it, that had heard me reciting every sales pitch, mumbling in every foreign language, asking for every takeout order, conversing with every friend. That I drove to the ends of the earth, and back again.

I did what I had to do.

I consoled my sister. I told her not to beat herself up about it. I told her that the only thing she should take away from this is that she needs to become a safer driver. I didn’t shout and I didn’t scream and I didn’t cry. Not in that moment, anyways, though I certainly did those things afterwards, when she was out of earshot. I spoke in a calm, collected, grown-up voice.

I didn’t do this because I realized some grand moral truth about how people are more important than objects or whatever. Sometimes, objects are people, or at least, they are memories, which is as close to a person as you’re going to get. That car was two years of my life. I’m not getting them back.

No, I did it because I logically decided that the best course of action was going to be the one that made my sister into a better person, and that allowed her to move on from this in a positive way. For the duration of that car ride, for the duration of the next two weeks when I interacted with her, I decided that my grief wasn’t important. I decided that while those two years mattered to me, while I mattered to me, my sister mattered more.

Did I mention I love her more than life itself?

Until that day, as I drove back from working on my grandmother’s old house, I hated that new car. But as I merged onto the highway, felt myself accelerating as I eased the gas pedal down with my toes, I figured, maybe it’s okay. Maybe I don’t mind the uncomfortable seats and the bad smell and the heating that makes me feel like I’m standing in front of an oven.

The light snow that fell outside turned into a miniature blizzard, cutting my visibility in half. Seeing the sea of red light ahead of me, I braked, feeling the grooves of the pedal under the ball of my foot. I put on a piano cover of Toto’s Africa.

Maybe I don’t mind the outlet that doesn’t work and the cigarette ashes that get all over my tubes of lip balm when I put them in the compartments. The past version of me that had that car is gone. The hypothetical future version of me, the version I had always assumed I would become, where I continued to have that car, is gone. But maybe the current version of me, where I have this new car, isn’t so bad.

The Myth of the 100-Hour Work Week

“Startup founders work 100-hour weeks.” I forget when I first heard this, but I believe it was around the same time I heard about technology startups.

At the time, I was very impressed with the tremendous passion and work ethic of these founders, who could spend 6+ days out of every week doing nothing but eating, sleeping, and working. And that really was exactly what I thought they were doing: getting 100 full hours, every week, of laser-focused productive work time, without taking breaks to chat about non-work things, or stare into space letting their minds drift, or take walks, or exercise, or anything.

Recently (for some complicated reasons I’ll post later), I’ve decided to impose on myself a work week containing as many productive hours as possible. In making my schedule, I took into account all the psychology of learning that I had researched over years of interest in such things: efficient thinking happens on 8 hours of sleep, taking frequent short breaks helps brains remember things by dint of primacy and recency effects, human circadian rhythms are diurnal and therefore napping in the “afternoon slump” is more effective than trying to work through that time, etc. When I was done filling up all my time with little blue boxes in Google Calendar, I tallied up all my productive working time and found that I had only 52 hours.

Now, to be clear, my schedule did not contain quite as much work as it theoretically could have. I had allotted myself an hour to make lunch, and two hours to exercise in the morning, and an hour and a half to socialize in the evenings. The purpose was to make the plan sustainable, so that executing against it wouldn’t burn me out.

But even if I didn’t care about that, I didn’t see how I could have gotten that productive-hours-per-week number up to 100. It just seemed inefficient, based on everything I had read about human brains, for someone to work nose-to-the-grindstone at a task for that many hours. Taking breaks to exercise and eat healthy food and sleep eight hours a night would make their thinking more efficient than just working longer hours.

I had a hypothesis: people might say they worked a hundred hours a week, but perhaps they only spent 60-80 of those hours actually being productive. When I’ve worked full-time at an office, having to look productive regardless of whether I actually was, inefficient use of work-time (like chatting with coworkers), and the corporate busywork that comes with a salaried job all took up significant time. This is a very common situation: statistically, the average office worker gets only 2 hours and 53 minutes of productive work done in an 8-hour day.

But I wasn’t sure about this, and not having worked in a technology startup myself, I wanted to ask someone who had. So, I asked my mom. This was what she told me.

People who say they work 100-hour weeks may be at the office for a hundred hours, but they are actually productive for around 60. The remaining 40 hours are spent taking breaks of various sorts. But the reason they stay at the office for that time is so that they can take breaks in the same space as a bunch of other smart people working on the same set of projects. That way, discussions about not-work meld into discussions about work, and the group ends up with more productive time overall.

This is the optimal workflow for any group of people working on a project for as much time as possible, as it turns out. And it’s used not only by technology startups but in every other area where such a thing may be needed. For example, before she got into tech, my mom worked as a researcher for NASA. They used the same system: people would work for several hours, then take a break and play foosball and talk about something unrelated, then get talking about their work at the foosball table, and then somebody would have an idea and run off to the office to go work on it.

Learning this, I was even more impressed than I had initially been. Apparently, highly productive groups have been independently reinventing this same style of working for a very long time. And now I get to use it, too.

Rationality is Generalizable

In order to make some extra money during this pandemic, I’ve been doing a bit of work fixing up one of my mom’s rental houses. All the work is within my preexisting skillset, and it’s pretty nice to have a physical job to give me a break from my mental work.

When fixing up a house between tenants, the first item on the to-do list is to create the to-do list, called a “punch list”. In order to create it, I walked through the house and noted down the problems: this railing is loose, this stair tread is broken, these tiles are cracked, etc. Once I was finished, I’d made a list that would probably take me about a week to complete.

And then I tripled that time, and sent it to my mom as my time estimate.

The reason I did this was in order to combat something known as the planning fallacy, one of many endemic flaws in the human brain.

When humans make plans, they envision a scenario where nothing goes unexpectedly. But reality doesn’t work the way human-brains-making-plans do. Fixing those cracked tiles ends up requiring ripping out eight layers of rotted wood underneath, filling in the resulting two-inch-deep gap with concrete, then leveling it out with plywood before laying the new tiles. Repairing the crack in the living room wall ends up requiring replacing the whole gutter system, which was causing water to run through the bricks on the outside and into the drywall. When we just see some cracked tiles or some chipping paint, we don’t imagine the root problem that might need to be fixed: we just consider replacing a few tiles or repairing a bit of cracked wall.

This generalizes far beyond fixing houses. When a group of students were asked for estimates for when they thought they would complete their personal academic projects,

  • 13% of subjects finished their project by the time they had assigned a 50% probability level;
  • 19% finished by the time assigned a 75% probability level;
  • and only 45% finished by the time of their 99% probability level.

As Buehler et al. wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”

Humans planning things envision everything going according to their plan, with no unforeseen delays: exactly the same as if you ask them for a best-case estimate, in fact. In real life, what happens is somewhat worse than the worst-case estimate.

There are some useful debiasing techniques for combating the planning fallacy. The most useful of these is to utilize the outside view instead of the inside view: to consider how similar projects have gone in the past, instead of considering all the specific details of this particular project. (Considering specific details drives up inaccuracy.)

I’ve used this technique often, but in this particular circumstance, I couldn’t. While I’d done every individual thing that would need to be done to finish this house before, I had never done all of them in sequence.

Considering this problem, you might advise me to ask someone who had done whole houses before. I have easy access to such a person, in fact. The issue with this solution is that this person always makes overly optimistic estimates for how long it’s going to take to complete projects.

You would think her experience would make it easier for her to take the outside view, to consider this house in the context of all the other houses she had fixed. This doesn’t happen, so what gives?

Roy et al. propose a reason: “People base predictions of future duration on their memories of how long past events have taken, but these memories are systematic underestimates of past duration. People appear to underestimate future event duration because they underestimate past event duration.”

In light of all this, my best course of action, whenever I cannot take an outside view myself, is to take my normal (= optimistic) estimate and triple it.
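For concreteness, here is a minimal sketch of the two corrections described above – the outside-view adjustment when you have a history of comparable projects, and the blunt tripling rule when you don’t. The function names and the past-project numbers are hypothetical illustrations I made up for this sketch, not anything I actually computed for the house.

```python
# A hypothetical sketch of the two corrections described above.
# The past-project numbers are invented; only the shape of the calculation matters.
from statistics import median

def outside_view_estimate(optimistic_days, past_projects):
    """Scale an optimistic estimate by how wrong similar estimates were in the past."""
    ratios = [actual / estimated for estimated, actual in past_projects]
    return optimistic_days * median(ratios)

def fallback_estimate(optimistic_days, factor=3):
    """When there's no comparable history, just multiply the optimistic estimate."""
    return optimistic_days * factor

# Hypothetical history of similar jobs: (days estimated, days actually taken).
history = [(5, 11), (3, 7), (10, 19), (4, 13)]

print(outside_view_estimate(7, history))  # ~15.9 days: past jobs ran roughly 2.3x over
print(fallback_estimate(7))               # 21 days: the blunt "triple it" rule
```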

I made that three-week time estimate around the first of June. Today is the 16th, and I’m finishing the last of the work.

This whole situation, like the planning fallacy in particular, is generalizable. Learning about cognitive psychology, and the persistent flaws in all human brains, is oftentimes more useful for coming up with correct answers than experience.

You might not think that the tactics for “how to not be stupid” would be as generalizable to every field as they are. Each field has its own tips and tricks, its own separate toolbox, carpentry is not like neurosurgery is not like musical composition… But we are all human brains, us carpenters and neurosurgeons and composers, and we are all using the same flawed circuitry generated by the same optimization process. Here, as in planning, the special reasons why each field is different detract from an accurate conclusion.

Knowing about one cognitive bias produced a better time estimate than thirty years of experience in the field. Rationality does not always produce such a large improvement, but it does prevent humans from making the types of stupid mistakes we are prone to. In my personal opinion, if a decision is worth making, it is worth making rationally.