Language: A Cluster Analysis of Reality

Cluster analysis is the process of quantitatively grouping data in such a way that observations in the same group are more similar to each other than to those in other groups. This image should clear it up.

Whenever you do a cluster analysis, you do it on a specific set of variables: for example, I could cluster a set of customers against the two variables of satisfaction and brand loyalty. In that analysis, I might identify four clusters: (loyalty:high, satisfaction:low), (loyalty:low, satisfaction:low), (loyalty:high, satisfaction:high), and (loyalty:low, satisfaction:high). I might then label these four clusters to identify their characteristics for easy reference: “supporters”, “alienated”, “fans” and “roamers”, respectively.
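
If you like seeing things in code, here’s a minimal sketch of that customer example in TypeScript, with made-up numbers: each customer is a point in a two-variable space, and clustering just means assigning each point to the nearest of the four labeled centers.

```typescript
// A minimal sketch of the customer example, with made-up data: each
// customer is a point in (satisfaction, loyalty) space, assigned to the
// nearest of four hand-picked, labeled cluster centers.

type Point = { satisfaction: number; loyalty: number };

const customers: Point[] = [
  { satisfaction: 0.9, loyalty: 0.8 },
  { satisfaction: 0.2, loyalty: 0.1 },
  { satisfaction: 0.8, loyalty: 0.2 },
  { satisfaction: 0.1, loyalty: 0.9 },
];

// The four clusters described above, with their labels.
const clusters = [
  { label: "supporters", center: { satisfaction: 0.1, loyalty: 0.9 } },
  { label: "alienated", center: { satisfaction: 0.1, loyalty: 0.1 } },
  { label: "fans", center: { satisfaction: 0.9, loyalty: 0.9 } },
  { label: "roamers", center: { satisfaction: 0.9, loyalty: 0.1 } },
];

// Squared distance between two points in the two-variable space.
const dist2 = (a: Point, b: Point): number =>
  (a.satisfaction - b.satisfaction) ** 2 + (a.loyalty - b.loyalty) ** 2;

// Label each customer with the nearest cluster.
for (const customer of customers) {
  const nearest = clusters.reduce((best, candidate) =>
    dist2(customer, candidate.center) < dist2(customer, best.center)
      ? candidate
      : best
  );
  console.log(customer, "->", nearest.label);
}
```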

What does that have to do with language?

Let’s take a word, “human”. If I define “human” as “featherless biped”, I’m effectively doing three things. One, I’m clustering an n-dimensional “reality-space”, which contains all the things in the universe graphed according to their properties, against the two variables ‘feathered’ and ‘bipedal’. Two, I’m pointing to the cluster of things which are (feathered:false, bipedal:true). Three, I’m labeling that cluster “human”.

This, the Aristotelian definition of “human”, isn’t very specific. It’s only clustering reality-space on two variables, so it ends up including some things that shouldn’t actually belong in the cluster, like apes and plucked chickens. Still, it’s good enough for most practical purposes, and assuming there aren’t any apes or plucked chickens around, it’ll help you to identify humans as separate from other things, like houses, vases, sandwiches, cats, colors, and mathematical theorems.

If we wanted to be more specific with our “human” definition, we could add a few more dimensions to our cluster analysis—add a few more attributes to our definition—and remove those outliers. For example, we might define “human” as “featherless bipedal mammals with red blood and 23 pairs of chromosomes, who reproduce sexually and use syntactical combinatorial language”. Now, we’re clustering reality-space against seven dimensions, instead of just two, and we get a more accurate analysis.

Even so, we really can’t create a complete list of everything that the members of most real categories have in common. Our generalizations leak around the edges; our analyses aren’t perfect. (This is the case with every other cluster analysis, too.) There are always observations at the edges that could plausibly belong to any of several clusters. Take a look at the graph above. Those blue points at the top left edge: should they really be blue, or red or green instead? Are there really three clusters, or would it be more useful to say there are two, or four, or seven?

We make these decisions when we define words, too. Deciding which cluster to place an observation in happens all the time with colors: is it red or orange, blue or green? Splitting one cluster into many happens when we need to split a word in order to convey more specific meaning: for example, “person” trisects into “human”, “alien”, and “AI”. Maybe you could split the “person” cluster even further than that. On the other end, we combine two categories into one when sub-cluster distinctions don’t matter for a certain purpose. The base-level category “table” substitutes for more specific terms like “dining table” and “kotatsu” when the specifics don’t matter.

You can do a cluster analysis objectively wrong. There is math, and if the math says you’re wrong, you’re wrong. If your within-cluster sum of squares (WCSS) is so high that you have a cluster you can’t label more distinctly than “everything else”, or so low that you’ve split your clusters beyond the point of usefulness, then you’ve done it wrong.
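
For the curious, here’s a rough sketch of what that WCSS check measures (my own illustration, not something from the original essays): the within-cluster sum of squares is the total squared distance from each observation to its nearest cluster center. Comparing it across different numbers of clusters is the quantitative version of asking whether there are really three clusters, or two, or seven.

```typescript
// A rough sketch of the WCSS check. High WCSS suggests clusters too
// coarse to label well; near-zero WCSS across many clusters suggests
// you've split the data beyond the point of usefulness.

type Obs = [number, number]; // one observation measured on two variables

const sqDist = (a: Obs, b: Obs): number =>
  (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2;

function wcss(observations: Obs[], centers: Obs[]): number {
  // Each observation contributes its squared distance to the nearest center.
  return observations.reduce(
    (sum, obs) => sum + Math.min(...centers.map((c) => sqDist(obs, c))),
    0
  );
}
```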

Many people think “you can define a word any way you like”, but this doesn’t make sense. Words are cluster analyses of reality-space, and if cluster analyses can be wrong, words can also be wrong.


This post is a summary of, and based on, Eliezer Yudkowsky’s essay sequence “A Human’s Guide to Words”.

The Importance of Support

Being Jewish was always something I felt only in the abstract. I had a different culture from most people, I celebrated different holidays, I had a different native country, my family spoke a different language. I was different, sure, but not in any way that mattered.

Otherwise, I’m just like every other American. I celebrate the Fourth of July with fireworks. I stay up late on New Year’s Eve watching the ball drop on TV. Unlike many Jews, I even celebrate Christmas: my dad grew up Christian, so we decided to maintain the tradition from his side of the family. Being Jewish never got in the way of these things.

When I told people I was Jewish, I was sometimes met with confusion, but rarely with hate. In fact, it happened so infrequently that I can recall each individual instance.

This is why I was so shaken when I heard about the shooting at the Tree of Life synagogue. I didn’t understand how this man could look at a bunch of people who celebrated our holidays the way he celebrated Christmas, who had a native culture and history the way other Americans might be Irish or Norse, but who were also American citizens just like anyone else, and decide we must be eradicated from the face of the earth.

How do you look at my family on Rosh Hashanah, smiling and laughing and passing around a brisket like many families would pass around a honey ham on Christmas, and decide that “all Jews must die”?

I don’t think I can hope to know, but I was scared nonetheless.

I personally am relatively safe. I go to a different synagogue, one that doesn’t happen to be in a Jewish neighborhood, and I only go on the High Holidays, when there’s a decent amount of security. Everyone I know personally, even those who go to the Tree of Life, is okay. But though that diminishes the fear for the personal safety of those I know, it doesn’t do anything about the more general fear I have for my people.

If you’re a member of a majority culture, you may not understand the strong bond between members of a minority one. Try to think of it as if all Jews are members of the same extended family. (With Jews in particular, this is close to literally true: most Jews are Jewish by birth, so in some sense we really are all related.) So, though nobody I knew personally was killed or injured, many members of my extended family were. And that feels pretty awful.

There is a light in the fog, though. It’s the reason I decided to write this essay, as opposed to many others I could have written around a similar topic. And that light is the fact that a lot of people, all of them goyim, have been asking me questions like these.

“Jen… are you okay? I mean I know you weren’t in it but… anyone you knew?”

“Hey, you okay? Cole mentioned you live near Pittsburgh.”

“Is your family safe?”

I’ve never had so many people asking after me before. It was really nice to know that so many people cared. It helped me to realize that, in the words of my skating coach, “Those who hate are a small percentage of the country. The people who love are so many more in number and power and we will always win in the end.” Just because one man thinks that I shouldn’t exist doesn’t mean that everyone thinks that.

This is the importance of support. And it’s not just about mass shootings that make national news; it’s about every crisis, big and small. If you ask one simple question, “are you okay”, you can lift one straw off someone’s breaking back. You can make their day that much more bearable. If you ever question whether or not to reach out to someone going through hardship, do it. Reach out.

It really does help.

Good Things to Know About People and Money

Arguably, the three biggest things you have to deal with nowadays are people, money, and tech. I’ve already written about tech, so this post will be about people and money. The sciences of people and money are, for the most part, psychology and economics, so today I’ll be discussing a brief overview of each.

Psychology, especially cognitive psychology, is heavily based on the observation that people want to believe they are rational and logical and that they make rational, fact-based decisions, but mostly they don’t.

Economics, however, bases most if not all of its models on the assumption that people will, for the most part, behave rationally.

You will notice the fundamental contradiction.

As such, I’ll talk first about psychology, then economics in theory, and lastly, economics in practice, taking the psychology into account.

Psychology

The most important thing you need to know about people is that they are brains. The most important thing you need to know about brains is that they are fallible.

The specific ways in which brains are fallible are called biases. Eliezer Yudkowsky defined biases as “obstacles to truth which are produced, not by the cost of information, nor by limited computing power, but by the shape of our own mental machinery.” There’s a decently sized list of them on Wikipedia, but unfortunately it isn’t really comprehensible to laypeople. There’s a decently sized collection of essays about many of them on LessWrong as well, but it’d take you a really long time to read (I know, I’ve read it) and you don’t have all year. As such, I’ll discuss a few of the most common and most stupid.

Conjunction Fallacy

The probability of one thing happening is never lower than the probability of that thing plus another, additional thing happening. If you want to be mathematical about it: if A stands for a thing happening, B stands for a different thing happening, and P stands for “the probability of”, then P(A) ≥ P(A&B).

However, in practice people don’t apply this. In a 1981 experiment, 68% of the subjects ranked it more likely that “Reagan will provide federal support for unwed mothers and cut federal support to local governments” than that “Reagan will provide federal support for unwed mothers.” The subjects substituted judgment of representativeness for judgment of probability.

The way to fix this is to understand that each additional detail, no matter how representative, is a burden on the probability. Each “and” decreases the likelihood that the whole thing will happen.
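
If you want to see the arithmetic, here’s a tiny sketch with made-up numbers (not the figures from the study):

```typescript
// A tiny sketch of why adding a detail can only lower the probability.
// The numbers are made up for illustration.

const pA = 0.4;       // P(A): probability of the first thing alone
const pBgivenA = 0.3; // P(B|A): probability of the extra detail, given A

// P(A and B) = P(A) * P(B|A), and P(B|A) can never exceed 1,
// so P(A and B) can never exceed P(A).
const pAandB = pA * pBgivenA;

console.log(pAandB <= pA); // true, and it stays true whatever the numbers are
```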

Read more about the conjunction fallacy and the research done to find it.

Availability Heuristic

If I’ve got a map of California, that map is not itself California. It’s a representation. In the same way, the picture of reality that you’ve got in your brain is not itself reality. The map is not the territory. This concept can be hard for humans to grasp, because we have never observed the territory directly: we’ve only got our body, our nerves, our senses, and that’s the closest we’re ever going to get. We’ve got a number of different maps, but no direct access to the real territory. (This fact becomes really obvious when your maps get messed up: if you get high on LSD, “reality” gets messed up but actual reality stays exactly the same.)

Because we interact with maps instead of territories, we intuitively judge how likely an event is by how easily examples of it come to mind. That is the availability heuristic. But the map is not the territory, so the availability of a memory is not the same thing as the probability of the actual event.

Read about how the availability heuristic makes people unprepared for large disasters.

Economics in Theory

Economics is one of those things where if you can even give a cursory explanation of basic principles, you’ll sound like a genius. I think it’s some combination of 1. the genius implied by understanding money, aka the motivation of the world, 2. all the complicated terms that economists use to describe relatively simple trends and math, and 3. the fact that economics is mainly studied by immensely boring people.

Some members of my family (they aren’t boring, I promise) who actually do understand economics have taught me some basic principles. Now that I have had my share of feeling like a genius for knowing how money works, I’ll pass it on to you. Don’t worry, I made them explain it to me with simple words and minimal jargon, so that’s how I’ll pass it on.

At a fundamental level, companies set their prices so that they’ll get customers, and customers buy things at prices they think are reasonable.

Let’s give an example. If I want to buy ice cream, and ice cream costs $5, I’ll say “okay, sure” and buy it. If ice cream costs $10, though, I’ll say “screw that” and buy a different sweet. So obviously, there is a price that I’m not willing to pay for an ice cream; there is a price at which I as a customer will find an alternative.

On the other end of the price spectrum, it’s not worth the ice cream vendor’s time to sell their ice cream for $1. If I’m not willing to buy ice cream for more than $1, I’m going to be very hard-pressed to find a vendor who’ll sell it to me for so little. Therefore, there is also a price at which it doesn’t make sense for a vendor to sell the product.

We can graph this. If we put price on the vertical axis and quantity (amount of product purchased or frequency at which product is purchased) on the horizontal, we can get some nice lines.

Supply is just how much of the product is on the market: how much ice cream is available to buy. Demand is how much people will buy the thing (either number of products or purchase frequency): how often I buy ice cream. Right where the two lines cross is “equilibrium”: the price and quantity at which the amount vendors want to sell equals the amount customers want to buy.
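
To make that concrete, here’s a minimal sketch with made-up, straight-line supply and demand for the ice cream example; the equilibrium is just the price at which the two quantities match.

```typescript
// Made-up straight lines for the ice cream example: demand slopes down
// (higher price, fewer cones bought) and supply slopes up (higher price,
// more cones offered). Equilibrium is the price where they match.

const demand = (price: number): number => Math.max(0, 100 - 10 * price);
const supply = (price: number): number => Math.max(0, 20 * price - 20);

// Solve 100 - 10p = 20p - 20  =>  30p = 120  =>  p = 4.
const equilibriumPrice = 4;

console.log(demand(equilibriumPrice)); // 60 cones demanded at $4
console.log(supply(equilibriumPrice)); // 60 cones supplied at $4
```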

Equilibrium does a great job at setting a price at which people want to buy stuff and companies want to sell stuff. But unfortunately, economics is never as easy as the theory. Politics comes into play.

Economics in Practice

Let’s imagine that someone comes along and says that it’s completely unacceptable that people can’t buy ice cream for $1. There are underprivileged families, they say, who don’t have more than $1 for ice cream, and those families should be able to buy ice cream, too. They have heartbreaking ad campaigns featuring poor families unable to buy a simple dessert. Their opinion gains leverage, and they start to lobby for a regulation to put an upper limit of $1 on all ice cream sales in America. They win; now vendors are forced to sell ice cream for $1 at most.

People hear about this and start lining up for the ultra-cheap $1 ice cream, but meanwhile, the ice cream vendors are slowly going crazy. Most of them are losing money on every sale and will rapidly go out of business. One vendor is still turning a profit, but it’s incredibly small. She can’t support her family on this, so she picks up a side gig, and quits the ice cream business entirely shortly thereafter. Another vendor decides that if he has to sell ice cream for $1, he’ll make his portions smaller and use worse ingredients. By doing this he makes it cheap enough that he can keep his business and livelihood afloat. He’s not happy about it, though: he misses making quality ice cream.

Meanwhile, the consumers feel shortchanged. Yeah, they’ve got cheap ice cream, but three out of every four ice cream vendors have gone under, so it’s scarce. Further, the ice cream vendors who are still afloat have decreased the quality of their goods substantially.

The concept of the government introducing a maximum price for something is called an artificial price ceiling. The classic example is rent control in NYC. In terms of the supply and demand curves, a ceiling caps the price (the vertical axis) well below equilibrium, so the quantity supplied is way too low for the demand.

Now let’s turn it around. Let’s say instead that someone comes along and says that ice cream vendors don’t get paid enough. Ice cream is the American way, they say, and we have to protect American ice cream vendors. They have heartbreaking ad campaigns featuring sad ice cream vendors coming home to cramped empty apartments and sighing over piles of bills. Their opinion gains leverage, and they start to lobby for a regulation to put a lower limit of $10 on all ice cream sales in America. They win; now nobody can buy an ice cream for less than $10.

A lot of people suddenly stop buying ice cream. They buy alternatives: cookies, candy, etc. As a result, ice cream vendors see a steep decline in their sales, because the only people who still buy it are the die-hard ice cream fans, and even they buy it less frequently. The vendors are making more per ice cream, but their sales have been cut so much that it doesn’t matter; they’re making less overall profit. Again, both the customers and the vendors have been screwed over.

The government introducing a minimum price for something is called an artificial price floor. The classic example is the minimum wage. In supply and demand terms, a floor forces the price well above equilibrium, so the quantity demanded is way too low for the supply.
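
Using the same made-up lines as the equilibrium sketch above, a quick calculation shows why both interventions shrink what actually gets traded: the quantity exchanged at any legal price is whichever is smaller, what buyers will buy or what vendors will sell. (The cutoff prices below are illustrative; with these particular lines, the $1 and $10 from the story would wipe out trade entirely.)

```typescript
// Same made-up lines as the equilibrium sketch. The quantity that
// actually changes hands at a given price is the smaller of what buyers
// will buy and what vendors will sell at that price.

const demand = (price: number): number => Math.max(0, 100 - 10 * price);
const supply = (price: number): number => Math.max(0, 20 * price - 20);
const traded = (price: number): number => Math.min(demand(price), supply(price));

console.log(traded(4)); // 60 cones at the $4 equilibrium
console.log(traded(2)); // 20 cones under a $2 ceiling: vendors pull back
console.log(traded(8)); // 20 cones under an $8 floor: buyers walk away
```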

Price floors are made to help vendors, but in the end they screw vendors over. Price ceilings are made to help customers, but in the end they screw customers over. Artificial political constraints on markets are meant to help, but no matter who you’re lobbying for, you end up hurting everyone.

We can tie this back in with psychology. If humans were really logical beings, we would know that being indignant is not a substitute for doing math. But we don’t realize that; we keep making the same dumb mistakes over and over again. Maybe somebody should compile a list.

Do People Want To Learn?

I’ve written before about how the public school system doesn’t teach the right things. But there’s a bigger problem underlying the whole rotten mess of concrete and bureaucracy that is the modern public school system. There’s one single assumption that underlies the whole thing, and that one assumption is untrue.

That false assumption? “People don’t naturally want to learn.”

If you believe people don’t naturally want to learn, then what about babies and toddlers? Nobody formally teaches little kids to sit and crawl and walk and talk, but everybody knows that all little children learn these things. It’s really obvious that little children are wired to learn and to learn voraciously. Just look at any two-year-old who annoys the grownups by asking so many questions.

So if “people don’t want to learn” isn’t true at the start of life, when exactly does it become true? If you look at kids, it seems to happen right around school age. Children who a year earlier were annoying the grownups with their extreme curiosity mellow out, then sink further and further into “I hate learning”.

Still, the same exact children who don’t want to learn in school continue to learn voraciously about things that interest them. It may be things adults don’t approve of, like cartoon characters, or video game stats, or how to bypass the screen time lockouts on their phones. But this is still learning, and it’s still curiosity. It’s learning in the absence of being forced to learn, which is why it continues to be fun. So evidently, people can and do learn things that they’re motivated to learn and interested in learning, at all ages.

“But people don’t learn the things they need to learn!” you may exclaim. Let me ask you, what exactly is it that we teach in school that people need to learn? And how do we know that they’re not going to learn those things naturally, outside of school?

What do people need to learn? Reading. Writing. Basic arithmetic. How to exist as an adult. But everyone learns these things of necessity; you can’t function in the world without them. You don’t need school to teach that. And after they have the minimum knowledge they need to function in the world, individuals follow their specific interests to logical conclusions.

Still, what about all those other things that we teach in schools? Spanish, differential equations, mitochondria, whatever? What about how to get into college?

Interestingly, there is a strong and growing subculture of people who raise their kids with no enforced education. And the research shows that these kids get into college and have successful careers at rates equal to or even greater than those of the publicly or privately schooled population. (Sources: Smithsonian, KQED)

So if just letting kids do what they want is so great, why do we all think instinctively that it shouldn’t work?

John Holt wrote this in his book How Children Learn. “All I am saying in this book can be summed up in two words—Trust Children. Nothing could be more simple—or more difficult. Difficult, because to trust children we must trust ourselves—and most of us were taught as children that we could not be trusted. […] What we have to do is break this long downward cycle of fear and distrust, and trust children as we ourselves were not trusted.”

We don’t believe unschooling should work, even though it does, because the societal wisdom about children, which we all carry somewhere in our brains, is wrong. We were taught not to trust how children naturally learn. But we were taught by the very system that profits off not allowing children to learn naturally; we were taught propaganda.

If you don’t need to force people to learn, then, is there no place for teachers, classes, students?

No. There is still a place for that. Just look at all the non-mandatory classes that people take over their lives. People take classes in music and art and tech and science and history and every other thing. Classes can be a very effective way to learn… if the people in them want to learn.

When I was getting started as an artist, I experimented with a number of media through taking classes. When I signed up for them, they were explicitly “for adults”. Not because they had any risqué content, just because they didn’t have anybody to be the schoolteacher, the authority figure. They were meant for adults because they trusted adults. They didn’t trust children.

With some combination of my mom’s persuasive skills and my dashing charm (just kidding, I was like twelve; it was 100% my mom’s persuasive skills) I got into these classes “for adults”. One of them was a wildlife drawing class.

It was a ton of fun and a great experience. I’d been out of school for a while at that point, so I didn’t think it was strange that the teacher just walked around giving advice and making critiques, telling us to help ourselves to complimentary cookies and soda while we drew. I made a few friends in that class, most of whom were many times my age.

A few years later, I took a ceramics class. This one was explicitly “for teenagers”; I think the age range was 15-18 or 13-18 or something like that. The kind of thing that’s meant as an extracurricular for high schoolers.

It was a weird experience. Besides the complete lack of age diversity, there were a ton of really weird rules and expectations. No more than one person was allowed to leave the studio at a time to use the bathroom. I wasn’t particularly annoyed, since it didn’t inconvenience me; I was just baffled. It was so unnecessary.

Not only was the class setup weird, but the teacher was also weird. They (I don’t remember their gender) were really distant and not friendly at all, and they seemed to expect this kind of deference. You know those pompous customers you get working retail, where they just expect you to hand them the universe on a silver platter? This teacher acted a bit like that.

I talked to my mom about it on the ride home, and she informed me that it wasn’t that the class or the teacher was weird. It was because it was a class for teenagers.

With classes for adults, you can be sure that 100% of the people there are there because they want to be. Nobody forces an adult to take an art class. If the student has learned what they wanted to learn, the objective of the class has been achieved. But with classes for teenagers, it’s a completely different story. The teacher can’t be sure that the student wants to be there, or wants to learn. Further, they don’t have to answer to the student; the real master for a teacher of teens is those teens’ parents. The teacher tries their best to make the class interesting and fun, but they have to control what the kids do so that the parents are pleased, and generally act like a schoolteacher, which severely limits their ability to make the class interesting and fun.

There is still a place for classes and teachers. These are valuable things. But the public school environment, where the students don’t want to learn and the teachers don’t want to teach and literally nobody wants to be there at all, that is not useful.

So where do we go from here? How does the establishment change?

I propose taking the funds that are currently being funneled into the public school system and using them to fund optional classes, held at public libraries. After the “school subjects” are made optional, we can decide to make mandatory the things that are important for everyone to know regardless of their interests; things which are necessary for functioning in modern society. Teaching basic technology, psychology, and economics would be a good start: after all, there’s an awful lot of people, tech, and money in the world right now. It also makes sense to teach people things like basic self-care and first aid, what laws there are, how to pay taxes, how to get insurance, and so on. These mandatory classes, then, can fill the psychological void left by the public school system (appeasing all the grownups who love telling kids what to do), as well as filling the physical void of the empty school buildings.

What do you think? If you’ve got ideas for how the system could be changed, or reasons why it shouldn’t be, stick them in the comments. I’d love to hear from you.

Rethink the Concept

Let me ask you a question. If you could magically instill every youth in America with specific knowledge, what would you teach them?

Presumably, you’d want to teach them something that would be useful to every one of them, so, what kinds of things are important for every American? How about you teach them how the American government works. The world economy. The Fortune 500 companies. You could tell them which things are legal and illegal, because though everyone knows murder is illegal, there are other things that are more complicated and less obvious. You could teach them their human rights.

Perhaps you could also teach people how to take care of themselves. You could explain what medicines to take for what problems, symptoms for common ailments, and under what circumstances to go to the doctor. You could tell them about things that are harmful to their health: smoking, vaping, unprotected sex, etc. You could talk about symptoms of mental illnesses and healthy ways to cope. You could teach them first aid.

Why not also talk about practical life skills? How to get a job, vote, pay taxes, get a mortgage, get and maintain insurance, or budget finances. Most people are going to become parents; how about we teach them how to raise children?

These are not theoretical questions. We have a method of instilling knowledge into American youth. It’s called public school.

If you think about it, the basic concept is ingenious. We have a program with mandatory attendance, for which purpose we have the resources to transport children to and from a truly gargantuan number of individual buildings. At each building, we have a standardized curriculum, which has specific yearly checkpoints for completion. For twelve whole years, from age six to eighteen, we have the undivided attention of the nation! The undivided attention of the future!

Yet alas, we squander our opportunity. We teach pointless trivia that can be googled in two seconds. We force people to learn things that aren’t useful to most peoples’ lives.

Why do we do this?

Basically, governments move very slowly. The things we teach in school today would have been much more useful sixty or so years ago, when you really wouldn’t have had a calculator on you every day (isn’t that what our math teachers used to say when trying to persuade us to memorize multiplication tables?). Part of the problem is that the bureaucracy just hasn’t caught up yet.

But there’s another problem. Though people are pushing to change schools, they’re all pushing in different directions. Many of them aren’t asking the fundamental question: “what is the point of this period of mandatory education, anyway?” And most of those that are asking reply that the goal is college, as if that does anything other than pass the buck.

It seems to me that the purpose of educating youth is to prepare them to be adults. One part of being an adult is making a living. Another part of being an adult is being a good citizen (knowing what laws exist and how the government works, perhaps also learning history and civics). Adults need to be financially self-sufficient. Adults need to know how to avoid scams. Adults need to know how to raise children – even if they themselves don’t have children, they will inevitably be around kids at some point. Adults need to know how to care for themselves and others.

We teach exactly none of that in high school or college.

A lot of people have it stuck in their head that it has to work this way. That public school is supposed to be useless, as if it were a necessary evil. That teaching everybody calculus and teaching nobody first aid is a reasonable state of affairs. But it’s not.

There needs to be a complete rethinking of the purpose of the school curriculum. Not just “how do we do a better job of preparing more people for college”. Not just “how do we tweak the existing formula to make it a little better in some areas”. We need to completely rethink the concept.

Good Tech Things to Know: An Incomplete List

In today’s technology-saturated world, it’s very helpful to know some stuff about tech even if you’re not yourself a technologist. However, I’m very aware that good explanations of tech for non-tech people are few and far between. So, in this post, I’ll give some simple explanations of some of the most common tech things you might want to know.

The Structure of the Internet

If you’re reading this, I’m going to presume you use the internet, and I’m also going to presume you know that it’s primarily composed of web pages. You probably don’t know, however, how exactly those pages are constructed.

Fortunately for you, it’s surprisingly simple. There are three main components to a web page: HTML, CSS, and a big bucket of other Miscellaneous Things. HTML is what’s called a “markup language”, and CSS is a “style sheet language”; they create the structure and style of a page, but for the most part they don’t do anything on their own. Miscellaneous Things, which include JavaScript and SQL, are “programming languages”: they actually do stuff, like perform actions and make decisions.

Think of HTML and CSS like a static piece of text, and think of programming languages like a button.

Now let’s add some more detail about all of those components, starting with HTML. HTML stands for HyperText Markup Language. (The markup language bit you already understand, and I’ll get to the hypertext bit in a moment.) Essentially, HTML creates the framework for a webpage, by itself, with no stylization (color, layout, etc). Now, when you think of “framework”, it can be tempting to think of a wireframe:

[Image: a website wireframe]

But this isn’t what I mean. See, though there are no images or color, there is still style, because there is still layout. There are distinct sections. The spots to put images are different sizes. Text is organized in columns. HTML, by itself, contains none of these. HTML, by itself, looks like this.

[Image: an ultra-basic, pure-HTML page]

Kinda boring, eh?

Text is organized in a single column. It may be bigger and bold if it’s a heading, but that’s the browser’s default style; technically, HTML doesn’t do that, your browser does. If you display pure HTML, you get a single left-justified column of black text with images and links on a white background. (The presence of links, by the way, is the definition of hypertext. Remember I said I’d get to that? It really is that simple: “link” is short for “hyperlink”, and text that contains hyperlinks is hypertext.) Overall, it’s really uninteresting to look at.

This is where CSS comes in. CSS stands for Cascading Style Sheets, and it’s that middle word we care about: Style. Essentially, CSS creates the colors, fonts, layouts, and almost everything in a website that you care about. The fact that the text you are reading now exists at all is because of HTML, but the fact that the text uses the font Merriweather is because of CSS.

All HTML is supposed to do is tell the computer what stuff is: what part of the webpage is a heading, or body text, or an image, etc. CSS is the thing that makes all of that visually interesting: for example, it tells the computer that the headings should be blue, the body text should use a serif font, and the images should be on the right-hand side.

CSS is actually the reason that wireframes exist. You don’t program a wireframe, you draw it, and the reason is that wireframes don’t exist for the benefit of computers. Wireframes are planning tools that exist for the benefit of humans. Computers already have something to tell them how a webpage should be structured without CSS: it’s called HTML. But a human needs a picture to know that, because humans think in pictures, not code.

You now understand a good third or so of how the internet works. HTML creates the website structure by telling the computer what stuff is. CSS styles that structure into something aesthetically pleasing to humans. Before we move on, let’s dip our toes into the Miscellaneous Things bucket, otherwise known as programming languages.

You already know the most important thing about programming languages: they do stuff. And essentially, they do two things: they perform actions, and they make decisions. (Frequently they do both.) An example of an action is changing the color of an icon. An example of a decision is figuring out which browser the user is viewing the page on. An example of doing both is changing the color of an icon depending on the user’s browser.
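
Here’s a minimal sketch of that icon example, written in TypeScript (a typed flavor of JavaScript) and assuming a page that happens to have an element with the id “icon”, which I’m making up for illustration:

```typescript
// A minimal sketch of the icon example: a decision (which browser is
// this?) plus an action (change the icon's color). The id "icon" is
// made up for illustration.

const icon = document.querySelector<HTMLElement>("#icon");

if (icon) {
  // Decision: a rough check of which browser the user is on.
  const isFirefox = navigator.userAgent.includes("Firefox");

  // Action: change the icon's color based on that decision.
  icon.style.color = isFirefox ? "orange" : "blue";
}
```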

The above examples can be done with JavaScript, which is probably the most popular programming language used on the web. JavaScript is also frequently used in the creation of navigation menus, login forms, and various site-enhancing animations.

As a side note, it’s important to make the distinction between the actual, visible piece of the webpage—the buttons, links, input fields, etc.—which are created using HTML and CSS like any other visible website piece, and the decisions and actions that are attached to those visible website pieces—the action to be taken when the button is clicked, when the link is hovered over, when the input field is typed into, etc.—which are created using JavaScript. These are separate components of the webpage.

Here’s another, slightly more complicated programming language that’s frequently attached to websites: SQL (pronounced “sequel”). SQL stands for Structured Query Language, and yet again, the middle word is important: Query. Essentially, a query is a request for information. Here’s an example. When you log in to a website, you type your login information into the input fields and hit the submit button. When you do that, the webpage asks the server (the machine where the site’s data is stored) about your login data, and the answer to that question is fetched with SQL.

Fetched from where? Well, on the server, the place where the data is stored is called a database. The database is built, managed, and queried with SQL. When a new user creates an account, their login information is stored in the database, and whenever they log in again, the information they typed into the input fields is checked against the data stored in the database. All of this happens with SQL.

Everything we’ve talked about on the web so far—HTML, CSS, JavaScript—happens “client side”, aka, on the user’s browser. SQL, by contrast, happens “server side”, aka, on the server where the website data is located. Client-side processing happens where it does because the experience is different for each user, depending on the size of their browser window, what type of browser they’re using, etc. Server-side processing happens where it does because the data is collected from a huge number of different users, so it makes the most sense for the resulting gigantic amount of data to be stored in one centralized location.
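
To make the client-side/server-side split concrete, here’s a minimal sketch of the browser’s half of that login flow, using a made-up “/login” address; the actual SQL query against the database happens on the server, out of the browser’s sight.

```typescript
// The browser's half of the login flow, sketched in TypeScript. The
// "/login" endpoint is made up for illustration; whatever server sits
// behind it would run the actual SQL query against the database.

async function logIn(username: string, password: string): Promise<boolean> {
  const response = await fetch("/login", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ username, password }),
  });
  // By the time the server replies, it has already checked the submitted
  // details against what's stored in the database.
  return response.ok;
}
```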

Here is the major takeaway: Computers think dramatically differently than humans. HTML is the basic structure of a webpage from a computer’s point of view. A wireframe is the basic structure of a webpage from a human’s point of view. People who program computers need a foot in both doors: obviously, they think like humans, but they also need to understand how to think like a computer. Computers don’t speak English, computers speak code, and if you want a computer to do what you want, you had better be able to talk to it in its native tongue.

Search Engine Optimization

Now that you generally know how the web is structured, let’s talk about a surprisingly little-understood but incredibly crucial aspect: search engine optimization, or SEO.

If you’re reading this, I’m going to presume you know what a search engine is. What you probably don’t know, though, is how exactly search engines decide what results go first. When you search for something on Google, you see results in a certain order, but what algorithm generates that order?

The answer is constantly changing, for two reasons. First, as soon as companies figure out what the criteria are for higher search result placement, they capitalize on it like crazy, because higher search result placement leads to more customers. (The process of doing this is SEO.) Second, it’s in Google’s best interest for its algorithm to promote the kinds of search results that people actually want to see. If users’ criteria for what makes a good search result are different from the kinds of results Google’s algorithm actually produces, the results are skewed in a direction that isn’t beneficial to those users, and they get mad.

So ideally, if Google makes a good algorithm, the companies will actively try to make their websites better in order to get higher search result placement.

As a side note before I get into the rest of this, one thing a lot of non-techie people don’t realize is that when they search something, there are some results—typically the top ones—that are paid ads. People have paid a certain amount of money to have their product/service shown whenever that keyword is searched. These are usually labeled “Ad”, but the label is often inconspicuous or otherwise difficult to see. In this section, I’m talking about “organic” or “free” top results, not results that are at the top because somebody paid for them to be there.

I don’t know the whole of the algorithm Google uses, since it’s obviously a closely-guarded secret, but here are some of the well-known portions of it that are commonly used by companies for SEO.

  • Improving page load time. Visitors to a site really hate having to wait a long time for a page to load, so search engines put websites that load quickly higher in their rankings.
  • Having other websites linking to yours. This was a much bigger thing ten years ago, but it’s still a part of the algorithm. Essentially, if a good number of other websites link to yours, then the people posting on those other sites probably think your content/product is good, so search engines put your website a little higher. The reason this is a smaller component of the search engine algorithm is it’s really easy to cheat. Back when this was a larger portion of the algorithm, companies would create a ton of small sites that linked liberally to their main site.
  • Getting visitors to click on your page, stay on your page longer, and/or visit other pages on your site. If you can get, and more importantly keep, your visitors, your page is probably giving them what they want. Search engines rank these outcomes as follows:
    1. The user clicks on the page, spends a decent amount of time there, and proceeds to navigate around on the rest of the site.
    2. The user clicks the page and spends a decent amount of time there.
    3. The user doesn’t click the page.
    4. The user clicks the page, but clicks away almost immediately.

That last one might seem strange. Shouldn’t those last two be in the opposite order? No, and here’s why. If the user doesn’t click the page at all, they may think “no, that’s not what I want”, or they may have just not noticed it. By contrast, if the user clicks the page and then immediately clicks back, the search engine can be reasonably sure that the user thought “no, that’s not what I want”. The difference is between a pretty certain “no”, and a merely possible “no”, hence the ranking.

  • The page uses words/phrases contained in the user’s search. In the 00s, this actually used to be almost the only method of SEO. The reason this changed was that, like my second point above, it is very easy to cheat. Companies would create a ton of webpages on their site that came very close to spamming a particular keyword, while still looking enough like a genuine article to fool a search engine. (Some of the first paid work I ever did was working freelance writing articles like these; I remember writing 500-word articles on black mesh, swimming goggles, and other miscellaneous junk.) However, due to its importance in determining a good search result given how search engines fundamentally work, this remains a pretty large portion of modern SEO. They’ve gotten around companies doing the 00s-SEO by making “stuffing”, or the overuse of keywords, a penalized practice.
  • Search engines can move easily through the pages to find what they need. This one is kind of blatant self-interest on behalf of the search engines, but it still makes a lot of sense. Search engines have a ridiculous amount of internet to “crawl” through (that is the technical term, I’m not kidding), and they want to show their users their search results as quickly as technologically possible, so they prioritize pages that are easy for search engines to find information in.
  • The site works well on mobile. Nowadays, a lot of people view webpages on their mobile phones, not just on desktop. However, mobile-enabling a website is hard (trust me, I know). So, to incentivize developers to do it, search engines give heavy penalties to sites that don’t have good and usable mobile versions.

These are just some of the most crucial and important ones off the top of my head, but there are a huge number of other factors. For further reading, see this insightful SEO Periodic Table.


That’s all for right now! I will probably update this list whenever I find another tech thing that somebody doesn’t understand. If there’s a tech thing that you’ve been hearing about but don’t get, absolutely post it in the comments: I’d love to hear from you and update this post accordingly.

My Experiences In Sales

Note before I start: there was a ton I learned from my sales experience. If I had it to do over, I would still have chosen to take the job. Even so, doing a full-time sales job on top of being in school full-time was one of the hardest periods of my life, so I’m going to talk about it as such. Further, there was a reason I quit. As I’ve written before: I’m a big fan of the truth. I like it when people have accurate beliefs about reality. And the reality is, sales is hard, and it’s not for everyone.

I write this in the hopes that you may learn something useful about sales jobs, or at least about what it’s like for someone wildly unsuited to a long-term career in sales to work one. Or maybe you’ll just find it amusing. Whatever works.


I took this sales job at the beginning of 2018. I worked for an independent sales firm which specialized in doing door-to-door sales for huge companies that didn’t have the time to bother with such a thing themselves.

I decided to take a sales position for a few reasons. Most important of those was that I always knew that I had a major weakness in the area of social skills, and I wanted to improve at it. I figured that the best and most effective way of doing this would be to go somewhere that I was in way over my head. If you’re drowning, you only have two options: die or learn to swim. Knowing that, I walked in the door on my first day.

I’d made it through the interview process, which was as much a test of charisma as it was anything else. My father, though he failed dismally to pass it on to me, has always been able to exude massive charisma when necessary. And I knew from the interviews that, from a charisma standpoint, I was in an office full of copies of my dad. Every single person could do backflips across the social stage, while I could hardly find the confidence to walk without tripping.

But despite all of this, I was ruthlessly determined. I was going to learn from these people, I was going to absorb this charisma that saturated the air, and I was going to come out of this experience stronger. That was the goal, I thought as I was shown into the main meeting room.

I learned that a conversation with a prospect was broken out into five main sections: the intro or elevator pitch, the questioning, the presentation, the rehash, and the close. On the first day, we learned the intro pitch, which I will probably still be able to recite many years from now. “Hi, how’s it going? So nothing crazy, my name’s Jenya, and I’m dropping by really quickly on behalf of Verizon. [Did I mention our main client was Verizon? Well, it was, but that’s not too important.] We’ve done a ton of updates around here, helped out a bunch of your neighbors, and I’m just here to see how I can help you too.”

We dissected it based on what they called the “four factors of impulse”, which were as follows: Jones Effect, or the impulse to want what others have; Fear of Loss, or the fear of missing out on an opportunity; Sense of Urgency, or the importance of the time of both the salesperson and the prospect; and Indifference, or the necessity for a salesperson to not act like a salesperson (prospects don’t trust salespeople).

I recited that pitch to myself in the car on the way home. I recited it as I was washing my face and getting ready for bed. I recited it intermittently through the entire next day, which I had off. And by the day after, I had it solidly memorized.

Evidently, this was impressive and unusual. We practiced our pitches in the office the next day, and the more experienced salespeople were impressed. In the afternoon I got the opportunity to practice it “in the field”. I’d knock on the prospect’s door, introduce myself and also the person who was mentoring me, and after I finished the pitch, I would pass the proverbial baton to them so they could keep talking with the prospect. That first day, we collectively made a sale, and I got to keep half the proceeds.

I was tentatively optimistic, but I refused to let my determination slip. Hearing “no” over and over wasn’t hard when I wasn’t doing much, but I imagined it would get harder once I was the one controlling the conversation. Still, the fact that a seasoned professional still got a ton of “no”s gave me some excellent perspective.

The next day, I learned the questions: a complicated decision tree based on what the prospect’s answers were. After “how many TVs do you have” was “do you use DVR”; if they said “no”, you moved on, but if they said “yes”, there were a number of so-called “deeper questions” that we had to ask about it. There was no such thing as a learning curve here: we still had only one day to learn this, but it was fifty times as complex as the pitch.

I took exceedingly prolific notes and stuck the notebook I’d taken them on into a bag. Every piece of paper, every chart and graph and magazine article that I was given, I stuck into that bag. It was my sales bag, and every time I needed anything, I could get it from there. When I got home every day, around 10pm, I would review everything in my bag in detail. For this reason, I never had to be told anything twice.

I deep-dove into this so much partially just because that’s what I do, but also because I knew I had to make up for my lack of natural sociability. Growing up, I had been reprimanded a number of times for using the wrong tone, saying the wrong thing, or otherwise not intuitively knowing what to do in a social situation. I had absolutely no social sense, and so during this job, I asked a number of what they probably thought of as incredibly strange questions. How far away from the door should I stand? Where should I put my hands? How often should I make eye contact? What tone should I use during what segment of the conversation? All these sorts of ridiculously specific questions that they had probably never thought about, since they don’t think about what tone they use; they just use it, and people like them.

But I asked these stupid questions, and I got better. I may not be able to intuit what to do in a social situation, but I can sure as heck analyze it to death and memorize every minute difference. So, just like my pitch, I analyzed the conversations. I analyzed my tone, eye contact, gestures, body language. I analyzed those things for the prospect, too. When I didn’t know something, I didn’t think about how dumb it was to not know it, I just asked. And I took liberal notes. Then in the mornings and evenings and during any other time when I wasn’t doing schoolwork (because remember I was in college too), I reviewed my notes. Use a low tone when stating facts. Avoid crossing your arms. Put one foot one step above the other when on a staircase. Take two steps away from the door after you knock. Use eye contact for emphasis when you’re talking, or any time the prospect is talking.

After a few months, though, I noticed that I was stagnating. My mentors were making sales on their own, but I wasn’t. I had learned everything I could, and I didn’t know what else I could do. I got frustrated. Not the kind of momentary frustration, the kind that spikes up when you spill a drink; this was a long, drawn-out frustration that seeped into my mind over the course of these stagnant weeks, when I was walking six miles up and down peoples’ doorsteps, knocking on a hundred and fifty doors, working a twelve-hour day, and coming home long after it had gotten dark with nothing to show for it all.

To make it worse, around this point, my greatest mentor quit. He had been the greatest help to me overall: he gave detailed explanations of what to do in each specific situation, he knew like a good coach exactly what I was doing wrong and how to fix it, and he communicated clearly. Not only that, he was a delight to have around, and he was consistently one of the people in the office who made the most money.

Even after all of this it was hard for me to get up the nerve to quit. I had known from the get-go that I wasn’t suited for a long-term career in sales, but I didn’t want to be one of those people who just quit when the going got rough. It took a long conversation with my mother about priorities for me to see past this. I went into this with the goal of improving my social skills, and I had succeeded. Yes, it would have been nice to make more sales, but at the end of the day, this wasn’t what I wanted to do professionally. I didn’t need to be frustrated with my lack of success in something I’d only gone into in the first place because I knew I was awful at it. So soon after, I handed in my resignation.

Still, the lessons I learned from this tough period have stayed with me to this day. Through this process I learned what it’s like to be literally the worst person in a group at something. Growing up, I’d never had that opportunity, since I was always in the top 1% of everything (that is, after all, how you go about getting into Stanford). It was hard to be the worst, but it was also useful: I could learn from literally everyone.

I learned grit and determination. The experience created for me a crazy high benchmark that I can always compare future stressful events against. No matter what I go through, I can think, “this is easier than taking multiple extremely difficult classes, none of which I find fun or satisfying, on top of having a full-time job that I suck abysmally at; as such, I can get through this.”

I learned how to be cheerful no matter what. Growing up a performer, I thought I knew how to be cheerful in the extremes of misery: after all, I went out in -10º weather, in the snow and freezing rain, in a skimpy leotard, moved around a sheet of ice at 30+ mph for five or six minutes at a stretch, and had to make it all look easy. Sales made that look like a walk in the park. I walked around neighborhoods in the snow and freezing rain, not for six minutes, but for six hours. I walked up and down stairs in grueling heat, too; something I never had to do as a figure skater. And when I got to peoples’ doors, I couldn’t grimace in the slightest. Unlike in skating, where the audience sees you from fifty feet away and won’t notice a tiny crease in your brow, your prospect will see you from two feet away. They will notice.

I learned how to take “no” for an answer, and in fact to take it in stride. Just because of how the numbers play out, even the best salesperson in the world won’t be able to get a “yes” from every single prospect. There will be people who, say, work for Comcast and get their internet for free. There will be people who slam the door in your face. And you just have to deal with that, don’t let it shake you, and move right along. Next to their name on your list, write “110” and put a diagonal line between the 1s.

But most importantly, I learned how to sell. I learned the details of how the sales funnel works. I learned how to direct a conversation. I learned the difference between a legitimate “no” and a “no” that comes only from a fear of change. I learned how to make smalltalk (a surprisingly huge part of sales!). I learned to speak and persuade off-the-cuff.

Sales would have been an awful career choice for me, but taking a sales job anyway was one of the most useful experiences of my life.

Dating: A Rational Approach

About four years ago, I decided I wanted to find a life partner. Primarily because I was socially oblivious and didn’t know any different, I took a heavily analytical, statistically-based approach to do this—as we all do with other important areas of our lives. The entire process took me a matter of weeks and I have since been in a committed relationship for four years.

For a long time, primarily because my method was so unorthodox and so unheard-of, I assumed that the normal way of doing things was the best method and that I was a lucky fluke. But a number of recent conversations and some reading have led me to consider that maybe, instead of being a lucky fluke, I am one of the few people who does this right, while most of society does it wrong.

Before I dive into this, let’s establish a key point: if you’re going to get married, it is absolutely the most important decision of your life. While your choice of career dictates how you spend a good portion of your life, who you marry dictates how you spend all of your life, because it dictates who you spend your life with. The best recipe for misery is a bad marriage, and the best recipe for greatness is either no marriage or a great one.

That said, let’s go.

To start, John T. Reed, author of Succeeding, wrote about both the humungous importance of marrying the right person, and the haphazard way that many people take to get there. He writes, “The divorce rate is about 50% in the U.S. The median duration of marriages is seven years—just enough time to have some kids and acquire property so that the divorce really screws things up. […] Why are so many people screwing up the most important decision of their lives? Look at how they go about it. I read a book once that said most Americans feel the correct way to meet your spouse is ‘chance proximity’.”

Essentially, most people go about their lives making little to no effort to meet anyone, and they expect to meet their spouse by chance. Reed writes, “‘Some enchanted evening, you will meet a stranger across a crowded room.’ Ask an old maid or old bachelor why they never married and they often tell you that the right person never ‘came along’. ‘Came along’! You gotta be kidding me! People make more effort to buy the right used car!”

And he’s right! Why, of all our important life decisions, do we so adamantly fudge this one?

Well, I don’t know, and I don’t hope to. But I will tell you how I managed to circumvent it.

First, create a list, as comprehensive as you want to make it, of everything of importance that you want in a partner. These can be physical traits (e.g., a beard), personality traits (e.g., wanderlust), or anything else you can think of. Once you’ve made this list, rank-order it from most to least important.

Now, make a similar list of everything you don’t want in a partner. Be specific, but feel free to be obvious: while “emotionally manipulative” is an obvious anti-want, it might still be useful to put it on the list. Once you’ve made the list, rank-order it too.

After you’ve done both of these, now it’s time to do some market research. What kind of dating pools exist? While answering this question, be sure to keep in mind which of these you’ll be willing to utilize. If you live in the U.S., it’s probably out of the question to try to find a date at a convention in London. If you’re considering internet-based dating pools, consider whether you’re willing to be long-distance for an extended period. Make a list of some potential dating pools and rank-order the list by feasibility.

Now it’s time to merge these lists. Figure out what kind of person you’re mostly going to find at each of these dating pools and compare that to your lists of wants and anti-wants. Rank your list of dating pools against those criteria, then compare that ranking (plausibility of good candidates) against your ranking by feasibility. Whichever dating pool scores highest on both (feel free to bias the combination toward whichever ranking matters more to you), make plans to go there.
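If it helps to make that merge concrete, here is a minimal sketch in Python. It was not part of my original process, and every pool name, score, and weight below is invented for illustration; the point is simply that combining the two rankings into one weighted score and sorting by it turns the decision into arithmetic.

```python
# A toy sketch of the pool-ranking merge. The pools, the 0-10 scores
# (higher is better), and the weights are all hypothetical; use your own.
pools = {
    "Convention in Baltimore": {"feasibility": 8, "candidate_fit": 9},
    "Local meetup group":      {"feasibility": 9, "candidate_fit": 5},
    "Online community":        {"feasibility": 7, "candidate_fit": 6},
}

# Bias the merge toward whichever dimension matters more to you.
weights = {"feasibility": 0.4, "candidate_fit": 0.6}

def combined_score(scores):
    """Weighted sum of a pool's feasibility and candidate-fit scores."""
    return sum(weights[k] * scores[k] for k in weights)

# Highest combined score first: that's the pool to make plans for.
for name, scores in sorted(pools.items(), key=lambda kv: combined_score(kv[1]), reverse=True):
    print(f"{name}: {combined_score(scores):.1f}")
```

The exact numbers don’t matter; once both rankings are written down explicitly, picking the top pool stops being guesswork.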

Let’s go through my own story as an example. My list of wants included someone who is sensitive, who listens, and who could adapt to my hectic lifestyle. My list of anti-wants included someone who is overly macho or self-centered. I was young and very broke, so my options for dating pools were financially limited, but I also didn’t mind distance (I’d never really been taught that it was supposed to be hard, so I didn’t think it would be; and at present, after having quote-unquote “suffered” two years of distance, I maintain that view). Based on my specific desires, dislikes, and difficulties, I was able to put at the top of my dating pool priority list a convention in Baltimore that ran three days in August.

The process is not over once you arrive at your dating pool: aimless drifting is still not a good plan (though it’s a better plan here than it would be elsewhere). No, now we’re going to systematically look for possible candidates.

The goal at this stage is to meet as many people as possible. A good number would be twenty-five candidates, but you could go for more. With each candidate, weigh the pros and cons. Think of it like going to a used car dealership and looking at all the cars. You can find out a lot about a car just by sitting in the driver’s seat, and you can find out a lot about a person just by having a conversation. Just as you don’t need to take every car in the lot on a test drive, you don’t need to take every candidate on a date. This variety of speed dating has the benefit that you don’t mess with anyone’s heart—theirs or yours. You simply have a list of traits to compare each person against, and all you’re doing is comparing.
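If you like, the comparison itself can be made just as mechanical. Here is another small, purely illustrative Python sketch (the trait names and weights are made up, loosely echoing my own lists below): wants add to a candidate’s score in proportion to their importance, and anti-wants subtract from it.

```python
# A toy sketch of scoring one candidate against ranked wants and anti-wants.
# Trait names and weights are hypothetical; weight = how important it is to you.
wants      = {"sensitive": 3, "good listener": 2, "adaptable": 2}
anti_wants = {"overly macho": 3, "self-centered": 2}

def score_candidate(observed_traits):
    """Add points for wants the candidate shows, subtract points for anti-wants."""
    score = sum(w for trait, w in wants.items() if trait in observed_traits)
    score -= sum(w for trait, w in anti_wants.items() if trait in observed_traits)
    return score

print(score_candidate({"sensitive", "good listener"}))   # 5
print(score_candidate({"adaptable", "self-centered"}))   # 0
```

The score isn’t the decision; it’s just a prompt for noticing which conversations are worth continuing, which is where feelings (covered next) take over.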

Pretty good, eh? The only thing we still need is to account for feelings. It’s all well and good to meet a person you think would be perfect, but you both need to fall for each other. How do you account for that? Very simply, actually. If you start to feel something good for them as you’re talking, keep talking. And, as you usually do when you date the conventional way, look for signs that they like you back. Not everything has to be complicated.

An important thing to do as you continue conversing with people is to take notes from your conversations and update your lists accordingly. If you started with a list item saying you want to meet people who do X, but when you actually met several people who did X they didn’t seem appealing to you, modify the list! If initially you thought that people who did Y were unbearable, but you met some people who did Y and they actually were fine, modify the list! Make sure to also modify the priority order of things if necessary.

These lists are not set in stone. In fact, it would be silly to have your actual experience with real people take second place to what you dreamed up about what real people might be like. If you’ve never been to New York, you can’t draw an accurate roadmap by sitting on your couch and dreaming about it; likewise, if you’ve never been on the dating scene, you can’t come up with an accurate picture of your ideal spouse by sitting on your couch and dreaming about it. So once you have real-life experience, modify your lists!

If, by the end of your first venture into a dating pool, you don’t have a life partner yet, don’t worry! Just go back to your lists, find your second-choice dating pool, and rinse and repeat. It may also be that your criteria are too broad or too narrow, or that you were wrong about what kinds of people frequent which places. In those circumstances, don’t sweat it; just go back and revise your lists with your new knowledge. Then get back out into the world and keep at it! I promise, having a systematic approach will work so much better than just waiting for someone to “come along”, and it will feel better, too. You’re being way more productive!

Obviously, this is a very different approach from the conventional one. But if you step back and think logically about how people should make this choice, it’s a much more reasonable one. I’m sure there will be people saying it’s “not romantic”, but approaches like these have resulted in lasting relationships: my father took a similarly systematic approach to dating, and my parents have been together for thirty years; John Reed followed a similar approach and was married for much longer. You don’t need to take my four-year relationship as your only data point.

Furthermore, “romantic” should mean “spontaneous”, not “stumbled into”. Too often, people confuse the two. Romance doesn’t have to be about random chance.

I am a 4-year-college opt-out. Here’s why.

A few days back, a family friend asked when I planned on going to college. I said, “I’m not. At least not right now.” I didn’t have the time to explain my reasoning to her, so I don’t think she understood. But here, I have the time and the words, and I’ll try to explain the reasoning behind this massive and unconventional life choice.

Let’s skip back ten years, to the summer of 2008. My siblings and I are debating with our parents about where to go for dinner. As with most families trying to decide on things, we vote on it. By purely counting heads, the option the kids want should win, but my parents throw a wrench into the rules: “adults get five votes”. Suddenly, the kids are outnumbered.

I don’t mind all that much – I still get free food, after all – but I’m curious as to the reasoning. “So we get five votes when we turn 13?” I ask. (As a Jew, I’d have my bat mitzvah and become an adult at 13.) “No,” says my mother, “it would be silly for you to be able to just age into it. You have to earn your five votes. For our purposes, an adult is someone who’s graduated from college.”

From that point onwards, I made it my goal to get into Stanford, where both my parents went and, in fact, where they met. It seemed an achievable goal: both my parents had gone, so from a genetic standpoint I had everything I needed. Furthermore, I reasoned, they were not genetic flukes in terms of intelligence: most of my grandparents had gone to high-end schools. My maternal grandfather went to Harvard, my paternal grandfather to Yale.

I took my first class at my community college at 14, thinking it would up my chances for getting into Stanford if I already had an Associate’s by the time I graduated from high school. My brother, who had decided on a similar track, took the class with me. I wasn’t sure about a major yet, but it also didn’t really matter: there were a ton of prerequisites I had to take, for both high school and college, before I needed to worry about it. So, we took Spanish 1.

I had a great time in that class for a number of reasons. I was absolutely stoked to be going to college, albeit a podunk community college. My professor was great (only later did I find out that this was a blessing rather than a rule), the coursework required a lot of study but was nonetheless fun, and I got awesome grades. I felt I was preparing well to go to Stanford in four years.

The knowledge that I was going to a four-year college, and furthermore a top-tier one (Stanford preferably, but Yale, Harvard, or something else comparable would also do), saturated my entire childhood. I made every decision based on what would get me into the colleges I wanted to attend. By my sophomore year, I’d either taken or planned for seven AP tests. When it came time to study for the PSAT, I spent nine months doing so to the near-exclusion of all else. For my Associate’s, I chose only those electives that would prepare me to apply to those colleges.

Until sometime in the spring of 2018, when everything changed.

Unlike a lot of major life shifts, it didn’t happen slowly. It happened in one fell swoop of three chaotic weeks, as I realized three fundamental things.

Firstly, I was sick of taking classes. It had been four years, and community college courses had turned from a joy to a slog. Seldom did anything I worked on in my courses relate to the real world, and if it did at all, it reflected real-life work through a funhouse mirror. I was close to graduating high school with my Associate’s in computer science, and I felt I saw the light at the end of the tunnel. At last, I thought, I could start doing meaningful work and creating value for real people! Wait, no, I couldn’t. I had to go to college. Didn’t I?

I started to doubt my rationale for pursuing college so ardently. I’d decided I would do it when I was a child, mostly because my parents had both done it. When a child thinks something is worth pursuing, it’s not because they’ve done a cost-benefit analysis and decided that it’s the logical conclusion based on their knowledge and previous experience. A child thinks something is worth pursuing because it sounds impressive, fun, or cool.

Further, societal expectations had pushed me away from questioning the idea of going to college. Even when I questioned the usefulness of college, I needed only to look at any book or article, or to talk to any human being, and I would have my wishes to attend college validated. On top of that, even the people who said college might not be a necessity for everyone continued to say it was the best option for smart people. And given our societal propensity for scoring children on standardized tests, it was always very apparent to me how smart I was, at least from an intellectual standpoint.

But now it became apparent that college was not the best option for me, or even a terribly good one. Everyone knows that the cost of college in dollars is excessive and often prohibitive, but on top of that, I had to face the opportunity cost. My goals in life, like most other people’s, had to do with the real world, with making money in real life, with having a career. If I went to college, I would put all of that off for four more years. And for what? A name on a résumé and a few connections. The former might not even be necessary: I didn’t know enough about the work world yet to know whether any of my future employers would even care whether I had a degree or not!

Lastly, I realized that I had another option. Sometime in the spring, I heard about a business internship program called Praxis. Their business model: create a more practical college alternative by giving young people a six-month professional bootcamp, followed immediately by a chance to apply what they’ve learned through a six-month internship at a technology startup.

Learning about Praxis was what kickstarted my questioning of the path I’d presumed, since childhood, that my life would take. I had to face the facts: lately, despite my stated goal of and plan for getting into a top-tier school, I had been moving toward it as a duty, an obligation. When I was younger, learning had been a joy; now, I yearned to apply what I learned. I kept going only because I saw college as the inevitable path for a smart person like me; if not that, what else?

The answer to that previously-unanswerable question became Praxis. The application process was intensive, with a multitude of essays and interviews on a very tight timeframe, but I came out the other end with a scholarship and a plan. A very different plan than the one I’d had before, but also a plan I liked a lot better. A plan that brought the light at the end of the tunnel closer, instead of further away.

It was still hard to cope with my decision. For the next few months after my turning-point, I doubted myself a lot. It felt horrible that I’d spent so long working monomaniacally towards a goal only to quit at the last second. But I had to remind myself, I wasn’t quitting. I was choosing a better alternative, since I had more information at seventeen than I’d had at eight (surprise surprise!). I reminded myself that the statistics showed the uselessness of college as a preparation for real-world jobs. That tons of people, entrepreneurs especially, became very successful without degrees. That the field I was going into—technology—didn’t have a degree requirement (unlike, say, accounting, where to sit the CPA exam you need ~150 credit hours of college). That Praxis provided me with the sort of community I was hoping to get from a top-tier school.

At the time of this writing, I’m a month into the six-month professional bootcamp. So far, I’ve hand-coded my personal website (the one you’re on right now!), fixed up my LinkedIn and résumé, and created a personal pitch deck (more on that in this article). Everything I’ve done is immediately applicable to my career.

Contrast this with the inapplicable classes and assignments from last year: AP Latin, during which I badly translated texts by Caesar and Virgil that had been translated much better by others, and tried impossibly hard to be a little less horrible at literary analysis; AP English Composition, during which I wrote a ton of essays and analyses I’m never going to publish because the prompts are so obscure and the topics would be boring to read about, and also tried to be a little less horrible at literary analysis; and AP Java, which consisted mainly of writing code on paper, by hand, with a pencil: something no programmer in their right mind ever does.

Finally I’m working on projects and learning skills that will actually matter to me in the long run. While I was in school, I frequently had to say to myself, “This may seem obscure or stupid or useless, but it’s moving me towards my eventual goal, so it’s worth it.” Now, I don’t need to: everything I do has an obvious connection to my goal. I was dreading the next four years of my future; now, I have a fresh start.

I’m looking forward to it.

You CAN Be the Best in the World

There’s a common claim repeated by people who want you to diversify your skillset: that being the best at one specific thing is functionally impossible. Take this bit from Scott Adams, creator of the Dilbert comic:

If you want an average successful life, it doesn’t take much planning. Just stay out of trouble, go to school, and apply for jobs you might like. But if you want something extraordinary, you have two paths:

  1. Become the best at one specific thing.
  2. Become very good (top 25%) at two or more things.

The first strategy is difficult to the point of near impossibility. Few people will ever play in the NBA or make a platinum album. I don’t recommend anyone even try.

The second strategy is fairly easy. Everyone has at least a few areas in which they could be in the top 25% with some effort. In my case, I can draw better than most people, but I’m hardly an artist. And I’m not any funnier than the average standup comedian who never makes it big, but I’m funnier than most people. The magic is that few people can draw well and write jokes. It’s the combination of the two that makes what I do so rare. And when you add in my business background, suddenly I had a topic that few cartoonists could hope to understand without living it.

Adams’s point about the second strategy is golden. It’s excellent advice, backed by a personal example, and many people would do well to apply it. However, I would contest his first point.

There are two types of examples people cite when they try to make the point that being the best in the world at one thing is impossible: fields that have very specific, unchangeable rulesets, and fields that are very broad. Adams references playing in the NBA and making a platinum album: these are examples of the first type.

Now, I’m not trying to say that Adams isn’t absolutely correct about both of these examples; he is. The problem is that neither of them actually applies to the majority of people in the work world.

In a field with specific, unchangeable rulesets, such as just about any sport, there is only one way to succeed: be in the top fraction of a fraction of a percent. As such, not only does success require the kind of absolute, relentless focus that takes over your entire life, it also requires a non-negligible amount of birth lottery: no matter how hard you try, you can’t play in the NBA if you’re 5’3″. And yet I have never seen or heard of a job with rules as steadfast as a sport’s: the work world is much more malleable.

Now let’s look at the other type of example: exceedingly broad fields. Adams doesn’t give such an example, but it’s easy to find one: being the best writer in the world, being the best programmer, etc. Now, at first glance, these seem like they apply to the work world, and they also seem like they confirm the “you can’t be the best at one thing” wisdom. Except they don’t. Because these examples are all far too broad.

Setting out to be the best writer in the world would, yes, be inconceivably difficult, and likely impossible. But this isn’t because you particularly need to combine your writing skill with some other skill in order to succeed: it’s because you need to niche down. You may not be able to be the best writer in the world, but I’m sure you can become the go-to guy for famous people who want books ghostwritten for them. You may not be able to be the best programmer, but you can become something like a guy my mother knows.

My mother works in Navision (abbreviated Nav), a type of ERP software owned by Microsoft. Essentially, Nav is a UI that makes SQL easy for accountants to use. However, as with any UI with a complicated back-end, sometimes the back-end does something funky. And as with any time a program does something funky, there is a niche for a programmer who can fix it.

In this case, the man who occupies that niche is Ahmed Amini. Everybody who does Nav programming knows who he is, and if you have a strange SQL problem, he’s your go-to. Over the fifteen years my mother has worked with Nav, he has been recommended or mentioned countless times. He didn’t diversify his skillset. He just became the Nav SQL guy.

I’ll give another example. I know a cardiac surgeon by the name of Dave Garber, and he specializes in a very specific procedure (I don’t recall the name) by which an artery in the thigh is transplanted and used to fix something with the heart. This is now nearly everything he does, because he is the best at it. He didn’t diversify his skillset either. He just became the surgeon for this procedure.

How does this happen? The first step is to find something within your field that you seem to be good at and most people aren’t. Next, specialize in it. Seek it out. Try to do more of the thing you’re good at. Over time, other people in your field will realize that you are very good at the thing, and they’ll start recommending you as the go-to.

Once you’re here, you’re golden, because it creates a virtuous cycle: you’re good at the thing, so you do good work, so people see the good work and recommend you, so you do more of the thing, and you get the opportunity to get even better, so you do even better work, and so on.

This is how you become the best in a field: pick a niche you’re good at, specialize in it, and then let word of mouth about your excellent work take it from there. No skill diversification needed.

Why does this matter? Well, because it’s true. I’m a big believer in truth. I like it when people have accurate beliefs about what’s possible and what’s not. If “become the best at a thing” has been moved from your mental “not possible” bucket to your “possible” bucket, this post has done its job.

Diversifying your skillset is still an excellent option. It may even be the best option for you. But if it is, it’s the best option, not the only one out there. And if it’s not the best option, know that you do have others.