Corporate atheism

Legally and formally, a corporation is a person, with the same rights (life, liberty, and property) that a human is accorded. Whether this is good is hotly debated.

In theory, I like the corporate veil (protection of personal property, reputation, and liberty in cases of good-faith business failure and bankruptcy) but I don’t see it playing well in practice. If you need $400,000 in bank loans to start your restaurant, you’ll still be expected to take on personal liability, or you won’t get the loan. I don’t see corporate personhood doing what it’s supposed to for the little guys. It seems to work only for those with the most powerful lobbyists. (Prepare for rage.) Health insurance companies cannot be sued, not even for the amount of the claim, if their denial of coverage causes financial hardship, injury, or death. (If a health-insurance executive is sitting next to you, I give you permission to beat the shit out of him.) On the other hand, a restaurant proprietor or software freelancer who makes mistakes on his taxes can get seriously fucked up by the IRS. I’m a huge fan of protecting genuine entrepreneurs from the consequences of good-faith failure. As for cases of bad-faith failure among corrupt, social-climbing, private-sector bureaucrats populating Corporate America’s upper ranks, well… not as much. Unfortunately, the corporate veil in practice seems to protect the rich and well-connected from the consequences of some enormous crimes (environmental degradation, human rights violations abroad, etc.). I can’t stand for that.

On the corporation, it’s clearly not a person like you or me. It can’t be imprisoned. It can be fined heavily (loss of status and belief) but not executed. It has immense power, if for no other reason than its reputation among “real” physical people, but no empirically discernible will, so we must trust its representatives (“executives”) to know it. It tends to operate outside of mortal humans’ moral limitations, because it is nearly immune from punishment, and a great deal of its bad behavior gets justified after the fact. The worst that can happen to it is gradual erosion of status and reputation. A mere mortal who behaved as it does would be called a psychopath, but it somehow enjoys a high degree of moral credibility in spite of its actions. (Might makes right.) Is that a person, a flesh-and-blood human? Nope. That’s a god. Corporations don’t die because they “run out of money”. They die because people stop believing in them. (Financial capitalism accelerates the disbelief process, one that used to require centuries.) Once the belief is gone, their products and services are no longer valued on divine reputation, and their ability to finance operations fails. It takes a lot of bad behavior for most humans to dare disbelieve in a trusted god. Zeus was a rapist, and the literal Yahweh genocidal, and they still enjoyed belief for thousands of years.

“God” is a loaded word, because some people will think I’m talking about their concept of a god. (This diversion isn’t useful here, but I’m actually not an atheist.) I have no issue with the philosophical concept of a supreme being. I’m talking about the historical artifacts, such as Zeus or Ra or Odin or (I won’t pick sides) the ones believed in today. I do have an issue with those, because their political effects on real, physical humans can be devastating. It’s not controversial in 2014 that most of these gods don’t exist– and it probably won’t be controversial in 5014 that the literal Jehovah/Allah doesn’t exist– but people believed in them at one time, and no longer do. When they were believed to be real, they (really, their human mouthpieces) could be more powerful than kings.

The MacLeod model of the organization separates it into three tiers. The Losers (checked-out grunts) are the half-hearted believers who might suspect that their chosen god doesn’t exist, but would never say it at the dinner table. The Clueless (unconditional overperformers who lack strategic vision and top out in middle management) are the zealots destined for the low priesthood, the ones who clean the temple bathrooms. Not only do they believe, but they’re the ones who work to make blind faith look like a virtue. At the top are the Sociopaths (executives) who often aren’t believers themselves, but who enjoy the prosperity and comfort of being at the highest levels of the priesthood and, unless their corruption becomes obnoxiously obvious, being trusted to speak for the gods. The fact that this nonexistent being never punishes them for acting badly means there is virtually no check on their increasing abuse of “its” power.

Ever since humans began inventing gods, others have not believed in them. Atheism isn’t a new belief we can pin on (non-atheistic scientist) Charles Darwin. Many of the great Greek philosophers were atheists, to start. Buddha was, arguably, an atheist, and Buddhism is theologically agnostic to this day. Socrates may not have thought himself an atheist, but one of his major “crimes” was disbelief in the literal Greek gods. In truth, I would bet that the second human ever to hear of anthropomorphic, supernatural beings said, “You’re full of shit, Asshole”. (Those may, however, have been his last words.) There’s nothing to be ashamed of in disbelief. Many of the high priests (MacLeod Sociopaths) are, themselves, non-believers!

I’m a corporate atheist and a humanist. My stance is radical. To most people, these gods (and not the humans doing all the damn work) are the engines of progress and innovation. People who are not baptized and blessed by them (employment, promotion, good references) are judged to be filthy, and “natural” humans deserve shame (original sin). If you don’t have their titles and accolades, your reputation is shit and you are disenfranchised from the economy. This enables them to act as extortionists, just as religious authorities could extract tribute not because those supernatural beings existed (they never did) but because they possessed the political and social ability to banish and to justify violence.

I’m sorry, but I don’t fucking agree with any of that.

Look-ahead: a likely explanation for female disinterest in VC-funded startups.

There’s been quite a bit of cyber-ink flowing on the question of why so few women are in the software industry, especially at the top, and especially in VC-funded startups. Paul Graham’s comments on the matter, taken out of context by The Information, made him a lightning rod. There’s a lot of anger and passion around this topic, and I’m not going to do all of it justice. Why are there almost no female venture capitalists, so few women being funded, and so few women rising in technology companies? It’s almost certainly not a lack of ability. Philip Greenspun argues that women avoid academia because it’s a crappy career. He makes a lot of strong points, and that essay is very much worth reading, even if I think a major factor (discussed here) is underexplored.

Why wouldn’t this fact (of academia being a crap career) also make men avoid it? If it’s shitty, isn’t it going to be avoided by everyone? Often cited is a gendered proclivity toward risk. People who take unhealthy and outlandish risks (such as by becoming drug dealers) tend to be men. So do overconfident people who assume they’ll end up on top of a vicious winner-take-all game. The outliers on both ends of society tend to be male. As a career with subjective upsides (prestige in addition to a middle-class salary) and severe, objective downsides, it appeals to a certain type of quixotic, clueless male. Yet making bad decisions is hardly a trait of one gender. Also, we don’t see 1.5 or 2 times as many high-powered (IQ 140+) men making bad career decisions. We probably see 10 times as many doing so; Silicon Valley is full of quixotic young men wasting their talents to make venture capitalists rich, and almost no women, and I don’t think that difference can be explained by biology alone.

I’m going to argue that a major component of this is not a biological trait of men or women, but an emergent property from the tendency, in heterosexual relationships, for the men to be older. I call this the “Look-Ahead Effect”. Heterosexual women, through the men they date, see doctors buying houses at 30 and software engineers living paycheck-to-paycheck at the same age. Women face a number of disadvantages in the career game, but they have access to a kind of high-quality information that prevents them from making bad career decisions. Men, on the other hand, tend to date younger women covering territory they’ve already seen.

When I was in a PhD program (for one year) I noticed that (a) women dropped out at higher rates than men, and (b) dropping out (for men and women) had no visible correlation with ability. One more interesting fact pertained to the women who stayed in graduate school: they tended either to date (and marry) younger men, or same-age men within the department. Academic graduate school is special in this analysis. When women don’t have as much access to later-in-age data (because they’re in college, and not meeting many men older than 22) a larger number of them choose the first career step: a PhD program. But the first year of graduate school opens their dating pool up again to include men 3-5 years older than them (through graduate school and increasing contact with “the real world”). Once women start seeing first-hand what the academic career does to the men they date– how it destroys the confidence even of the highly intelligent ones who are supposed to find a natural home there– most of them get the hell out.

Men figure this stuff out, but a lot later, and usually at a time when they’ve lost a lot of choices due to age. The most prestigious full-time graduate programs won’t accept someone near or past 30, and the rest don’t do enough for one’s career to offset the opportunity cost. Women, on the other hand, get to see (through the guys they date) a longitudinal survey of the career landscape when they can still make changes.

I think it’s obvious how this applies to all the goofy, VC-funded startups in the Valley. Having a five-year look-ahead, women tend to realize that it’s a losing game for most people who play, and avoid it like the plague. I can’t blame them in the least. If I’d had the benefit of a five-year look-ahead, I wouldn’t have spent the time I did in VC-istan startups either. I did most of that stuff because I had no foresight, no ability to look into the future and see that the promise was false and the road led nowhere. If I had retained any interest in VC-istan at that age (and, really, I don’t at this point) I would have become a VC while I was young enough that I still could. That’s the only job in VC-istan that makes sense.

The U.S. conservative movement is a failed eugenics project. Here’s why it could never have worked.

At the heart of the U.S. conservative movement, and most religious conservative movements, is a reproductive agenda. Old-style religious meddling in reproduction had a strong “make more of us” character to it– resulting in blanket policies designed to encourage reproduction across a society– but the later incarnations of right-wing authoritarianism, especially as they have mostly divorced themselves from religion, have been oriented more strongly toward goals judged to be eugenic– that is, to favor the reproduction of desirable individuals and genes. Instead of a broad-based “make more of us” tribalism, it becomes an attempt to control the selection process.

The term eugenics has an ugly reputation, much earned through history, but let me offer a neutral definition of the term. Eugenics (“good genes”) is the idea that we should consciously control the genetic makeup of the humans born into the world. It is not a science, since the definition of eu- is intensely subjective. As “eugenics” has been used throughout history to justify blatant racism and murder, the very concept has a negative reputation. That said, strong arguments can be made in favor of certain mild, elective forms of eugenics. For example, subsidized or free higher education is (although there are other intents behind it) a socially acceptable positive eugenic program: the removal of a dysgenic economic force (education costs, usually borne by parents) that, empirically speaking, massively reduces fertility among the most capable people while having no effect on the least capable.

The eugenic impulse is, in truth, fairly common and rather mundane. The moral mainstream seems to agree that eugenics (if not given that stigmatized name) is morally acceptable when participation is voluntary (i.e. no one is forced to reproduce, or not to do so) and positive (i.e. focused on encouraging desirable reproduction, rather than discouraging those deemed “unwanted”) but unacceptable when involuntary (coercive or prohibitive) and negative. The only socially accepted (and often legislated) case of negative and often prohibitive eugenics is the universal taboo against incest. That one has millennia of evolution behind it, and is also fair (i.e. it doesn’t single out people as unwanted, but prohibits intrafamilial couplings, known to produce unhealthy offspring, in general) so it’s somewhat of a special case.

Let’s talk about the specific eugenics of the American right wing. The obsessions over who has sex with whom, the inconsistency between hard-line, literal Christianity and the un-Christ-like rightist economics, and all of the myriad mean-spirited weirdnesses (such as U.S. private health insurance, a monster that even most conservatives loathe at this point) that make up the U.S. right-wing movement; all are tied to a certain eugenic agenda, even if the definition of “eu-” is left intentionally vague. In addition to lingering racism, the American right wing unifies two varieties (one secular, one religious) of the same idea: Social Darwinism and predestination-centric Calvinism. This amalgam I would call Social Calvinism. The problem with it is that it doesn’t make any sense. It fails on its own terms, and the religious color it allowed itself to gain has only deepened its self-contradiction, especially now that sexuality and reproduction have been largely separated by birth control.

In the West, religion has always held strong opinions on reproduction, because the dominant religious forces are those that were able to out-populate the others. “Be fruitful and multiply.” This “us versus them” dynamic had a certain positive (in the sense of “positive eugenics”; I don’t mean to call it “good”) but coercive flair to it. The religious society sought much more strongly to increase its numbers within the world than to differentially or absolutely discourage reproduction by individuals judged as undesirable within its numbers. That said, it still had some ugly manifestations. One prominent one is the traditional Abrahamic religions’ intolerance of homosexuality and non-reproductive sex in general. In modern times, homophobia is pure ignorant bigotry, but its original (if subconscious) intention was to make a religious society populate quickly, which put it at odds with nonreproductive sexuality of all forms.

Predestination (for which Calvinism is known) is a concept that emerged much later, when people did something very dangerous to literalist religion: they thought about it. If you take religious literalism– born in the illogical chaos of antiquity– and bring it to its logical conclusions, funny things happen. An all-knowing and all-powerful God would, one can reason, have full knowledge and authority over every soul’s final destiny (heaven or hell). This meant that some people were pre-selected to be spiritual winners (the Elect) and the rest were refuse, born only to live through about seven decades of sin, followed by an eternity of unimaginable torture.

Perhaps surprisingly, predestination seemed to have more motivational capacity than the older, behavior-driven morality of Catholicism. Why would this be? People are loath to believe in something as horrible as eternal damnation for themselves (even if some enjoy the thought for others) and so they will assume themselves to be Elect. But since they’re never quite sure, bad behavior will unsettle them with a creepy cognitive dissonance that is far more effective than ratiocination about punishments and rewards. The behavior-driven framework of the Catholic Church (donations in the form of indulgences often came with specific numbers of years by which time in purgatory was reduced) allows that a bad action can be cancelled out with future good actions, making the afterlife merely an extension of the “if I do this, then I get that” hedonic calculus. Calvinism introduced a fear of shame. Bad actions might be a sign of being one of those incorrigibly bad, damned people.

Calvinist predestination was not a successful meme (and even many of those who identify themselves in modern times as Calvinists have largely rejected it). “Our God is a sadistic asshole; he tortures people eternally for being born the wrong way” is not a selling point for any religion. That said, the idea of natural (as opposed to spiritual) predestination, as well as the evolution from guilt-based (Catholic) to shame-based (Calvinist) Christian morality, have lived on in American society.

Fundamental to the morality of capitalism is that some actors make better uses of resources than others (which is not controversial) and deserve to have more (likewise, not controversial). Applied to humans, this is generally if uneasily accepted; applied to organizations, it’s an obvious truth (no one wants to see the persistence of inefficient, pointless institutions). Calvinism argued that one’s pre-determined status (as Elect or damned) could be ascertained from one’s actions; conservative capitalism argues that an actor’s (largely innate and naturally pre-determined) value can be ascertained by its success on the market.

Social Darwinism (which Charles Darwin vehemently rejected) gave a fully secular and scientific-sounding basis for these threads of thought, which were losing religious steam by the end of the 19th century. The idea that market mechanics and “creative destruction” ought to apply to institutions, patterns of behavior, and especially business organizations is controversial to almost no one. Incapable and obsolete organizations, whose upkeep costs have exceeded their social value, should die in order to free up room for newer ones. Where there is immense controversy is what should happen to people when they fail, economically. Should they starve to death in the streets? Should they be fed and clothed, but denied health care, as in the U.S.? Or should they be permitted a lower-middle-class existence by a welfare state, allowing them to recover and perhaps have another shot at economic success? The Social Darwinist seeks not to kill failed individuals per se, but to minimize their effect on society. It might be better to feed them than have them rebel, but allowing their medical treatment (on the public dime) is a bridge too far (if they’re sick, they can’t take up arms). It’s not about sadism per se, but effect minimization: to end their cultural and economic (and possibly physical) reproduction. It is a cold and fundamentally statist worldview. Where it dovetails with predestination is in the idea that certain innately undesirable people, damned early on if not from birth, deserve to be met with full effect minimization (e.g. long prison sentences since there is no hope of rehabilitation; persistent poverty because any resources given to them, they will waste) because any effect they have on the world will be negative. Whether they are killed, imprisoned, enslaved, or merely marginalized generally comes down to what is most convenient– and, therefore, effect-minimizing– and that is an artifact of what a society considers socially acceptable.

If we understand Calvinist predestination, and Social Darwinism as well, we can start to see a eugenic plan forming. Throughout almost all of our evolutionary history, prosperity and fecundity were correlated. Animals that won and controlled resources passed along their genes; those that couldn’t do so, died out. Social Darwinism, at the heart of the American conservative movement, believes that this process should continue in human society. More specifically, it holds to a few core tenets. First is that individual success in the market is a sign of innate personal merit. Second is that such merit is, at least partly, genetic and predetermined. Few would hold this correlation to be absolute, but the Social Darwinist considers it strong enough to act on. Third is that prosperity and fertility will, as they have over the billion years before modern civilization, necessarily correlate. The aspects of Social Darwinist policy that seem mean-spirited are justified by this third tenet: the main threat that a welfare state poses is that these poor (and, according to this theory, undesirable) people will take that money and breed. South Carolina’s Republican Lieutenant Governor, Andre Bauer, made this attitude explicit:

My grandmother was not a highly educated woman, but she told me as a small child to quit feeding stray animals. You know why? Because they breed. You’re facilitating the problem if you give an animal or a person ample food supply. They will reproduce, especially ones that don’t think too much further than that. And so what you’ve got to do is you’ve got to curtail that type of behavior. They don’t know any better.

The hydra of the American right wing has many heads. It’s got the religious Bible-thumping ones, the overtly racist ones, and the pseudoscientific and generally atheistic ones now coming out of Silicon Valley’s neckbeard right-libertarianism and the worse half of the “men’s rights” movement. What unites them is a commitment to the idea that some people are innately inferior and should be punished by society, with that punishment ranging from the outright sadistic to the much more common effect-minimizing (marginalization) levels.

How it falls down

Social Calvinism is a repugnant ideology. Calvinistic predestination is an idea so bad that even conservative religion, for the most part, discarded it. The same scientists who discovered Darwinian evolution (as a truth of what is in nature, not of what should be in the human world) rejected Social Darwinism outright. It has also made a mockery of itself. It fails on its own terms. The most politically visible, mean-spirited, but also criminally inefficient manifestation of this psychotic ideology is in our health insurance system. Upper-middle-class, highly educated people suffer– just as much as the poor do– from crappy health coverage. If the prescriptive intent behind a mean-spirited health policy is Social Calvinist in nature, the greed and inefficiency and mind-blowing stupidity of it affect the “undesirable” and “desirable” alike (unless one believes that only the 0.005% of the world population who can afford to self-insure are “desirable”). The healthcare fiasco is showing that a society as firmly committed to Social Calvinism as the U.S.– so committed to it that even Obama couldn’t make public-option (much less single-payer) healthcare a reality– can’t even succeed on its own terms. The economic malaise of the 2000s “lost decade” and the various morale crises erupting in the nation (Tea Party, #Occupy) only support the idea that the American social model fails both on libertarian and humanitarian terms.

Why do I argue that Social Calvinism could never work, in a civilized society? To put it plainly, it misunderstands evolution and, more to the point, reproduction (both biological and cultural). Nature’s correlation between prosperity and fecundity ended in the human world a long time ago, and economic stresses have undesirable side effects (which I’ll cover) on how people reproduce.

Let’s talk about biology; most of the ideas here also apply (and more strongly, due to the faster rate of memetic proliferation) to cultural reproduction. After the horrors justified in the name of “eugenics” in the mid-20th century, no civilized society is going to start prohibiting reproduction. It’s not quite a “universal right”, but depriving people of the biological equipment necessary to reproduce is considered inhumane, and murdering children after the fact is (quite rightly) completely unacceptable. So people can reproduce, effectively, as much as they want. With birth control in the mix, most people can also reproduce as little as they want. So they have nearly total control over how much they reproduce, whether they are poor or rich. The Social Calvinist believes that the “undesirables” will react to socioeconomic punishment by curtailing reproduction. But do we see that happening? No, not really.

I mentioned Social Calvinism’s three core tenets above: (1) that socioeconomic prosperity correlates to personal merit, (2) that merit is at least significantly genetic in nature, and (3) that people will respond to prosperity by increasing reproduction (as if children were a “normal” consumer good) and to punishment by decreasing it. The first of these is highly debatable: desirable traits like intelligence, creativity and empathy may lead to personal success, but so does a lack of moral restraint. The people at the very top of society seem to be, for the most part, objectively undesirable– at least, in terms of their behavior (whether those negative traits are biological is less clear). The second is perhaps unpleasant as a fact (no humanitarian likes the idea that what makes a “good” or “bad” person is partially genetic) but almost certainly true. The third seems to fail us. Or, let me take a more nuanced view of it. Do people respond to economic impulses by controlling reproduction? Of course they do, but not in the way that one might think.

First, let’s talk about economic stress. Stress can be good (“eustress”) or bad (“distress”) but in large doses, even the good kind can be focus-narrowing, if not hypomanic or even outright toxic. Rather than focusing on objective hardship or plenty, I want to examine the subjective sense of unhappiness with one’s socioeconomic position, which will determine how much stress a person experiences and which kind it is. Likewise, economic inequality (by providing incentive for productive activity) can be for the social good– it’s clearly a motivator– but it is a source of (without directional judgment to the word) stress. The more socioeconomic inequality there is, the more of this stress society will generate. Proponents of high levels of economic inequality will argue that it serves eustress to the desirable people and institutions and distress to the less effective ones. Yet, if we focus on the subjective matter of whether an individual feels happy or distressed, I’d expect this to be untrue. People, in my observation, tend to feel rich or poor not based on where they are, economically, but by how they measure up to the expectations derived from their natural ability. A person with a 140 IQ who ends up as a subordinate, making a merely average-plus living doing uninteresting work, is judged (and will judge himself) as a failure. Even if that person has the gross resources necessary to reproduce (the baseline level required is quite low) he will be disinclined to do so, believing his economic situation to be poor and the prospects for any progeny to be dismal. On the other hand, a person with a 100 IQ who ends up with the average-plus income (as a leader, not a subordinate, but with the same income and wealth as the person with the 140 IQ above) will face life with confidence and, if having children is naturally something he wants, be inclined to start a family early, and possibly to have a large one.

What am I really saying here? I think that, while people might believe that meritocracy is a desirable social ideal, most people respond emotionally not to the component of their economic outcome derived from natural (possibly genetic) merit or hard work, but to the random noise term. People have a hard time believing that randomness is just that (hence, the amount of money spent on lottery tickets) and interpret this noise term to represent how much “society” likes them. In large part, we’re biologically programmed to be this way; most of us get more of a warm feeling from windfalls coming from people liking us than from those derived from natural merit or hard work. However, modern society is so complex that this variable can be regarded as pure noise. Why? Because we, as humans, devise social strategies to make us liked by an unknown stream of people and contexts we meet in the future, but whether the people and contexts we actually encounter (“serendipity”) match those strategies is just as random as the Brownian motion of the stock market. Then, the subjective sense of socioeconomic eustress or distress that drives the desire to reproduce comes not from personal merit (genetic or otherwise) but from something so random that it will have a correlation of 0.0 with pretty much anything.
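A toy simulation makes the claim concrete. Every distribution and scale below is invented purely for illustration: model each person’s economic outcome as merit plus independent noise, and let subjective satisfaction track the noise term (outcome relative to what one’s own ability would predict). Satisfaction then ends up essentially uncorrelated with merit, even though it correlates with outcome.

```python
import random
import statistics

# Toy model (all numbers invented): outcome = merit + independent noise.
random.seed(0)
N = 10_000
merit = [random.gauss(0, 1) for _ in range(N)]
noise = [random.gauss(0, 1) for _ in range(N)]
outcome = [m + e for m, e in zip(merit, noise)]

# Subjective satisfaction: outcome relative to the expectation set by one's
# own ability -- i.e., outcome minus merit, which is exactly the noise term.
satisfaction = [o - m for o, m in zip(outcome, merit)]

def corr(xs, ys):
    """Sample Pearson correlation."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    sx, sy = statistics.stdev(xs), statistics.stdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((len(xs) - 1) * sx * sy)

print(f"corr(satisfaction, merit)   = {corr(satisfaction, merit):+.3f}")   # ≈ 0
print(f"corr(satisfaction, outcome) = {corr(satisfaction, outcome):+.3f}") # ≈ +0.7
```

If family-planning decisions are driven by the satisfaction variable, they inherit its near-zero correlation with merit; under these (invented) assumptions, that is the whole argument in miniature.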

This kills any hope that socioeconomic rewards and punishments might have a eugenic effect, because the part that people respond to on an emotional level (which drives decisions of family planning) is the component uncorrelated to the desired natural traits. There is a way to change that, but it’s barbaric. If society accepted widespread death among the poor– and, in particular, among poor children (many of whom have shown no lack of individual merit; i.e. complete innocents)– then it could recreate a pre-civilized and truly Darwinian state in which absolute prosperity (rather than relative/subjective satisfaction) has a major effect on genetic proliferation.

Now, I’ll go further. I think the evidence is strong that socioeconomic inequality has a second-order but potent dysgenic effect. Even when controlling for socioeconomic status, ethnicity, geography and all the rest, IQ scores seem to be negatively correlated with fertility. Less educated and intelligent people are reproducing more, while the people that humanity should want in its future seem to be holding off, having fewer children and waiting longer (typically, into their late 20s or early 30s) to have them. Why? I have a strong suspicion as to the reason.

Let’s be blunt about it. There are a lot of willfully ignorant, uneducated, and crass people out there, and I can’t imagine them saying, “I’m not going to have a child until I have a steady job with health benefits”. This isn’t about IQ or physical health necessarily; just about thoughtfulness and the ability to show empathy for a person who does not exist yet. Whether rich or poor, desirable people tend to be more thoughtful about their effects on other people than undesirable ones. The effect of socioeconomic stress and volatility will be to reduce the reproductive impulse among the thoughtful, future-oriented sorts of people that we want to have reproducing. It also seems to me that such stresses increase reproduction among the present-oriented, thoughtless sorts of people that we don’t as much want to be highly represented in the future.

I realize that speaking so boldly about eugenics (or dysgenic threats, as I have) is a dangerous (and often socially unacceptable) thing. To make it clear: yes, I worry about dysgenic risk. Now, some of the more brazen (and, in some cases, deeply racist) eugenicists freak out about higher rates of fertility in developing (esp. non-white) countries, and I really don’t. Do I care if the people of the future look like me? Absolutely not. But it would be a shame if, 100,000 years from now, they were incapable of thinking like me. I don’t consider it likely that humanity will fall into something like Idiocracy; but I certainly think it is possible. (A more credible threat is that, over a few hundred years, societies with high economic inequality drift, genetically, in an undesirable direction, producing a change that is subtle but enough to have macroscopic effects.)

Why, at a fundamental level, does a harsher and more inequitable (and more stressful) society increase dysgenic risk? Here’s my best explanation. Evolutionary ecology discusses two reproductive pressures in species, r- and K-selection, which correspond to optimizing for quantity versus quality of offspring. The r-strategist has lots of offspring, gives minimal parental investment, and few will survive. An example is a frog laying hundreds of eggs. The K-strategist invests heavily in a smaller number of high-quality offspring, each with a much higher individual shot at surviving. Whales and elephants are K-strategists, with long gestation periods and few offspring, but a lot of care given to them. Neither is “better” than the other, and each succeeds in different circumstances. The r-strategist tends to repopulate quickest after a catastrophe, while the K-strategist succeeds differentially at saturation.

It is, in fact, inaccurate to characterize highly evolved, complex life forms such as mammals as strong r- or K-selectors. As humans, we’re clearly both. We have an r-selective and a K-selective sexual drive, and one could argue that much of the human story is about the arms race between the two.

The r-selective sex drive wants promiscuity, has a strong present-orientation, and exhibits a total lack of moral restraint– it will kill, rape, or cheat to get its goo out there. The K-selective sex drive supports monogamy, is future-oriented, and values a stable and just society. It wants laws and cultivation (culture) and progress. Traditional Abrahamic religions have associated the r-drive with “evil” and sin. I wouldn’t go that far. In animals, it is clearly inappropriate to put any moral weight on r- or K-selection, and it’s not clear that we should do that to natural urges that all people have (such as calling the r-selective component of our genetic makeup “original sin”). How people act on those urges is another matter. The tensions between the r- and K-drives have produced much art and philosophy, but civilization demands that people mostly follow their K-drives. While age and gender do not correlate as strongly with the r/K divide as stereotypes would insist (there are r-driven older women, and K-driven young men) it is nonetheless evident that most of society’s bad actors are those with the strongest r-drives: uninhibited young men, typically driven by lust, arrogance and greed. In fact, we have a clinical term for people who behave in a way that is r-optimal (or, at least, was so in the state of nature) but not socially acceptable: psychopaths. From an r-selective standpoint, psychopathy conferred an evolutionary advantage, and that’s why it’s in our genome.

Both sexual drives (r- and K-) exist in all humans, but it wasn’t until the K-drive triumphed that civilization could properly begin. In pre-monogamous societies, conflicts between men over status (because, when “alpha” men have 20 mates and low-status men have none, the stakes are much greater) were so common that between a quarter and a half of men died in positional violence with other men. Religions that mandated monogamy, or at least restrained polygamy as Islam did, were able to build lasting civilizations, while societies that accepted pre-monogamous distributions of sexual access were unable to get past the chaos of constant positional violence.

There are many who argue that the contemporary acceptance of casual sex constitutes a return to pre-monogamous behaviors. I don’t care to get far into this one, if only because I find the hand-wringing about the topic (on both sides) to be rather pointless. Do we see dysgenic patterns in the most visible casual sex markets (such as the one that occurs in typical American colleges)? Absolutely, we do. Even if we reject the idea that higher-quality people are less prone to r-driven casual sex, the way people (of both sexes) select partners in that game is visibly dysgenic. But to the biological future (culture is another matter) of the human species, that stuff is pretty harmless– thanks to birth control. This is where the religious conservative movement shoots itself in the foot; it argues that the advent of birth control created uncivil sexual behavior. In truth, bad sexual behavior is as old as dirt, has always been a part of the human world and probably always will be; the best thing for humanity is for it to be rendered non-reproductive, mitigating the dysgenic agents that brought psychopathy into our genome. (On the other hand, if human sexual behavior devolved to the state of high school or college casual sex and remained reproductive, the species would devolve into H. pickupartisticus and be kaputt within 500 years. I would short-sell the human species and buy sentient-octopus futures at that point.)

If humans have two sexual drives, it stands to reason that those drives would react differently to various circumstances. This brings to mind the relationship of each to socioeconomic stress. The r-drive is enhanced by socioeconomic stress– both eustress and distress. Eustress-driven r-sexuality is seen in the immensely powerful businessman or politician who frequents prostitutes, not because he is interested in having well-adjusted children (or even in having children at all) but to see if he can get away with it; distress-driven r-sexuality has more of an escapist, “sex as drug”, flavor to it. In an evolutionary context, it makes sense that the r-drive should be activated by stress, since the r-drive is what enables a species to repopulate rapidly after an ecological catastrophe. On the other hand, the K-drive is weakened by socioeconomic stress and volatility. It doesn’t want to bring children into a future that might be miserable or dangerously unpredictable. The K-drive’s reaction to socioeconomic eustress is busyness (“I can’t have kids right now; my career’s taking off”) and its reaction to distress is to reduce libido as part of a symptomatic profile very similar to depression.

The result of all of this is that, should society fall into a damaged state where socioeconomic inequality and stress are rampant, the r-drive will be more successful at pushing its way to reproduction, while the K-drive is muted. The result is that the people who will come into the future will disproportionately be the offspring of r-driven parents and couplings. Even if we reject the idea that undesirable people have stronger r-drives relative to their K-drives (although I believe that to be true) the enhanced power of the r-strategic sexual drive will influence partner selection and produce worse couplings. Over time, this presents a serious risk to the genetic health of the society.

Just as Mike Judge’s Idiocracy is more true of culture than of biology, we see the overgrown r-drive in the U.S.’s hypersexualized (but deeply unsexy) popular culture, and the degradation is happening much faster to the culture than it possibly could to our gene pool, given the relatively slow rate of biological evolution. Some wouldn’t see any correlation whatsoever between the return of the Gilded Age post-1980 and Miley Cyrus’s “twerking”, but I think that there’s a direct connection.

Conclusion

The Social Calvinism of the American right wing believes that severe socioeconomic inequality is necessary to flush the “undesirables” to the bottom, deprive them of resources, and prevent them from reproducing. Inherent to this strategy is the presumption (and a false one) that people are future-oriented and directed by the K-selective sexual drive, which is reduced by socioeconomic adversity. In reality, the more primitive (and more harmful, if it results in reproduction) r-selective sexual drive is enhanced by socioeconomic stresses.

In reality, socioeconomic volatility reduces the K-selective drive of most people, rich and poor, while enhancing the r-selective one. The reason is that a person’s subjective sense of satisfaction with socioeconomic status is based not on whether he or she is naturally “desirable” to society, but on his or her performance relative to natural ability and industry, which is largely a noise variable. Even if we do not accept that desirable people are more likely to have strong K-drives and weak r-drives, it is empirically true (seen in millennia of human sexual behavior) that people operating under the K-drive choose better partners than those operating under the r-drive.

The American conservative movement argues, fundamentally, that a mean-spirited society is the only way to prevent dysgenic risk. It argues, for example, that a welfare state will encourage the reproductive proliferation of undesirable people. The reality is otherwise. Thoughtful people, who look at the horrors of American healthcare and the rapid escalation of education costs, curtail reproduction even if they are objectively “genetically desirable” and their children are likely to perform well, in absolute terms. Thoughtless people, pushed by powerful r-selective sex drives, will not be reproductively discouraged, and might (in fact) be encouraged, by the stresses and volatility (but, also, by undeserved rewards) of the harsher society. Therefore, American Social Calvinism actually aggravates the very dysgenic risk that it exists to address.

Exploding college tuitions might be a terrifying sign

It’s well-known that college tuitions are rising at obscene rates, with the inflation-adjusted cost level having grown over 200 percent since the 1970s. Then there is the phenomenon of “dark tuition”, referring to the additional costs that parents often incur in giving their kids a reasonable shot of getting into the best schools. Because of the regional-balancing effect (read: non-rich students from highly-represented areas have almost no shot, because they compete in the same regional pool as billionaires), the insanity begins as early as preschool in places like Manhattan. Including dark tuition, some families spend nearly a million dollars on college admissions and tuition for their spawn. To write this off as a wasteful expenditure is unreasonable; it’s true that these decisions are made without data, but the connections made early in life clearly can be worth a large sum. Or, alternatively, the cost of not being connected can be quite high.

Many also note that a college degree means less than it used to, and that’s clearly true: educational credentials bring less on the job market than they once did. Yet rising tuitions are a market signal indicating that, at least for elite colleges, the value of something has gone up. Some people have complained that MBA school has become the new college, due to the latter’s devaluation. I’d argue that the data suggest the reverse: college is turning into MBA school. Quality can be found at the top 200 or so institutions, but increasingly, the real big-ticket value motivating the purchase is certainly not the education, and not really the brand name– 5 years out of school, no one cares where you attended, and the half of elite-school attendees who fail to make significant connections are likely to end up in mediocrity and failure like everyone else– but the network itself.

So have elite social connections become more valuable? How could it be so, in an era during which technology is supposedly liberating us from inefficiencies like good-old-boy networks? Aren’t those dinosaurs on the way to extinction? It seems not. This should be an upsetting conclusion, not so much for what it means (connections matter) but for what it suggests about the trend. Realists accept that, in the real world, connections and the attendant manipulations and (often, for outsiders needing to get in) extortions matter. What we all hope is that they will matter less as time goes on, because for the opposite to be the case suggests that progress is moving backward. Is it?

The news, delivered.

“You know what the trouble is, Brucey? We used to make shit in this country, build shit. Now we just put our hand in the next guy’s pocket.” — Frank Sobotka, The Wire.

Leftists like me would typically argue that American decline began in the 1980s. The prosperity of the 1990s falsely validated the limp-handed centrism of the “New Democrats”, and the 2000s was the decade of free fall. On the other hand, despite the mean-spirited political tenor of these decades, the U.S. continued to innovate. As bad as things were, from a macroscopic and cultural perspective, the engines of progress continued running. Silicon Valley didn’t stop just because Reagan and the Bushes held power. Google, which became a household name around 2002, didn’t go out of business just because of a toxic political environment. I’m not saying that politics doesn’t matter– obviously, it does– but even in the darkest hours (Bush between 9/11 and Katrina) there was not a visible, credible threat that American innovation would, in the short term, just die.

I also don’t think that we’re in immediate danger of an out-of-context innovation shutdown; it’s not something that will happen in the next two years. But I do think that we’re closer than we realize.

American innovation exists for a surprisingly simple reason: forgiving bankruptcy laws regarding good-faith business failure. If your company folds, it doesn’t ruin your life. Unfortunately, that protection has been eroded. Bank loans for new businesses require personal liability, circumventing this protection outright. The alternative is equity financing, but Silicon Valley’s marquee venture capitalists have set up a collusive, feudal reputation economy in which an individual investor can be a single point of failure for an entrepreneur’s entire career. The single trait of the American legal system that enabled it to be a powerhouse for new business generation– forgiving treatment of good-faith business failure– has been removed. Powerful people saw it as inconvenient, so they wrote it off the ticket.

Credible long-term threats to innovation are present. Makers struggle more to get their ideas funded, or to get anywhere near the people in control of the arcane permission system that still runs the economy. The socially-connected takers who own that permission system can demand more as a price of audience. We’re seeing that. The people who really make the big money (defined as enough to comfortably buy a house) in Silicon Valley, these days, aren’t the makers implementing new, crazy ideas, but the peddlers of influence: those using their business-school connections to get unwarranted advisory and executive positions, stitching together enough equity slices to have a viable portfolio, or those who do the former even better and become real VCs. Silicon Valley’s Era of Makers has come and gone; now, MBA culture has swept in, 22-year-olds are getting funded based on who their parents are, and it’s clear that Taker Culture has won… at least in the “we’ll fund your competitors if you don’t take this sheet” VC-funded world.

So… what does this have to do with college tuitions rising? Possibly nothing. There are a number of plausible causes for the tuition bubble, many having little or nothing to do with Taker Culture and the (risk of) death of innovation. Or, it might tell us a lot.

What do we actually know?

We know that college tuitions are skyrocketing. Professors’ salaries aren’t the cause, because the academic job market has been tanking over the past 30 years, with low-paid adjuncts and graduate students replacing professors in much of undergraduate education. This suggests that the quality hasn’t improved, and I’d agree with that assessment. Administrative costs and real estate expenditures have gone up, but that seems to be more a case of colleges wanting to do something with a massive pool of available money than a prior cause of the escalating costs.

Housing prices in the most vital areas have also increased, even though the economy (including in those areas) has weakened considerably. I suspect that two of the three aspects of the Satanic Trinity (housing, healthcare, and tuition costs)– housing and tuition– share a common thread: as the world becomes riskier and poorer, people are buying connections. That’s what living in New York instead of New Orleans in your 20s is about. It’s also what going to an Ivy instead of an equally adequate state university is about. Of course, the fact that connections matter enough to be bought isn’t new. People have been buying connections as long as there has been money. What is obvious is that people are paying more for connections than ever before, and that inherited social connectedness has probably reached a level of importance (even in the formerly meritocratic VC-funded startup scene) incompatible with democracy, innovation, or a forward-thinking society. Oligarchy has arrived.

What happens in an oligarchy is that the purchase of connections (via financial transfer, or ethical compromise) ceases to be an irritating sideshow of the economy– a distraction from actually making stuff– and, instead, becomes the main game.

Here’s an interesting philosophical puzzle. Does this pattern actually mean that connections have become (a) more valuable, or (b) less so? Paradoxically, it means both. Social connections matter more, insofar as a much larger pool of money is being put into chasing them, and this strongly indicates that hard work, creativity, and talent no longer matter as much. To navigate society’s dehumanizing and arcane permissions systems, “who you know” is becoming more crucial. The exchange rate between social property on one side, and talent and hard work on the other, now favors the former. However, connections are also less valuable, insofar as they deliver less, requiring people to procure more social capital in order to make their way in the world. The price of something increasing does not necessarily mean that it’s worth more to the world; it might be that a reduction in its delivered value has driven up the quantity needed, and thus its price. This evolution is not the functioning of a healthy economy; it’s a sickness that benefits only a few. Connections matter more, made very evident by the fact that people are paying more for the same quantity, but deliver less. That means that the world, as a whole, is just getting poorer.

This is clearly happening. People are paying more for social connections, and the poor health of the economy indicates that even more social access is needed to buy as much economic value (security, opportunity, etc.) as yesteryear. Adam Smith decried Britain as a “nation of shopkeepers”. The United States, ever since the Organization Man age, has been in danger of becoming a nation of social climbers. However, there’s always been something else to its economic character; at least, enough impurity amid the bland mass to give color should the damn thing crystallize. But is that true now? In the 1970s, that “impurity” was Silicon Valley. There was cheap land that the old elite didn’t want, but that drew (for a variety of historical reasons) a lot of intelligent and capable people. Governments and businesses used this opportunity to build up one of the most impressive R&D cultures the world has seen. Maker Culture came first in Silicon Valley, generated a lot of value, and then the money started rolling in. Unfortunately, that also brought in douchebags, whose numbers and power have only increased. It was probably inevitable that Taker Culture (multiple liquidation preferences, note-sharing among VCs, MBA-culture startups with reckless and juvenile management, Stanford Welfare and the importance of social connections) would set in.

The New California?

We know that the California-centered Maker Culture is gone. There are still a hell of a lot of great people in that region– it might be the most talent-rich place on earth– but, with a few outstanding exceptions, they’re no longer the socially important ones. I don’t think it’s worth dissecting the death of the thing, or whining about the behaviors of venture capitalists, because I think that ecosystem is too far gone to repair itself. In the 1990s, venture capitalists rightly judged that most of the powerful, large corporations were too politically dysfunctional to innovate. Now, that same charge is even more true of the VC-funded ecosystem, which effectively functions (due to the illegal collusion of VCs, who increasingly view themselves as a single executive suite) as a single corporation, albeit with a postmodern structure.

What places now are like what California was when the Maker Culture emerged? Is it another city in the U.S., like Austin, perhaps? Or is it in another country? Must it even be a physical place at all? I don’t know the answers to these questions.

Or, as the escalating cost of college tuition– and the premium on social connections suggested by that– seems to indicate, is it just gone for good? Has an effete aristocracy found a way to drive meritocracy not just to a fringe (like California five decades ago) but out of existence entirely? If so, then expect innovation to die out, and an era of stagnation to set in.

Three capitalisms: yeoman, corporate, and supercapitalism

I’m going to put forward the idea, here, that what we call capitalism in the United States is actually an awkward, loveless ménage à trois among three economic systems, each of which considers itself to be the true capitalism, but all three of which are quite different. Yeoman (or lifestyle) capitalism is the most principled variety of the three, focused on building businesses to improve one’s life or community. The yeoman capitalist plays by the rules and lives or dies by her success on the market. Second, there’s corporate capitalism, whose internal behavior smells oddly of a command economy, and which often seeks to control the market. Corporate capitalism is about holding position and keeping with the expectations of office– not markets per se. Finally, there is supercapitalism, whose extra-economic fixations render it more like feudalism than any other system; it exerts even more control than the corporate kind, but at a deeper and more subtle level.

1. Yeoman capitalism (“the American Dream”)

The most socially acceptable of the American capitalisms is that of the small business. It’s not trying to make a billion dollars per year, it doesn’t have full-time, entitled nonproducers called “executives”, and it often serves the community it grew up in. It’s sometimes called a “lifestyle business”; it generates income (and provides autonomy) for the proprietor so as to improve her quality of life. When typical Americans imagine themselves owning a business, and aspiring to the freedom that can confer, yeoman capitalism is typically what they have in mind: something that keeps them active and generates income, while conferring a bit of independence and control over one’s destiny.

Yeoman capitalism is often used as a front for the other two capitalisms, because it’s a lot more socially respected. Gus Fring, in Breaking Bad, is a supercapitalist who poses as a yeoman capitalist, making him beloved in Albuquerque.

The problem with yeoman capitalism is that, not only is it highly risky in terms of year-by-year yield, but there’s often no career in it. Small business owners do a lot more for society than executives, but get far less in terms of security. An owner-operator of a business that goes bankrupt will not easily end up with another business to run, while fired executives get new jobs (often, promotions) in a matter of weeks. Modern-day yeoman capitalism is as likely to take the form of a consulting or application (“app”) company as a standalone business, and may have more security; time will tell, on that one.

While yeoman capitalism provides an attractive narrative (the American Dream, in the United States) it does not provide job security for anyone (and that’s not its goal). It also has a high barrier to entry: you need capital or connections to play. Even though it is a more likely path to wealth than the other two capitalisms are for most people, it often leads to horrible failure, because it comes with absolutely no safety net. It’s the blue-collar capitalism of working hard and hoping that the market rewards it. Sometimes, the market doesn’t. Most people can’t stomach the income volatility of this, or even amass the capital to get started.

2. Corporate capitalism (“in Soviet Russia, money spends you”)

Corporate capitalism provides much more security, but it has an institutional, command-economy flavor. People don’t think like owners, because they’re not. Private-sector social climbers rule the day. It’s uninspiring. It feels like the worst of both worlds between capitalism and communism, with much of the volatility, insecurity, and greed of the first but the mediocrity, duplicity, and disengagement associated with the second. Yet it has one thing that keeps it going and makes it the dominant capitalism of the three: a place for (almost) everyone. Most of those places are terrible, but they exist and they don’t change much. Corporate capitalism will give you the same job in California as you’d get in New York for your level of “track record” and “credibility” (meaning social status).

The attraction of corporate capitalism is that one has a generally good sense of where one stands. Yeoman capitalism is impersonal; market forces can fire you, even if you do everything right. Corporate capitalism gives each person a history and a personal reputation (resume) based on the quality of the companies where one has worked and the titles one has held. At least in theory, that smooths out the bad spells because, even though layoffs and reorganizations occur, the system will always be able to find an appropriate position for a person’s “level”, and people level up at a predictable rate.

Adverse selection is one problem with corporate capitalism. People choose corporate capitalism over the yeoman kind to mitigate career risks. People who want to off-load market risks might be neutral bets from a hiring perspective, but people who want to off-load their own performance risks (i.e. because they’re incompetent slackers) are bad hires. Corporate capitalism’s “place for everyone” makes it attractive to those sorts of people, who can trust that social lethargy, in addition to legal issues, around decisions that adversely affect one’s career (i.e. actually demoting or firing someone) will buy them enough time to earn a living doing very little. Consequently, it’s hard to operate in corporate capitalism without accruing some dead weight. Worse yet, it’s hard to get rid of the deadwood, because the useless people are often the best at playing politics and evading detection. Companies that set up “fire the bottom 10 percent each year” policies end up getting ruined by the Welch Effect: stack ranking’s most common casualties are not true underperformers, but junior members of macroscopically underperforming teams (who had the least to do with this underperformance).

Compounding this is the fact that corporations must counterweigh their extreme inequality of results (in pay, division of labor, and respect) with a half-hearted attempt at equality of opportunity (no playing of favorites). What this actually means is that the most talented can’t “grade skip” past the initial grunt work, but have to progress along the slow, pokey track built for the safety-seeking, disengaged losers. They don’t like this. They want the honors track, and don’t get it, because it doesn’t exist– grooming a high-potential future leader (as opposed to hiring one from the outside and then immediately reorg-ing so no one knows what just happened) is not worth pissing off the rest of the team. The sharp people leave for better opportunities. Finally, corporations tend over time toward authoritarianism because, as the ability to retain talent wanes, the remaining people that the company considers highly valuable are enticed with a zero-sum but very printable currency– control over others. All of this tends toward an authoritarian mediocrity that is the antithesis of what most people think capitalism should be.

Socialism and capitalism both have a Greenspun property wherein bad implementations of one generate shitty forms of the other. Under Soviet communism, criminal black markets (similar to the one existing for psychoactive drugs in the U.S.) existed for staid items like lightbulbs– a case of bad socialism creating a bad capitalism. Corporate capitalism has a similar story. Corporations are fundamentally statist institutions that operate like command economies internally. In fact, if one were to conceive of the multi-national corporation as the successor to the nation-state, one could see the corporation as an extremely corrupt socialist state. What is produced, how it is produced, and who answers to whom: all is determined centrally by an autocratic authority. Advancement has more to do with pleasing party officials than succeeding on a (highly controlled) market. Corporations do not run as free markets internally; but also, once they are powerful and established, they work to make society’s broader market less free, pulling the ladder up after using it.

3. Supercapitalism! (“You know what’s cool? Shitting all over a redwood forest for a wedding!”)

Supercapitalism is the least understood of the three capitalisms. Supercapitalists don’t have the earnestness of the yeoman capitalist; they view that as a chump’s game, because of its severe downside risks. They also don’t have the patience for corporate capitalism’s pokey track. Supercapitalists rarely invest themselves in one business or product line; having a full-time job is proletarian to them. Instead, they “advise” as many different firms as they can. They’re constantly buying and selling information and social capital.

Mad Men is, at heart, about the emergence of a supercapitalist class in professional advertising. Don Draper isn’t an entrepreneur, but he’s not a corporate social climber either. He’s a manipulator. The clients are the corporate capitalists, playing a less interesting game than what is, in the early 1960s, emerging on Madison Avenue– a chance to float between companies while cherry-picking their most interesting or lucrative marketing problems. The ambitious, smart Ivy Leaguers are all working for people like Don Draper, not trying to climb the Bethlehem Steel ladder. What’s attractive about advertising is that it confers the ability to work with several businesses without committing to one. Going in-house to a client (still at a much higher level than any ladder climber can get) is the consolation prize.

One interesting trait of supercapitalism is that it’s generally only found in one or two industries at a time. Madison Avenue isn’t the home of supercapitalism anymore; now, advertising is just the unglamorous corporate kind. Investment banking took the reins afterward, but is now losing that; now it’s VC-funded internet startups (many of which have depressingly little to do with true technology) where supercapitalist activity lives. Why is it this way? Because supercapitalism, although it considers itself the most modern and stylish capitalism, has a fatal flaw. It’s obsessed with prestige, and prestige is another name for reputation, and so it generates reputation economies (feudalism). It can’t stay in one place for too long, lest it undermine itself (by developing the negative reputation it deserves, and therefore failing on its own terms).

Supercapitalism also turns into the corporate kind because its winners (and there are very few of them) get out. First, they establish high positions where they participate in very little of the work (to avoid evaluation that might prove them just to have had initial luck). They become executives, then advisors, then influential investors, and then they move somewhere else– somewhere more exciting. That leaves the losers behind, and all they can come up with are authoritarian rank cultures designed to replicate former glory.

Why does supercapitalism generate a reputation economy? That fact is extremely counterintuitive. Supercapitalism draws in some of the most talented, energetic people; and it is often (because of its search for the stylish) at the cutting edge of the economy. So why would it create something so backward and feudal as a reputation economy, which intelligent people almost uniformly despise? The answer, I think, is that supercapitalism tends to demand world-class resources in both property (capital) and talent (labor). A regular capitalist is not nearly as selective, and will take an opportunity to turn a profit from property or talent, but the sexiest and most stylish capers require top-tier backing in both. If you’re obsessed with making a name for yourself in the most grandiose way (and supercapitalism is run by the most narcissistic, though not necessarily the most greedy, people), you don’t just need to hit your target; you also need the flashiest guns.

Right now, the eye of the supercapitalist hurricane is parked right over Silicon Valley. Sean Parker is the archetypal supercapitalist. He’s never really succeeded in any of his roles (that’s a prolish, yeoman capitalist ideal) but he’s worth billions, and now famous for being famous. While corporate capitalism focuses on mediocrity and marginalizes both extremes (deficiency and excellence), supercapitalism will always make a cushy home for colorful, charismatic failures just as eagerly as it does for unemployable excellence.

Supercapitalism will, eventually, move away from the Valley. Time will tell how much damage has been done by it, but considering the state of the housing market there and the horrible effects of high house prices on culture, I wouldn’t expect the region to survive. Supercapitalism rarely considers posterity and it tends to leave messes in its wake. 

The final reason why supercapitalism must move from one industry to another, over time, is that reputation economies deplete the opportunities that attract talent. It’s worthwhile, now, to talk about compensation and how it works in the three capitalisms. Doing so will help us understand what supercapitalism is, and how it is different from the corporate kind.

Under yeoman capitalism, the capitalist is compensated based on exactly how the market values her product. No committee decides what to pay her, and she is never personally evaluated; it’s the market value of what she sells that determines her income. Most people, as discussed, either can’t handle (financially or emotionally) this volatility or, at least, believe they can’t. Corporate capitalism and supercapitalism, on the other hand, tend to pre-arrange compensation with salaries and bonuses that are mostly predictable.

Of course, what a person’s work is worth, when that work is abstract and joint efforts are complex and nonseparable, has a wide range of defensible values. Corporate capitalism settles this by setting compensation near the lower bound of that range, but (mostly) guaranteeing it. If you make $X in base salary, there’s usually a near-100-percent chance that you’ll make that or more in a year (possibly in another job). Since people are compensated at the lower bound of this range, this generates large profits at the top; in the executive suite (above the effort thermocline) something exists that looks somewhat like a less mobile and blander supercapitalism.

People who want to move into the middle or top of their defensible salary ranges won’t get it in corporate capitalism. The work has already been commoditized and the rates are already set, and excellence premiums are pretty minimal because most corporations refuse to admit that their in-house pet efforts aren’t excellent. Thus, talented people looking for something better than the corporate deal find places where the opportunities are vast, but also poorly understood by the local property-holders, allowing them to get better deals than if the latter knew what they had. At one time, it was advertising (cutting-edge talent understood branding and psychology; industrial hierarchs didn’t). Then it was finance; later and up to now, it has been venture-funded light technology (on which the sun is starting to set). Over time, however, the most successful supercapitalists position themselves so as not to be affiliated with a single one of the efforts, but diversify themselves among many. This creates a collusive, insider-driven market like modern venture capital. Over time, this inappropriate sharing of information turns into a full-blown reputation economy.

Once a reputation economy is in place, talent stops winning, because property, by its sheer power over reputations, has full authority to set the exchange rate between property and talent. “The rate is X. Accept it or I’ll shit on your name and you’ll never see half of X.” Once that extortion becomes commonplace, what follows is a corporate rank culture. It feels like the arrangements are “worked out” and only management can win– and that’s actually how it is. Opportunities don’t disappear entirely, but they aren’t any more available to young talent than elsewhere, and the field becomes just another corporate slog. That’s where the VC-funded technology scene will be soon, if not already there. 

Supercapitalists, I should note, are not always the same people as “top talent” and they’re rarely young (i.e. hungry and unestablished) talent. Supercapitalists tend to be the rare few with connections to both property and talent at the highest levels of quality. Property they can carry with them, but talent they must chase. Talent arrives in the new place (quantitative finance, internet technology) first. Supercapitalism emerges as these well-connected and propertied “carpetbaggers” arrive, and as the next wave of young talent discovers that there are better opportunities in managing the new place (i.e. associate positions at VC firms) than working there. 

What really impels young talent to join supercapitalism is not the immediate opportunity (which is tapped out) but the possibility to move along with supercapitalism to the next new place. For example, someone who started in investment banking in 2006 is not likely to be a million-per-year MD today– that channel’s clogged– but he has a good chance of being rich, by this point, if he jumped on the venture capital bandwagon around 2007-08; he’s a VC partner on Sand Hill Road now.

Interactions

How do these three capitalisms interact? Is there a pecking order among them? How do they view each other? What is the purpose of each?

Yeoman capitalism provides leadership opportunities for the most enterprising blue-collar people, and is the most internally consistent. It’s honest. Unlike the other capitalisms, there isn’t much room for reputation (much less prestige) aside from in one’s quality of product. The rule is: make something good, hope to win on the market. The major problem with it is its failure mode, even in good-faith business failures that aren’t the proprietor’s fault. The main competitive advantage one holds as a small business owner is property rights over a company, and one who loses that is not only jobless but often left with skills of limited transferability.

Yeoman capitalism has a lot of virtues, of course. It gives a lot back to its community, while corporate and supercapitalism tend to destroy their residences and move on. Yeoman capitalism is what blue-collar people tend to think of when they imagine capitalism as a whole, and it provides PR for the corporate capitalists and supercapitalists, who recognize that their reputations (which they hold dear) depend on the positive image that yeoman capitalism provides for the whole economic system. Yeoman capitalism is aware of corporate and supercapitalist entities in the abstract, but has little visibility into their inner workings. Most small businessmen probably know that the corporations are somewhat different from their enterprises, but not how different (in reality, living within two separate societies) at the upper levels.

Corporate capitalism provides social insurance, although with great degrees of inequity based on pre-existing social class. It’s socialism as it would be imagined by a self-serving, entitled upper class refusing to give up any real power or opportunity. It can make little meaning out of leadership, charisma, or unusual intellectual talent. In fact, it goes to great lengths to pretend that these differences among people don’t exist. Its goal is to extract some labor value from people who lack the risk tolerance for yeoman capitalism and the talent for supercapitalism, and it does so extremely well, but it also creates a culture of authoritarian mediocrity that renders it unable to excel at anything. Needs for high quality are often filled by yeoman capitalism or supercapitalism; because yeoman capitalism can provide the autonomy that top talent seeks while supercapitalism provides (the possibility of) power and extreme compensation, those capitalisms get the lion’s share of top talent. Regarding awareness, corporate capitalism understands yeoman capitalism well (it often serves yeoman capitalists) but is oblivious to the whims of supercapitalism.

Between corporate and yeoman capitalism, there isn’t a clear social superiority, because they serve different purposes. Some intelligent people prefer the validation and stability of corporate capitalism, while others prefer the blue-collar honesty of yeoman capitalism. On the other hand, a strong argument can be made that supercapitalism is the clear elite among the three. It’s built to take advantage of the freshest, just-being-discovered-now opportunities. 

Supercapitalism has a familiar process. First, the smartest people find opportunities (“before it was cool”) that the property-holders haven’t yet found a way to valuate, and negotiate favorable terms for themselves while they can, and this makes a few thousand smart people very rich. Then, the elite property-holders catch wind of the deals to be made and move in. Soon there’s a rare confluence of two forces that usually dislike, but also rely heavily upon, each other– talent and property. Supercapitalism emerges as an all-out contest to determine the exchange rate between these two resources in a new domain. Eventually property wins (reputation economy) and corporatization sets in, while those who still have the hunger to be supercapitalists move on to something else.

A puzzle to end on

There’s a fourth kind of capitalism that I haven’t mentioned, and I think it’s superior to the other three for a large class of people. What might it be? That’s one of my next posts. For a hint, or maybe a teaser: the idea comes from evolutionary biology.

What turns 99.999% of privileged people into fuckups

Generally, people who generalize are actually talking about themselves. I wouldn’t normally introduce myself as “a privileged fuckup”; however, I am more privileged than the average person in this world, and there are definitely things I have fucked up, so to some degree I must indict myself as well. Here, by “fuckup” I mean “person who has achieved substantially, and embarrassingly, less than what is possible with his or her talent and resources”. Guilty.

I had to qualify the title with the word privileged. In this case, I’m not applying it only to the rich, but to the middle classes. I feel like it’s not right to call the genuinely impoverished, who never had a chance, “fuckups”. I’d rather focus on the process that turns people who’ve had plenty of chances into (relative to what they could achieve) mediocrities, and possibly even figure out what to do about it.

There’s good news, however: I think there’s a causative agent of fuckuppery that is so pervasive as to explain almost all of it, singular enough to admit solution, toxic enough to suffice, and subtle enough to answer the question, “Why isn’t this discussed more?”

Let me first address four explanations that sound like they could be singularly causative of widespread fuckuppery, and are frequently cited as causes, but aren’t even minor players.

  1. Work is hard, yo. People are inherently lazy, one theory goes.
  2. Too much competition! There’s the argument that not everyone can achieve great things; some people must be fuckups.
  3. Lack of resources. Also known as, “I’d be published by now but for my fucking day job.”
  4. Personal weakness. I will establish this as a religious argument of minimal value.

None of these suffice to explain the epidemic of fuckuppery that we see in modern, corporatized, sanitized employment. I’ll blow each of these explanations (at best, partial causes) to pieces before I lay out the right answer.

Failed explanation #1: Work is hard

It’s true that almost everything worth doing is difficult, but that doesn’t mean it’s unpleasant. Things that are unpleasant are, in general, quite unsustainable no matter how much “will power” a person has. The mind is built to learn from (and thus, avoid) negative states. On the other hand, people can do things that are difficult or even physically painful for quite a long time if there is a superior, psychological reward involved.

I don’t think people are very different from one another in their tolerance for unpleasant mental states (and I’ll get back to this, later). So what is it (aside from extraordinary natural talent) that makes someone like Usain Bolt or Michael Jordan become a great athlete, even in spite of physical pain and exhaustion along the way? They figure out a way to separate difficulty from unpleasantness. Most people will never be professional athletes, but the skill of preventing difficulty from becoming emotional negativity is one that anyone can develop. As Buddhism teaches us, one can feel pain and not suffer. When the great athletes are exhausted from training, they don’t stew about it in negative mental states; they accept it as part of the process and, in a way, an aspect of the reward.

Some people think failure (at an ambitious project) is naturally unpleasant. It’s not. In fact, weightlifters literally train to failure, which means they lift until their muscles (momentarily) cease functioning. It’s the social stigma, especially at work, that gets people. We need to kill that. The problem is that humans have a tendency to see patterns that aren’t there, and failing at one workplace project creates a sense of decline, replacing what is actually a noisy process (Brownian motion with drift) with a parabolic arc (vaulting ambition) straight out of a five-act Shakespearean tragedy.

One note I’ll make is that the education system unintentionally(?) encourages risk aversion. Instead of being encouraged to tackle very hard problems with the pass mark set at, say, 20%, students are asked to tackle very easy problems with the pass mark set at 60 to 75% and average performance calibrated to fall between 70 and 90%. This means that one total failure cancels out several excellent results; if you get one zero and three 100%’s, you’re still only average at 75%. I’d rather see the reverse: courses and work so demanding that 100% is extremely rare, but with 25 to 50 percent being a respectable score. In the real world, on projects worth doing, 50% is a hell of a good success rate compared to the maximum possible.
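The asymmetry is easy to make concrete. A quick sketch, using the one-zero-and-three-100%’s case from the text (the “hard course” scores below are my own hypothetical numbers for illustration):

```python
# Under conventional grading, one total failure cancels several perfect results.
easy_course = [0, 100, 100, 100]        # one bombed exam, three perfect scores
average = sum(easy_course) / len(easy_course)
print(average)  # 75.0 -- merely "average", despite three flawless results

# In the reversed scheme, problems are hard enough that 100% is rare,
# and a 25-50% success rate is respectable; partial success is rewarded.
hard_course = [20, 60, 45, 35]          # ambitious attempts, none perfect
print(sum(hard_course) / len(hard_course))  # 40.0 -- a strong score here
```

The point is that the conventional scale punishes a single catastrophic failure far more than it rewards sustained excellence, which is exactly the incentive structure that trains risk aversion.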

Is work hard? Of course. Yet as humans, we love to do hard things. We do a lot of things with zero or negative economic value (such as climbing mountains) because they are difficult and painful. We like the mental state of flow; we need to be challenged. We also enjoy physical exertion and discomfort if there is a reward involved. Hell, most of what we do on vacation is more work-like, in a primal sense, than office work. Biking 30 miles in 95-degree heat is a lot harder than sitting in a chair for eight hours, but most people would envy the first experience and not the second.

Failed explanation #2: Too much competition

Um, no. Have you seen the people out in this world? Like, really measured how diligent, engaged, and effective most of them are? If you have, you’re not worried about competition.

At least, I should say, one shouldn’t worry about competition in the grand sense. There are local competitions for specific resources and it’s not fun to see a superior competitor enter the field, but in the broader scope of things, competition is not what will hold a person back. I, for one, would love to live in a world where a person like me were average in intellect, creativity, and work ethic.

Sure, there is a lot of competition, in a less grand sense, for things that are known to have value: money, property, jobs, relationships, social status. It’s pretty easy to lose one’s creative and spiritual way and start chasing after the things everyone else wants and, when that happens, competition is the only thing one thinks about. If you live that way, you will wreck your life in battle with some of humanity’s most vicious, cutthroat people. That’s not an issue of “too much competition”, though. That’s on you. Part of the game is figuring out which subgames are worth playing and which will just waste time.

I want to make one thing clear, which is that there are genuine competitive issues in this world and many people face them. If you’re in a poor country where access to water is limited, then there are competitive forces making your life hell. That’s why I’m focusing on privileged people, who still get themselves intimidated by “all the competition”, and that obsessive focus (not the competitors themselves) does prevent them from excelling.

Guess what? There’s no threat of competition when people excel. Let’s say that you become the best Calvinball player in the world, advancing the game in ways the world hasn’t seen for centuries. There’s a sudden uptick of interest in the sport. Good for you; you make a bit more money, being strongly responsible for the external world’s increased interest in the game. Now, let’s say that someone else comes along who’s slightly better than you are; you beat him sometimes, but he’s clearly the superior player. His effect (as a superior player) on you is… that you make more money. Sure, he’ll probably make even more than you do, but the degree to which he advances the game (and increases interest) benefits you. There are now two great players, which means the overall quality of the games (as no one would care to watch if you just won all the time) goes up. When you and he play, people who’ve never watched a Calvinball match in their lives come out. The match will have a winner and a loser but, economically, both sides win.

All animals and most people (the not privileged) have to worry about competition as an existential threat. In the wild, it’s deadly. For privileged people (here defined using a fairly low bar, so middle-class Americans qualify) the threat from competition is just not that great, not in the long run. If you excel and someone else is better, that just advances the field. If you suck, it’s not the fault of the competition; it’s all on you.

Besides, even in the relatively broken world of white-collar work, one never really has to worry, when doing something genuinely worth doing, about others who are better at the work. One has to worry about nasty people and political adepts, not superior craftspeople. In fact, people who are genuinely superior are usually quite nice about it, at least in my experience. It’s those who are inferior but politically powerful that are most dangerous.

Failed explanation #3: Lack of resources

This one falls down pretty quickly, because the people with tons of resources are often the biggest fuckups of all.

This is a pretty lame excuse that fails to address the real problem. Sure, a day job can slow the progress of that novel, but writers write. If you can’t get a few pages written per week while working a typical day job, you’re not a writer.

There’s something going on that prevents people from using the resources they have. They spend 3 hours per day watching TV and complain about a lack of “time”. No, that’s a lack of energy. It’s different. In fact, it’s not even a lack of energy in the physical sense so much as a motivational problem: a shortfall of the emotional and cognitive energy needed to manage what they have, which presents a compelling appearance of resource scarcity without actually being one. I’ll get back to that, after I kill a fourth failed explanation for the epidemic of fuckuppery.

Since I’m focusing on a class of people who have 2 to 6 hours (or more) per day of free time, plus enough disposable income and technological access to learn almost any topic in the world, I don’t think we can give “lack of resources” credit for the overwhelming likelihood that a person does not excel. Sorry, but the resources are there, so I have to kill that excuse.

Failed explanation #4: Personal weakness

The knee-jerk conservative reaction to any social or psychological problem is to ascribe it to “personal weakness” or a lack of “individual responsibility”. It really is the “God of the Gaps” for those people, and it’s pretty absurd.

Why would I take time to address some macho nonsense explanation? Because I think all of us (not just mouth-breathing right-wingers) have a tendency toward self-shame when we compare what we actually accomplish to what we could achieve if we got our shit together. We tend to take our shortfalls personally, without full recognition of the forces resulting in the outcome. We either fall into an external (competition, lack of resources) or internal (personal weakness) locus-of-control explanation, without recognizing the complex mix of the two that we actually face.

By all means, if taking an extreme internal-locus-of-control mentality helps you, then let it motivate you. However, I don’t think the personal weakness argument applies, and if the shame is getting you down, then throw it aside; I’ll explain your (probable) problem just below. Some people have more favorable biology and material resources than others, but there isn’t much evidence to convince me that any of what I wish to analyze is driven by a moral strength/weakness variable independent of those causes. I just don’t see it being there. Most people want to achieve things, work hard (as they understand the concept), and do the right thing. Yet, almost everyone deals with emotional fatigue, fluctuating motivation, and less resilience than most people would wish to have. It’s not “weakness”; it’s psychology, and a lot of this stuff is rooted more closely in the physical brain than in the part of ourselves we view as nonphysical, moral, or spiritual– and possessing some kind of “character” that deserves to be rewarded or punished.

Those four dragons slain, we can get to an accurate explanation of why most people are so ineffective. It’s actually quite simple. Let’s drop into it.

Organizational “work” conditions people to associate work with subordination, making them lazy, unfocused, irresponsible, and emotionally enervated. 

That work worth doing is hard and fails sometimes is not the problem. People can deal with failure. (One of the most engaging reinforcement systems, as seen darkly in slot machines, is variable-schedule reinforcement.) The issue certainly isn’t “too much competition”, with most people achieving a small percentage of what they’re capable of and therefore not much competitive threat in the world. Moreover, the problem isn’t scarce resources (although those resources are finite, and squandering them will likely lead to non-achievement). Since the evidence is extremely strong for conditioning (learned helplessness) I think the “personal weakness” argument can be thrown out as a claim rooted in almost a religious bias. Instead, the problem is that society is structured in such a way that it trains people to dislike work.

Most people do most of the work in their life under a subordinate context. If people can only conceive of doing difficult or taxing things when in a state of subordination, they will lose their drive to work. Over time, this will strip them of their creativity and ambition in general. If the conditioning is complete, they’ll become permanent subordinates, unsuited to anything else.

It’s not the objective difficulty, but the erratic and corrupt evaluation, that gets to most people. When the reward is divorced from the quality of the work, people lose interest in the latter. Most people, after all, associate work not with physical or mental difficulty (which people enjoy) but with economic humiliation. In a work world driven by non-meritocratic political forces and therefore subject to constant shifts in priority, they also lose a sense of coherence, and the ability to focus atrophies, since responding quickly to political injections is more valued than deliberate performance. Eventually, full-on disengagement sets in, and people lose a sense of ownership or responsibility. Over time, this creates a class of people conditioned into permanent subordination.

That’s almost all of us, sadly, to some degree. Few of us (even the wealthy, who have no need to work) are free of all traces of the subordination meme-virus. Even many self-employed consultants are had by the balls by a single client or a tight-knit network of clients who value each other’s opinions, and venture-funded entrepreneurs answer literally to their investors. Now, one might argue that “everyone has a boss”. I disagree. Everyone serves (to quote from Game of Thrones, “valar dohaeris”) but it is not strictly necessary for people to serve others on humiliating terms. That part is artificial. It doesn’t need to be there, and in the long term, it does a lot of harm.

Age discrimination is one symptom of the underlying sickness of corporate discrimination. Why is there so much ageism in the corporate world? In terms of skill and competence, older people tend to fall under a bimodal distribution, with some being very good and others being quite weak. There are some who are extremely capable, and that’s because they maintained their creativity, originality, and energy in defiance of a system that spent decades trying to squash them. They’re exceptional as advisors and independent contributors, but they sure as hell aren’t desired by managers who demand personal subordination; that won’t happen. On the other end are those who’ve subordinated quite well, let creative atrophy set in, and now stand at a disadvantage to younger people who haven’t been burned out yet. Subordination has a long-term cost– the destruction of human capital– and ageism establishes that the penalties are borne by those whose human capital has been destroyed.

More generally, this epidemic of privileged fuckuppery exists because, even at very high levels in our society, we’ve forged generations of people who have a deep-seated association of work with subordination– one that often begins in education, where it befalls the wealthy as much as the poor. They can’t even begin projects without thinking obsessively about how they will be evaluated (which is different from the valid question of how the work will serve others) and that whittles their minds down into second-hand crappy models of other people’s minds. It’s no good. We have to fight it. We have to kill it. This may not be an existential threat to the biological species (that being quite resilient, and more of a threat to nature than threatened by it) but it does pose a danger to the continuance of civilization. At this point, civilization cannot continue without ongoing technical progress, especially as pertains to solving ecological problems, which means we are reliant on human creativity, which organizational subordination kills not only at the bottom, but also at the top (because it requires elevated position-holders to focus more on maintaining rank than anything else).

Workplace subordination, in the 19th century and the first half of the 20th, had major operational efficiencies. Additionally, the destruction it inflicted on human capital was there for poets and philosophers to observe and mourn, but it never threatened to cripple the economy, because its standardization effects outweighed its costs. Assembly-line workers, in truth, didn’t need to be creative to do their jobs. What has changed is that machines are taking over the subordinate work, and will soon enough capture all of it. If the job can be done by a person in subordination, that means that perfect completion can be specified (as opposed to creative work where perfect completion is not even well-defined) and if it can be specified, it can be programmed, and the work can be given over to robots. Soon enough, that will happen.

The result of this is that the market value of subordinate work is falling inexorably to zero. People who are afflicted by the long-term conditioning of subordination will have no leverage in the modern economy, and (as much as I am cautious about such things, being more strongly libertarian than I am leftist) I suspect that central intervention (socialism! gasp!) will be necessary if a nation is to survive the transition. All that will be left for us is work requiring individual creativity and personal expression, and the people who have lost these capabilities to decades of horrible conditioning will need to be given the help to recover (or, at least, enough sustenance while they can bring themselves to recover). The real discussion we need to have– involving economists, business leaders, educators, and technologists– is how to prepare ourselves for a post-subordinate world.

Here’s the proper way to evaluate a startup’s equity offerings

One thing that young people are very bad at– to their detriment, and VC-istan’s profit– is evaluating equity in a startup job offer. They don’t understand the numbers, what they mean, or the processes that lead to them holding certain values. Many focus unduly on percentages, which isn’t the right way to go. As is often noted and obvious, 1 percent from an established company would be amazing; 1 percent of a pre-funding startup is below consideration, except for very light contract work (a couple hours per week of advising). An alternative is to convert the equity grant into a dollar figure. The problem here is that the valuation process is essentially black magic. There is no “market” valuation for a VC-funded startup because VC collusion is so entrenched that there is no competitive market. Rather, it’s driven by processes into which a typical employee has no visibility. Even if you’re getting $50,000 per year of vesting in “equity”, you’re getting what finance calls penny stocks, and you should be aware of their attendant problems (even if you’re not in finance, the Series 7 process, although boring, teaches a lot) before you take those too seriously. Sure, penny stocks can make a person rich; they can also go to zero. Plan accordingly.

VC-istan runs, I think, on a fake generosity. A clueless 22-year-old has no idea what he’s worth on the market. Compared to a PhD student’s stipend of $1,700 per month, an “exciting” startup job that comes with a much higher salary (but still $40,000 below what he could command if he went east, for finance, and got a real job for adults) seems like a great deal. To boot, he’s getting $30,000 (vesting over four years, with a “cliff” provision applied to the first) worth of equity! How generous! That’s how companies bill their equity participation. “We’re giving you this, because we want you to feel like an owner.” (In this case, “feel like an owner” often means to work long hours, put up with drudge work, and favor what we baselessly claim to be firm-wide existential risks over your own career goals, health, and friendships.) In reality, employee equity always comes with vesting (as it should) and a typical schedule is four years, which means it’s $7,500 per year. So it’s not a gift; just regular compensation. In that particular case, it’s a $40,000 pay cut in exchange for $7,500 in penny stocks. Hardly a good deal.
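The arithmetic in that example is worth sketching out, because people routinely fail to do it. The salary and grant figures below are the hypothetical ones from this paragraph:

```python
# Hypothetical offer: a salary $40,000 below market, plus a $30,000
# equity grant vesting over four years (at the company's claimed valuation).
salary_shortfall = 40_000    # annual pay below what the market would offer
equity_grant = 30_000        # total grant, illiquid and penny-stock-like
vesting_years = 4

annual_equity = equity_grant / vesting_years
net_annual = annual_equity - salary_shortfall
print(annual_equity)  # 7500.0 per year of equity, not a gift
print(net_annual)     # -32500.0: a net annual pay cut, even at face value
```

Annualizing the grant over the vesting period, and netting it against the salary shortfall, turns a “generous” offer into a visible pay cut.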

Every equity offer comes with a vesting period (typically 4 years) and a “cliff” provision that no equity is earned if the employee leaves (or is terminated, and “cliffing” firings at 362-364 days are pretty common). It’s important to keep that in mind. The equity “grant” is contingent on an outcome that, in the VC-funded world, is pretty rare. At a typical startup, it probably won’t be worth it to keep coming into work every day for 4 years.  Six months from now, you might be answering to an outsider you’ve never heard of.

In fact, full vesting seems only to occur for the mediocrities. The bottom 15% (as well as an additional 15% who are capable but politically unlucky) get fired, often without severance, long before the four-year mark. The top 15% usually bounce, because waiting around to “vest” on some piddling 0.02% equity offering, when you can roll the dice again and possibly be a founder– or at least get a real title and be a founder two gigs later– is a pathetic excuse for not growing up. (This is another rant, but most VC-funded startups are halfway houses for college kids who’d rather waste their 20s than (gasp!) have to show up somewhere in the a.m. hours.) With the top and bottom of the pack culled, it’s the middling players (“chief vesting officers”) who are actually around long enough to collect their full four-year grants. Keep that in mind. Your expected percentage of that four-year target is probably (including the cliff) 25-50 percent, and closer to 25% if you’re unusually good (or bad) at what you do.

All that said, I’m going to assume the reader knows this. Of course, there still are good startups out there, and I will never deny that fact. They’re uncommon, but they exist. People need to know how to evaluate their equity allotments, and that’s what I’ll focus on here. Below is a simple formula:

Person-Power = (Number of employees) * (Equity percentage)

This isn’t a meaningless statistic, nor a mere heuristic. Companies exist to aggregate human labor, and equity represents a share in what the group produces. If you’re offered 0.02% of an 80-person company, that’s representative of the work of 80 * 0.0002 = 0.016 people. In other words, each week, your equity represents a payment (in time) of 0.016 * 40 = 0.64 hours of work. You put in an eight-hour day, and the equity is a return of seven minutes and 41 seconds of human time: a long bathroom break.
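The arithmetic above is simple enough to sketch in a few lines of Python (a throwaway illustration using the 80-person, 0.02% example from the text):

```python
def person_power(num_employees: int, equity_fraction: float) -> float:
    """Headcount times equity share: how many 'people' your equity represents."""
    return num_employees * equity_fraction

# 0.02% of an 80-person company:
pp = person_power(80, 0.0002)    # 0.016 people
weekly_hours = pp * 40           # 0.64 hours of human time per week
minutes_per_day = pp * 8 * 60    # ~7.68 minutes per eight-hour day
```

Note that the equity percentage goes in as a fraction (0.02% = 0.0002); mixing up percent and fraction is the easiest way to flatter an offer by a factor of 100.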

The person-power metric accounts for the meaninglessness of equity percentages (as again, 1% of Google would be fantastic) and the uncertainty surrounding valuations. It gives actual meaning to the equity. You can envision a 0.02% slice of an 80-person company as 5.76 seconds of each person’s workday being done on your behalf, or (as above) 7.68 minutes of total human time. That’s not all that much, in contrast to the concessions that these small companies expect because “we’re a startup”. Of course, outside of the startup world most companies give zero equity, so one might argue, “hey, it’s better than nothing”. Sure, but those zero-equity non-startups actually pay people real salaries, give annual raises, try harder than startups not to fire people unjustly, have a lot of slack in the schedule allowing for (semi-furtive, but easy to execute) personal career growth, and let people leave at 5:00.

So what’s a fair range for person-power? Well, it depends on the risk level. The average, across the whole organization, can never be more than 1.0. In fact, it will typically be less than that, because investors, advisors, and board members need their cut (and the investors actually bring something to the table!). I’d say that 0.15-0.3 is more than fair for a junior-level employee, and 0.5-0.8 (except for a risk-taking founder) is quite generous. That is what real equity looks like.

Below 0.1, on the other hand, I’d say that the employee should write the equity off entirely and focus only on the salary (with an understanding that startups rarely give salary raises or annual bonuses; if the investor-determined valuation goes up, that is the raise). I also don’t see why companies offer low equity amounts in the first place; those seem to complicate the finances of the company for minimal benefit, because if these junior chumps have any talent, they’ll figure out the VC-istan game and either want ten times more, or become 10-to-4 “chief vesting officers” while they plan for their next gig. (If I were running a company, I’d be extremely liberal with profit-sharing but give almost no one equity; that’s for investors. I’d encourage employees to diversify their finances beyond their employer.) The signal is negative. For me, equity has an uncanny valley. If I’m not going to get a real stake, then I’d rather just zero the equity in exchange for a market-level salary, sane working hours, annual raises and bonuses, and not being surrounded by 21-year-old college kids who think their token ownership ought to drive them to work till 11:30 at night (with various stories of unprofessional behavior emerging out of that coupling of the night hours with the office).

I don’t have an overarching, sweeping conclusion or any real wisdom here, but I think that every startup employee should take the time to compute that Person-Power number. If it doesn’t match or exceed the percentage of market salary (including four years of raises, bonuses, and career support) that he or she is giving up to work there, it’s probably time to bounce.
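That comparison can be made mechanical. Below is a minimal sketch of the rule just stated (the salary figures are hypothetical, and the function name is my own): stay only if the person-power of your equity at least covers the fraction of market compensation you’re giving up.

```python
def equity_covers_pay_cut(person_power: float,
                          offered_comp: float,
                          market_comp: float) -> bool:
    """True if the equity's person-power matches or exceeds the
    fraction of market compensation given up by taking this job."""
    fraction_given_up = (market_comp - offered_comp) / market_comp
    return person_power >= fraction_given_up

# Hypothetical: a $100k offer against a $140k market rate, with 0.016 person-power.
equity_covers_pay_cut(0.016, 100_000, 140_000)  # False: ~29% given up for 1.6% of a person
```

With 0.016 person-power against a 29% pay cut, the verdict is the one the formula above predicts: bounce.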

Wrong places, wrong times, decline, prestige, and what it all might mean.

Here’s a deceptively simple question: why would a person be at the wrong place in the wrong time?

People use this sort of description about places and times to describe “luck” in the business world. Someone’s success is written off as, “he was just in the right place at the right time”. I’m starting to doubt that this can really be ascribed to a lack of merit. Some people know where “right places” are, and some people don’t. It’s not pure luck. There is skill in it; it’s just a very difficult skill to measure or even detect.

I’ve had to contend with this myself. I’m almost 30, and I was one of those people about whom, when I was younger, everyone said that I’d either be successful or dead by 30. Well, I didn’t get either. Breakout success was the gold, noble death was the silver, and I’m stuck with the bronze. Well, that’s depressing. Something I realized recently, when looking over my career choices, is that I made a lot of decisions that would have seemed good from a timing-independent perspective, but that I often made the worst choice for the given time, almost as if it were a habit. So I have to ask myself, because I’m too old to pretend these things aren’t showing a pattern: why would I be in the wrong place at the wrong time?

I’m pretty sure this is a common issue for people. Timing is just very important. Living in Detroit in 1965 is dramatically different from living there in 1980. However, in society, we tend to evaluate peoples’ choices morally. People who are successful made good choices (and vice versa). The problem is that we also view morality in absolute terms, while the quality of choices (especially economic ones) is extremely time-dependent. That often leads us to draw inaccurate conclusions, not only about ourselves but also about the decisions we must make. Ignoring timing is an error that’s often catastrophic.

For example, choosing to work at Google is one of the biggest mistakes I’ve made; but it would have been a great choice in another time. I’ve wasted a lot of emotional energy being angry at my (truly awful) manager when I was at Google. However, bad managers are a fact of life, and people survive them. What really went wrong is that I joined Google in May 2011. If you join a company while it’s great and have a bad manager, there’s a way to move around that problem. You’re in a company that wants to succeed and will make a way for you to contribute something great. If you join a company in decline, however, you’re stuck. The firm’s demand for greatness is minimal; now it wants stability, and the game is about using social polish to compete for dwindling visibility and opportunity. In technology, closed allocation is the surest sign of a declining firm. In truth, I shouldn’t be angry at my ex-boss (for being awful, but some are) or at Google (for declining, even though no company wishes to decline) but at myself for picking that company while it was in decline. That’s on me. No one forced me to do that.

What attracts people like me to decline? I think there are four explanatory causes for why people tend to put themselves in formerly-right places at wrong times.

1. Prevailing decline

This one’s not our generation’s fault. Most of American society is in decline. Perhaps one wouldn’t know it from the Silicon Valley buzz, which trumpets successes while hiding failures. Plenty of people say things like, “Why should I care about Flint, Michigan when software engineer salaries keep rising? There will always be jobs for us.” Doing what, pray tell? We’re much more interdependent than people like to believe, so this attitude infuriates me. Decay often ends up hurting everyone. Rural poverty in the 1920s turned into the 1930s Great Depression. Poverty isn’t wayward people getting bitter medicine; it’s a cancer that shuts a society down.

It’s easy to end up picking a string of declining companies when there’s so much decline to go around. That’s a big part of why it’s so hard for our generation to get established. That said, this is the least useful place to focus because no one reading this can do anything about the problem, at least not individually.

2. Nostalgia

People are more prone to nostalgia than they like to admit. This leads people to attempt to replicate former successes and sprints of progress that are no longer available. Businesses change. A person who goes into investment banking based on the movie Wall Street is going to have a rude awakening, because the Gordon Gekkos aren’t taking 24-year-old proteges underwing, but trying to protect their own asses in a harsher regulatory climate. The same is true of Silicon Valley. Is there money to be made there? Of course there is, but the easy wins of the 1990s are gone. It’s no longer enough to be “in the scene”, and people who are just getting established will probably not find themselves eligible for the best opportunities until those are gone, leaving scraps.

What’s unusual in the case of suboptimal career moves is that it’s often oblique nostalgia. People aren’t trying to relive their own good times, but to get in on a previous generation’s golden age (when that generation, having long ago recognized the closed opportunity window, has mostly left). This can actually be one of the more effective ways to play, for reasons explained in the next item.

3. Risk aversion and prestige. 

Most things that are “prestigious” are actually in decline. For example, most of the smartest undergraduates attempt graduate school. It’s what you do to show that you’re not one of those pre-professional idiots. Academia has been in brutal decline for almost 30 years! Those tenured professorships are not coming back. Wall Street and VC-istan are also very scarce in opportunities for those who aren’t already established, yet both have a lot of prestige. Prestige, alas, matters. If you were an analyst at Goldman Sachs, every VC will go out of his way to fund you, even though analyst programs have very little value (except for the proof that a person can survive punishing hours) at this point.

If you look at the opportunities for new entrants, Wall Street, Google, and VC-istan are all quite dry. There are plenty of people getting rich, but it takes years to position oneself and the opportunity will probably have moved elsewhere by the time one is able to take advantage. However, prestige offers a benefit. No one can predict where the real prize (true opportunity) will show up next, but prestigious employers help a person find some kind of position– a consolation prize– in whatever comes next by offering social proof. They provide the validation that a person was smart enough. If you got into Google or Goldman Sachs in 2007, you’re probably not rich, but you’ve proven that you’re good enough to be rich; i.e. you’re not a total loser. People will often tolerate these second-place finishes while they build up the credibility to be in the running when real opportunities come about. But is that a good strategy? I’m not so sure that it is, anymore. If you join a declining technology company, you’ll face closed-allocation and stack-ranking and bland projects, which will hurt your career. Prestige is important, but so is quality.

4. Saprophytism

Some people, but very few, have a knack for turning decay into opportunity. When they see decline, they turn it into profit. This is not a socially acceptable behavior, but it exists and for some people it works. Is it common? I have no idea. For a worker, it’s very hard to pull off. Since the worst fruits of organizational decay always fall onto the least established (“shit flows downhill”) it’s unclear how a low-level employee would be able to reliably turn decay into a win. I’m sure some people have that talent and motivation, but I’m not one of them.

How would a person profit from the decay of Corporate America? Some people answer, “A startup!”, but that’s a really bad response. VC-istan is Corporate America with better marketing, and non VC-funded startups are reliant on clients which means they’re still dependent on this decaying ecosystem. No one wins when there is so much loss to go around. I’m sure there are some financial plays (informed short-selling) that would work, but I can’t think of a great career play for a young person trying to get established. Perhaps it’s a great time to be in the so-called “tech press”; they seem to enjoy themselves when things go to hell.

Concluding thoughts

The above are small explanatory features of the problem. Why do so many intelligent people consistently put themselves in wrong place/wrong time situations that inhibit success? What’s the systematic problem? I think that risk aversion and prestige are major components, worthy of further study, except for the fact that we all cognitively know this. We know that reputation is a lagging indicator, of low value in a dynamic world, but we cling to it because other people do and because we rarely have better ideas.

I don’t think that individual nostalgia is a major player, but the collective form of it is prestige, and that often creates bizarre inconsistencies. For example, the prestige of the Ivy League has nothing to do with the Adderall-fueled teenagers applying to twenty colleges and test-prepping at the expense of a normal or healthy adolescence– decidedly unprestigious behavior by people who are (albeit very slowly) eroding the prestige of those institutions– but, rather, that prestige exists because of things that happened long before those kids were even born. The prestige of those places comes out of a time when the psychotic ratrace around admissions didn’t exist, but those colleges were accessible only to a well-connected elite, because what our society really values is legacy and wealth, not talent or striving.

Prevailing decline makes this whole game harder, of course. What I think is really at the heart of it is that it’s hard, if not impossible, to predict the future. People go to places of past excellence for the association (prestige) in order to take advantage of the halo effect and gain social superiority in the beauty contests necessary to win at organizational life. It works, because of the nostalgia held by most people now in power, but not well enough to counteract the overall tenor of decay. Even the people who succeed find themselves bitterly unhappy, because they compare what they get out of their path to what those who traveled before them got; it turns out that a Harvard degree in 2013 is still powerful, but not the golden ticket that it was in 1970, and that joining Google now is not the same thing as joining it in 2001.

So where is the future? I don’t know. If I did, I’d be somewhere other than where I am right now. Alan Kay said, “The best way to predict the future is to invent it.” I like this sentiment, but I’m not sure how practical it is. None of us who need to know where the future is have the resources to do that. Those resources (wealth, connections, power) all live with citizens of the past. It’s this fact that keeps drawing generations, one after another like crashing waves, toward the false light of prestige: the hope of getting some scrap metal out of decaying edifices so as to have the right materials when the opportunity comes to make something new. But where (and when) is the time for building?

A guess at why people hate paying for (certain) things… and a possible solution.

I’ve been thinking a lot about paywalls and why people are so averse to paying for things they use on the Internet. People don’t mind putting quarters into a vending machine to get a snack at 4:00, or handing over a couple of one-dollar bills for coffee, but put a 50-cent charge on an article, and your savvier readers will try to find it for free, while your less savvy ones will just find another distraction. People hate paywalls, and it’s not clear why. The time people spend trying to get around copyrights in a safe and reliable way is often worth more than the money that would be spent just paying for the content. Economically, it’s hard to make sense of it. The time spent reading a news article is worth a lot more than the amount being asked for (i.e. paid content is often only 10-20% more costly at worst, including the time), so why are paywalls so controversial? What’s the issue here?

Personally, I think it goes back to childhood. If you were in a hotel room, you didn’t touch the “Pay Channels” (perhaps as much because of what they were as their price) or you’d be in trouble. You watched the Free Channels only. You could make a few local calls, but long-distance was a no-no (when I was growing up, long distance rates were over 50 cents per minute) except on Sunday nights to relatives. For a child, things that cost what adults would recognize (given the technology of the time) as fair prices were exclusionary at the time, simply because children (and for good reasons) aren’t given a lot of money.

Most of us started using the internet at a time when the symbol $ meant that you couldn’t continue on, or you’d at least have to explain to your parents why you needed the $7/month deluxe version of the game you were playing, because you couldn’t exactly pay in cash. Sure, they’d be happy to take it out of your allowance, but just having your parents know was often too much. They’d often disapprove. “Is that stupid game really worth $7?” On to something else.

Or maybe I’m just personally stingy. It’s not that I object to spending small amounts of money. If I know I’m going to get value out of something, I spend money for it. On the other hand, I have plenty of small irritating recurring payments that I mean to get around to clearing up; with that experience, I’m unlikely to take on another one. It’s not that it’s $15 per month that gets to me; it’s that I’ll be bled for $180 per year until I remember, “oh, yeah, that fucking thing” and go through whatever hoops are involved in canceling my membership.

What I mean to get around to, however, is that we haven’t figured out how to pay for a lot of important services (and plenty of not-so-important ones, too). People have a lot of weird emotions about money, often divorced from the actual amounts. A paywall reminds people of childhood and feels exclusive, even when the amount of money involved is trivial. People also have a very justified dislike of recurring payments, given how unreasonably difficult it can sometimes be to get rid of them.

One thought I’ve had, for the web, is to set up a passive-payment ecosystem. This could apply to blogs, games, and discussion forums in a way that doesn’t require individual content providers to ask for money. People set a payment level somewhere in the neighborhood of $0.00 to $1.50 per hour and pay the provider of whatever they are watching or using, minute by minute, as they go. (The benefit of setting a higher passive-pay level is that you are served fewer ads and receive faster communication.) What’s nice about the system is that (a) the payment level is voluntary and intended to be trivial in comparison to the value of the time spent online, and (b) it has the potential to be more lucrative, for content providers, than advertising. Most importantly, though, the decision overload associated with paywalls, tip jars, recurring payments, and all of the other stuff involved in asking people for money goes away; if someone sets his payment rate at 75 cents per hour and spends 15 minutes on a site, then 18.75 cents is automatically sent to the owner of the site.
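The accrual rule itself is trivial; the 75-cents-per-hour example works out as follows (a sketch with a made-up function name, not a proposal for any particular platform’s API):

```python
def accrued_payment(rate_per_hour: float, minutes_on_site: float) -> float:
    """Pay the content provider minute by minute at the user's chosen hourly rate."""
    return rate_per_hour * minutes_on_site / 60.0

accrued_payment(0.75, 15)   # 0.1875 dollars, i.e. 18.75 cents to the site owner
```

All of the complexity of such a system would live in the plumbing (metering time honestly, settling micro-amounts cheaply), not in the arithmetic.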

Passive payment is an interesting idea. I’m not sure where it’s going, but it’s worth exploring.

Fixing employment with consulting call options.

Here’s a radical thought that I’ve had. There are a lot of individual cases of people auctioning off percentages of their income in exchange for immediate payments, which they use to invest in education or career-improving but costly life changes like geographical moves. Someone might trade 10% of her lifetime income in exchange for $200,000 to attend college. This has a “gimmicky” feel to it as it’s set up now, and it’s something I’d be reluctant to do for the obvious reputation reasons (it seems desperate), but there’s a gem here. There’s a potential for true synergy, not only gambling or risk transfer. If a cash infusion leads a person to have better opportunities and a more successful career, then both sides win. There should be a way for individual people to engage in this sort of payment-out-of-future-prosperity, which companies already use easily (it’s called finance). However, a percentage of income is too easy to scam. We need to index it to the value of that person’s time, and the best way to do that is to have the offered security represent a call option on that person’s time.

With the cash-for-percentage-of-income trade, the “Laffer curve” effect is a problem. There’s scam potential here. What if someone sells 10% of his lifetime work income for, say, $250,000, but actually finds ten buyers? Then he gets a $2.5 million infusion right away, which is enough money not to work. He also has zero incentive to work, so he won’t, and his counterparties get screwed because he has no work income. So this idea, on its own, isn’t going to go very far. The securities (shares in someone’s income) aren’t fungible, because the number of them that are outstanding has a major effect on their value.

Let’s take a different approach altogether. This one doesn’t involve a draw against someone’s income. It’s a call option on a person’s future work time. I intend it mainly for consultants and freelancers, but as the realities of the new economy push us all toward being more individualistic and entrepreneurial, it could be extended to something that applies to everyone. It’s not the gimmicky “X percent of future income” trade that doesn’t scale up to a real market (because once the trade stops being novel, we can’t trust people not to sell incentive-affecting percentages of their income, and that problem naturally limits it). How does it work? Here’s a template for what such an agreement would look like.

  • Option entitles holder to T hours (typically 100; with blocks as small as 25 or as large as 2000) of seller’s time (on work that is legal) to be performed between dates S and E at a strike price of $K per hour. For a college student, typical values would be S = date of graduation and E = five years after graduation. For someone out of school, S might be set to the time of signing, and E to five years from that date. 
  • Seller must publish how many such options have been sold so buyers can properly evaluate the load (e.g. no one is allowed to sell 50,000 hours of time in the next 5 years, because that much work cannot be performed.) I would, in general, agree on a 2000-hour-per-year limit. Outstanding load is publicly available information and loads exceeding 1000 hours per year should be disclosed to future employers.
  • If the option is not exercised, then no work is performed (but the writer still retains the value earned by selling it). If it is exercised, the seller receives an additional $K per hour. The option is exercised as a block (either all T hours or none) and the buyer is responsible for travel and working costs.
  • These options are transferable on the market. This is essential. Few people can assess their specific needs for consulting work, but it’s much easier to determine that a bright college student’s time will be worth $100/hour to someone in five years.
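The template above can be sketched as a data structure. This is my own illustration, under the terms listed (the names, and the simplification of averaging each option’s hours over its exercise window to check the load cap, are assumptions, not a settled spec):

```python
from dataclasses import dataclass
from datetime import date

ANNUAL_HOUR_LIMIT = 2000  # disclosed cap: no one may sell more work per year than this

@dataclass
class TimeCallOption:
    hours: int      # T: exercised as one block, typically 25-2000
    strike: float   # K: dollars per hour, paid on exercise
    start: date     # S: earliest exercise date
    end: date       # E: latest exercise date

    def annual_load(self) -> float:
        """Average hours per year this option can demand over its window."""
        years = (self.end - self.start).days / 365.25
        return self.hours / years

def load_within_limit(outstanding: list) -> bool:
    """Sketch of the disclosure rule: total outstanding load may not
    oblige more than the annual limit (averaged over each window)."""
    return sum(opt.annual_load() for opt in outstanding) <= ANNUAL_HOUR_LIMIT

# A 100-hour block at $100/hour, exercisable over five years: ~20 hours/year of load.
opt = TimeCallOption(hours=100, strike=100.0,
                     start=date(2014, 6, 1), end=date(2019, 6, 1))
```

A real contract would need a sharper load rule (exercise could cluster near the end date), which is part of the scheduling problem discussed below.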

One thing I haven’t figured out yet is the specific scheduling policy beyond an “act in good faith” principle. If two option-holders exercise at the same time, who gets priority? How much commitment must the consultant deliver when exercise occurs (40 hours per week, making full-time employment impossible; or 10 as an upper limit, with the work then furnished over more calendar time)? Obviously, this needs to be something that the option-writer can control; buyers simply need to know what the terms are. The other issue is the ethics factor, which doesn’t apply to most of technology but would be an issue for a small class of companies. Most people would have no problem working for a meat distributor, but we’d want an escape hatch that prevents a vegan’s time from being sold to one, for example. There has to be some right to refuse work, but only based on a genuine ethical disagreement; not because a person has suddenly decided her time is worth 10x the strike price (which will almost always be lower than the predicted value of her time). The latter would defeat the point of the whole arrangement.

In spite of those problems, I think this idea can work. Why? Well, the truth is that this sort of call-option arrangement is already in place, although with an inefficient and unfair structure that leaves both sides unhappy. It’s employment.

How much is an employee’s time actually worth to the operation? Dirty secret: no one really knows. There are so many variables on each individual, each company, and each project that it’s really hard to tell. The market is opaque and extremely inefficient. For example, I’d guess that a programmer at my level (1.7-1.9) is worth about $1000/hour as a short-term (< 20 hours) “take a look and share ideas” consultant, $250/hour as a freelance programmer, and perhaps $750,000 per year in the context of best-case full-time employment (wherein the package includes not only 2000 hours of work, but also responsibility and commitment) but well under the market salary ($100-250k, depending on location and industry) in a worst-case employment context. Almost no employer can predict where on the spectrum an employee will land between the “best-case” and “worst-case” levels of value delivery.

Employers know that for sociological reasons, a full-time employee’s observed value delivery is going to be closer to the worst-case than best-case employment potential. If you have interesting problems and a supportive environment, then a 1.5-level programmer is easily worth $300,000 per year, and a 1.8+ is worth almost a million. Most companies, though, can’t guarantee those conditions. Hostile managers and co-workers, or inappropriate projects, or just plain bad fit, all can easily shave an order of magnitude off of someone’s potential value. In fact, since doing that involves interacting with people and controlling how they treat each other, that’s seen as boundlessly expensive. If a manager has a long-standing reputation for “delivering” but is a hard-core asshole, is it worth it to unlock the $5 million per year released when he’s forced to treat his reports better, given that there is a chance of upsetting and losing him (and the “delivery” he brings, which he’s spent years making as opaque as possible)? The answer is probably yes, but the reason why he’s a manager is that he’s convinced high-level people not to take that risk. That’s how the guy got that job in the first place.

So what is employment, then? When people join a company, they’re selling their own personal financial risk. That stuff is toxic; no one wants it, so typically people offload it to the first buyer (employer) that comes along, until they’re comfortable enough to be selective (which, for most, doesn’t happen until middle age). When it comes to personal financial risk, corporations have the magic power to dissolve dogshit. They know it, and they demand favorable terms from an expected-value perspective. The employee would rather have a reliable mediocre income than a more volatile payment structure closer (in the long run) to her actual market value. So the company offers a salary somewhere around the 10th-percentile level of that person’s long-term value delivery. If the person works out well, it’s mutually beneficial. She enjoys her work, and renders to the company several times her salary in value. Since she’s happy, and since good work environments are goddamn rare and she’s not going to roll the dice and move to another (probably bad, since most are) corporate culture, a small annual raise and a bonus are enough to keep her where she is. What if she doesn’t work out? Well, she’s fired. Ultimately, then, corporate employment is a call option on the employee’s long-term ability to render value. The problem? The employee can opt out at any time. The option is contingent not merely on personal happiness, but on fulfillment. I’ll get back to that.

Why is my call-option structure better? There are a couple reasons. Obviously, everyone should have the fundamental right to opt-out of work they find objectionable. What I do want to discourage (because it would ruin the option market) is the person who refuses to work at a $75 strike because she becomes “a rockstar” and she’s now worth $1000/hour. That’s not fair to the option-holder; it’s not ethical. However, I feel like these opt-outs will be a lot rarer than job-hopping is. Why? First, everyone knows that job-hopping is a necessity in the modern economy. Almost no one gets respect, fair pay, or interesting work without creating an implicit bidding war between employers and prospective future opportunities. Sure, some manageosaurs who mistake their companies for nursing homes still enforce the stigma against job applicants with “too many jobs”, but people who weren’t born before the Fillmore administration have generally agreed that job hopping for economic reasons is an ethically OK thing to do. Two thousand hours of work per year is a gigantic commitment and exclusive of other opportunities, and almost no one would call it a career-long ethical commitment. The ethical framework (no job hopping, ever!) that enforces the call-option value (to employer) of employment is decades out of mode. It never made sense, and now it’s laughably obsolete. I would, however, say that a person who writes a call option on 100 hours of future work has an ethical responsibility to attend to it in good faith.

An equally important thought is that consulting is a generally superior arrangement to office-holding employment, except for its inability to deliver reliable income (which a robust options market could fix). Why? Well, people quit these monolithic 2000-hour-per-year office jobs all the time (often not by actually changing jobs, but by underperforming or even acting out, until they’re fired, and that takes a long time) because they don’t feel fulfilled. That’s different from being happy. A person can be happy (in the moment) doing 100 hours of boring work if he’s getting $20,000 for it. It’s not the labor itself that makes “grunt work” intolerable for most people, but the social message. That’s why true consultants (not full-time contractors called such) are less likely to underperform or silently sabotage an effort when “assigned” grunt work; employees expect their careers to be nurtured in exchange for their poor conditions, while consultants get better conditions but harbor no such expectation.

On that psychology of work: I know people who can’t clean their own houses, not because the work is intolerable (it’s just mundane) but because they can’t stand the way they feel about themselves when doing such chores. However, a sufficient hourly rate will override that social message for almost anyone. How many people wouldn’t clean someone’s house, 100 hours per year, for 10 times their hourly wage? Such a person won’t be fulfilled by that work at any price, but that’s a different matter. It’s not hard to find someone who will be happy to perform work that most people find unpleasant, and consulting arrangements allow a price to be found. But with full-time position-holding employment, fulfillment is zero/one: either the role is who you are or it isn’t, and no hourly rate creates that middle ground. People will clean, if paid to do it, but no one wants to be a cleaner forever.

The nice thing about consulting is that the middle ground between fulfillment and misery exists. You can go and do work for someone without having to be that person’s subordinate, which means that work that is neither miserable nor fulfilling (i.e. almost all of it) can be performed without severe hedonic penalty (i.e. you don’t hate that you do it). Because of modularity and the potential for multiple employment, you can refuse an undesirable project without threatening your reputation or unrelated income streams– something that doesn’t apply in regular employment, where refusing to paint that bike-shed that hideous green-brown color will have you judged a uniform failure by your manager, even if you’re stellar on every other project. A consultant is a mercenary who works for pay, and identifies with the work only if he chooses to. He sells some of his time, but not his identity. An employee, on the other hand, is forced into a monolithic, immodular 2000-hour-per-year commitment that forces identification with the work, if only because the obligation is such a massive block (yes, the image of intestinal exertion is intentional) that it dominates the person’s life, forcing identification either in submission (Stockholm Syndrome, corporate paternalism, and the long-term seething anger of dashed expectations in those for whom management doesn’t take the promised long-term interest in their careers) or in rebellion (yours truly).

So let me tie this all together rather than continuing what threatens to become a divergent rant on employment and alienation. An employee’s main selling point is a call option written to her employer. If she matches well with the employer’s needs and its people, and if the employer continues to fulfill her desires for industrial fulfillment (which change more radically than the matter of what someone will be merely happy to do at a fair rate; the “good enough to be happy” set of work broadens with age, while fulfillment requirements go the other way and get steeper), and if the salary paid to her is kept within an acceptable margin (usually 20 to 40%) of her market value, she’ll deliver labor worth several times the strike price (her agreed-upon salary, plus marginal annual wage increases). Since there are a lot of ifs involved, the salary at which a company can justify employing her is a small fraction of her potential to render value: a mediocre salary that forces her into long-term wage-earning employment, when the value of her work at maximum potential would justify retirement after five to six years. That’s not unfair in itself; the discount is a fair price for the uncertainty, but the uncertainty is an artifact of opacity and low information quality.

Why is it like this? The truth is that the employer doesn’t participate in her long-tail upside, as it would with a genuine call option. In the worst cases, it does not exercise the option and stops employing her, but it still pays the transaction costs (warning time, severance, lawsuit risk, morale issues) associated with ending an employment relationship. In the mediocre cases (the middling 80%) it collects some multiplier on her salary: the call option is exercised, and the company wins enough to generate a modest but uninspiring profit. In the very-good cases, she performs so well that it’s impossible to keep this from translating into macroscopic visibility and popping her market value. Since it’s not a real call option (she has no obligation at all to continue furnishing work) there is no way for the company to collect. An actual call option on some slice of her time would be superior, from the corporate perspective, because it insures the company against the risk that her overperformance leads to total departure (i.e. finding another job).
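This payoff asymmetry can be sketched with a toy simulation. Every number below is an assumption chosen for illustration, not a figure from this essay: a genuine call pays max(V - K, 0) across the whole distribution of outcomes, while plain employment truncates the right tail, because the overperformer whose market value pops simply leaves.

```python
import math
import random

random.seed(42)

STRIKE = 80.0  # assumed hourly wage / strike price

def draw_value():
    # Assumed lognormal hourly value: median $100, 60% log-volatility.
    return 100.0 * math.exp(random.gauss(0, 0.6))

def option_payoff(v):
    # A real call: the holder captures all upside above the strike.
    return max(v - STRIKE, 0.0)

def employment_payoff(v, departure_threshold=200.0):
    # Plain employment: once the employee's market value pops past some
    # threshold, she departs and the employer collects nothing further.
    return 0.0 if v >= departure_threshold else max(v - STRIKE, 0.0)

n = 50_000
samples = [draw_value() for _ in range(n)]
opt = sum(option_payoff(v) for v in samples) / n
emp = sum(employment_payoff(v) for v in samples) / n
print(f"real call option: ${opt:.2f}/hour, plain employment: ${emp:.2f}/hour")
```

Whatever parameters one assumes, the employment payoff is pointwise capped by the option payoff, so the expected value to the employer is strictly lower whenever departure is possible.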

How would we value such a call option? Let’s work with a few model cases. The first is Zach, an 18-year-old recently admitted to Stanford, intending to major in computer science and with the obvious ability to complete such a course. He needs $200,000 to go to school. Let’s say that he puts the start date of the option at his rising-sophomore summer (internship) and the end date at 5 years past graduation. What’s a fair strike price? I would say that the strike price should be, in general, somewhere around 1/1500 of the person’s expected annual salary (under normal corporate employment) at the end of the exercise window. For Zach, that might be $80 per hour. The actual productive value of his time, at that point? (We can’t use a “stock price” for a Black-Scholes model, because the value of the underlying is affected by conditions including the cash infusion attendant to the sale; that’s why it’s synergistic.) I’d guess that it’s around $120, with a (multiplicative) standard deviation of 50%, which over 9 years equates to an annualized volatility of 16.7%. Using a risk-free rate of 2%, that gives the call option a Black-Scholes value of about $56. This means Zach needs to sell about 3570 hours’ worth of options to finance going to college. Assuming he can commit no more than 0.3 of a year for his four years of college, that’s 576 hours per year– not of free work, but of commitment to work at a potentially below-market “strike” price of $80 per hour. I think that’s a damn good deal for Zach, especially in comparison to student debt.
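Zach’s figure can be checked directly with the standard Black-Scholes formula. A minimal sketch in Python (using the error function for the normal CDF):

```python
import math

def norm_cdf(x):
    # Standard normal CDF, via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes_call(s, k, sigma, r, t):
    """Black-Scholes value of a European call.

    s: current value of the underlying (here, the hourly value of the work),
    k: strike price, sigma: annualized volatility,
    r: risk-free rate, t: years to expiry.
    """
    d1 = (math.log(s / k) + (r + sigma ** 2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

# Zach: $120/hour underlying, $80 strike, 16.7% vol, 2% rate, 9-year window.
zach = black_scholes_call(120, 80, 0.167, 0.02, 9)
print(f"option value: ${zach:.0f}/hour")  # ≈ $56
print(f"hours to raise $200,000: {200_000 / zach:.0f}")
```

Running it gives roughly $56 per option-hour, so about 3,600 option-hours to raise the $200,000, in line with the figures above.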

Alice is a 30-year-old programmer. She lives in Iowa City and has maxed out at $90,000 per year doing very senior-level work. The only way to move up is into management, which doesn’t appeal to her. She suspects that she could do a lot better in New York or San Francisco, but she can’t get jobs there because she doesn’t know anyone and resume-walls are broken– besides, how many VC-funded startups will hire a 30-year-old woman making $90,000?– and consulting (until this options market is built) is even more word-of-mouth-driven and broken than regular employment. She knows that she’s good. She’d like to sell 7500 hours of work to the market over the next five years. Assume the option sale is enough to kick-start her career; then her market value after five years is $250 per hour, but she sets her strike at $90. Since she’s older and her “volatility” (uncertainty in market value) is lower, let’s put her at 13% rather than Zach’s 16.7%. The fair value of her call options is $168 per hour, so she’s able to raise $1.26 million immediately: more than enough to finance her move to a new city.

Barbara is a 43-year-old stay-at-home mother whose youngest child (of five) reached six years of age. She’s no longer needed around the house, but has enough complexity in her life that full-time employment isn’t very tenable. However, she’s been intellectually active, designing websites for various local charities and organizations for a cut rate. She’s learned Python, taken a few courses on Coursera, and excelled. She wants to work on some hard programming problems, but no one will hire her because of her age and lack of “professional” experience. She decides to look for consulting work. She’s still green as a programmer, but could justify $100 per hour with access to the full market. She’s committing 1000 hours over one year, and she decides that $30/hour is the minimum hourly rate to motivate her, so she offers that as the strike. With volatility at 15% (although that’s almost irrelevant, given the low strike) she raises $71 on each option, and gets $71,000 immediately, with 1000 hours of work practically “locked in” due to the low strike price (at which anyone would retain her).

Cedar City High is a top suburban public high school in eastern Massachusetts. They’d like to have an elective course on technology entrepreneurship, and student demand is sufficient to justify two periods per day. Teaching time, including grading and preparation, will be 16 hours per week, times 40 weeks per year, for 640 hours. That’s not enough to justify a full-time teaching position, and it’d preferably be taught by someone with experience in the field. Dave is coming off yet another startup, with some successes and some failures behind him, and right now he’s decided that he wants to do something useful. He’s sick of the VC-funded, social-media nonsense. He’s not looking to get rich, but he wants to deliver some value to the community and get paid enough for it to survive. He sets a minimum strike at $70 per hour, and he’s looking for about that 640 hours of work. Based on their assessments, Cedar City agrees to pay $15 for the options and exercise them, meaning they pay $85 per hour (or $54,400 per year, less than the cost of a full-time teacher) for the work.

Emily’s a 27-year-old investment banker who has decided that she hates the hours demanded by the industry and wants out. Her last performance review was mediocre, because the monotony of the work and the hours are starting to drain her. With her knowledge of finance and technology, she knows that she’ll be killing it in the future– if she can get out of her current career trap. However, five years of 80-hour work weeks have left her stressed-out and without a network. She’ll need a six-month break to travel, but FiDi rent (she can’t live elsewhere, given her demanding work schedule) has bled her dry and she has no savings. She reckons that the long-term, five-years-out hourly value of her work– if she can get out of where she is now– is $300 per hour at median, with an annualized volatility of about 30% (she is stressed out). Unsure about her long-term career path, she offers a mere 500 hours (100 per year) with a five-year window. She sells the options at a $200/hour strike. The Black-Scholes value of them is about $139 per hour, or roughly $69,500 for the block. That gives her more than enough to finance her six months of travel, regain her normal emotional state, and find her next job.
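The later scenarios can be priced the same way. A self-contained sketch, assuming the 2% risk-free rate from Zach’s example applies throughout (the essay only states it there): Alice’s and Barbara’s figures come out as quoted, and Emily’s parameters price at about $139/hour.

```python
import math

def norm_cdf(x):
    # Standard normal CDF, via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def call_value(s, k, sigma, r, t):
    # Standard Black-Scholes European call: s = hourly value of the work,
    # k = strike, sigma = annualized volatility, r = risk-free rate, t = years.
    d1 = (math.log(s / k) + (r + sigma ** 2 / 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm_cdf(d1) - k * math.exp(-r * t) * norm_cdf(d2)

R = 0.02  # assumed risk-free rate, carried over from Zach's example

alice   = call_value(250, 90, 0.13, R, 5)    # ≈ $168/hour
barbara = call_value(100, 30, 0.15, R, 1)    # ≈ $71/hour
emily   = call_value(300, 200, 0.30, R, 5)   # ≈ $139/hour

print(f"Alice raises   ${7500 * alice:,.0f}")
print(f"Barbara raises ${1000 * barbara:,.0f}")
print(f"Emily raises   ${500 * emily:,.0f}")
```

Barbara’s low strike makes her option deep in the money, which is why her volatility barely matters: the value is essentially her hourly rate minus the discounted strike.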

So this is a good idea. That’s clear. What, pray tell, are we waiting for? As a generation, we need to build this damn thing.