Why corporate penny-shaving backfires. (Also, how to do a layoff right.)

One of the clearest signs of corporate decline (2010s Corporate America is like 1980s Soviet Russia, in terms of its low morale and lethal overextension) is the number of “innovations” that are just mean-spirited, and seem like prudent cost-cutting but actually do minimal good (and, often, much harm) to the business.

One of these is the practice of pooling vacation and sick leave in a single bucket, “PTO”. Ideally, companies shouldn’t limit vacation or sick time at all– but my experience has shown “unlimited vacation” to correlate with a negative culture. (If I ran a company, it would institute a mandatory vacation policy: four weeks minimum, at least two of those contiguous.) Vacation guidelines need to be set for the same reason that speed limits (even if intentionally under-posted, with moderate violation in mind) need to be there; without them, speed variance would be higher on both ends. So, I’ve accepted the need for vacation “limits”, at least as soft policies; but employers who expect their people either to use a vacation day for sick leave, or to come into the office while sick, are just being fucking assholes.

These PTO policies are, in my view, reckless and irresponsible. They represent a gamble with employee health that I (as a person with a manageable but irritating disability) find morally repugnant. It’s bad enough to deny rest to someone just because a useless bean-counter wants to save the few hundred dollars paid out for unused vacation when someone leaves the company. But by encouraging the entire workforce to show up while sick and contagious, they subject the otherwise healthy to an unnecessary germ load. Companies with these pooled-leave (“PTO”) policies end up with an incredibly sickly workforce. One cold just rolls right into another, and the entire month of February is a haze of snot, coughing, and bad code being committed because half the people at any given time are hopped up on cold meds and really ought to be in bed. It’s not supposed to be this way. This will shock those who suffer in open-plan offices, but an average adult is only supposed to get 2-3 colds per year, not the 4-5 that are normal in an open-plan office (another mean-spirited tech-company “innovation”) or the 7-10 per year that is typical in pooled-leave companies.

The math shows that PTO policies are a raw deal even for the employer. In a decently-run company with an honor-system sick leave policy, an average healthy adult might have to take 5 days off due to illness per year. (I miss, despite my health problems, fewer than that.) Under PTO, people push themselves to come in and only stay home if they’re really sick. Let’s say that they’re now getting 8 colds per year instead of the average 2. (That’s not an unreasonable assumption, for a PTO shop.) Only 2 or 3 days are called off, but there are a good 24-32 days in which the employee is functioning below 50 percent efficiency. Then there are the morale issues, and the general perception that employees will form of the company as a sickly, lethargic place; and the (mostly unintentional) collective discovery of how low a level of performance will be tolerated. January’s no longer about skiing on the weekends and making big plans and enjoying the long golden hour… while working hard, because one is refreshed. It’s the new August; fucking nothing gets done because even though everyone’s in the office, they’re all fucking sick with that one-rolls-into-another months-long cold. That’s what PTO policies bring: a polar vortex of sick.
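The arithmetic above can be sketched out. This is a back-of-the-envelope comparison using the essay’s assumed numbers (5 honest sick days versus roughly 2.5 days called off plus 28 days worked at half efficiency); the figures are assumptions, not measured data:

```python
# Back-of-the-envelope comparison of effective working days lost per
# employee per year. All inputs are the essay's assumptions, not data.

def effective_days_lost(days_absent, impaired_days, impaired_efficiency):
    """Full days absent, plus output lost on days worked while sick."""
    return days_absent + impaired_days * (1 - impaired_efficiency)

# Honor-system sick leave: ~5 full days off, negligible presenteeism.
honor_system = effective_days_lost(days_absent=5, impaired_days=0,
                                   impaired_efficiency=1.0)

# Pooled PTO: ~2.5 days off, ~28 days worked at roughly 50% efficiency.
pooled_pto = effective_days_lost(days_absent=2.5, impaired_days=28,
                                 impaired_efficiency=0.5)

print(honor_system)  # 5.0
print(pooled_pto)    # 16.5
```

Even with generous rounding in the company’s favor, the presenteeism policy loses about three times as many effective days; and that’s before counting the colds passed to coworkers.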

Why, if they’re so awful, do companies use them? Because HR departments often justify their existence by externalizing costs elsewhere in the company, and claiming they saved money. So-called “performance improvement plans” (PIPs) are a prime example of this. The purpose of the PIP is not to improve the employee. Saving the employee would require humiliating the manager, and very few people have the courage to break rank like that. Once the PIP is written, the employee’s reputation is ruined, making mobility or promotion impossible. The employee is stuck in a war with his manager (and, possibly, team) that he will almost certainly lose, but he can make others lose along the way. To the company, a four-month severance package is far cheaper than the risk that comes with having a “walking dead” employee in the office for a month, pissing all over morale and possibly sabotaging the business. So why do PIPs, which don’t even work for their designed intention (legal risk mitigation) unless designed and implemented by extremely astute legal counsel, remain common? Well, PIPs are a loss to the company, even compared to “gold-plated” severance plans. We’ve established that. But they allow the HR department to claim that it “saved money” on severance payments (a relatively small operational cost, except when top executives are involved) while the costs are externalized to the manager and team that must deal with a now-toxic (and, if already toxic before the PIP, now overtly destructive) employee. PTO policies work the same way. The office becomes lethargic, miserable, and sickly, but HR can point to the few hundred dollars saved on vacation payouts and call it a win.

On that, it’s worth noting that these pooled-leave policies aren’t actually about sick employees. People between the ages of 25 and 50 don’t get sick that often, and companies don’t care about that small loss. However, their children, and their parents, are more likely to get sick. PTO policies aren’t put in place to punish young people for getting colds. They’re there to deter people with kids, people with chronic health problems, and people with sick parents from taking the job. Like open-plan offices and the anxiety-inducing micromanagement often given the name of “Agile”, it’s back-door age and disability discrimination. The company that institutes a PTO policy doesn’t care about a stray cold; but it doesn’t want to hire someone with a special-needs child. Even if the latter is an absolute rock star, the HR department can justify itself by saying it helped the company dodge a bullet.

Let’s talk about cost cutting more generally, because I’m smarter than 99.99% of the fuckers who run companies in this world and I have something important to say.

Companies don’t fail because they spend too much money. “It ran out of money” is the proximate cause, not the ultimate one. Some fail when they cease to excel and inspire (but others continue beyond that point). Some fail, when they are small, because of bad luck. Mostly, though, they fail because of complexity: rules that don’t make sense and block useful work from being done, power relationships that turn toxic and, yes, recurring commitments and expenses that can’t be afforded (and must be cut). Cutting complexity rather than cost should be the end goal, however. I like to live with few possessions not because I can’t afford to spend the money (I can) but because I don’t want to deal with the complexity that they will inject into my life. It’s the same with business. Uncontrolled complexity will cause uncontrolled costs and ultimately bring about a company’s demise. What does this mean about cutting costs, which MBAs love to do? Sometimes it’s great to cut costs. Who doesn’t like cutting “waste”? The problem there is that there actually isn’t much obvious waste to be cut, so after that, one has to focus and decide on which elements of complexity are unneeded, with the understanding that, yes, some people will be hurt and upset. Do we need to compete in 25 businesses, when we’re only viable in two? This will also cut costs (and, sadly, often jobs).

The problem, see, is that most of the corporate penny-shaving increases complexity. A few dollars are saved, but at the cost of irritation and lethargy and confusion. People waste time working around new rules intended to save trivial amounts of money. The worst is when a company cuts staff but refuses to reduce its internal complexity. This requires a smaller team to do more work– often, unfamiliar work that they’re not especially good at or keen on doing; people were well-matched to tasks before the shuffle, but that balance has gone away. The career incoherencies and personality conflicts that emerge are… one form of complexity.

The problem is that most corporate executives are “seagull bosses” (swoop, poop, and fly away) who see their companies and jobs in a simple way: cut costs. (Increasing revenue is also a strategy, but that’s really hard in comparison.) A year later, the company is still failing not because it failed to cut enough costs or people, but because it never did anything about the junk complexity that was destroying it in the first place.

Let’s talk about layoffs. The growth of complexity is often exponential, and firms inevitably get to a place where they are too complex (a symptom of which is that operations are too expensive) to survive. The result is that they need to lay people off. Now, layoffs suck. They really fucking do. But there’s a right way and a wrong way to execute one. To do a layoff right, the company needs to cut complexity and cut people. (Otherwise, it will have more complexity per capita, the best people will get fed up and leave, and the death spiral begins.) It also needs to cut the right complexity: all the stuff that isn’t useful.

Ideally, the cutting of people and cutting of complexity would be tied together. Unnecessary business units being cut usually means that people staffed on them are the ones let go. The problem is that that’s not very fair, because it means that good people, who just happened to be in the wrong place, will lose their jobs. (I’d argue that one should solve this by offering generous severance, but we already know why that isn’t a popular option, though it should be.) The result is that when people see their business area coming into question, they get political. Of course this software company needs a basket-weaving division! In-fighting begins. Tempers flare. From the top, the water gets very muddy and it’s impossible to see what the company really looks like, because everyone’s feeding biased information to the executives. (I’m assuming that the executive who must implement the cuts is acting in good faith, which is not always true.) What this means is that the crucial decision– what business complexity are we going to do without?– can’t be subject to a discussion. Debate won’t work. It will just get word out that job cuts are coming, and political behavior will result. The horrible, iron fact is that this calls for temporary autocracy. The leader must make that call in one fell swoop. No second guessing, no looking back. This is the change we need to make in order to survive. Good people will be let go, and it really sucks. However, seeing as it’s impossible to execute a large-scale layoff without getting rid of some good people, I think the adult thing to do is write generous severance packages.

Cutting complexity is hard. It requires a lot of thought. Given that the information must be gathered by the chief executive without tipping anyone off, and that complex organisms are (by definition) hard to factor, it’s really hard to get the cuts right. Since the decision must be made on imperfect information, it’s a given that it usually won’t be the optimal cut. It just has to be good enough (that is, removing enough complexity with minimal harm to revenue or operations) that the company is in better health.

Cutting people, on the other hand, is much easier. You just tell them that they don’t have jobs anymore. Some don’t deserve it, some cry, some sue, and some blog about it but, on the whole, it’s not actually the hard part of the job. This provides, as an appealing but destructive option, the lazy layoff. In a lazy layoff, the business cuts people but doesn’t cut complexity. It just expects more work from everyone. All departments lose a few people! All “survivors” now have to do the work of their fallen brethren! The too-much-complexity problem, the issue that got us to the layoff in the first place… will figure itself out. (It never does.)

Stack ranking is a magical, horrible solution to the problem. What if one could do a lazy layoff but always cull the “worst” people? After all, some people are of negative value, especially considering the complexity load (in personality conflicts, shoddy work) they induce. The miracle of stack ranking is that it turns a layoff– otherwise, a hard decision guaranteed to put some good people out of work– into an SQL query. SELECT name FROM Employee WHERE perf <= 3.2. Since the soothsaying of stack ranking has already declared the people let go to be bottom-X-percent performers, there’s no remorse in culling them. They were “dead weight”. Over time, stack ranking evolves into a rolling, continuous lazy layoff (“rank-and-yank”).
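The “layoff as a query” logic is trivially easy to mechanize, which is exactly the danger. A minimal sketch, with invented names, scores, and the essay’s illustrative 3.2 cutoff:

```python
# The essay's SELECT name FROM Employee WHERE perf <= 3.2, as code.
# Employee records and the cutoff are illustrative, not real data.
employees = [
    {"name": "Alice", "perf": 3.9},
    {"name": "Bob",   "perf": 3.1},
    {"name": "Cara",  "perf": 3.2},
    {"name": "Dan",   "perf": 4.5},
]

CUTOFF = 3.2
culled = [e["name"] for e in employees if e["perf"] <= CUTOFF]
print(culled)  # ['Bob', 'Cara']
```

The point is not that the query is hard to write; it’s that a one-line filter over soothsaid numbers carries none of the judgment a real layoff decision requires.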

It’s also dishonest. There are an ungodly number of large technology companies (over 1,000) that claim to have “never had a layoff”. That just isn’t fucking true. Even if the CEO were Jesus Christ himself, he’d have to lay people off, because that’s just how business works. Tech-company sleazes just refuse to use the word “layoff”, for fear of losing their “always expanding, always looking for the best talent!” image. So they call it a “low performer initiative” (stack ranking, PIPs, eventual firings). What a “low-performer initiative” (or stack ranking, which is a chronic LPI) inevitably devolves into is a witch hunt that turns the organization into pure House of Cards politics. Yes, most companies have about 10 percent who are incompetent or toxic or terminally mediocre and should be sent out the door. Figuring out which 10 percent those people are is not easy. People who are truly toxic generally have several years’ worth of experience drawing a salary without doing anything, and that’s a skill that improves with time. They’re really good at sucking (and not getting caught). They’re adept political players. They’ve had to be; the alternative would have been to grow a work ethic. Most of what we as humans define as social acceptability is our ethical immune system, which can catch and punish the small-fry offenders but can’t do a thing about the cancer cells (psychopaths, parasites) that have evolved to the point of being able to evade or even redirect that rejection impulse. The question of how to get that toxic 10 percent out is an unsolved one, and I don’t have space to tackle it now, but the answer is definitely not stack ranking, which will always clobber several unlucky good-faith employees for every genuine problem employee it roots out.

Moreover, stack ranking has negative permanent effects. Even when not tied to a hard firing percentage, its major business purpose is still to identify the bottom X percent, should a lazy layoff be needed. It’s a reasonable bet that unless things really go to shit, X will be 5 or 10 or maybe 20– but not 50. So stack ranking is really about the bottom. The difference between the 25th percentile and 95th percentile, in stack ranking, really shouldn’t matter. Don’t get me wrong: a 95th-percentile worker is often highly valuable and should be rewarded. I just don’t have any faith in the ability of stack ranking to detect her, just as I know some incredibly smart people who got mediocre SAT scores. Stack ranking is all about putting people at the bottom, not the top. (Top performers don’t need it and don’t get anything from it.)

The danger of garbage data (and, #YesAllData generated by stack ranking is garbage) is that people tend to use it as if it were truth. The 25th-percentile employee isn’t bad enough to get fired… but no one will take him for a transfer, because the “objective” record says he’s a slacker. The result of this– in conjunction with closed allocation, which is already a bad starting point– is permanent internal immobility. People with mediocre reviews can’t transfer because the manager of the target team would prefer a new hire (with no political strings attached) over a sub-50th-percentile internal. People with great reviews don’t transfer for fear of upsetting the gravy train of bonuses, promotions, and managerial favoritism. Team assignments become permanent, and people divide into warring tribes instead of collaborating. This total immobility also makes it impossible to do a layoff the right way (cutting complexity) because people develop extreme attachments to projects and policies that, if they were mobile and therefore disinterested, they’d realize ought to be cut. It becomes politically intractable to do the right thing, or even for the CEO to figure out what the right thing is. I’d argue, in fact, that performance reviews shouldn’t be part of a transfer packet at all. The added use of questionable, politically-laced “information” is just not worth the toxicity of putting that into policy.

A company with a warring-departments dynamic might seem like a streamlined, efficient, and (most importantly) less complex company. It doesn’t have the promiscuous social graph you might expect to see in an open allocation company. People know where they are, who they report to, and who their friends and enemies are. The problem with this insight is that there’s hot complexity and cold complexity. Cold complexity is passive and occasionally annoying, like a law from 1890 that doesn’t make sense and is effectively never enforced. When people collaborate “too much” and the social graph of the company seems to have “too many” edges, there’s some cold complexity there. It’s generally not harmful. Open allocation tends to generate some cold complexity. Rather than metastasize into an existential threat to the company, it will fade out of existence over time. Hot complexity, which usually occurs in an adversarial context, is a kind that generates more complexity. Its high temperature means there will be more entropy in the system. Example: a conflict (heat) emerges. That, alone, makes the social graph more complex because there are more edges of negativity. Systems and rules are put in place to try to resolve it, but those tend to have two effects. First, they bring more people (those who had no role in the initial conflict, but are affected by the rules) into the fights. Second, the conflicting needs or desires of the adversarial parties are rarely addressed, so both sides just game the new system, which creates more complexity (and more rules). Negativity and internal competition create the hot complexity that can ruin a company more quickly than an executive (even if acting with the best intentions) can address it.

Finally, one thing worth noting is the Welch Effect (named for Jack Welch, the inventor of stack-ranking). It’s one of my favorite topics because it has actually affected me. The Welch Effect pertains to the fact that when a broad-based layoff occurs, the people most likely to be let go aren’t the worst (or best) performers, but the newest members of macroscopically underperforming teams. Layoffs (and stack ranking) generally propagate down the hierarchy. Upper management disburses bonuses, raises, and layoff quotas based on the macroscopic performance of the departments under it, and at each level, the node operators (managers) slice the numbers based on how well they think each suborganization did (plus or minus various political modifiers). At the middle-management layer, one level separated from the non-managerial “leaves”, it’s the worst-performing teams that have to vote the most people off the island. It tends to be those most recently hired who get the axe. For that middle manager, this isn’t especially unfair or wrong; there’s often no better way to do it than to strike the least-embedded, least-invested junior hire.
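The propagation described above can be caricatured in a few lines. This toy sketch, with invented teams, scores, and tenures, shows how the quota lands on the macroscopically worst team and then strikes its newest member; individual performance never enters the calculation:

```python
# Toy illustration of the Welch Effect. Teams, scores, and tenures are
# invented for illustration; members are (name, years of tenure) pairs.

teams = {
    "alpha": {"score": 0.9, "members": [("Priya", 6.0), ("Quentin", 3.0)]},
    "beta":  {"score": 0.4, "members": [("Rosa", 5.0), ("Sam", 0.5)]},
}

# The layoff quota propagates down to the worst-scoring team...
worst_team = min(teams, key=lambda t: teams[t]["score"])

# ...which votes off whoever was hired most recently.
laid_off = min(teams[worst_team]["members"], key=lambda m: m[1])[0]

print(worst_team, laid_off)  # beta Sam
```

Note that nothing in the selection looks at what Sam actually did; team score and tenure fully determine the outcome, which is the essay’s point.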

The end result of the Welch Effect, however, is that the people let go are often those who had the least to do with their team’s underperformance. (It may be a weak team, it may be a good team with a bad manager, or it may be an unlucky team.) They weren’t even there for very long! It doesn’t cause the firm to lay off good people, but it doesn’t help it lay off bad people either. It has roughly the same effect as a purely seniority-based layoff, for the company as a whole. Random new joiners are the ones who are shown out the door. It’s bad to lose them, but it rarely costs the company critical personnel. Its effect on that team is more visibly negative: teams that lose a lot of people during layoffs get a public stink about them, and people lose the interest in joining or even helping them– who wants to work for, or even assist, a manager who can’t protect his people?– so the underperforming team becomes even more underperforming. There are also morale issues with the Welch Effect. When people who recently joined lose their jobs (especially if they’re fired “for performance” without a severance) it makes the company seem unfair, random, and capricious. The ones let go were the ones who never had the chance to prove themselves. In a one-off layoff, this isn’t so destructive. The Welch Effected usually move on to better jobs anyway. However, when a company lays off in many small cuts, or disguises a layoff as a “low-performer initiative”, the Welch Effect firings demolish belief in meritocracy.

That, right there, explains why I get so much flak over how I left Google. Technically, I wasn’t fired. But I had a disliked, underdelivering manager who couldn’t get calibration points for his people (a macroscopic issue that I had nothing to do with) and I was the newest on the team, so I got a bad score (despite being promised a reasonable one– a respectable 3.4, if it matters– by that manager). Classic Welch Effect. I left. After I was gone I “leaked” the existence of stack ranking within Google. I wasn’t the first to mention that it existed there, but I publicized it enough to become the (unintentional) slayer of Google Exceptionalism and, to a number of people I’ve never met and to whom I’ve never done any wrong, Public Enemy #1. I was a prominent (and, after things went bad, fairly obnoxious) Welch Effectee, and my willingness to share my experience changed Google’s image forever. It’s not a disliked company (nor should it be) but its exceptionalism is gone. Should I have done all that? Probably not. Is Google a horrible company? No. It’s above average for the software industry (which is not an endorsement, but not damnation either.) Also, my experiences are three years old at this point, so don’t take them too seriously. As of November 2011, Google had stack ranking and closed allocation. It may have abolished those practices and, if it has, then I’d strongly recommend it as a place to work. It has some brilliant people and I respect them immensely.

In an ideal world, there would be no layoffs or contractions. In the real world, layoffs have to happen, and it’s best to do them honestly (i.e. don’t shit on departing employees’ reputations by calling it a “low performer initiative”). As with more minor forms of cost-cutting (e.g. new policies encouraging frugality) it can only be done if complexity (that being the cause of the organization’s underperformance) is reduced as well. That is the only kind of corporate change that can reverse underperformance: complexity reduction.

If complexity reduction is the only way out, then why is it so rare? Why do companies so willingly create personnel and regulatory complexity just to shave pennies off their expenses? I’m going to draw from my (very novice) Buddhist understanding to answer this one. When the clutter is cleared away… what is left? Phrases used to define it (“sky-like nature of the mind”) only explain it well to people who’ve experienced it. Just trust me that there is a state of consciousness that can be attained when gross thoughts are swept away, leaving something more pure and primal. Its clarity can be terrifying, especially the first time it is experienced. I really exist. I’m not just a cloud of emotions and thoughts and meat. (I won’t get into death and reincarnation and nirvana here. That goes farther than I need, for now. Qualia, or existence itself, as opposed my body hosting some sort of philosophical zombie, is both miraculous and the only miracle I actually believe in.) Clarity. Essence. Those are the things you risk encountering with simplicity. That’s a good thing, but it’s scary. There is a weird, paradoxical thing called “relaxation-induced anxiety” that can pop up here. I’ve fought it (and had some nasty motherfuckers of panic attacks) and won and I’m better for my battles, but none of this is easy.

So much of what keeps people mired in their obsessions and addictions and petty contests is an aversion to confronting what they really are, a journey that might harrow them into excellence. I am actually going to age and die. Death can happen at any time, and almost certainly it will feel “too soon”. I have to do something, now, that really fucking matters. This minute counts, because I may not get another in this life. People are actually addicted to their petty anxieties that distract them from the deeper but simpler questions. If you remove all the clutter on the worktable, you have to actually look at the table itself, and you have to confront the ambitions that impelled you to buy it, the projects you imagined yourself using it for (but that you never got around to). This, for many people, is really fucking hard. It’s emotionally difficult to look at the table and confront what one didn’t achieve, and it’s so much easier to just leave the clutter around (and to blame it).

Successful simplicity leads to, “What now?” The workbench is clear; what are we going to do with it? For an organization, such simplicity risks forcing it to contend with the matter of its purpose, and the question of whether it is excelling (and, relatedly, whether it should). That’s a hard thing to do for one person. It’s astronomically more difficult for a group of people with opposing interests, and among whom excellence is sure to be a dirty word (there are always powerful people who prefer rent-seeking complacency). It’s not surprising, then, that most corporate executives say “fuck it” on the excellence question and, instead, decide it suffices to earn their keep to squeeze employees with mindless cost-cutting policies: pooled sick leave and vacation, “employee contributions” on health plans, and other hot messes that just ruin everything. It feels like something is getting done, though. Useless complexity is, in that way, existentially anxiolytic and addictive. That’s why it’s so hard to kill. But it, if allowed to live, will kill. It can enervate a person into decoherence and inaction, and it will reduce a company to a pile of legacy complexity generated by self-serving agents (mostly, executives). Then it falls under the MacLeod-Gervais-Rao-Church theory of the nihilistic corporation; the political whirlpool that remains once an organization has lost its purpose for existing.

At 4528 words, I’ve said enough.

Corporate atheism

Legally and formally, a corporation is a person, with the same rights (life, liberty, and property) that a human is accorded. Whether this is good is hotly debated.

In theory, I like the corporate veil (protection of personal property, reputation, and liberty in cases of good-faith business failure and bankruptcy) but I don’t see it playing well in practice. If you need $400,000 in bank loans to start your restaurant, you’ll still be expected to take on personal liability, or you won’t get the loan. I don’t see corporate personhood doing what it’s supposed to for the little guys. It seems to work only for those with the most powerful lobbyists. (Prepare for rage.) Health insurance companies cannot be sued, not even for the amount of the claim, if their denial of coverage causes financial hardship, injury, or death. (If a health-insurance executive is sitting next to you, I give you permission to beat the shit out of him.) On the other hand, a restaurant proprietor or software freelancer who makes mistakes on his taxes can get seriously fucked up by the IRS. I’m a huge fan of protecting genuine entrepreneurs from the consequences of good-faith failure. As for cases of bad-faith failure among corrupt, social-climbing, private-sector bureaucrats populating Corporate America’s upper ranks, well… not as much. Unfortunately, the corporate veil in practice seems to protect the rich and well-connected from the consequences of some enormous crimes (environmental degradation, human rights violations abroad, etc.) I can’t stand for that.

On the corporation, it’s clearly not a person like you or me. It can’t be imprisoned. It can be fined heavily (loss of status and belief) but not executed. It has immense power, if for no other reason than its reputation among “real” physical people, but no empirically discernible will, so we must trust its representatives (“executives”) to know it. It tends to operate in ways that are outside of mortal humans’ moral limitations because it is nearly immune from punishment, and a fair amount of its bad behavior will be justified. The worst that can happen to it is gradual erosion of status and reputation. A mere mortal who behaved as it does would be called a psychopath, but it somehow enjoys a high degree of moral credibility in spite of its actions. (Might makes right.) Is that a person, a flesh-and-blood human? Nope. That’s a god. Corporations don’t die because they “run out of money”. They die because people stop believing in them. (Financial capitalism accelerates the disbelief process, one that used to require centuries.) Their products and services are no longer valued on divine reputation, and their ability to finance operations fails. It takes a lot of bad behavior for most humans to dare disbelieve in a trusted god. Zeus was a rapist, and the literal Yahweh genocidal, and they still enjoyed belief for thousands of years.

“God” is a loaded word, because some people will think I’m talking about their concept of a god. (This diversion isn’t useful here, but I’m actually not an atheist.) I have no issue with the philosophical concept of a supreme being. I’m actually talking about the historical artifacts, such as Zeus or Ra or Odin or (I won’t pick sides) the ones believed in today. I do have an issue with those, because their political effects on real, physical humans can be devastating. It’s not controversial in 2014 that most of these gods don’t exist– and it probably won’t be controversial in 5014 that the literal Jehovah/Allah doesn’t exist– but people believed in them at a time, and no longer do. When they were believed to be real, they (really, their human mouthpieces) could be more powerful than kings.

The MacLeod model of the organization separates it into three tiers. The Losers (checked-out grunts) are the half-hearted believers who might suspect that their chosen god doesn’t exist, but would never say it at the dinner table. The Clueless (unconditional overperformers who lack strategic vision and top out in middle management) are the zealots of the low priesthood, who clean the temple bathrooms. Not only do they believe, but they’re the ones who work to make blind faith look like a virtue. At the top are the Sociopaths (executives) who often aren’t believers themselves, but who enjoy the prosperity and comfort of being at the highest levels of the priesthood and, unless their corruption becomes obnoxiously obvious, being trusted to speak for the gods. The fact that this nonexistent being never punishes them for acting badly means there is virtually no check on their increasing abuse of “its” power.

Ever since humans began inventing gods, others have not believed in them. Atheism isn’t a new belief we can pin on (non-atheistic scientist) Charles Darwin. Many of the great Greek philosophers were atheists, to start. Buddha was, arguably, an atheist and Buddhism is theologically agnostic to this day. Socrates may not have thought himself an atheist, but one of his major “crimes” was disbelief in the literal Greek gods. In truth, I would bet that the second human ever to speak on anthropomorphic, supernatural beings said, “You’re full of shit, Asshole”. (Those may, however, have been his last words.) There’s nothing to be ashamed of in disbelief. Many of the high priests (MacLeod Sociopaths) are, themselves, non-believers!

I’m a corporate atheist and a humanist. My stance is radical. To hear most people tell it, these gods (and not the people doing all the damn work) are the engines of progress and innovation. People who are not baptized and blessed by them (employment, promotion, good references) are judged to be filthy, and “natural” humans deserve shame (original sin). If you don’t have their titles and accolades, your reputation is shit and you are disenfranchised from the economy. This enables them to act as extortionists, just as religious authorities could extract tribute not because those supernatural beings existed (they never did) but because they possessed the political and social ability to banish and to justify violence.

I’m sorry, but I don’t fucking agree with any of that.

Look-ahead: a likely explanation for female disinterest in VC-funded startups.

There’s been quite a bit of cyber-ink flowing on the question of why so few women are in the software industry, especially at the top, and especially in VC-funded startups. Paul Graham’s comments on the matter, being taken out of context by The Information, made him a lightning rod. There’s a lot of anger and passion around this topic, and I’m not going to do all of that justice. Why are there almost no female venture capitalists, few women being funded, and not many women rising in technology companies? It’s almost certainly not a lack of ability. Philip Greenspun argues that women avoid academia because it’s a crappy career. He makes a lot of strong points, and that essay is very much worth reading, even if I think a major factor (discussed here) is underexplored.

Why wouldn’t this fact (of academia being a crap career) also make men avoid it? If it’s shitty, isn’t it going to be avoided by everyone? Often cited is a gendered proclivity toward risk. People who take unhealthy and outlandish risks (such as by becoming drug dealers) tend to be men. So do overconfident people who assume they’ll end up on top of a vicious winner-take-all game. The outliers on both ends of society tend to be male. As a career with subjective upsides (prestige in addition to a middle-class salary) and severe, objective downsides, academia appeals to a certain type of quixotic, clueless male. Yet making bad decisions is hardly a trait of one gender. Also, we don’t see 1.5 or 2 times as many high-power (IQ 140+) men making bad career decisions. We probably see 10 times as many doing so; Silicon Valley is full of quixotic young men wasting their talents to make venture capitalists rich, and almost no women, and I don’t think that difference can be explained by biology alone.

I’m going to argue that a major component of this is not a biological trait of men or women, but an emergent property from the tendency, in heterosexual relationships, for the men to be older. I call this the “Look-Ahead Effect”. Heterosexual women, through the men they date, see doctors buying houses at 30 and software engineers living paycheck-to-paycheck at the same age. Women face a number of disadvantages in the career game, but they have access to a kind of high-quality information that prevents them from making bad career decisions. Men, on the other hand, tend to date younger women covering territory they’ve already seen.

When I was in a PhD program (for one year) I noticed that (a) women dropped out at higher rates than men, and (b) dropping out (for men and women) had no visible correlation with ability. One more interesting fact pertained to the women who stayed in graduate school: they tended either to date (and marry) younger men, or same-age men within the department. Academic graduate school is special in this analysis. When women don’t have as much access to later-in-age data (because they’re in college, and not meeting many men older than 22) a larger number of them choose the first career step: a PhD program. But the first year of graduate school opens their dating pool up again to include men 3-5 years older than them (through graduate school and increasing contact with “the real world”). Once women start seeing first-hand what the academic career does to the men they date– how it destroys the confidence even of the highly intelligent ones who are supposed to find a natural home there– most of them get the hell out.

Men figure this stuff out, but a lot later, and usually at a time when they’ve lost a lot of choices due to age. The most prestigious full-time graduate programs won’t accept someone near or past 30, and the rest don’t do enough for one’s career to offset the opportunity cost. Women, on the other hand, get to see (through the guys they date) a longitudinal survey of the career landscape when they can still make changes.

I think it’s obvious how this applies to all the goofy, VC-funded startups in the Valley. Having a 5-year look ahead, women tend to realize that it’s a losing game for most people who play, and avoid it like the plague. I can’t blame them in the least. If I’d had the benefit of 5-year look-ahead, I wouldn’t have spent the time I did in VC-istan startups either. I did most of that stuff because I had no foresight, no ability to look into the future and see that the promise was false and the road led nowhere. If I had retained any interest in VC-istan at that age (and, really, I don’t at this point) I would have become a VC while I was young enough that I still could. That’s the only job in VC-istan that makes sense.

The U.S. conservative movement is a failed eugenics project. Here’s why it could never have worked.

At the heart of the U.S. conservative movement, and most religious conservative movements, is a reproductive agenda. Old-style religious meddling in reproduction had a strong “make more of us” character to it– resulting in blanket policies designed to encourage reproduction across a society– but the later incarnations of right-wing authoritarianism, especially as they have mostly divorced themselves from religion, have been oriented more strongly toward goals judged to be eugenic, or to favor the reproduction of desirable individuals and genes; instead of a broad-based “make more of us” tribalism, it becomes an attempt to control the selection process.

The term eugenics has an ugly reputation, much earned through history, but let me offer a neutral definition of the term. Eugenics (“good genes”) is the idea that we should consciously control the genetic component of what humans are born into the world. It is not a science, since the definition of eu- is intensely subjective. As “eugenics” has been used throughout history to justify blatant racism and murder, the very concept has a negative reputation. That said, strong arguments can be made in favor of certain mild, elective forms of eugenics. For example, subsidized or free higher education is (although there are other intents behind it) a socially acceptable positive eugenic program: removal of a dysgenic economic force (education costs, usually borne by parents) that, empirically speaking, massively reduces fertility among the most capable people while having no effect on the least capable.

The eugenic impulse is, in truth, fairly common and rather mundane. The moral mainstream seems to agree that eugenics (if not given that stigmatized name) is morally acceptable when participation is voluntary (i.e. no one is forced to reproduce, or not to do so) and positive (i.e. focused on encouraging desirable reproduction, rather than discouraging those deemed “unwanted”) but unacceptable when involuntary (coercive or prohibitive) and negative. The only socially accepted (and often legislated) case of negative and often prohibitive eugenics is the universal taboo against incest. That one has millennia of evolution behind it, and is also fair (i.e. it doesn’t single out people as unwanted, but prohibits intrafamilial couplings, known to produce unhealthy offspring, in general) so it’s somewhat of a special case.

Let’s talk about the specific eugenics of the American right wing. The obsessions over who has sex with whom, the inconsistency between hard-line, literal Christianity and the un-Christ-like rightist economics, and all of the myriad mean-spirited weirdnesses (such as U.S. private health insurance, a monster that even most conservatives loathe at this point) that make up the U.S. right-wing movement– all are tied to a certain eugenic agenda, even if the definition of “eu-” is left intentionally vague. In addition to lingering racism, the American right wing unifies two varieties (one secular, one religious) of the same idea: Social Darwinism and predestination-centric Calvinism. This amalgam I would call Social Calvinism. The problem with it is that it doesn’t make any sense. It fails on its own terms, and the religious color it allowed itself to gain has only deepened its self-contradiction, especially now that sexuality and reproduction have been largely separated by birth control.

In the West, religion has always held strong opinions on reproduction, because the dominant religious forces are those that were able to out-populate the others. “Be fruitful and multiply.” This “us versus them” dynamic had a certain positive (in the sense of “positive eugenics”; I don’t mean to call it “good”) but coercive flair to it. The religious society sought much more strongly to increase its numbers within the world than to differentially or absolutely discourage reproduction by individuals judged as undesirable within its numbers. That said, it still had some ugly manifestations. One prominent one is the traditional Abrahamic religions’ intolerance of homosexuality and non-reproductive sex in general. In modern times, homophobia is pure ignorant bigotry, but its original (if subconscious) intention was to make a religious society populate quickly, which put it at odds with nonreproductive sexuality of all forms.

Predestination (for which Calvinism is known) is a concept that emerged, much later, when people did something very dangerous to literalist religion: they thought about it. If you take religious literalism– born in the illogical chaos of antiquity– and bring it to its logical conclusions, funny things happen. An all-knowing and all-powerful God would, one can reason, have full knowledge and authority over every soul’s final destiny (heaven or hell). This meant that some people were pre-selected to be spiritual winners (the Elect) and the rest were refuse, born only to live through about seven decades of sin, followed by an eternity of unimaginable torture.

Perhaps surprisingly, predestination seemed to have more motivational capacity than the older, behavior-driven morality of Catholicism. Why would this be? People are loath to believe in something as horrible as eternal damnation for themselves (even if some enjoy the thought for others) and so they will assume themselves to be Elect. But since they’re never quite sure, bad behavior will unsettle them with a creepy cognitive dissonance that is far more effective than ratiocination about punishments and rewards. The behavior-driven framework of the Catholic Church (donations in the form of indulgences often came with specific numbers of years by which time in purgatory was reduced) allows that a bad action can be cancelled out with future good actions, making the afterlife merely an extension of the “if I do this, then I get that” hedonic calculus. Calvinism introduced a fear of shame. Bad actions might be a sign of being one of those incorrigibly bad, damned people.

Calvinist predestination was not a successful meme (and even many of those who identify themselves in modern times as Calvinists have largely rejected it). “Our God is a sadistic asshole; he tortures people eternally for being born the wrong way” is not a selling point for any religion. That said, the idea of natural (as opposed to spiritual) predestination, as well as the Calvinist evolution from guilt-based (Catholicism) to shame-based (Calvinist) Christian morality, have lived on in American society.

Fundamental to the morality of capitalism is that some actors make better uses of resources than others (which is not controversial) and deserve to have more (likewise, not controversial). Applied to humans, this is generally if uneasily accepted; applied to organizations, it’s an obvious truth (no one wants to see the subsistence of inefficient, pointless institutions). Calvinism argued that one’s pre-determined status (as Elect or damned) could be ascertained from one’s actions; conservative capitalism argues that an actor’s (largely innate and naturally pre-determined) value can be ascertained by its success on the market.

Social Darwinism (which Charles Darwin vehemently rejected) gave a fully secular and scientific-sounding basis for these threads of thought, which were losing religious steam by the end of the 19th century. The idea that market mechanics and “creative destruction” ought to apply to institutions, patterns of behavior, and especially business organizations is controversial to almost no one. Incapable and obsolete organizations, whose upkeep costs have exceeded their social value, should die in order to free up room for newer ones. Where there is immense controversy is what should happen to people when they fail, economically. Should they starve to death in the streets? Should they be fed and clothed, but denied health care, as in the U.S.? Or should they be permitted a lower-middle-class existence by a welfare state, allowing them to recover and perhaps have another shot at economic success? The Social Darwinist seeks not to kill failed individuals per se, but to minimize their effect on society. It might be better to feed them than have them rebel, but allowing their medical treatment (on the public dime) is a bridge too far (if they’re sick, they can’t take up arms). It’s not about sadism per se, but effect minimization: to end their cultural and economic (and possibly physical) reproduction. It is a cold and fundamentally statist worldview. Where it dovetails with predestination is in the idea that certain innately undesirable people, damned early on if not from birth, deserve to be met with full effect minimization (e.g. long prison sentences since there is no hope of rehabilitation; persistent poverty because any resources given to them, they will waste) because any effect they have on the world will be negative. Whether they are killed, imprisoned, enslaved, or merely marginalized generally comes down to what is most convenient– and, therefore, effect-minimizing– and that is an artifact of what a society considers socially acceptable.

If we understand Calvinist predestination, and Social Darwinism as well, we can start to see a eugenic plan forming. Throughout almost all of our evolutionary history, prosperity and fecundity were correlated. Animals that won and controlled resources passed along their genes; those that couldn’t do so, died out. Social Darwinism, at the heart of the American conservative movement, believes that this process should continue in human society. More specifically, it holds to a few core tenets. First is that individual success in the market is a sign of innate personal merit. Second is that such merit is, at least partly, genetic and predetermined. Few would hold this correlation to be absolute, but the Social Darwinist considers it strong enough to act on. Third is that prosperity and fertility will, as they have over the billion years before modern civilization, necessarily correlate. The aspects of Social Darwinist policy that seem mean-spirited are justified by this third tenet: the main threat that a welfare state poses is that these poor (and, according to this theory, undesirable) people will take that money and breed. South Carolina’s Republican Lieutenant Governor, Andre Bauer, made this attitude explicit:

My grandmother was not a highly educated woman, but she told me as a small child to quit feeding stray animals. You know why? Because they breed. You’re facilitating the problem if you give an animal or a person ample food supply. They will reproduce, especially ones that don’t think too much further than that. And so what you’ve got to do is you’ve got to curtail that type of behavior. They don’t know any better.

The hydra of the American right wing has many heads. It’s got the religious Bible-thumping ones, the overtly racist ones, and the pseudoscientific and generally atheistic ones now coming out of Silicon Valley’s neckbeard right-libertarianism and the worse half of the “men’s rights” movement. What unites them is a commitment to the idea that some people are innately inferior and should be punished by society, with that punishment ranging from the outright sadistic to the much more common effect-minimizing (marginalization) levels.

How it falls down

Social Calvinism is a repugnant ideology. Calvinistic predestination is an idea so bad that even conservative religion, for the most part, discarded it. The same scientists who discovered Darwinian evolution (as a truth of what is in nature, not of what should be in the human world) rejected Social Darwinism outright. It has also made a mockery of itself. It fails on its own terms. The most politically visible, mean-spirited, but also criminally inefficient manifestation of this psychotic ideology is in our health insurance system. Upper-middle-class, highly-educated people suffer– just as much as the poor do– from crappy health coverage. If the prescriptive intent behind a mean-spirited health policy is Social Calvinist in nature, the greed and inefficiency and mind-blowing stupidity of it affect the “undesirable” and “desirable” alike (unless one believes that only the 0.005% of the world population who can afford to self-insure are “desirable”). The healthcare fiasco is showing that a society as firmly committed to Social Calvinism as the U.S.– so committed to it that even Obama couldn’t make public-option (much less single-payer) healthcare a reality– can’t even succeed on its own terms. The economic malaise of the 2000s “lost decade” and the various morale crises erupting in the nation (Tea Party, #Occupy) only support the idea that the American social model fails both on libertarian and humanitarian terms.

Why do I argue that Social Calvinism could never work, in a civilized society? To put it plainly, it misunderstands evolution and, more to the point, reproduction (both biological and cultural). Nature’s correlation between prosperity and fecundity ended in the human world a long time ago, and economic stresses have undesirable side effects (which I’ll cover) on how people reproduce.

Let’s talk about biology; most of the ideas here also apply (and more strongly, due to the faster rate of memetic proliferation) to cultural reproduction. After the horrors justified in the name of “eugenics” in the mid-20th century, no civilized society is going to start prohibiting reproduction. It’s not quite a “universal right”, but depriving people of the biological equipment necessary to reproduce is considered inhumane, and murdering children after the fact is (quite rightly) completely unacceptable. So people can reproduce, effectively, as much as they want. With birth control in the mix, most people can also reproduce as little as they want. So they have nearly total control over how much they reproduce, whether they are poor or rich. The Social Calvinist believes that the “undesirables” will react to socioeconomic punishment by curtailing reproduction. But do we see that happening? No, not really.

I mentioned Social Calvinism’s three core tenets above: (1) that socioeconomic prosperity correlates to personal merit, (2) that merit is at least significantly genetic in nature, and (3) that people will respond to prosperity by increasing reproduction (as if children were a “normal” consumer good) and to punishment by decreasing it. The first of these is highly debatable: desirable traits like intelligence, creativity and empathy may lead to personal success, but so does a lack of moral restraint. The people at the very top of society seem to be, for the most part, objectively undesirable– at least, in terms of their behavior (whether those negative traits are biological is less clear). The second is perhaps unpleasant as a fact (no humanitarian likes the idea that what makes a “good” or “bad” person is partially genetic) but almost certainly true. The third seems to fail us. Or, let me take a more nuanced view of it. Do people respond to economic impulses by controlling reproduction? Of course, they do; but not in the way that one might think.

First, let’s talk about economic stress. Stress can be good (“eustress”) or bad (“distress”) but in large doses, even the good kind can be focus-narrowing, if not hypomanic or even outright toxic. Rather than focusing on objective hardship or plenty, I want to examine the subjective sense of unhappiness with one’s socioeconomic position, which will determine how much stress a person experiences and which kind it is. Likewise, economic inequality (by providing incentive for productive activity) can be for the social good– it’s clearly a motivator– but it is a source of stress (using the word without directional judgment). The more socioeconomic inequality there is, the more of this stress society will generate. Proponents of high levels of economic inequality will argue that it serves eustress to the desirable people and institutions and distress to the less effective ones. Yet, if we focus on the subjective matter of whether an individual feels happy or distressed, I’d expect this to be untrue. People, in my observation, tend to feel rich or poor not based on where they are, economically, but on how they measure up to the expectations derived from their natural ability. A person with a 140 IQ who ends up as a subordinate, making a merely average-plus living doing uninteresting work, is judged (and will judge himself) as a failure. Even if that person has the gross resources necessary to reproduce (the baseline level required is quite low) he will be disinclined to do so, believing his economic situation to be poor and the prospects for any progeny to be dismal. On the other hand, a person with a 100 IQ who ends up with the average-plus income (as a leader, not a subordinate; but with the same income and wealth as the person with 140 above) will face life with confidence and, if having children is naturally something he wants, be inclined to start a family early, and possibly to have a large one.

What am I really saying here? I think that, while people might believe that meritocracy is a desirable social ideal, most people respond emotionally not to the component of their economic outcome derived from natural (possibly genetic) merit or hard work, but from the random noise term. People have a hard time believing that randomness is just that (hence, the amount of money spent on lottery tickets) and interpret this noise term to represent how much “society” likes them. In large part, we’re biologically programmed to be this way; most of us get more of a warm feeling from windfalls coming from people liking us than from those derived from natural merit or hard work. However, modern society is so complex that this variable can be regarded as pure noise. Why? Because we, as humans, devise social strategies to make us liked by an unknown stream of people and contexts we meet in the future, but whether the people and contexts we actually encounter (“serendipity”) match those strategies is just as random as the Brownian motion of the stock market. Then, the subjective sense of socioeconomic eustress or distress that drives the desire to reproduce comes not from personal merit (genetic or otherwise) but from something so random that it will have a correlation of 0.0 with pretty much anything.
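The statistical claim here can be made concrete with a toy simulation (my own sketch, built on made-up Gaussian assumptions, not on any real data): if economic outcome is merit plus luck, then raw outcome correlates decently with merit, but the residual (how one does relative to what ability and effort predict) is exactly the luck term, and its correlation with merit is roughly zero.

```python
import random
import statistics

def corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(42)
N = 100_000

# Hypothetical model: both factors standard normal, equally weighted.
merit = [random.gauss(0, 1) for _ in range(N)]    # natural ability plus hard work
noise = [random.gauss(0, 1) for _ in range(N)]    # serendipity: "does society like me?"
outcome = [m + e for m, e in zip(merit, noise)]   # economic outcome

# Raw outcome does track merit; the theoretical value is 1/sqrt(2), about 0.71.
print(corr(merit, outcome))

# But the residual people actually feel (outcome relative to what ability
# predicts) is pure luck, and it carries no information about merit.
residual = [o - m for o, m in zip(outcome, merit)]
print(corr(merit, residual))
```

If merit explained everything, the residual would vanish; in this sketch it is the entire emotional signal, and it has nothing to say about the traits any eugenic scheme would want to select for.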

This kills any hope that socioeconomic rewards and punishments might have a eugenic effect, because the part that people respond to on an emotional level (which drives decisions of family planning) is the component uncorrelated to the desired natural traits. There is a way to change that, but it’s barbaric. If society accepted widespread death among the poor– and, in particular, among poor children (many of whom have shown no lack of individual merit; i.e. complete innocents)– then it could recreate a pre-civilized and truly Darwinian state in which absolute prosperity (rather than relative/subjective satisfaction) has a major effect on genetic proliferation.

Now, I’ll go further. I think the evidence is strong that socioeconomic inequality has a second-order but potent dysgenic effect. Even when controlling for socioeconomic status, ethnicity, geography and all the rest, IQ scores seem to be negatively correlated with fertility. Less educated and intelligent people are reproducing more, while the people that humanity should want in its future seem to be holding off, having fewer children and waiting longer (typically, into their late 20s or early 30s) to have them. Why? I have a strong suspicion as to the reason.

Let’s be blunt about it. There are a lot of willfully ignorant, uneducated, and crass people out there, and I can’t imagine them saying, “I’m not going to have a child until I have a steady job with health benefits”. This isn’t about IQ or physical health necessarily; just about thoughtfulness and the ability to show empathy for a person who does not exist yet. Whether rich or poor, desirable people tend to be more thoughtful about their effects on other people than undesirable ones. The effect of socioeconomic stress and volatility will be to reduce the reproductive impulse among the thoughtful, future-oriented sorts of people that we want to have reproducing. It also seems to me that such stresses increase reproduction among the present-oriented, thoughtless sorts of people that we don’t as much want to be highly represented in the future.

I realize that speaking so boldly about eugenics (or dysgenic threats, as I have) is a dangerous (and often socially unacceptable) thing. To make it clear: yes, I worry about dysgenic risk. Now some of the more brazen (and, in some cases, deeply racist) eugenicists freak out about higher rates of fertility in developing (esp. non-white) countries, and I really don’t. Do I care if the people of the future look like me? Absolutely not. But it would be a shame if, 100,000 years from now, they were incapable of thinking like me. I don’t consider it likely that humanity will fall into something like Idiocracy; but I certainly think it is possible. (A more credible threat is that, over a few hundred years, societies with high economic inequality drift, genetically, in an undesirable direction, producing a change that is subtle but enough to have macroscopic effects.)

Why, at a fundamental level, does a harsher and more inequitable (and more stressful) society increase dysgenic risk? Here’s my best explanation. Evolutionary ecology discusses two reproductive pressures, r- and K-selection, in species, which correspond to optimizing for quantity versus quality of offspring. The r-strategist has lots of offspring, gives minimal parental investment, and few will survive. An example is a frog giving birth to a hundred tadpoles. The K-strategist invests heavily in a smaller number of high-quality offspring with a much higher individual shot at surviving. Whales and elephants are K-strategists with long gestation periods and few offspring, but a lot of care given to them. Neither is “better” than the other, and they each succeed in different circumstances. The r-strategist tends to repopulate quickest after a catastrophe, while the K-strategist succeeds differentially at saturation.

It is, in fact, inaccurate to characterize highly evolved, complex life forms such as mammals as strong r- or K-selectors. As humans, we’re clearly both. We have an r-selective and a K-selective sexual drive, and one could argue that much of the human story is about the arms race between the two.

The r-selective sex drive wants promiscuity, has a strong present-orientation, and exhibits a total lack of moral restraint– it will kill, rape, or cheat to get its goo out there. The K-selective sex drive supports monogamy, is future-oriented, and values a stable and just society. It wants laws and cultivation (culture) and progress. Traditional Abrahamic religions have associated the r-drive with “evil” and sin. I wouldn’t go that far. In animals it is clearly inappropriate to put any moral weight into r- or K-selection, and it’s not clear that we should be doing that to natural urges that all people have (such as calling the r-selective component of our genetic makeup “original sin”). How people act on those is another matter. The tensions between the r- and K-drives have produced much art and philosophy, but civilization demands that people mostly follow their K-drives. While age and gender do not correlate as strongly to the r/K divide as stereotypes would insist (there are r-driven older women, and K-driven young men) it is nonetheless evident that most of society’s bad actors are those prone to the strongest r-drive: uninhibited young men, typically driven by lust, arrogance and greed. In fact, we have a clinical term for people who behave in a way that is r-optimal (or, at least, was so in the state of nature) but not socially acceptable: psychopaths. From an r-selective standpoint, psychopathy conferred an evolutionary advantage, and that’s why it’s in our genome.

Both sexual drives (r- and K-) exist in all humans, but it wasn’t until the K-drive triumphed that civilization could properly begin. In pre-monogamous societies, conflicts between men over status (because, when “alpha” men have 20 mates and low-status men have none, the stakes are much greater) were so common that between a quarter and a half of men died in positional violence with other men. Religions that mandated monogamy, or at least restrained polygamy as Islam did, were able to build lasting civilizations, while societies that accepted pre-monogamous distributions of sexual access were unable to get past the chaos of constant positional violence.

There are many who argue that the contemporary acceptance of casual sex constitutes a return to pre-monogamous behaviors. I don’t care to get far into this one, if only because I find the hand-wringing about the topic (on both sides) to be rather pointless. Do we see dysgenic patterns in the most visible casual sex markets (such as the one that occurs in typical American colleges)? Absolutely, we do. Even if we reject the idea that higher-quality people are less prone to r-driven casual sex, the way people (of both sexes) select partners in that game is visibly dysgenic. But to the biological future (culture is another matter) of the human species, that stuff is pretty harmless– thanks to birth control. This is where the religious conservative movement shoots itself in the foot; it argues that the advent of birth control created uncivil sexual behavior. In truth, bad sexual behavior is as old as dirt, has always been a part of the human world and probably always will be; the best thing for humanity is for it to be rendered non-reproductive, mitigating the dysgenic agents that brought psychopathy into our genome. (On the other hand, if human sexual behavior devolved to the state of high school or college casual sex and remained reproductive, the species would devolve into H. pickupartisticus and be kaputt within 500 years. I would short-sell the human species and buy sentient-octopus futures at that point.)

If humans have two sexual drives, it stands to reason that those drives would react differently to various circumstances. This brings to mind the relationship of each to socioeconomic stress. The r-drive is enhanced by socioeconomic stress– both eustress and distress. Eustress-driven r-sexuality is seen in the immensely powerful businessman or politician who frequents prostitutes, not because he is interested in having well-adjusted children (or even in having children at all) but to see if he can get away with it; the distress-driven r-sexuality has more of an escapist, “sex as drug”, flavor to it. In an evolutionary context, it makes sense that the r-drive should be activated by stress, since the r-drive is what enables a species to populate rapidly after an ecological catastrophe. On the other hand, the K-drive is weakened by socioeconomic stress and volatility. It doesn’t want to bring children into a future that might be miserable or dangerously unpredictable. The K-drive’s reaction to socioeconomic eustress is busyness (“I can’t have kids right now; my career’s taking off”) and its reaction to distress is to reduce libido as part of a symptomatic profile very similar to depression.

The result of all of this is that, should society fall into a damaged state where socioeconomic inequality and stress are rampant, the r-drive will be more successful at pushing its way to reproduction, while the K-drive is muted. The people who come into the future will disproportionately be the offspring of r-driven parents and couplings. Even if we reject the idea that undesirable people have stronger r-drives relative to their K-drives (although I believe that to be true), the enhanced power of the r-strategic sexual drive will influence partner selection and produce worse couplings. Over time, this presents a serious risk to the genetic health of the society.

Just as Mike Judge’s Idiocracy is more true of culture than of biology, we see the overgrown r-drive in the U.S.’s hypersexualized (but deeply unsexy) popular culture, and the degradation is happening much faster to the culture than it possibly could to our gene pool, given the relatively slow rate of biological evolution. Some wouldn’t see any correlation whatsoever between the return of the Gilded Age post-1980 and Miley Cyrus’s “twerking”, but I think that there’s a direct connection.


The Social Calvinism of the American right wing believes that severe socioeconomic inequality is necessary to flush the “undesirables” to the bottom, deprive them of resources, and prevent them from reproducing. Inherent to this strategy is the presumption (and a false one) that people are future-oriented and directed by the K-selective sexual drive, which is reduced by socioeconomic adversity. In reality, the more primitive (and more harmful, if it results in reproduction) r-selective sexual drive is enhanced by socioeconomic stresses.

In reality, socioeconomic volatility reduces the K-selective drive of most people, rich and poor. The reason for this is that a person’s subjective sense of satisfaction with socioeconomic status is based not on whether he or she is naturally “desirable” to society, but on his or her performance relative to natural ability and industry, which is a noise variable. Meanwhile, that same volatility enhances the r-selective drive. Even if we do not accept that desirable people are more likely to have strong K-drives and weak r-drives, it is empirically true (seen in millennia of human sexual behavior) that people operating under the K-drive choose better partners than those operating under the r-drive.

The American conservative movement argues, fundamentally, that a mean-spirited society is the only way to prevent dysgenic risk. It argues, for example, that a welfare state will encourage the reproductive proliferation of undesirable people. The reality is otherwise. Thoughtful people, who look at the horrors of American healthcare and the rapid escalation of education costs, curtail reproduction even if they are objectively “genetically desirable” and their children are likely to perform well, in absolute terms. Thoughtless people, pushed by powerful r-selective sex drives, will not be reproductively discouraged, and might (in fact) be encouraged, by the stresses and volatility (but, also, by undeserved rewards) of the harsher society. Therefore, American Social Calvinism actually aggravates the very dysgenic risk that it exists to address.

Exploding college tuitions might be a terrifying sign

It’s well-known that college tuitions are rising at obscene rates, with the inflation-adjusted cost level having grown over 200 percent since the 1970s. Then there is the phenomenon of “dark tuition”, referring to the additional costs that parents often incur in giving their kids a reasonable shot at getting into the best schools. Because of the regional-balancing effect (read: non-rich students from highly represented areas have almost no shot, because they compete in the same regional pool as billionaires), the insanity begins as early as preschool in places like Manhattan. Including dark tuition, some families spend nearly a million dollars on college admissions and tuition for their spawn. To write this off as a wasteful expenditure is unreasonable; it’s true that these decisions are made without data, but the connections made early in life clearly can be worth a large sum. Or, alternatively, the cost of not being connected can be quite high.

Many also note that a college degree means less than it used to, and that’s clearly true: educational credentials bring less on the job market than they once did. Yet rising tuitions are a market signal indicating that, at least for elite colleges, the value of something has gone up. Some people have complained that MBA school has become the new college, due to the latter’s devaluation. I’d argue that the data suggest the reverse. College is turning into MBA school: quality can be found at the top 200 or so institutions, but increasingly, the real big-ticket value motivating the purchase is certainly not the education, and not really the brand name– 5 years out of school, no one cares where you attended; the half of elite-school attendees who fail to make significant connections are likely to end up in mediocrity and failure like everyone else– but the network itself.

So have elite social connections become more valuable? How could it be so, in an era during which technology is supposedly liberating us from inefficiencies like good-old-boy networks? Aren’t those dinosaurs on the way to extinction? It seems not. This should be an upsetting conclusion, not so much for what it means (connections matter) but for what it suggests about the trend. Realists accept that, in the real world, connections and the attendant manipulations and (often, for those outside needing to get in) extortions matter. What we all hope is that they will matter less as time goes on, because for the opposite to be the case suggests that progress is moving backward. Is it?

The news, delivered.

“You know what the trouble is, Brucey? We used to make shit in this country, build shit. Now we just put our hand in the next guy’s pocket.” — Frank Sobotka, The Wire.

Leftists like me would typically argue that American decline began in the 1980s. The prosperity of the 1990s falsely validated the limp-handed centrism of the “New Democrats”, and the 2000s was the decade of free fall. On the other hand, despite the mean-spirited political tenor of these decades, the U.S. continued to innovate. As bad as things were, from a macroscopic and cultural perspective, the engines of progress continued running. Silicon Valley didn’t stop just because Reagan and the Bushes held power. Google, which became a household name around 2002, didn’t go out of business just because of a toxic political environment. I’m not saying that politics doesn’t matter– obviously, it does– but even in the darkest hours (Bush between 9/11 and Katrina) there was not a visible, credible threat that American innovation would, in the short term, just die.

I also don’t think that we’re in immediate danger of an out-of-context innovation shut-down; it’s not something that will happen in the next two years. But I do think that we’re closer than we realize.

American innovation exists for a surprisingly simple reason: forgiving bankruptcy laws regarding good-faith business failure. If your company folds, it doesn’t ruin your life. Unfortunately, that protection has been eroded. Bank loans for new businesses require personal liability, circumventing this protection outright. The alternative is equity financing, but Silicon Valley’s marquee venture capitalists have set up a collusive, feudal reputation economy in which an individual investor can be a single point of failure for an entrepreneur’s entire career. The single trait of the American legal system that enabled it to be a powerhouse for new business generation– forgiving treatment of good-faith business failure– has been removed. Powerful people saw it as inconvenient, so they wrote it off the ticket.

Credible long-term threats to innovation are present. Makers struggle more to get their ideas funded, or to get anywhere near the people in control of the arcane permission system that still runs the economy. The socially-connected takers who own that permission system can demand more as a price of audience. We’re seeing that. The people who really make the big money (defined as enough to comfortably buy a house) in Silicon Valley, these days, aren’t the makers implementing new, crazy ideas, but the peddlers of influence: those using their business-school connections to get unwarranted advisory and executive positions, stitching together enough equity slices to have a viable portfolio, or doing the former even better and becoming real VCs. Silicon Valley’s Era of Makers has come and gone; now, MBA culture has swept in, 22-year-olds are getting funded based on who their parents are, and it’s clear that Taker Culture has won… at least in the “we’ll fund your competitors if you don’t take this term sheet” VC-funded world.

So… what does this have to do with college tuitions rising? Possibly nothing. There are a number of plausible causes for the tuition bubble, many having little or nothing to do with Taker Culture and the (risk of) death of innovation. Or, it might tell us a lot.

What do we actually know?

We know that college tuitions are skyrocketing. Professor salaries aren’t the cause, because the academic job market has been tanking over the past 30 years, with low-paid adjuncts and graduate students replacing professors in much of undergraduate education. This suggests that the quality hasn’t improved, and I’d agree with that assessment. Administrative costs and real estate expenditures have gone up, but that seems to be more a case of colleges wanting to do something with this massive pool of available money than a root cause of the escalating costs.

Housing prices in the most vital areas have also increased, even though the economy (including in those areas) has weakened considerably. I suspect that housing and tuition, two of the three aspects of the Satanic Trinity (housing, healthcare, and tuition costs), share a common thread: as the world becomes riskier and poorer, people are buying connections. That’s what living in New York instead of New Orleans in your 20s is about. It’s also what going to an Ivy instead of an equally adequate state university is about. Of course, the fact that connections matter enough to be bought isn’t new. People have been buying connections as long as there has been money. What is obvious is that people are paying more for connections than ever before, and that inherited social connectedness has probably reached a level of importance (even in the formerly meritocratic VC-funded startup scene) incompatible with democracy, innovation, or a forward-thinking society. Oligarchy has arrived.

What happens in an oligarchy is that the purchase of connections (via financial transfer, or ethical compromise) ceases to be an irritating sideshow of the economy– a distraction from actually making stuff– and, instead, becomes the main game.

Here’s an interesting philosophical puzzle. Does this pattern actually mean that connections have become (a) more valuable, or (b) less so? It means both, paradoxically. Social connections matter more, insofar as a much larger pool of money is being put into chasing them, and this strongly indicates that hard work, creativity, and talent no longer matter as much. To navigate society’s dehumanizing and arcane permissions systems, “who you know” is becoming more crucial. The exchange rate between social property on one side, and talent and hard work on the other, now favors the former. However, connections are also less valuable, insofar as they deliver less, requiring people to procure more social capital in order to make their way in the world. The price of something increasing does not necessarily mean that it’s worth more to the world; it might be that a reduction in its delivered value has driven up the quantity needed, and thus its price. This evolution is not the functioning of a healthy economy; it’s sickness that benefits only a few. Connections matter more, as made evident by the fact that people are paying more for the same quantity, but they deliver less. That means that the world, as a whole, is just getting poorer.
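The paradox above can be made concrete with a toy model. All of the numbers below are my own illustrative inventions, not data: the point is only that if each unit of social capital delivers less, the quantity needed to secure a fixed amount of economic value rises, so total spending on connections climbs even as each connection is worth less.

```python
# Toy model of "connections cost more yet deliver less."
# All figures are invented for illustration.

def spend_on_connections(value_needed, value_per_unit, price_per_unit):
    """Total outlay to secure a fixed amount of economic value
    (security, opportunity) through social connections."""
    units_needed = value_needed / value_per_unit
    return units_needed * price_per_unit

# Yesteryear: one unit of connection delivered 10 "value points" at $1,000.
then = spend_on_connections(value_needed=100, value_per_unit=10, price_per_unit=1000)

# Now: a unit delivers only 4 points, at the same unit price.
now = spend_on_connections(value_needed=100, value_per_unit=4, price_per_unit=1000)

print(then, now)  # 10000.0 25000.0
```

Spending rises two-and-a-half-fold while the value delivered per connection falls; the "connections market" looks bigger even as the aggregate gets poorer.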

This is clearly happening. People are paying more for social connections and, given the health of the economy, even more social access is needed to buy as much economic value (security, opportunity, etc.) as yesteryear. Adam Smith famously called Britain a “nation of shopkeepers”. The United States, ever since the Organization Man age, has been in danger of becoming a nation of social climbers. However, there’s always been something else to its economic character: enough impurity amid the bland mass to give color, should the damn thing crystallize. But is that true now? In the 1970s, that “impurity” was Silicon Valley. There was cheap land that the old elite didn’t want, but that drew (for a variety of historical reasons) a lot of intelligent and capable people. Governments and businesses used this opportunity to build up one of the most impressive R&D cultures the world has seen. Maker Culture came first in Silicon Valley, generated a lot of value, and then the money started rolling in. Unfortunately, that also brought in douchebags, whose number and power have only dramatically increased. It was probably inevitable that Taker Culture (multiple liquidation preferences, note-sharing among VCs, MBA-culture startups with reckless and juvenile management, Stanford Welfare and the importance of social connections) would set in.

The New California?

We know that the California-centered Maker Culture is gone. There are still a hell of a lot of great people in that region– it might be the most talent-rich place on earth– but, with a few outstanding exceptions, they’re no longer the socially important ones. I don’t think it’s worth dissecting the death of the thing, or whining about the behaviors of venture capitalists, because I think that ecosystem is too far gone to repair itself. In the 1990s, venture capitalists rightly judged that most of the powerful, large corporations were too politically dysfunctional to innovate. Now, that same charge is even more true of the VC-funded ecosystem, which effectively functions (due to the illegal collusion of VCs, who increasingly view themselves as a single executive suite) as a single corporation, albeit with a postmodern structure.

What places now are like what California was when the Maker Culture emerged? Is it another city in the U.S., like Austin, perhaps? Or is it in another country? Must it even be a physical place at all? I don’t know the answers to these questions.

Or, as the escalating cost of college tuition– and the premium on social connections suggested by that– seems to indicate, is it just gone for good? Has an effete aristocracy found a way to drive meritocracy not just to a fringe (like California five decades ago) but out of existence entirely? If so, then expect innovation to die out, and an era of stagnation to set in.

Three capitalisms: yeoman, corporate, and supercapitalism

I’m going to put forward the idea, here, that what we call capitalism in the United States is actually an awkward, loveless ménage à trois among three economic systems, each of which considers itself to be the true capitalism, but all three of which are quite different. Yeoman (or lifestyle) capitalism is the most principled variety of the three, focused on building businesses to improve one’s life or community. The yeoman capitalist plays by the rules and lives or dies by her success on the market. Second, there’s the corporate capitalism whose internal behavior smells oddly of a command economy, and that often seeks to control the market. Corporate capitalism is about holding position and keeping with the expectations of office– not markets per se. Finally, there is supercapitalism, whose extra-economic fixations render it more like feudalism than any other system; it exerts even more control than the corporate kind, but at a deeper and more subtle level.

1. Yeoman capitalism (“the American Dream”)

The most socially acceptable of the American capitalisms is that of the small business. It’s not trying to make a billion dollars per year, it doesn’t have full-time, entitled nonproducers called “executives”, and it often serves the community it grew up in. It’s sometimes called a “lifestyle business”; it generates income (and provides autonomy) for the proprietor so as to improve her quality of life. When typical Americans imagine themselves owning a business, and aspiring to the freedom that can confer, yeoman capitalism is typically what they have in mind: something that keeps them active and generates income, while conferring a bit of independence and control over one’s destiny.

Yeoman capitalism is often used as a front for the other two capitalisms, because it’s a lot more socially respected. Gus Fring, in Breaking Bad, is a supercapitalist who poses as a yeoman capitalist, making him beloved in Albuquerque.

The problem with yeoman capitalism is that, not only is it highly risky in terms of year-by-year yield, but there’s often a lack of a career in it. Small business owners do a lot more for society than executives, but get far less in terms of security. An owner-operator of a business that goes bankrupt will not easily end up with another business to run, while fired executives get new jobs (often, promotions) in a matter of weeks. Modern-day yeoman capitalism is as likely to take the form of a consulting or application (“app”) company as a standalone business and may have more security; time will tell, on that one.

While yeoman capitalism provides an attractive narrative (the American Dream, in the United States) it does not provide job security for anyone (and that’s not its goal). It also has a high barrier to entry: you need capital or connections to play. Even though it is a more likely path to wealth than the other two capitalisms are for most people, it often leads to horrible failure, because it comes with absolutely no safety net. It’s the blue-collar capitalism of working hard and hoping that the market rewards it. Sometimes, the market doesn’t. Most people can’t stomach the income volatility of this, or even amass the capital to get started.

2. Corporate capitalism (“in Soviet Russia, money spends you”)

Corporate capitalism provides much more security, but it has an institutional command-economy flavor. People don’t think like owners, because they’re not. Private-sector social climbers rule the day. It’s uninspiring. It feels like the worst of both worlds between capitalism and communism, with much of the volatility, insecurity, and greed of the first but the mediocrity, duplicity, and disengagement associated with the second. It has one thing that keeps it going and makes it the dominant capitalism of the three. It has a place for (almost) everyone. Most of those places are terrible, but they exist and they don’t change much. Corporate capitalism will give you the same job in California as you’d get in New York for your level of “track record” and “credibility” (meaning social status).

The attraction of corporate capitalism is that one has a generally good sense of where one stands. Yeoman capitalism is impersonal; market forces can fire you, even if you do everything right. Corporate capitalism gives each person a history and a personal reputation (resume) based on the quality of the companies where one has worked and the titles one has held. At least in theory, that smooths out the bad spells because, even though layoffs and reorganizations occur, the system will always be able to find an appropriate position for a person’s “level”, and people level up at a predictable rate.

Adverse selection is one problem with corporate capitalism. People choose corporate capitalism over the yeoman kind to mitigate career risks. People who want to off-load market risks might be neutral bets from a hiring perspective, but people who want to off-load their own performance risks (i.e. because they’re incompetent slackers) are bad hires. Corporate capitalism’s “place for everyone” makes it attractive to those sorts of people, who can trust that the social lethargy (and legal risk) surrounding decisions that adversely affect someone’s career (i.e. actually demoting or firing a person) will buy them enough time to earn a living doing very little. Consequently, it’s hard to operate in corporate capitalism without accruing some dead weight. Worse yet, it’s hard to get rid of the deadwood, because the useless people are often the best at playing politics and evading detection. Companies that set up “fire the bottom 10 percent each year” policies end up getting ruined by the Welch Effect: stack ranking’s most common casualties are not true underperformers, but junior members of macroscopically underperforming teams (who had the least to do with this underperformance).
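The Welch Effect can be sketched in a few lines. The team names, performance numbers, and seniority weights below are hypothetical, invented purely for illustration: if ratings are driven mostly by team results plus a seniority bonus, a fire-the-bottom-10-percent policy cuts the junior members of the weakest team, regardless of individual merit.

```python
# Hypothetical illustration of the Welch Effect under stack ranking.
# Each employee's rating tracks macroscopic team performance plus a
# seniority bonus, so the bottom of the ranking fills with junior
# members of the underperforming team rather than actual slackers.

team_perf = {"A": 1.0, "B": 0.0, "C": -1.0}    # team C is struggling

employees = [
    {"team": team, "seniority": s, "rating": perf + 0.3 * s}
    for team, perf in team_perf.items()
    for s in range(5)                           # s = 0 is the most junior
]

employees.sort(key=lambda e: e["rating"])
fired = employees[:2]                           # "fire the bottom ~10 percent"

# The cut falls on the two most junior members of team C.
print([(e["team"], e["seniority"]) for e in fired])  # [('C', 0), ('C', 1)]
```

No individual on team C did anything worse than anyone on team A; the juniors simply had the least seniority cushion on a team whose macroscopic numbers were bad.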

Compounding this is the fact that corporations must counterbalance their extreme inequality of results (in pay, division of labor, and respect) with a half-hearted attempt at equality of opportunity (no playing of favorites). What this actually means is that the most talented can’t “grade skip” past the initial grunt work, but have to progress along the slow, pokey track built for the safety-seeking, disengaged losers. They don’t like this. They want the honors track, and don’t get it, because it doesn’t exist– grooming a high-potential future leader (as opposed to hiring one from the outside and then immediately reorg-ing so no one knows what just happened) is not worth pissing off the rest of the team. The sharp people leave for better opportunities. Finally, corporations tend over time toward authoritarianism because, as the ability to retain talent wanes, the remaining people that the company considers highly valuable are enticed with a zero-sum but very printable currency– control over others. All of this tends toward an authoritarian mediocrity that is the antithesis of what most people think capitalism should be.

Socialism and capitalism both have a Greenspun property wherein bad implementations of one generate shitty forms of the other. Under Soviet communism, criminal black markets (similar to the one existing for psychoactive drugs in the U.S.) existed for staple items like lightbulbs, so this was a case of bad socialism creating a bad capitalism. Corporate capitalism has a similar story. Corporations are fundamentally statist institutions that operate like command economies internally. In fact, if one were to conceive of the multi-national corporation as the successor to the nation-state, one could see the corporation as an extremely corrupt socialist state. What is produced, how it is produced, and who answers to whom, all is determined centrally by an autocratic authority. Advancement has more to do with pleasing party officials than succeeding on a (highly controlled) market. Corporations do not run as free markets internally; but also, once they are powerful and established, they work to make society’s broader market less free, pulling the ladder up after using it.

3. Supercapitalism! (“You know what’s cool? Shitting all over a redwood forest for a wedding!”)

Supercapitalism is the least understood of the three capitalisms. Supercapitalists don’t have the earnestness of the yeoman capitalist; they view that as a chump’s game, because of its severe downside risks. They also don’t have the patience for corporate capitalism’s pokey track. Supercapitalists rarely invest themselves in one business or product line; having a full-time job is proletarian to them. Instead, they “advise” as many different firms as they can. They’re constantly buying and selling information and social capital.

Mad Men is, at heart, about the emergence of a supercapitalist class in professional advertising. Don Draper isn’t an entrepreneur, but he’s not a corporate social climber either. He’s a manipulator. The clients are the corporate capitalists playing a less interesting game than what is, in the early 1960s, emerging on Madison Avenue– a chance to float between companies while cherry-picking their most interesting or lucrative marketing problems. The ambitious, smart, Ivy Leaguers are all working for people like Don Draper, not trying to climb the Bethlehem Steel ladder. What’s attractive about advertising is that it confers the ability to work with several businesses without committing to one. Going in-house to a client (still at a much higher level than any ladder climber can get) is the consolation prize.

One interesting trait of supercapitalism is that it’s generally only found in one or two industries at a time. Madison Avenue isn’t the home of supercapitalism anymore; now, advertising is just the unglamorous corporate kind. Investment banking took the reins afterward, but is now losing that; now it’s VC-funded internet startups (many of which have depressingly little to do with true technology) where supercapitalist activity lives. Why is it this way? Because supercapitalism, although it considers itself the most modern and stylish capitalism, has a fatal flaw. It’s obsessed with prestige, and prestige is another name for reputation, and so it generates reputation economies (feudalism). It can’t stay in one place for too long, lest it undermine itself (by developing the negative reputation it deserves, and therefore failing on its own terms).

Supercapitalism also turns into the corporate kind because its winners (and there are very few of them) get out. First, they establish high positions where they participate in very little of the work (to avoid evaluation that might prove them just to have had initial luck). They become executives, then advisors, then influential investors, and then they move somewhere else– somewhere more exciting. That leaves the losers behind, and all they can come up with are authoritarian rank cultures designed to replicate former glory.

Why does supercapitalism generate a reputation economy? That fact is extremely counterintuitive. Supercapitalism draws in some of the most talented, energetic people; and it is often (because of its search for the stylish) at the cutting edge of the economy. So why would it create something so backward and feudal as a reputation economy, which intelligent people almost uniformly despise? The answer, I think, is that supercapitalism tends to demand world-class resources in both property (capital) and talent (labor). A regular capitalist is not nearly as selective, and will take an opportunity to turn a profit from property or talent, but the sexiest and most stylish capers require top-tier backing in both. If you’re obsessed with making a name for yourself (and supercapitalism is run by the most narcissistic people, who are not necessarily the greediest) in the most grandiose way, you don’t just need to hit your target; you also need the flashiest guns.

Right now, the eye of the supercapitalist hurricane is parked right over Silicon Valley. Sean Parker is the archetypical supercapitalist. He’s never really succeeded in any of his roles (that’s a prolish, yeoman capitalist ideal) but he’s worth billions, and now famous for being famous. While corporate capitalism focuses on mediocrity and marginalizes both extremes (deficiency and excellence) supercapitalism will always make a cushy home for colorful, charismatic failures just as eagerly as it does for unemployable excellence.

Supercapitalism will, eventually, move away from the Valley. Time will tell how much damage has been done by it, but considering the state of the housing market there and the horrible effects of high house prices on culture, I wouldn’t expect the region to survive. Supercapitalism rarely considers posterity and it tends to leave messes in its wake. 

The final reason why supercapitalism must move from one industry to another, over time, is that reputation economies deplete the opportunities that attract talent. It’s worthwhile, now, to talk about compensation and how it works in the three capitalisms. Doing so will help us understand what supercapitalism is, and how it is different from the corporate kind.

Under yeoman capitalism, the capitalist is compensated based on exactly how the market values her product. No committee decides what to pay her, and she is never personally evaluated; it’s the market value of what she sells that determines her income. Most people, as discussed, either can’t handle (financially or emotionally) this volatility or, at least, believe they can’t. Corporate capitalism and supercapitalism, on the other hand, tend to pre-arrange compensation with salaries and bonuses that are mostly predictable.

Of course, what a person’s work is worth, when that work is abstract and joint efforts are complex and nonseparable, has a wide range of defensible values. Corporate capitalism settles this by setting compensation near the lower bound of that range, but (mostly) guaranteeing it. If you make $X in base salary, there’s usually a near-100-percent chance that you’ll make that or more in a year (possibly in another job). Since people are compensated at the lower bound of this range, this generates large profits at the top; in the executive suite (above the effort thermocline) something exists that looks somewhat like a less mobile and blander supercapitalism.
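The lower-bound arrangement can be made concrete with a worked toy example. The salary figures below are invented, purely illustrative: each worker's output has a defensible value range, the corporation pays just above the bottom of that range (but guarantees it), and the gap between a defensible midpoint valuation and actual pay pools as profit at the top.

```python
# Invented figures: each worker's output has a defensible value range
# (in $k per year). Corporate capitalism pays a small margin above the
# lower bound, but guarantees it; the difference between a midpoint
# valuation and actual pay accumulates as surplus for the top.

workers = [
    {"low": 90, "high": 150},
    {"low": 70, "high": 120},
    {"low": 110, "high": 200},
]

def corporate_payroll(workers, margin=0.05):
    """Pay each worker 5 percent above the lower bound of the range."""
    return [int(w["low"] * (1 + margin)) for w in workers]

pay = corporate_payroll(workers)
midpoints = [(w["low"] + w["high"]) / 2 for w in workers]
surplus = sum(mid - p for mid, p in zip(midpoints, pay))

print(pay, surplus)  # [94, 73, 115] 88.0
```

Each worker gets a near-certain paycheck at the bottom of a defensible range; the $88k per year of foregone midpoint value is what funds the "blander supercapitalism" above the effort thermocline.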

People who want to move into the middle or top of their defensible salary ranges won’t get it in corporate capitalism. The work has already been commoditized and the rates are already set, and excellence premiums are pretty minimal because most corporations refuse to admit that their in-house pet efforts aren’t excellent. Thus, talented people looking for something better than the corporate deal find places where the opportunities are vast, but also poorly understood by the local property-holders, allowing them to get better deals than if the latter knew what they had. At one time, it was advertising (cutting-edge talent understood branding and psychology; industrial hierarchs didn’t). Then it was finance; later and up to now, it has been venture-funded light technology (on which the sun is starting to set). Over time, however, the most successful supercapitalists position themselves so as not to be affiliated with a single one of the efforts, but diversify themselves among many. This creates a collusive, insider-driven market like modern venture capital. Over time, this inappropriate sharing of information turns into a full-blown reputation economy.

Once a reputation economy is in place, talent stops winning, because property, by its sheer power over reputations, has full authority to set the exchange rate between property and talent. “The rate is X. Accept it or I’ll shit on your name and you’ll never see half of X.” Once that extortion becomes commonplace, what follows is a corporate rank culture. It feels like the arrangements are “worked out” and only management can win– and that’s actually how it is. Opportunities don’t disappear entirely, but they aren’t any more available to young talent than elsewhere, and the field becomes just another corporate slog. That’s where the VC-funded technology scene will be soon, if not already there. 

Supercapitalists, I should note, are not always the same people as “top talent” and they’re rarely young (i.e. hungry and unestablished) talent. Supercapitalists tend to be the rare few with connections to both property and talent at the highest levels of quality. Property they can carry with them, but talent they must chase. Talent arrives in the new place (quantitative finance, internet technology) first. Supercapitalism emerges as these well-connected and propertied “carpetbaggers” arrive, and as the next wave of young talent discovers that there are better opportunities in managing the new place (i.e. associate positions at VC firms) than working there. 

What really impels young talent to join supercapitalism is not the immediate opportunity (which is tapped out) but the possibility to move along with supercapitalism to the next new place. For example, someone who started in investment banking in 2006 is not likely to be a million-per-year MD today– that channel’s clogged– but he has a good chance of being rich, by this point, if he jumped on the venture capital bandwagon around 2007-08; he’s a VC partner on Sand Hill Road now.


How do these three capitalisms interact? Is there a pecking order among them? How do they view each other? What is the purpose of each?

Yeoman capitalism provides leadership opportunities for the most enterprising blue-collar people, and is the most internally consistent. It’s honest. Unlike the other capitalisms, there isn’t much room for reputation (much less prestige) aside from in one’s quality of product. The rule is: make something good, hope to win on the market. The major problem with it is its failure mode, even in good-faith business failures that aren’t the proprietor’s fault. The main competitive advantage one holds as a small business owner is property rights over a company, and one who loses that is not only jobless but often left with skills of limited transferability.

Yeoman capitalism has a lot of virtues, of course. It gives a lot back to its community, while corporate and supercapitalism tend to destroy their residences and move on. Yeoman capitalism is what blue-collar people tend to think of when they imagine capitalism as a whole, and it provides PR for the corporate capitalists and supercapitalists, who recognize that their reputations (which they hold dear) depend on the positive image that yeoman capitalism provides for the whole economic system. Yeoman capitalism is aware of corporate and supercapitalist entities in the abstract, but has little visibility into their inner workings. Most small businessmen probably know that the corporations are somewhat different from their enterprises, but not how different; at the upper levels, the two in reality live within separate societies.

Corporate capitalism provides social insurance, although with great degrees of inequity based on pre-existing social class. It’s socialism as it would be imagined by a self-serving, entitled upper class refusing to give up any real power or opportunity. It can make little meaning out of leadership, charisma, or unusual intellectual talent. In fact, it goes to great lengths to pretend that these differences among people don’t exist. Its goal is to extract some labor value from people who lack the risk tolerance for yeoman capitalism and the talent for supercapitalism, and it does so extremely well, but it also creates a culture of authoritarian mediocrity that renders it unable to excel at anything. Needs for high quality are often filled by yeoman or super-capitalism; because yeoman capitalism can provide the autonomy that top talent seeks while supercapitalism provides (the possibility of) power and extreme compensation, those capitalisms get the lion’s share of top talent. Regarding awareness, corporate capitalism understands yeoman capitalism well (it often serves yeoman capitalists) but is oblivious to the whims of supercapitalism.

Between corporate and yeoman capitalism, there isn’t a clear social superiority, because they serve different purposes. Some intelligent people prefer the validation and stability of corporate capitalism, while others prefer the blue-collar honesty of yeoman capitalism. On the other hand, a strong argument can be made that supercapitalism is the clear elite among the three. It’s built to take advantage of the freshest, just-being-discovered-now opportunities. 

Supercapitalism has a familiar process. First, the smartest people find opportunities (“before it was cool”) that the property-holders haven’t yet found a way to valuate, and negotiate favorable terms for themselves while they can, and this makes a few thousand smart people very rich. Then, the elite property-holders catch wind of the deals to be made and move in. Soon there’s a rare confluence of two forces that usually dislike, but also rely heavily upon, each other– talent and property. Supercapitalism emerges as the all-out contest to determine an exchange rate between these two resources over a new domain. Eventually property wins (reputation economy) and corporatization sets in, while those who still have the hunger to be supercapitalists move on to something else.

A puzzle to end on

There’s a fourth kind of capitalism that I haven’t mentioned, and I think it’s superior to the other three for a large class of people. What might it be? That’s one of my next posts. For a hint, or maybe a teaser: the idea comes from evolutionary biology.

What turns 99.999% of privileged people into fuckups

Generally, people who generalize are actually talking about themselves. I wouldn’t normally introduce myself as “a privileged fuckup”; however, I am more privileged than the average person in this world, and there are definitely things I have fucked up, so to some degree I must indict myself as well. Here, by “fuckup” I mean “person who has achieved substantially, and embarrassingly, less than what is possible with his or her talent and resources”. Guilty.

I had to qualify the title with the word privileged. In this case, I’m not applying it only to the rich, but to the middle classes. I feel like it’s not right to call the genuinely impoverished, who never had a chance, “fuckups”. I’d rather focus on the process that turns people who’ve had plenty of chances into (relative to what they could achieve) mediocrities, and possibly even figure out what to do about it.

There’s good news, however: I think there’s a causative agent of fuckuppery that is so pervasive as to explain almost all of it, singular enough to admit solution, toxic enough to suffice, and subtle enough to answer the question, “Why isn’t this discussed more?”

Let me first address four explanations that sound like they could be singularly causative of widespread fuckuppery, and are frequently cited as causes, but aren’t even minor players.

  1. Work is hard, yo. People are inherently lazy, one theory goes.
  2. Too much competition! There’s the argument that not everyone can achieve great things; some people must be fuckups.
  3. Lack of resources. Also known as, “I’d be published by now but for my fucking day job.”
  4. Personal weakness. I will establish this as a religious argument of minimal value.

None of these suffice to explain the epidemic of fuckuppery that we see in modern, corporatized, sanitized employment. I’ll blow each of these explanations (at best, partial causes) to pieces before I lay out the right answer.

Failed explanation #1: Work is hard

It’s true that almost everything worth doing is difficult, but that doesn’t mean it’s unpleasant. Things that are unpleasant are, in general, quite unsustainable no matter how much “will power” a person has. The mind is built to learn from (and thus, avoid) negative states. On the other hand, people can do things that are difficult or even physically painful for quite a long time if there is a superior, psychological reward involved.

I don’t think people are very different from one another in their tolerance for unpleasant mental states (and I’ll get back to this, later). So what is it (aside from extraordinary natural talent) that makes someone like Usain Bolt or Michael Jordan become a great athlete, even in spite of physical pain and exhaustion along the way? They figure out a way to separate difficulty from unpleasantness. Most people will never be professional athletes, but the skill of preventing difficulty from becoming emotional negativity is one that anyone can develop. As Buddhism teaches us, one can feel pain and not suffer. When the great athletes are exhausted from training, they don’t stew about it in negative mental states; they accept it as part of the process and, in a way, an aspect of the reward.

Some people think failure (at an ambitious project) is naturally unpleasant. It’s not. In fact, weightlifters literally train to failure, which means they lift until their muscles (momentarily) cease functioning. It’s the social stigma, especially at work, that gets people. We need to kill that. The problem is that humans have a tendency to recognize patterns when they aren’t there, and failing at one workplace project creates a sense of decline, replacing what is actually a noisy process (Brownian motion with drift) with a parabolic arc (vaulting ambition) straight out of a five-act Shakespearean tragedy.

One note I’ll make is that the education system unintentionally(?) encourages risk aversion. Instead of being encouraged to tackle very hard problems with the pass mark set at, say, 20%, students are asked to tackle very easy problems with the pass mark set at 60 to 75% and average performance calibrated to be between 70 and 90%. This means that one total failure cancels out several excellent results; if you get one zero and three 100%’s, you’re still only average at 75%. I’d rather see the reverse: courses and work so demanding that 100% is extremely rare, but with 25 to 50 percent being a respectable score. In the real world, on projects worth doing, 50% is a hell of a good success rate compared to the maximum possible.
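The arithmetic here can be made concrete with a quick sketch (the scores are the hypothetical ones from the example above):

```python
# Conventional grading: easy problems, high pass mark. One total
# failure cancels out several excellent results.
conventional = [0, 100, 100, 100]   # one zero, three perfect scores
avg = sum(conventional) / len(conventional)
print(avg)  # 75.0 -- "only average", despite three perfect results

# The reverse: problems so hard that 100% is extremely rare, with
# 25 to 50 percent treated as a respectable score. One success out
# of four ambitious attempts clears that bar.
ambitious = [0, 0, 0, 100]
success_rate = sum(ambitious) / len(ambitious) / 100
print(success_rate)  # 0.25 -- respectable, by the reversed standard
```

The point of the reversed scheme is that a single breakthrough outweighs several honest failures, rather than the other way around.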

Is work hard? Of course. Yet as humans, we love to do hard things. We do a lot of things with zero or negative economic value (such as climbing mountains) because they are difficult and painful. We like the mental state of flow, we need to be challenged. We also enjoy physical exertion and discomfort if there is a reward involved. Hell, most of what we do on vacation is more work-like, in a primal sense, than office work. Biking 30 miles in 95-degree heat is a lot harder than sitting in a chair for eight hours, but most people would envy the first experience and not the second.

Failed explanation #2: Too much competition

Um, no. Have you seen the people out in this world? Like, really measured how diligent, engaged, and effective most of them are? If you have, you’re not worried about competition.

At least, I should say, one shouldn’t worry about competition in the grand sense. There are local competitions for specific resources and it’s not fun to see a superior competitor enter the field, but in the broader scope of things, competition is not what will hold a person back. I, for one, would love to live in a world where a person like me were average in intellect, creativity, and work ethic.

Sure, there is a lot of competition, in a less grand sense, for things that are known to have value: money, property, jobs, relationships, social status. It’s pretty easy to lose one’s creative and spiritual way and start chasing after the things everyone else wants and, when that happens, competition is the only thing one thinks about. If you live that way, you will wreck your life in battle with some of humanity’s most vicious, cutthroat people. That’s not an issue of “too much competition”, though. That’s on you. Part of the game is figuring out which subgames are worth playing and which will just waste time.

I want to make one thing clear, which is that there are genuine competitive issues in this world and many people face them. If you’re in a poor country where access to water is limited, then there are competitive forces making your life hell. That’s why I’m focusing on privileged people, who still get themselves intimidated by “all the competition”, and that obsessive focus (not the competitors themselves) does prevent them from excelling.

Guess what? There’s no threat of competition when people excel. Let’s say that you become the best Calvinball player in the world, advancing the game in ways the world hasn’t seen for centuries. There’s a sudden uptick of interest in the sport. Good for you; you make a bit more money, being largely responsible for the outside world’s increased interest in the game. Now, let’s say that someone else comes along who’s slightly better than you are; you beat him sometimes, but he’s clearly the superior player. His effect (as a superior player) on you is… that you make more money. Sure, he’ll probably make even more than you do, but the degree to which he advances the game (and increases interest) benefits you. There are now two great players, which means the overall quality of the games (as no one would care to watch if you just won all the time) goes up. When you and he play, people who’ve never watched a Calvinball match in their lives come out. The match will have a winner and a loser but, economically, both sides win.

All animals and most people (the non-privileged) have to worry about competition as an existential threat. In the wild, it’s deadly. For privileged people (here defined using a fairly low bar, so middle-class Americans qualify) the threat from competition is just not that great, not in the long run. If you excel and someone else is better, that just advances the field. If you suck, it’s not the fault of the competition; it’s all on you.

Besides, even in the relatively broken world of white-collar work, one never really has to worry, when doing something genuinely worth doing, about others who are better at the work. One has to worry about nasty people and political adepts, not superior craftspeople. In fact, people who are genuinely superior are usually quite nice about it, at least in my experience. It’s those who are inferior but politically powerful that are most dangerous.

Failed explanation #3: Lack of resources

This one falls down pretty quickly, because the people with tons of resources are often the biggest fuckups of all.

This is a pretty lame excuse that fails to address the real problem. Sure, a day job can slow the progress of that novel, but writers write. If you can’t get a few pages written per week while working a typical day job, you’re not a writer.

There’s something going on that prevents people from using the resources they have. They spend 3 hours per day watching TV and complain about a lack of “time”. No, that’s a lack of energy. It’s different. In fact, it’s not really a lack of energy (in the physical sense) so much as a motivational problem. The issue isn’t a lack of resources but a lack of the emotional and cognitive energy to manage what they have, which presents the (compelling) appearance of a resource shortage without actually being one. I’ll get back to that, after I kill a fourth failed explanation for the epidemic of fuckuppery.

Since I’m focusing on a class of people who have 2 to 6 hours (or more) per day of free time, plus enough disposable income and technological access to learn almost any topic in the world, I don’t think we can give “lack of resources” credit for the overwhelming likelihood that a person does not excel. Sorry, but the resources are there, so I have to kill that excuse.

Failed explanation #4: Personal weakness

The knee-jerk conservative reaction to any social or psychological problem is to ascribe it to “personal weakness” or a lack of “individual responsibility”. It really is the “God of the Gaps” for those people, and it’s pretty absurd.

Why would I take time to address some macho nonsense explanation? Because I think all of us (not just mouth-breathing right-wingers) have a tendency toward self-shame when we compare what we actually accomplish to what we could achieve if we got our shit together. We tend to take our shortfalls personally, without full recognition of the forces resulting in the outcome. We either fall into an external (competition, lack of resources) or internal (personal weakness) locus-of-control explanation, without recognizing the complex mix of the two that we actually face.

By all means, if taking an extreme internal-locus-of-control mentality helps you, then let it motivate you. However, I don’t think the personal weakness argument applies, and if the shame is getting you down, then throw it aside; I’ll explain your (probable) problem just below. Some people have more favorable biology and material resources than others, but there isn’t much evidence to convince me that any of what I wish to analyze is driven by a moral strength/weakness variable independent of those causes. I just don’t see it being there. Most people want to achieve things, work hard (as they understand the concept) and want to do the right thing. Yet, almost everyone deals with emotional fatigue, fluctuating motivation, and less resilience than most people would wish to have. It’s not “weakness”; it’s psychology, and a lot of this stuff is rooted more closely in the physical brain than in the part of ourselves we view as nonphysical, moral, or spiritual– the part possessing some kind of “character” that deserves to be rewarded or punished.

Those four dragons slain, we can get to an accurate explanation of why most people are so ineffective. It’s actually quite simple. Let’s drop into it.

Organizational “work” conditions people to associate work with subordination, making them lazy, unfocused, irresponsible, and emotionally enervated. 

That work worth doing is hard and fails sometimes is not the problem. People can deal with failure. (One of the most engaging reinforcement systems, as seen darkly in slot machines, is variable-schedule reinforcement.) The issue certainly isn’t “too much competition”; since most people achieve a small percentage of what they’re capable of, there isn’t much competitive threat in the world. Nor is the problem scarce resources (although resources are finite, and squandering them will likely lead to non-achievement). And since the evidence for conditioning (learned helplessness) is extremely strong, I think the “personal weakness” argument can be thrown out as a claim rooted in an almost religious bias. Instead, the problem is that society is structured in such a way that it trains people to dislike work.

Most people do most of the work in their life under a subordinate context. If people can only conceive of doing difficult or taxing things when in a state of subordination, they will lose their drive to work. Over time, this will strip them of their creativity and ambition in general. If the conditioning is complete, they’ll become permanent subordinates, unsuited to anything else.

It’s not the objective difficulty, but the erratic and corrupt evaluation, that gets to most people. When the reward is divorced from the quality of the work, people lose interest in the latter. Most people, after all, associate work not with physical or mental difficulty (which people enjoy) but with economic humiliation. In a work world driven by non-meritocratic political forces and therefore subject to constant shifts in priority, they also lose a sense of coherence, and the ability to focus atrophies, since responding quickly to political injections is more valued than deliberate performance. Eventually, full-on disengagement sets in, and people lose a sense of ownership or responsibility. Over time, this creates a class of people conditioned into permanent subordination.

That’s almost all of us, sadly, to some degree. Few of us (even the wealthy, who have no need to work) are free of all traces of the subordination meme-virus. Even many self-employed consultants are had by the balls by a single client or a tight-knit network of clients who value each other’s opinions, and venture-funded entrepreneurs answer literally to their investors. Now, one might argue that “everyone has a boss”. I disagree. Everyone serves (to quote from Game of Thrones, “valar dohaeris”) but it is not strictly necessary for people to serve others on humiliating terms. That part is artificial. It doesn’t need to be there, and in the long term, it does a lot of harm.

Age discrimination is one symptom of the underlying sickness of corporate discrimination. Why is there so much ageism in the corporate world? In terms of skill and competence, older people tend to fall under a bimodal distribution, with some being very good and others being quite weak. There are some who are extremely capable, and that’s because they maintained their creativity, originality, and energy in defiance of a system that spent decades trying to squash them. They’re exceptional as advisors and independent contributors, but they sure as hell aren’t desirable to managers who demand personal subordination; that won’t happen. On the other end are those who’ve subordinated quite well, let creative atrophy set in, and now stand at a disadvantage to younger people who haven’t been burned out yet. Subordination has a long-term cost– the destruction of human capital– and ageism establishes that the penalties are borne by those whose human capital has been destroyed.

More generally, this epidemic of privileged fuckuppery exists because, even at very high levels in our society, we’ve forged generations of people who have a deep-seated association of work with subordination– one that often begins in education, where it befalls the wealthy as much as the poor. They can’t even begin projects without thinking obsessively about how they will be evaluated (which is different from the valid question of how the work will serve others) and that whittles their minds down into second-hand crappy models of other people’s minds. It’s no good. We have to fight it. We have to kill it. This may not be an existential threat to the biological species (that being quite resilient, and more of a threat to nature than threatened by it) but it does pose a danger to the continuance of civilization. At this point, civilization cannot continue without ongoing technical progress, especially as pertains to solving ecological problems, which means we are reliant on human creativity, which organizational subordination kills not only at the bottom, but also at the top (because it requires elevated position-holders to focus more on maintaining rank than anything else).

Workplace subordination, in the 19th century and the first half of the 20th, had major operational efficiencies. Additionally, the destruction it inflicted on human capital was there for poets and philosophers to observe and mourn, but it never threatened to cripple the economy, because its standardization effects outweighed its costs. Assembly-line workers, in truth, didn’t need to be creative to do their jobs. What has changed is that machines are taking over the subordinate work, and will soon enough capture all of it. If the job can be done by a person in subordination, that means that perfect completion can be specified (as opposed to creative work where perfect completion is not even well-defined) and if it can be specified, it can be programmed, and the work can be given over to robots. Soon enough, that will happen.

The result of this is that the market value of subordinate work is falling inexorably to zero. People who are afflicted by the long-term conditioning of subordination will have no leverage in the modern economy, and (as much as I am cautious about such things, being more strongly libertarian than I am leftist) I suspect that central intervention (socialism! gasp!) will be necessary if a nation is to survive the transition. All that will be left for us is work requiring individual creativity and personal expression, and the people who have lost these capabilities to decades of horrible conditioning will need to be given the help to recover (or, at least, enough sustenance while they can bring themselves to recover). The real discussion we need to have– involving economists, business leaders, educators, and technologists– is how to prepare ourselves for a post-subordinate world.

Here’s the proper way to evaluate a startup’s equity offerings

One thing that young people are very bad at– to their detriment, and VC-istan’s profit– is evaluating equity in a startup job offer. They don’t understand the numbers, what they mean, or the processes that lead to them holding certain values. Many focus unduly on percentages, which isn’t the right way to go. As is often noted and obvious, 1 percent from an established company would be amazing; 1 percent of a pre-funding startup is below consideration, except for very light contract work (a couple hours per week of advising). An alternative is to convert the equity grant into a dollar figure. The problem here is that the valuation process is essentially black magic. There is no “market” valuation for a VC-funded startup because VC collusion is so entrenched that there is no competitive market. Rather, it’s driven by processes into which a typical employee has no visibility. Even if you’re getting $50,000 per year of vesting in “equity”, you’re getting what finance calls penny stocks, and you should be aware of their attendant problems (even if you’re not in finance, the Series 7 process, although boring, teaches a lot) before you take those too seriously. Sure, penny stocks can make a person rich; they can also go to zero. Plan accordingly.

VC-istan runs, I think, on a fake generosity. A clueless 22-year-old has no idea what he’s worth on the market. Compared to a PhD student’s stipend of $1,700 per month, an “exciting” startup job that comes with a much higher salary (but still $40,000 below what he could command if he went east, for finance, and got a real job for adults) seems like a great deal. To boot, he’s getting $30,000 (vesting over four years, with a “cliff” provision applied to the first) worth of equity! How generous! That’s how companies bill their equity participation. “We’re giving you this, because we want you to feel like an owner.” (In this case, “feel like an owner” often means to work long hours, put up with drudge work, and favor what we baselessly claim to be firm-wide existential risks over your own career goals, health, and friendships.) In reality, employee equity always comes with vesting (as it should) and a typical schedule is four years, which means it’s $7,500 per year. So it’s not a gift; just regular compensation. In that particular case, it’s a $40,000 pay cut in exchange for $7,500 in penny stocks. Hardly a good deal.
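To make the back-of-the-envelope math explicit (using the example’s figures: a $30,000 grant vesting over four years against a $40,000 annual salary gap), the “generous” offer nets out like this:

```python
equity_grant = 30_000   # total "generous" equity grant, vesting over four years
vesting_years = 4       # typical vesting schedule
salary_gap = 40_000     # annual pay cut vs. a market-rate finance job

equity_per_year = equity_grant / vesting_years
net_per_year = equity_per_year - salary_gap
print(equity_per_year)  # 7500.0   -- the grant is just $7,500 per year
print(net_per_year)     # -32500.0 -- a net annual loss, paid back in penny stock
```

In other words, the employee is trading away $32,500 per year of cash for an illiquid grant that may never be worth anything.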

Every equity offer comes with a vesting period (typically 4 years) and a “cliff” provision that no equity is earned if the employee leaves (or is terminated, and “cliffing” firings at 362-364 days are pretty common). It’s important to keep that in mind. The equity “grant” is contingent on an outcome that, in the VC-funded world, is pretty rare. At a typical startup, it probably won’t be worth it to keep coming into work every day for 4 years.  Six months from now, you might be answering to an outsider you’ve never heard of.

In fact, full vesting seems only to occur for the mediocrities. The bottom 15% (as well as an additional 15% who are capable but politically unlucky) get fired, often without severance, long before the four-year mark. The top 15% usually bounce, because waiting around to “vest” on some piddling 0.02% equity offering, when you can roll the dice again and possibly be a founder– or at least get a real title and be a founder two gigs later– is a pathetic excuse for not growing up. (This is another rant, but most VC-funded startups are halfway houses for college kids who’d rather waste their 20s than (gasp!) have to show up somewhere in the a.m. hours.) With the top and bottom of the pack getting drawn out, it’s the middling players (“chief vesting officers”) who are actually around for long enough to tap their full, four-year, grants. Keep that in mind. Your expected percentage of that four-year target is probably (including cliff) 25-50 percent, and closer to the 25% if you’re unusually good (or bad) at what you do.

All that said, I’m going to assume the reader knows this. Of course, there still are good startups out there, and I will never deny that fact. They’re uncommon, but they exist. People need to know how to evaluate their equity allotments, and that’s what I’ll focus on here. Below is a simple formula:

Person-Power = (Number of employees) * (Equity percentage)

This isn’t a meaningless statistic or even a heuristic. Companies exist to aggregate human labor, and equity represents a share in what the group produces. If you’re offered 0.02% of an 80-person company, that’s representative of the work of 80 * 0.0002 = 0.016 people. In other words, each week, your equity represents a payment (in time) of 0.016 * 40 = 0.64 hours of work. You put in an eight-hour day, and the equity is a return of seven minutes and 41 seconds of human time: a long bathroom break.
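The formula and the worked example translate directly into code (a minimal sketch; the 40-hour week, 5-day week, and 8-hour day are the assumptions used in the example above):

```python
def person_power(num_employees: int, equity_pct: float) -> float:
    """Fraction of the company's collective labor the equity represents."""
    return num_employees * equity_pct

pp = person_power(80, 0.0002)    # 0.02% of an 80-person company
hours_per_week = pp * 40         # weekly human time the equity stands for
minutes_per_day = hours_per_week / 5 * 60

print(pp)               # ≈ 0.016 person-equivalents
print(hours_per_week)   # ≈ 0.64 hours per week
print(minutes_per_day)  # ≈ 7.68 minutes per eight-hour day
```

Running the same function against a more generous grant (say, 0.3 of a 20-person company) makes the contrast obvious: that equity stands for hours of human time per day, not minutes.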

The person-power metric accounts for the meaninglessness of equity percentages (as again, 1% of Google would be fantastic) and the uncertainty surrounding valuations. It gives actual meaning to the equity. You can envision a 0.02% slice of an 80-person company as 5.76 seconds of each person’s workday being done on your behalf, or (as above) 7.68 minutes of total human time. That’s not all that much, in contrast to the concessions that these small companies expect because “we’re a startup”. Of course, outside of the startup world most companies give zero equity, so one might argue that, “hey, it’s better than nothing”. Sure, but those zero-equity non-startups actually pay people real salaries, give annual raises, try harder than startups not to fire people unjustly, have a lot of slack in the schedule allowing for (semi-furtive, but easy to execute) personal career growth, and let people leave at 5:00.

So what’s a fair range for person-power? Well, it depends on the risk level. The average, across the whole organization, can never be more than 1.0. In fact, it will typically be less than that because investors, advisors, and board members need their cut (and the investors actually bring something to the table!) I’d say that 0.15-0.3 is more than fair for a junior-level employee, and 0.5-0.8 (except for a risk-taking founder) is quite generous. That is what real equity looks like.

Below 0.1, on the other hand, I’d say that the employee should write the equity off entirely and focus only on the salary (with an understanding that startups rarely give salary raises or annual bonuses; if the investor-determined valuation goes up, that is the raise). I also don’t see why companies offer low equity amounts in the first place; those seem to complicate the finances of the company for minimal benefit, because if these junior chumps have any talent, they’ll figure out the VC-istan game and either want ten times more, or become 10-to-4 “chief vesting officers” while they plan for their next gig. (If I were running a company, I’d be extremely liberal with profit-sharing but give almost no one equity; that’s for investors, but I’d encourage employees to diversify their finances beyond their employer.) The signal is negative. For me, equity has an uncanny valley. If I’m not going to get a real stake, then I’d rather just zero the equity in exchange for a market-level salary, sane working hours, annual raises and bonuses, and not being surrounded by 21-year-old college kids who think their token ownership ought to drive them to work till 11:30 at night (with various stories of unprofessional behavior emerging out of that coupling of the night hours with the office).

I don’t have an overarching, sweeping conclusion or any real wisdom here, but I think that every startup employee should take the time to compute that Person-Power number. If it doesn’t match or exceed the percentage of market salary (including four years of raises, bonuses, and career support) that he or she is giving up to work there, it’s probably time to bounce.

Wrong places, wrong times, decline, prestige, and what it all might mean.

Here’s a deceptively simple question: why would a person be at the wrong place in the wrong time?

People use this sort of description about places and times to describe “luck” in the business world. Someone’s success is written off as, “he was just in the right place at the right time”. I’m starting to doubt that this can really be ascribed to a lack of merit. Some people know where “right places” are, and some people don’t. It’s not pure luck. There is skill in it; it’s just a very difficult skill to measure or even detect.

I’ve had to contend with this myself. I’m almost 30, and I was one of those people about whom, when I was younger, everyone said that I’d either be successful or dead by 30. Well, I got neither. Breakout success was the gold, noble death was the silver, and I’m stuck with the bronze. That’s depressing. Something I realized recently when looking over my career choices is that I made a lot of decisions that would have seemed good from a timing-independent standpoint, but that I often made the worst choice for the given time, almost as if it were a habit. So I have to ask myself, because I’m too old to pretend these things aren’t showing a pattern: why would I be in the wrong place at the wrong time?

I’m pretty sure this is a common issue. Timing matters enormously: living in Detroit in 1965 was dramatically different from living there in 1980. However, in society, we tend to evaluate people’s choices morally. People who are successful made good choices (and vice versa). The problem is that we also view morality in absolute terms, while the quality of choices (especially economic ones) is extremely time-dependent. This often leads us to draw inaccurate conclusions not only about ourselves but also about the decisions we must make. Ignoring timing is an error that’s often catastrophic.

For example, choosing to work at Google was one of the biggest mistakes I’ve made, but it would have been a great choice at another time. I wasted a lot of emotional energy being angry at my (truly awful) manager when I was at Google. However, bad managers are a fact of life, and people survive them. What really went wrong is that I joined Google in May 2011. If you join a company while it’s great and get a bad manager, there’s a way to move around that problem. You’re in a company that wants to succeed and will make a way for you to contribute something great. If you join a company in decline, however, you’re stuck. The firm’s demand for greatness is minimal; what it wants now is stability, and the game becomes using social polish to compete for dwindling visibility and opportunity. In technology, closed allocation is the surest sign of a declining firm. In truth, I shouldn’t be angry at my ex-boss for being awful (some are) or at Google for declining (no company wishes to decline), but at myself for picking that company while it was in decline. That’s on me. No one forced me to do that.

What attracts people like me to decline? I think there are four explanatory causes for why people tend to put themselves in formerly-right places at wrong times.

1. Prevailing decline

This one’s not our generation’s fault. Most of American society is in decline. Perhaps one wouldn’t know it from the Silicon Valley buzz, which trumpets successes while hiding failures. Plenty of people say things like, “Why should I care about Flint, Michigan, when software engineer salaries keep rising? There will always be jobs for us.” Doing what, pray tell? We’re much more interdependent than people like to believe, which is why this attitude infuriates me. Decay often ends up hurting everyone. Rural poverty in the 1920s turned into the Great Depression of the 1930s. Poverty isn’t wayward people getting their bitter medicine; it’s a cancer that shuts a society down.

It’s easy to end up picking a string of declining companies when there’s so much decline to go around. That’s a big part of why it’s so hard for our generation to get established. That said, this is the least useful place to focus because no one reading this can do anything about the problem, at least not individually.

2. Nostalgia

People are more prone to nostalgia than they like to admit. It leads them to attempt to replicate former successes and sprints of progress that are no longer available. Businesses change. A person who goes into investment banking based on the movie Wall Street is in for a rude awakening, because the Gordon Gekkos aren’t taking 24-year-old protégés under their wing; they’re trying to protect their own asses in a harsher regulatory climate. The same is true of Silicon Valley. Is there money to be made there? Of course there is, but the easy wins of the 1990s are gone. It’s no longer enough to be “in the scene”, and people who are just getting established will probably not find themselves eligible for the best opportunities until those are gone, leaving scraps.

What’s unusual in the case of suboptimal career moves is that it’s often oblique nostalgia. People aren’t trying to relive their own good times, but to get in on a previous generation’s golden age (when that generation, having long ago recognized the closed opportunity window, has mostly left). This can actually be one of the more effective ways to play, for reasons explained in the next item.

3. Risk aversion and prestige

Most things that are “prestigious” are actually in decline. For example, most of the smartest undergraduates attempt graduate school. It’s what you do to show that you’re not one of those pre-professional idiots. Yet academia has been in brutal decline for almost 30 years! Those tenured professorships are not coming back. Wall Street and VC-istan likewise offer scarce opportunities to those who aren’t already established, yet they carry a lot of prestige. Prestige, alas, matters. If you were an analyst at Goldman Sachs, every VC will go out of his way to fund you, even though analyst programs have very little value at this point (except as proof that a person can survive punishing hours).

If you look at the opportunities for new entrants, Wall Street, Google, and VC-istan are all quite dry. There are plenty of people getting rich, but it takes years to position oneself, and the opportunity will probably have moved elsewhere by the time one is able to take advantage of it. However, prestige offers a benefit. No one can predict where the real prize (true opportunity) will show up next, but prestigious employers help a person find some kind of position– a consolation prize– in whatever comes next, by offering social proof. They provide the validation that a person was smart enough. If you got into Google or Goldman Sachs in 2007, you’re probably not rich, but you’ve proven that you’re good enough to be rich; i.e. you’re not a total loser. People will often tolerate these second-place finishes while they build up the credibility to be in the running when real opportunities come about. But is that a good strategy? I’m not so sure that it is, anymore. If you join a declining technology company, you’ll face closed allocation, stack ranking, and bland projects, which will hurt your career. Prestige is important, but so is quality.

4. Saprophytism

Some people, though very few, have a knack for turning decay into opportunity. When they see decline, they turn it into profit. This isn’t socially acceptable behavior, but it exists, and for some people it works. Is it common? I have no idea. For a worker, it’s very hard to pull off. Since the worst fruits of organizational decay always fall onto the least established (“shit flows downhill”), it’s unclear how a low-level employee could reliably turn decay into a win. I’m sure some people have that talent and motivation, but I’m not one of them.

How would a person profit from the decay of Corporate America? Some people answer, “A startup!”, but that’s a really bad response. VC-istan is Corporate America with better marketing, and non-VC-funded startups rely on clients, which means they’re still dependent on this decaying ecosystem. No one wins when there is so much loss to go around. I’m sure there are some financial plays (informed short-selling) that would work, but I can’t think of a great career play for a young person trying to get established. Perhaps it’s a great time to be in the so-called “tech press”; they seem to enjoy themselves when things go to hell.

Concluding thoughts

The above are partial explanations of the problem. Why do so many intelligent people consistently put themselves in wrong-place/wrong-time situations that inhibit success? What’s the systematic problem? I think that risk aversion and prestige are major components, worthy of further study, except that we all already know this on a cognitive level. We know that reputation is a lagging indicator, of low value in a dynamic world, but we cling to it because other people do and because we rarely have better ideas.

I don’t think that individual nostalgia is a major player, but its collective form is prestige, and that often creates bizarre inconsistencies. For example, the prestige of the Ivy League has nothing to do with the Adderall-fueled teenagers applying to twenty colleges and test-prepping at the expense of a normal or healthy adolescence– decidedly unprestigious behavior by people who are (albeit very slowly) eroding the prestige of those institutions. Rather, that prestige exists because of things that happened long before those kids were even born. The prestige of those places comes out of a time when the psychotic rat race around admissions didn’t exist and those colleges were accessible only to a well-connected elite, because what our society really values is legacy and wealth, not talent or striving.

Prevailing decline makes this whole game harder, of course. What I think is really at the heart of it is that it’s hard, if not impossible, to predict the future. People go to places of past excellence for the association (prestige), in order to take advantage of the halo effect and gain social superiority in the beauty contests necessary to win at organizational life. It works, because of the nostalgia held by most people now in power, but not well enough to counteract the overall tenor of decay. Even the people who succeed find themselves bitterly unhappy, because they compare what they get out of their path to what those who traveled it before them got; it turns out that a Harvard degree in 2013 is still powerful, but not the golden ticket that it was in 1970, and that joining Google now is not the same thing as joining it in 2001.

So where is the future? I don’t know. If I did, I’d be somewhere other than where I am right now. Alan Kay said, “The best way to predict the future is to invent it.” I like this sentiment, but I’m not sure how practical it is. None of us who need to know where the future is have the resources to invent it. Those resources (wealth, connections, power) all live with citizens of the past. It’s this fact that keeps drawing generations, one after another like crashing waves, toward the false light of prestige: the hope of getting some scrap metal out of decaying edifices so as to have the right materials when the opportunity comes to make something new. But where (and when) is the time for building?

A guess at why people hate paying for (certain) things… and a possible solution.

I’ve been thinking a lot about paywalls and why people are so averse to paying for things they use on the Internet. People don’t mind putting quarters into a vending machine to get a snack at 4:00, or handing over a couple of one-dollar bills for a coffee, but put a 50-cent charge on an article, and your savvier readers will try to find it for free, while your less savvy ones will just find another distraction. People hate paywalls, and it’s not clear why. The time people spend trying to get around copyrights in a safe and reliable way is often worth more than the money that would be spent just paying for the content. Economically, it’s hard to make sense of this. The time spent reading a news article is worth a lot more than the amount being asked for (paid content is often only 10-20% more costly at worst, once the time is counted), so why are paywalls so controversial? What’s the issue here?

Personally, I think it goes back to childhood. If you were in a hotel room, you didn’t touch the “Pay Channels” (perhaps as much because of what they were as because of their price) or you’d be in trouble. You watched the Free Channels only. You could make a few local calls, but long-distance was a no-no (when I was growing up, long-distance rates were over 50 cents per minute), except on Sunday nights to relatives. For a child, things that cost what adults would recognize (given the technology of the time) as fair prices were exclusionary, simply because children (for good reasons) aren’t given a lot of money.

Most of us started using the Internet at a time when the symbol $ meant that you couldn’t continue, or that you’d at least have to explain to your parents why you needed the $7/month deluxe version of the game you were playing, because you couldn’t exactly pay in cash. Sure, they’d be happy to take it out of your allowance, but just having your parents know was often too much. They’d often disapprove. “Is that stupid game really worth $7?” On to something else.

Or maybe I’m just personally stingy. It’s not that I object to spending small amounts of money. If I know I’m going to get value out of something, I spend money on it. On the other hand, I have plenty of small, irritating recurring payments that I mean to get around to clearing up; with that experience, I’m unlikely to take on another one. It’s not the $15 per month that gets to me; it’s that I’ll be bled for $180 per year until I remember, “oh, yeah, that fucking thing” and go through whatever hoops are involved in canceling my membership.

What I’m getting at, though, is that we haven’t figured out how to pay for a lot of important services (and plenty of not-so-important ones, too). People have a lot of weird emotions about money, often divorced from the actual amounts. A paywall reminds people of childhood and feels exclusionary, even when the amount of money involved is trivial. People also have a very justified dislike of recurring payments, given how unreasonably difficult it can sometimes be to get rid of them.

One thought I’ve had, for the web, is to set up a passive-payment ecosystem. This could apply to blogs, games, and discussion forums in a way that doesn’t require individual content providers to ask for money. People set a payment level somewhere in the neighborhood of $0.00 to $1.50 per hour and pay the provider of whatever they are watching or using, on a minute-by-minute basis, as they go. (The benefit of setting a higher passive-pay level is that you are served fewer ads and receive faster communication.) What’s nice about the system is that (a) the payment level is voluntary and intended to be trivial in comparison to the value of the time spent online, and (b) it has the potential to be more lucrative, for content providers, than advertising. Most importantly, though, the decision overload associated with paywalls, tip jars, recurring payments, and all of the other stuff involved in asking people for money goes away; if someone sets his payment rate at 75 cents per hour and spends 15 minutes on a site, then 18.75 cents is automatically sent to the owner of the site.
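The metering logic itself is simple enough to sketch. Everything below is illustrative– the class name, the site name, and the rates are all made up; no such system exists that I know of:

```python
# Hypothetical sketch of a passive-payment meter: the user picks an hourly
# rate, and each site accrues a pro-rated share per minute of attention.

from collections import defaultdict

class PassivePayMeter:
    def __init__(self, hourly_rate_cents):
        self.rate = hourly_rate_cents       # user-chosen, e.g. 75 = $0.75/hour
        self.owed = defaultdict(float)      # site -> cents accrued so far

    def visit(self, site, minutes):
        """Accrue payment to `site` for `minutes` of use at the user's rate."""
        self.owed[site] += self.rate * minutes / 60.0

meter = PassivePayMeter(hourly_rate_cents=75)
meter.visit("example-blog.com", 15)
print(meter.owed["example-blog.com"])  # 18.75 cents, matching the example above
```

In practice the accrued fractions of a cent would be batched and settled periodically, since sub-cent transfers aren’t practical on their own.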

Passive payment is an interesting idea. I’m not sure where it’s going, but it’s worth exploring.