Cheap votes: political degradation in government, business, and venture capital.

I’ve written a lot about how people in the mainstream business culture externalize costs in order to improve their personal careers and reputations, and the natural disconnect this creates between them and technologists, who want to get rich by creating new value, and not by finding increasingly clever ways to slough costs off onto other people. What I haven’t written as much about is how these private-sector social climbers, who present themselves as entrepreneurs but have more in common with Soviet bureaucrats, managed to gain their power. How exactly do these characters establish themselves as leaders? The core concept one needs to understand is one that appears consistently in politics, economics, online gaming, and social relationships: cheap votes.

Why is vote-selling illegal?

First, a question: should it be illegal to buy and sell votes? Some might find it unreasonable that this transaction is illegal; others might be surprised to know that it wasn’t always against the law, even if it seems like the sort of thing that should be. Society generally allows the transfer of one kind of power into another, so why should individual electoral power be considered sacred? On theory alone, it’s hard to make the case that it should be. 

I’ll attempt to answer that. The first thing that must be noted is that vote-buying matters. It increases the statistical power of the bought votes, to the detriment of the rest of the electorate. On paper, one vote is one vote. However, the variance contribution (or proportion of effect) of a voting bloc grows with the square of its size. In that way, the power of a 100-person, perfectly correlated (i.e. no defections) voting bloc is 10,000 times that of an individual. 

Let’s give a concrete example. Say that the payoff of a gamble is based on 101 coins, 100 white and one black. The payoff depends on which coins land heads: each white coin that lands heads pays $1, and the black coin pays $100 if it lands heads. The total range of payoffs is $0 to $200, and the black coin will, obviously, contribute $100 of that. So does the black coin have “half of” the influence over the payoff? Not quite; it has more. The white coins, as a group, will almost always contribute between $30 and $70– and between $40 and $60, 95 percent of the time. It’s a bell curve. What this means is that whether a round will have a good payoff depends, in practice, almost entirely on the black coin. If it’s heads, you’ll almost never see less than $130. If it’s tails, you’ll rarely see more than $70. The white coins matter, but not nearly as much, because many of the heads and tails cancel each other out.

Both the white and black coins have the same mean contribution to the payoff: $50. However, the variance of the single black coin is much higher: 2500 (or a standard deviation of $50). The white coins, all together, have a variance of 25, or a standard deviation of $5. Since variance is (under many types of conditions) the best measure of relative influence, one could argue that the black coin has 100 times the mathematical influence of all the white coins added together, and 10,000 times the influence of an individual white coin. 
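The variance arithmetic above can be checked exactly in a few lines. A minimal sketch, using the fact that the variances of independent coins add (no simulation needed):

```python
# Checking the coin-gamble arithmetic above: each coin pays its value on
# heads with probability 1/2, and variances of independent coins add.

def coin_variance(value, p=0.5):
    """Variance of a coin that pays `value` on heads and 0 on tails."""
    mean = p * value
    return p * value ** 2 - mean ** 2  # E[X^2] - E[X]^2

one_white = coin_variance(1)          # 0.25  (sd of $0.50)
all_whites = 100 * coin_variance(1)   # 25.0  (sd of $5 for the white bloc)
black = coin_variance(100)            # 2500.0 (sd of $50)

print(black / all_whites)  # 100.0: the black coin vs. all whites combined
print(black / one_white)   # 10000.0: the black coin vs. one white coin
```

The same computation covers the voting-bloc claim: a perfectly correlated bloc of 100 one-unit votes behaves like a single 100-unit coin, so its variance (and hence its influence) is 100² times that of an individual vote.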

These simplifications break down in some cases, especially around winner-take-all elections. For example, if two factions are inflexibly opposed (because the people in them benefit or suffer personally, or think they do, based on the result of the election) and each has 45% of the vote, then the people in the remaining 10% (“spoilers”) have significantly more power, especially if something can bring them to congeal into a bloc of their own. That is a commonly-cited case in which individual, generally indifferent “swing” voters gain power. Does this contradict my claim about the disproportionate power of voting blocs? Not really. In this scenario, they have disproportionate decisive effect, but their power is over a binary decision that was already set up by the movement of the other 90%. 

Moreover, it’s improbable that the people in that 10 percent would form a bloc of their own. What prevents this? Indifference. Apathy. They often don’t really care either way about the matter being voted on. They’d probably sell their votes for a hundred dollars each. 

In quite a large number of matters, specific details are too boring for most people to care about, even if those issues are extremely important. They’d much rather defer to the experts, throw their power to someone else, and get back to their arguments about colors of bikesheds. Their votes are cheap and, if it’s legal, people will gain power or wealth by bundling those cheap votes together and selling the blocs.

So why is vote-selling illegal? It causes democracy to degenerate (enough that, as we’ll see, many organizations eschew democracy altogether). The voters who have the most interest in the outcome, and the most knowledge, will be more inclined to vote as individuals. Though they will correlate and may fall into loose clusters (e.g., “conservatives”, “liberals”), this will tend to be emergent rather than by intent. On the other hand, the blunt power of an inflexible voting bloc will be attained by… the bought votes, the cheapest votes, the “fuck it, whoever pays me” set. The voting process ceases to reflect the general will (in Rousseau’s sense of the concept) of the people, as power is transferred to those who can package and sell cheap votes– and those who buy them.

Real-life examples

Official buying and selling of votes is illegal, but indirect forms of it are both legal and not uncommon. For example, over ninety percent of voters in a typical election will give their vote, automatically, to the candidate of one of two major political parties. These candidates are usually chosen, at this point in history, through legitimate electoral means: the party’s primary. But what about the stages before that, as incumbents in other offices issue endorsements and campaign checks are cut?

Effectively, the purpose of these parties is to assume that cheap-vote congealment (and bloc formation) has already happened, tell the populace that it’s down to two remaining candidates, and make the voters feel they have a choice between two people who are often quite similar in economic (in the U.S., right-of-center) and social (moderately authoritarian) policies while differing on superficial cultural grounds (related to religion in a way that is regional and does not generalize uniformly across the whole country). The political parties, in a way, are the most legitimate cheap-vote aggregators. They know that most Americans care more about the bike-shed difference between Democratic corporate crooks and Republican corporate crooks– the spectator-sport conflict between Springfield and Shelbyville– than about the nuances of political debate and the merits of the issues.

The vote-buying process is more brazen in the media. While expensive and thorough campaigns can’t turn an unlikeable person into a winner, they can have a large effect in “swing states” or close matches. There are some people who’ll be swayed by the often juvenile political commercials that pop up in the month before an election, and those are some of the cheapest voters. The electioneer need not even buy their vote directly; it has already been sold to the television station or radio show (a highly powerful cheap-vote aggregator) to which they’ve lent their agency.

This is one of the reasons I don’t find low voter turnout to be distressing or even undesirable, at least not on first principles. If low voter turnout is an artifact of disenfranchisement, then it’s bad. If poorer people can’t get to the polls because their bosses won’t let them have the time off work (and Election Day ought to be a day off from work, but that’s another debate) then that’s quite wrong. On the other hand, if uninformed people don’t show up, that’s fine. I don’t get involved in civic activities unless I know what and who I’m voting for; otherwise, I’d be, at best, adding statistical noise and, at worst, unwittingly giving power to the cheap vote sellers and buyers who’ve put their preferred brand into my head.

All this said, cheap-vote dynamics aren’t limited to politics. In fact, they’re much more common in economics. Just look at advertising. People vote with their dollars on what products should be made and what businesses should continue. A market, just like an election, is a preference aggregator. The problem? No one knows all of the contenders, or could possibly know. Unlike an election with a handful of political candidates, a market might have twenty or two hundred vendors of a product. Quite a great number of buyers will choose not based on product quality or personal affinity but on reputation (brand) alone. Advertising has a minimal effect on the most knowledgeable (Gladwell’s “Mavens”) but it’s extremely powerful at bringing in the cheapest votes, the on-the-fence people who’ll go with what seems like the least risky choice.

Venture capital

Maybe it’s predictable that I would relate this to technology, but it’s so applicable here that I can’t leave the obvious facts of the issue unexplored. 

Selection of organizational leadership almost always has a cheap-vote issue, because elections with large numbers of indistinguishable alternatives are where cheap votes have the most power. (A yes/no decision that affects everyone is where cheap votes will have the least power.) Most people see the contests as wholly external, because all the credible candidates are (from the individual’s point of view) just “not me”. Or, more accurately, if no one they know is in contention, they’re not going to be invested in the matter of which bozo gets the tallest stilts. As organizations get large, the effect of this apathy becomes dominant. 

Therefore, it’s rare in any case that selection of people will be uncorrupted by cheap vote dynamics, no matter how democratic the election or aggregation process may be. While some people are great leaders and others are terrible, it’s nearly impossible to reliably determine who will be which kind until after they have led (and, sometimes, it’s not clear for some time afterward). If asked to choose leaders among 20 candidates in a group of 10,000, you’ll see nuisance (by “nuisance”, I mean, uncorrelated to policy) variables like physical attractiveness, charisma, and even order of presentation (making the person who designs the ballots a potential cheap-vote vendor) have a disproportionate effect. This is an issue in the public sector, but a much more egregious one in the private sector, given the complete lack of transparency into the “leadership” class, in addition to the managerial power relationships and the general lack of concern about organizational corruption.

Corporations (for better or worse, and I’d argue, for the worse) eliminate this effect by simply depriving employees of the ability to choose leaders at all: supervisors and middle managers and executives are chosen from the top down, based on loyalty to those above, and the workers are assumed to be voting for the pre-selected leadership by continuing to work there. The corporation cheapens the worker’s vote, in effect, by reducing its value to zero. “You were going to sell your vote anyway, so let’s just say that the election happened this way.” Unless they can organize, the workers are complicit in the cheapening of their votes if they continue to work for such companies and, sadly, quite a large number do.

There are people, of course, who are energetic and creative and naturally anti-authoritarian. Such people dislike an environment where their votes have already been cheapened, bought for a pittance, and sold to the one-party system that calls itself corporate management. The argument often made about them is that they should “just do a startup”, as if the one-party system of Silicon Valley’s venture capital elite would be preferable to the one-party system of a company’s management. By and large, it’s not an improvement.

In fact, the Silicon Valley system is worse in quite a large number of ways. A corporation can fire someone, but generally won’t continue to damage that person’s reputation, for fear of a lawsuit, negative publicity, and plummeting internal morale. This means that a person who rejects, or is rejected by, one company’s one-party system can, at the least, transition over to another company that might have a better party in charge. There is, although not to the degree that there should be, some competition among corporate managers, and that generally keeps most of them from being truly awful. On the other hand, venture capitalists, with their culture of note-sharing, collusion, and market manipulation (one which, if it were applied to publicly-traded stocks instead of unregulated private equities, would result in stiff prison sentences for all of them; alas, lawmakers don’t much care what happens to the careers of middle-class 22-year-old programmers) frequently do damage the careers of those who oppose the interests of the group. Most of the VC-era “innovations” in corporate structure and culture– stack-ranking, the intentional encouragement of a machismo-driven and exclusionary culture, fast firing, horrendous health benefits because “we’re a startup”– have been for the worse. The Valley hasn’t “disrupted” the corporate establishment. It’s reinvented it in a much more onerous way.

So how do the bastards in charge get away with this? The Silicon Valley elite are, mostly, the discards of Wall Street. They weren’t successful in their original home (the corporate mainstream) and they aren’t nearly as smart as the nerds they manage, so what gives them their power? Who gives up the power that they win? Once again, it’s a cheap vote dynamic in place. 

Venture capitalists are intermediaries between passive capital seeking above-normal returns and top technical talent. There’s a lot of passive capital out there coming from people who want to participate, financially, in new technology development. Likewise, there are a lot of smart people with great ideas but no personal ownership of the resources to implement them. The passive capitalists recognize that they don’t have the ability to judge top talent from pretenders (and neither do the narcissistic careerists on Sand Hill Road to whom they entrust their assets, but that’s another discussion) and so they sell their votes. Venture capitalists are the ones who buy those votes and package them into statistically powerful blocs. Once this is done, the decision of a single venture capitalist (bolstered by others in his industry who’ll follow his lead) determines which contender in a new industry will get the most press coverage, the most expensive programming talent, and sufficient evidence of “virality” to justify the next round of funding.

As programmers, we (sadly) can’t do much to prevent pension funds and municipalities from erroneously trusting these Bay Area investor celebrities who couldn’t tell talent from their own asshole. I’ve said enough, to this point, about that side, and the cheap-vote buying that happens between passive capitalists and the high priests who are supposed to know better. In theory, the poor returns delivered by those agents ought to result in their eventual downfall. After all, shouldn’t people lose faith in the Sand Hill Road elite after more than a decade of mediocre returns? This seems not to be happening, largely because of the long feedback cycle and high variance intrinsic to the venture capital game. Market dynamics work in a more regularized setting, but when there is that much noise and delay in the system, capable direct judgment of talent (before the results come in) is the only reliable way to get decent performance. Unfortunately, the only people with that capability are us, programmers, and we’re near the bottom of the social hierarchy. Isn’t that odd?

So let’s talk about what we can do. Preventing the flow of capital from passive investors into careerist narcissists at VC firms who fund their underqualified friends is probably not within our power at the present time. It’s nearly impossible to prevent someone with a completely different set of interests from cheapening his or her vote. Do so aggressively, and the person is likely to vote poorly (that is, against the common interest and often his own) just to spite the regulator attempting to prevent it, just as a teenage girl might date low-quality men to offend her parents. So let’s talk about our votes.

VC-funded companies (invariably calling themselves “startups”) don’t pay very well, and the equity disbursements typically range from the trivial down to the outright insulting. Yet young engineers flock to them, underestimating the social distance that a subordinate, engineer-level role will give them from the VC king-makers. They work at these companies because they think they’ll be getting personal introductions from the CEO to investors, and join that circuit as equals; in reality, that rarely happens unless contractually specified. They strengthen the feudal reputation economy that the VCs have created by giving their own power away based on brand (e.g., TechCrunch coverage, name-brand investors). 

When young people work for these VC darlings under such rotten terms, they’re devaluing their votes. When they show unreasonable (and historically refuted) trust in corporate management by refusing to organize their labor, they are (likewise) devaluing not only their political pull, but the credibility and leverage of their profession. That’s something we, as a group, can change. We probably can’t fix the way startups are financed in the next year; maybe, if we play our local politics right and enhance our own status and credibility, we’ll have that power in ten. We can start to clean up our own backyards, and we should. 

Sadly, talent does need access to capital, more than capital needs talent. The pressing needs of the day have given capital, for over a century, that basic supremacy over labor: “you need to eat, I can wait.” But does talent need access to a specific pool of capital controlled by narcissists living in a few hundred square miles of California office park? No, it doesn’t. We need money, but we don’t need them. On the other hand, if the passive investors who provide the capital that fuels their careers even begin to pay the littlest bit of attention, the VCs will need us. After all, it’s the immense productive capacity of what we do (not what VCs do) that gives venture capital the “sexiness” that excuses its decade-plus of mediocrity. Their ability to coast, and to fund suboptimal founders, rests on the fact that no one is paying attention to whether they do their jobs well, the assumption being that we (technologists) will stay on their manor, passively keeping our heads down and saying, “politics is someone else’s job; I just want to solve hard problems.” As long as we live on the VCs’ terrain, there is no way for passive investors to get to us except through Sand Hill Road. But there is no reason for that to continue. We have the power to spot, and to vote against, bad companies (and terrible products, and demoralizing corporate cultures) as and before they form. And we ought to be using it. As I’ve said before, we as software engineers and technologists have to break out of our comfort zones and (dare I say it?) get political.

Why corporate penny-shaving backfires. (Also, how to do a layoff right.)

One of the clearest signs of corporate decline (2010s Corporate America is like 1980s Soviet Russia, in terms of its low morale and lethal overextension) is the number of “innovations” that are just mean-spirited, and seem like prudent cost-cutting but actually do minimal good (and, often, much harm) to the business.

One of these is the practice of pooling vacation and sick leave in a single bucket, “PTO”. Ideally, companies shouldn’t limit vacation or sick time at all– but my experience has shown “unlimited vacation” to correlate with a negative culture. (If I ran a company, it would institute a mandatory vacation policy: four weeks minimum, at least two of those contiguous.) Vacation guidelines need to be set for the same reason that speed limits (even if intentionally under-posted, with moderate violation in mind) need to be there; without them, speed variance would be higher on both ends. So, I’ve accepted the need for vacation “limits”, at least as soft policies; but employers who expect their people either to use a vacation day for sick leave or to come into the office while sick are just being fucking assholes.

These PTO policies are, in my view, reckless and irresponsible. They represent a gamble with employee health that I (as a person with a manageable but irritating disability) find morally repugnant. It’s bad enough to deny rest to someone just because a useless bean-counter wants to save the few hundred dollars paid out for unused vacation when someone leaves the company. But by encouraging the entire workforce to show up while sick and contagious, they subject the otherwise healthy to an unnecessary germ load. Companies with these pooled-leave (“PTO”) policies end up with an incredibly sickly workforce. One cold just rolls right into another, and the entire month of February is a haze of snot, coughing, and bad code being committed because half the people at any given time are hopped up on cold meds and really ought to be in bed. It’s not supposed to be this way. This will shock those who suffer in open-plan offices, but an average adult is only supposed to get 2-3 colds per year, not the 4-5 that are normal in an open-plan office (another mean-spirited tech-company “innovation”) or the 7-10 per year that is typical in pooled-leave companies.

The math shows that PTO policies are a raw deal even for the employer. In a decently-run company with an honor-system sick leave policy, an average healthy adult might have to take 5 days off due to illness per year. (I miss, despite my health problems, fewer than that.) Under PTO, people push themselves to come in and only stay home if they’re really sick. Let’s say that they’re now getting 8 colds per year instead of the average 2 or 3. (That’s not an unreasonable assumption, for a PTO shop.) Only 2 or 3 days are called off, but there are a good 24-32 days in which the employee is functioning below 50 percent efficiency. Then there are the morale issues, and the general perception that employees will form of the company as a sickly, lethargic place; and the (mostly unintentional) collective discovery of how low a level of performance will be tolerated. January’s no longer about skiing on the weekends and making big plans and enjoying the long golden hour… while working hard, because one is refreshed. It’s the new August; fucking nothing gets done because even though everyone’s in the office, they’re all fucking sick with that one-rolls-into-another months-long cold. That’s what PTO policies bring: a polar vortex of sick.
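Spelled out as arithmetic, the comparison looks like this. The numbers (5 honor-system sick days; 2-3 called-off days plus 24-32 impaired days under PTO; 50 percent efficiency while sick at work) are the assumed figures from the paragraph above, not measured data, and the `lost_days` helper is invented for illustration:

```python
# Back-of-envelope comparison of productivity lost per employee per year,
# measured in full working days. All input numbers are illustrative
# assumptions from the text, not data.

def lost_days(full_days_off, impaired_days, efficiency=0.5):
    """Days fully missed, plus partial loss from days worked while sick."""
    return full_days_off + impaired_days * (1 - efficiency)

honor_system = lost_days(full_days_off=5, impaired_days=0)
pto_low = lost_days(full_days_off=2, impaired_days=24)
pto_high = lost_days(full_days_off=3, impaired_days=32)

print(honor_system)  # 5: the honor-system baseline
print(pto_low)       # 14.0: PTO shop, optimistic case
print(pto_high)      # 19.0: PTO shop, pessimistic case
```

Even before counting morale or contagion effects, the PTO shop loses roughly three to four times as many effective working days per employee under these assumptions.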

Why, if they’re so awful, do companies use them? Because HR departments often justify their existence by externalizing costs elsewhere in the company, and claiming they saved money. So-called “performance improvement plans” (PIPs) are a prime example of this. The purpose of the PIP is not to improve the employee. Saving the employee would require humiliating the manager, and very few people have the courage to break rank like that. Once the PIP is written, the employee’s reputation is ruined, making mobility or promotion impossible. The employee is stuck in a war with his manager (and, possibly, team) that he will almost certainly lose, but he can make others lose along the way. To the company, a four-month severance package is far cheaper than the risk that comes along with having a “walking dead” employee, pissing all over morale and possibly sabotaging the business, in the office for a month. So why do PIPs, which don’t even work for their designed intention (legal risk mitigation) unless designed and implemented by extremely astute legal counsel, remain common? Well, PIPs are a loss to the company, even compared to “gold-plated” severance plans. We’ve established that. But they allow the HR department to claim that it “saved money” on severance payments (a relatively small operational cost, except when top executives are involved) while the costs are externalized to the manager and team that must deal with a now-toxic (and if already toxic before the PIP, now overtly destructive) employee. PTO policies work the same way. The office becomes lethargic, miserable, and sickly, but HR can point to the few hundred dollars saved on vacation payouts and call it a win.

On that, it’s worth noting that these pooled-leave policies aren’t actually about sick employees. People between the ages of 25 and 50 don’t get sick that often, and companies don’t care about that small loss. However, their children, and their parents, are more likely to get sick. PTO policies aren’t put in place to punish young people for getting colds. They’re there to deter people with kids, people with chronic health problems, and people with sick parents from taking the job. Like open-plan offices and the anxiety-inducing micromanagement often given the name of “Agile”, it’s back-door age and disability discrimination. The company that institutes a PTO policy doesn’t care about a stray cold; but it doesn’t want to hire someone with a special-needs child. Even if the latter is an absolute rock star, the HR department can justify itself by saying it helped the company dodge a bullet.

Let’s talk about cost cutting more generally, because I’m smarter than 99.99% of the fuckers who run companies in this world and I have something important to say.

Companies don’t fail because they spend too much money. “It ran out of money” is the proximate cause, not the ultimate one. Some fail when they cease to excel and inspire (but others continue beyond that point). Some fail, when they are small, because of bad luck. Mostly, though, they fail because of complexity: rules that don’t make sense and block useful work from being done, power relationships that turn toxic and, yes, recurring commitments and expenses that can’t be afforded (and must be cut). Cutting complexity rather than cost should be the end goal, however. I like to live with few possessions not because I can’t afford to spend the money (I can) but because I don’t want to deal with the complexity that they will inject into my life. It’s the same with business. Uncontrolled complexity will cause uncontrolled costs and ultimately bring about a company’s demise. What does this mean about cutting costs, which MBAs love to do? Sometimes it’s great to cut costs. Who doesn’t like cutting “waste”? The problem there is that there actually isn’t much obvious waste to be cut, so after that, one has to focus and decide on which elements of complexity are unneeded, with the understanding that, yes, some people will be hurt and upset. Do we need to compete in 25 businesses, when we’re only viable in two? This will also cut costs (and, sadly, often jobs).

The problem, see, is that most of the corporate penny-shaving increases complexity. A few dollars are saved, but at the cost of irritation and lethargy and confusion. People waste time working around new rules intended to save trivial amounts of money. The worst is when a company cuts staff but refuses to reduce its internal complexity. This requires a smaller team to do more work– often, unfamiliar work that they’re not especially good at or keen on doing; people were well-matched to tasks before the shuffle, but that balance has gone away. The career incoherencies and personality conflicts that emerge are… one form of complexity.

The problem is that most corporate executives are “seagull bosses” (swoop, poop, and fly away) who see their companies and jobs in a simple way: cut costs. (Increasing revenue is also a strategy, but that’s really hard in comparison.) A year later, the company is still failing not because it failed to cut enough costs or people, but because it never did anything about the junk complexity that was destroying it in the first place.

Let’s talk about layoffs. The growth of complexity is often exponential, and firms inevitably get to a place where they are too complex (a symptom of which is that operations are too expensive) to survive. The result is that they need to lay people off. Now, layoffs suck. They really fucking do. But there’s a right way and a wrong way to execute one. To do a layoff right, the company needs to cut complexity and cut people. (Otherwise, it will have more complexity per capita, the best people will get fed up and leave, and the death spiral begins.) It also needs to cut the right complexity: all the stuff that isn’t useful.

Ideally, the cutting of people and cutting of complexity would be tied together. Unnecessary business units being cut usually means that people staffed on them are the ones let go. The problem is that that’s not very fair, because it means that good people, who just happened to be in the wrong place, will lose their jobs. (I’d argue that one should solve this by offering generous severance, but we already know why that isn’t a popular option, though it should be.) The result is that when people see their business area coming into question, they get political. Of course this software company needs a basket-weaving division! In-fighting begins. Tempers flare. From the top, the water gets very muddy and it’s impossible to see what the company really looks like, because everyone’s feeding biased information to the executives. (I’m assuming that the executive who must implement the cuts is acting in good faith, which is not always true.) What this means is that the crucial decision– what business complexity are we going to do without?— can’t be subject to a discussion. Debate won’t work. It will just get word out that job cuts are coming, and political behavior will result. The horrible, iron fact is that this calls for temporary autocracy. The leader must make that call in one fell swoop. No second guessing, no looking back. This is the change we need to make in order to survive. Good people will be let go, and it really sucks. However, seeing as it’s impossible to execute a large-scale layoff without getting rid of some good people, I think the adult thing to do is write generous severance packages.

Cutting complexity is hard. It requires a lot of thought. Given that the information must be gathered by the chief executive without tipping anyone off, and that complex organisms are (by definition) hard to factor, it’s really hard to get the cuts right. Since the decision must be made on imperfect information, it’s a given that it usually won’t be the optimal cut. It just has to be good enough (that is, removing enough complexity with minimal harm to revenue or operations) that the company is in better health.

Cutting people, on the other hand, is much easier. You just tell them that they don’t have jobs anymore. Some don’t deserve it, some cry, some sue, and some blog about it but, on the whole, it’s not actually the hard part of the job. This provides, as an appealing but destructive option, the lazy layoff. In a lazy layoff, the business cuts people but doesn’t cut complexity. It just expects more work from everyone. All departments lose a few people! All “survivors” now have to do the work of their fallen brethren! The too-much-complexity problem, the issue that got us to the layoff in the first place… will figure itself out. (It never does.)

Stack ranking is a magical, horrible solution to the problem. What if one could do a lazy layoff but always cull the “worst” people? After all, some people are of negative value, especially considering the complexity load (in personality conflicts, shoddy work) they induce. The miracle of stack ranking is that it turns a layoff– otherwise, a hard decision guaranteed to put some good people out of work– into an SQL query. SELECT name FROM Employee WHERE perf <= 3.2. Since the soothsaying of stack ranking has already declared the people let go to be bottom-X-percent performers, there’s no remorse in culling them. They were “dead weight”. Over time, stack ranking evolves into a rolling lazy layoff that recurs period after period (“rank-and-yank”).
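That query can be made literal. Here is a sketch of the cull as the mechanical filter it is; the names, scores, and annotations are all invented for illustration:

```python
# The layoff-as-SQL-query idea, sketched literally. All names and "perf"
# scores are hypothetical; the point is that the cull is a mechanical
# filter over a noisy number, not a judgment about actual contribution.

employees = [
    {"name": "Alice", "perf": 3.9},
    {"name": "Bob",   "perf": 3.1},  # good-faith engineer who drew a bad project
    {"name": "Carol", "perf": 4.1},  # adept political player; never gets caught
    {"name": "Dave",  "perf": 2.8},  # genuine problem employee
]

# SELECT name FROM Employee WHERE perf <= 3.2
culled = [e["name"] for e in employees if e["perf"] <= 3.2]
print(culled)  # ['Bob', 'Dave'] -- one real problem, one unlucky good-faith hire
```

Nothing in the filter distinguishes Bob from Dave, and the people the essay later describes as skilled at evading detection (Carol, here) sit comfortably above the cutoff.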

It’s also dishonest. There are an ungodly number of large technology companies (over 1,000) that claim to have “never had a layoff”. That just isn’t fucking true. Even if the CEO was Jesus Christ himself, he’d have to lay people off because that’s just how business works. Tech-company sleazes just refuse to use the word “layoff”, for fear of losing their “always expanding, always looking for the best talent!” image. So they call it a “low performer initiative” (stack ranking, PIPs, eventual firings). What a “low-performer initiative” (or stack ranking, which is a chronic LPI) inevitably devolves into is a witch hunt that turns the organization into pure House of Cards politics. Yes, most companies have about 10 percent who are incompetent or toxic or terminally mediocre and should be sent out the door. Figuring out which 10 percent those people are is not easy. People who are truly toxic generally have several years’ worth of experience drawing a salary without doing anything, and that’s a skill that improves with time. They’re really good at sucking (and not getting caught). They’re adept political players. They’ve had to be; the alternative would have been to grow a work ethic. Most of what we as humans define as social acceptability is our ethical immune system, which can catch and punish the small-fry offenders but can’t do a thing about the cancer cells (psychopaths, parasites) that have evolved to the point of being able to evade or even redirect that rejection impulse. The question of how to get that toxic 10 percent out is an unsolved one, and I don’t have space to tackle it now, but the answer is definitely not stack ranking, which will always clobber several unlucky good-faith employees for every genuine problem employee it roots out.

Moreover, stack ranking has negative permanent effects. Even when not tied to a hard firing percentage, its major business purpose is still to identify the bottom X percent, should a lazy layoff be needed. It’s a reasonable bet that unless things really go to shit, X will be 5 or 10 or maybe 20– but not 50. So stack ranking is really about the bottom. The difference between the 25th percentile and 95th percentile, in stack ranking, really shouldn’t matter. Don’t get me wrong: a 95th-percentile worker is often highly valuable and should be rewarded. I just don’t have any faith in the ability of stack ranking to detect her, just as I know some incredibly smart people who got mediocre SAT scores. Stack ranking is all about putting people at the bottom, not the top. (Top performers don’t need it and don’t get anything from it.)

The danger of garbage data (and, #YesAllData generated by stack ranking is garbage) is that people tend to use it as if it were truth. The 25th-percentile employee isn’t bad enough to get fired… but no one will take him for a transfer, because the “objective” record says he’s a slacker. The result of this– in conjunction with closed allocation, which is already a bad starting point– is permanent internal immobility. People with mediocre reviews can’t transfer because the manager of the target team would prefer a new hire (with no political strings attached) over a sub-50th-percentile internal. People with great reviews don’t transfer for fear of upsetting the gravy train of bonuses, promotions, and managerial favoritism. Team assignments become permanent, and people divide into warring tribes instead of collaborating. This total immobility also makes it impossible to do a layoff the right way (cutting complexity) because people develop extreme attachments to projects and policies that, if they were mobile and therefore disinterested, they’d realize ought to be cut. It becomes politically intractable to do the right thing, or even for the CEO to figure out what the right thing is. I’d argue, in fact, that performance reviews shouldn’t be part of a transfer packet at all. The added use of questionable, politically-laced “information” is just not worth the toxicity of putting that into policy.

A company with a warring-departments dynamic might seem like a streamlined, efficient, and (most importantly) less complex company. It doesn’t have the promiscuous social graph you might expect to see in an open allocation company. People know where they are, who they report to, and who their friends and enemies are. The problem with this insight is that there’s hot complexity and cold complexity. Cold complexity is passive and occasionally annoying, like a law from 1890 that doesn’t make sense and is effectively never enforced. When people collaborate “too much” and the social graph of the company seems to have “too many” edges, there’s some cold complexity there. It’s generally not harmful. Open allocation tends to generate some cold complexity. Rather than metastasize into an existential threat to the company, it will fade out of existence over time. Hot complexity, which usually occurs in an adversarial context, is the kind that generates more complexity. Its high temperature means there will be more entropy in the system. Example: a conflict (heat) emerges. That, alone, makes the social graph more complex because there are more edges of negativity. Systems and rules are put in place to try to resolve it, but those tend to have two effects. First, they bring more people (those who had no role in the initial conflict, but are affected by the rules) into the fights. Second, the conflicting needs or desires of the adversarial parties are rarely addressed, so both sides just game the new system, which creates more complexity (and more rules). Negativity and internal competition create the hot complexity that can ruin a company more quickly than an executive (even if acting with the best intentions) can address it.

Finally, one thing worth noting is the Welch Effect (named for Jack Welch, who popularized stack ranking at GE). It’s one of my favorite topics because it has actually affected me. The Welch Effect pertains to the fact that when a broad-based layoff occurs, the people most likely to be let go aren’t the worst (or best) performers, but the newest members of macroscopically underperforming teams. Layoffs (and stack ranking) generally propagate down the hierarchy. Upper management disburses bonuses, raises, and layoff quotas based on the macroscopic performance of the departments under it, and at each level, the node operators (managers) slice the numbers based on how well they think each suborganization did (plus or minus various political modifiers). At the middle-management layer, one level separated from the non-managerial “leaves”, it’s the worst-performing teams that have to vote the most people off the island. It tends to be those most recently hired who get the axe. This isn’t especially unfair or wrong, for that middle manager; there’s often no better way to do it than to strike the least-embedded, least-invested junior hire.
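The propagation can be sketched concretely. Here’s a minimal, hypothetical simulation (the team counts, the quota, and the tenure tiebreak are all my assumptions, not a description of any real company): quotas land on the macroscopically worst teams, each manager cuts the least-tenured member, and the ability of those cut is drawn independently of either.

```python
import random

random.seed(3)

# Hypothetical model of the Welch Effect: macro team performance is
# mostly luck (or the manager), and individual ability is independent of it.
teams = []
for t in range(10):
    members = [{"tenure_years": random.uniform(0, 10),
                "ability": random.gauss(0, 1)} for _ in range(8)]
    teams.append({"macro_perf": random.gauss(0, 1), "members": members})

# The layoff quota propagates down: the three worst teams by macro
# performance must each cut one person, and each manager cuts the
# least-embedded (newest) member.
teams.sort(key=lambda tm: tm["macro_perf"])
laid_off = [min(tm["members"], key=lambda m: m["tenure_years"])
            for tm in teams[:3]]

avg_ability = sum(m["ability"] for m in laid_off) / len(laid_off)
print(f"average ability of those cut: {avg_ability:+.2f} "
      "(population mean is 0)")
```

Because ability is independent of both team luck and tenure in this sketch, who gets cut says nothing about who was weak; the outcome resembles a seniority-based layoff localized to unlucky teams.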

The end result of the Welch Effect, however, is that the people let go are often those who had the least to do with their team’s underperformance. (It may be a weak team, it may be a good team with a bad manager, or it may be an unlucky team.) They weren’t even there for very long! It doesn’t cause the firm to lay off good people, but it doesn’t help it lay off bad people either. It has roughly the same effect as a purely seniority-based layoff, for the company as a whole. Random new joiners are the ones who are shown out the door. It’s bad to lose them, but it rarely costs the company critical personnel. Its effect on that team is more visibly negative: teams that lose a lot of people during layoffs get a public stink about them, and people lose interest in joining or even helping them– who wants to work for, or even assist, a manager who can’t protect his people?– so the underperforming team becomes even more underperforming. There are also morale issues with the Welch Effect. When people who recently joined lose their jobs (especially if they’re fired “for performance” without a severance) it makes the company seem unfair, random, and capricious. The ones let go were the ones who never had the chance to prove themselves. In a one-off layoff, this isn’t so destructive. The Welch Effected usually move on to better jobs anyway. However, when a company lays off in many small cuts, or disguises a layoff as a “low-performer initiative”, the Welch Effect firings demolish belief in meritocracy.

That, right there, explains why I get so much flak over how I left Google. Technically, I wasn’t fired. But I had a disliked, underdelivering manager who couldn’t get calibration points for his people (a macroscopic issue that I had nothing to do with) and I was the newest on the team, so I got a bad score (despite being promised a reasonable one– a respectable 3.4, if it matters– by that manager). Classic Welch Effect. I left. After I was gone I “leaked” the existence of stack ranking within Google. I wasn’t the first to mention that it existed there, but I publicized it enough to become the (unintentional) slayer of Google Exceptionalism and, to a number of people I’ve never met and to whom I’ve never done any wrong, Public Enemy #1. I was a prominent (and, after things went bad, fairly obnoxious) Welch Effectee, and my willingness to share my experience changed Google’s image forever. It’s not a disliked company (nor should it be) but its exceptionalism is gone. Should I have done all that? Probably not. Is Google a horrible company? No. It’s above average for the software industry (which is not an endorsement, but not damnation either.) Also, my experiences are three years old at this point, so don’t take them too seriously. As of November 2011, Google had stack ranking and closed allocation. It may have abolished those practices and, if it has, then I’d strongly recommend it as a place to work. It has some brilliant people and I respect them immensely.

In an ideal world, there would be no layoffs or contractions. In the real world, layoffs have to happen, and it’s best to do them honestly (i.e. don’t shit on departing employees’ reputations by calling it a “low performer initiative”). As with more minor forms of cost-cutting (e.g. new policies encouraging frugality), a layoff can only do good if the complexity that caused the organization’s underperformance is reduced as well. That is the only kind of corporate change that can reverse underperformance: complexity reduction.

If complexity reduction is the only way out, then why is it so rare? Why do companies so willingly create personnel and regulatory complexity just to shave pennies off their expenses? I’m going to draw from my (very novice) Buddhist understanding to answer this one. When the clutter is cleared away… what is left? Phrases used to define it (“sky-like nature of the mind”) only explain it well to people who’ve experienced it. Just trust me that there is a state of consciousness that can be attained when gross thoughts are swept away, leaving something more pure and primal. Its clarity can be terrifying, especially the first time it is experienced. I really exist. I’m not just a cloud of emotions and thoughts and meat. (I won’t get into death and reincarnation and nirvana here. That goes farther than I need, for now. Qualia, or existence itself, as opposed to my body hosting some sort of philosophical zombie, is both miraculous and the only miracle I actually believe in.) Clarity. Essence. Those are the things you risk encountering with simplicity. That’s a good thing, but it’s scary. There is a weird, paradoxical thing called “relaxation-induced anxiety” that can pop up here. I’ve fought it (and had some nasty motherfuckers of panic attacks) and won and I’m better for my battles, but none of this is easy.

So much of what keeps people mired in their obsessions and addictions and petty contests is an aversion to confronting what they really are, a journey that might harrow them into excellence. I am actually going to age and die. Death can happen at any time, and almost certainly it will feel “too soon”. I have to do something, now, that really fucking matters. This minute counts, because I may not get another in this life. People are actually addicted to their petty anxieties that distract them from the deeper but simpler questions. If you remove all the clutter on the worktable, you have to actually look at the table itself, and you have to confront the ambitions that impelled you to buy it, the projects you imagined yourself using it for (but that you never got around to). This, for many people, is really fucking hard. It’s emotionally difficult to look at the table and confront what one didn’t achieve, and it’s so much easier to just leave the clutter around (and to blame it).

Successful simplicity leads to, “What now?” The workbench is clear; what are we going to do with it? For an organization, such simplicity risks forcing it to contend with the matter of its purpose, and the question of whether it is excelling (and, relatedly, whether it should). That’s a hard thing to do for one person. It’s astronomically more difficult for a group of people with opposing interests, and among whom excellence is sure to be a dirty word (there are always powerful people who prefer rent-seeking complacency). It’s not surprising, then, that most corporate executives say “fuck it” on the excellence question and, instead, decide it suffices to earn their keep to squeeze employees with mindless cost-cutting policies: pooled sick leave and vacation, “employee contributions” on health plans, and other hot messes that just ruin everything. It feels like something is getting done, though. Useless complexity is, in that way, existentially anxiolytic and addictive. That’s why it’s so hard to kill. But it, if allowed to live, will kill. It can enervate a person into decoherence and inaction, and it will reduce a company to a pile of legacy complexity generated by self-serving agents (mostly, executives). Then it falls under the MacLeod-Gervais-Rao-Church theory of the nihilistic corporation; the political whirlpool that remains once an organization has lost its purpose for existing.

At 4528 words, I’ve said enough.

Corporate atheism

Legally and formally, a corporation is a person, with the same rights (life, liberty, and property) that a human is accorded. Whether this is good is hotly debated.

In theory, I like the corporate veil (protection of personal property, reputation, and liberty in cases of good-faith business failure and bankruptcy) but I don’t see it playing well in practice. If you need $400,000 in bank loans to start your restaurant, you’ll still be expected to take on personal liability, or you won’t get the loan. I don’t see corporate personhood doing what it’s supposed to for the little guys. It seems to work only for those with the most powerful lobbyists. (Prepare for rage.) Health insurance companies cannot be sued, not even for the amount of the claim, if their denial of coverage causes financial hardship, injury, or death. (If a health-insurance executive is sitting next to you, I give you permission to beat the shit out of him.) On the other hand, a restaurant proprietor or software freelancer who makes mistakes on his taxes can get seriously fucked up by the IRS. I’m a huge fan of protecting genuine entrepreneurs from the consequences of good-faith failure. As for cases of bad-faith failure among corrupt, social-climbing, private-sector bureaucrats populating Corporate America’s upper ranks, well… not as much. Unfortunately, the corporate veil in practice seems to protect the rich and well-connected from the consequences of some enormous crimes (environmental degradation, human rights violations abroad, etc.) I can’t stand for that.

As for the corporation itself, it’s clearly not a person like you or me. It can’t be imprisoned. It can be fined heavily (loss of status and belief) but not executed. It has immense power, if for no other reason than its reputation among “real” physical people, but no empirically discernible will, so we must trust its representatives (“executives”) to know it. It tends to operate in ways that are outside of mortal humans’ moral limitations, because it is nearly immune from punishment, and a great deal of its bad behavior will be justified. The worst that can happen to it is gradual erosion of status and reputation. A mere mortal who behaved as it does would be called a psychopath, but it somehow enjoys a high degree of moral credibility in spite of its actions. (Might makes right.) Is that a person, a flesh-and-blood human? Nope. That’s a god. Corporations don’t die because they “run out of money”. They die because people stop believing in them. (Financial capitalism accelerates the disbelief process, one that used to require centuries.) Their products and services are no longer valued on divine reputation, and their ability to finance operations fails. It takes a lot of bad behavior for most humans to dare disbelieve in a trusted god. Zeus was a rapist, and the literal Yahweh genocidal, and they still enjoyed belief for thousands of years.

“God” is a loaded word, because some people will think I’m talking about their concept of a god. (This diversion isn’t useful here, but I’m actually not an atheist.) I have no issue with the philosophical concept of a supreme being. I’m actually talking about the historical artifacts, such as Zeus or Ra or Odin or (I won’t pick sides) the ones believed in today. I do have an issue with those, because their political effects on real, physical humans can be devastating. It’s not controversial in 2014 that most of these gods don’t exist– and it probably won’t be controversial in 5014 that the literal Jehovah/Allah doesn’t exist– but people believed in them at a time, and no longer do. When they were believed to be real, they (really, their human mouthpieces) could be more powerful than kings.

The MacLeod model of the organization separates it into three tiers. The Losers (checked-out grunts) are the half-hearted believers who might suspect that their chosen god doesn’t exist, but would never say it at the dinner table. The Clueless (unconditional overperformers who lack strategic vision and are destined for middle-management) are the zealots destined for the low priesthood, who clean the temple bathrooms. Not only do they believe, but they’re the ones who work to make blind faith look like a virtue. At the top are the Sociopaths (executives) who often aren’t believers themselves, but who enjoy the prosperity and comfort of being at the highest levels of the priesthood and, unless their corruption becomes obnoxiously obvious, being trusted to speak for the gods. The fact that this nonexistent being never punishes them for acting badly means there is virtually no check on their increasing abuse of “its” power.

Ever since humans began inventing gods, others have not believed in them. Atheism isn’t a new belief we can pin on (non-atheistic scientist) Charles Darwin. Many of the great Greek philosophers were atheists, to start. Buddha was, arguably, an atheist and Buddhism is theologically agnostic to this day. Socrates may not have thought himself an atheist, but one of his major “crimes” was disbelief in the literal Greek gods. In truth, I would bet that the second human ever to speak on anthropomorphic, supernatural beings said, “You’re full of shit, Asshole”. (Those may, however, have been his last words.) There’s nothing to be ashamed of in disbelief. Many of the high priests (MacLeod Sociopaths) are, themselves, non-believers!

I’m a corporate atheist and a humanist. My stance is radical. To hear most people tell it, these gods (and not the people doing all the damn work) are the engines of progress and innovation. People who are not baptized and blessed by them (employment, promotion, good references) are judged to be filthy, and “natural” humans deserve shame (original sin). If you don’t have their titles and accolades, your reputation is shit and you are disenfranchised from the economy. This enables them to act as extortionists, just as religious authorities could extract tribute not because those supernatural beings existed (they never did) but because they possessed the political and social ability to banish and to justify violence.

I’m sorry, but I don’t fucking agree with any of that.

Look-ahead: a likely explanation for female disinterest in VC-funded startups.

There’s been quite a bit of cyber-ink flowing on the question of why so few women are in the software industry, especially at the top, and especially in VC-funded startups. Paul Graham’s comments on the matter, being taken out of context by The Information, made him a lightning rod. There’s a lot of anger and passion around this topic, and I’m not going to do all of it justice. Why are there almost no female venture capitalists, so few women being funded, and so few women rising in technology companies? It’s almost certainly not a lack of ability. Philip Greenspun argues that women avoid academia because it’s a crappy career. He makes a lot of strong points, and that essay is very much worth reading, even if I think a major factor (discussed here) is underexplored.

Why wouldn’t this fact (of academia being a crap career) also make men avoid it? If it’s shitty, isn’t it going to be avoided by everyone? Often cited is a gendered proclivity toward risk. People who take unhealthy and outlandish risks (such as by becoming drug dealers) tend to be men. So do overconfident people who assume they’ll end up on top of a vicious winner-take-all game. The outliers on both ends of society tend to be male. As a career with subjective upsides (prestige in addition to a middle-class salary) and severe, objective downsides, academia appeals to a certain type of quixotic, clueless male. Yet making bad decisions is hardly a trait of one gender. Also, we don’t see merely 1.5 or 2 times as many high-powered (IQ 140+) men making bad career decisions. We probably see 10 times as many doing so; Silicon Valley is full of quixotic young men wasting their talents to make venture capitalists rich, and almost no women, and I don’t think that difference can be explained by biology alone.

I’m going to argue that a major component of this is not a biological trait of men or women, but an emergent property from the tendency, in heterosexual relationships, for the men to be older. I call this the “Look-Ahead Effect”. Heterosexual women, through the men they date, see doctors buying houses at 30 and software engineers living paycheck-to-paycheck at the same age. Women face a number of disadvantages in the career game, but they have access to a kind of high-quality information that prevents them from making bad career decisions. Men, on the other hand, tend to date younger women covering territory they’ve already seen.

When I was in a PhD program (for one year) I noticed that (a) women dropped out at higher rates than men, and (b) dropping out (for men and women) had no visible correlation with ability. One more interesting fact pertained to the women who stayed in graduate school: they tended either to date (and marry) younger men, or same-age men within the department. Academic graduate school is special in this analysis. When women don’t have as much access to later-in-age data (because they’re in college, and not meeting many men older than 22) a larger number of them choose the first career step: a PhD program. But the first year of graduate school opens their dating pool up again to include men 3-5 years older than them (through graduate school and increasing contact with “the real world”). Once women start seeing first-hand what the academic career does to the men they date– how it destroys the confidence even of the highly intelligent ones who are supposed to find a natural home there– most of them get the hell out.

Men figure this stuff out, but a lot later, and usually at a time when they’ve lost a lot of choices due to age. The most prestigious full-time graduate programs won’t accept someone near or past 30, and the rest don’t do enough for one’s career to offset the opportunity cost. Women, on the other hand, get to see (through the guys they date) a longitudinal survey of the career landscape when they can still make changes.

I think it’s obvious how this applies to all the goofy, VC-funded startups in the Valley. Having a 5-year look ahead, women tend to realize that it’s a losing game for most people who play, and avoid it like the plague. I can’t blame them in the least. If I’d had the benefit of 5-year look-ahead, I wouldn’t have spent the time I did in VC-istan startups either. I did most of that stuff because I had no foresight, no ability to look into the future and see that the promise was false and the road led nowhere. If I had retained any interest in VC-istan at that age (and, really, I don’t at this point) I would have become a VC while I was young enough that I still could. That’s the only job in VC-istan that makes sense.

The U.S. conservative movement is a failed eugenics project. Here’s why it could never have worked.

At the heart of the U.S. conservative movement, and most religious conservative movements, is a reproductive agenda. Old-style religious meddling in reproduction had a strong “make more of us” character to it– resulting in blanket policies designed to encourage reproduction across a society– but the later incarnations of right-wing authoritarianism, especially as they have mostly divorced themselves from religion, have been oriented more strongly toward goals judged to be eugenic, or to favor the reproduction of desirable individuals and genes; instead of a broad-based “make more of us” tribalism, it becomes an attempt to control the selection process.

The term eugenics has an ugly reputation, much of it earned through history, but let me offer a neutral definition of the term. Eugenics (“good genes”) is the idea that we should consciously control the genetic component of what humans are born into the world. It is not a science, since the definition of eu- is intensely subjective. As “eugenics” has been used throughout history to justify blatant racism and murder, the very concept has a negative reputation. That said, strong arguments can be made in favor of certain mild, elective forms of eugenics. For example, subsidized or free higher education is (although there are other intents behind it) a socially acceptable positive eugenic program: the removal of a dysgenic economic force (education costs, usually borne by parents) that, empirically speaking, massively reduces fertility among the most capable people while having no effect on the least capable.

The eugenic impulse is, in truth, fairly common and rather mundane. The moral mainstream seems to agree that eugenics (if not given that stigmatized name) is morally acceptable when participation is voluntary (i.e. no one is forced to reproduce, or not to do so) and positive (i.e. focused on encouraging desirable reproduction, rather than discouraging those deemed “unwanted”) but unacceptable when involuntary (coercive or prohibitive) and negative. The only socially accepted (and often legislated) case of negative and often prohibitive eugenics is the universal taboo against incest. That one has millennia of evolution behind it, and is also fair (i.e. it doesn’t single out people as unwanted, but prohibits intrafamilial couplings, known to produce unhealthy offspring, in general) so it’s somewhat of a special case.

Let’s talk about the specific eugenics of the American right wing. The obsessions over who has sex with whom, the inconsistency between hard-line, literal Christianity and the un-Christ-like rightist economics, and all of the myriad mean-spirited weirdnesses (such as U.S. private health insurance, a monster that even most conservatives loathe at this point) that make up the U.S. right-wing movement– all are tied to a certain eugenic agenda, even if the definition of “eu-” is left intentionally vague. In addition to lingering racism, the American right wing unifies two varieties (one secular, one religious) of the same idea: Social Darwinism and predestination-centric Calvinism. This amalgam I would call Social Calvinism. The problem with it is that it doesn’t make any sense. It fails on its own terms, and the religious color it allowed itself to gain has only deepened its self-contradiction, especially now that sexuality and reproduction have been largely separated by birth control.

In the West, religion has always held strong opinions on reproduction, because the dominant religious forces are those that were able to out-populate the others. “Be fruitful and multiply.” This “us versus them” dynamic had a certain positive (in the sense of “positive eugenics”; I don’t mean to call it “good”) but coercive flair to it. The religious society sought much more strongly to increase its numbers within the world than to differentially or absolutely discourage reproduction by individuals judged as undesirable within its numbers. That said, it still had some ugly manifestations. One prominent one is the traditional Abrahamic religions’ intolerance of homosexuality and non-reproductive sex in general. In modern times, homophobia is pure ignorant bigotry, but its original (if subconscious) intention was to make a religious society populate quickly, which put it at odds with nonreproductive sexuality of all forms.

Predestination (for which Calvinism is known) is a concept that emerged, much later, when people did something very dangerous to literalist religion: they thought about it. If you take religious literalism– born in the illogical chaos of antiquity– and bring it to its logical conclusions, funny things happen. An all-knowing and all-powerful God would, one can reason, have full knowledge and authority over every soul’s final destiny (heaven or hell). This meant that some people were pre-selected to be spiritual winners (the Elect) and the rest were refuse, born only to live through about seven decades of sin, followed by an eternity of unimaginable torture.

Perhaps surprisingly, predestination seemed to have more motivational capacity than the older, behavior-driven morality of Catholicism. Why would this be? People are loath to believe in something as horrible as eternal damnation for themselves (even if some enjoy the thought for others) and so they will assume themselves to be Elect. But since they’re never quite sure, bad behavior will unsettle them with a creepy cognitive dissonance that is far more effective than ratiocination about punishments and rewards. The behavior-driven framework of the Catholic Church (donations in the form of indulgences often came with specific numbers of years by which time in purgatory was reduced) allows that a bad action can be cancelled out with future good actions, making the afterlife merely an extension of the “if I do this, then I get that” hedonic calculus. Calvinism introduced a fear of shame. Bad actions might be a sign of being one of those incorrigibly bad, damned people.

Calvinist predestination was not a successful meme (and even many of those who identify themselves in modern times as Calvinists have largely rejected it). “Our God is a sadistic asshole; he tortures people eternally for being born the wrong way” is not a selling point for any religion. That said, the idea of natural (as opposed to spiritual) predestination, as well as the Calvinist evolution from guilt-based (Catholicism) to shame-based (Calvinist) Christian morality, have lived on in American society.

Fundamental to the morality of capitalism is that some actors make better uses of resources than others (which is not controversial) and deserve to have more (likewise, not controversial). Applied to humans, this is generally if uneasily accepted; applied to organizations, it’s an obvious truth (no one wants to see the subsistence of inefficient, pointless institutions). Calvinism argued that one’s pre-determined status (as Elect or damned) could be ascertained from one’s actions; conservative capitalism argues that an actor’s (largely innate and naturally pre-determined) value can be ascertained by its success on the market.

Social Darwinism (which Charles Darwin vehemently rejected) gave a fully secular and scientific-sounding basis for these threads of thought, which were losing religious steam by the end of the 19th century. The idea that market mechanics and “creative destruction” ought to apply to institutions, patterns of behavior, and especially business organizations is controversial to almost no one. Incapable and obsolete organizations, whose upkeep costs have exceeded their social value, should die in order to free up room for newer ones. Where there is immense controversy is what should happen to people when they fail, economically. Should they starve to death in the streets? Should they be fed and clothed, but denied health care, as in the U.S.? Or should they be permitted a lower-middle-class existence by a welfare state, allowing them to recover and perhaps have another shot at economic success? The Social Darwinist seeks not to kill failed individuals per se, but to minimize their effect on society. It might be better to feed them than have them rebel, but allowing their medical treatment (on the public dime) is a bridge too far (if they’re sick, they can’t take up arms). It’s not about sadism, but about effect minimization: to end their cultural and economic (and possibly physical) reproduction. It is a cold and fundamentally statist worldview. Where it dovetails with predestination is in the idea that certain innately undesirable people, damned early on if not from birth, deserve to be met with full effect minimization (e.g. long prison sentences, since there is no hope of rehabilitation; persistent poverty, because any resources given to them will be wasted) because any effect they have on the world will be negative. Whether they are killed, imprisoned, enslaved, or merely marginalized generally comes down to what is most convenient– and, therefore, effect-minimizing– and that is an artifact of what a society considers socially acceptable.

If we understand Calvinist predestination, and Social Darwinism as well, we can start to see a eugenic plan forming. Throughout almost all of our evolutionary history, prosperity and fecundity were correlated. Animals that won and controlled resources passed along their genes; those that couldn’t do so died out. Social Darwinism, at the heart of the American conservative movement, holds that this process should continue in human society. More specifically, it rests on a few core tenets. First is that individual success in the market is a sign of innate personal merit. Second is that such merit is, at least partly, genetic and predetermined. Few would hold this correlation to be absolute, but the Social Darwinist considers it strong enough to act on. Third is that prosperity and fertility will, as they have over the billion years before modern civilization, necessarily correlate. The aspects of Social Darwinist policy that seem mean-spirited are justified by this third tenet: the main threat that a welfare state poses is that these poor (and, according to this theory, undesirable) people will take that money and breed. South Carolina’s Republican Lieutenant Governor, Andre Bauer, made this attitude explicit:

My grandmother was not a highly educated woman, but she told me as a small child to quit feeding stray animals. You know why? Because they breed. You’re facilitating the problem if you give an animal or a person ample food supply. They will reproduce, especially ones that don’t think too much further than that. And so what you’ve got to do is you’ve got to curtail that type of behavior. They don’t know any better.

The hydra of the American right wing has many heads. It’s got the religious Bible-thumping ones, the overtly racist ones, and the pseudoscientific and generally atheistic ones now coming out of Silicon Valley’s neckbeard right-libertarianism and the worse half of the “men’s rights” movement. What unites them is a commitment to the idea that some people are innately inferior and should be punished by society, with that punishment ranging from the outright sadistic to the much more common effect-minimizing (marginalization) levels.

How it falls down

Social Calvinism is a repugnant ideology. Calvinistic predestination is an idea so bad that even conservative religion, for the most part, discarded it. The same scientists who discovered Darwinian evolution (as a truth of what is in nature, not of what should be in the human world) rejected Social Darwinism outright. It has also made a mockery of itself. It fails on its own terms. The most politically visible, mean-spirited, but also criminally inefficient manifestation of this psychotic ideology is in our health insurance system. Upper-middle-class, highly-educated people suffer– just as much as the poor do– from crappy health coverage. If the prescriptive intent behind a mean-spirited health policy is Social Calvinist in nature, the greed and inefficiency and mind-blowing stupidity of it affect the “undesirable” and “desirable” alike (unless one believes that only the 0.005% of the world population who can afford to self-insure are “desirable”). The healthcare fiasco is showing that a society as firmly committed to Social Calvinism as the U.S.– so committed to it that even Obama couldn’t make public-option (much less single-payer) healthcare a reality– can’t even succeed on its own terms. The economic malaise of the 2000s “lost decade” and the various morale crises erupting in the nation (Tea Party, #Occupy) only support the idea that the American social model fails both on libertarian and humanitarian terms.

Why do I argue that Social Calvinism could never work, in a civilized society? To put it plainly, it misunderstands evolution and, more to the point, reproduction (both biological and cultural). Nature’s correlation between prosperity and fecundity ended in the human world a long time ago, and economic stresses have undesirable side effects (which I’ll cover) on how people reproduce.

Let’s talk about biology; most of the ideas here also apply (and more strongly, due to the faster rate of memetic proliferation) to cultural reproduction. After the horrors justified in the name of “eugenics” in the mid-20th century, no civilized society is going to start prohibiting reproduction. It’s not quite a “universal right”, but depriving people of the biological equipment necessary to reproduce is considered inhumane, and murdering children after the fact is (quite rightly) completely unacceptable. So people can reproduce, effectively, as much as they want. With birth control in the mix, most people can also reproduce as little as they want. They have, then, nearly total control over how much they reproduce, whether they are poor or rich. The Social Calvinist believes that the “undesirables” will react to socioeconomic punishment by curtailing reproduction. But do we see that happening? No, not really.

I mentioned Social Calvinism’s three core tenets above: (1) that socioeconomic prosperity correlates to personal merit, (2) that merit is at least significantly genetic in nature, and (3) that people will respond to prosperity by increasing reproduction (as if children were a “normal” consumer good) and to punishment by decreasing it. The first of these is highly debatable: desirable traits like intelligence, creativity and empathy may lead to personal success, but so does a lack of moral restraint. The people at the very top of society seem to be, for the most part, objectively undesirable– at least in terms of their behavior (whether those negative traits are biological is less clear). The second is perhaps unpleasant as a fact (no humanitarian likes the idea that what makes a “good” or “bad” person is partially genetic) but almost certainly true. The third seems to fail us. Or, let me take a more nuanced view of it: do people respond to economic impulses by controlling reproduction? Of course they do, but not in the way one might think.

First, let’s talk about economic stress. Stress can be good (“eustress”) or bad (“distress”) but in large doses, even the good kind can be focus-narrowing, if not hypomanic or even outright toxic. Rather than focusing on objective hardship or plenty, I want to examine the subjective sense of unhappiness with one’s socioeconomic position, which will determine how much stress a person experiences and which kind it is. Likewise, economic inequality (by providing incentive for productive activity) can be for the social good– it’s clearly a motivator– but it is a source of stress (using the word without directional judgment). The more socioeconomic inequality there is, the more of this stress society will generate. Proponents of high levels of economic inequality will argue that it serves eustress to the desirable people and institutions and distress to the less effective ones. Yet, if we focus on the subjective matter of whether an individual feels happy or distressed, I’d expect this to be untrue. People, in my observation, tend to feel rich or poor not based on where they are, economically, but on how they measure up to the expectations derived from their natural ability. A person with a 140 IQ who ends up as a subordinate, making a merely average-plus living doing uninteresting work, is judged (and will judge himself) as a failure. Even if that person has the gross resources necessary to reproduce (the baseline level required is quite low) he will be disinclined to do so, believing his economic situation to be poor and the prospects for any progeny to be dismal. On the other hand, a person with a 100 IQ who ends up with the average-plus income (as a leader, not a subordinate, but with the same income and wealth as the 140-IQ person above) will face life with confidence and, if having children is naturally something he wants, be inclined to start a family early, and possibly to have a large one.

What am I really saying here? I think that, while people might believe that meritocracy is a desirable social ideal, most people respond emotionally not to the component of their economic outcome derived from natural (possibly genetic) merit or hard work, but to the random noise term. People have a hard time believing that randomness is just that (hence the amount of money spent on lottery tickets) and interpret this noise term to represent how much “society” likes them. In large part, we’re biologically programmed to be this way; most of us get more of a warm feeling from windfalls that come from people liking us than from those derived from natural merit or hard work. However, modern society is so complex that this variable can be regarded as pure noise. Why? Because we, as humans, devise social strategies to make us liked by an unknown stream of people and contexts we will meet in the future, but whether the people and contexts we actually encounter (“serendipity”) match those strategies is just as random as the Brownian motion of the stock market. Thus, the subjective sense of socioeconomic eustress or distress that drives the desire to reproduce comes not from personal merit (genetic or otherwise) but from something so random that it will have a correlation of 0.0 with pretty much anything.
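The decomposition this paragraph leans on (an economic outcome split into a merit term and an independent noise, or “serendipity”, term) can be sketched with a short simulation. The Gaussian distributions and equal variances below are illustrative assumptions of mine, not data; the point is only that the residual people emotionally react to is, by construction, uncorrelated with merit, even while outcomes as a whole still track merit.

```python
import random
import statistics

random.seed(0)
n = 100_000

# Illustrative model: outcome = merit + noise, both standard normal.
merit = [random.gauss(0, 1) for _ in range(n)]
noise = [random.gauss(0, 1) for _ in range(n)]    # "serendipity"
outcome = [m + e for m, e in zip(merit, noise)]

def corr(xs, ys):
    # Pearson correlation, computed from scratch.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (statistics.pstdev(xs) * statistics.pstdev(ys))

# The "felt" component: outcome relative to what merit alone predicts.
residual = [o - m for o, m in zip(outcome, merit)]

print(corr(outcome, merit))    # about 0.71: outcomes do track merit
print(corr(residual, merit))   # about 0.0: the felt part does not
```

Under these assumptions, outcomes correlate with merit at about 1/√2 ≈ 0.71, but the residual (the component claimed above to drive subjective satisfaction, and hence family planning) correlates with merit at essentially zero.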

This kills any hope that socioeconomic rewards and punishments might have a eugenic effect, because the part that people respond to on an emotional level (which drives decisions of family planning) is the component uncorrelated to the desired natural traits. There is a way to change that, but it’s barbaric. If society accepted widespread death among the poor– and, in particular, among poor children (many of whom have shown no lack of individual merit; i.e. complete innocents)– then it could recreate a pre-civilized and truly Darwinian state in which absolute prosperity (rather than relative/subjective satisfaction) has a major effect on genetic proliferation.

Now, I’ll go further. I think the evidence is strong that socioeconomic inequality has a second-order but potent dysgenic effect. Even when controlling for socioeconomic status, ethnicity, geography and all the rest, IQ scores seem to be negatively correlated with fertility. Less educated and intelligent people are reproducing more, while the people that humanity should want in its future seem to be holding off, having fewer children and waiting longer (typically, into their late 20s or early 30s) to have them. Why? I have a strong suspicion as to the reason.

Let’s be blunt about it. There are a lot of willfully ignorant, uneducated, and crass people out there, and I can’t imagine them saying, “I’m not going to have a child until I have a steady job with health benefits”. This isn’t necessarily about IQ or physical health; just about thoughtfulness and the ability to show empathy for a person who does not exist yet. Whether rich or poor, desirable people tend to be more thoughtful about their effects on other people than undesirable ones. The effect of socioeconomic stress and volatility will be to reduce the reproductive impulse among the thoughtful, future-oriented sorts of people that we want to have reproducing. It also seems to me that such stresses increase reproduction among the present-oriented, thoughtless sorts of people that we don’t as much want to be highly represented in the future.

I realize that speaking so boldly about eugenics (or dysgenic threats, as I have) is a dangerous (and often socially unacceptable) thing. To make it clear: yes, I worry about dysgenic risk. Now some of the more brazen (and, in some cases, deeply racist) eugenicists freak out about higher rates of fertility in developing (esp. non-white) countries, and I really don’t. Do I care if the people of the future look like me? Absolutely not. But it would be a shame if, 100,000 years from now, they were incapable of thinking like me. I don’t consider it likely that humanity will fall into something like Idiocracy; but I certainly think it is possible. (A more credible threat is that, over a few hundred years, societies with high economic inequality drift, genetically, in an undesirable direction, producing a change that is subtle but enough to have macroscopic effects.)

Why, at a fundamental level, does a harsher and more inequitable (and more stressful) society increase dysgenic risk? Here’s my best explanation. Evolutionary ecology discusses two reproductive pressures in species, r- and K-selection, which correspond to optimizing for quantity versus quality of offspring. The r-strategist has lots of offspring, gives minimal parental investment, and few will survive. An example is a frog giving birth to a hundred tadpoles. The K-strategist invests heavily in a smaller number of high-quality offspring, each with a much higher individual shot at surviving. Whales and elephants are K-strategists, with long gestation periods and few offspring, but a lot of care given to them. Neither is “better” than the other, and each succeeds in different circumstances. The r-strategist tends to repopulate quickest after a catastrophe, while the K-strategist succeeds differentially at saturation.
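The catastrophe-versus-saturation contrast can be illustrated with the textbook logistic-growth model, from which the letters r (intrinsic growth rate) and K (carrying capacity) take their names. The specific numbers below are illustrative assumptions, not biology: a high-r population recovers from a crash far faster, but near the carrying capacity the advantage disappears and both populations settle at K.

```python
def logistic_step(n, r, K):
    # Discrete logistic growth: n grows at rate r when scarce,
    # and growth flattens to zero as n approaches the capacity K.
    return n + r * n * (1 - n / K)

def simulate(n0, r, K, steps):
    n = n0
    for _ in range(steps):
        n = logistic_step(n, r, K)
    return n

K = 1000.0  # shared carrying capacity (illustrative)

# After a "catastrophe", both populations restart at 1% of capacity.
r_strategist = simulate(10.0, 0.9, K, steps=10)  # high intrinsic growth
k_strategist = simulate(10.0, 0.2, K, steps=10)  # low intrinsic growth

print(r_strategist)  # well over 900: nearly repopulated in 10 steps
print(k_strategist)  # under 100: still climbing

# At saturation (many steps), both converge to K; raw growth rate
# stops mattering, which is where K-selected traits pay off.
print(simulate(10.0, 0.9, K, steps=200), simulate(10.0, 0.2, K, steps=200))
```

Note that “quality of offspring” isn’t represented in this toy model at all; it only captures the asymmetry described above, that a high r pays off when populations are far below K and stops mattering at saturation.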

It is, in fact, inaccurate to characterize highly evolved, complex life forms such as mammals as strong r- or K-selectors. As humans, we’re clearly both. We have an r-selective and a K-selective sexual drive, and one could argue that much of the human story is about the arms race between the two.

The r-selective sex drive wants promiscuity, has a strong present-orientation, and exhibits a total lack of moral restraint– it will kill, rape, or cheat to get its goo out there. The K-selective sex drive supports monogamy, is future-oriented, and values a stable and just society. It wants laws and cultivation (culture) and progress. Traditional Abrahamic religions have associated the r-drive with “evil” and sin. I wouldn’t go that far. In animals it is clearly inappropriate to put any moral weight into r- or K-selection, and it’s not clear that we should be doing that to natural urges that all people have (such as calling the r-selective component of our genetic makeup “original sin”). How people act on those is another matter. The tensions between the r- and K-drives have produced much art and philosophy, but civilization demands that people mostly follow their K-drives. While age and gender do not correlate as strongly to the r/K divide as stereotypes would insist (there are r-driven older women, and K-driven young men) it is nonetheless evident that most of society’s bad actors are those prone to the strongest r-drive: uninhibited young men, typically driven by lust, arrogance and greed. In fact, we have a clinical term for people who behave in a way that is r-optimal (or, at least, was so in the state of nature) but not socially acceptable: psychopaths. From an r-selective standpoint, psychopathy conferred an evolutionary advantage, and that’s why it’s in our genome.

Both sexual drives (r- and K-) exist in all humans, but it wasn’t until the K-drive triumphed that civilization could properly begin. In pre-monogamous societies, conflicts between men over status (because, when “alpha” men have 20 mates and low-status men have none, the stakes are much greater) were so common that between a quarter and a half of men died in positional violence with other men. Religions that mandated monogamy, or at least restrained polygamy as Islam did, were able to build lasting civilizations, while societies that accepted pre-monogamous distributions of sexual access were unable to get past the chaos of constant positional violence.

There are many who argue that the contemporary acceptance of casual sex constitutes a return to pre-monogamous behaviors. I don’t care to get far into this one, if only because I find the hand-wringing about the topic (on both sides) to be rather pointless. Do we see dysgenic patterns in the most visible casual sex markets (such as the one that occurs in typical American colleges)? Absolutely, we do. Even if we reject the idea that higher-quality people are less prone to r-driven casual sex, the way people (of both sexes) select partners in that game is visibly dysgenic. But to the biological future (culture is another matter) of the human species, that stuff is pretty harmless– thanks to birth control. This is where the religious conservative movement shoots itself in the foot; it argues that the advent of birth control created uncivil sexual behavior. In truth, bad sexual behavior is as old as dirt, has always been a part of the human world and probably always will be; the best thing for humanity is for it to be rendered non-reproductive, mitigating the dysgenic agents that brought psychopathy into our genome. (On the other hand, if human sexual behavior devolved to the state of high school or college casual sex and remained reproductive, the species would devolve into H. pickupartisticus and be kaputt within 500 years. I would short-sell the human species and buy sentient-octopus futures at that point.)

If humans have two sexual drives, it stands to reason that those drives would react differently to various circumstances. This brings to mind the relationship of each to socioeconomic stress. The r-drive is enhanced by socioeconomic stress– both eustress and distress. Eustress-driven r-sexuality is seen in the immensely powerful businessman or politician who frequents prostitutes, not because he is interested in having well-adjusted children (or even in having children at all) but to see if he can get away with it; distress-driven r-sexuality has more of an escapist, “sex as drug”, flavor to it. In an evolutionary context, it makes sense that the r-drive should be activated by stress, since the r-drive is what enables a species to populate rapidly after an ecological catastrophe. On the other hand, the K-drive is weakened by socioeconomic stress and volatility. It doesn’t want to bring children into a future that might be miserable or dangerously unpredictable. The K-drive’s reaction to socioeconomic eustress is busyness (“I can’t have kids right now; my career’s taking off”) and its reaction to distress is to reduce libido as part of a symptomatic profile very similar to depression.

The result of all of this is that, should society fall into a damaged state where socioeconomic inequality and stress are rampant, the r-drive will be more successful at pushing its way to reproduction, while the K-drive is muted. The people who come into the future will therefore disproportionately be the offspring of r-driven parents and couplings. Even if we reject the idea that undesirable people have stronger r-drives relative to their K-drives (although I believe that to be true) the enhanced power of the r-strategic sexual drive will influence partner selection and produce worse couplings. Over time, this presents a serious risk to the genetic health of the society.

Just as Mike Judge’s Idiocracy is more true of culture than of biology, we see the overgrown r-drive in the U.S.’s hypersexualized (but deeply unsexy) popular culture, and the degradation is happening much faster to the culture than it possibly could to our gene pool, given the relatively slow rate of biological evolution. Some wouldn’t see any correlation whatsoever between the return of the Gilded Age post-1980 and Miley Cyrus’s “twerking”, but I think that there’s a direct connection.


The Social Calvinism of the American right wing believes that severe socioeconomic inequality is necessary to flush the “undesirables” to the bottom, deprive them of resources, and prevent them from reproducing. Inherent to this strategy is the presumption (and a false one) that people are future-oriented and directed by the K-selective sexual drive, which is reduced by socioeconomic adversity. In reality, the more primitive (and more harmful, if it results in reproduction) r-selective sexual drive is enhanced by socioeconomic stresses.

In reality, socioeconomic volatility reduces the K-selective drive of most people, rich and poor, and enhances the r-selective drive. The reason for this is that a person’s subjective sense of satisfaction with socioeconomic status is based not on whether he or she is naturally “desirable” to society but on his or her performance relative to natural ability and industry, which is a noise variable. Even if we do not accept that desirable people are more likely to have strong K-drives and weak r-drives, it is empirically true (seen in millennia of human sexual behavior) that people operating under the K-drive choose better partners than those operating under the r-drive.

The American conservative movement argues, fundamentally, that a mean-spirited society is the only way to prevent dysgenic risk. It argues, for example, that a welfare state will encourage the reproductive proliferation of undesirable people. The reality is otherwise. Thoughtful people, who look at the horrors of American healthcare and the rapid escalation of education costs, curtail reproduction even if they are objectively “genetically desirable” and their children are likely to perform well, in absolute terms. Thoughtless people, pushed by powerful r-selective sex drives, will not be reproductively discouraged, and might (in fact) be encouraged, by the stresses and volatility (but, also, by undeserved rewards) of the harsher society. Therefore, American Social Calvinism actually aggravates the very dysgenic risk that it exists to address.

Exploding college tuitions might be a terrifying sign

It’s well-known that college tuitions are rising at obscene rates, with the inflation-adjusted cost level having grown over 200 percent since the 1970s. Then there is the phenomenon of “dark tuition”, referring to the additional costs that parents often incur in giving their kids a reasonable shot of getting into the best schools. Because of the regional-balancing effect (read: non-rich students from highly-represented areas have almost no shot, because they compete in the same regional pool as billionaires) the insanity begins as early as preschool in places like Manhattan. Including dark tuition, some families spend nearly a million dollars on college admissions and tuition for their spawn. To write this off as a wasteful expenditure is unreasonable; it’s true that these decisions are made without data, but the connections made early in life clearly can be worth a large sum. Or, alternatively, the cost of not being connected can be quite high.

Many also note that a college degree means less than it used to, and that’s clearly true: educational credentials bring less on the job market than they once did. Yet rising tuitions are a market signal indicating that, at least for elite colleges, the value of something has gone up. Some people have complained that MBA school has become the new college, due to the latter’s devaluation. I’d argue that the data suggest the reverse. College is turning into MBA school: quality can be found at the top 200 or so institutions, but increasingly, the real big-ticket value motivating the purchase is certainly not the education, and not really the brand name– 5 years out of school, no one cares where you attended; the half of elite-school attendees who fail to make significant connections are likely to end up in mediocrity and failure like everyone else– but the network itself.

So have elite social connections become more valuable? How could it be so, in an era during which technology is supposedly liberating us from inefficiencies like good-old-boy networks? Aren’t those dinosaurs on the way to extinction? It seems not. This should be an upsetting conclusion, not so much for what it means (connections matter) but for what it suggests about the trend. Realists accept that, in the real world, connections and the attendant manipulations and (often, for those outside needing to get in) extortions matter. What we all hope is that they will matter less as time goes on, because for the opposite to be the case suggests that progress is moving backward. Is it?

The news, delivered.

“You know what the trouble is, Brucey? We used to make shit in this country, build shit. Now we just put our hand in the next guy’s pocket.” — Frank Sobotka, The Wire.

Leftists like me would typically argue that American decline began in the 1980s. The prosperity of the 1990s falsely validated the limp-handed centrism of the “New Democrats”, and the 2000s was the decade of free fall. On the other hand, despite the mean-spirited political tenor of these decades, the U.S. continued to innovate. As bad as things were, from a macroscopic and cultural perspective, the engines of progress continued running. Silicon Valley didn’t stop just because Reagan and the Bushes held power. Google, which became a household name around 2002, didn’t go out of business just because of a toxic political environment. I’m not saying that politics doesn’t matter– obviously, it does– but even in the darkest hours (Bush between 9/11 and Katrina) there was not a visible, credible threat that American innovation would, in the short term, just die.

I also don’t think that we’re in immediate danger of an out-of-context innovation shutdown. It’s not something that will happen in the next two years. I do think that we’re closer than we realize.

American innovation exists for a surprisingly simple reason: forgiving bankruptcy laws regarding good-faith business failure. If your company folds, it doesn’t ruin your life. Unfortunately, that protection has been eroded. Bank loans for new businesses require personal liability, circumventing this protection outright. The alternative is equity financing, but Silicon Valley’s marquee venture capitalists have set up a collusive, feudal reputation economy in which an individual investor can be a single point of failure for an entrepreneur’s entire career. The single trait of the American legal system that enabled it to be a powerhouse for new business generation– forgiving treatment of good-faith business failure– has been removed. Powerful people saw it as inconvenient, so they wrote it off the ticket.

Credible long-term threats to innovation are present. Makers struggle more to get their ideas funded, or to get anywhere near the people in control of the arcane permission system that still runs the economy. The socially-connected takers who own that permission system can demand more as a price of audience. We’re seeing that. The people who really make the big money (defined as enough to comfortably buy a house) in Silicon Valley, these days, aren’t the makers implementing new, crazy ideas, but the peddlers of influence: those using their business-school connections to get unwarranted advisory and executive positions, stitching together enough equity slices to have a viable portfolio, or doing the former even better and becoming real VCs. Silicon Valley’s Era of Makers has come and gone; now, MBA culture has swept in, 22-year-olds are getting funded based on who their parents are, and it’s clear that Taker Culture has won… at least in the “we’ll fund your competitors if you don’t take this term sheet” VC-funded world.

So… what does this have to do with college tuitions rising? Possibly nothing. There are a number of plausible causes for the tuition bubble, many having little or nothing to do with Taker Culture and the (risk of) death of innovation. Or, it might tell us a lot.

What do we actually know?

We know that college tuitions are skyrocketing. Professors’ salaries aren’t the cause, because the academic job market has been tanking over the past 30 years, with low-paid adjuncts and graduate students replacing professors in much of undergraduate education. This suggests that the quality hasn’t improved, and I’d agree with that assessment. Administrative costs and real estate expenditures have gone up, but that seems to be more a case of colleges wanting to do something with this massive pool of available money than a root cause of the escalating costs.

Housing prices in the most vital areas have also increased, even though the economy (including in those areas) has weakened considerably. I suspect that these two of the three aspects of the Satanic Trinity (housing, healthcare, and tuition costs) share a common thread: as the world becomes riskier and poorer, people are buying connections. That’s what living in New York instead of New Orleans in your 20s is about. It’s also what going to an Ivy instead of an equally adequate state university is about. Of course, the fact that connections matter enough to be bought isn’t new. People have been buying connections as long as there has been money. What is obvious is that people are paying more for connections than ever before, and that inherited social connectedness has probably reached a level of importance (even in the formerly meritocratic VC-funded startup scene) incompatible with democracy, innovation, or a forward-thinking society. Oligarchy has arrived.

What happens in an oligarchy is that the purchase of connections (via financial transfer, or ethical compromise) ceases to be an irritating sideshow of the economy– a distraction from actually making stuff– and, instead, becomes the main game.

Here’s an interesting philosophical puzzle. Does this pattern actually mean that connections have become (a) more valuable, or (b) less so? It means both, paradoxically. Social connections matter more, insofar as a much larger pool of money is being put into chasing them, and this strongly indicates that hard work, creativity, and talent no longer matter as much. To navigate society’s dehumanizing and arcane permissions systems, “who you know” is becoming more crucial. The exchange rate between social capital on the one hand and talent and hard work on the other now favors the former. However, connections are also less valuable, insofar as they deliver less, requiring people to procure more social capital in order to make their way in the world. The price of something increasing does not necessarily mean that it’s worth more to the world; it might be that a reduction in its delivered value has driven up the quantity needed, and thus its price. This evolution is not the functioning of a healthy economy; it’s a sickness that benefits only a few. Connections matter more, made very evident by the fact that people are paying more for the same quantity, but they deliver less. That means that the world, as a whole, is just getting poorer.

This is clearly happening. People are paying more for social connections, and the state of the economy indicates that even more social access is needed to buy as much economic value (security, opportunity, etc.) as yesteryear. Adam Smith described Britain as a “nation of shopkeepers”. The United States, ever since the Organization Man age, has been in danger of becoming a nation of social climbers. However, there has always been something else to its economic character: at least enough impurity amid the bland mass to give color, should the damn thing crystallize. But is that true now? In the 1970s, that “impurity” was Silicon Valley. There was cheap land that the old elite didn’t want, but that drew (for a variety of historical reasons) a lot of intelligent and capable people. Governments and businesses used this opportunity to build up one of the most impressive R&D cultures the world has seen. Maker Culture came first in Silicon Valley, generated a lot of value, and then the money started rolling in. Unfortunately, the money also brought in douchebags, whose numbers and power have only increased. It was probably inevitable that Taker Culture (multiple liquidation preferences, note-sharing among VCs, MBA-culture startups with reckless and juvenile management, Stanford Welfare, and the importance of social connections) would set in.

The New California?

We know that the California-centered Maker Culture is gone. There are still a hell of a lot of great people in that region– it might be the most talent-rich place on earth– but, with a few outstanding exceptions, they’re no longer the socially important ones. I don’t think it’s worth dissecting the death of the thing, or whining about the behaviors of venture capitalists, because I think that ecosystem is too far gone to repair itself. In the 1990s, venture capitalists rightly judged that most of the powerful, large corporations were too politically dysfunctional to innovate. Now, that same charge is even more true of the VC-funded ecosystem, which effectively functions (due to the illegal collusion of VCs, who increasingly view themselves as a single executive suite) as a single corporation, albeit with a postmodern structure.

What places are now what California was when the Maker Culture emerged? Another city in the U.S., like Austin, perhaps? Another country? Must it even be a physical place at all? I don’t know the answers to these questions.

Or, as the escalating cost of college tuition– and the premium on social connections suggested by that– seems to indicate, is it just gone for good? Has an effete aristocracy found a way to drive meritocracy not just to a fringe (like California five decades ago) but out of existence entirely? If so, then expect innovation to die out, and an era of stagnation to set in.

Three capitalisms: yeoman, corporate, and supercapitalism

I’m going to put forward the idea, here, that what we call capitalism in the United States is actually an awkward, loveless ménage à trois among three economic systems, each of which considers itself to be the true capitalism, but all three of which are quite different. Yeoman (or lifestyle) capitalism is the most principled of the three, focused on building businesses to improve one’s life or community. The yeoman capitalist plays by the rules and lives or dies by her success on the market. Second, there’s corporate capitalism, whose internal behavior smells oddly of a command economy, and which often seeks to control the market. Corporate capitalism is about holding position and keeping with the expectations of office– not markets per se. Finally, there is supercapitalism, whose extra-economic fixations render it more like feudalism than any other system; it exerts even more control than the corporate kind, but at a deeper and more subtle level.

1. Yeoman capitalism (“the American Dream”)

The most socially acceptable of the American capitalisms is that of the small business. It’s not trying to make a billion dollars per year, it doesn’t have full-time, entitled nonproducers called “executives”, and it often serves the community it grew up in. It’s sometimes called a “lifestyle business”; it generates income (and provides autonomy) for the proprietor so as to improve her quality of life. When typical Americans imagine themselves owning a business, and aspiring to the freedom that can confer, yeoman capitalism is typically what they have in mind: something that keeps them active and generates income, while conferring a bit of independence and control over one’s destiny.

Yeoman capitalism is often used as a front for the other two capitalisms, because it’s a lot more socially respected. Gus Fring, in Breaking Bad, is a supercapitalist who poses as a yeoman capitalist, making him beloved in Albuquerque.

The problem with yeoman capitalism is that not only is it highly risky in terms of year-by-year yield, but there’s often no career in it. Small business owners do a lot more for society than executives, but get far less in terms of security. An owner-operator of a business that goes bankrupt will not easily end up with another business to run, while fired executives get new jobs (often, promotions) in a matter of weeks. Modern-day yeoman capitalism is as likely to take the form of a consulting or application (“app”) company as a standalone business, and may have more security; time will tell, on that one.

While yeoman capitalism provides an attractive narrative (the American Dream, in the United States) it does not provide job security for anyone (and that’s not its goal). It also has a high barrier to entry: you need capital or connections to play. Even though it is a more likely path to wealth than the other two capitalisms are for most people, it often leads to horrible failure, because it comes with absolutely no safety net. It’s the blue-collar capitalism of working hard and hoping that the market rewards it. Sometimes, the market doesn’t. Most people can’t stomach the income volatility of this, or even amass the capital to get started.

2. Corporate capitalism (“in Soviet Russia, money spends you”)

Corporate capitalism provides much more security, but it has an institutional command-economy flavor. People don’t think like owners, because they’re not. Private-sector social climbers rule the day. It’s uninspiring. It feels like the worst of both worlds between capitalism and communism, with much of the volatility, insecurity, and greed of the first but the mediocrity, duplicity, and disengagement associated with the second. Yet it has one thing that keeps it going and makes it the dominant capitalism of the three: a place for (almost) everyone. Most of those places are terrible, but they exist and they don’t change much. Corporate capitalism will give you the same job in California as you’d get in New York for your level of “track record” and “credibility” (meaning social status).

The attraction of corporate capitalism is that one has a generally good sense of where one stands. Yeoman capitalism is impersonal; market forces can fire you, even if you do everything right. Corporate capitalism gives each person a history and a personal reputation (resume) based on the quality of companies where one worked and the titles one held. At least in theory, that smooths out the bad spells because, even though layoffs and reorganizations occur, the system will always be able to find an appropriate position for a person’s “level”, and people level up at a predictable rate.

Adverse selection is one problem with corporate capitalism. People choose corporate capitalism over the yeoman kind to mitigate career risks. People who want to off-load market risks might be neutral bets from a hiring perspective, but people who want to off-load their own performance risks (i.e. because they’re incompetent slackers) are bad hires. Corporate capitalism’s “place for everyone” makes it attractive to those sorts of people, who can trust that social lethargy, in addition to legal issues, around decisions that adversely affect one’s career (i.e. actually demoting or firing someone) will buy them enough time to earn a living doing very little. Consequently, it’s hard to operate in corporate capitalism without accruing some dead weight. Worse yet, it’s hard to get rid of the deadwood, because the useless people are often the best at playing politics and evading detection. Companies that set up “fire the bottom 10 percent each year” policies end up getting ruined by the Welch Effect: stack ranking’s most common casualties are not true underperformers, but junior members of macroscopically underperforming teams (who had the least to do with this underperformance).
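The Welch Effect is easy to demonstrate with a toy simulation (all numbers invented for illustration): if review scores mostly reflect team-level results and seniority, with true individual skill barely visible to the process, then a “fire the bottom 10 percent” policy selects junior members of underperforming teams rather than the genuinely weakest individuals:

```python
import random
import statistics

random.seed(42)

N_TEAMS, TEAM_SIZE = 20, 10
employees = []
for _ in range(N_TEAMS):
    team_effect = random.gauss(0, 1)       # macroscopic team performance
    for i in range(TEAM_SIZE):
        skill = random.gauss(0, 1)         # true individual ability
        senior = i < TEAM_SIZE // 2        # half of each team is senior
        # The review score mostly reflects team results and seniority;
        # true individual skill barely registers (weight 0.1).
        score = team_effect + (0.5 if senior else 0.0) + 0.1 * skill
        employees.append({"team": team_effect, "skill": skill,
                          "senior": senior, "score": score})

# "Fire the bottom 10 percent each year."
employees.sort(key=lambda e: e["score"])
fired = employees[: len(employees) // 10]

print("mean true skill of fired:", statistics.mean(e["skill"] for e in fired))
print("mean team effect of fired:", statistics.mean(e["team"] for e in fired))
print("junior fraction of fired:", sum(not e["senior"] for e in fired) / len(fired))
```

Run it and the fired cohort’s average true skill is close to the population average, while its average team effect is far below it and juniors dominate the list– the policy is punishing team membership and low rank, not individual underperformance.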

Compounding this is the fact that corporations must counterweight their extreme inequality of results (in pay, division of labor, and respect) with a half-hearted attempt at equality of opportunity (no playing of favorites). What this actually means is that the most talented can’t “grade skip” past the initial grunt work, but have to progress along the slow, pokey track built for the safety-seeking, disengaged losers. They don’t like this. They want the honors track, and don’t get it, because it doesn’t exist– grooming a high-potential future leader (as opposed to hiring one from the outside and then immediately reorg-ing so no one knows what just happened) is not worth pissing off the rest of the team. The sharp people leave for better opportunities. Finally, corporations tend over time toward authoritarianism because, as the ability to retain talent wanes, the remaining people whom the company considers highly valuable are enticed with a zero-sum but very printable currency– control over others. All of this tends toward an authoritarian mediocrity that is the antithesis of what most people think capitalism should be.

Socialism and capitalism both have a Greenspun property wherein bad implementations of one generate shitty forms of the other. Under Soviet communism, criminal black markets (similar to the one existing for psychoactive drugs in the U.S.) formed around staid items like lightbulbs: a case of bad socialism creating a bad capitalism. Corporate capitalism has a similar story. Corporations are fundamentally statist institutions that operate like command economies internally. In fact, if one were to conceive of the multinational corporation as the successor to the nation-state, one could see the corporation as an extremely corrupt socialist state. What is produced, how it is produced, and who answers to whom are all determined centrally by an autocratic authority. Advancement has more to do with pleasing party officials than succeeding on a (highly controlled) market. Corporations do not run as free markets internally; moreover, once they are powerful and established, they work to make society’s broader market less free, pulling the ladder up after using it.

3. Supercapitalism! (“You know what’s cool? Shitting all over a redwood forest for a wedding!”)

Supercapitalism is the least understood of the three capitalisms. Supercapitalists don’t have the earnestness of the yeoman capitalist; they view that as a chump’s game, because of its severe downside risks. They also don’t have the patience for corporate capitalism’s pokey track. Supercapitalists rarely invest themselves in one business or product line; having a full-time job is proletarian to them. Instead, they “advise” as many different firms as they can. They’re constantly buying and selling information and social capital.

Mad Men is, at heart, about the emergence of a supercapitalist class in professional advertising. Don Draper isn’t an entrepreneur, but he’s not a corporate social climber either. He’s a manipulator. The clients are corporate capitalists playing a less interesting game than what is, in the early 1960s, emerging on Madison Avenue– a chance to float between companies while cherry-picking their most interesting or lucrative marketing problems. The ambitious, smart Ivy Leaguers are all working for people like Don Draper, not trying to climb the Bethlehem Steel ladder. What’s attractive about advertising is that it confers the ability to work with several businesses without committing to one. Going in-house to a client (still at a much higher level than any ladder climber can reach) is the consolation prize.

One interesting trait of supercapitalism is that it’s generally only found in one or two industries at a time. Madison Avenue isn’t the home of supercapitalism anymore; now, advertising is just the unglamorous corporate kind. Investment banking took the reins afterward, but is now losing that; now it’s VC-funded internet startups (many of which have depressingly little to do with true technology) where supercapitalist activity lives. Why is it this way? Because supercapitalism, although it considers itself the most modern and stylish capitalism, has a fatal flaw. It’s obsessed with prestige, and prestige is another name for reputation, and so it generates reputation economies (feudalism). It can’t stay in one place for too long, lest it undermine itself (by developing the negative reputation it deserves, and therefore failing on its own terms).

Supercapitalism also turns into the corporate kind because its winners (and there are very few of them) get out. First, they establish high positions where they participate in very little of the work (to avoid evaluation that might prove them just to have had initial luck). They become executives, then advisors, then influential investors, and then they move somewhere else– somewhere more exciting. That leaves the losers behind, and all they can come up with are authoritarian rank cultures designed to replicate former glory.

Why does supercapitalism generate a reputation economy? That fact is extremely counterintuitive. Supercapitalism draws in some of the most talented, energetic people, and it is often (because of its search for the stylish) at the cutting edge of the economy. So why would it create something as backward and feudal as a reputation economy, which intelligent people almost uniformly despise? The answer, I think, is that supercapitalism tends to demand world-class resources in both property (capital) and talent (labor). A regular capitalist is not nearly as selective, and will take any opportunity to turn a profit from property or talent, but the sexiest and most stylish capers require top-tier backing in both. If you’re obsessed with making a name for yourself in the most grandiose way (and supercapitalism is run by the most narcissistic people, who are not necessarily the greediest), you don’t just need to hit your target; you also need the flashiest guns.

Right now, the eye of the supercapitalist hurricane is parked right over Silicon Valley. Sean Parker is the archetypical supercapitalist. He’s never really succeeded in any of his roles (that’s a prolish, yeoman capitalist ideal) but he’s worth billions, and now famous for being famous. While corporate capitalism focuses on mediocrity and marginalizes both extremes (deficiency and excellence) supercapitalism will always make a cushy home for colorful, charismatic failures just as eagerly as it does for unemployable excellence.

Supercapitalism will, eventually, move away from the Valley. Time will tell how much damage has been done by it, but considering the state of the housing market there and the horrible effects of high house prices on culture, I wouldn’t expect the region to survive. Supercapitalism rarely considers posterity and it tends to leave messes in its wake. 

The final reason why supercapitalism must move from one industry to another, over time, is that reputation economies deplete the opportunities that attract talent. It’s worthwhile, now, to talk about compensation and how it works in the three capitalisms. Doing so will help us understand what supercapitalism is, and how it differs from the corporate kind.

Under yeoman capitalism, the capitalist is compensated based on exactly how the market values her product. No committee decides what to pay her, and she is never personally evaluated; it’s the market value of what she sells that determines her income. Most people, as discussed, either can’t handle (financially or emotionally) this volatility or, at least, believe they can’t. Corporate capitalism and supercapitalism, on the other hand, tend to pre-arrange compensation with salaries and bonuses that are mostly predictable.

Of course, what a person’s work is worth, when that work is abstract and joint efforts are complex and nonseparable, has a wide range of defensible values. Corporate capitalism settles this by setting compensation near the lower bound of that range, but (mostly) guaranteeing it. If you make $X in base salary, there’s usually a near-100-percent chance that you’ll make that or more in a year (possibly in another job). Since people are compensated at the lower bound of this range, this generates large profits at the top; in the executive suite (above the effort thermocline) something exists that looks somewhat like a less mobile and blander supercapitalism.
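A minimal numeric sketch of that claim (my numbers, purely illustrative): if each worker’s contribution is only defensibly valued somewhere within a range, and the firm guarantees the lower bound as salary, then the gap between any fair midpoint valuation and the actual payroll accrues as profit above the effort thermocline:

```python
import random

random.seed(0)

# Each worker's abstract, joint work has a defensible value range [low, high];
# the firm pays the guaranteed lower bound and keeps the difference.
workers = [(random.uniform(80, 100), random.uniform(120, 200))
           for _ in range(50)]

payroll = sum(low for low, _ in workers)                 # what is actually paid
midpoint_value = sum((low + high) / 2 for low, high in workers)

surplus = midpoint_value - payroll   # accrues above the effort thermocline
print(round(payroll), round(midpoint_value), round(surplus))
```

Because every worker is paid at the bottom of her defensible range, the surplus is strictly positive in aggregate– and it scales with headcount, which is why the executive suite’s blander supercapitalism can exist at all.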

People who want to move into the middle or top of their defensible salary ranges won’t get it in corporate capitalism. The work has already been commoditized and the rates are already set, and excellence premiums are pretty minimal because most corporations refuse to admit that their in-house pet efforts aren’t excellent. Thus, talented people looking for something better than the corporate deal find places where the opportunities are vast, but also poorly understood by the local property-holders, allowing them to get better deals than if the latter knew what they had. At one time, it was advertising (cutting-edge talent understood branding and psychology; industrial hierarchs didn’t). Then it was finance; later and up to now, it has been venture-funded light technology (on which the sun is starting to set). Over time, however, the most successful supercapitalists position themselves so as not to be affiliated with a single one of the efforts, but diversify themselves among many. This creates a collusive, insider-driven market like modern venture capital. Over time, this inappropriate sharing of information turns into a full-blown reputation economy.

Once a reputation economy is in place, talent stops winning, because property, by its sheer power over reputations, has full authority to set the exchange rate between property and talent. “The rate is X. Accept it or I’ll shit on your name and you’ll never see half of X.” Once that extortion becomes commonplace, what follows is a corporate rank culture. It feels like the arrangements are “worked out” and only management can win– and that’s actually how it is. Opportunities don’t disappear entirely, but they aren’t any more available to young talent than elsewhere, and the field becomes just another corporate slog. That’s where the VC-funded technology scene will be soon, if not already there. 

Supercapitalists, I should note, are not always the same people as “top talent” and they’re rarely young (i.e. hungry and unestablished) talent. Supercapitalists tend to be the rare few with connections to both property and talent at the highest levels of quality. Property they can carry with them, but talent they must chase. Talent arrives in the new place (quantitative finance, internet technology) first. Supercapitalism emerges as these well-connected and propertied “carpetbaggers” arrive, and as the next wave of young talent discovers that there are better opportunities in managing the new place (i.e. associate positions at VC firms) than working there. 

What really impels young talent to join supercapitalism is not the immediate opportunity (which is tapped out) but the possibility of moving along with supercapitalism to the next new place. For example, someone who started in investment banking in 2006 is not likely to be a million-per-year MD today– that channel’s clogged– but he has a good chance of being rich by this point if he jumped on the venture capital bandwagon around 2007-08; he’s a VC partner on Sand Hill Road now.


How do these three capitalisms interact? Is there a pecking order among them? How do they view each other? What is the purpose of each?

Yeoman capitalism provides leadership opportunities for the most enterprising blue-collar people, and is the most internally consistent. It’s honest. Unlike the other capitalisms, there isn’t much room for reputation (much less prestige) aside from the quality of one’s product. The rule is: make something good, and hope to win on the market. The major problem with it is its failure mode, even in good-faith business failures that aren’t the proprietor’s fault. The main competitive advantage one holds as a small business owner is property rights over a company, and one who loses that is not only jobless, but often left with skills of limited transferability.

Yeoman capitalism has a lot of virtues, of course. It gives a lot back to its community, while corporate and supercapitalism tend to destroy their residences and move on. Yeoman capitalism is what blue-collar people tend to think of when they imagine capitalism as a whole, and it provides PR for the corporate capitalists and supercapitalists, who recognize that their reputations (which they hold dear) depend on the positive image that yeoman capitalism gives the whole economic system. Yeoman capitalism is aware of corporate and supercapitalist entities in the abstract, but has little visibility into their inner workings. Most small businessmen probably know that the corporations are somewhat different from their enterprises, but not how different; in reality, at the upper levels, the two live in separate societies.

Corporate capitalism provides social insurance, although with great degrees of inequity based on pre-existing social class. It’s socialism as it would be imagined by a self-serving, entitled upper class refusing to give up any real power or opportunity. It can make little meaning out of leadership, charisma, or unusual intellectual talent. In fact, it goes to great lengths to pretend that these differences among people don’t exist. Its goal is to extract some labor value from people who lack the risk tolerance for yeoman capitalism and the talent for supercapitalism, and it does so extremely well, but it also creates a culture of authoritarian mediocrity that renders it unable to excel at anything. Needs for high quality are often filled by yeoman capitalism or supercapitalism: because yeoman capitalism can provide the autonomy that top talent seeks, while supercapitalism provides (the possibility of) power and extreme compensation, those two capitalisms get the lion’s share of top talent. Regarding awareness, corporate capitalism understands yeoman capitalism well (it often serves yeoman capitalists) but is oblivious to the whims of supercapitalism.

Between corporate and yeoman capitalism, there isn’t a clear social superiority, because they serve different purposes. Some intelligent people prefer the validation and stability of corporate capitalism, while others prefer the blue-collar honesty of yeoman capitalism. On the other hand, a strong argument can be made that supercapitalism is the clear elite among the three. It’s built to take advantage of the freshest, just-being-discovered-now opportunities. 

Supercapitalism has a familiar process. First, the smartest people find opportunities (“before it was cool”) that the property-holders haven’t yet found a way to valuate, and negotiate favorable terms for themselves while they can; this makes a few thousand smart people very rich. Then, the elite property-holders catch wind of the deals to be made and move in. Soon there’s a rare confluence of two forces that usually dislike, but also rely heavily upon, each other– talent and property. Supercapitalism emerges as an all-out contest to set the exchange rate between these two resources over the new domain. Eventually property wins (reputation economy) and corporatization sets in, while those who still have the hunger to be supercapitalists move on to something else.

A puzzle to end on

There’s a fourth kind of capitalism that I haven’t mentioned, and I think it’s superior to the other three for a large class of people. What might it be? That’s one of my next posts. For a hint, or maybe a teaser: the idea comes from evolutionary biology.