Greed versus sadism

I’ve spent a fair amount of time reading Advocatus Diaboli, and his view on human nature is interesting. He argues that sadism is a prevailing human trait. In an essay on human nature, he states:

They all clearly demonstrate a deep-seated and widespread human tendency to be deceitful, cruel, abusive and murderous for reasons that have almost nothing to do with material or monetary gain. It is as if most human beings are actively driven by an unscratchable itch to hurt, abuse, enslave and kill others even if they stand to gain very little from it. Human beings as a species will spend their own time, effort and resources to hurt other living creatures just for the joy of doing so.

This is a harsh statement, and far from socially acceptable. Sadism as a defining human characteristic, rather than a perversion? To be upfront, I don’t agree that sadism is nearly as prevalent as AD suggests. However, it’s an order of magnitude more prevalent than most people want to admit. Economists ignore it and focus on self-interest: the economic agent may be greedy (that is, focused on narrow self-interest) but he’s not trying to hurt anyone. Psychology treats sadism as pathological, limited to a small set of broken people called psychopaths, then tries to figure out what material cause created such a monster. The liberal, scientific, philosophically charitable view is that sadistic people are an aberration. People want sex and material comfort and esteem, it holds, but not to inflict pain on others. Humans can be ruthless in their greed, but are not held to be sadistic. What if that isn’t true? We should certainly entertain the notion.

The Marquis de Sade– more of a pervert than a philosopher, and a writer of insufferably boring, yet disturbing, material– earned his place in history by this exact argument. In the Enlightenment, the prevailing view was that human nature was not evil, but neutral-leaning-good. Corrupt states and wayward religion and unjust aristocracies perverted human nature, but the fundamental human drive was not perverse. De Sade was one of the few to challenge this notion. To de Sade, inflicting harm on others for sexual pleasure was the defining trait. This makes the human problem fundamentally insoluble. If self-interest and greed are the problem, society can align peoples’ self-interests by prohibiting harmful behaviors and rewarding mutually beneficial ones. If, however, inflicting pain on others is a fundamental human desire, then it is impossible for any desirable state of human affairs to be remotely stable; people will destroy it, just to watch others suffer.

For my part, I do not consider sadism to be the defining human trait. It exists. It’s real. It’s a motivation behind actions that are otherwise inexplicable. Psychology asserts it to be a pathological trait of about 1 to 2 percent of the population. I think it’s closer to 20 percent. The sadistic impulse can overrun a society, for sure. Look at World War II: Hitler invaded other countries to eradicate an ethnic group for no rational reason. Or, the sadists can be swept to the side and their desires ignored. Refusing to acknowledge that it exists, however, is not a solution, and I’ll get to why that is the case.

Paul Graham writes about the zero-sum mentality that emerges in imprisoned or institutionalized populations. He argues that the malicious and pointless cruelty seen in U.S. high schools, prisons, and among high-society wives is of a kind that emerges from boredom. When people don’t have something to do– and are institutionalized or constrained by others’ low regard for them (teenagers are seen as economically useless, high-society wives are made subservient, prisoners are seen as moral scum)– they create senseless and degrading societies. He’s right about all this.

Where he is wrong is in his assertion that “the adult world” (work) is better. For him, working on his own startup in the mid-1990s Valley, it was. For the 99%, it’s not. Office politics is the same damn thing. Confine and restrain people, and reinforce their low status with attendance policies and arbitrary orders, and you get some horrendous behavior. Humans are mostly context. Almost all of us will become cruel and violent if circumstances demand it.

Okay, but is that the norm? Is there an innate sadism to humans, or is it rare except when induced by poor institutional design? The prevailing liberal mentality is that most human cruelty is either the fault of uncommon biological aberration (mental illness) or incompetent (but not malicious) design in social systems. The socially unacceptable (but not entirely false) counterargument is that sadism is a fundamental attribute of us (or, at least, many of us) as humans.

What is greed?

The prevailing liberal attitude is that greed is the source of much human evil. The thing about greed is that it’s not all that bad. In computer science, we call an optimization algorithm “greedy” if it makes the locally best choice at each step, without considering the whole search space– and these greedy algorithms often work. Sometimes, they’re the only option, because anything else requires too much in the way of computational resources. “Greed” can simplify. Greedy people want to eat well, to travel, and for their children to be well-educated. Since that’s what most people want, they’re relatable. They aren’t malignant. They’re ruthless and short-sighted and often arrogant, but they (just like anyone else) are just trying to have good lives. What’s wrong with that? Nothing, most would argue. Most importantly, they’re reasonable. If society can be restructured and regulated so that doing the right thing is rewarded, and doing the wrong thing is punished or forbidden, greedy people can be used for good. Unlike the case with sadism, the problem can be solved with design.
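To make the computer-science sense of the word concrete, here is a minimal sketch of a greedy algorithm in Clojure (my own illustration; the function name and coin values are arbitrary, not from any particular source): making change by always taking the largest coin that still fits. It is short-sighted at every step, yet for canonical coin systems such as U.S. currency it happens to yield an optimal answer.

```clojure
;; Illustrative sketch (not from the essay): greedy change-making.
;; At each step, take the largest coin that fits. No backtracking,
;; no search over the whole space of coin combinations.
(defn greedy-change [amount coins]
  (loop [remaining amount
         [c & cs :as all] (sort > coins)
         taken []]
    (cond
      (zero? remaining) taken
      (nil? c) nil                                ; no exact change possible
      (<= c remaining) (recur (- remaining c) all (conj taken c))
      :else (recur remaining cs taken))))

(greedy-change 87 [25 10 5 1])
;; => [25 25 25 10 1 1]
```

For U.S. coins the greedy answer is always optimal; for a system like [4 3 1] it can miss the best solution (greedy makes change for 6 as [4 1 1], while [3 3] uses fewer coins). That is exactly the trade-off: short-sightedness in exchange for speed and simplicity.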

Is greed good? It depends on how the word is defined. We use the word ambition positively and greed negatively, but if we compare the words as they are, I’m not sure this makes a lot of sense. Generally, I view people who want power more negatively than those who want wealth (in absolute, rather than relative terms) alone. As a society, we admire ambition because the ambitious person has a long-term strategy– the word comes from the Latin ambire, which means to walk around gathering support– whereas greed has connotations of being short-sighted and petty. We conflate long-range thinking with virtue, ignoring the fact that vicious and sadistic people are capable of long-term thought as well. At any rate, I don’t think greed is good. However, greed might be, in certain contexts, the best thing left.

To explain this, note the rather obvious fact that corporate boardrooms aren’t representative samples of humanity. For each person in a decision-making role in a large business organization, there’s a reason why he’s there and, if you think it comes down to “hard work” or “merit”, you’re either an idiot or painfully naive. Society is not run by entrepreneurs, visionaries, or creators. It’s run by private-sector social climbers. Who succeeds in such a world? What types of people can push themselves to the top? Two kinds. The greedy, and the sadistic. No one else can make it up there, and I’ll explain why, later in this post.

This fact is what, in relative terms, makes greed good. It’s a lot better than sadism.

The greedy person may not value other concerns (say, human rights or environmental conservation) enough, but he’s not out to actively destroy good things either. The sadist is actively malicious and must be rooted out and destroyed. It is better, from the point of view of a violence-averse liberal, that the people in charge be merely greedy. Then it is possible to reason with them, especially because technology makes rapid economic growth (5 to 20+ percent per year) possible. What prevents that from happening now is poor leadership, not malignant obstruction, and if we can share the wealth with them while pushing them aside, that might work well for everyone. If the leaders are sadistic, the only way forward is over their dead bodies.

“The vision thing”

Corporate executives do not like to acknowledge that the vast majority of them are motivated either by greed or by sadism. Instead, they talk a great game about vision. They concoct elaborate narratives about the past, the future, and their organization’s place in the world. It makes greed more socially acceptable. Yes, I want power and wealth; and here is what I plan to do with it. In the corporate world, however, vision is almost entirely a lie, and there’s a solid technical reason why that is the case.

We have a term in software engineering called “bikeshedding”, which refers to the narcissism of petty differences. Forget all that complicated stuff; what color are we going to paint the bike shed? The issue quickly becomes one that has nothing to do with aesthetics. It’s a referendum on the status of the people in the group. You see these sorts of things in mergers often. In one company, software teams are named after James Bond villains; in the other, they’re named after 1980s hair bands. If the merger isn’t going well, you’ll see one team try to obliterate the memetic cultural marks of the other. “If you refer to Mötley Crüe in another commit message, or put umlauts where they don’t belong for any reason, I will fucking cut you.”

Bikeshedding gets ugly, because it’s a fundamental human impulse (and one that is especially strong in males) to lash out against unskilled creativity (or the perception of unskilled creativity, because the perceiver may be the defective one). You see this in software flamewars, or in stand-up comedy (with hecklers pestering comics, and the swift comics brutally insulting their adversaries). This impulse toward denial is not sadistic, or even a bad thing at its root. It’s fundamentally conservative: inflicting brutal social punishments on incompetent wannabe chieftains is what kept early humans from walking into lions’ dens.

As a result of the very strong anti-bikeshedding impulse, creativity and vision are punished, because (a) even those with talent and vision come under brutal attack and are drawn into lose-lose ego wars, and (b) there are almost never creatively competent adults in charge who can resolve conflicts, consistently, on the right side. The end result is that these aspects of humanity are driven out of organizations. If you stand for something– anything, even something obviously good for the organization– the probability that you’ll take a career-ending punch approaches one as you climb the ladder. If you want to be a visionary, Corporate America is not the place for it. If you want to be seen as a visionary in Corporate America, the best strategy is to discern what the group wants before a consensus has been reached, and espouse the viewpoint that is going to win– before anyone else has figured that out.

What this means is that corporate decisions are actually made “by committee”, and that the committee is usually made up of clever but creatively weak individuals. In the same way that mixing too many pigments produces an uninspiring blah-brown color, an end result of increasing entropy, the decisions that come from such committees are usually depressing ones. They can’t agree on a long-term vision, and to propose one is to leave oneself politically exposed and be termed a “bikeshedder”. The only thing they can agree upon is short-term profit improvement. However, increasing revenue is itself a problem that requires some creativity; if the money were easy to make, it’d already be had. Cutting costs is easier; any dumbass can do that. Most often, these costs are actually only externalized. Cutting health benefits, for one example, means work time is lost to arguments with health insurance companies, reducing productivity in the long run and being a net negative on the whole. But because those with vision are so easily called out as bikeshedding, impractical narcissists, the only thing left is McKinsey-style cost externalization and looting.

Hence, two kinds of people remain in the boardroom, after the rest have been denied entry or demoted out of the way: the ruthlessly greedy, and the sadistic.

Greedy people will do what it takes to win, but they don’t enjoy hurting people. On the contrary, they’re probably deeply conflicted about what they have to do to get the kind of life they want. The dumber ones probably believe that success in business requires ruthless harm to others. The smarter ones see deference to the mean-spirited cost-cutting culture as a necessary, politically expedient, evil. If you oppose it, you risk appearing “soft” and effeminate and impractical and “too nice to succeed”. So you go along with the reduction of health benefits, the imposition of stack ranking, the artificial scarcities inherent in systems like closed allocation, just to avoid being seen that way. That’s how greed works. Greedy people figure out what the group wants and don’t fight it, but front-run that preference as it emerges. So what influences go into that group preference? Even without sadism, the result of the entropy-increasing committee effect seems to be, “cost cutting” (because no one will ever agree on how to increase revenue). With sadism in the mix, convergence on that sort of idea happens faster, and ignorance of externalized costs is enhanced.

The sadist has an advantage in the corporate game that is unmatched. The more typical greedy-but-decent person will make decisions that harm others, but is drained by doing so. Telling people that they don’t have jobs anymore, and that they won’t get a decent severance because that would have been a losing fight against HR, and that they have to be sent out by security “by policy”, makes them pretty miserable. They’ll play office politics, and they play to win, but they don’t enjoy it. Sadists, on the other hand, are energized by harm. Sadists love office politics. They can play malicious games forever. One trait that gives them an advantage over the merely greedy is that, not only are they energized by their wins, but they don’t lose force in their losses. Greedy people hate discomfort, low status, and loss of opportunity. Sadists don’t care what happens to them, as long as someone else is burning.

This is why, while sadists are probably a minority of the general population, they make up a sizeable fraction of the upper ranks in Corporate America. Their power is bolstered by the fact that most business organizations have ceased to stand for anything. They’re patterns of behavior that have literally no purpose. This is because the decision-making derives from a committee of greedy people with no long-term plans, and sadistic people with harmful long-term plans (that, in time, destroy the organization).

Sadists are not a majority contingent in the human population. However, we generally refuse to admit that sadism exists at all. It’s treated as the province of criminals and perverts; surely these upstanding businessmen have their reasons (if short-sighted ones, but that is chalked up to a failure of regulation) for bad behaviors. I would argue that, by refusing to admit sadism’s prevalence and commonality, we actually give it more power. When people confront frank sadism, either in the workplace or in public, they’re generally shocked. Against an assailant, whether we’re talking about a mugger or a manager presenting a “performance improvement plan”, most people freeze. It’s easy to say, “I would knee him in the nuts, gouge out his eyeballs, and break his fingers in order to get away.” Very few people, when battle visits them unprepared, do so. Mostly, the reaction is, I can’t believe this is happening to me. It’s catatonic panic. By refusing to admit that sadism is real and must be fought, we allow it to ambush us. In a street fight, this is observed in the few seconds of paralytic shock that can mean losing the fight and being killed. In HR/corporate matters, it’s the tendency of the PIP’d employee to feel intense personal shame and terror, instead of righteous anger, when blindsided by managerial adversity.

The bigger problem

Why do I write? I write because I want people in my generation to learn how to fight. The average 25-year-old software engineer has no idea what to do when office politics turn against him (and that, my friends, can happen to anyone; overperformance is more dangerous than underperformance, but that’s a topic for another essay). I also want them to learn “Work Game”. It’s bizarre to me that learning a set of canned social skills to exploit 20-year-old women with self-esteem problems (pickup artistry) is borderline socially acceptable, while career advice is always of the nice-guy “never lie on your resume, no exceptions” variety. (Actually, that advice is technically correct. Everyone who succeeds in the corporate game has lied to advance his career, but never put an objectively refutable claim in writing.) Few people have the courage to discuss how the game is actually played. If men can participate in a “pickup artist” culture designed to exploit women with low self-respect and be considered “baller” for it, and raise millions in venture funding… then why is it career-damaging to be honest about what one has to do in the workplace just to maintain, much less advance, one’s position? Why do we have to pretend to uphold this “nice guy”/AFC belief in office meritocracy?

I write because I want the good to learn how to fight. We need to be more ruthless, more aggressive, and sometimes even more political. If we want anything remotely resembling a “meritocracy”, we’re going to have to fight for it and it’s going to get fucking nasty.

However, helping people hack broken organizations isn’t that noble of a goal. Don’t get me wrong. I’d love to see the current owners of Corporate America get a shock to the system. I’d enjoy taking them down (that’s not sadism, but a strong– perhaps pathologically strong, but that’s another debate– sense of justice). Nonetheless, we as a society can do better. This isn’t a movie or video game in which beating the bad guys “saves the world”. What’s important, if less theatrical and more humbling, is the step after that: building a new and better world after killing off the old one.

Here we address a cultural problem. Why do companies get to a point where the ultimate power is held by sadists, who can dress up their malignant desires as hard-nosed cost-cutting? What causes the organization to reach the high-entropy state in which the only self-interested decision it can make is to externalize a cost, when there are plenty of overlooked self-interested decisions that are beneficial to the world as a whole? The answer is the “tallest nail” phenomenon. The tallest nail gets hammered down. As a society, that’s how we work. Abstractly, we admire people who “put themselves out there” and propose ideas that might make their organizations and the world much better. Concretely, those people are torn down as “bikeshedders”, by (a) their ideological opponents, who usually have no malicious intent but don’t want their adversaries to succeed (at least, not on that issue); (b) sadists relishing the opportunity to deny someone a good thing; (c) personal political rivals, which any creative person will acquire over time; and (d) greedy self-interested people who perceive the whim of the group as it is emerging and issue the final “No”. We have a society that rewards deference to authority and punishes creativity, brutally. And capitalism’s private sector, which is supposed to be an antidote to that, and which is supposed to innovate in spite of itself, is where we see that tendency in the worst way.

Greed (meaning self-interest) can be good, if directed properly by those with a bit of long-term vision and an ironclad dedication to fairness. Sadism is not. The combination of the two, which is the norm in corporate boardrooms, is toxic. Ultimately, we need something else. We need true creativity. That’s not Silicon Valley’s “make the world a better place” bullshit either, but a genuine creative drive that comes from a humble acknowledgement of just how fucking hard it is to make the world a tolerable, much less “better”, place. It isn’t easy to make genuine improvements to the world. (Mean-spirited cost-cutting, sadistic game-playing, and cost externalization are much easier ways to make money. Ask any management consultant.) It’s brutally fucking difficult. Yet millions of people every day, just like me, go out and try. I don’t know why I do it, given the harm that even my mild public cynicism has brought to my career, but I keep on fighting. Maybe I’ll win something, some day.

As a culture, we need to start to value that creative courage again, instead of tearing people down over petty differences.


Silicon Valley and the Rise of the Disneypreneur

Someone once described the Las Vegas gambling complex as “Disneyland for adults”, and the metaphor makes a fair amount of sense. The place sells a fantasy– expensive shows, garish hotels (often cheap or free if “comped”) and general luxury– and this suspension of reality enables people to take financial risks they’d usually avoid, giving the casino an edge. Comparing Silicon Valley to Vegas also makes a lot of sense. Even more than a Wall Street trading floor, it’s casino capitalism. Shall we search for some kind of transitivity? Yes, indeed. Is it possible that Silicon Valley is a sort of “Disneyland”? I think so.

It starts with Stanford and Palo Alto. The roads are lined with palm trees that do not grow there naturally, and that cost tens of thousands of dollars apiece to plant. The whole landscape is designed and fake. In a clumsy attempt to lift terminology from Southern aristocrats, Stanford’s nickname is “the Farm”. At Harvard or Princeton, there’s a certain sense of noblesse oblige that students are expected to carry with them. A number of Ivy Leaguers eschew investment banking in favor of a program like Teach for America. Not so much at Stanford, which has never tempered itself with Edwardian gravity (by, for example, encouraging students to read literature from civilizations that have since died out) in the way that East Coast and Midwestern colleges have. The rallying cry is, “Go raise VC.” Then, they enter a net of pipelines: Stanford undergrad to startup, startup to EIR gig, EIR to founder, founder to venture capitalist. The miraculous thing about it is that progress on this “entrepreneurial” path is assured. One never needs to take any risk to do it! Start in the right place, don’t offend the bosses-I-mean-investors, and there are three options: succeed, fail up, or fail diagonal-up. Since they live in an artificial world in which real loss isn’t possible for them, but one that also limits them from true innovation, they perform a sort of Disney-fied entrepreneurship. They’re the Disneypreneurs.

Just as private-sector bureaucrats (corporate executives) who love to call themselves “job creators” (and who only seem to agree on anything when they’re doing the opposite) are anything but entrepreneurs, I tend to think of these kids as not real entrepreneurs. Well, because I’m right, I should say it more forcefully. They aren’t entrepreneurs. They take no risk. They don’t even have to leave their suburban, no-winter environment. They don’t put up capital. They don’t risk sullying their reputations by investing their time in industries the future might despise; instead, they focus on boring consumer-web plays. They don’t go to foreign countries where they might not have all the creature comforts of the California suburbs. They don’t do the nuts-and-bolts operational grunt work that real entrepreneurs have to face (e.g. payroll, taxes) when they start new businesses, because their backers arrange it all for them. Even failure won’t disrupt their careers. If they fail, instead of making their $50-million payday in this bubble cycle, they’ll have to settle for a piddling $750,000 personal take in an “acqui-hire”, a year in an upper-middle-management position, and an EIR gig. VC-backed “founders” take no real risk, but get rewarded immensely when things go their way. Heads, they win. Tails, they don’t lose.

Any time someone sets up a “heads I win, tails I-don’t-lose” arrangement, there’s a good bet that someone else is losing. Who? To some extent, it’s the passive capitalists whose funds are disbursed by VCs. Between careerist agents (VC partners) seeking social connection and status, and fresh-faced Disneypreneurs looking to justify their otherwise unreasonable career progress (due to their young age, questionable experience, and mediocrity of talent), what is left for the passive capitalist is a return inferior to that offered by a vanilla index fund. However, there’s another set of losers for whom I often prefer to speak, their plight being less well-understood: the engineers. Venture capitalists risk other people’s money. Founders risk losing access to the VCs if they do something really unethical. Engineers risk their careers. They’ve got more skin in the game, and yet their rewards are dismal.

If it’s such a raw deal to be a lowly engineer in a VC-funded startup (and it is) then why do so many people willingly take that offer? They might overestimate their upside potential, because they don’t know what questions to ask (such as, “If my 0.02% is really guaranteed to be worth $1 million in two years, then why do venture capitalists value the whole business at only $40 million?”). They might underestimate the passage of time and the need to establish a career before ageism starts hitting them. Most 22-year-olds don’t know what a huge loss it is not to get out of entry-level drudgery by 30.

However, I think a big part of why it is so easy to swindle so many highly talented young people is the Disneyfication. The “cool” technology company, the Hooli, provides a halfway house for people just out of college. At Hooli, no one will make you show up for work at 9:00, or tell you not to wear sexist T-shirts, or expect you to interact decently with people too unlike you. You don’t even have to leave the suburbs of California. You won’t have to give up your car for Manhattan, your dryer for Budapest, your need to wear sandals in December for Chicago, or your drug habit for Singapore. It’s comfortable. There is no obvious social risk. Even the mean-spirited, psychotic policy of “stack ranking” is dressed up as a successor to academic grading. (Differences glossed over are (a) that there’s no semblance of “meritocracy” in stack ranking– it’s pure politics, and a professor who graded as unfairly as the median corporate manager would be fired; (b) that academic grading is mostly for the student’s benefit, while stack-ranking scores are invariably to the worker’s detriment; and (c) that universities genuinely try to support failing students, while corporations use dishonest paperwork designed to limit lawsuit risk.) The comfort offered to the engineer by the Disney-fied tech world, which is actually more ruthlessly corporate (and far more undignified) than the worst of Wall Street, is completely superficial.

That doesn’t, of course, mean that it’s not real. Occasionally I’m asked whether I believe in God. Well, God exists. Supernatural beings may not, and the fictional characters featured in religious texts are almost certainly (if taken literally) pure nonsense, but the idea of God has had a huge effect on the world. It cannot be ignored. It’s real. The same is true of Silicon Valley’s style of “entrepreneurship”. Silicon Valley breathes and grows because, every year, an upper class of founders and proto-founders is given a safe, painless path to “entrepreneurial glory” and a much larger working class of delusional engineers is convinced to follow them. It really looks like entrepreneurship.

I should say one thing off the bat: Disneypreneurs are not the same thing as wantrapreneurs. You see more of the second type, especially on the East Coast, and it’s easy to conflate the two, but the socioeconomic distance is vast. The wantrapreneur can talk a big game, but lacks the drive, vision, and focus to ever amount to anything. He’s the sort of person who’s too arrogant to work for someone else, but can’t come up with a convincing reason why anyone should work for him, and doesn’t have the socioeconomic advantages that’d enable him to get away with bullshit. Except in the most egregious bubble times, he wouldn’t successfully raise venture capital, not because VCs are discerning but because the wantrapreneur usually lacks sufficient vision to learn how to do even that. Quite sadly, wantrapreneurs sometimes do find acolytes among the desperate and the clueless. They “network” a lot, sometimes find friends or relatives clueless enough to bankroll them, and produce little. Almost everyone has met at least one. There’s no barrier to entry in becoming a wantrapreneur.

Like wantrapreneurs, Disneypreneurs lack drive, talent, and willingness to sacrifice. The difference is that they still win. All the fucking time. Even when they lose, they win. Evan Spiegel (Snapchat) and Lucas Duplan (Clinkle) are just two examples, but Sean Parker is probably the most impressive. If you peek behind the curtain, he’s never actually succeeded at anything, but he’s a billionaire. They float from one manufactured success to another, building impressive reputations despite adding very little value to anything. They’re given the resources to take big risks and, when they fail, their backers make sure they fail up. Being dropped into a $250,000/year VP role at a more successful portfolio company? That’s the worst-case outcome. Losers get executive positions and EIR gigs, break-evens get acqui-hired into upper-six-figure roles, and winners get made.

One might ask: how does one become a Disneypreneur? I think the answer is clear: if you’re asking, you probably can’t. If you’re under 18, your best bet is to get into Stanford and hope your parents have the cardiac fortitude to see the tuition bill and not keel over. If you’re older, you might try out the (admirably straightforward, and more open to middle-class outsiders than traditional VC) Y Combinator. However, I think that it’s obvious that most people are never going to have the option of Disneypreneurship, and there’s a clear reason for that. Disneypreneurship exists to launder money (and connections, and prestige, and power; but those are highly correlated and usually mutually transferrable) for the upper classes, frank parasitism from inherited wealth being still socially unacceptable. The children of the elites must seem to work under the same rules as everyone else. The undeserving, mean-reverting progeny of the elite must be made to appear like they’ve earned the status and wealth their parents will bequeath upon them.

Elite schools were once intended toward this end. They were a prestige (multiple meanings intended) that appeared, from the outside, to be a meritocracy. However, this capacity was demolished by an often-disparaged instrument, the S.A.T. Sometimes, I’ll hear a knee-jerk leftist complain about the exam’s role in educational inequality, citing (correctly) the ability of professional tutoring (“test prep”, a socially useless service) to improve scores. In reality, the S.A.T. isn’t creating or increasing socioeconomic injustices in terms of access to education; it merely measures some of them. The S.A.T. was invented with liberal intentions, and (in fact) succeeded. After its inception in the 1920s, “too many” Jews were admitted to Ivy League colleges, and much of the “extracurricular” nonsense involved in U.S. college admissions was invented in a reaction to that. Over the following ninety years, there’s been a not-quite-monotonic movement toward meritocracy in college admissions. If I had to guess, college admissions are a lot more meritocratic than 90 years ago (and, if I’m wrong, it’s not because the admissions process is classist but because it’s so noise-ridden, thanks to technology enabling the application of a student to 15-30 colleges; 15 years ago, five applications was considered high). The ability-to-pay factor, however, keeps this meritocracy from being realized. Ties are, observably, broken on merit and there is enough meritocracy in the process to threaten the existing elite. The age in which a shared country-club membership of parent and admissions officer ensured a favorable decision is over. Now that assurance requires a building, which even the elite cannot always afford.

These changes, and the internationalization of the college process, and those pesky leftists who insist on meritocracy and diversity, have left the ruling classes unwilling to trust elite colleges to launder their money. They’ve shifted their focus to the first few years after college: first jobs. However, most of these well-connected parasites don’t know how to work and certainly can’t bear the thought of their children suffering the indignity of actually having to earn anything, so they have to bump their progeny automatically to unaccountable upper-management ranks. The problem is that very few people are going to respect a talentless 22-year-old who pulls family connections to get what he wants, and who gets his own company out of some family-level favor. Only a California software engineer would be clueless enough to follow someone like that– if that person calls himself “a founder”.

Statistics, cooperation, politics, and programming.

Open: a simple dice “game”

Let’s say that you’re playing a one-player “game”, where your payout (score) is determined according to the rolls of 101 dice. One of them is black and 100 are white, and your payoff is $100 times the value on the black die, plus the sum of the values on the 100 white dice. (In RPG terms, that’s d6x100 + 100d6.) The question is: how much more “important” (I’ll define this, more rigorously, below) is the black die, relative to a single one of the white dice?

Most people would say that the black die is 100 times as important; its influence on the payoff is a $500 swing (from $100 to $600) while each of the white dice has a $5 swing ($1 to $6). That would lead us to conclude that the black die is as important as the hundred white dice, all taken together– or that the black die has 50% of the total importance. That’s not true at all. Why not? Let’s do some simulations. Here’s the code (in Clojure).

(defn white-die []
  (inc (rand-int 6)))

(defn black-die []
  (* 100 (inc (rand-int 6))))

(defn play []
  (let [bd-value (black-die)
        wd-value (reduce + 0 (repeatedly 100 white-die))]
    (printf "Black die: %d, White dice: %d, Payoff: %d\n"
            bd-value wd-value (+ bd-value wd-value))))

Here are some results:

user=> (play)
Black die: 100, White dice: 345, Payoff: 445
nil ;; other returns omitted.
user=> (play)
Black die: 600, White dice: 343, Payoff: 943
user=> (play)
Black die: 400, White dice: 352, Payoff: 752
user=> (play)
Black die: 100, White dice: 338, Payoff: 438
user=> (play)
Black die: 300, White dice: 322, Payoff: 622
user=> (play)
Black die: 200, White dice: 345, Payoff: 545
user=> (play)
Black die: 500, White dice: 326, Payoff: 826
user=> (play)
Black die: 300, White dice: 362, Payoff: 662
user=> (play)
Black die: 100, White dice: 353, Payoff: 453
user=> (play)
Black die: 500, White dice: 359, Payoff: 859

The quality of the payoff has a lot more to do with the black die than the white ones. A good payoff (above $700, the mean) seems to occur if and only if the black die roll is good (a 4, 5, or 6), because the sum of the white dice is never far from its mean value of 350. We can formalize this intuition by noting that when independent random variables are added, the variance of their sum is the sum of their variances. The variance of a 6-sided die is 35/12 (2.9166…), so the variance of the 100 white dice, taken together, is 3500/12 = 291.666…, resulting in a standard deviation just slightly over 17. With a hundred dice being summed, we can take their total to be approximately Gaussian: 99 percent of the time, the white dice will come in between $306 and $394. Even if the white dice perform terribly (say, $300), a ‘6’ on the black die is going to ensure a great payoff.

While standard deviation is the more commonly used measure of dispersion, independent random variables cumulate according to their variance (the square of the standard deviation). The variance of the black die is not 100 times, but 10,000 times, that of a single white die. This means that it’s 100 times more influential over the payoff than all of the white dice taken together: it contributes just over 99 percent of the variance in the payoff.
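The arithmetic is quick to verify at the REPL. Here’s a sketch using exact ratios (the names are mine, purely illustrative):

```clojure
;; Variance bookkeeping for the dice game, in exact ratios.
;; A fair d6 has variance 35/12. Scaling one die by c multiplies its
;; variance by c^2; summing n independent dice multiplies it by n.
(def d6-variance (/ 35 12))

(def black-variance (* 100 100 d6-variance)) ; one die scaled by 100 => 10,000x
(def white-variance (* 100 d6-variance))     ; 100 independent dice  => 100x

;; Standard deviation of the white-dice total:
(Math/sqrt (double white-variance))
;; => ~17.08

;; Share of total payoff variance carried by the black die:
(double (/ black-variance (+ black-variance white-variance)))
;; => ~0.9901 -- just over 99 percent
```

Note that the variance share reduces exactly to 10,000/10,100 = 100/101, regardless of the underlying die’s variance.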


What does this have to do with human behavior and cooperation? Well, consider voting. Some people complain about the supposed “disenfranchisement” of voters in large, non-swing states such as California (reliably blue) and Texas (reliably red) under the electoral college system. When the blocs are predictable, that can be true. However, being part of a voting bloc will, in general, magnify one’s voting power, just as the black die in the example above dominates the payoff to the point where the white dice hardly matter.

If fifteen people agree to vote the same way, they’ve increased their collective voting power (variance) to 225 times that of an individual, meaning that each one becomes 15 times more powerful. Let’s go a step further. Say there are 29 people in a voting body, and that a simple majority is all that’s required to pass a measure. If fifteen of those agree to hold a private vote, and then vote as a bloc based on the result of that vote, the other fourteen people’s votes don’t matter at all, so each bloc member becomes approximately twice as individually powerful. This can further be corrupted by creating nested blocs. Eight of those people could break off, hold their own private vote, and become a bloc-within-the-bloc. Of course, secrecy is required; otherwise the out-crowd of that bloc might defect. At least in theory, nothing stops a group of five people within that eight from forming a third-level bloc, and so on. This could devolve into an almost dictatorial situation where two people determine the entire vote. It isn’t always long-term stable, of course; disenfranchised people within blocs will (over time) leave, possibly joining other blocs.
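The 29-voter case is easy to check by brute force, in the spirit of the dice code above. A sketch (the function names are mine, not from any library):

```clojure
;; 29 voters, each voting +1 or -1; simple majority wins (an odd count
;; means no ties). Fifteen voters first hold a private vote, then all
;; fifteen cast whatever their internal majority chose.
(defn bloc-choice [private-votes]        ; internal majority of the fifteen
  (if (pos? (reduce + private-votes)) 1 -1))

(defn outcome [private-votes free-votes] ; 15 bloc members, 14 outsiders
  (let [choice (bloc-choice private-votes)
        total  (+ (* 15 choice) (reduce + free-votes))]
    (if (pos? total) 1 -1)))

(defn random-votes [n]
  (vec (repeatedly n #(if (< (rand) 0.5) 1 -1))))

;; The 14 outsiders sum to at most +/-14, which can never outweigh the
;; bloc's 15 identical votes -- the bloc's internal majority always wins:
(every? (fn [_]
          (let [pv (random-votes 15)
                fv (random-votes 14)]
            (= (outcome pv fv) (bloc-choice pv))))
        (range 10000))
;; => true
```

Even in the worst case for the bloc– all fourteen outsiders voting against it– the measure goes the bloc’s way, 15 to 14.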

One should be able to see, by now, why something like a two-party political system is so common in government. Coalitions form because joining a bloc magnifies the individual’s statistical power (percentage of the variance). The process seems to continue until there are two coalitions in the 45 to 50 percent range. What limits it is that, as coalitions grow, they become more predictable and less nimble; once they are predictable, unaffiliated (“swing”) voters have substantially more power than the principles above would suggest: while the variance potential of a bloc grows as the square of its size, highly predictable blocs have very little actual variance. In other words, equilibrium happens when the (quadratically growing) bulk power of blocs is offset by the declining true variance inherent to their predictability, leaving the few swing players as individually powerful (being unpredictable) as they would be as members of a bloc.

Economics and work

Bloc power is a major reason why collective bargaining (unionization) is such a big deal. The Brownian motion of individually unimportant workers joining and leaving a company has a minimal effect on the business. There will be good days and bad for the company due to these small fluctuations but, on the whole, an individual’s vote (whether to work or quit) is meaningless amid the noise. The low-level worker has no real vote. Collective bargaining, on the other hand, can be powerful: a large group voting against its management (a strike) at the same time can have a real impact.

The past two hundred years have proven that, without some variety of collective action, workers (even highly skilled ones) are unlikely to get a fair deal. It doesn’t matter how smart, how capable, or even how necessary they are if their votes don’t matter. Three approaches have been used to solve this problem (aside from a fourth, beloved by some of the wealthy, which is not to solve it). The first is to form a union. As much as there is a problem of corruption within unions, I don’t think any reasonable person can review history and conclude that they were unnecessary. The second is to form a profession, which is essentially a reputation-management organization that (a) keeps the individual member’s credibility high enough to keep that person employable, so he or she can challenge management, since professions impose ethical obligations that supersede managerial authority (i.e. there’s no Nuremberg Defense); while (b) occasionally leveraging its role as a reputation bank to push, as a bloc, for specific causes. The third approach is a welfare state, which does not confer bloc-like power on low-level producers (i.e. workers) but (a) gives them power as consumers and, more importantly, (b) gives individual producers the ability to refuse adverse terms of work.

These form a spectrum of solutions, with unions being the most political (an explicit bloc forms, subverting to some extent the Brownian tug-of-war that occurs in free markets and elections) and the welfare state the most apolitical (it does not tell capitalists how to run their companies), even as it pushes a universal sea change in the market– improved leverage for workers in all industries, liberated from a month-by-month need for work income. Professions, as it were, exist between these two extremes; they are not as explicitly political or bloc-like as unions, but their ability to prevent the professional’s credibility from falling to zero– even if fired by one company’s management, he’s still a member of that profession unless disbarred for ethical reasons, and will usually find new work easily– has them functioning like a private, conditional welfare state.

I’m not going to argue, among the solutions above, that any is superior to the others, or that one of those three should be favored uniformly. In fact, they don’t even conflict; societies tend to have all three of the above in some form. They seem to serve different purposes, also spanning a spectrum from local to global, like so:

  • The union exists to guarantee, as much as it can, employment at a specific company (local) on favorable terms for good-faith workers. It often wrests from management the authority to terminate people. Its downside is that, because it is an explicitly political organization, it often invents by-laws (seniority systems being the most abhorred) that reduce performance. The extreme guarantees against adverse change that unions often provide may result in a low quality of work, eroding the union’s clout in the long run. Unions are, however, the best solution when there is a small number of potential employers (oligopsony).
  • The profession exists to provide credibility (reputation) sufficient to guarantee appropriate employment, but not at a specific employer. The profession doesn’t interfere with individual terminations or promotions, nor does it often tell employers how to behave; its goal is to provide appropriate results for all good-faith members without managing a specific employer. This is more global than the union, because professionals may have to move to different companies or geographic locations to take advantage of the profession’s auspices, but more local than a welfare state because it focuses on a specific class of workers. Professions work well when there is a large and changing set of potential employers, but over a fairly fixed scope of work.
  • The welfare state (a global solution, as it involves a definition of social justice that a central government attempts to enforce uniformly) doesn’t guarantee market employment at all. It does, however, attempt to create an economic floor below which people cannot fall. Even if they lose power as producers (because the market may not want anything they can make) they retain some power as consumers. The moral purpose of this is two-fold. First, unneeded workers can retrain and become viable producers. Second, the welfare state’s existence gives workers enough leverage that they stand a chance at getting a fair deal– without necessarily having to form collectives in order to do it. Welfare states do the best job at the large-scale, society-wide problems; for example, they can provide education and training for those who have not yet entered a union or profession.

What’s most relevant to all this, however, is that collective action is as relevant today as it was 100 years ago. There are a lot of people who claim, for example, that labor unions “were good in their time, but have served their purpose”. I don’t think that’s true. There are, of course, many problems with existing labor unions and with the professions, but the statistical politics underlying their formation is still quite relevant.


Software engineers in particular are a group of people who’ve never fully decided whether they want to be blue-collar (making unionization a relevant strategy) or white-collar (necessitating a profession). It’s not clear to me that either of these approaches, as commonly imagined, will do what we need in order to get programmers fairly paid and their work reasonably evaluated. I would argue, however, that the existing culture of free agency seems to be leading nowhere. Software and hardware engineers, in addition to designers and operational people, need to develop a common tribal identity as makers. Otherwise, management will continue to run divide-and-conquer strategies against them that leave them with the worst of both the blue-collar and white-collar worlds: the low autonomy and job security of an hourly wage worker, but the unreasonable expectations and long hours associated with salarymen.

The needs of the most creative and effective technology workers should be given consideration; maker culture is becoming a real thing, and the societies and organizations that prosper in the next fifty years will be those that find a way to contend with it. Thanks to personal computing, the internet, and quite likely 3D printing, we’re coming into an era in which the zero-sum approach to resources that has existed for thousands of years no longer makes sense. Copying a book used to be a painstaking, miserable process. (The reason for the beautiful calligraphy and illustrations in hand-copied medieval books is that the work would be intolerable without some room for creative flourish.) Now it’s a Unix command that takes less than a second. Information scarcity is rapidly ending. Of more interest is the culture– maker culture– that has sprung up around that fact, starting in the open source world, which is structurally cooperative.

Maker culture is centered on the positive-sum worldview that makes sense in such a world. Makers tend to no longer see each other as competitors amid existing scarcity; rather, the greater war is against scarcity itself.

Good programmers no longer buy into traditional industrial competition. They’d rather work on open source projects that improve the world (and their own individual reputations) than line the corporate war chest, because the benefits of tapping into the larger society (open source economy) are much greater, not only for them but often also for their employers, than those of restricting themselves to one corporate silo. They’ll work on closed-source “secret sauce” projects in a somewhat privileged (“ninja”) position, but not in the commoditized role associated with the “code monkey” appellation. Those jobs, as portrayed less than affectionately in the movie Office Space, are going to die out.

In twenty years, top maker talent will no longer be employed so much as it is sponsored. This will be good for the world, as it will generate a much more cooperative economy than what existed before it, but a large number of organizations will find themselves unable to adapt and will fail.

What Ayn Rand got right and wrong

Ayn Rand is a polarizing figure, and it should be pretty clear that I’m not her biggest fan. I find her views on gender repulsive and her metaphysics laughable. I tend to be on the economic left; she heads to the far right. She and I have one crucial thing in common– extreme political passions rooted in emotionally damaging battles with militant mediocrity– but our conclusions are very different. Her nemesis was authoritarian leftism; mine is corporate capitalism. Of course, an evolved mind in 2013 will recognize that, while both of these forces are evil, there isn’t an either/or dichotomy between them. We don’t need authoritarian leftism or corporate capitalism, and both deserve to be rejected out of hand.

What did Rand get right?

As much as I dislike Ayn Rand’s worldview, it’s hard to say that it isn’t a charismatic one, which explains her legions of acolytes. There are a few things she got right, and in a way that few people had the courage to espouse. Namely, she depicted authoritarianism as a process through which the weak (which she likened to vermin) gang up on, and destroy, the strong. She understood the fundamental human problem of her (and our) time: militant mediocrity.

Parasitism, in my view, isn’t such a bad thing. (I probably disagree with Rand on that.) After all, each of us spends nine months as a literal biological parasite. I am actually perfectly fine with much of humanity persisting in a “parasitic” lifestyle wherein they receive more sustenance from society than they would earn on the market. It’s a small cost to society, and the long-term benefits (especially including the ability for some people to escape parasitism and become productive) outweigh it. What angers me is when the parasites on the opposite end (the high one) of the socioeconomic spectrum behave as if their fortune and social connections entitle them to tell their intellectual superiors (most viscerally, when that intellectual superior is me) what to do.

Rand’s view was harsh and far from democratic. She conceived of humanity consisting of a small set of “people of the mind” and a much larger set of parasitic mediocrities. In her mind, there was no distinction between (a) average people, who stand out neither in accomplishment nor in militancy, and (b) the aggressive, anti-intellectual, and authoritarian true parasites against which society must continually defend itself. That was strike one: it just seemed bitchy and mean-spirited to decry the majority of humanity as worthless. (I can’t stand with her on that, either. We’re all mediocre most of the time; it’s militant mediocrity that’s our adversary.) Yet most good ideas seem radical when first voiced, and their proponents are invariably first attacked for their tone and attitude rather than substance, a dynamic that means “bitchiness” is often positively correlated with quality of ideas. I think much of why Rand’s philosophy caught on is that it was so socially unacceptable in the era of the American Middle Class; and intellectuals understand all too well that great ideas often begin as rejected ones.

To understand Ayn Rand further, keep in mind the context of the time during which she rose to fame: the American post-war period. Even the good kinds of greed were socially unacceptable. So a lot of people found her “new elitism” (which was a dressing-up of the old kind) to be refreshing and– in a world that tried to make reality look very different from what it was (see: 1950s television)– honest. By 1980, there was a strong current of opinion that inclusive capitalism and corporate paternalism had failed, and elitism became sexy again.

Where was the value in this very ugly (but charismatic) philosophy? I’d say that there are a few things Ayn Rand got completely right, as proven by experience at the forefront of software technology:

  1. Most progress comes from a small set of people. Pareto’s “80/20” is far too generous. It’s more like 80/3. In programming, we call this the “10x” effect, because good programmers are 10 times as effective as average ones (and the top software engineers are 10 times as effective as the merely-good ones like me). In the specific case of software, it’s pretty clear that 10x is not driven by talent alone. That’s a factor, but a small one. More relevant are work ethic, experience, project/person fit, and team synergies. There isn’t a “10x programmer” gene out there; a number of things come into play. It’s not always the same people who are “10x-ers”, and this “10x” superiority is far from intrinsic to the person, having as much to do with circumstance. That said, at the forefront there are 10x differences in effectiveness all over the place.
  2. Humanity is plagued by authoritarian mediocrity. If you excel, you become a target. It is not true that the entire rest of humanity will despise you for being exceptionally intelligent, creative, industrious, or effective. In fact, many people will support you. However, there are some (especially in positions of power, who must maintain them) who harbor jealous hatred, and they tend to focus on a small number of people. In authoritarian leftism, they attack those who have economic success. In corporate capitalism, they attack their intellectual superiors.
  3. Social consensus is often driven by the mediocre. The excellent have a tendency to do first and sell later. Left to their own devices, they’d rather build something great and seek forgiveness than try to get permission, which will never come if sought at the front door. The mediocre, on the other hand, generate no new ideas and therefore have never felt that irresistible desire to take that kind of social risk. They quickly learn a different set of skills: how to figure out who’s influential and who’s ignored, what the influential people want, and how to make their own self-serving conceptions (which are never far-fetched, being only designed to advance the proponent, because there is otherwise no idea in them) seem like the objective common consensus.

A bit of context

Ayn Rand’s view of authoritarian leftism was spot-on. Much of that movement’s brutality was rooted in a jealous hatred that we know as militant mediocrity. Its failure to become anything like true communism (or even successful leftism) proved this. Militant mediocrity is blindly leftist when poor and out-of-power and rabidly conservative when rich and established. Of course, in the Soviet case, it never became “rich” so much as it made everyone poor. This enabled it to keep a leftish veneer even as it became reactionary.

Rand’s experiences with toxic leftism were so damaging that when she came to the United States, she continued to advance her philosophy of extreme egoism. This dovetailed with the story of the American social elite. Circa 1960, they felt themselves to be a humiliated set of people. Before 1930, they lived in elaborate mansions and lived opulent, sophisticated lifestyles. After the Great Depression, which they caused, they fell into fear and reservation; that is why, to this day, the “old money” rich prefer to live in houses not visible from the road. They remained quite wealthy but, socially, they retreated. They were no longer the darlings at the ball, because there was no ball. It wasn’t until their grandchildren’s generation came forward that they had the audacity to reassert themselves.

While this society’s parasitic elite was in social exile, paternalistic, pay-it-forward capitalism (“Theory Y”) replaced the old, meaner industrial elite, and the existing upper class found themselves increasingly de-fanged as the social distance between them and the rising middle class shrank. It was around 1980 that they began to fight back with a force that society couldn’t ignore. The failed, impractical Boomer revolutions of the late 1960s were met, about 10 to 15 years later, with a far more effective “yuppie” counterrevolution that won. Randism became its guiding philosophy. And, boy, did it prove to be wrong about many things.

What did Rand get wrong?

Ayn Rand died in 1982, before she was able to see any of her ideas in implementation. Her vision was of the individual capitalist as heroic and excellent. What we got, instead, was these guys.

Ayn Rand interpreted capitalism using a nostalgic view of industrial capitalism, when it was already well into its decline. The alpha-male she imagined running a large industrial operation no longer existed; the frontier had closed, and the easy wins available to risk-seeking but rational egoists (as opposed to social-climbing bureaucrats) had already been taken. The world was in full transition to corporate capitalism, which has been taking on an increasingly collectivist character for the past forty years.

Corporatism turns out to combine the worst of both systems, capitalism and socialism. Transportation, in 2013, is a perfect microcosm of this. Ticket prices are volatile and fare-setting strategies are clearly exploitative– the worst of capitalism– while the service rendered is of the quality you might expect from a disengaged socialist bureaucracy; flying an airplane today is certainly not the experience one would get from a triumphant capitalistic enterprise.

Suburbia also has a “worst of both worlds” flavor, but of a more vicious nature, being more obvious in how it merges two formerly separate patterns of life to benefit one class of people and harm another. By the peak of U.S. suburbanization, almost everyone (rich and poor) lived in a suburb, and this was deemed the essence of middle-class life. Suburbia is well-understood as a combination of urban and rural life– an opportunity for people to hold high-paying urban jobs, but live in more spacious rural settings. What’s missed is that, for the rich, it combines the best of both lifestyles– it gives them social access, but protects them from urban life’s negatives; for the poor, it holds the worst of both– urban crime and violence, rural isolation.

This brings us directly to the true nature of corporate capitalism. It’s not really about “making money”. Old-style industrial capitalism was about the multiplication of resources (conveniently measured in dollar amounts). New-style corporate capitalism is about social relationships (many of those being overtly extortive) and “connections”. It’s about providing the best of two systems– capitalism and socialism– for a well-connected elite. They get the outsized profit opportunities (“performance” bonuses during favorable market trends that should more honestly be appreciated as luck) of capitalism, but the cushy assured favoritism and placement (acq-hires and “entrepreneur-in-residence” gigs) of socialism. Everyone else is stuck with the worst of both systems: a rigged and conformist corporate capitalism that will gladly punish them for failure, but that will retard their successes via its continual demands for social permission.

What’s ultimately fatal to Rand’s ideology– and she did not live long enough to see it play out this way– was the fact that the entrepreneurial alpha males she was so in love with (and who probably never existed, in the form she imagined) never came back. In the 1980s, the world was sold to emasculated, influence-peddling, social-climbing private-sector bureaucrats, and not heroic industrialists. Whoops!

What we now have is a world that claims to be (and is) capitalistic, but is run by the sorts of parasitic, denial-focused, militantly mediocre position-holders that Rand railed against. This establishes her ideology as a failed one, and the elitism-is-cool-again “yuppie” counterrevolution of the 1980s has thus been shown to be just as impractical and vacuous as the 1960s “hippie” movement and the authoritarian leftism of the “Weathermen”. Unfortunately, it was a far more effective– and, thus, more damaging– one, and we’ll probably be spending the next 15 years cleaning up its messes.

Why an Atlas Shrugged-style smart-people strike would never work.

I’m not a major fan of Ayn Rand, but one of the more attractive ideas coming out of her work is from Atlas Shrugged, written about a world in which the “people of the mind”– business leaders, artists, philosophers– go on strike. It’s an attractive idea. What would happen if those of us in the “cognitive 1 percent” decided, as a bloc, to secede from the mediocrity of Corporate America? Would we finally get our due? Would we stop having to answer to idiots? Would the dumb-dumbs come crawling to us, begging that we return?

No. That would never happen. They have as much pride as we do.

It’s an appealing concept, for sure. Individually, not one of us is indispensable to society– that’s not a personal statement; no one person is that important. Any one of us could be cast into the flames with little cost to society. Yet we tend to feel like, as a group, we are critical. We’re right. I am insignificant, but societies live or die based on what proportion of the few thousand people like me per generation get their ideas into implementation, and it’s only after the fact that one knows which side of the critical percentage a society is on. Atlas could shrug. Society could be brought to its knees if the most intelligent people developed a tribal identity, acted as a political bloc, were still ignored, and chose to secede. Science and the arts would stagnate, the economy would fall into decline, and society would be unable to correct for its own morale problems. The culture would crater, innovation would die, and whatever society endured such a “strike” would quickly fall to third-class status on the world stage.

That doesn’t mean we, the smart people who might threaten such a strike, would get whatever we want. Imagine trying to extort a masochist. “I’ll beat you up unless you give me $100.” “You mean I can not give you $100 and get beaten up? For free? I’ll take that option; you’re so kind.”

I don’t mean to call society masochistic, because it isn’t so. Societies don’t make choices. People in them do, often with minimal or no concern for the upkeep of this edifice we call “civilization”. Now, the people at the top of ours (Corporatist America) are stupid, short-sighted, uncultured, and defective human beings. All of that is true. To assess them as weak because of this is inaccurate. I’m pretty sure that crocodiles don’t crack 25 on an IQ test, but I wouldn’t want to be in a physical fight with one. These people are ruthless and competitive and they’re very good at what they do– which is to acquire and hold position, even if it requires charming people (including people like us, much smarter than they are) to get it. They’d also rather reign in hell than serve in heaven. That’s why we’ll never be able to pull an Atlas Shrugged move against them. They care far more about their relative standing in society than its absolute level of health. We’d be giving them exactly what they want: less competition to hold the high social status they currently have.

Also, I think that an Atlas Shrugged phenomenon is already happening in American society, with so little fanfare as to render it comically underwhelming. Smart people all over the country are underperforming, mostly not by choice, but because they are not getting opportunities to excel. Scientists spend an increasing amount of time applying for grants and lobbying their bosses for the autonomy that they had, implicitly, a generation ago. The quality of our arts has suffered substantially. Our political climate is disastrous and right-wing because a lot of intelligent people have just given up. Has the elite looked at the slow decline of society and said, “Man, we really need to treat those smart people better, and hand our plum positions over to those who actually deserve them”? Nope. That has not happened; it would be absurd to think of it, as the current elite has too much pride. And if we scale that up from unintentional, situational underperformance to a full-fledged strike of the cognitive elite, we will simply be ignored. We won’t bring society to a calamitous break and get our due. We’ll see slow decay, and the only people smart enough to make the connection between our strike and that degradation will be the strikers themselves. We already have a pervasively mediocre society and things still work– not well, but we haven’t seen catastrophic society-wide failures yet. It might get to that point, but by then it’ll be too late for the kind of action we might want.

In sum…

  • fantasy: the cognitive elite could go on “strike” and the existing elite (corporate upper class, tied together by social connections rather than anything related to excellence) would, after society fell to pieces, beg us to rejoin on our terms, inverting the power dynamic that currently exists between us and them.
  • reality: those parasitic fuckers don’t give a shit about the broad-based health of society. We’re not exactly a real competitive threat to them because they hold most of the power, but we do have some power and we’d just be making their lives easier if we withdrew from the world and gave that power up entirely.

As intellectuals, or at least as people who aspire to be such, we look at civilizational decline as tragic and painful. When we learn about expansive civilizations that fell into decadence and ruin, we tend to imagine the fall as a personal death that’s directly experienced, rather than a gradual historical change that few people notice next to day-to-day struggles of greater personal importance. So we often delude ourselves into thinking that “society” has its own will and makes “choices” according to its own interests, as opposed to the parochial interests of whatever idiots happen to be running it at the time. Thus, we believe that if “society” refuses to listen to our ideas and place us in appropriately high positions, we can withdraw as a bloc, render it ineffective, and impel it to “come crawling back” to us with better terms. We’re dead wrong in believing that this is a possibility. Yes, we can render it ineffective through underperformance (hell, it’s already arguably at that point, just based on the pervasive conformity and mediocrity that have declawed most of us) but the reorganization that we seek will never happen. We tend to overestimate the moral character– while underestimating the competitive capability (again: think crocodiles)– of our enemies. They are all about their own egos, and they will gladly let society burn just to stay on top.

One concrete example of this is in software engineering, where the culture is mostly one of anti-intellectualism and mediocrity. Why is it this way? Given that an elite programmer can be 10 to 100 times as effective as a mediocre code monkey, why do companies tailor their environments to the mass hiring of unskilled “commodity” developers? Bad programmers are not cheap; they’re hilariously expensive. So what’s going on? The answer is that most managers don’t care about the good of the company. It’s their own egos they want to protect. A merely good programmer costs only 25 percent more than a mediocre one, but may be 5 times as effective. Why not hire the good one, then? Because then the manager loses his real motivation for going to work: being the smartest guy in the room, and the unambiguous alpha male. Saving the company some money is not, to most managers, worth that price.
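The arithmetic behind that claim is worth making explicit. The sketch below uses the essay’s own figures (25 percent salary premium, 5x effectiveness); the dollar amounts are hypothetical placeholders, chosen only to make the ratio concrete.

```python
# Cost-per-unit-of-work comparison, using the essay's assumed multipliers.
# Salary figures are illustrative, not data.
mediocre_salary = 100_000      # baseline annual cost
good_salary = 125_000          # "costs only 25 percent more"

mediocre_output = 1.0          # normalized units of work per year
good_output = 5.0              # "5 times as effective"

cost_per_unit_mediocre = mediocre_salary / mediocre_output   # 100,000 per unit
cost_per_unit_good = good_salary / good_output               # 25,000 per unit

# The mediocre hire costs 4x as much per unit of work delivered.
print(cost_per_unit_mediocre / cost_per_unit_good)  # 4.0
```

Under these assumptions, the “cheap” commodity developer is four times as expensive per unit of output– which is the sense in which bad programmers are hilariously expensive.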

What we fail to realize, as the cognitive 1 percent, is that while society abstractly relies on us, the people running society think we’re huge pains in the ass and would be thrilled not to have to deal with us at all.

Do I believe that it’s time for the cognitive 1 percent to mobilize, and to take back our rightful control over society’s direction? Absolutely. In fact, I think it’s a moral responsibility, because the world is facing some problems (such as climate change) too complex for the existing elite to solve. The incapacity and mediocrity of our current corporate elite are literally an existential risk to humanity. We ought to assert ourselves, as a group, and start fixing the world. But the Atlas Shrugged model is the wrong way to go about it.

The U.S. upper class: Soviet blatnoys in capitalist drag.

One thing quickly learned when studying tyranny (and lesser, more gradual failures of states and societies, such as those observed in the contemporary United States) is that the ideological leanings of tyrants are largely superficial. These are stances taken to win popular support, not sincere moral positions. Beneath the veneer, tyrants are essentially the same, whether fascist, communist, religious, or centrist in nature. Supposedly “right-wing” fascists and Nazis would readily deploy “socialist” innovations such as large public works projects and social welfare programs if it kept society stable in a way they preferred, while the supposedly “communist” elites of the Soviet Union and China were self-protecting, deeply anti-populist, and brutal– not egalitarian or sincerely socialist in the least. The U.S. upper class is a different beast from these and, thus far, less malevolent than the communist or fascist elites (although if they go unchecked, this will change). It probably shares the most with the French aristocracy of the late 18th century, being slightly right-of-center and half-hearted in its authoritarianism, but deeply negligent and self-indulgent. For a more recent comparison, I’m going to point out an obvious and increasing similarity between the “boardroom elite” (individuals who receive high positions in established corporations despite no evidence of high talent or hard work) and an unlikely companion: the elite of the Soviet Union.

Consider the Soviet Union. Did political and economic elites disappear when “business” was made illegal? No, not at all. Did the failings of large human organizations suddenly have less of a pernicious effect on human life? No; the opposite occurred. What was outlawed, effectively, was not the corporation (corporate power simply migrated into the government) but small-scale entrepreneurship– a necessary social function. Certainly, elitism and favoritism didn’t go away. Instead, money (which was subject to tight controls) faded in importance in favor of blat, an intangible social commodity encompassing social connection as well as the peddling of influence and favors. With the money economy hamstrung by capitalism’s illegality, blat became a medium of exchange and a mechanism of bribery. People who were successful at accumulating and using these social resources were called blatnoys. The blatnoy elite drove their society into corruption and, ultimately, failure. But… that’s irrelevant to American capitalism, right?

Well, no. Sadly, corporate capitalism is not run by “entrepreneurs” in any sense of the word. Being an entrepreneur is about putting capital at risk to achieve a profit. Someone who gets into an elite college because a Senator owes his parents a favor, spends four years in investment banking getting the best projects because of family contacts, gets into a top business school because his uncle knows disgusting secrets about the dean of admissions, and then is hired into a high position in a smooth-running corporation or private equity firm, is not an entrepreneur. Anything but. That’s a glorified private-sector bureaucrat at best and, at worst, a brazen, parasitic trader of illicit social resources.

There are almost no entrepreneurs in the American upper class. This claim may sound bizarre, but first we must define terms– namely, “upper class”. Rich people are not automatically upper class. Steve Jobs was a billionaire but never entered the upper class; he remained middle-class (in social position, not wealth) his entire life. His children, if they want to enter its lower tier, have a shot. Bill Gates is lower-upper class at best, and has worked very hard to get there. Money alone won’t buy it, and entrepreneurship is (by the standards of the upper class) the least respectable way to acquire wealth. Upper class is about social connections, not wealth or income. It’s important to note that being in the upper class does not require a high income or net worth; it does, however, require the ability to secure a position of high income reliably, because the upper-class lifestyle requires (at a minimum) $300,000 after tax, per person, per year.

The wealth of the upper class follows from social connection, not the other way around. Americans frequently make the mistake of believing (especially when misled on issues related to taxation and social justice) that members of the upper class who earn seven- and eight-figure incomes are scaled-up versions of the $400,000-per-year, upper-middle-class neurosurgeon who has been working intensely since age 5. That’s not the case. The hard-working neurosurgeon and the well-connected parasite are diametric opposites, in fact. They have nothing in common and could not stand to be in the same room together, because their values are too much at odds. The upper class views hard work as risky and therefore a bit undignified. It perpetuates itself because there is a huge amount of excess wealth that has congealed at the apex of society, and it’s relatively easy to exchange money and blat on an informal but immensely pernicious market.

Consider the fine art of politician bribery. The cash-for-votes scenario, as depicted in the movies, is actually very rare. The Bush family did have their “100k club” when campaign contributions were limited to $1000 per person, but entering that set required arranging for 100 people to donate the maximum amount. Social effort was required to curry favor, not merely a suitcase full of cash. Moreover, walking into even the most corrupt politician’s office today and offering to exchange $100,000 in cash for a vote would earn a nasty reception. Most scumbags don’t realize that they’re scumbags, and to make a bribe that overt is to call a politician a scumbag. Instead, politicians must be bribed in more subtle ways. Want to own a politician? Throw a party every year in Aspen. Invite up-and-coming journalists just dying to get “sources”. Then invite a few private-equity partners so the politician has a million-dollar “consulting” sinecure waiting if the voters wise up and fire his pasty ass. Invite deans of admissions from elite colleges if he has school-age children. This is an effective strategy for owning (eventually) nearly all of America’s decision makers; but it’s hard to pull off if you don’t already own any of them. What I’ve described is the process of earning interest on blat and, if it’s done correctly and without scruples, the accrual can occur rapidly– for people with enough blat to play.

Why is such “blat bribery” so common? It makes sense in the context of the mediocrity of American society. Despite the image of upper management in large corporations as “entrepreneurial”, they’re not entrepreneurs at all. They’re not the excellent, the daring, the smartest, or the driven. They’re successful social climbers; that’s all. The dismal and probably terminal mediocrity of American society is a direct result of the fact that (outside of some technological sectors) it is incapable of choosing leaders on merit, so decisions of leadership often come down to who holds the most blat. Those who thrive in corporate so-called capitalism are not entrepreneurs but the “beetle-like” men who flourished in the dystopia of George Orwell’s 1984.

Speaking of this, what is corporate “capitalism”? It’s neither capitalism nor socialism, but a clever mechanism employed by a parasitic, socially closed but internally connected elite to provide the worst of both systems for everyone else (the fall-flat risk and pain of capitalism, the mediocrity and bureaucratic sclerosis of socialism) while reserving the best of both (the enormous rewards of capitalism, the cushy safety of socialism) for themselves.

These well-fed, lily-livered, intellectually mediocre blatnoys aren’t capitalists or socialists. They’re certainly not entrepreneurs. Why, then, do they adopt the language and image of alpha-male capitalist caricatures more brazen than even Ayn Rand would write? It’s because entrepreneurship is a middle-class virtue. The middle class of the United States (for understandable reasons) still has a lot of faith in capitalism. The upper class knows it has to seem deserving of its parasitic hyperconsumption, and to present the image of success as perceived by the populace at large. Corporate boardrooms provide the trappings required for this. If the middle class were to suddenly swing toward communism, these boardroom blatnoys would be wearing red almost immediately.

Sadly, when one views the social and economic elite of the United States, one sees blatnoys quite clearly if one knows where to look. Fascists, communists, and the elites of corporate capitalism may have different stated ideologies, but (just as Stephen King said that The Stand’s villain, Randall Flagg, can accurately represent any tyrant) they’re all basically the same guy.

Criminal Injustice: The Bully Fallacy

As a society, we get criminal justice wrong. We have an enormous number of people in U.S. prisons, often for crimes (such as nonviolent drug offenses) that don’t merit long-term imprisonment at all. Recidivism is shockingly high as well. On the face of it, it seems obvious that imprisonment shouldn’t work. Imprisonment is a very negative experience, and a felony conviction has long-term consequences for people who are already economically marginal. The punishment is rarely appropriately matched to the crime, as seen in the (racially charged) discrepancies in severity of punishment for possession of crack vs. cocaine. What’s going on? Why are we doing this? Why are the punishments inflicted on those who fail in society often so severe?

I’ll ignore the more nefarious but low-frequency ills behind our heavy-handed justice system, such as racism and disproportionate fear. Instead, I want to focus on a more fundamental question. Why do average people, with no ill intentions, believe that negative experiences are the best medicine for criminals, despite the overwhelming evidence that most people behave worse after negative experiences? I believe that there is a simple reason for this. The model that most people have for the criminal is one we’ve seen over and over: The Bully.

A topic of debate in the psychological community is whether bullies suffer from low or high self-esteem. Are they vicious because they’re miserable, or because they’re intensely arrogant to the point of psychopathy? The answer is both: there are low-self-esteem bullies and high-self-esteem bullies, and they have somewhat different profiles. Which is more common? To answer this, it’s important to make a distinction. With physical bullies, usually boys who inflict pain on people because they’ve had it done to themselves, I’d readily believe that low self-esteem is more common. Most physical bullies are exposed to physical violence either by a bigger bully or by an abusive parent. Also, physical violence is one of the most self-damaging and risky forms of bullying there is. Choosing the wrong target can put the bully in the hospital, and the consequences of being caught are severe. Most physical bullies are, on account of their coarse and risky means of expression, in the social bottom 20 percent of the class of bullies.

On the whole, and especially when one includes adults in the set, most bullies are social bullies. Social bullies include “mean girls”, office politickers, those who commit sexual harassment, and gossips who use the threat of social exclusion to get their way. Social bullies may occasionally use threats of physical violence, usually by proxy (e.g. a threat of attack by a sibling, romantic partner, or group) but their threats generally involve the deployment of social resources to inflict humiliation or adversity on other people. In the adult world, almost all of the big-ticket bullies are social bullies.

Physical bullies are split between low- and high-self-esteem bullies. Social bullies, the only kind that most people meet in adult life, are almost always high-self-esteem bullies, and often get quite far before they are exposed and brought down. Some are earning millions of dollars per year, as successful contenders in corporate competition. Low-self-esteem bullies tend to be pitied by those who understand them, which is why most of us don’t have any desire to hunt down the low-self-esteem bullies who bothered us as children. It’s high-self-esteem bullies that gall people the most. High-self-esteem bullies never show remorse, are often excellent at concealing the damage they do– even to the point of bringing the consequences of their actions down on the bullied instead of themselves– and generally become more effective as they get older. It’s easy to detest them; it would be unusual not to.

How is the high self-esteem bully relevant to criminal justice? At risk of being harsh, I’ll assert what most people feel regarding criminals in general, because for high-self-esteem bullies it’s actually true: the best medicine for a high self-esteem bully is an intensely negative and humiliating experience, one that associates undesirable and harmful behaviors with negative outcomes. This makes high-self-esteem bullies different from the rest of humanity. They are about 3 percent of the population, and they are improved by negative, humiliating experiences. The other 97 percent are, instead, made worse (more erratic, less capable of socially desirable behavior) by negative experiences.

The most arrogant people only respond to direct punishment, because nothing else (reward or punishment) can matter to them, coming from people who “don’t matter” in their minds. Rehabilitation is not an option, because such people would rather create the appearance of improvement (and become better at getting away with negative actions) than actually improve themselves. The only way to “matter” to such a person is to defeat him. If the high-self-esteem bully’s negative experiences are paralyzing, all the better.

Before going further, it’s important to say that I’m not advocating a massive release of extreme punishment on the bullies of the world. I’m not saying we should make a concerted effort to punish them all so severely as to paralyze them. There are a few problems with that. First, it’s extremely difficult to distinguish, on an individual basis, a high-self-esteem bully from a low-self-esteem one, and inflicting severe harm on the latter kind will make him worse. Humiliating a high-self-esteem bully punctures his narcissism and hamstrings him, but doing so to a low-self-esteem bully accelerates his self-destructive addiction to pain (for self and others) and leads to erratic, more dangerous behaviors. What comes to mind is the behavior of Carl in Fargo: he begins the film as a “nice guy” criminal but, after being savagely beaten by Shep Proudfoot, he becomes capable of murder. In practice, it’s important to know which kind of bully one is dealing with before deciding whether the best response is rehabilitation (for the low-self-esteem bully) or humiliation (for the high-self-esteem bully). Second, if bullying were associated with extreme punishments, the people attracted to positions with the power to affix the “bully” label would be, in reality, the worst bullies (i.e. a witch hunt). That high-self-esteem bullies are (unlike most people) improved by negative experience is a fact that I believe few doubt, but “correcting” this class of people at scale is a very hard problem, and doing so severely risks morally unacceptable collateral damage.

How does this involve our criminal justice policy? Ask an average adult to name the three people he detests most among those he personally knows, and it’s very likely that all will be high-self-esteem bullies, usually (because physical violence is rare among adults) of the social variety. This creates a template to which “the criminal” is matched. We know, as humans, what should be done to high-self-esteem bullies: separation from their social resources in an extremely humiliating way. Ten years of extremely limited freedom and serious financial consequences, followed by a lifetime of difficulty securing employment and social acceptance. For the office politicker or white-collar criminal, that works and is exactly the right thing. For the small-time drug offender or petty thief? Not so much. It’s the wrong thing.

Most caught criminals are not high self-esteem bullies. They’re drug addicts, financially desperate people, sufferers of severe mental illnesses, and sometimes people who were just very unlucky. To the extent that there are bullies in prison, they’re mostly the low-self-esteem kind– the underclass of the bullying world, because they got caught, if for no other reason. Inflicting negative experiences and humiliation on such people does not improve them. It makes them more desperate, more miserable, and more likely to commit crimes in the future.

I’ve discussed, before, why Americans so readily support the interests of the extremely wealthy. Erroneously, they believe the truly rich ($20 million net worth and up) to be scaled-up versions of the most successful members of the middle class. They conflate the $400,000-per-year neurosurgeon who has been working hard since she was 5 with the parasite who earns $3 million per year “consulting” for a private equity firm on account of his membership in a socially closed network of highly consumptive (and socially negative) individuals. Conservatives mistake the rich for the highly productive because, within the middle class, this correlation of economic fortune and productivity makes some sense, while it doesn’t apply at all at society’s extremes. The same error is at work in the draconian approach this country takes to criminal justice. Americans project the faces of the bullies onto the criminal, assuming society’s worst actors and most dangerous failures to be scaled-up versions of the worst bullies they’ve dealt with. They’re wrong. The woman who steals $350 of food from the grocery store out of desperation is not like the jerk who stole kids’ lunch money for kicks, and the man who kills someone believing God is telling him to do so (this man will probably require lifetime separation from society, for non-punitive reasons of public safety and mental-health care) is not a scaled-up version of the playground bully.

In the U.S., the current approach isn’t working, of course, unless its purpose is to “produce” more prisoners (“repeat customers”). Few people are improved by prison, and far fewer are helped by the extreme difficulty that a felony conviction creates in the post-incarceration job search. We’ve got to stop projecting the face of The Bully onto criminals– especially nonviolent drug offenders and mentally ill people. Because right now, as far as I can tell, we are The Bully. And reviewing the conservative politics of this country’s past three decades, along with its execrable foreign policy, I think there’s more truth in that claim than most people want to admit.