Greed versus sadism

I’ve spent a fair amount of time reading Advocatus Diaboli, and his view on human nature is interesting. He argues that sadism is a prevailing human trait. In an essay on human nature, he states:

They all clearly demonstrate a deep-seated and widespread human tendency to be deceitful, cruel, abusive and murderous for reasons that have almost nothing to do with material or monetary gain. It is as if most human beings are actively driven by an unscratchable itch to hurt, abuse, enslave and kill others even if they stand to gain very little from it. Human beings as a species will spend their own time, effort and resources to hurt other living creatures just for the joy of doing so.

This is a harsh statement, and far from socially acceptable. Sadism as a defining human characteristic, rather than a perversion? To be upfront about it, I don’t agree that sadism is nearly as prevalent as AD suggests. However, it’s an order of magnitude more prevalent than most people want to admit. Economists ignore it and focus on self-interest: the economic agent may be greedy (that is, focused on narrow self-interest) but he’s not trying to hurt anyone. Psychology treats sadism as pathological and limited to a small set of broken people called psychopaths, then tries to figure out what material cause created such a monster. The liberal, scientific, philosophically charitable view is that sadistic people are an aberration. People want sex and material comfort and esteem, it holds, but not to inflict pain on others. Humans can be ruthless in their greed, but are not held to be sadistic. What if that isn’t true? We should certainly entertain the notion.

The Marquis de Sade– more of a pervert than a philosopher, and a writer of insufferably boring, yet disturbing, material– earned his place in history by this exact argument. In the Enlightenment, the prevailing view was that human nature was not evil, but neutral-leaning-good. Corrupt states and wayward religion and unjust aristocracies perverted human nature, but the fundamental human drive was not perverse. De Sade was one of the few to challenge this notion. To de Sade, inflicting harm on others for sexual pleasure was the defining trait. This makes the human problem fundamentally insoluble. If self-interest and greed are the problem, society can align people’s self-interests by prohibiting harmful behaviors and rewarding mutually beneficial ones. If, however, inflicting pain on others is a fundamental human desire, then it is impossible for any desirable state of human affairs to be remotely stable; people will destroy it, just to watch others suffer.

For my part, I do not consider sadism to be the defining human trait. It exists. It’s real. It’s a motivation behind actions that are otherwise inexplicable. Psychology asserts it to be a pathological trait of about 1 to 2 percent of the population. I think it’s closer to 20 percent. The sadistic impulse can overrun a society, for sure. Look at World War II: Hitler invaded other countries to eradicate an ethnic group for no rational reason. Or, the sadists can be swept to the side and their desires ignored. Refusing to acknowledge that it exists, however, is not a solution, and I’ll get to why that is the case.

Paul Graham writes about the zero-sum mentality that emerges in imprisoned or institutionalized populations. He argues that the malicious and pointless cruelty seen in U.S. high schools, prisons, and among high-society wives is of a kind that emerges from boredom. When people don’t have something to do– and are institutionalized or constrained by others’ low regard for them (teenagers are seen as economically useless, high-society wives are made subservient, prisoners are seen as moral scum)– they create senseless and degrading societies. He’s right about all this. Where he is wrong is in his assertion that “the adult world” (work) is better. For him, working on his own startup in the mid-1990s Valley, it was. For the 99%, it’s not. Office politics is the same damn thing. Confine and restrain people, and reinforce their low status with attendance policies and arbitrary orders, and you get some horrendous behavior. Humans are mostly context. Almost all of us will become cruel and violent if circumstances demand it.

Okay, but is that the norm? Is there an innate sadism to humans, or is it rare except when induced by poor institutional design? The prevailing liberal mentality is that most human cruelty is either the fault of uncommon biological aberration (mental illness) or incompetent (but not malicious) design in social systems. The socially unacceptable (but not entirely false) counterargument is that sadism is a fundamental attribute of us (or, at least, many of us) as humans.

What is greed?

The prevailing liberal attitude is that greed is the source of much human evil. The thing about greed is that it’s not all that bad. In computer science, we call an optimization algorithm “greedy” if it is short-sighted (i.e., it makes the locally best choice at each step, rather than searching the whole solution space) and these greedy algorithms often work. Sometimes, they’re the only option, because anything else requires too much in the way of computational resources. “Greed” can simplify. Greedy people want to eat well, to travel, and for their children to be well-educated. Since that’s what most people want, they’re relatable. They aren’t malignant. They’re ruthless and short-sighted and often arrogant, but they (just like anyone else) are just trying to have good lives. What’s wrong with that? Nothing, most would argue. Most importantly, they’re reasonable. If society can be restructured and regulated so that doing the right thing is rewarded, and doing the wrong thing is punished or forbidden, greedy people can be used for good. Unlike the case with sadism, the problem can be solved with design.
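To make the computer-science sense of “greedy” concrete, here is a minimal sketch in Clojure (greedy-change is my own illustration, not a standard library function) of a greedy algorithm: making change by always taking the largest coin that still fits, never reconsidering a choice.

;; Greedy change-making: repeatedly take the largest coin that fits.
;; Short-sighted– it never backtracks– but fast and often good enough.
(defn greedy-change [amount coins]
  (loop [remaining amount
         [c & more :as cs] (sort > coins)
         used []]
    (cond
      (zero? remaining) used
      (nil? c) nil                          ; ran out of coins; no exact change
      (<= c remaining) (recur (- remaining c) cs (conj used c))
      :else (recur remaining more used))))

(greedy-change 63 [25 10 5 1])   ; => [25 25 10 1 1 1]

For U.S.-style denominations this short-sighted strategy happens to be optimal; for coins [25 10 1] and an amount of 30, it returns six coins where three tens would do. That is the sense in which greedy algorithms “often work” without being guaranteed to.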

Is greed good? It depends on how the word is defined. We use the word ambition positively and greed negatively, but if we compare the words as they are, I’m not sure this makes a lot of sense. Generally, I view people who want power more negatively than those who want wealth (in absolute, rather than relative terms) alone. As a society, we admire ambition because the ambitious person has a long-term strategy– the word comes from the Latin ambire, which means to walk around gathering support– whereas greed has connotations of being short-sighted and petty. We conflate long-range thinking with virtue, ignoring the fact that vicious and sadistic people are capable of long-term thought as well. At any rate, I don’t think greed is good. However, greed might be, in certain contexts, the best thing left.

To explain this, note the rather obvious fact that corporate boardrooms aren’t representative samples of humanity. For each person in a decision-making role in a large business organization, there’s a reason why he’s there and, if you think it comes down to “hard work” or “merit”, you’re either an idiot or painfully naive. Society is not run by entrepreneurs, visionaries, or creators. It’s run by private-sector social climbers. Who succeeds in such a world? What types of people can push themselves to the top? Two kinds. The greedy, and the sadistic. No one else can make it up there, and I’ll explain why, later in this post.

This fact is what, in relative terms, makes greed good. It’s a lot better than sadism.

The greedy person may not value other concerns (say, human rights or environmental conservation) enough, but he’s not out to actively destroy good things either. The sadist is actively malicious and must be rooted out and destroyed. It is better, from the point of view of a violence-averse liberal, that the people in charge be merely greedy. Then it is possible to reason with them, especially because technology makes rapid economic growth (5 to 20+ percent per year) possible. What prevents that from happening now is poor leadership, not malignant obstruction, and if we can share the wealth with them while pushing them aside, that might work well for everyone. If the leaders are sadistic, the only way forward is over their dead bodies.

“The vision thing”

Corporate executives do not like to acknowledge that the vast majority of them are motivated either by greed or by sadism. Instead, they talk a great game about vision. They concoct elaborate narratives about the past, the future, and their organization’s place in the world. It makes greed more socially acceptable. Yes, I want power and wealth; and here is what I plan to do with it. In the corporate world, however, vision is almost entirely a lie, and there’s a solid technical reason why that is the case.

We have a term in software engineering called “bikeshedding”, which refers to the narcissism of petty differences. Forget all that complicated stuff; what color are we going to paint the bike shed? The issue quickly becomes one that has nothing to do with aesthetics. It’s a referendum on the status of the people in the group. You see these sorts of things in mergers often. In one company, software teams are named after James Bond villains; in the other, they’re named after 1980s hair bands. If the merger isn’t going well, you’ll see one team try to obliterate the memetic cultural marks of the other. “If you refer to Mötley Crüe in another commit message, or put umlauts where they don’t belong for any reason, I will fucking cut you.”

Bikeshedding gets ugly, because it’s a fundamental human impulse (and one that is especially strong in males) to lash out against unskilled creativity (or the perception of unskilled creativity, because the perceiver may be the defective one). You see this in software flamewars, or in stand-up comedy (with hecklers pestering comics, and the swift comics brutally insulting their adversaries). This impulse toward denial is not sadistic, nor even a bad thing at its root. It’s fundamentally conservative: inflicting brutal social punishments on incompetent wannabe chieftains is what kept early humans from walking into lions’ dens.

As a result of the very strong anti-bikeshedding impulse, creativity and vision are punished, because (a) even those with talent and vision come under brutal attack and are drawn into lose-lose ego wars, and (b) almost never are there creatively competent adults in charge who can resolve conflicts, consistently, on the right side. The end result is that these aspects of humans are driven out of organizations. If you stand for something– anything, even something obviously good for the organization– the probability that you’ll take a career-ending punch approaches one as you climb the ladder. If you want to be a visionary, Corporate America is not the place for it. If you want to be seen as a visionary in Corporate America, the best strategy is to discern what the group wants before a consensus has been reached, and espouse the viewpoint that is going to win– before anyone else has figured that out.

What this means is that corporate decisions are actually made “by committee”, and that the committee is usually made up of clever but creatively weak individuals. In the same way as mixing too many pigments produces an uninspiring blah-brown color, an end result of increasing entropy, the decisions that come from such committees are usually depressing ones. They can’t agree on a long-term vision, and to propose one is to leave oneself politically exposed and be termed a “bikeshedder”. The only thing they can agree upon is short-term profit improvement. However, increasing revenue is itself a problem that requires some creativity. If the money were easy to make, it’d already be had. Cutting costs is easier; any dumbass can do that. Most often, these costs are actually only externalized. Cutting health benefits, for one example, means work time is lost to arguments with health insurance companies, reducing productivity in the long run and being a net negative on the whole. But because those with vision are so easily called out as bikeshedding, impractical narcissists, the only thing left is McKinsey-style cost externalization and looting.

Hence, two kinds of people remain in the boardroom, after the rest have been denied entry or demoted out of the way: the ruthlessly greedy, and the sadistic.

Greedy people will do what it takes to win, but they don’t enjoy hurting people. On the contrary, they’re probably deeply conflicted about what they have to do to get the kind of life they want. The dumber ones probably believe that success in business requires ruthless harm to others. The smarter ones see deference to the mean-spirited cost-cutting culture as a necessary, politically expedient, evil. If you oppose it, you risk appearing “soft” and effeminate and impractical and “too nice to succeed”. So you go along with the reduction of health benefits, the imposition of stack ranking, the artificial scarcities inherent in systems like closed allocation, just to avoid being seen that way. That’s how greed works. Greedy people figure out what the group wants and don’t fight it, but front-run that preference as it emerges. So what influences go into that group preference? Even without sadism, the result of the entropy-increasing committee effect seems to be “cost cutting” (because no one will ever agree on how to increase revenue). With sadism in the mix, convergence on that sort of idea happens faster, and externalized costs are ignored all the more readily.

The sadist has an advantage in the corporate game that is unmatched. The more typical greedy-but-decent person will make decisions that harm others, but is drained by doing so. Telling people that they don’t have jobs anymore, and that they won’t get a decent severance because that would have been a losing fight against HR, and that they have to be sent out by security “by policy”, makes them pretty miserable. They’ll play office politics, and they play to win, but they don’t enjoy it. Sadists, on the other hand, are energized by harm. Sadists love office politics. They can play malicious games forever. One trait that gives them an advantage over the merely greedy is that, not only are they energized by their wins, but they don’t lose force in their losses. Greedy people hate discomfort, low status, and loss of opportunity. Sadists don’t care what happens to them, as long as someone else is burning.

This is why, while sadists are probably a minority of the general population, they make up a sizeable fraction of the upper ranks in Corporate America. Their power is bolstered by the fact that most business organizations have ceased to stand for anything. They’re patterns of behavior that have literally no purpose. This is because the decision-making derives from a committee of greedy people with no long-term plans, and sadistic people with harmful long-term plans (that, in time, destroy the organization).

Sadists are probably not a majority contingent in the human population. However, we generally refuse to admit that sadism exists at all. It’s treated as the province of criminals and perverts; surely these upstanding businessmen have their reasons (if short-sighted ones, but that is chalked up to a failure of regulation) for bad behaviors. I would argue that, by refusing to admit sadism’s prevalence and commonality, we actually give it more power. When people confront frank sadism, whether in the workplace or in public, they’re generally shocked. Against an assailant, whether we’re talking about a mugger or a manager presenting a “performance improvement plan”, most people freeze. It’s easy to say, “I would knee him in the nuts, gouge out his eyeballs, and break his fingers in order to get away.” Very few people, when battle visits them unprepared, do so. Mostly, the reaction is, I can’t believe this is happening to me. It’s catatonic panic. By refusing to admit that sadism is real and must be fought, we give it the power to ambush us. In a street fight, this is observed in the few seconds of paralytic shock that can mean losing the fight and being killed. In HR/corporate matters, it’s the tendency of the PIP’d employee to feel intense personal shame and terror, instead of righteous anger, when blindsided by managerial adversity.

The bigger problem

Why do I write? I write because I want people in my generation to learn how to fight. The average 25-year-old software engineer has no idea what to do when office politics turn against him (and that, my friends, can happen to anyone; overperformance is more dangerous than underperformance, but that’s a topic for another essay). I also want them to learn “Work Game”. It’s bizarre to me that learning a set of canned social skills to exploit 20-year-old women with self-esteem problems (pickup artistry) is borderline socially acceptable, while career advice is always of the nice-guy “never lie on your resume, no exceptions” variety. (Actually, that’s technically correct. Everyone who succeeds in the corporate game has lied to advance his career, but never put an objectively refutable claim in writing.) Few people have the courage to discuss how the game is actually played. If men can participate in a “pickup artist” culture designed to exploit women with low self-respect and be considered “baller” for it, and raise millions in venture funding… then why is it career-damaging to be honest about what one has to do in the workplace just to maintain, much less advance, one’s position? Why do we have to pretend to uphold this “nice guy”/AFC belief in office meritocracy?

I write because I want the good to learn how to fight. We need to be more ruthless, more aggressive, and sometimes even more political. If we want anything remotely resembling a “meritocracy”, we’re going to have to fight for it and it’s going to get fucking nasty.

However, helping people hack broken organizations isn’t that noble of a goal. Don’t get me wrong. I’d love to see the current owners of Corporate America get a shock to the system. I’d enjoy taking them down (that’s not sadism, but a strong– perhaps pathologically strong, but that’s another debate– sense of justice). Nonetheless, we as a society can do better. This isn’t a movie or video game in which beating the bad guys “saves the world”. What’s important, if less theatrical and more humbling, is the step after that: building a new and better world after killing off the old one.

Here we address a cultural problem. Why do companies get to a point where the ultimate power is held by sadists, who can dress up their malignant desires as hard-nosed cost-cutting? What causes the organization to reach the high-entropy state in which the only self-interested decision it can make is to externalize a cost, when there are plenty of overlooked self-interested decisions that are beneficial to the world as a whole? The answer is the “tallest nail” phenomenon. The tallest nail gets hammered down. As a society, that’s how we work. Abstractly, we admire people who “put themselves out there” and propose ideas that might make their organizations and the world much better. Concretely, those people are torn down as “bikeshedders” by (a) their ideological opponents, who usually have no malicious intent but don’t want their adversaries to succeed (at least, not on that issue); (b) sadists relishing the opportunity to deny someone a good thing; (c) personal political rivals, which any creative person will acquire over time; and (d) greedy self-interested people who perceive the whim of the group as it is emerging and issue the final “No”. We have a society that rewards deference to authority and punishes creativity, brutally. And capitalism’s private sector, which is supposed to be an antidote to that, and which is supposed to innovate in spite of itself, is where we see that tendency in the worst way.

Greed (meaning self-interest) can be good, if directed properly by those with a bit of long-term vision and an ironclad dedication to fairness. Sadism is not. The combination of the two, which is the norm in corporate boardrooms, is toxic. Ultimately, we need something else. We need true creativity. That’s not Silicon Valley’s “make the world a better place” bullshit either, but a genuine creative drive that comes from a humble acknowledgement of just how fucking hard it is to make the world a tolerable, much less “better”, place. It isn’t easy to make genuine improvements to the world. (Mean-spirited cost-cutting, sadistic game-playing, and cost externalization are much easier ways to make money. Ask any management consultant.) It’s brutally fucking difficult. Yet millions of people every day, just like me, go out and try. I don’t know why I do it, given the harm that even my mild public cynicism has brought to my career, but I keep on fighting. Maybe I’ll win something, some day.

As a culture, we need to start to value that creative courage again, instead of tearing people down over petty differences.


Silicon Valley and the Rise of the Disneypreneur

Someone once described the Las Vegas gambling complex as “Disneyland for adults”, and the metaphor makes a fair amount of sense. The place sells a fantasy– expensive shows, garish hotels (often cheap or free if “comped”), and general luxury– and this suspension of reality enables people to take financial risks they’d usually avoid, giving the casino an edge. Comparing Silicon Valley to Vegas also makes a lot of sense. Even more than a Wall Street trading floor, it’s casino capitalism. Shall we search for some kind of transitivity? Yes, indeed. Is it possible that Silicon Valley is a sort of “Disneyland”? I think so.

It starts with Stanford and Palo Alto. The roads are lined with palm trees that do not grow there naturally, and that cost tens of thousands of dollars apiece to plant. The whole landscape is designed and fake. In a clumsy attempt to lift terminology from Southern aristocrats, Stanford’s nickname is “the Farm”. At Harvard or Princeton, there’s a certain sense of noblesse oblige that students are expected to carry with them. A number of Ivy Leaguers eschew investment banking in favor of a program like Teach for America. Not so much at Stanford, which has never tempered itself with Edwardian gravity (by, for example, encouraging students to read literature from civilizations that have since died out) in the way that East Coast and Midwestern colleges have. The rallying cry is, “Go raise VC.” Then, they enter a net of pipelines: Stanford undergrad to startup, startup to EIR gig, EIR to founder, founder to venture capitalist. The miraculous thing about it is that progress on this “entrepreneurial” path is assured. One never needs to take any risk to do it! Start in the right place, don’t offend the bosses-I-mean-investors, and there are three options: succeed, fail up, or fail diagonal-up. Since they live in an artificial world in which real loss isn’t possible for them, but one that also limits them from true innovation, they perform a sort of Disney-fied entrepreneurship. They’re the Disneypreneurs.

Just as private-sector bureaucrats (corporate executives) who love to call themselves “job creators” (and who only seem to agree on anything when they’re doing the opposite) are anything but entrepreneurs, I tend to think of these kids as not real entrepreneurs. Well, because I’m right, I should say it more forcefully. They aren’t entrepreneurs. They take no risk. They don’t even have to leave their suburban, no-winter environment. They don’t put up capital. They don’t risk sullying their reputations by investing their time in industries the future might despise; instead, they focus on boring consumer-web plays. They don’t go to foreign countries where they might not have all the creature comforts of the California suburbs. They don’t do the nuts-and-bolts operational grunt work that real entrepreneurs have to face (e.g. payroll, taxes) when they start new businesses, because their backers arrange it all for them. Even failure won’t disrupt their careers. If they fail, instead of making their $50-million payday in this bubble cycle, they’ll have to settle for a piddling $750,000 personal take in an “acqui-hire”, a year in an upper-middle-management position, and an EIR gig. VC-backed “founders” take no real risk, but get rewarded immensely when things go their way. Heads, they win. Tails, they don’t lose.

Any time someone sets up a “heads I win, tails I-don’t-lose” arrangement, there’s a good bet that someone else is losing. Who? To some extent, it’s the passive capitalists whose funds are disbursed by VCs. Between careerist agents (VC partners) seeking social connection and status, and fresh-faced Disneypreneurs looking to justify their otherwise unreasonable career progress (due to their young age, questionable experience, and mediocrity of talent), what is left for the passive capitalist is a return inferior to that offered by a vanilla index fund. However, there’s another set of losers for whom I often prefer to speak, their plight being less well-understood: the engineers. Venture capitalists risk other people’s money. Founders risk losing access to the VCs if they do something really unethical. Engineers risk their careers. They’ve got more skin in the game, and yet their rewards are dismal.

If it’s such a raw deal to be a lowly engineer in a VC-funded startup (and it is) then why do so many people willingly take that offer? They might overestimate their upside potential, because they don’t know what questions to ask (such as, “If my 0.02% is really guaranteed to be worth $1 million in two years, then why do venture capitalists value the whole business at only $40 million?”). They might underestimate the passage of time and the need to establish a career before ageism starts hitting them. Most 22-year-olds don’t know what a huge loss it is not to get out of entry-level drudgery by 30. However, I think a big part of why it is so easy to swindle so many highly talented young people is the Disneyfication. The “cool” technology company, the Hooli, provides a halfway house for people just out of college. At Hooli, no one will make you show up for work at 9:00, or tell you not to wear sexist T-shirts, or expect you to interact decently with people too unlike you. You don’t even have to leave the suburbs of California. You won’t have to give up your car for Manhattan, your dryer for Budapest, your need to wear sandals in December for Chicago, or your drug habit for Singapore. It’s comfortable. There is no obvious social risk. Even the mean-spirited, psychotic policy of “stack ranking” is dressed up as a successor to academic grading. (The glossed-over differences are (a) that there’s no semblance of “meritocracy” in stack ranking; it’s pure politics, and a professor who graded as unfairly as the median corporate manager would be fired; (b) that academic grading is mostly for the student’s benefit, while stack-ranking scores are invariably to the worker’s detriment; and (c) that universities genuinely try to support failing students, while corporations use dishonest paperwork designed to limit lawsuit risk.) The comfort offered to the engineer by the Disney-fied tech world, which is actually more ruthlessly corporate (and far more undignified) than the worst of Wall Street, is completely superficial.

That doesn’t, of course, mean that it’s not real. Occasionally I’m asked whether I believe in God. Well, God exists. Supernatural beings may not, and the fictional characters featured in religious texts are almost certainly (if taken literally) pure nonsense, but the idea of God has had a huge effect on the world. It cannot be ignored. It’s real. The same is true of Silicon Valley’s style of “entrepreneurship”. Silicon Valley breathes and grows because, every year, an upper class of founders and proto-founders is given a safe, painless path to “entrepreneurial glory” and a much larger working class of delusional engineers is convinced to follow them. It really looks like entrepreneurship.

I should say one thing off the bat: Disneypreneurs are not the same thing as wantrapreneurs. You see more of the second type, especially on the East Coast, and it’s easy to conflate the two, but the socioeconomic distance is vast. The wantrapreneur can talk a big game, but lacks the drive, vision, and focus to ever amount to anything. He’s the sort of person who’s too arrogant to work for someone else, but can’t come up with a convincing reason why anyone should work for him, and doesn’t have the socioeconomic advantages that’d enable him to get away with bullshit. Except in the most egregious bubble times, he wouldn’t successfully raise venture capital, not because VCs are discerning but because the wantrapreneur usually lacks sufficient vision to learn how to do even that. Quite sadly, wantrapreneurs sometimes do find acolytes among the desperate and the clueless. They “network” a lot, sometimes find friends or relatives clueless enough to bankroll them, and produce little. Almost everyone has met at least one. There’s no barrier to entry in becoming a wantrapreneur.

Like wantrapreneurs, Disneypreneurs lack drive, talent, and willingness to sacrifice. The difference is that they still win. All the fucking time. Even when they lose, they win. Evan Spiegel (Snapchat) and Lucas Duplan (Clinkle) are just two examples, but Sean Parker is probably the most impressive. If you peek behind the curtain, he’s never actually succeeded at anything, but he’s a billionaire. They float from one manufactured success to another, and build impressive reputations despite adding very little value to anything. They’re given the resources to take big risks and, when they fail, their backers make sure they fail up. Being dropped into a $250,000/year VP role at a more successful portfolio company? That’s the worst-case outcome. Losers get executive positions and EIR gigs, break-evens get acqui-hired into upper-six-figure roles, and winners get made.

One might ask: how does one become a Disneypreneur? I think the answer is clear: if you’re asking, you probably can’t. If you’re under 18, your best bet is to get into Stanford and hope your parents have the cardiac fortitude to see the tuition bill and not keel over. If you’re older, you might try out the (admirably straightforward, and more open to middle-class outsiders than traditional VC) Y Combinator. However, I think it’s obvious that most people are never going to have the option of Disneypreneurship, and there’s a clear reason for that. Disneypreneurship exists to launder money (and connections, and prestige, and power; but those are highly correlated and usually mutually transferable) for the upper classes, frank parasitism on inherited wealth being still socially unacceptable. The children of the elites must seem to work under the same rules as everyone else. The undeserving, mean-reverting progeny of the elite must be made to appear like they’ve earned the status and wealth their parents will bequeath upon them.

Elite schools were once intended toward this end. They were a prestige (multiple meanings intended) that appeared, from the outside, to be a meritocracy. However, this capacity was demolished by an often-disparaged instrument, the S.A.T. Sometimes, I’ll hear a knee-jerk leftist complain about the exam’s role in educational inequality, citing (correctly) the ability of professional tutoring (“test prep”, a socially useless service) to improve scores. In reality, the S.A.T. isn’t creating or increasing socioeconomic injustices in terms of access to education; it merely measures some of them. The S.A.T. was invented with liberal intentions, and (in fact) succeeded. After its inception in the 1920s, “too many” Jews were admitted to Ivy League colleges, and much of the “extracurricular” nonsense involved in U.S. college admissions was invented in reaction to that. Over the following ninety years, there’s been a not-quite-monotonic movement toward meritocracy in college admissions. If I had to guess, college admissions are a lot more meritocratic than they were 90 years ago (and, if I’m wrong, it’s not because the admissions process is classist but because it’s so noise-ridden, thanks to technology enabling a student to apply to 15-30 colleges; 15 years ago, five applications was considered high). The ability-to-pay factor, however, keeps this meritocracy from being fully realized. Still, ties are, observably, broken on merit, and there is enough meritocracy in the process to threaten the existing elite. The age in which a shared country-club membership of parent and admissions officer ensured a favorable decision is over. Now that assurance requires a building, which even the elite cannot always afford.

These changes, along with the internationalization of the college process and those pesky leftists who insist on meritocracy and diversity, have left the ruling classes unwilling to trust elite colleges to launder their money. They’ve shifted their focus to the first few years after college: first jobs. However, most of these well-connected parasites don’t know how to work, and they certainly can’t bear the thought of their children suffering the indignity of actually having to earn anything, so they have to bump their progeny automatically into unaccountable upper-management ranks. The problem is that very few people are going to respect a talentless 22-year-old who pulls family connections to get what he wants, and who gets his own company out of some family-level favor. Only a California software engineer would be clueless enough to follow someone like that– if that person calls himself “a founder”.

Statistics, cooperation, politics, and programming.

Open: a simple dice “game”

Let’s say that you’re playing a one-player “game”, where your payout (score) is determined according to the rolls of 101 dice. One of them is black and 100 are white, and your payoff is $100 times the value on the black die, plus the sum of the values on the 100 white dice. (In RPG terms, that’s d6x100 + 100d6.) The question is: how much more “important” (I’ll define this, more rigorously, below) is the black die, relative to a single one of the white dice?

Most people would say that the black die is 100 times as important; its influence on the payoff is a $500 swing (from $100 to $600) while each of the white dice has a $5 swing ($1 to $6). That would lead us to conclude that the black die is as important as the hundred white dice all taken together– or that the black die has 50% of the total importance. That’s not true at all. Why not? Let’s do some simulations. Here’s the code (in Clojure).

(defn white-die []
  (inc (rand-int 6)))

(defn black-die []
  (* 100 (inc (rand-int 6))))

(defn play []
  (let [bd-value (black-die)
        wd-value (reduce + 0 (repeatedly 100 white-die))]
    (printf "Black die: %d, White dice: %d, Payoff: %d\n"
            bd-value wd-value (+ bd-value wd-value))))

Here are some results:

user=> (play)
Black die: 100, White dice: 345, Payoff: 445
nil ;; other returns omitted.
user=> (play)
Black die: 600, White dice: 343, Payoff: 943
user=> (play)
Black die: 400, White dice: 352, Payoff: 752
user=> (play)
Black die: 100, White dice: 338, Payoff: 438
user=> (play)
Black die: 300, White dice: 322, Payoff: 622
user=> (play)
Black die: 200, White dice: 345, Payoff: 545
user=> (play)
Black die: 500, White dice: 326, Payoff: 826
user=> (play)
Black die: 300, White dice: 362, Payoff: 662
user=> (play)
Black die: 100, White dice: 353, Payoff: 453
user=> (play)
Black die: 500, White dice: 359, Payoff: 859

The quality of the payoff has a lot more to do with the black die than the white ones. A good payoff (above $700, the mean) seems to occur if and only if the black die roll is good (a 4, 5, or 6), because the sum of the white dice is never far from its mean value of 350. We can formalize this intuition by noting that when independent random variables are added, the variance of their sum is the sum of their variances. The variance of a 6-sided die is 35/12 (2.9166…), so the variance of the 100 white dice, taken together, is 3500/12 = 291.666…, for a standard deviation just slightly over 17. With a hundred dice being summed, we can take the sum of the white dice to be approximately Gaussian: 99 percent of the time, it will come in between $306 and $394. Even if the white dice perform terribly (say, $300), a ‘6’ on the black die is going to ensure a great payoff.

While standard deviation is a more commonly used measure of dispersion, independent random variables cumulate according to their variance (the square of the standard deviation). The variance of the black die is not 100 times, but 10,000 times, that of a single white die. This means that it’s 100 times more influential over the payoff than all of the white dice taken together: it contributes just over 99 percent of the variance in the payoff.
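This claim can be checked directly. Here’s a short sketch (in Clojure, like the simulation above; the names are mine, not from the original code) that derives the variance shares analytically, using the standard fact that scaling a random variable by c multiplies its variance by c².

```clojure
;; Variance of one fair d6: E[X^2] - (E[X])^2 = 91/6 - 49/4 = 35/12.
(def d6-variance (/ 35 12))

;; Scaling a die by 100 multiplies its variance by 100^2 = 10,000.
(def black-variance (* 100 100 d6-variance))    ; variance of the black die
(def white-total-variance (* 100 d6-variance))  ; 100 independent white dice, summed

;; Share of the total payoff variance contributed by the black die:
(def black-share
  (double (/ black-variance (+ black-variance white-total-variance))))
;; black-share => 100/101 as a double, i.e. 0.990099... – just over 99 percent.
```

Note that the d6’s variance of 35/12 cancels out entirely: the black die’s share is 10,000/(10,000 + 100) = 100/101 no matter what kind of die is used.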


What does this have to do with human behavior and cooperation? Well, consider voting. Some people complain about the supposed “disenfranchisement” of voters in large, non-swing states such as California (reliably blue) and Texas (reliably red) under the electoral college system. When the blocs are predictable, that can be true. However, being part of a voting bloc will, in general, magnify one’s voting power, just as the black die in the example above dominates the payoff to the point where the white dice hardly matter.

If fifteen people agree to vote the same way, they’ve increased their collective voting power (variance) to 225 times that of an individual, meaning that each one becomes 15 times more powerful. Let’s go a step further. Say there are 29 people in a voting body, and a simple majority is all that’s required to pass a measure. If fifteen of them agree to hold a private vote, and then vote as a bloc based on the result, the other fourteen people’s votes don’t matter at all, so each bloc member becomes approximately twice as individually powerful. This can be further corrupted by creating nested blocs. Eight of those people could break off, hold their own private vote, and become a bloc-within-the-bloc. Of course, secrecy is required; otherwise, the bloc’s out-crowd might defect. At least in theory, nothing stops a group of five people within that eight from forming a third-level bloc, and so on. This could devolve into an almost dictatorial situation in which two people determine the entire vote. It isn’t always long-term stable, of course; disenfranchised people within blocs will (over time) leave, possibly joining other blocs.
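The first step of this arithmetic is easy to simulate. Below is a hedged sketch (in Clojure, matching the dice code; `body-outcome` and the other names are my own illustration, not anything from the original) of a 15-member bloc in a 29-vote majority body: the bloc holds a private vote, then votes as one, and the body’s outcome always matches the bloc’s internal majority, regardless of how the other fourteen vote.

```clojure
;; True iff strictly more than half of the votes are in favor.
(defn majority? [votes]
  (> (count (filter true? votes)) (/ (count votes) 2)))

;; n independent coin-flip votes.
(defn random-votes [n]
  (repeatedly n #(rand-nth [true false])))

;; A bloc votes its private-majority line; the others vote at random.
(defn body-outcome [bloc-size others-size]
  (let [bloc-line   (majority? (random-votes bloc-size)) ; private vote result
        bloc-votes  (repeat bloc-size bloc-line)         ; bloc votes as one
        other-votes (random-votes others-size)]
    {:bloc-line bloc-line
     :outcome   (majority? (concat bloc-votes other-votes))}))

;; Over many trials, the body's outcome always equals the bloc's line,
;; because 15 votes are already a majority of 29:
(every? (fn [{:keys [bloc-line outcome]}] (= bloc-line outcome))
        (repeatedly 1000 #(body-outcome 15 14)))
;; => true
```

Since fifteen of twenty-nine is itself a majority, the fourteen outsiders are reduced to pure noise, which is exactly the “white dice” role from the earlier example.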

One should be able to see, by now, why something like a two-party political system is so common in government. Coalitions build because forming blocs magnifies the individual’s statistical power (percentage of the variance). This seems to continue until there are two coalitions in the 45 to 50 percent range. What limits the process is that, as coalitions grow, they become more predictable and less nimble; once they are predictable, unaffiliated (“swing”) voters have substantially more power than the principles above would suggest, because while the variance potential of a bloc grows as the square of its size, highly predictable blocs have very little actual variance. In other words, equilibrium happens when the (quadratically growing) bulk power of blocs is offset by the declining true variance inherent to their predictability, leaving the few swing players as individually powerful (being unpredictable) as they would be as members of a bloc.
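To make the swing-voter endgame concrete, here is a minimal sketch (again in Clojure; the 48/47/5 split is an assumed example of mine, not from the text above): two perfectly predictable blocs cancel each other out, so the handful of unpredictable swing voters carry all of the actual variance in the outcome.

```clojure
;; A 100-vote body: bloc A (48 members) always votes yes, bloc B (47
;; members) always votes no. Only the 5 swing votes vary, so the
;; outcome is determined entirely by them.
(defn passes? [swing-votes]
  (let [yes-total (+ 48 (count (filter true? swing-votes)))]
    (> yes-total 50)))  ; strict majority of 100 votes

(passes? [true true true false false])   ;; => true  (51 yes votes)
(passes? [true true false false false])  ;; => false (50 yes votes: no majority)
```

Each bloc here is nominally dozens of times more “powerful” than a swing voter, but with zero unpredictability it contributes zero variance, which is why the equilibrium described above leaves swing voters so disproportionately influential.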

Economics and work

Bloc power is a major reason why collective bargaining (unionization) is such a big deal. The Brownian motion of individually unimportant workers joining and leaving a company has a minimal effect on the business. There will be good days and bad for the company due to these small fluctuations but, on the whole, an individual’s vote (whether to work or quit) is meaningless amid the noise. The low-level worker has no real vote. Collective bargaining, on the other hand, can be powerful: a large group voting against its management (a strike) at the same time can have a real impact.

The past two hundred years have proven that, without some variety of collective action, workers (even highly skilled ones) are unlikely to get a fair deal. It doesn’t matter how smart, how capable, or even how necessary they are if their votes don’t matter. There are three approaches that have been used to solve this problem (aside from a fourth, beloved by some wealthy, which is not to solve it). The first is to form a union. As much as there is a problem of corruption within unions, I don’t think any reasonable person can review history and conclude them to have been unnecessary. The second is to form a profession, which is essentially a reputation management organization that (a) keeps the individual member’s credibility high enough to keep that person employable, so he or she can challenge management, since professions require ethical obligations that supersede managerial authority (i.e. there’s no Nuremberg Defense); while (b) occasionally leveraging its role as a reputation bank to push, as a bloc, for specific causes. The third approach is a welfare state, which does not confer bloc-like power for low-level producers (i.e. workers) but (a) gives them power as consumers and, more importantly, it (b) gives individual producers the ability to refuse adverse terms of work.

These form a spectrum of solutions, with unions being the most political (an explicit bloc forms, subverting to some extent the Brownian tug-of-war that occurs in free markets and elections) while the welfare state is apolitical (it does not tell capitalists how to run their companies) while pushing a universal sea change in the market– improved leverage for workers in all industries, liberated from a month-by-month need for work income. Professions exist between these two extremes; they are not as explicitly political or bloc-like as unions, but their ability to keep the professional’s credibility from falling to zero– even if fired by one company’s management, he’s still a member of the profession unless disbarred for ethical reasons, and will usually find new work easily– has them functioning like a private, conditional welfare state.

I’m not going to argue, among the solutions above, that any is superior to the others, or that one of those three should be favored uniformly. In fact, they don’t even conflict; societies tend to have all three of the above in some form. They seem to serve different purposes, also spanning a spectrum from local to global, like so:

  • The union exists to guarantee, as much as it can, employment at a specific company (local) on favorable terms for good-faith workers. It often wrests from management the authority to terminate people. Its downside is that, because it is an explicitly political organization, it often invents by-laws (seniority systems being the most abhorred) that reduce performance. The extreme guarantees against adverse change that unions often provide may result in a low quality of work, eroding the union’s clout in the long run. Unions are, however, the best solution when there is a small number of potential employers (oligopsony).
  • The profession exists to provide credibility (reputation) sufficient to guarantee appropriate employment, but not at a specific employer. The profession doesn’t interfere with individual terminations or promotions, nor does it often tell employers how to behave; its goal is to provide appropriate results for all good-faith members without managing a specific employer. This is more global than the union, because professionals may have to move to different companies or geographic locations to take advantage of the profession’s auspices, but more local than a welfare state because it focuses on a specific class of workers. Professions work well when there is a large and changing set of potential employers, but over a fairly fixed scope of work.
  • The welfare state (a global solution, as it involves a definition of social justice that a central government attempts to enforce uniformly) doesn’t guarantee market employment at all. It does, however, attempt to create an economic floor below which people cannot fall. Even if they lose power as producers (because the market may not want anything they can make) they retain some power as consumers. The moral purpose of this is two-fold. First, unneeded workers can retrain and become viable producers. Second, the welfare state’s existence gives workers enough leverage that they stand a chance at getting a fair deal– without necessarily having to form collectives in order to do it. Welfare states do the best job at the large-scale, society-wide problems; for example, they can provide education and training for those who have not yet entered a union or profession.

What’s most relevant to all this, however, is that collective action is as relevant today as it was 100 years ago. There are a lot of people who claim, for example, that labor unions “were good in their time, but have served their purpose”. I don’t think that’s true. There are, of course, many problems with existing labor unions and with the professions, but the statistical politics underlying their formation is still quite relevant.


Software engineers in particular are a group of people who’ve never fully decided whether they want to be blue-collar (making unionization a relevant strategy) or white-collar (necessitating a profession). It’s not clear to me that either of these approaches, as commonly imagined, will do what we need in order to get programmers fairly paid and their work reasonably evaluated. I would argue, however, that the existing culture of free agency seems to be leading nowhere. Software and hardware engineers, in addition to designers and operational people, need to develop a common tribal identity as makers. Otherwise, management will continue to run divide-and-conquer strategies against them that leave them with the worst of both the blue-collar and white-collar worlds: the low autonomy and job security of an hourly wage worker, but the unreasonable expectations and long hours associated with salarymen.

The needs of the most creative and effective technology workers should be given consideration; maker culture is becoming a real thing, and the societies and organizations that prosper in the next fifty years will be those that find a way to contend with it. Thanks to personal computing, the internet, and quite likely 3D printing, we’re coming into an era in which the zero-sum approach to resources that has existed for thousands of years no longer makes sense. Copying a book used to be a painstaking, miserable process. (The reason for the beautiful calligraphy and illustrations in hand-copied medieval books is that the work would have been intolerable without some room for creative flourish.) Now it’s a Unix command that takes less than a second. Information scarcity is rapidly ending, and of more interest is the culture– maker culture– that has sprung up around that fact, starting in the open source world, which is structurally cooperative.

Maker culture is centered on the positive-sum worldview that makes sense in such a world. Makers tend to no longer see each other as competitors amid existing scarcity; rather, the greater war is against scarcity itself.

Good programmers no longer buy into traditional industrial competition. They’d rather work on open source projects that improve the world (and their own individual reputations) than line the corporate war chest, because the benefits of tapping into the larger society (the open source economy) are much greater, not only for them but often also for their employers, than those of restricting themselves to one corporate silo. They’ll work on closed-source “secret sauce” projects in a somewhat privileged (“ninja”) position, but not in the commoditized role associated with the “code monkey” appellation. Those jobs, as portrayed less than affectionately in the movie Office Space, are going to die out.

In twenty years, top maker talent will no longer be employed so much as it is sponsored. This will be good for the world, as it will generate a much more cooperative economy than what existed before it, but a large number of organizations will find themselves unable to adapt and will fail.

What Ayn Rand got right and wrong

Ayn Rand is a polarizing figure, and it should be pretty clear that I’m not her biggest fan. I find her views on gender repulsive and her metaphysics laughable. I tend to be on the economic left; she heads to the far right. She and I have one crucial thing in common– extreme political passions rooted in emotionally damaging battles with militant mediocrity– but our conclusions are very different. Her nemesis was authoritarian leftism; mine is corporate capitalism. Of course, an evolved mind in 2013 will recognize that, while both of these forces are evil, there isn’t an either/or dichotomy between them. We don’t need authoritarian leftism or corporate capitalism, and both deserve to be rejected out of hand.

What did Rand get right?

As much as I dislike Ayn Rand’s worldview, it’s hard to say that it isn’t a charismatic one, which explains her legions of acolytes. There are a few things she got right, and in a way that few people had the courage to espouse. Namely, she depicted authoritarianism as a process through which the weak (which she likened to vermin) gang up on, and destroy, the strong. She understood the fundamental human problem of her (and our) time: militant mediocrity.

Parasitism, in my view, isn’t such a bad thing. (I probably disagree with Rand on that.) After all, each of us spends nine months as a literal biological parasite. I am perfectly fine with much of humanity persisting in a “parasitic” lifestyle wherein they receive more sustenance from society than they would earn on the market. It’s a small cost to society, and the long-term benefits (especially the ability for some people to escape parasitism and become productive) outweigh it. What angers me is when the parasites on the opposite (high) end of the socioeconomic spectrum behave as if their fortune and social connections entitle them to tell their intellectual superiors (most viscerally, when that intellectual superior is me) what to do.

Rand’s view was harsh and far from democratic. She conceived of humanity as consisting of a small set of “people of the mind” and a much larger set of parasitic mediocrities. In her mind, there was no distinction between (a) average people, who stand out neither in accomplishment nor in militancy, and (b) the aggressive, anti-intellectual, and authoritarian true parasites against which society must continually defend itself. That was strike one: it just seemed bitchy and mean-spirited to decry the majority of humanity as worthless. (I can’t stand with her on that, either. We’re all mediocre most of the time; it’s militant mediocrity that’s our adversary.) Yet most good ideas seem radical when first voiced, and their proponents are invariably attacked first for their tone and attitude rather than their substance, a dynamic that means “bitchiness” is often positively correlated with quality of ideas. I think much of why Rand’s philosophy caught on is that it was so socially unacceptable in the era of the American Middle Class; and intellectuals understand all too well that great ideas often begin as rejected ones.

To understand Ayn Rand further, keep in mind the context of the time during which she rose to fame: the American post-war period. Even the good kinds of greed were socially unacceptable. So a lot of people found her “new elitism” (which was a dressing-up of the old kind) to be refreshing and– in a world that tried to make reality look very different from what it was (see: 1950s television)– honest. By 1980, there was a strong current of opinion that inclusive capitalism and corporate paternalism had failed, and elitism became sexy again.

Where was the value in this very ugly (but charismatic) philosophy? I’d say that there are a few things Ayn Rand got completely right, as proven by experience at the forefront of software technology:

  1. Most progress comes from a small set of people. Pareto’s “80/20” is far too generous; it’s more like 80/3. In programming, we call this the “10x” effect, because good programmers are 10 times as effective as average ones (and the top software engineers are 10 times as effective as the merely-good ones like me). Speaking on the specific case of software, it’s pretty clear that the 10x effect is not driven by talent alone. Talent is a factor, but a small one. More relevant are work ethic, experience, project/person fit, and team synergies. There isn’t a “10x programmer” gene; a number of things come into play. It’s not always the same people who are “10x-ers”, and this superiority is far from intrinsic to the person, having as much to do with circumstance. That said, at the forefront there are 10x differences in effectiveness all over the place.
  2. Humanity is plagued by authoritarian mediocrity. If you excel, you become a target. It is not true that the entire rest of humanity will despise you for being exceptionally intelligent, creative, industrious, or effective. In fact, many people will support you. However, there are some (especially in positions of power, who must maintain them) who harbor jealous hatred, and they tend to focus on a small number of people. In authoritarian leftism, they attack those who have economic success. In corporate capitalism, they attack their intellectual superiors.
  3. Social consensus is often driven by the mediocre. The excellent have a tendency to do first and sell later. Left to their own devices, they’d rather build something great and seek forgiveness than try to get permission, which will never come if sought at the front door. The mediocre, on the other hand, generate no new ideas and therefore have never felt that irresistible desire to take that kind of social risk. They quickly learn a different set of skills: how to figure out who’s influential and who’s ignored, what the influential people want, and how to make their own self-serving conceptions (which are never far-fetched, being only designed to advance the proponent, because there is otherwise no idea in them) seem like the objective common consensus.

A bit of context

Ayn Rand’s view of authoritarian leftism was spot-on. Much of that movement’s brutality was rooted in a jealous hatred that we know as militant mediocrity. Its failure to become anything like true communism (or even successful leftism) proved this. Militant mediocrity is blindly leftist when poor and out-of-power and rabidly conservative when rich and established. Of course, in the Soviet case, it never became “rich” so much as it made everyone poor. This enabled it to keep a leftish veneer even as it became reactionary.

Rand’s experiences with toxic leftism were so damaging that, when she came to the United States, she continued to advance her philosophy of extreme egoism. This dovetailed with the story of the American social elite. Circa 1960, they felt themselves to be a humiliated set of people. Before 1930, they lived in elaborate mansions and led opulent, sophisticated lifestyles. After the Great Depression, which they caused, they fell into fear and reservation; that is why, to this day, the “old money” rich prefer to live in houses not visible from the road. They remained quite wealthy but, socially, they retreated. They were no longer the darlings at the ball, because there was no ball. It wasn’t until their grandchildren’s generation came forward that they had the audacity to reassert themselves.

While this society’s parasitic elite was in social exile, paternalistic, pay-it-forward capitalism (“Theory Y”) replaced the old, meaner industrial elite, and the existing upper class found themselves increasingly de-fanged as the social distance between them and the rising middle class shrank. It was around 1980 that they began to fight back with a force that society couldn’t ignore. The failed, impractical Boomer revolutions of the late 1960s were met, about 10 to 15 years later, with a far more effective “yuppie” counterrevolution that won. Randism became its guiding philosophy. And, boy, did it prove to be wrong about many things.

What did Rand get wrong?

Ayn Rand died in 1982, before she was able to see any of her ideas in implementation. Her vision was of the individual capitalist as heroic and excellent. What we got, instead, was these guys.

Ayn Rand interpreted capitalism through a nostalgic view of industrial capitalism, when it was already well into its decline. The alpha male she imagined running a large industrial operation no longer existed; the frontier had closed, and the easy wins available to risk-seeking but rational egoists (as opposed to social-climbing bureaucrats) had already been taken. The world was in full swing toward corporate capitalism, which has been taking on an increasingly collectivist character for the past forty years.

Corporatism turns out to combine the worst of both capitalism and socialism. Transportation, in 2013, is a perfect microcosm of this. Ticket prices are volatile and fare-setting strategies are clearly exploitative– the worst of capitalism– while the service rendered is of the quality you might expect from a disengaged socialist bureaucracy; flying on an airplane today is certainly not the experience one would get from a triumphant capitalistic enterprise.

Suburbia also has a “worst of both worlds” flavor, but of a more vicious nature, being more obvious in how it merges two formerly separate patterns of life to benefit one class of people and harm another. By the peak of U.S. suburbanization, almost everyone (rich and poor) lived in a suburb, and this was deemed the essence of middle-class life. Suburbia is well-understood as a combination of urban and rural life– an opportunity for people to hold high-paying urban jobs, but live in more spacious rural settings. What’s missed is that, for the rich, it combines the best of both lifestyles– it gives them social access, but protects them from urban life’s negatives; for the poor, it holds the worst of both– urban crime and violence, rural isolation.

This brings us directly to the true nature of corporate capitalism. It’s not really about “making money”. Old-style industrial capitalism was about the multiplication of resources (conveniently measured in dollar amounts). New-style corporate capitalism is about social relationships (many of those being overtly extortive) and “connections”. It’s about providing the best of two systems– capitalism and socialism– for a well-connected elite. They get the outsized profit opportunities (“performance” bonuses during favorable market trends that should more honestly be appreciated as luck) of capitalism, but the cushy assured favoritism and placement (acq-hires and “entrepreneur-in-residence” gigs) of socialism. Everyone else is stuck with the worst of both systems: a rigged and conformist corporate capitalism that will gladly punish them for failure, but that will retard their successes via its continual demands for social permission.

What’s ultimately fatal to Rand’s ideology– and she did not live long enough to see it play out this way– was the fact that the entrepreneurial alpha males she was so in love with (and who probably never existed, in the form she imagined) never came back. In the 1980s, the world was sold to emasculated, influence-peddling, social-climbing private-sector bureaucrats, and not heroic industrialists. Whoops!

What we now have is a world that claims to be (and is) capitalistic, but is run by the sorts of parasitic, denial-focused, militantly mediocre position-holders that Rand railed against. This establishes her ideology as a failed one, and the elitism-is-cool-again “yuppie” counterrevolution of the 1980s has thus been shown to be just as impractical and vacuous as the 1960s “hippie” movement and the authoritarian leftism of the “Weathermen”. Unfortunately, it was a far more effective– and, thus, more damaging– one, and we’ll probably be spending the next 15 years cleaning up its messes.

Why an Atlas Shrugged smart people strike would never work.

I’m not a major fan of Ayn Rand, but one of the more attractive ideas coming out of her work is from Atlas Shrugged, written about a world in which the “people of the mind”– business leaders, artists, philosophers– go on strike. It’s an attractive idea. What would happen if those of us in the “cognitive 1 percent” decided, as a bloc, to secede from the mediocrity of Corporate America? Would we finally get our due? Would we stop having to answer to idiots? Would the dumb-dumbs come crawling to us, begging that we return?

No. That would never happen. They have as much pride as we do.

It’s an appealing concept, for sure. Individually, not one of us is indispensable to society– that’s not a personal statement; no one person is that important. Any one of us could be cast into the flames with little cost to society. Yet we tend to feel that, as a group, we are critical. We’re right. I am insignificant, but societies live or die based on what proportion of the few thousand people like me per generation get their ideas into implementation, and it’s only after the fact that one knows which side of the critical percentage a society is on. Atlas could shrug. Society could be brought to its knees if the most intelligent people developed a tribal identity, acted as a political bloc, were still ignored, and chose to secede. Science and the arts would stagnate, the economy would fall into decline, and society would be unable to correct its own morale problems. The culture would crater, innovation would die, and whatever society endured such a “strike” would quickly fall to third-class status on the world stage.

That doesn’t mean we, the smart people who might threaten such a strike, would get whatever we want. Imagine trying to extort a masochist. “I’ll beat you up unless you give me $100.” “You mean I can not give you $100 and get beaten up? For free? I’ll take that option; you’re so kind.”

I don’t mean to call society masochistic, because it isn’t. Societies don’t make choices; people in them do, often with minimal or no concern for the upkeep of this edifice we call “civilization”. Now, the people at the top of ours (Corporatist America) are stupid, short-sighted, uncultured, and defective human beings. All of that is true. But to assess them as weak because of this is inaccurate. I’m pretty sure that crocodiles don’t crack 25 on an IQ test, but I wouldn’t want to be in a physical fight with one. These people are ruthless and competitive, and they’re very good at what they do– which is to acquire and hold position, even if it requires charming people (including people like us, much smarter than they are) to get it. They’d also rather reign in hell than serve in heaven. That’s why we’ll never be able to pull an Atlas Shrugged move against them. They care far more about their relative standing in society than about its absolute level of health. We’d be giving them exactly what they want: less competition for the high social status they currently hold.

Also, I think that an Atlas Shrugged phenomenon is already happening in American society, with so little fanfare as to render it comically underwhelming. Smart people all over the country are underperforming, mostly not by choice, but because they are not getting opportunities to excel. Scientists spend an increasing amount of time applying for grants and lobbying their bosses for the autonomy that they had, implicitly, a generation ago. The quality of our arts has suffered substantially. Our political climate is disastrous and right-wing because a lot of intelligent people have simply given up. Has the elite looked at the slow decline of the society and said, “Man, we really need to treat those smart people better, and hand our plum positions over to those who actually deserve them”? Nope. That has not happened, and it would be absurd to expect it; the current elite has too much pride. And if we scaled that up from unintentional, situational underperformance to a full-fledged strike of the cognitive elite, we would simply be ignored. We wouldn’t bring society to a calamitous break and get our due. We’d see slow decay, and the only people smart enough to connect our strike to that degradation would be the strikers themselves. We already have a pervasively mediocre society, and things still work– not well, but we haven’t seen catastrophic society-wide failures yet. It might get to that point, but by then it will be too late for the kind of action we might want.

In sum…

  • fantasy: the cognitive elite could go on “strike” and the existing elite (corporate upper class, tied together by social connections rather than anything related to excellence) would, after society fell to pieces, beg us to rejoin on our terms, inverting the power dynamic that currently exists between us and them.
  • reality: those parasitic fuckers don’t give a shit about the broad-based health of society. We’re not exactly a real competitive threat to them because they hold most of the power, but we do have some power and we’d just be making their lives easier if we withdrew from the world and gave that power up entirely.

As intellectuals, or at least as people who aspire to be such, we look at civil decline as tragic and painful. When we learn about expansive civilizations that fall into decadence and ruin, we tend to imagine it as a personal death that’s directly experienced, rather than a gradual historic change that few people notice in contrast to the day-to-day struggles of higher personal importance. So we often delude ourselves into thinking that “society” has its own will and makes “choices” according to its own interests, as opposed to the parochial interests of whatever idiots happen to be running it at the time. Thus, we believe that if “society” refuses to listen to our ideas and place us in appropriately high positions, we can withdraw as a bloc, render it ineffective, and impel it to “come crawling back” to us with better terms. We’re dead wrong in believing that this is a possibility. Yes, we can render it ineffective through underperformance (hell, it’s already arguably at that point, just based on the pervasive conformity and mediocrity that have declawed most of us) but this reorganization that we seek will never happen. We tend to overestimate the moral character– while underestimating the competitive capability (again: think crocodiles)– of our enemies. They are all about their own egos and they will gladly have society burn just to stay on top.

One concrete example of this is in software engineering, where the culture is mostly one of anti-intellectualism and mediocrity. Why is it this way? Given that an elite programmer can be 10 to 100 times as effective as a mediocre code monkey, why do companies tailor their environments to the hiring en masse of unskilled “commodity” developers? Bad programmers are not cheap; they’re hilariously expensive. So what’s going on? The answer is that most managers don’t care about the good of the company; it’s their own egos they want to protect. Suppose a good programmer costs only 25 percent more than a mediocre one but is five times as effective. Why not hire the good one, then? Because then the manager loses his real motivation for going to work: being the smartest guy in the room, and the unambiguous alpha male. Saving the company some money is not, to most managers, worth that price.
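The arithmetic above can be made explicit in a few lines. The 25 percent salary premium and the 5x effectiveness multiple are the essay’s own hypotheticals, and the base salary is an arbitrary illustration, not market data:

```python
# Back-of-the-envelope comparison of cost per unit of engineering
# output. The 25 percent premium and 5x effectiveness multiple are the
# text's hypothetical figures; the $100,000 base salary is arbitrary.

def cost_per_output(salary: float, effectiveness: float) -> float:
    """Dollars spent to obtain one unit of engineering output."""
    return salary / effectiveness

mediocre = cost_per_output(salary=100_000, effectiveness=1.0)
good = cost_per_output(salary=125_000, effectiveness=5.0)

print(f"mediocre: ${mediocre:,.0f} per unit of output")  # $100,000
print(f"good:     ${good:,.0f} per unit of output")      # $25,000
```

By this arithmetic the “expensive” hire delivers output at a quarter of the cost per unit, which is exactly why the essay calls bad programmers hilariously expensive.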

What we fail to realize, as the cognitive 1 percent, is that while society abstractly relies on us, the people running society think we’re huge pains in the ass and would be thrilled not to have to deal with us at all.

Do I believe that it’s time for the cognitive 1 percent to mobilize, and to take back our rightful control over society’s direction? Absolutely. In fact, I think it’s a moral responsibility, because the world is facing some problems (such as climate change) too complex for the existing elite to solve. The incapacity and mediocrity of our current corporate elite is literally an existential risk to humanity. We ought to assert ourselves, as a group, and start fixing the world. But the Atlas Shrugged model is the wrong way to go about that.

The U.S. upper class: Soviet blatnoys in capitalist drag.

One thing quickly learned when studying tyranny (and lesser, more gradual failures of states and societies, such as those observed in the contemporary United States) is that the ideological leanings of tyrants are largely superficial. Those are stances taken to win popular support, not sincere moral positions. Beneath the veneer, tyrants are essentially the same, whether fascist, communist, religious, or centrist in nature. Supposedly “right-wing” fascists and Nazis would readily deploy “socialist” innovations such as large public works projects and social welfare programs if it kept society stable in a way they preferred, while the supposedly “communist” elites in the Soviet Union and China were self-protecting, deeply anti-populist, and brutal– not egalitarian or sincerely socialist in the least. The U.S. upper class is a different beast from these and, thus far, less malevolent than the communist or fascist elites (although if it remains unchecked, this will change). It probably shares the most in common with the French aristocracy of the late 18th century: slightly right-of-center, half-hearted in its authoritarianism, but deeply negligent and self-indulgent. For a more recent comparison, I’m going to point out an obvious and increasing similarity between the “boardroom elite” (individuals who receive high positions in established corporations despite no evidence of high talent or hard work) and an unlikely companion: the elite of the Soviet Union.

Consider the Soviet Union. Did political and economic elites disappear when “business” was made illegal? No, not at all. Did the failings of large human organizations suddenly have less of a pernicious effect on human life? No; the opposite occurred. What was outlawed, effectively, was not the corporation (corporate power existed in the government) but small-scale entrepreneurship– a necessary social function. Certainly, elitism and favoritism didn’t go away. Instead, money (which was subject to tight controls) faded in importance in favor of blat, an intangible social commodity describing social connection as well as the peddling of influence and favors. With the money economy hamstrung by capitalism’s illegality, blat became a medium of exchange and a mechanism of bribery. People who were successful at accumulating and using social resources were called blatnoys. The blatnoy elite drove their society into corruption and, ultimately, failure. But… that’s irrelevant to American capitalism, right?

Well, no. Sadly, corporate capitalism is not run by “entrepreneurs” in any sense of the word. Being an entrepreneur is about putting capital at risk to achieve a profit. Someone who gets into an elite college because a Senator owes his parents a favor, spends four years in investment banking getting the best projects because of family contacts, gets into a top business school because his uncle knows disgusting secrets about the dean of admissions, and then is hired into a high position in a smooth-running corporation or private equity firm, is not an entrepreneur. Anything but. That’s a glorified private-sector bureaucrat at best and, at worst, a brazen, parasitic trader of illicit social resources.

There are almost no entrepreneurs in the American upper class. This claim may sound bizarre, but first we must define terms– namely, “upper class”. Rich people are not automatically upper class. Steve Jobs was a billionaire but never entered it; he remained middle-class (in social position, not wealth) his entire life. His children, if they want to enter its lower tier, have a shot. Bill Gates is lower-upper class at best, and has worked very hard to get there. Money alone won’t buy it, and entrepreneurship is (by the standards of the upper class) the least respectable way to acquire wealth. Upper class is about social connections, not wealth or income. It’s important to note that being in the upper class does not require a high income or net worth; it does, however, require the ability to secure a position of high income reliably, because the upper class lifestyle requires (at a minimum) $300,000 after tax, per person, per year.

The wealth of the upper class follows from social connection, and not the other way around. Americans frequently make the mistake of believing (especially when misled on issues related to taxation and social justice) that members of the upper class who earn seven- and eight-digit salaries are scaled-up versions of the $400,000-per-year, upper-middle-class neurosurgeon who has been working intensely since age 4. That’s not the case. The hard-working neurosurgeon and the well-connected parasite are diametric opposites, in fact. They have nothing in common and could not stand to be in the same room together, because their values are too much at odds. The upper class views hard work as risky and therefore a bit undignified. It perpetuates itself because there is a huge amount of excess wealth that has congealed at the apex of society, and it’s relatively easy to exchange money and blat on an informal but immensely pernicious market.

Consider the fine art of politician bribery. The cash-for-votes scenario, as depicted in the movies, is actually very rare. The Bush family did have their “100k club” back when campaign contributions were limited to $1,000 per person, but entering that set required arranging for 100 people to donate the maximum amount. Social effort was required to curry favor, not merely a suitcase full of cash. Moreover, anyone who walked into even the most corrupt politician’s office today offering $100,000 in cash for a vote would get a nasty reception. Most scumbags don’t realize that they’re scumbags, and to make a bribe that overt is to call a politician a scumbag. Instead, politicians must be bribed in more subtle ways. Want to own a politician? Throw a party every year in Aspen. Invite up-and-coming journalists just dying to get “sources”. Then invite a few private-equity partners so the politician has a million-dollar “consulting” sinecure waiting if the voters wise up and fire his pasty ass. Invite deans of admissions from elite colleges if he has school-age children. This is an effective strategy for owning (eventually) nearly all of America’s decision makers, but it’s hard to pull off if you don’t already own any of them. What I’ve described is the process of earning interest on blat and, if it’s done correctly and without scruples, the accrual can occur rapidly– for people with enough blat to play.

Why is such “blat bribery” so common? It makes sense in the context of the mediocrity of American society. Despite the image of upper management in large corporations as “entrepreneurial”, these managers are not entrepreneurs at all. They’re not the excellent, the daring, the smartest, or the driven. They’re successful social climbers; that’s all. The dismal and probably terminal mediocrity of American society is a direct result of the fact that (outside of some technological sectors) it is incapable of choosing leaders on merit, so decisions of leadership often come down to who holds the most blat. Those who thrive in corporate so-called capitalism are not entrepreneurs but the “beetle-like” men who thrived in the dystopia of George Orwell’s 1984.

Speaking of this, what is corporate “capitalism”? It’s neither capitalism nor socialism, but a clever mechanism employed by a parasitic, socially-closed but internally-connected elite to provide the worst of both systems (the fall-flat risk and pain of capitalism, the mediocrity and procedural retardation of socialism) while providing the best (the enormous rewards of capitalism, the cushy safety of socialism) of both for themselves.

These well-fed, lily-livered, intellectually mediocre blatnoys aren’t capitalists or socialists. They’re certainly not entrepreneurs. Why, then, do they adopt the language and image of alpha-male capitalist caricatures more brazen than even Ayn Rand would write? It’s because entrepreneurship is a middle-class virtue. The middle class of the United States (for understandable reasons) still has a lot of faith in capitalism. The upper class knows that it has to seem deserving of its parasitic hyperconsumption, and to present the image of success as perceived by the populace at large. Corporate boardrooms provide the trappings it requires for this. If the middle class were to suddenly swing toward communism, these boardroom blatnoys would be wearing red almost immediately.

Sadly, when one views the social and economic elite of the United States, one sees blatnoys quite clearly if one knows where to look for them. Fascists, communists, and the elites of corporate capitalism may have different stated ideologies, but (just as Stephen King suggested that The Stand’s villain, Randall Flagg, could accurately represent any tyrant) they’re all basically the same guy.

Criminal Injustice: The Bully Fallacy

As a society, we get criminal justice wrong. We have an enormous number of people in U.S. prisons, often for crimes (such as nonviolent drug offenses) that don’t merit long-term imprisonment at all. Recidivism is shockingly high as well. On the face of it, it seems obvious that imprisonment shouldn’t work. Imprisonment is a very negative experience, and a felony conviction has long-term consequences for people who are already economically marginal. The punishment is rarely matched appropriately to the crime, as seen in the (racially charged) discrepancies in severity of punishment for possession of crack versus powder cocaine. What’s going on? Why are we doing this? Why are the punishments inflicted on those who fail in society often so severe?

I’ll ignore the more nefarious but low-frequency ills behind our heavy-handed justice system, such as racism and disproportionate fear. Instead, I want to focus on a more fundamental question. Why do average people, with no ill intentions, believe that negative experiences are the best medicine for criminals, despite the overwhelming amount of evidence that most people behave worst after negative experiences? I believe that there is a simple reason for this. The model that most people have for the criminal is one we’ve seen over and over: The Bully.

A topic of debate in the psychological community is whether bullies suffer from low or high self-esteem. Are they vicious because they’re miserable, or because they’re intensely arrogant to the point of psychopathy? The answer is both: there are low-self-esteem bullies and high-self-esteem bullies, and they have somewhat different profiles. Which is more common? To answer this, it’s important to make a distinction. With physical bullies– usually boys who inflict pain on others because they’ve had it done to themselves– I’d readily believe that low self-esteem is more common. Most physical bullies are exposed to physical violence either by a bigger bully or by an abusive parent. Also, physical violence is one of the most self-damaging and risky forms of bullying there is. Choosing the wrong target can put the bully in the hospital, and the consequences of being caught are severe. Most physical bullies are, on account of their coarse and risky means of expression, in the social bottom 20 percent of the class of bullies. On the whole, and especially when one includes adults in the set, most bullies are social bullies. Social bullies include “mean girls”, office politickers, those who commit sexual harassment, and gossips who use the threat of social exclusion to get their way. Social bullies may occasionally use threats of physical violence, usually by proxy (e.g. a threat of attack by a sibling, romantic partner, or group), but their threats generally involve the deployment of social resources to inflict humiliation or adversity on other people. In the adult world, almost all of the big-ticket bullies are social bullies.

Physical bullies are split between low- and high-self-esteem bullies. Social bullies, the only kind that most people meet in adult life, are almost always high-self-esteem bullies, and often get quite far before they are exposed and brought down. Some are earning millions of dollars per year, as successful contenders in corporate competition. Low-self-esteem bullies tend to be pitied by those who understand them, which is why most of us have no desire to hunt down the low-self-esteem bullies who bothered us as children. It’s high-self-esteem bullies that gall people the most. High-self-esteem bullies never show remorse, are often excellent at concealing the damage they do– even to the point of deflecting the consequences of their actions onto the bullied instead of themselves– and they generally become more effective as they get older. It’s easy to detest them; it would be unusual not to.

How is the high self-esteem bully relevant to criminal justice? At risk of being harsh, I’ll assert what most people feel regarding criminals in general, because for high-self-esteem bullies it’s actually true: the best medicine for a high self-esteem bully is an intensely negative and humiliating experience, one that associates undesirable and harmful behaviors with negative outcomes. This makes high-self-esteem bullies different from the rest of humanity. They are about 3 percent of the population, and they are improved by negative, humiliating experiences. The other 97 percent are, instead, made worse (more erratic, less capable of socially desirable behavior) by negative experiences.

The most arrogant people only respond to direct punishment, because nothing else (reward or punishment) can matter to them, coming from people who “don’t matter” in their minds. Rehabilitation is not an option, because such people would rather create the appearance of improvement (and become better at getting away with negative actions) than actually improve themselves. The only way to “matter” to such a person is to defeat him. If the high-self-esteem bully’s negative experiences are paralyzing, all the better.

Before going further, it’s important to say that I’m not advocating a massive release of extreme punishment on the bullies of the world. I’m not saying we should make a concerted effort to punish them all so severely as to paralyze them. There are a few problems with that. First, it’s extremely difficult to distinguish, on an individual basis, a high-self-esteem bully from a low-self-esteem one, and inflicting severe harm on the latter kind will make him worse. Humiliating a high-self-esteem bully punctures his narcissism and hamstrings him, but doing the same to a low-self-esteem bully accelerates his self-destructive addiction to pain (for self and others) and leads to erratic, more dangerous behaviors. What comes to mind is the behavior of Carl in Fargo: he begins the film as a “nice guy” criminal but, after being savagely beaten by Shep Proudfoot, he becomes capable of murder. In practice, it’s important to know which kind of bully one is dealing with before deciding whether the best response is rehabilitation (for the low-self-esteem bully) or humiliation (for the high-self-esteem bully). Second, if bullying were associated with extreme punishments, the people attracted to positions with the power to affix the “bully” label would be, in reality, the worst bullies (i.e. a witch hunt). That high-self-esteem bullies are (unlike most people) improved by negative experience is a claim I believe few would doubt, but “correcting” this class of people at scale is a very hard problem, and doing so severely involves risk of morally unacceptable collateral damage.

How does this involve our criminal justice policy? Ask an average adult to name the three people he detests most among those he personally knows, and it’s very likely that all will be high-self-esteem bullies, usually (because physical violence is rare among adults) of the social variety. This creates a template to which “the criminal” is matched. We know, as humans, what should be done to high-self-esteem bullies: separation from their social resources in an extremely humiliating way. Ten years of extremely limited freedom and serious financial consequences, followed by a lifetime of difficulty securing employment and social acceptance. For the office politicker or white-collar criminal, that works and is exactly the right thing. For the small-time drug offender or petty thief? Not so much. It’s the wrong thing.

Most caught criminals are not high self-esteem bullies. They’re drug addicts, financially desperate people, sufferers of severe mental illnesses, and sometimes people who were just very unlucky. To the extent that there are bullies in prison, they’re mostly the low-self-esteem kind– the underclass of the bullying world, because they got caught, if for no other reason. Inflicting negative experiences and humiliation on such people does not improve them. It makes them more desperate, more miserable, and more likely to commit crimes in the future.

I’ve discussed, before, why Americans so readily support the interests of the extremely wealthy. Erroneously, they believe the truly rich ($20 million net worth and up) to be scaled-up versions of the most successful members of the middle class. They conflate the $400,000-per-year neurosurgeon who has been working hard since she was 5 with the parasite who earns $3 million per year “consulting” with a private equity firm on account of his membership in a socially closed network of highly consumptive (and socially negative) individuals. Conservatives mistake the rich for the highly productive because, within the middle class, this correlation of economic fortune and productivity makes some sense, while it doesn’t apply at all to society’s extremes. The same error is at work in the draconian approach this country takes to criminal justice. Americans project the faces of the bullies onto the criminal, assuming society’s worst actors and most dangerous failures to be scaled-up versions of the worst bullies they’ve dealt with. They’re wrong. The woman who steals $350 of food from the grocery store out of desperation is not like the jerk who stole kids’ lunch money for kicks, and the man who kills someone believing God is telling him to do so (this man will probably require lifetime separation from society, for non-punitive reasons of public safety and mental-health care) is not a scaled-up version of the playground bully.

In the U.S., the current approach isn’t working, of course, unless its purpose is to “produce” more prisoners (“repeat customers”). Few people are improved by prison, and far fewer are helped by the extreme difficulty that a felony conviction creates in the post-incarceration job search. We’ve got to stop projecting the face of The Bully onto criminals– especially nonviolent drug offenders and mentally ill people. Because right now, as far as I can tell, we are The Bully. And reviewing the conservative politics of this country’s past three decades, along with its execrable foreign policy, I think there’s more truth in that claim than most people want to admit.

United States 4.0, or: why you should welcome American socialism

Since the American Declaration of Independence, three distinct national identities have existed. Historians have characterized them in a number of ways depending on what aspect of the nation they wish to analyze, and I will do my best to characterize them in a way that is economically interesting, and that can inform us about our future. These delineations have been gradual rather than sharp, and all three of these identities remain in our culture today. Still, for the sake of simplicity and analysis, I must define (somewhat arbitrary) boundaries. From 1776 to approximately 1870, we were a nation of citizens. From about 1870 to approximately 1950, we were a nation of producers. From about 1950 until now, we’ve been a nation of consumers. Perhaps for wholly coincidental reasons, each of these transitions has coincided with a difficult and violent period of history. The Civil War in the mid-19th century was one of the nation’s bloodiest, and the World Wars of the 20th century were utterly catastrophic for Europe. Likewise, the American transition to its next phase will coincide in time with a World Revolution that, although I wish for it to be entirely peaceful– and, noting the example offered by Northern European nations that have already peacefully adopted rationalistic, libertarian socialism, I believe it can be nonviolent– probably will not be so, at least not in all corners of the world where it will take place. Before discussing the American nation’s next incarnation, it’s worth discussing the advances and the ultimately fatal flaws of the three that have existed.

1. Citizen America: rational but elitist, enlightened but hypocritical.

The United States, despite the tarnished reputation it has earned on account of its hypocritical, underaccomplished and already-dying Empire, deserves one hell of a lot of credit. For all its flaws, it’s a great country, and this nation was one of the modern world’s first attempts, if not the first, at rational government at such a vast geographic scale. Stepping away from the unreliable leadership offered by hereditary kings and religious clerics, the nation’s architects designed a political framework with the intention of building an enlightened republic. They certainly did not, for the most part, intend direct democracy. This concept seemed radical even to most of them. What they wanted was a nation governed by what would today be considered an aristocracy, but for the benefit of all people, in which the most educated and genteel 1-20 percent would be citizens, or peers, with the right to vote and the same legal status as a legislator or president.

Often it’s claimed that America’s founders would be appalled by the state of the nation today, either because its integrity has been compromised by plutocracy (as the left alleges) or because the federal government has become so expansive (as the right alleges). I disagree. As educated and rational people, they knew that even the best governmental structures can only mitigate the innate instability of popular governments. I think they would be pleasantly surprised, if not shocked, that the government that they built (a) actually tried democracy, to mixed results but certainly more success than even an optimist in their time would have predicted, and (b) remained intact for over 200 years, even in the midst of true revolutions (some peaceful, such as FDR’s New Deal; others not). Nations live a long time, but governments very rarely survive even one human lifetime, much less three.

Thomas Jefferson envisioned an agrarian utopia in which farmers would plow the fields in the summer and study the classics in winter. Federalists like Alexander Hamilton wanted to use the new nation’s abundant land and natural resources to build an industrial powerhouse. Rationalistic freethinkers like Thomas Paine and Ben Franklin wanted to establish a fully secular government and a nation in which any religion, so long as it did not impose its will on others, could be honored. To far greater success than a cynic would have imagined, these visions were realized and, for quite some time, worked.

The Jeffersonian notion of an agrarian utopia and the Federalists’ championing of industry deserve special mention in light of how contrary they were to the more cynical and pessimistic view of life common in much of Europe at the time. Malthus, the ultimate pessimist, argued at the 18th century’s end that the world population would reach such a state of congestion as to wreak apocalyptic conditions upon the human species. His mathematical model (which held economic growth to be linear, rather than exponential, leading to its inevitable failure to match the pace of population growth) was completely wrong, but his conclusion agreed with much of popular thought, and it would have been correct had the Industrial Revolution (already in its early stages) not accelerated. The Malthusian worldview resembled that of mercantilist economics, which held a zero-sum view of trade. By contrast, self-reliant farmers and creative industrialists embody the opposite of zero-sum behavior; they add more wealth to the world than they take from it. (Although industrialists could be cruel and self-serving, their efforts were undoubtedly positive-sum, at least until externalized environmental costs became the evident and severe problem they are now.)
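The structure of Malthus’s error can be made concrete with a toy model. All starting values and growth rates below are arbitrary illustrations, chosen only to show why compounding population must eventually overtake linearly growing output, and why the crisis disappears once output also compounds:

```python
# Toy Malthusian model: population compounds geometrically while food
# output grows only arithmetically (linearly). All numbers hypothetical.

def malthusian_crossover(pop=100.0, pop_growth=0.02,
                         food=150.0, food_increment=2.0,
                         max_years=500):
    """First year in which population exceeds food supply, or None."""
    for year in range(1, max_years + 1):
        pop *= 1 + pop_growth   # exponential (geometric) growth
        food += food_increment  # linear (arithmetic) growth
        if pop > food:
            return year
    return None

def exponential_crossover(pop=100.0, pop_growth=0.02,
                          food=150.0, food_growth=0.02,
                          max_years=500):
    """Same model, but output compounds at the same rate as population."""
    for year in range(1, max_years + 1):
        pop *= 1 + pop_growth
        food *= 1 + food_growth
        if pop > food:
            return year
    return None
```

With these inputs, `malthusian_crossover()` finds a crisis within a few decades, while `exponential_crossover()` returns `None`: equal compounding preserves the initial surplus indefinitely, which is the structural possibility Malthus’s linear-output assumption ruled out.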

This “Citizen Nation” had its share of quite obvious problems. Though it championed positive-sum progressivism, it was founded on land that was stolen in a campaign of execrable violence against indigenous people. Moreover, by modern standards it was elitist to the point of repugnant hypocrisy. Black slaves were treated abysmally, and an underclass of immigrant and freed-slave workers emerged in the 19th century, especially in Northern cities. For every abolitionist, feminist or liberal wishing to expand the definition of “citizen”, there was a status-quo conservative trying to hold this pressure back. This tension, as Americans all know, resulted in the Secession Crisis and Civil War in the 19th century.

Each of these iterations of the United States is a world-leading liberal model at its inception and falls prey to reactionary conservatism toward its precarious end. Thomas Jefferson, an enlightened statesman in his day, would be a reactionary and an egregious racist by modern standards. Those who stood with him, ideologically speaking, in 1785, and had not moved by 1855, found themselves on the wrong side of history, especially on the matter of slavery. They had become like Preston Brooks– far more the father of the “Tea Party” conservative movement than any of America’s 18th-century “founding fathers”, who would despise that movement’s religious radicalism and anti-intellectualism.

By Reconstruction, the definition of “citizen” had expanded greatly. Although not there yet, the nation was well on its way to universal suffrage. Though a positive change on balance, this also diluted the meaning of the word “citizen”. Suddenly, penniless people who were working 16 hours per day and therefore, through no fault of their own, utterly ignorant of complex matters, were trusted with the vote. Although necessary from a humanitarian perspective, since the “enlightened aristocracy” could not be trusted to rule in the peoples’ best interests, the extension of suffrage to poorly-educated workers led, in part, to the corrupt machine politics of the Gilded Age.

2. Producer America: scientific but bellicose, industrious and cruel.

The American nation of the Gilded Age was, by historical standards as well as comparative standards in its time, very wealthy. Although this period is sometimes regarded as a national nadir, it was the point at which the average American’s standard of living began to rise at a perceptible pace. The average person’s standard of living actually began improving a couple centuries before that, but so gradually that it could not be perceived from year to year or even from decade to decade; by the 19th century, this changed and progress was evident. Of course, the distribution of these gains was not merely unjust, but laughably lopsided. At the same time, ethnic tensions were growing rapidly. Political corruption was high, and toward the end of this era, the government had slouched toward plutocracy.

Due to the Industrial Revolution, there were immense gains made in this much-maligned era. Electricity became available, and food became abundant. Petroleum was discovered, preventing an otherwise disastrous scarcity of whale oil (much like that which awaits us in a couple of decades if we do not move away from our dependence on petroleum) from upsetting economic progress. The progressive movement brought suffrage to women and made the first (disastrously failed) efforts at ending international conflict. Moreover, a nation that had once been deeply racist had been forced, by Producer America’s end circa 1950, to not merely tolerate but to embrace ethnic diversity.

An important cultural change emerged around this time. The Puritan work ethic had been replaced by a more enlightened descendant: the American work ethic. Although it may seem bizarre to associate an “American work ethic” with virtue, with the nation wracked then and now by overwork, it was in many ways a positive force. With what seemed to be the final destruction of aristocracy (before its re-emergence as corporate capitalism, with its entitled and crypto-hereditary caste of “executives” who only ape the trappings of those who work) in the United States and Europe, the goal of many people became not to establish a parasitic lifestyle in which they did not work yet had others work for them, but a productive lifestyle in which, due to the creative use of technology and social advancements, they could work and have a life of quality. Technological leverage made it possible to produce at a high level without sacrifice and toil. In essence, that is the goal of the American worker: to live a life that is of value to others, but to do so without such severe sacrifice (as most working people, historically, had to make) as to render that life miserable for the person living it.

Because its vices and crimes were like those of our own much milder present-day Gilded Age, the faults of Producer America hardly need explanation. The poor were treated abysmally, working conditions were calamitous, and democracy withered as plutocracy advanced. Although the era was rich in comparison to what came before it, and idyllically peaceful at its inception compared to the horrific wars that came at its end, the Gilded Age lingers in the American imagination as a warning of what not to become (and also what we, for the past 30 years or so, have been rapidly becoming): a corrupt plutocracy. The danger, one notes, of a limited, enlightened, and libertarian government is that unelected and often unchecked corporate power may step in and become just as onerous as the monarchies, empires, and theocracies that reigned before the Age of Reason.

Producer America became a victim of its own success. An abundance of crops, which any previous historical era would have considered a wonderful problem to have, caused a severe drop in food prices, leading to rural poverty. As poverty is a cancer that, without intervention, grows until it devours a whole society, this led to the Great Depression. Following that was a peaceful and entirely legal revolution known as the New Deal, set against the backdrop of some horrific and violent revolutions (often in the name of extreme leftism) in much of the world, and the second act of the Great War, which destroyed the intellectual respectability of racism (although one must note that racism never should have been intellectually respectable) and exposed colonialism as a calamitous failure.

Keynesian economics was also born during this time and, with it, the recognition that it was useless to produce goods if no one could afford to buy them. Economists discovered that, contrary to the claims of conservative moralists, poverty did not “build character” or discourage laziness, but was a systemic malignancy capable of destroying an entire society. This led, logically, to the recognition that government needed to fight poverty aggressively. Also, the New Deal and the demand generated by American participation in the Second World War created, for the first time in history, a true middle class that encompassed the majority of a nation’s population. New products such as refrigerators, air conditioning, and air travel became not merely available, but available to average people shortly after their inception. Consumer America had begun.

3. Consumer America: brilliant but hollow, powerful but fragile.

Consumer America began with an era that is often considered to be the American “Golden Age”, spanning from 1945 to 1973. Although deeply flawed, especially in the treatment of women and racial minorities, this was a time of widespread economic and technological optimism within the United States. The forty-hour work week– a major victory– had been achieved by the labor movement, and children born in the 1950s believed that a 20-hour workweek would be established once they were in middle age. (Whoops!)

AMC, perhaps by an odd coincidence, is running two television shows focused on the exact high and low points (temporally speaking) of Consumer America: Mad Men and Breaking Bad, erotic and thanatoptic portrayals of an era’s bookends. The first of these shows, set in the advertising industry in the 1960s, illustrates the giddy optimism (coupled with its necessary antithesis: the sardonic cynicism of the era’s most intelligent, embodied in Don Draper) of a nation flush with new products, and in which the merely middle-class strivers of Madison Avenue can look forward to assured sunny (but perhaps a bit boring) futures. The latter drama, situated among the present day’s former middle class, revolves around the misery and severe, self-serving misanthropy of a failed genius and high-school chemistry teacher, forced into methamphetamine production by the 21st-century American pogrom (eradicated by the now more civilized Europeans) that we call “medical billing”. In these dramas, we see not only the ascendancy (for Don Draper) and collapse (for Walter White) of brilliant, cynical, ambitious and deeply dishonest men, but those of a society itself: Consumer America.

Consumer America’s moral emptiness became apparent early on, but proved the severity of its malignancy in the 1980s with the “Reagan Revolution”. In the 1950s, it may have been boring and empty, but it was inclusive. The most selfish and uncivil elements of society wanted to come to affluence far faster than others, but they wanted everyone to get there. If for no other reason, the betterment of all would build a more stable society, and even the most selfish rich people want that. With the belligerent post hoc elitism of the yuppie era, that changed, replaced by a mean-spirited and exclusive mentality rooted in the assumption that a thing was not good unless other people could not have it. The reactionary politics that became stylish, even among the educated upper-middle classes, in the 1980s gave encouragement to the religious radicalism, the so-called “neoconservatism”, and the militant right-wing insanity that ravaged the nation in the 1990s and 2000s.

A deep moral failure of Consumer America is the worship of consumption, even at the expense of production. The claim underlying the enormous rewards given to capitalism’s winners is that they’re the most productive people, but often that’s not the case, and it’s such a thin argument that only an idiot would buy into it. Far more often, they are hyperconsumers but not high-power producers. For example, celebrity culture encourages the worship of those who already consume a great deal of others’ attention. Here is the naked absurdity of post hoc elitism: because they are able to consume on an enormous scale, these “VIPs” are held to be of higher value, and on no other basis. Then there are the “executives”, the social climbers within private-sector bureaucracies, who have managed to acquire the status of capitalism’s high priests merely because they’ve demonstrated the acumen to become hyperconsumers. Unlike actual entrepreneurs, who really were high-power producers, they are merely adept consumers and social climbers, no different in form or activity from their counterparts in the aristocracy of 18th-century Versailles or corrupt clergies. Meanwhile, while the hyperconsumptive receive welfare hand-outs in the form of elitist and unnecessary tax breaks, those who actually produce things are getting hosed, in the form of outsourcing, layoffs, wages stagnant for over a decade, and a government that seems to have little regard for their interests. Those who actually work in this country are held in low regard, while the useless elite clerics of those empty gods (immortal and without character; unimprisonable and thus fearless) we call corporations make millions and run the country.

Consumer America is headed toward its fiery end. It had two high points– one in the 1950s and ’60s, the second in the late 1990s– but its inexorable decline began in 2001, when it became clear that this morally empty regime could not head off otherwise surmountable calamities. An educated and virtuous citizenry would never have allowed Bush to win re-election after he lied us into an illegal, unnecessary, enormously expensive, idiotic and evil war. Able and effective civil authorities, responsible for infrastructural integrity, would not have allowed a hurricane of only moderate severity (Katrina was severe at its peak, but only Category 2-3 when it hit New Orleans) to demolish a major city. Now, in the form of the Tea Party, the specter of right-wing violence has re-emerged as a frightening possibility. In 2011, it seems evident that Consumer America is about to end, abruptly and possibly violently.

As I discussed, the transition from Citizen to Producer America took place in the context of the Civil War. That from Producer to Consumer America took place during a peaceful and truly glorious revolution in the United States (the New Deal, the rise of progressive capitalism in the U.S. and, later, social democracy in Europe) but while one of history’s most horrendous wars raged abroad. It is possible that the transition out of Consumer America, which will kill corporate capitalism and humble our upper class– benign compared to historical counterparts, but exceedingly arrogant– will be peaceful. Certainly I hope for this. It could also be enormously violent, in the context of a Paris-1793-style uprising.

4. World Revolution, and American socialism

A World Revolution, launched by the Internet as well as Europe’s experiments with federalism, libertarian socialism, and Second Enlightenment humanism, is taking place. Likely to continue for 100 years, it will radically reshape the globe. For one thing, the concept of an impoverished “Third World” country will seem utterly bizarre to the children of 2125 when they learn of this notion in their history classes. The stark relationship between geography and economic well-being that exists now will (rightly) strike them as barbaric. At the World Revolution’s end, we will have a “wired” world characterized by rationalism and libertarian socialism. We may have achieved the indefinite lifespan by that point, and we are likely to be “post-scarcity” to a substantial degree. (The World Revolution is a colorful name for the frenetic stage of humanity one might call trans-scarcity.) By 2125, barring an ecological or exogenous catastrophe, both the intrinsic scarcity of primitive times and the artificial scarcity of corporate capitalism will be abolished, and being “poor” will mean having to get on a waiting list to visit the Moon. Utopia will not be accomplished, but what is achieved by that time will be closer to it than most people alive today even consider possible. Yet the question for those of us alive now, who do not expect to live so long, is: what will it be like to get there?

I’d love to believe that each chapter of the World Revolution shall be peaceful, but we’re already seeing that this is unlikely. Odds of a peaceful transition are good-to-excellent in the social democracies of Northern Europe, poor-to-fair in the United States, due to the likelihood of an authoritarian crackdown by our corporate elite, and very poor in corners of the world where violent tendencies still reign, and in which corrupt elites will wish to prevent their subjects from having access to the increasing bounty made available by technology (a tendency we already see in countries that disallow their people to use the Internet).

The first stage of trans-scarcity humanity, which began in the 1990s and could either end soon or persist for decades, has proven to be quite harsh for the United States. As I discussed, the spiral of poverty that created the Great Depression began with rural poverty, a result of agricultural plenty. The same is happening to all human labor in the United States. Human labor is gradually becoming obsolete. The ability for a well-positioned and bright person to earn a decent living in technology will exist for a few decades at least, but most people will never again be able to reliably earn a living selling labor to the market. Don’t look for that to come back; it never will. It is already at the point where one year of unpaid education or training is necessary to secure two years of paid work, and as the workplace becomes increasingly specialized, this ratio will deteriorate. The civil unrest this will create, in a society with a weak and denuded social safety net, will be immense.

The United States will eventually, as all countries must when confronted with this “problem of plenty”, wise up and choose libertarian socialism, eventually instituting a guaranteed minimum income and freely available training for what (very highly skilled) work remains necessary, but this isn’t likely to happen without a fight. Our upper classes profit enormously from the artificial scarcity imposed by corporate capitalism, and have no problem with using that system’s meltdown to increase the market price of their “protection”, establishing themselves in a similar manner to feudal Europe’s nobility. This is what they want: scarcity and fear instead of reason and plenty. In truth, they don’t care if the economy “collapses” from a middle-class perspective. The current American elite would love to see a lot of middle-class office jobs disappear outright because they really want to be able to hire white, college-educated maids at low wages.

It’s an open secret that the American corporate elite is preparing for violent unrest. The use of private mercenaries such as Blackwater/Xe Security in overseas conflicts is part of this, and the threat that these forces will cross the Rubicon is serious. Then there is the Tea Party. The movement itself is unlikely to become a major menace. Still, this not-really-grassroots movement deserves attention for the machinations (cf. the Koch brothers) that its existence proves. The Tea Party’s main purpose was to win the 2010 election for the Republican Party, and its secondary purpose is to discourage the left-leaning and disaffected from even considering violent revolution– it reminds us (sadly, correctly) that, if there is a violent revolution, it almost certainly won’t be a kind we would want– but there is also much experimentation being done. Fox News may seem like a sick joke, but it’s an experiment designed to assess how much bullshit the American people will take. Data is being collected on that, and if America’s reactionary movement ever needs a Goebbels, this data will be made available to him or her.

To call these efforts of the upper classes “conspiracies” is not a stretch. There almost certainly is not “one Conspiracy to rule them all”, and organizations like the Bilderberg Group and Skull and Bones almost certainly have less power than their detractors think they do, but lower-case-c conspiracies certainly exist, and aren’t even well hidden. They don’t need to be. With “friends” (legal, above-ground, but secretive and elitist institutions) like the corporations, who needs the shadowy enemies dreamt up by conspiracy theorists? The upper classes are self-serving, greedy, inbred and socially exclusive. This makes them innately conspiratorial with absolutely no need for cloak-and-dagger secret societies or the laughably simplistic intrigues dreamt up by conspiracy theorists.

On the other hand, the American elite, devoid of vision and purpose other than pure greed, might allow peace, accepting a decline in their relative status, just as European nobilities did in the wake of the French Revolution. One can only hope so. There is no reason whatsoever that the transition to libertarian socialism requires violence. It is just a sad and miserable fact that the American upper classes are likely to instigate it in order to preserve their power and social status. If they do use such violence, we have every right to respond. With 45,000 people murdered every year by health insurance companies, we have been merciful to remain nonviolent for as long as we have.

Despite the dark possibilities of the short-term future, once the World Revolution has resolved itself and we are into late trans-scarcity or early post-scarcity times, life will be quite excellent. Although the ability for an average or even well-above-average person to reliably “earn a living” by selling labor to the market will be gone forever, it won’t be needed in a plentiful world with a basic income that grows increasingly generous as the decades pass.

The comfort of a post-scarcity world is self-evident, but the secondary effects of it will be immense and predicted by few. When technology removes most of the drudgery involved in large-scale efforts, and when people are relieved of the crushing and all-consuming need to do paid work, which is often servile and of little general value, the masses will be liberated to concentrate on real work. The arts, science, spirituality, community service, experimental small businesses (startups) and education will flourish like never before, and humanity will shine to a degree that seems impossible at this time. Levels of intellectual brilliance and creative contribution once associated with “genius” will become commonplace.

To answer the obvious question, “Who will clean the toilets?”: in libertarian socialism, people will still do such jobs, because other people will still be able to pay them to do so (manually in the early phases; technologically, later on). Libertarian socialism does not eliminate the free market; rather, it ensures a basic social safety net, and then it gets out of the way and allows a truly free market economy to work. Free the people, then free the markets.

Corporate capitalism in its current semi-totalitarian form, enabled by the constant need for paid (read: corporate-approved) work, places the average American in the most fluid, affluent, and benign form of slavery known to history, but it is still wage slavery, and it will be relegated to history’s dust heap. Business corporations will exist, just as religious institutions, governments, and even hereditary monarchies still do (often in reduced and benign, or even beneficial, forms) in the Developed World. That they will exist is necessary, since rational libertarian socialism must recognize the right of the individual to start a business and become prosperous, and the result of success in business can be a large corporation. Some people and companies will get very rich and become quite influential, far above their peers in this regard. However, the oppressive and deleterious power currently held by the American elite will no longer exist once people are no longer dependent on it to earn a living wage.

Examining social and technological trends, the conclusions become obvious. Consumer America will end, perhaps with much fire and gnashing of teeth, and a socialist United States will replace it. Culturally, how will this America look? My guess is that it will contain the best aspects of Citizen, Producer, and Consumer Americas.

In a wealthy, fair, and rational world, people can be educated and true democracy becomes viable and stable, something that has been utterly untrue throughout most of humanity’s history due to the ignorance that economic scarcity breeds. Citizenship can come back, but in a form that is available to the masses rather than an elite. In short, the education that will be available to a truly free people will allow democracy to actually work. (There will always be some who are willfully ignorant, as Palinism establishes; our job in a post-scarcity world will be to encourage such people to treat life as a vacation, and to live well but harmlessly.)

Likewise, the virtues, but not the vices, of Producer America are likely to come back in a world where the average person has economic freedom. Liberated from the need to find corporate-approved work, people will be able to pursue work at which they are actually productive, rather than enduring the pointless servility of work for the lower classes, or the insipid social climbing of the white-collar elite. Attitudes toward work will fundamentally change. Instead of work being a place where the average person takes orders and is subjected to the mean-spirited infliction of stress, it will be one where he or she contributes. In a world of material plenty, people will hunger for opportunities to produce rather than consume, consumption being so freely available and easy as to be of minimal interest, just as food is not an all-consuming point of focus for healthy, well-nourished people. The right to consume will not be something people fight bitterly in order to secure; it will be guaranteed, and people will focus their energies on production, and work out of a genuine desire to make a better world.

Finally, it should go without saying that socialist America will preserve the affluence, racial and sexual tolerance, optimism, and creativity that Consumer America had when it was at its best. When the root evil of scarcity is eradicated, branches like racial hatred don’t stand a chance. There will always be some people with evil intentions, but their numbers are few and only scarcity can provide them with their armies of desperate, poor, ignorant, angry and confused people. Scarcity, thus, lends power to the evil. They will be unable to “rabble-rouse” when there is no rabble to be roused.

Moreover, once we arrive at a post-scarcity world, if not before that, it will no longer make sense to speak of the American theater as a separate entity. The World Revolution, the victory of second-enlightenment ideals and libertarian socialism, and the resounding defeat of scarcity and corporate capitalism at the hands of technology, all will be worldwide phenomena. It is likely that, by 2200 if not 2125, the bitter, pointless, and utterly unjust poverty inflicted by accidents of birth and geography will be eradicated.

We do not need a violent fight or heroic efforts in order to arrive at a fair, post-scarcity world. Technological, historical, and economic trends, in the long term, are on the side of good. Blood does not need to run in the streets, nor do we need some stroke of enormous luck. We don’t need to be lucky or brutal or unimaginably brilliant to overcome the (admittedly, quite serious) problems facing us; we merely need to step back and take a rational, somewhat intelligent approach to solving them.

Yes, rich kids already won the career game. Here’s why.

Americans like to believe that the modern workplace, like school, is a meritocracy. Sure, some people have a lot of money and don’t have to work, but Americans prefer to believe that, among those who do work, side-by-side in the same environment, it’s a fair competition. To their chagrin, they observe that their co-workers from wealthy backgrounds advance three times as fast, and wonder what the hell is going on. Why does one person, no more skilled than any of his co-workers, advance so effortlessly because of who his daddy is?

I don’t intend to insinuate that companies or managers are knowingly being elitist. No company or manager would intentionally give favor to one who has already enjoyed so many external advantages, especially if that person’s level of talent did not merit it. People in offices are out for themselves, not trying to preserve (or to combat) the social status quo. Rather, this is a subconscious and irresistible force, and it comes from one root cause: rich kids don’t fear the boss. That’s extremely important.

Consider two analysts at a prestigious financial firm, both 24 years old and of equal drive, intelligence, and talent. One is from a double-income family in suburban Connecticut earning $125,000 per year– a decent sum by average standards, but less than the analysts hope to be making by 26. The other’s father is a hedge fund manager earning $10 million per year. Let’s also assume, for now, that none of their co-workers or managers know either analyst’s family background, except through their behavior. The middle-class kid spends the bulk of his time trying not to offend, not to behave in a way that might jeopardize the job he worked so hard to get and could not easily replace if he lost it. He doesn’t invite himself to meetings, avoids contact with high-ranking executives, and doesn’t offer suggestions when in meetings. Thanks to the fear he experiences on a daily basis, he’s seen as “socially awkward” and “mousy” by higher-ups. Nothing recommends him, and he will not advance.

Middle-class kids generally fuck up their first few years of the career game in one of two ways. Either they fear authority tremendously, which is crippling from a career perspective and renders them devoid of creative energy, or they show an open distaste for managerial authority, described by the wealthy as having a proletarian “chip” on one’s shoulder, and fail to advance on account of the dislike they thus inspire. Even when they are cognitively aware of how to manage authority, the stakes of the career game for a middle-class striver, who will fall into humiliation and possibly poverty if he fails it, are so severe that only the well-trained and steel-nerved few can prevent these calamitously high risks from, at least to some degree, disrupting their game.

The rich kid, on the other hand, relates even to the highest-ranking executives as equals, because he knows that they are his social equals. He’ll answer to them, but with an understanding that his subordination is limited and offered in exchange for mentoring and protection. He views them as partners and colleagues, not judges or potential adversaries. Perhaps this is counterintuitive, but most of his bosses like this. (Most bosses aren’t assholes and don’t like to be feared, at all. In fact, they’d be happy to forget that they are bosses.) His career advances fast. He’s “up and coming”. This occurs even if no one has any idea that he’s from a wealthy background.

The rich kid, fearless on account of not needing to keep his job, can effortlessly walk the middle path. He’s neither a cowering weakling who crumbles at the sight of authority, nor an obnoxious brat whose sense of entitlement and dislike for managerial authority limit his progress prematurely. He respects others and himself and has an uncanny air of effortless “coolness” (by which I mean freedom from anxiety) that enables him to actually get things done. It becomes common knowledge that he’s “up-and-coming”, a rising star in his company. Even if his performance is smack-average or somewhat below, his effortless rise will not be deterred. It is assumed. With that advantage, he can concentrate on actually getting work done, yet another uncommon advantage.

This “middle path” between self-defeat and entitled arrogance is narrow– a tightrope, metaphorically speaking. It is, I should note, of equal width and tension for both rich and poor. There is no intentional preference given to one class over the other. The difference is that children of wealth traverse it at a height of one meter over a mattress, while the middle-class and poor traverse it at a height of 20 meters over a lava pit.

Thus, I have described the inevitable advantages that the children of wealth hold in the career game, assuming that no one knows their economic standing. The rich kid, even when no one knows that he is rich, still wins. He has the right air about him, and the same freedom from anxiety and free-flowing creative energy as a college student, because, for him, college (i.e. the time of life at which most middle-class people’s lives peak) never ended. His entry-level job is not a place of stress, but a continuation of school; a place where he can learn and grow.

If the employees’ economic situations were known, it might be expected that some advantage would be conferred on the industrious “striver” from the middle class. In practice, this isn’t really true. While the worst scions of wealth, rich brats as seen in documentaries like Born Rich, disgust people and generally negate the advantages conferred by their social capital, the majority of rich kids, who are well-behaved and decent, are valued more highly when their circumstances are discovered. In practice, one finds that people would rather gain the connections and favors available to the rich than satisfy any small sense of altruism by extending benefits to the hard-working middle and lower classes.

What’s more, the attitude shown to the wealthy in the workplace is one of appreciation. Consider the example above, of two nearly identical analysts in a high-stress financial job, and assume that their families’ economic standings are known (as is usually the case). The middle-class analyst is assumed to be there because he likes the money. This doesn’t endear him to anyone, and if he asks his boss why he isn’t getting his way in project allocation or career advancement, he can be given a reply like, “That’s why we pay you the big bucks.” (If he responds justly to that comment and makes its issuer a better person, he’ll be summarily fired and, if this action earns him a reputation, unemployable.) Such an insulting reply, except with gauche irony, would never be given to his wealthy counterpart. By contrast, as it’s known that the rich kid has no need to work, he is appreciated for doing so. He is assumed (unlike the middle-class striver) to have a strong work ethic just because he shows up sober to work every day. He doesn’t have to go over the top to establish that he has a decent work ethic; working at a level of reliability taken for granted in his middle-class counterparts is taken to prove his work ethic and stamina.

This advantage held by the wealthy, more prominent on the East Coast and outside of technology, is nearly impossible to compete against in most companies. I wouldn’t advise a person even to try. “Faking rich” will lead a person to seem pathetic and materialistic, not refined and free of anxiety. Moreover, feigning the cavalier attitude toward executive authority that rich kids hold effortlessly is very dangerous if one lacks the requisite social skills. Overdone, it can lead quickly to the unemployment line.

For the individual, I can offer no personal solution to this deep sociological problem. As far as I know, there’s none. I would advise those who are sufficiently talented to work in technology, which tends to be more meritocratic than other industries, and to avoid old-style business. Beyond that, I know of no solution.

So why did I write this essay, if I can offer no solution? First, it’s because I believe my generation will overthrow the arbitrary and brutal authority of corporate capitalism and bigoted conservatism in favor of rationalistic, libertarian socialism driven by a scientific approach and a concern for universal social justice, and I want to encourage this to happen. If I raise awareness of a defective and unfair situation, perhaps I can encourage people to change it. Second: although this is one of corporate capitalism’s milder flaws, leading a multitude to moderate disappointment but with little-to-no acute danger or loss of life, a rising awareness of the career game’s unfairness might result in less energy wasted, across the whole of society, attempting to ascend the proverbial “corporate ladder”. Establishing that a gambling house provides only rigged games is the first step toward depriving it of players, and therefore setting in motion the first stages of its destruction.

Left means losing: the mindset of the American Idiot

Europeans find U.S. politics shocking and perverse for a number of reasons, foremost among which is that our political culture is so right-wing. In most European countries, the Democratic Party would be a center-right conservative party, and the Republicans would be a fringe party, mostly ignored by the conservative coalition it would have to enter to have any pull whatsoever. That is, clearly and unfortunately, not the case here. The American right wing has such considerable power that it has prevented us from achieving universal healthcare, a minimal qualification for a society that wants to consider itself First World.

Within the context of the Republicrat duopoly, there is no substantial “left” in American politics, and politicians flee from the word “liberal” (note: in the U.S., “liberal” means a left-of-center libertarian) as if the word were cursed. Europeans, although as industrious and productive as we are, consider their welfare state an accomplishment, whereas Americans tend to dread anything within a stone’s throw of “socialism”. This sentiment holds us back, socially, 50 years behind where we should be, and threatens to drag us even further into the muck of historical failures that we ought to have learned from.

Why is this? I think there’s a simple answer. Despite the rationalizations about the deficit, austerity, and too much or too little government, many Americans avoid even considering the political left as acceptable because they conflate it with losing– class envy, bitterness, an external locus of control. They describe leftism negatively as “class warfare”, ignorant of the actual class warfare being waged upon them from above and from the right. In a society characterized by individual overconfidence, and by muddied waters regarding who is winning and who is losing, people are largely free to define themselves as winners or as losers. Who’s to say otherwise? Class identity is, in post-modernity, a matter of perception rather than station. I know three-digit millionaires who feel put-upon by life, and I know impoverished souls who consider themselves privileged. All in all, many people have come to associate conservative politics with “success”, and have moved to the right because of this.

American conservatism is the politics of people who want to believe that they are on the winning team. If the actual winning team is pounding them to a pulp, they will identify with their bullies anyway. When reality intrudes, they develop a persecution complex that does not admit defeat, but claims their misfortune is the result of others’ resentment of their success. In the mind of the declining middle-class American conservative, “real Americans” are not losing economic ground because of the arrogant and short-sighted mentality that has infected the upper class; rather, they are under attack from foreigners who “hate freedom” and “take their jobs”, freeloaders who are “drinking the water instead of carrying the water”, and “elitist liberals” who hate their simple, morally superior way of life and therefore identify with the supposedly depraved lower classes as a means of subverting traditional morality. In this way, they integrate their fear and sense of persecution into their identity in such a way that they can still cast themselves as winners.

The problem is that these people conflate politics and identity. Instead of politics being a rational debate about how to build a just society, and a debate that allows one’s opinions to change as new information is learned, it becomes an immutable aspect of a person. This is the mindset of a class of people I’ll call the “American idiot”, not necessarily because they lack native intelligence (some do, some don’t) but because of their astounding and willful ignorance. They’ll proudly say “I’m a Republican”, without knowing in detail what that means, just as they’re willing to identify as Biblical Christians without having read most of the Bible, simply because it allows them to identify with the successful. They associate conservatism with an internal locus of control and a willingness to take “individual responsibility”, and liberalism with an external locus of control and a childish need for help.

This is what upper-middle-class liberals– driven also by a fear of American decline and by an attraction to the superior quality of life enjoyed by our European counterparts– tend to miss. As we are educated in economics, politics and history, progressive libertarianism (i.e. liberalism) becomes, self-evidently, the politics of rationality. We’re not bitter communists looking to starve the rich or corrode traditional values. We want no part in the personalized distributive squabbling that seems to characterize the political views of the more ignorant half. Rather, we’re concerned citizens who want to build a just society. However, unless we comprehend and confront the entrenched ignorance we are up against, we will make no progress.