The Time I Ruined Programming

 

Part I: The Personal

I remember the time I ruined programming. It was long enough ago (and I recovered) that I’m no longer embarrassed by the fact.

Nearly all programmers have slowdowns and periods where they just can’t get anything done. It’s why we hate two-week “sprint” nonsense and stack-ranking; no one wants to be watched by a guy in a guard tower with a gun, just waiting to pop the programmer who slows down the chain.

When people learn how to program, they do so enticed by what the job once was: an R&D role of pure creation, free of nonsensical deadlines and business-driven engineering. A computer science curriculum doesn’t prepare one to be a bored grunt; it prepares one for a life of work on projects like compilers, interactive games, and machine learning algorithms. Of course, only a lucky few get paid to do that kind of stuff. Most programmers end up writing and maintaining boring business programs, solving problems that are only challenging because of tight deadlines and the sheer weight of bad decisions made before.

It’s easy to ruin a creative activity: do it as a subordinate, and you’ll often grow to hate it. This is why, although I’m a decent enough writer, I’ve never considered writing jobs; in fact, I’m wary of traditional publishing and its increasing tendency to push writers into an unfavorable employee/employer relationship. Authors pay a lot for that “Not Entirely Awful” stamp of approval they get from being “published”, thinking it means more than it does. Publishers then drive terms that turn authors into subordinates, and agents aren’t going to push back on behalf of a midlist or unknown author. Getting dumped by a publisher– or worse, an agent– can be worse than being fired. You don’t just lose your relationships, but your reputation.

I don’t intend to imply that it invariably crushes passion to work for someone else. Everyone works for someone else. In the abstract, we’re all subordinates. The alternative is solipsistic madness, idiocy in the original Greek sense, in which the word idiot denoted not a lack of inborn intelligence but a non-virtuous choice to ignore public life. An idiot was a man only into himself. Our notion of the idiot’s diametric opposite, held up as the pinnacle of human character, is the hero. She’s not a subordinate in an organizational sense, but she still adheres to a set of ethical principles and works toward benefits she wishes to deliver to the world. She serves, but she takes agency over whom and how she serves.

Of course, workplace subordination is not heroic. To subordinate oneself to the private benefit of someone else, who is under no obligation to return the loyalty (and probably won’t), is demoralizing and toxic.

The truth is that I have no use for people who are constitutionally insubordinate. To blindly disobey orders, because they are orders, is even more idiotic than blindly following orders. Civilization requires what I call operational subordination. An example would be stopping at red lights while driving. We do this not because we consider ourselves inferior to these robotic lights, but because driving wouldn’t be safe if we didn’t obey their directives. We don’t think of it as subordination; it’s just good sense.

Workplaces, of the Theory X variety that has become the norm since the downsizing epidemic of the past few decades, don’t settle for operational subordination. They want personal subordination. The good of the company (read: the careers and reputations of executives) must take a higher priority than the career goals and personal needs of the worker, and the worker is expected not simply to obey stated commands, but to internalize this sense of moral inferiority. If he has no orders, he must ask for more work. If he seems to value his own advancement over that of his superiors– a successful outside-of-work side project suffices to create this impression– he’ll be terminated even if he does nothing wrong.

Programmers, by and large, don’t mind operational subordination. In fact, we have an affinity for it. We like to solve complex problems with simple rules that make sense. We operationally subordinate, every day, to the syntactical demands of a compiler that simply won’t do anything with code it cannot parse. When rules are sane, and the benefit in their existence is obvious, we eagerly follow them. It’s the personal subordination that burns us out. We’re smart enough to spot a system that demands personal loyalty from us, while refusing to reciprocate, and it disgusts us. We recognize that our rules-based, overly rational way of thinking is under attack; someone is trying to hack us and take advantage.

The nature of employment has changed for programmers, and for the worse. Software development used to be a highly-compensated job with an R&D flavor, where programmers were trusted professionals rather than overworked grunts subjected to the likes of Scrum. The open-allocation environment of a company like Valve used to be the norm. What changed? Employers realized that line-of-business code could be written by mediocre people, and replaced the high-talent curmudgeons with this continuing churn of fresh-faced halfwits who have never written a program from scratch, have no mathematical or theoretical insight, and don’t know what “buffer overflow” or “O(n^2)” mean.

By and large, it has worked. Employers have successfully dumbed programming down. Large companies may need a few excellent programmers, but line-of-business software can be done by people of mediocre talent who’ll accept bad wages and worse working conditions. Scrum is not going away. Does it produce excellent software? No, not even close. It doesn’t even produce secure or safe or maintainable software. It works just well enough that deliverables get deliverated: just barely, but at a close enough approximation of working software that executives get promoted away from their messes before anything starts falling apart at a macroscopically visible level.

For my part, I got through that code slowdown, long enough ago that I don’t mind writing about it.

These days, I’m more interested in small programs that exist to solve problems than large ones written to justify budgets or “prove” technical choices. (I don’t care to convince anyone that Haskell is “ready for production”. If we can’t agree on a language, that’s fine; I’ll write C. It’s old and a bit wonky but it works.) One of my current projects is to build an AI for the card game Ambition (which I came up with in 2003, and have been refining since then) because I want to test out certain design changes; I use Ambition for a pivotal card game scene in Farisa’s Crossing, and I want to fix certain flaws before giving the game this added visibility. In order to test these changes, I need to run simulations with believable players; random-move players give some statistical insight, but they don’t explore any interesting strategy spaces. The “AI” doesn’t need to be AlphaGo, and it won’t be: if a simple Q-learner, backed by a basic backpropagation network, suffices, that’s what I’ll use. If I need something more, I’ll write something more complex. These days, I’m rarely (if ever) impressed by code. Lines of code are spent, not acquired.
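
For the curious, here’s roughly what I mean by “a simple Q-learner backed by a basic backpropagation network”. This is a minimal sketch under stated assumptions, not the actual Ambition code: the state encoding, the action indexing, the reward signal, and every name in it (TinyQNetwork, choose_action, q_update) are placeholders of my own, and I’ve used plain numpy rather than any framework.

```python
# Minimal sketch: a Q-learner whose Q(s, a) function is a one-hidden-layer
# backpropagation network. State vectors, action indices, and rewards are
# assumed to come from a game engine that is not shown here.
import numpy as np

class TinyQNetwork:
    """One-hidden-layer MLP approximating Q(s, a) over a discrete action set."""
    def __init__(self, n_state, n_actions, n_hidden=32, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_state, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.lr = lr

    def forward(self, s):
        self.h = np.tanh(s @ self.W1 + self.b1)   # hidden-layer activations
        return self.h @ self.W2 + self.b2          # one Q-value per action

    def update(self, s, action, target):
        """One backpropagation step toward the TD target, for one action."""
        q = self.forward(s)
        err = q[action] - target                   # TD error on the chosen action
        one_hot = np.eye(len(q))[action]
        dW2 = np.outer(self.h, one_hot) * err      # gradient w.r.t. output weights
        db2 = one_hot * err
        dz = (self.W2[:, action] * err) * (1.0 - self.h ** 2)   # back through tanh
        self.W1 -= self.lr * np.outer(s, dz)
        self.b1 -= self.lr * dz
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * db2

def choose_action(net, state, legal_actions, epsilon=0.1):
    """Epsilon-greedy move selection, restricted to the legal moves."""
    if np.random.random() < epsilon:
        return int(np.random.choice(legal_actions))
    q = net.forward(state)
    return int(max(legal_actions, key=lambda a: q[a]))

def q_update(net, state, action, reward, next_state, next_legal,
             gamma=0.95, done=False):
    """Q-learning target: reward + gamma * max over next legal moves of Q(s', a')."""
    target = reward
    if not done:
        target += gamma * max(net.forward(next_state)[a] for a in next_legal)
    net.update(state, action, target)
```

A real version would feed in a hand-rolled feature vector for the current hand, trick, and score state, and hand back rewards from the game engine at the end of each round; the learning rule itself can stay this simple.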

Now that I’m older, I have almost no passion for programming as an end in itself– an average corporate codebase is far more complex than anything I would intentionally write, and yet that complexity is wasteful and ugly– but, still, certain problems that can be solved with software interest me.

On the other hand: code for code’s sake, in million-line piles of corporate cruft and the dead shells of Jira tickets? Nah, I’m done with that. That really is a young idiot’s game.

Part II: The Fourth Turning

I reached a dark night of the soul with regard to software and technology. There were moments when I looked around and realized that my total contribution to humanity, by working for an increasingly maleficent industry, might be negative. The 21st century’s American theatre has featured the dismantling of the middle class, and I can’t say I had nothing to do with it.

In the 1990s, we had to hear about those “South Park Republicans”. In 2018, I find that I’m a “Black Mirror Liberal”. I loathe and fear Silicon Valley, with its complete lack of morality, more than Donald Trump; the latter is, to be frank, too stupid and too transparent in his self-indulgence to pull fascism off. If we don’t learn our lesson this time, a 39-year-old startup founder, more reserved and competent than the orange idiot, could do a lot more damage.

I don’t view technology as evil; however, I fear what humans will do with it. Like the Black Mirror series, I have the conservative’s skepticism toward human nature. Technological progress without moral improvement will lead us straight to hell. That, to me, is the important lesson of Black Mirror, arguably the most important television series of our time. Yes, science and technology are good, and liberal economics is largely correct; but, without cultural and moral improvements as well, other forms of progress can only do so much.

To me, the quintessential Black Mirror episode is “Fifteen Million Merits”. I shan’t do it justice here, but let me sum it up briefly. It’s set in (probably) a far-future dystopia. The middle classes ride stationary bikes to generate power, and outside of work they’re doomed to a virtualized, entertainment-driven life in which one must pay not to see ads (many of which are offensive, even pornographic). Those unfit to ride the bikes fall into the lower classes (“lemons”) and are treated as subhuman. There seems to be no “outdoors” in this world; food is grown in a Petri dish using energy that comes from… the bikes. Or, at least, that’s what the bike slaves are told. I’ll get to the plausibility of that, in a second.

The main characters try to escape their position. There’s a television show called Hot Shots, modeled on shows like American Idol, through which talented people vie for a chance to join the celebrity upper class. Most of them fail, and some are forced into neoliberal prostitution. One character becomes a pornographic actress and her humiliation is broadcast to the entirety of her society; another, after a display of authenticity and rage, is enticed to make a performance out of his anger (thus commoditizing and killing his own authenticity).

There’s a scientific easter egg in “Fifteen Million Merits”. Examine the premise: humans must ride bikes to generate power for a society that, so it says, has been reduced to growing food artificially. This is physically impossible. Human efficiency is about 25 percent: if we eat 2000 kilocalories’ worth of food, we can generate about 500 kilocalories of mechanical energy. It could conceivably increase, but it would never reach (much less exceed) 100 percent. No life form is a power source; autotrophs like plants and algae capture solar energy, and the rest of us eat them. Without an energy source like the sun (a giant, faraway fusion reactor) we could not survive.

Low-tech societies ran on human power (stored solar power; faraway nuclear power) because they had no alternatives. If, however, a society were forced to use human power to grow food to feed humans, it would die. Therefore, we conclude that the society in “Fifteen Million Merits” isn’t energy poor, but energy rich. It can waste human effort and physical energy on pointless drudgery. The bikers think they’re keeping the lights on and the food growing, but anyone scientifically literate would recognize that as a lie.
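
For anyone who wants to check the arithmetic rather than take my word for it, here’s the back-of-the-envelope version, with the numbers above treated as rough assumptions (and the food synthesizer given an impossibly generous 100 percent efficiency):

```python
# Back-of-the-envelope check of the "bikes power the food supply" premise.
# All figures are rough assumptions for illustration, not measurements.
food_eaten_kcal = 2000           # one rider's daily intake
human_efficiency = 0.25          # fraction of intake converted to pedal work
pedal_output_kcal = food_eaten_kcal * human_efficiency       # 500 kcal of work

# Even if an (implausibly) perfect synthesizer turned every bit of that work
# back into food energy, the rider returns a quarter of what he eats.
food_returned_kcal = pedal_output_kcal * 1.0                  # best-case synthesis
daily_deficit_kcal = food_eaten_kcal - food_returned_kcal

print(f"eaten {food_eaten_kcal} kcal, returned {food_returned_kcal:.0f} kcal, "
      f"deficit {daily_deficit_kcal:.0f} kcal per rider per day")
# -> eaten 2000 kcal, returned 500 kcal, deficit 1500 kcal per rider per day
```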

I don’t think this is a mistake in the show’s writing. Someone would have pointed it out, I imagine. (Maybe not; The Matrix got this wrong.) For my part, I think it’s intentional. This means that the society has eliminated the need for work, but it has kept scarcity and work around. Why? Sadism.

That’s the horrifying truth that sneaks out of “Fifteen Million Merits”. It’s not a post-apocalyptic society that needs every erg of human effort. Rather, I suspect that it’s a rich society that has kept scarcity around to dominate and humiliate people, as entertainment both for the put-upon middle classes (who must be given seemingly useful work, lest they rebel) and the vicious upper classes.

To me, that’s quite a plausible future. Technology will eliminate the need for human work. It will eliminate most forms of scarcity. Is this going to lead us to a wealthy, egalitarian utopia? It might, but there are few guarantees. The elites could keep scarcity in place, valuing dominance over others too much to let the rest share in the wealth. There’s a lot that we don’t know about so-called “human nature”, and we’ve never had that kind of rich world before.

At any rate, let’s get back to the 21st century. One disturbing trend is that work is becoming more subordinate. I know, because I’ve been involved in making it so.

A few years ago, I worked on a “performance management” system, imposed on truckers, that would track which drivers were too fast, which were too slow, and even which drivers were eating lunch off-route to save money or have time with their children. It doesn’t save much to prohibit a driver from eating off-route: how much does five miles’ worth of gas cost? But, thanks to technology, this surveillance costs even less.

One could argue that Agile and Jira are programmers’ own professional karma. What we’ve allowed to be done to every other class of worker is now being done to us. It shocks us more than it should. Many of us worked toward bad ends– some examples would be so-called “performance management” systems, misuses of data to corporate benefit and human detriment, systems that allowed health insurers to deny care– and it’s only fitting that the surveillance capitalism we created would, at some point, be turned on us.

Technology has, I’ll note, also damaged the world for professional writers. (It may fix it; self-publishing is becoming a viable alternative, but that’s another topic.) Mastering writing and mastering sales tend to happen at different times. In the 1980s, a strong novelist whose first four books had mediocre sales could easily get a fifth book deal. These days, it’s a lot harder, because surveillance capitalism has no qualms about using data toward bad, career-denying ends. It’s not publishers who are to blame, in fact; the chain bookstores did this. Publishers, in the old days, would offer a fifth deal, along with an advance and some promotion, to a talented author who hadn’t yet built a following or mastered the golden touch. In the late 1990s, though, bookstores started pushing back and gutting authors who stayed on the midlist. Why risk space for someone who has “failed” four times, when an unknown might be a breakout bestseller? The long-term effect has been detrimental. Publishers have lost money and many have been put out of business or forced into conglomeration; writers’ careers have been wrecked by this; and the literary world has become more commercial and overall worse in terms of quality. The true culprit in this clusterfuck is, yet again, technology used for worker (in this case, author) surveillance. In the 1980s, an author with mediocre sales got more chances; in 2018, the bookstores run her numbers and say, “No thanks”, even if she’s quite talented.

Technology used to be in alliance with human progress, and now it seems to be moving against us. What’s going on? I think we’re headed for a Fourth Turning, or a historical crisis. The benefits of technology and surveillance accrue to the rich overseers, and they’re not going to give their ill-gotten wealth or power up.

We can assess the Fourth Turning, toward which we careen, by looking at the crises of the 20th century: the World Wars and the Great Depression.

As we moved from the 19th century into the 20th, we got very good at making food. It’s hard to imagine this being a bad thing; yet, it led to a Great Depression in North America and long-running, total wars (of a kind that prior agricultural capabilities and supply chains couldn’t support) in Europe. Ill-managed prosperity is more dangerous than true scarcity, it seems. Scarcity’s bad, but at least it slows things down.

In North America, food prices started dropping in the 1920s. Farms couldn’t survive. Our later corrections persist and, in some ways, are pernicious; due to our perennial corn surplus, we spike soft drinks with high-fructose Franken-sugars. However, that came after the Depression. In the last Gilded Age, the consensus was that it was best to let these farmers fail. So, rural small towns that served the farmers died along with them. Heavy industry got hit around 1925-27, and the stock market grew more volatile in turn. Then it crashed, notably, in October 1929, but that didn’t “cause” the Depression; we’ve had worse stock market crashes since then (e.g., 1987) to minimal effect. Rather, the Great Depression actually started in the early 1920s; it just wasn’t until about 1930 that it started hitting rich people in the cities. We learned that poverty wasn’t some “moral medicine” to shock people back into adhering to the so-called Protestant work ethic; rather, it was a cancer that, left to its own devices, would keep spreading until it destroyed a society.

What killed the American economy in the 1930s? Conservative mismanagement of agricultural prosperity.

What’s killing the American middle class in 2018? Conservative mismanagement of technological prosperity.

What happened to farmers in the 1920s is happening to… all human labor. Let that sink in. The vast majority of Americans produce things of less usefulness than what farmers produce. If farmers weren’t safe in the ’20s, public relations managers and computer programmers aren’t safe today. We’ll die without food; we’ll be fine without TPS reports.

Globalization is inevitable and desirable, but we’re letting it play out in a way that benefits the rich and eviscerates the middle class. Technological automation is wiping out jobs, and surveillance is turning high-autonomy, fulfilling jobs (like what programming used to be, when it had that R&D flavor) into drudge work. The high-surveillance culture doesn’t merely make the workplace unpleasant, but also lowers the bar for who can contribute– it becomes profitable to employ unskilled scabs, if surveillance becomes cheap enough– depressing wages further. Though Agile Scrum reduces the effectiveness of competent programmers, it turns incompetent ones into marginally employable code-cutters. So what happens? Well, employers replace the high-talent curmudgeonly experts with scrummy rent-a-coders, and wages nosedive. Furthermore, as jobs are wiped out in one industry, people who worked in it become “refugees” and flow into another, making that industry more competitive, and driving wages down further.

The wage slide is going to be far more severe than most people predict, due to economic inelasticity. When people have a desperate need for something– like gas to go to work, or water in the desert, or illegal drugs to which they’re physically addicted– they will pay nearly any price, and therefore small drops in availability will drive massive price increases. During the oil shocks of the 1970s, the supply only dropped by about 5 percent, but prices quadrupled. That’s demand inelasticity, but supply (of labor) experiences the same phenomenon. If people desperately need jobs to survive, then we should expect that either a small increase in worker availability or a slight decrease in demand for labor will tank wages, disproportionate to the actual amount of change.
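
To put a number on that: price elasticity is just the ratio of the percentage change in quantity to the percentage change in price, and the oil-shock figures above (treated here as rough, illustrative numbers rather than a data set) imply a very small one:

```python
# Rough sanity check of the oil-shock example: a ~5% drop in quantity alongside
# a ~300% price increase (a quadrupling) implies demand that barely responds
# to price at all.
def price_elasticity(pct_change_quantity, pct_change_price):
    """Elasticity as |%dQ| / |%dP|; values well below 1 mean 'inelastic'."""
    return abs(pct_change_quantity) / abs(pct_change_price)

print(price_elasticity(-0.05, 3.00))   # -> ~0.017, i.e. extremely inelastic
```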

For example, self-driving trucks will put millions of drivers out of business in the next twenty years. There are millions more who serve those drivers: hotel owners in the Midwest, logistics coordinators, and so on. There are a lot of smug programmers who think that this won’t affect their wages. Wrong. Again, no one is safe.

Let’s say that half the truck drivers lose their jobs. We won’t even account for all that supporting labor: the hotels and diners. That’s 1.75 million workers, fresh on the market. Let’s be conservative and say that only 5% of them learn how to write code and become programmers; most will go into other industries. That’s 87,500 new programmers on the U.S. market. That’s about a 2.5 percent increase in our numbers. Seems survivable, right? If we expected wages to drop only by 2.5 percent, yes; but keeping in mind inelasticity, they could drop by 20 percent, just due to that.
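
Here is the same arithmetic run forward, with everything not stated above flagged as an assumption: a baseline of roughly 3.5 million U.S. programmers (implied by calling 87,500 a 2.5 percent increase), and an elasticity of 0.125 chosen purely to illustrate how 2.5 percent of extra supply could become a 20 percent wage drop.

```python
# Illustrative only: a small increase in labor supply meeting inelastic demand.
truck_drivers_displaced = 1_750_000        # half the drivers, per the text
share_becoming_programmers = 0.05          # the conservative 5% assumed above
us_programmers_baseline = 3_500_000        # assumed baseline headcount

new_programmers = truck_drivers_displaced * share_becoming_programmers
supply_increase = new_programmers / us_programmers_baseline      # ~0.025

labor_demand_elasticity = 0.125            # assumption chosen for illustration
wage_drop = supply_increase / labor_demand_elasticity            # ~0.20

print(f"{new_programmers:,.0f} new programmers, +{supply_increase:.1%} supply, "
      f"~{wage_drop:.0%} wage drop")
# -> 87,500 new programmers, +2.5% supply, ~20% wage drop
```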

Now, add in the workers who support trucking, the hotel owners who see the writing on the wall. They won’t all go into programming. Some will go into other industries (and push out other workers, perhaps). Multiple industries will have refugee-driven inelasticity crises and wage collapses. It won’t be just us; but no one will be safe. Look at what has happened to attorneys (once considered a safe, protected profession) over the past thirty years; many work long hours for mediocre salaries and are paying off student debt into their 40s. If it could happen to them, it can happen to us.

Is the world of 2050, even if we leave corporate capitalism to its own perverse devices, going to be jobless? No. Society will invent things for people to do. If nothing else, the rich will spend more time in school and the poor will spend more time in prison. Is it possible that continual downward wage shocks and industrial refugee crises will demolish the middle class? Yes. If we don’t recognize the danger that technological unemployment represents, and come up with proactive solutions, it will happen.

I used to hold the somewhat pleasant belief that technological progress would, if it hurt the economy enough, blow out its own flames. It ought to be self-limiting, right? If things get bad, people will stop creating (a sort of unprogrammed strike) and technical progress will slow… and we can find a way to clean up our messes. That was what I hoped for.

I’m no longer so sure of this. Technical progress, from a humanistic standpoint, has slowed down. Basic research funding has collapsed and we’re no longer putting people on the Moon; we’re tweeting about “covfefe” instead. Antibiotics did more to improve human life than a blog post about technological immortality written by some 24-year-old who optimizes ad placements; that much is true. Finally, the great technological marvel of the 21st century is a cloud. Not “the Cloud”; that’s just jargon for “the Internet”. I’m talking about the cloud of prehistoric algal carbon pumped out by the literal septillions of useless computations, performed to mine so-called “BitCoins”. Yes, that cloud, for anyone planning a tropical vacation to the North Pole. Ah, BitCoin and its ilk; this high-pitched electric whine, inaudibly belch-screaming carbon dioxide into our atmosphere, might be the siren song of a desperate middle class, seeing its impending demise at the hands of authoritarian capitalism, and not knowing what else to do but invest in libertarian wank tulips. Technology is becoming less useful and less friendly, from a humanist’s standpoint, but it does not seem to be self-limiting. Its direction leaves much to be desired, but its speed remains high.

Technology and globalization shall continue. There’s no other way. If we let our society and morality implode, this whole picture will take an undesirable direction. Without innovations that are currently untenable, such as a universal basic income and the eventual eradication of global poverty– in my view, in the next 50 years we ought to take a preemptive Marshall Plan attitude toward the elimination of third-world misery– Black Mirror is our future.

Technical achievement and beneficence have stalled. Sadly, technological progress will continue even if there’s a complete lack of human progress. There will, alas, always be money in helping rich people unemploy those pesky, expensive worker-serfs… and, also, helping the powerful surveil those whom they haven’t yet figured out how to unemploy.


Does Genius Exist?

Does genius exist?

I would argue that it does, but not in the way our society imagines it. Our society holds the notion that people like Albert Einstein, Nikola Tesla, or Andrew Wiles are natural geniuses, born apart from the rest of the world. It would be dishonest to ignore the unusual natural capability that such people have. However, are they born-apart geniuses, or people of high achievement? Does the distinction matter?

No such thing as “a genius”

The notion of a person as “a genius”, somehow separate from humanity and proto-angelic because of her superior intellect, I find harmful. As a discrete zero/one variable, this thing called “genius” doesn’t exist.

In 1999, I attended the Math Olympiad Summer Program, or MO(s)P. I met people with much more natural talent than me. To call them “geniuses”, however, would understate the sheer amount of work that it took them to accomplish what they did. I asked an IMO Gold Medalist how much time he spent thinking about math. He said, “50 percent”. Remember that this was in high school. Did he have a lot of natural ability? Of course, but he also worked his ass off (although it may not have felt like “work” to him).

This binary notion of “genius” seems to be a justification for our society’s othering of highly intelligent people: mad scientists, failed prodigies, bitter adult geniuses seeking revenge against a society that never recognized them. Then, to make matters worse, we have the cultural blight that is venture-funded, private-sector technology (“Silicon Valley”) in which the supposed genius nerds have come around full circle and become the bullies. Silicon Valley is the worst PR for smart people since Nagasaki.

The standard Silicon Valley narrative is that the horrible men at the top of the VC-funded ecosystem are taking revenge for two or three decades of unpopularity. That’s mostly correct, but let me add a couple of notes. First of all, the nerds/jocks model of high school is largely incorrect. I went to high school. I remember it. Not all smart people were unpopular. (I was smack-average in the hierarchy; not prom king, but not picked on either. I wasn’t invited to all the parties, but people respected me.) Also, “jocks” never were the bad guys. Jocks worked as hard as nerds, and most of them were nice people. Bullies were the bad guys. Sure, a few bullying jocks existed, and so did bullying nerds, and so did bullying jockish nerds. Some of today’s tech barons were nerds and some were more fratty, some were unpopular and some were not, but most of all they’re bullying assholes and always have been. Forget the Hollywood narrative; most bullies were unpopular in high school. (Were there popular bullies? Sure, but all else being equal, bullying hurts one’s popularity.) It doesn’t take popularity to be a jerk.

Today’s Silicon Valley barons aren’t nerds, though they’ve appropriated nerd culture. They certainly aren’t geniuses; I’m much closer to that than they are, and I wouldn’t put that label on myself, because I’m hesitant to believe that it exists. Mostly, they are bullies who pretend to be nerds because it pays. As for bullying, I suspect they’ve always had that inclination, but corporate capitalism gives them a power they lacked in adolescence.

Are there people in Silicon Valley who approach or even exceed the level of cognitive excellence that we’d call “genius”? Sure, but they’re never the ones at the top of the pecking order, making millions of dollars or getting news articles written about them. They’re research scientists and software engineers no one has ever heard of, and they can’t afford a house less than an hour and a half from work. So it goes.

Okay, okay…

Highly intelligent people get othered. We’re nerds who “must” have poor social skills, because we spend so much time reading and thinking; at the same time, we’re geniuses who never had to work for anything. Note the inconsistency: we lack social skills because we work too hard on other things, but we’re lazy bastards to whom everything came easy. It’s a bit hard to win, in a society so focused on corporate back-biting that any positive trait or distinction gets a person torn down.

My experience leads me to conclude: (1) that natural ability is a continuous variable, not a discrete, binary one we could call “genius” and use to set people apart; (2) that meaningful natural ability is so hard to test beyond 2.5-3.0 standard deviations (IQ 137-145) that it’s hardly worth the bother; (3) that natural cognitive abilities tend to diverge below the level we’d call “genius”, so a mathematical genius might be only average at music composition or the visual arts; (4) that cognitive originality is separate from what we consider “IQ”; and (5) that said cognitive originality requires hard work far more than it requires natural talent. How can one be original without the sheer work it takes to study the work of others, imitating what is competent while rejecting what is overused? No intellectual achievement comes naturally; experience matters, and that takes hard work. I don’t write well because I was born with a silver semicolon; I write well because I’ve written millions of words, and some of that stuff, early-early on, was terrible.

I’m not going to lie. Does the average child, age seven, have a shot at winning the International Math Olympiad in ten years? No. It’s not fair to place that kind of expectation on him. Discrepancies of natural ability exist, and their importance grows as a technological society develops a use for the ability to grapple with abstraction. That’s clear. But, does a person require a have-it-or-not single variable we can call “genius” in order to reach the highest levels of achievement? My experience has led me to conclude that the answer is “no”.

It’s a harmful notion, too, because most of us can’t agree on what “a genius” is. In business, people of rather bland cognitive ability (with nonexistent diligence and originality) routinely take credit for their subordinates’ work and are hailed as “visionaries” or “geniuses”, while the hundreds of smarter people doing the actual work toil in obscurity.

Sure, some people have more natural ability than others. Some have more drive. Some work harder. Some people are luckier in terms of what experiences life gives them. Over decades, these result in drastic differences in (perceived and real) capability. All of that seems to be true, without much controversy. Yet the archetype of “a genius” has so much baggage that it might be wise to get rid of it.

So what is genius?

If I reject the notion of a person as “a genius”, I still prefer not to throw out the concept of genius, not as a binary personal variable but as a notion we might affix to high intellectual performance. There are people who can reach a state wherein their cognitive originality, conceptual clarity, and productive competence are far beyond the norm. When they achieve the right state of consciousness, they reach a level of insight and capability that seems inaccessible to everyone else, like stage magic.

Natural ability plays a role in this, but it’s not as big a factor as we tend to think. Less than 0.01 percent of people in our society are truly high-performing intellectuals, as I see it, and yet I doubt that 99.99th percentile natural ability (IQ 156) is required. (In practice, we can’t measure intelligence that high in adults– and childhood IQs are both inflated and not as meaningful, since adult capability is what we care about.) Perhaps 0.5 percent, or even 2 percent, of people have the requisite natural ability. Perhaps it’s more, because we still don’t understand what intelligence is or where it comes from. Did Shakespeare have a 190 IQ? Or was he a 125 IQ person with a knack for wordplay and storytelling? Does it even matter? Not really.
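
For the record, the percentile-to-IQ conversions I keep throwing around (here and in the earlier numbered list) are nothing deeper than the standard model of IQ as a normal distribution with mean 100 and standard deviation 15; a quick check, assuming that model:

```python
# IQ <-> percentile under the usual model: normal with mean 100, SD 15.
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)

print(round(iq.inv_cdf(0.9999)))        # 99.99th percentile -> 156
print(100 + 2.5 * 15, 100 + 3.0 * 15)   # 2.5-3.0 SD above the mean -> 137.5 145.0
```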

In childhood, one’s creativity is high but competence is low. In adulthood, rote industrial competence is high, but creativity tends to be low. Society beats it out of most people, forcing them to subordinate to a system run mostly by people of mediocre intellect. Workaday life crushes originality and excellence. So, we see that two ingredients for what we might call “genius” exist at separate times in life and, for most people, never the twain shall meet. Neither time is conducive to anything we’d consider meaningful genius: the child’s inexperienced creativity lacks the insight necessary to achieve originality, while the adult’s mechanical, risk-averse fairly-goodness, borne of the pervasive oppression of needing an income, clearly falls short.

Most gifted children seem like they might achieve creative excellence in adulthood; very few actually do. I’ve observed the careers of extremely intelligent (i.e., IQ 160+) people and the results are, at best, disappointing. About half go to graduate school; the other half go to Wall Street or Silicon Valley straight out of college. Either way, they expect to defeat the morons in business handily, retire within ten years, and dedicate the remainder of their lives to intellectual pursuits. It almost never works out that way. It’s not uncommon for highly intelligent people to be mobbed and bullied in their corporate jobs by resentful mediocrities, although even more common is for them to disappear into the bland, beige fog, and to lose every element of originality they once had. Most often, they disappear somewhere in the folds of middle management, and do what they can to hide what they once were.

When highly creative people enter the corporate world, they perceive (correctly) that they are singled out for unfair treatment because of their abilities. However, they tend to attribute this to envy and resentment, as it was with bullies in school. It’s not so; there are two problems with this argument. First, plenty of high-IQ people join in the bullying. While creatively and morally stunted, having acquiesced to mediocrity a long time ago, such people do not perceive themselves as inferior. Second, the upper tiers of intelligence and creativity do not bring the economic performance or social status that such people would envy. The corporate masters crush the brilliant not because they’re resentful, but because they’re afraid. To that point, they’re afraid that the irreverence (if not the talent) of such people could become infectious and undermine their authority. Natural intellectual talent may or may not be a necessary condition for creative excellence – that element can be debated – but without an indifference toward the sort of arbitrary authority that exists in the corporate world, it is so difficult to protect creativity that it might as well be considered impossible. The corporate regime cannot allow authority to be seen protecting, or even tolerating, the irreverence that human excellence requires. Therefore, the excellent must be crushed until they lose the desire to excel. But it is not envy that drives this; it is a calculated purge.

It’s important to recognize the notion of genius, not as a separate category of human, because such a thing never existed, but as a rare combination of traits we ought to encourage rather than crush. It’s not something that nature doles out in rare packets. Rather, it’s a combination of traits that can be encouraged or defeated by circumstances. Our society has reached a state in which the latter is the norm; thus, our squalid culture and stagnant economy should surprise no one.

The rare element: arrogant humility

What makes cognitive originality so rare?

I’ll confess it. I’m not that smart. If I had to guess my adulthood IQ, it’d be somewhere between 140 and 160. Statistically speaking, it’s nearly a guarantee that there’s someone smarter than me on a typical subway train in New York, or in a traffic jam on I-95, or serving coffee at Starbucks.

Yet, I am a misfit in human society. The corporate world, with its polite but militant brand of mediocrity, has tried to crush me; somehow, I’ve retained this diamantine cognitive integrity that holds negative economic value, flouting self-preservation for a reason even I don’t fully know. People like me become rarer with age; some of us just give in. At age 34, with my cultural and cognitive integrity intact, I’m an absurd, alien rarity for reasons that have little to do with natural ability. What makes me different from all the drones out there is not “IQ”; I know plenty of high-IQ people who lack it, and it’s plausible that people can retain it with only average cognitive gifts.

If not natural intelligence, what is the set of traits one needs in order to preserve a child’s cognitive integrity, while gaining the competence and experience of adulthood? In two words, I’d call it arrogant humility. One needs the pugnacious assertiveness of a child, coupled with the quiet diligence of an adult who knows how much she doesn’t know.

To be better than other people, you have to think that you’re better. You must set a higher standard for yourself than you’d set for anyone else. In addition, you have to fight all the people who want to bring you down to their level: the corporate vassals and barons who can’t stand to be outshined, society’s myriad talentless gatekeepers, and so on. That’s difficult. A person who does this seems aloof and will be disliked.

To be honest, I think that what we now call “mental illness” – many of these are physical illnesses with mental symptoms, but that’s a topic for another time – can be, in its milder forms, an asset. During a depression, the brain comes up with a certain brand of charismatically negative, but incorrect and damaging, thought: you’re a loser and you’ll never amount to anything, so just be content to bump along the bottom. If you’re used to telling this sort of impulse to fuck right off, because your survival has literally depended on it for more than a decade, then you’re more likely to hold your integrity when you enter the corporate world and hundreds of supposed social superiors are sending you the same sort of message (in more polite words). You recognize them as Team Depression and, just like the enemy within, full of nonsense.

To hold your cognitive integrity in a world that wants to break it, you’ve got to be a bit arrogant. You have to believe that you’re better than the mediocrity you see everywhere, especially in a time of organizational decay like this one. Maybe you are better; I am. I’m not afraid to say it, because I value honest perception more than politesse.

At the same time, believing in one’s superiority doesn’t make it so. It can lead to laziness and failure. The Dunning-Kruger results have been, in a bit of irony, misinterpreted by most people who cite them. (They indicate not that unskilled people invariably perceive themselves as superior; but that, when it comes to social skills in particular, there’s no correlation.) Still, there are domains in which people accurately estimate their level of capability, and others where they don’t. Nonetheless, if being arrogant were sufficient for genius, we’d see a lot more geniuses than we do. Most arrogant people lack insight into their own limitations and underestimate how hard they’ll still have to work, no matter how talented they are, if they want to achieve anything significant.

In my experience, the smartest people know what they are. They find workaday life oppressive, toxic, and boring. They mock the corporate world’s needless busyness and its bland replica of hypomania. At the same time, if they wish to sustain high cognitive performance and originality, they can’t get complacent. It’s not that they feel competitive pressure (we don’t compete; however, others compete with us) because they’re more resistant to social impulses than average people; it’s that they recognize how little they know, and how much is out there that they need to learn.

Cognitive excellence is an anomaly. I’ve often wondered why there seem to be severe differences in human intellectual capacity. It doesn’t seem that 140 IQ brains are larger or more metabolically expensive than 100 IQ brains, so why didn’t we rapidly evolve to (and then fly right past) the IQ 140 level? Why did human intelligence stop at “high, but only just enough”, when it could have kept going? There are three possible answers. One is: the premise is wrong, and human intelligence continues to increase. It’s plausible, though I suspect the Flynn Effect overstates it. A second possibility is that the world in which we evolved had no use for intelligence beyond IQ 100. To be honest, I doubt that. Though I doubt the difference between IQ 140 and IQ 130 matters in the wild, I’d imagine that an IQ of 110 would provide an advantage over 100, even in the Stone Age. As with height, 99th-percentile intelligence might make one a freak; but in a species that evolved in tribal groups, 90th-percentile intelligence would make one the pre-eminent alpha. A third explanation, which I find most plausible, is that high intelligence correlates with undesirable genetic traits – in particular, mental illness.

IQ and mental illness don’t seem to correlate. Creativity (as far as it can be measured) and mental illness do, but there’s a question of which drives which, and how. Does mental illness provide a creative impulse, or are creative people so oppressed by an inferior society that they break down? I suspect that both are true in some degree.

Perhaps there is value, to a creative person, in mental illness. To be honest, I doubt that extreme states of consciousness such as frank mania or clinical depression serve much value, except perhaps when experienced once. (As a writer, I can write a panic attack scene. Not everyone can.) However, the constant need to reject one’s own thoughts in order to survive tends to imbue a certain humility that is otherwise rare in this world.

I’ve met people who perform at a cognitive level one would call genius. I’ve also met business executives. (The intersection between these two sets is almost empty.) What I’ve noticed in executives is their lack of introspection, a total absence of precision in their thinking; but, most of all, their inability to reject errant impulses. They’re not mentally ill. However, they’re prone to biased thinking, emotional reasoning, and hubristic nonsense. These afflict everyone, and especially children, and business executives seem to regress to the child’s level. They’re inexperienced at rejecting their own impulses (being neurotypical, they haven’t had to do so in order to survive) but they’re also surrounded by people who can’t tell them “No”. They revert to a juvenile state, though they lack the creativity they may have had as children. (They lost it while climbing the corporate ladder, and they won’t get it back.)

Corporate barons have the arrogance down pat, but they lack humility. Their capacity for original thought is, therefore, disappointing. Being an executive is like being stoned; you think you’re at the pinnacle of performance, but it’s a delusion. Clearly, arrogance alone does not lead to intellectual excellence. Unchecked by a broader acknowledgement of the world’s vastness, it pushes the other way.

How does one cultivate this paradoxical combination of arrogance and humility? To be honest, I’m not sure. Decided theists like Cantor or Tolkien might have a pat answer: humble before God, arrogant toward Mammon. Personally, I lean Buddhist and accept my lack of knowledge, though it is a hard thing to pray to. I don’t know, for sure, if there are gods or what happens after I die – and that’s, in fact, part of samsara. So long as I remain in samsara, I will have to experience death without truly knowing if anything’s on the other side of it. (I imagine that I have held many world religions, and none from time to time, in past lives.) Even if I could delude myself into believing that I understand the bulk of what’s important in this world – and, if I haven’t made this obvious, let me admit that I don’t – I’d have to contend with my complete lack of expertise on what, if anything, is to come.

When I examine the state of workaday society, I rightfully recognize myself as superior, and I cannot fully take part. Sometimes, I wish I could. Anti-mediocrity is a disability that has cost me millions. Still, I recognize it as built for something lesser than what I am, and I shall not condescend. However, when I look at a universe that existed for billions of years without me, or at the highest reaches of human potential, or even at the simple contentedness of a happy cat… I must be humble. I find myself not as an outsider, and neither inferior nor superior, but as a perpetual beginner.