The Time I Ruined Programming

 

Part I: The Personal

I remember the time I ruined programming. It was long enough ago (and I recovered) that I’m no longer embarrassed by the fact.

Nearly all programmers have slowdowns and periods where they just can’t get anything done. It’s why we hate two-week “sprint” nonsense and stack-ranking; no one wants to be watched by a guy in a guard tower with a gun, just waiting to pop the programmer who slows down the chain.

When people learn how to program, they do so enticed by what the job once was: an R&D job of pure creation, free of nonsensical deadlines and business-driven engineering. A computer science curriculum doesn’t prepare one to be a bored grunt, but for a life of work on projects like compilers, interactive games, and machine learning algorithms. Of course, only a lucky few get paid to do that kind of stuff. Most programmers end up writing and maintaining boring business programs, solving problems that are only challenging because of tight deadlines and the sheer weight of bad decisions made before.

It’s easy to ruin a creative activity: do it as a subordinate, and you’ll often grow to hate it. This is why, although I’m a decent enough writer, I’ve never considered writing jobs; in fact, I’m wary of traditional publishing and its increasing tendency to push writers into an unfavorable employee/employer relationship. Authors pay a lot for that “Not Entirely Awful” stamp of approval they get from being “published”, thinking it means more than it does. Publishers then drive terms that turn authors into subordinates, and agents aren’t going to push back on behalf of a midlist or unknown author. Getting dumped by a publisher– or worse, an agent– can be worse than being fired. You don’t just lose your relationships, but your reputation.

I don’t intend to imply that it invariably crushes passion to work for someone else. Everyone works for someone else. In the abstract, we’re all subordinates. The alternative is solipsistic madness, idiocy in the original Greek sense, in which the word idiot denoted not a lack of inborn intelligence, but a person who non-virtuously chose to ignore public life. An idiot was a man only into himself. The figure we hold diametrically opposite the idiot, held up as the pinnacle of human character, is the hero. She’s not a subordinate in an organizational sense, but she still adheres to a set of ethical principles and works toward benefits she wishes to deliver to the world. She serves, but she takes agency over whom and how she serves.

Of course, workplace subordination is not heroic. To subordinate to the private benefit of someone else, who is under no obligation to return the loyalty (and probably won’t), is demoralizing and toxic.

The truth is that I have no use for people who are constitutionally insubordinate. To blindly disobey orders, because they are orders, is even more idiotic than blindly following orders. Civilization requires what I call operational subordination. An example would be stopping at red lights while driving. We do this not because we consider ourselves inferior to these robotic lights, but because driving wouldn’t be safe if we didn’t obey their directives. We don’t think of it as subordination; it’s just good sense.

Workplaces, of the Theory X variety that has become the norm since the downsizing epidemic of the past few decades, don’t settle for operational subordination. They want personal subordination. The good of the company (read: the careers and reputations of executives) must take a higher priority than the career goals and personal needs of the worker, and the worker is expected not simply to obey stated commands, but to internalize this sense of moral inferiority. If he has no orders, he must ask for more work. If he seems to value his own advancement over that of his superiors– a successful out-of-work side project suffices to create this impression– he’ll be terminated even if he does nothing wrong.

Programmers, by and large, don’t mind operational subordination. In fact, we have an affinity for it. We like to solve complex problems with simple rules that make sense. We operationally subordinate, every day, to the syntactical demands of a compiler that simply won’t do anything with code it cannot parse. When rules are sane, and the benefit in their existence is obvious, we eagerly follow them. It’s the personal subordination that burns us out. We’re smart enough to spot a system that demands personal loyalty from us, while refusing to reciprocate, and it disgusts us. We recognize that our rules-based, overly rational way of thinking is under attack; someone is trying to hack us and take advantage.

The nature of employment has changed for programmers, and for the worse. Software development used to be a highly-compensated job with an R&D flavor, where programmers were trusted professionals rather than overworked grunts subjected to the likes of Scrum. The open-allocation environment of a company like Valve used to be the norm. What changed? Employers realized that line-of-business code could be written by mediocre people, and replaced the high-talent curmudgeons with this continuing churn of fresh-faced halfwits who have never written a program from scratch, have no mathematical or theoretical insight, and don’t know what “buffer overflow” or “O(n^2)” mean.

By and large, it has worked. Employers have successfully dumbed programming down. Large companies may need a few excellent programmers, but line-of-business software can be done by people of mediocre talent who’ll accept bad wages and worse working conditions. Scrum is not going away. Does it produce excellent software? No, not even close. It doesn’t even produce secure or safe or maintainable software. It works just well enough that deliverables get deliverated– just barely, but at a close enough approximation to working software that executives get promoted away from their messes before anything falls apart at a macroscopically visible level.

For my part, I got through that code slowdown, long enough ago that I don’t mind writing about it.

These days, I’m more interested in small programs that exist to solve problems than large ones written to justify budgets or “prove” technical choices. (I don’t care to convince anyone that Haskell is “ready for production”. If we can’t agree on a language, that’s fine; I’ll write C. It’s old and a bit wonky, but it works.) One of my current projects is to build an AI for the card game Ambition (which I came up with in 2003, and have been refining since then) because I want to test out certain design changes. I’m doing this because I use Ambition for a pivotal card game scene in Farisa’s Crossing, and I want to fix certain flaws before giving the game that added visibility. In order to test these changes, I need to run simulations with believable players; random-move players give some statistical insight, but they don’t explore any interesting strategy spaces. The “AI” doesn’t need to be AlphaGo, and it won’t be: if a simple Q-learner, backed by a basic backpropagation network, suffices, that’s what I’ll use. If I need something more, I’ll write something more complex. These days, I’m rarely (if ever) impressed by code. Lines of code are spent, not acquired.
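For the curious, here’s roughly the shape of the thing– a minimal sketch, not my actual code, of a Q-learner backed by a one-hidden-layer network trained with plain backpropagation. The environment interface (reset, legal_actions, step) is an assumption; a real version would be wired to the Ambition simulator and would mask illegal moves when computing targets.

```python
# A rough sketch (not my actual code) of a Q-learner over a small
# backpropagation network. The Ambition environment itself is assumed here:
# it must expose reset(), legal_actions(state), and step(action).
import numpy as np

class TinyQNet:
    """One hidden layer; squared-error loss on the chosen action's Q-value."""
    def __init__(self, n_inputs, n_actions, n_hidden=64, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (n_inputs, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.lr = lr

    def forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)
        return h, h @ self.W2 + self.b2

    def q_values(self, x):
        return self.forward(x)[1]

    def update(self, x, action, target):
        # One gradient step on 0.5 * (Q(x, action) - target)^2.
        h, q = self.forward(x)
        err = q[action] - target
        one_hot = np.eye(len(q))[action]
        dW2 = np.outer(h, one_hot * err)
        db2 = one_hot * err
        dpre = (self.W2[:, action] * err) * (1.0 - h ** 2)   # back through tanh
        dW1 = np.outer(x, dpre)
        self.W2 -= self.lr * dW2
        self.b2 -= self.lr * db2
        self.W1 -= self.lr * dW1
        self.b1 -= self.lr * dpre

def play_hand(env, net, epsilon=0.1, gamma=0.99):
    """One episode of epsilon-greedy play with online Q-updates."""
    state = env.reset()
    done = False
    while not done:
        legal = env.legal_actions(state)
        q = net.q_values(state)
        if np.random.random() < epsilon:
            action = int(np.random.choice(legal))
        else:
            action = int(max(legal, key=lambda a: q[a]))
        next_state, reward, done = env.step(action)
        # For brevity, the target maxes over all actions, not just legal ones.
        target = reward if done else reward + gamma * float(np.max(net.q_values(next_state)))
        net.update(state, action, target)
        state = next_state
```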

Now that I’m older, I have almost no passion for programming as an end in itself– an average corporate codebase is far more complex than anything I would intentionally write, and yet that complexity is wasteful and ugly– but, still, certain problems that can be solved with software interest me.

On the other hand: code for code’s sake, in million-line piles of corporate cruft and the dead shells of Jira tickets? Nah, I’m done with that. That really is a young idiot’s game.

Part II: The Fourth Turning

I reached a dark night of the soul with regard to software and technology. There were moments when I looked around and realized that my total contribution to humanity, by working for an increasingly maleficent industry, might be negative. The 21st century’s American theatre has featured the dismantling of the middle class, and I can’t say I had nothing to do with it.

In the 1990s, we had to hear about those “South Park Republicans”. In 2018, I find that I’m a “Black Mirror Liberal”. I loathe and fear Silicon Valley, with its complete lack of morality, more than Donald Trump; the latter is, to be frank, too stupid and too transparent in his self-indulgence to pull fascism off. If we don’t learn our lesson this time, a 39-year-old startup founder, more reserved and competent than the orange idiot, could do a lot more damage.

I don’t view technology as evil; however, I fear what humans will do with it. Like the Black Mirror series, I have the conservative’s skepticism toward human nature. Technological progress without moral improvement will lead us straight to hell. That, to me, is the important lesson of Black Mirror, arguably the most important television series of our time. Yes, science and technology are good, and liberal economics is largely correct; but, without cultural and moral improvements as well, other forms of progress can only do so much.

To me, the quintessential Black Mirror episode is “Fifteen Million Merits”. I shan’t do it justice here, but let me sum it up briefly. It’s set in (probably) a far-future dystopia. The middle classes ride stationary bikes to generate power, and outside of work they’re doomed to a virtualized, entertainment-driven life in which one must pay not to see ads (many of which are offensive, even pornographic). Those unfit to ride the bikes fall into the lower classes (“lemons”) and are treated as subhuman. There seems to be no “outdoors” in this world; food is grown in a Petri dish using energy that comes from… the bikes. Or, at least, that’s what the bike slaves are told. I’ll get to the plausibility of that, in a second.

The main characters try to escape their position. There’s a television show called Hot Shots, modeled on shows like American Idol, through which talented people vie for a chance to join the celebrity upper class. Most of them fail, and some are forced into neoliberal prostitution. One character becomes a pornographic actress and her humiliation is broadcast to the entirety of her society; another, after a display of authenticity and rage, is enticed to make a performance out of his anger (thus commoditizing and killing his own authenticity).

There’s a scientific easter egg in “Fifteen Million Merits”. Examine the premise of it: humans must ride bikes to generate power for a society that, so it says, has been reduced to growing food artificially. This is physically impossible. Human efficiency is about 25 percent: if we eat 2000 kilocalories’ worth of food, we can generate 500 kilocalories of mechanical energy. That figure could conceivably increase, but it would never reach (much less exceed) 100 percent. No life form is a net power source; autotrophs like plants and algae capture solar energy, and the rest of us eat them. Without an external energy source like the sun (a giant, faraway fusion reactor), we could not survive.

Low-tech societies ran on human power (stored solar power; ultimately, faraway nuclear power) because they had no alternatives. If, however, a society were forced to use human power to grow food to feed humans, it would die. Therefore, we conclude that the society in “Fifteen Million Merits” isn’t energy poor, but energy rich. It can waste human effort and physical energy on pointless drudgery. The bikers think they’re keeping the lights on and the food growing, but anyone scientifically literate would recognize that as a lie.
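To make the arithmetic concrete– these are my own round numbers, not the show’s– here is the closed loop spelled out:

```python
# Back-of-the-envelope check, using round numbers of my own choosing:
# a closed loop of "humans pedal -> electricity grows food -> humans eat it"
# loses most of its energy on every pass, so it cannot sustain itself.
food_in_kcal = 2000.0            # what one rider eats per day
muscle_efficiency = 0.25         # food energy -> pedal work, roughly
synthesis_efficiency = 1.0       # absurdly generous: lossless food production

pedal_work_kcal = food_in_kcal * muscle_efficiency        # 500 kcal
food_out_kcal = pedal_work_kcal * synthesis_efficiency    # at most 500 kcal

print(f"Energy returned per cycle: {food_out_kcal / food_in_kcal:.0%}")  # 25%
# Even with a perfect synthesizer, at least 75% of the energy vanishes each
# cycle; the bikes cannot be what actually powers that society.
```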

I don’t think this is a mistake in the show’s writing. Someone would have pointed it out, I imagine. (Maybe not; The Matrix got this wrong.) For my part, I think it’s intentional. This means that the society has eliminated the need for work, but it has kept scarcity and work around. Why? Sadism.

That’s the horrifying truth that sneaks out of “Fifteen Million Merits”. It’s not a post-apocalyptic society that needs every erg of human effort. Rather, I suspect that it’s a rich society that has kept scarcity around to dominate and humiliate people, as entertainment both for the put-upon middle classes (who must be given seemingly useful work, lest they rebel) and the vicious upper classes.

To me, that’s quite a plausible future. Technology will eliminate the need for human work. It will eliminate most forms of scarcity. Is this going to lead us to a wealthy, egalitarian utopia? It might, but there are few guarantees. The elites could keep scarcity in place, valuing dominance over others too much to let the rest share in the wealth. There’s a lot that we don’t know about so-called “human nature”, and we’ve never had that kind of rich world before.

At any rate, let’s get back to the 21st century. One disturbing trend is that work is becoming more subordinate. I know, because I’ve been involved in making it so.

A few years ago, I worked on a “performance management” system, imposed on truckers, that would track which drivers were too fast, which were too slow, and even which drivers were eating lunch off-route to save money or have time with their children. It doesn’t save much to prohibit a driver from eating off-route: how much does five miles’ worth of gas cost? But, thanks to technology, this surveillance costs even less.

One could argue that Agile and Jira are programmers’ own professional karma. What we’ve allowed to be done to every other class of worker is now being done to us. It shocks us more than it should. Many of us worked toward bad ends– some examples would be so-called “performance management” systems, misuses of data to corporate benefit and human detriment, systems that allowed health insurers to deny care– and it’s only fitting that the surveillance capitalism we created would, at some point, be turned on us.

Technology has, I’ll note, also damaged the world for professional writers. (It may fix it; self-publishing is becoming a viable alternative, but that’s another topic.) Mastering writing and mastering sales tend to happen at different times. In the 1980s, a strong novelist whose first four books had mediocre sales could easily get a fifth book deal. These days, it’s a lot harder, because surveillance capitalism has no qualms about using data toward bad, career-denying ends. It’s not publishers who are to blame, in fact; the chain bookstores did this. Publishers, in the old days, would offer a fifth deal, along with an advance and some promotion, to a talented author who hadn’t yet built a following or mastered the golden touch. In the late 1990s, though, bookstores started pushing back and gutting authors who stayed on the midlist. Why risk space for someone who has “failed” four times, when an unknown might be a breakout bestseller? The long-term effect has been detrimental. Publishers have lost money and many have been put out of business or forced into conglomeration; writers’ careers have been wrecked by this; and the literary world has become more commercial and overall worse in terms of quality. The true culprit in this clusterfuck is, yet again, technology used for worker (in this case, author) surveillance. In the 1980s, an author with mediocre sales got more chances; in 2018, the bookstores run her numbers and say, “No thanks”, even if she’s quite talented.

Technology used to be in alliance with human progress, and now it seems to be moving against us. What’s going on? I think we’re headed for a Fourth Turning, or a historical crisis. The benefits of technology and surveillance accrue to the rich overseers, and they’re not going to give their ill-gotten wealth or power up.

We can assess the Fourth Turning, toward which we careen, by looking at the crises of the 20th century: the World Wars and the Great Depression.

As we moved from the 19th century into the 20th, we got very good at making food. It’s hard to imagine this being a bad thing; yet, it led to a Great Depression in North America and long-running, total wars (of a kind that prior agricultural capabilities and supply chains couldn’t support) in Europe. Ill-managed prosperity is more dangerous than true scarcity, it seems. Scarcity’s bad, but at least it slows things down.

In North America, food prices started dropping in the 1920s. Farms couldn’t survive. Our later corrections persist and, in some ways, are pernicious; due to our perennial corn surplus, we spike soft drinks with high-fructose Franken-sugars. However, that came after the Depression. In the last Gilded Age, consensus was that it was best to let these farmers fail. So, rural small towns that served the farmers died along with them. Heavy industry got hit around 1925-27, and the stock market grew more volatile in turn. Then it crashed, notably, in October 1929, but that didn’t “cause” the Depression; we’ve had worse stock market crashes since then (e.g., 1987) to minimal effect. Rather, the Great Depression actually started in the early 1920s; it just wasn’t until about 1930 that it started hitting rich people in the cities. We learned that poverty wasn’t some “moral medicine” to shock people back into adhering to the so-called protestant work ethic; rather, it was a cancer that, left to its own devices, would keep spreading until it destroyed a society.

What killed the American economy in the 1930s? Conservative mismanagement of agricultural prosperity.

What’s killing the American middle class in 2018? Conservative mismanagement of technological prosperity.

What happened to farmers in the 1920s is happening to… all human labor. Let that sink in. The vast majority of Americans produce things of less usefulness than what farmers produce. If farmers weren’t safe in the ’20s, public relations managers and computer programmers aren’t safe today. We’ll die without food; we’ll be fine without TPS reports.

Globalization is inevitable and desirable, but we’re letting it go off in a way that benefits the rich and eviscerates the middle class. Technological automation is wiping out jobs, and surveillance is turning high-autonomy, fulfilling jobs (like what programming used to be, when it had that R&D flavor) into drudge work. The high-surveillance culture doesn’t merely make the workplace unpleasant, but also lowers the bar for who can contribute– it becomes profitable to employ unskilled scabs, if surveillance becomes cheap enough– depressing wages further. Though Agile Scrum reduces the effectiveness of competent programmers, it turns incompetent ones into marginally employable code-cutters. So what happens? Well, employers replace the high-talent curmudgeonly experts with scrummy rent-a-coders, and wages nosedive. Furthermore, as jobs are wiped out in one industry, people who worked in it become “refugees” and flow into another, making that industry more competitive, and driving wages down further.

The wage slide is going to be far more severe than most people predict, due to economic inelasticity. When people have a desperate need for something– like gas to go to work, or water in the desert, or illegal drugs to which they’re physically addicted– they will pay nearly any price, and therefore small drops in availability will drive massive price increases. During the oil shocks of the 1970s, the supply only dropped by about 5 percent, but prices quadrupled. That’s demand inelasticity, but supply (of labor) experiences the same phenomenon. If people desperately need jobs to survive, then we should expect that either a small increase in worker availability or a slight decrease in demand for labor will tank wages, disproportionate to the actual amount of change.
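A rough illustration, using the oil-shock numbers cited above (both figures are approximate):

```python
# Approximate 1970s oil-shock figures: supply fell about 5%, prices roughly
# quadrupled (a +300% change).
quantity_change = -0.05
price_change = 3.00

elasticity = quantity_change / price_change
print(f"Implied price elasticity of demand: {elasticity:.3f}")   # about -0.017
# Run the same arithmetic in reverse for labor: if demand for workers is this
# inelastic, a small surplus of workers forces a disproportionate wage drop.
```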

For example, self-driving trucks will put millions of drivers out of business in the next twenty years. There are millions more who serve those drivers: hotel owners in the Midwest, logistics coordinators, and so on. There are a lot of smug programmers who think that this won’t affect their wages. Wrong. Again, no one is safe.

Let’s say that half the truck drivers lose their jobs. We won’t even account for all that supporting labor: the hotels and diners. That’s 1.75 million workers, fresh on the market. Let’s be conservative and say that only 5% of them learn how to write code and become programmers; most will go into other industries. That’s 87,500 new programmers on the U.S. market. That’s about a 2.5 percent increase in our numbers. Seems survivable, right? If we expected wages to drop only by 2.5 percent, yes; but keeping in mind inelasticity, they could drop by 20 percent, just due to that.
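Here’s that arithmetic spelled out. The head-counts are the rough figures implied above, and the elasticity at the end is an assumption I picked only to show how inelasticity amplifies a small shock, not a measured value:

```python
# The paragraph's own numbers, plus one assumed elasticity at the end.
truck_drivers   = 3_500_000               # rough U.S. figure implied above
displaced       = truck_drivers // 2      # half lose their jobs -> 1,750,000
new_programmers = int(displaced * 0.05)   # 5% retrain as programmers -> 87,500

us_programmers  = 3_500_000               # implied by "about a 2.5 percent increase"
supply_shock    = new_programmers / us_programmers        # ~2.5%

labor_demand_elasticity = 0.125           # assumed: very inelastic demand for labor
wage_drop = supply_shock / labor_demand_elasticity        # ~20%

print(f"Supply shock: {supply_shock:.1%}; implied wage drop: {wage_drop:.0%}")
```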

Now, add in the workers who support trucking, the hotel owners who see the writing on the wall. They won’t all go into programming. Some will go into other industries (and push out other workers, perhaps). Multiple industries will have refugee-driven inelasticity crises and wage collapses. It won’t be just us; but, no one will be safe. Look at what has happened to attorneys (once considered a safe, protected profession) over the past thirty years; many work long hours for mediocre salaries and are paying off student debt into their 40s. If it could happen to them, it can happen to us.

Is the world of 2050, even if we leave corporate capitalism to its own perverse devices, going to be jobless? No. Society will invent things for people to do. If nothing else, the rich will spend more time in school and the poor will spend more time in prison. Is it possible that continual downward wage shocks and industrial refugee crises will demolish the middle class? Yes. If we don’t recognize the danger that technological unemployment represents, and come up with proactive solutions, it will happen.

I used to hold the somewhat pleasant belief that technological progress would, if it hurt the economy enough, blow out its own flames. It ought to be self-limiting, right? If things get bad, people will stop creating (a sort of unprogrammed strike) and technical progress will slow… and we can find a way to clean up our messes. That was what I hoped for.

I’m no longer so sure of this. Technical progress, from a humanistic standpoint, has slowed down. Basic research funding has collapsed and we’re no longer putting people on the Moon; we’re tweeting about “covfefe” instead. Antibiotics did more to improve human life than a blog post about technological immortality written by some 24-year-old who optimizes ad placements; that much is true. Finally, the great technological marvel of the 21st century is a cloud. Not “the Cloud”; that’s just jargon for “the Internet”. I’m talking about the cloud of prehistoric algal carbon pumped out by the literal septillions of useless computations, performed to mine so-called “BitCoins”. Yes, that cloud, for anyone planning a tropical vacation to the North Pole. Ah, BitCoin and its ilk; this high-pitched electric whine, inaudibly belch-screaming carbon dioxide into our atmosphere, might be the siren song of a desperate middle class, seeing its impending demise at the hands of authoritarian capitalism, and not knowing what else to do but invest in libertarian wank tulips. Technology is becoming less useful and less friendly, from a humanist’s standpoint, but it does not seem to be self-limiting. Its direction leaves much to be desired, but its speed remains high.

Technology and globalization shall continue. There’s no other way. If we let our society and morality implode, this whole picture will take an undesirable direction. Without innovations that are currently out of political reach, such as a universal basic income and the eventual eradication of global poverty– in my view, in the next 50 years we ought to take a preemptive Marshall Plan attitude toward the elimination of third-world misery– Black Mirror is our future.

Technical achievement and beneficence have stalled. Sadly, the technological process will continue even if there’s a complete lack of human progress. There will, alas, always be money in helping rich people unemploy those pesky, expensive worker-serfs… and, also, helping the powerful surveil those whom they haven’t yet figured out how to unemploy.


Does Genius Exist?

Does genius exist?

I would argue that it does, but not in the way our society imagines it. Our society holds the notion that people like Albert Einstein, Nikola Tesla, or Andrew Wiles are natural geniuses, born apart from the rest of the world. It would be dishonest to ignore the unusual natural capability that such people have. However, are they born-apart geniuses, or people of high achievement? Does the distinction matter?

No such thing as “a genius”

The notion of a person as “a genius”, somehow separate from humanity and proto-angelic because of her superior intellect, I find harmful. As a discrete zero/one variable, this thing called “genius” doesn’t exist.

In 1999, I attended the Math Olympiad Summer Program, or MO(s)P. I met people with much more natural talent than me. To call them “geniuses”, however, would understate the sheer amount of work that it took them to accomplish what they did. I asked an IMO Gold Medalist how much time he spent thinking about math. He said, “50 percent”. Remember that this was in high school. Did he have a lot of natural ability? Of course, but he also worked his ass off (although it may not have felt like “work” to him).

This binary notion of “genius” seems to be a justification for our society’s othering of highly intelligent people: mad scientists, failed prodigies, bitter adult geniuses seeking revenge against a society that never recognized them. Then, to make matters worse, we have the cultural blight that is venture-funded, private-sector technology (“Silicon Valley”) in which the supposed genius nerds have come around full circle and become the bullies. Silicon Valley is the worst PR for smart people since Nagasaki.

The standard Silicon Valley narrative is that the horrible men at the top of the VC-funded ecosystem are taking revenge for two or three decades of unpopularity. That’s mostly correct, but let me add a couple of notes. First of all, the nerds/jocks model of high school is largely incorrect. I went to high school. I remember it. Not all smart people were unpopular. (I was smack-average in the hierarchy; not prom king, but not picked on either. I wasn’t invited to all the parties, but people respected me.) Also, “jocks” never were the bad guys. Jocks worked as hard as nerds, and most of them were nice people. Bullies were the bad guys. Sure, a few bullying jocks existed, and so did bullying nerds, and so did bullying jockish nerds. Some of today’s tech barons were nerds and some were more fratty, some were unpopular and some were not, but most of all they’re bullying assholes and always have been. Forget the Hollywood narrative; most bullies were unpopular in high school. (Were there popular bullies? Sure, but all else being equal, bullying hurts one’s popularity.) It doesn’t take popularity to be a jerk.

Today’s Silicon Valley barons aren’t nerds, though they’ve appropriated nerd culture. They certainly aren’t geniuses; I’m much closer to that than they are, and I wouldn’t put that label on myself, because I’m hesitant to believe that it exists. Mostly, they are bullies who pretend to be nerds because it pays. As for bullying, I suspect they’ve always had that inclination, but corporate capitalism gives them a power they lacked in adolescence.

Are there people in Silicon Valley who approach or even exceed the level of cognitive excellence that we’d call “genius”? Sure, but they’re never the ones at the top of the pecking order, making millions of dollars or getting news articles written about them. They’re research scientists and software engineers no one has ever heard of, and they can’t afford a house less than an hour and a half from work. So it goes.

Okay, okay…

Highly intelligent people get othered. We’re nerds who “must” have poor social skills, because we spend so much time reading and thinking; at the same time, we’re geniuses who never had to work for anything. Note the inconsistency: we lack social skills because we work too hard on other things, but we’re lazy bastards to whom everything came easy. It’s a bit hard to win, in a society so focused on corporate back-biting that any positive trait or distinction gets a person torn down.

My experience leads me to conclude that: (1) natural ability is a continuous variable, not a discrete, binary one we could call “genius” and use to set people apart, (2) that meaningful natural ability is so hard to test beyond 2.5-3.0 standard deviations (IQ 137-145) that it’s hardly worth the bother, (3) that natural cognitive abilities tend to diverge below the level we’d call “genius”, so a mathematical genius might be only average at music composition or the visual arts, (4) that cognitive originality is separate from what we consider “IQ”, and (5) that said cognitive originality requires hard work far more than it requires natural talent. How can one be original without the sheer work it takes to study the work of others, imitating what is competent while rejecting what is overused? No intellectual achievement comes naturally; experience matters and that takes hard work. I don’t write well because I was born with a silver semicolon; I write well because I’ve written millions of words, and some of that stuff, early-early on, was terrible.

I’m not going to lie. Does the average child, age seven, have a shot at winning the International Math Olympiad in ten years? No. It’s not fair to place that kind of expectation on him. Discrepancies of natural ability exist, and their importance grows as a technological society develops a use for the ability to grapple with abstraction. That’s clear. But does a person require a have-it-or-not single variable we can call “genius” in order to reach the highest levels of achievement? My experience has led me to conclude that the answer is “no”.

It’s a harmful notion, too, because most of us can’t agree on what “a genius” is. In business, people of rather bland cognitive ability (with nonexistent diligence and originality) routinely take credit for their subordinates’ work and are hailed as “visionaries” or “geniuses”, while the hundreds of smarter people doing the actual work toil in obscurity.

Sure, some people have more natural ability than others. Some have more drive. Some work harder. Some people are luckier in terms of what experiences life gives them. Over decades, these result in drastic differences in (perceived and real) capability. All of that seems to be true, without much controversy. Yet the archetype of “a genius” has so much baggage that it might be wise to get rid of it.

So what is genius?

Though I reject the notion of a person as “a genius”, I prefer not to throw out the concept of genius altogether– not as a binary personal variable, but as a label we might affix to high intellectual performance. There are people who can reach a state wherein their cognitive originality, conceptual clarity, and productive competence are far beyond the norm. When they achieve the right state of consciousness, they reach a level of insight and capability that seems inaccessible to everyone else, like stage magic.

Natural ability plays a role in this, but it’s not as big a factor as we tend to think. Less than 0.01 percent of people in our society are truly high-performing intellectuals, as I see it, and yet I doubt that 99.99th percentile natural ability (IQ 156) is required. (In practice, we can’t measure intelligence that high in adults– and childhood IQs are both inflated and not as meaningful, since adult capability is what we care about.) Perhaps 0.5 percent, or even 2 percent, of people have the requisite natural ability. Perhaps it’s more, because we still don’t understand what intelligence is or where it comes from. Did Shakespeare have a 190 IQ? Or was he a 125 IQ person with a knack for wordplay and storytelling? Does it even matter? Not really.
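For anyone who wants to check the percentile-to-IQ conversions here and in the earlier list, they assume IQ is normally distributed with mean 100 and standard deviation 15 (the snippet below uses SciPy):

```python
# Quick sanity check on the percentile/IQ conversions used above,
# assuming IQ ~ Normal(mean=100, sd=15).
from scipy.stats import norm

for sd in (2.5, 3.0):
    print(f"{sd} SD above the mean -> IQ {100 + 15 * sd:g}")   # 137.5 and 145

print(f"99.99th percentile -> IQ {norm.ppf(0.9999, loc=100, scale=15):.0f}")  # ~156
```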

In childhood, one’s creativity is high but competence is low. In adulthood, rote industrial competence is high, but creativity tends to be low. Society beats it out of most people, forcing them to subordinate to a system run mostly by people of mediocre intellect. Workaday life crushes originality and excellence. So, we see that two ingredients for what we might call “genius” exist at separate times in life and, for most people, never the twain shall meet. Neither time is conducive to anything we’d consider meaningful genius: the child’s inexperienced creativity lacks the insight necessary to achieve originality; while the adult’s mechanical, risk-averse fairly-goodness, born of the pervasive oppression of needing an income, clearly falls short.

Most gifted children seem like they might achieve creative excellence in adulthood; very few actually do. I’ve observed the careers of extremely intelligent (i.e., IQ 160+) people and the results are, at best, disappointing. About half go to graduate school; the other half go to Wall Street or Silicon Valley straight out of college. Either way, they expect to defeat the morons in business handily, retire within ten years, and dedicate the remainders of their lives to intellectual pursuits. It almost never works out that way. It’s not uncommon for highly intelligent people to be mobbed and bullied in their corporate jobs by resentful mediocrities, although even more common is for them to disappear into the bland, beige fog, and to lose every element of originality they once had. Most often, they disappear somewhere in the folds of middle management, and do what they can to hide what they once were.

When highly creative people enter the corporate world, they perceive (correctly) that they are singled out for unfair treatment because of their abilities. However, they tend to attribute this to envy and resentment, as it was with bullies in school. It’s not so; there are two problems with this argument. First, plenty of high-IQ people join in the bullying. While creatively and morally stunted, having acquiesced to mediocrity a long time ago, such people do not perceive themselves as inferior. Second, the upper tiers of intelligence and creativity do not bring the economic performance or social status that such people would envy. The corporate masters crush the brilliant not because they’re resentful, but because they’re afraid. To that point, they’re afraid that the irreverence (if not the talent) of such people could become infectious and undermine their authority. Natural intellectual talent may or may not be a necessary condition for creative excellence – that element can be debated – but without an indifference toward the sort of arbitrary authority that exists in the corporate world, it is so difficult to protect creativity that it might as well be considered impossible. The corporate regime cannot allow authority to be seen protecting, or even tolerating, the irreverence that human excellence requires. Therefore, the excellent must be crushed until they lose the desire to excel. But it is not envy that drives this; it is a calculated purge.

It’s important to recognize the notion of genius, not as a separate category of human, because such a thing never existed, but as a rare combination of traits we ought to encourage rather than crush. It’s not something that nature doles out in rare packets. Rather, it’s a combination of traits that can be encouraged or defeated by circumstances. Our society has reached a state in which the latter is the norm; thus, our squalid culture and stagnant economy should surprise no one.

The rare element: arrogant humility

What makes cognitive originality so rare?

I’ll confess it. I’m not that smart. If I had to guess my adulthood IQ, it’d be somewhere between 140 and 160. Statistically speaking, it’s nearly a guarantee that there’s someone smarter than me on a typical subway train in New York, or in a traffic jam on I-95, or serving coffee at Starbucks.

Yet, I am a misfit in human society. The corporate world, with its polite but militant brand of mediocrity, has tried to crush me; somehow, I’ve retained this diamantine cognitive integrity that holds negative economic value, flouting self-preservation for a reason even I don’t fully know. People like me become rarer with age; some of us just give in. At age 34, with my cultural and cognitive integrity intact, I’m an absurd, alien rarity for reasons that have little to do with natural ability. What makes me different from all the drones out there is not “IQ”; I know plenty of high-IQ people who lack it, and it’s plausible that people can retain it with only average cognitive gifts.

If not natural intelligence, what is the set of traits one needs in order to preserve a child’s cognitive integrity, while gaining the competence and experience of adulthood? In two words, I’d call it arrogant humility. One needs the pugnacious assertiveness of a child, coupled with the quiet diligence of an adult who knows how much she doesn’t know.

To be better than other people, you have to think that you’re better. You must set a higher standard for yourself than you’d set for anyone else. In addition, you have to fight all the people who want to bring you down to their level: the corporate vassals and barons who can’t stand to be outshined, society’s myriad talentless gatekeepers, and so on. That’s difficult. A person who does this seems aloof and will be disliked.

To be honest, I think that what we now call “mental illness” – many of these are physical illnesses with mental symptoms, but that’s a topic for another time – can be, in its milder forms, an asset. During a depression, the brain comes up with a certain brand of charismatically negative, but incorrect and damaging, thought: you’re a loser and you’ll never amount to anything, so just be content to bump along the bottom. If you’re used to telling this sort of impulse to fuck right off, because your survival has literally depended on it for more than a decade, then you’re more likely to hold integrity when you enter the corporate world and hundreds of supposed social superiors are sending you the same sort of message (in more polite words). You recognize them as Team Depression and, just like the enemy within, full of nonsense.

To hold your cognitive integrity in a world that wants to break it, you’ve got to be a bit arrogant. You have to believe that you’re better than the mediocrity you see everywhere, especially in a time of organizational decay like this one. Maybe you are better; I am. I’m not afraid to say it, because I value honest perception more than politesse.

At the same time, believing in one’s superiority doesn’t make it so. It can lead to laziness and failure. The Dunning-Kruger results have been, in a bit of irony, misinterpreted by most people who cite them. (They indicate not that unskilled people invariably perceive themselves as superior; but that, when it comes to social skills in particular, there’s no correlation.) Still, there are domains in which people accurately estimate their level of capability, and others where they don’t. Nonetheless, if being arrogant were sufficient for genius, we’d see a lot more geniuses than we do. Most arrogant people lack insight into their own limitations and underestimate how hard they’ll still have to work, no matter how talented they are, if they want to achieve anything significant.

In my experience, the smartest people know what they are. They find workaday life oppressive, toxic, and boring. They mock the corporate world’s needless busyness and its bland replica of hypomania. At the same time, if they wish to sustain high cognitive performance and originality, they can’t get complacent. It’s not that they feel competitive pressure (we don’t compete; however, others compete with us) because they’re more resistant to social impulses than average people; it’s that they recognize how little they know, and how much is out there that they need to learn.

Cognitive excellence is an anomaly. I’ve often wondered why there seem to be severe differences in human intellectual capacity. It doesn’t seem that 140 IQ brains are larger or consume more energy than 100 IQ brains, so why didn’t we rapidly evolve to (and then fly right past) the IQ 140 level? Why did human intelligence stop at “high, but merely enough”, when it could have kept going? There are three possible answers. One is: the premise is wrong, and human intelligence continues to increase. It’s plausible, though I suspect the Flynn Effect overstates it. A second possibility is that the world in which we evolved had no use for intelligence beyond IQ 100. To be honest, I doubt that. Though I doubt the difference between IQ 140 and IQ 130 matters in the wild, I’d imagine that an IQ of 110 would provide an advantage over 100, even in the Stone Age. As with height, 99th-percentile intelligence might make one a freak; but in a species that evolved in tribal groups, 90th-percentile intelligence would make one the pre-eminent alpha. A third explanation, which I find most plausible, is that high intelligence correlates with undesirable genetic traits – in particular, mental illness.

IQ and mental illness don’t seem to correlate. Creativity (as far as it can be measured) and mental illness do, but there’s a question of which drives which, and how. Does mental illness provide a creative impulse, or are creative people so oppressed by an inferior society that they break down? I suspect that both are true in some degree.

Perhaps there is value, to a creative person, in mental illness. To be honest, I doubt that extreme states of consciousness such as frank mania or clinical depression serve much value, except perhaps when experienced once. (As a writer, I can write a panic attack scene. Not everyone can.) However, the constant need to reject one’s own thoughts in order to survive tends to imbue a certain humility that is otherwise rare in this world.

I’ve met people who perform at a cognitive level one would call genius. I’ve also met business executives. (The intersection between these two sets is almost empty.) What I’ve noticed in executives is their lack of introspection, a total absence of precision in their thinking; but, most of all, their inability to reject errant impulses. They’re not mentally ill. However, they’re prone to biased thinking, emotional reasoning, and hubristic nonsense. These afflict everyone, and especially children, and business executives seem to regress to the child’s level. They’re inexperienced at rejecting their own impulses (being neurotypical, they haven’t had to do so in order to survive) but they’re also surrounded by people who can’t tell them “No”. They revert to a juvenile state, though they lack the creativity they may have had as children. (They lost it while climbing the corporate ladder, and they won’t get it back.)

Corporate barons have the arrogance down pat, but they lack humility. Their capacity for original thought is, therefore, disappointing. Being an executive is like being stoned; you think you’re at the pinnacle of performance, but it’s a delusion. Clearly, arrogance alone does not lead to intellectual excellence. Unchecked by a broader acknowledgement of the world’s vastness, it pushes the other way.

How does one cultivate this paradoxical combination of arrogance and humility? To be honest, I’m not sure. Decided theists like Cantor or Tolkien might have a pat answer: humble before God, arrogant toward Mammon. Personally, I lean Buddhist and accept my lack of knowledge, though it is a hard thing to pray to. I don’t know, for sure, if there are gods or what happens after I die – and that’s, in fact, part of samsara. So long as I remain in samsara, I will have to experience death without truly knowing if anything’s on the other side of it. (I imagine that I have held many world religions, and none from time to time, in past lives.) Even if I could delude myself into believing that I understand the bulk of what’s important in this world – and, if I haven’t made this obvious, let me admit that I don’t – I’d have to contend with my complete lack of expertise on what, if anything, is to come.

When I examine the state of workaday society, I rightfully recognize myself as superior, and I cannot fully take part. Sometimes, I wish I could. Anti-mediocrity is a disability that has cost me millions. Still, I recognize it as built for something lesser than what I am, and I shall not condescend. However, when I look at a universe that existed for billions of years without me, or at the highest reaches of human potential, or even at the simple contentedness of a happy cat… I must be humble. I find myself not as an outsider, and neither inferior nor superior, but as a perpetual beginner.

Panic, Office Work, and What Each Taught Me

I have panic disorder. I get mild-to-moderate anxiety attacks two to five times per week, disruptive panic attacks about once per month, and the severe kind that wears me out (a “go-homer”) every six months or so. It sucks, but it used to be worse. Nine years ago, before treatment, I’d have the hallucinatory, dissociative kind that felt like my heart was about to explode. I haven’t had one of those for almost a decade. These days, they’re annoying interruptions.

The disorder remits slowly. It leaves a lesser shadow of what it once was, but never quite ends. After you experience the five hundred bizarre symptoms that panic can throw– phantom smells, visual flashing, light-headedness, vomiting– you lose a fear of them. The attacks become painful and annoying, like headaches, but they’re not especially scary, per se, to a person who’s been through several hundred. They’re not dangerous; they just suck.

There’s no cure or exit that always works. Sometimes, a glass of fruit juice helps; sometimes, it doesn’t. A walk can make it better, or worse. Usually better, but it feels like a gamble. Reading isn’t usually possible; writing might be, but expect typos. The peak of an attack only lasts for a minute or so; an attack may have one peak or ten spaced a few minutes apart, and the recovery is long: 30 to 90 minutes, usually, before the damn thing is conclusively and safely over.

Do It Calmly

There does seem to be an overarching principle to the long recovery: whatever you do, do it calmly. That’s not much to say, but it’s what I’ve got.

That’s harder than it sounds. When your anxiety goes up to 11, and the fight-or-flight response is out of control, the impulse is to sprint out of a place, not walk calmly as if everything were normal. Yet, walking is often better for escape than running, in this case. Though regular exercise reduces the frequency and severity of panic attacks, I wouldn’t recommend all-out effort during one.

I find that panic produces two desires: one helpful and one hurtful. The first is the desire to complete a task that might make it better: a walk in the cold, going home, playing a mindless video game, or getting to the bathroom. It gives the mind and body something to work on, not so taxing as to exacerbate anxiety, but engaging enough to pass the time and spend some of that surplus adrenaline and neuro-electrical activity. That can be useful. The second impulse, the harmful one, is that raging urge to do it quickly and “get out of” the panic experience as fast as possible. That blistering hatred for the experience keeps it going.

The panicked mind thinks that achieving some task (say, leaving a crowded place) will make the attack end. The process of working toward the task usually helps; but, achieving the task in a hurried, tense, or reckless manner will create more anxiety. One escapes the situation that caused the attack, only to find the same panic in a new theatre (and, then, falsely attribute it to some other cause, back where one started). Sure, you escaped the movie theater, but now you’re having the same attack in a parking lot.

I don’t have all the answers, but the best I’ve got for panic is: find something to do but, no matter what it is, do it calmly. There’s something zen about the notion; more important than what you do is how you do it. That’s an attitude I try to take to most of life. Between health issues and work and random events, I can’t control the sheer quantity of output I produce. I don’t (and can’t) even know whether I have 60 years or 60 minutes left of life. The quality, I decide. Quantity isn’t up to me.

When I have a panic attack at home, I do housework. It takes twice as long as it otherwise would, and perhaps I do a less-than-stellar job, but the time isn’t wasted. If I have one while working out, I keep doing what I’m doing, in order to dump the adrenaline into exercise. At the office? Well, I still haven’t figured that out. The work never was the problem; it’s the subordinate context that makes office life so terrible. I suspect there’s no cure, for anyone, save sweeping societal overhaul (which will, though potentially beneficial, itself be stressful).

One good thing about panic attacks is that they end. Office life never does, really. Unemployed people have to search for jobs, which is just as stressful; retired people are so worn out by office existence that they often struggle to adapt, after having been institutionalized for so long. For this reason, the year after retirement has an above-normal rate of death by all causes: natural ones, and suicide. Such people have taken themselves out of the office– or, in most cases, been pushed out due to ageism– but they can’t take the office out of themselves.

Do It Tensely (…Or Else)

Let’s invert the directive above. Instead of “do it calmly”, let’s examine the distinctively unhealthy advice, “do it tensely.” You now have a recipe for anxiety– but, to its credit, also one for office survival.

One of the most important social skills in the corporate world is mood mirroring. If the executives are tense, middle managers will be tenser, and workers who want to be upwardly mobile will be most tense of all. Carrying around this anxiety does no good. Nothing gets done faster, and the quality of work is worse than what it otherwise would have been. So what’s the purpose of holding this unpleasant mood? In most companies, management and especially upper management has no idea who’s good at his job and who’s just collecting a salary. Managers and executives are too busy managing up to pay much attention to what’s happening below them. By and large, it might be only 1 in 5 managers who has an intelligent sense of which of her reports are good and which are bad; as for upper management, two or three or eight levels above the workers: forget it. But, they all have opinions. Those opinions are largely formed based on appearances and perceptions: body language, verbal intonation, and various mood cues. As an executive sees it, the people who are running around anxious are team players; the relaxed people don’t give a shit.

High-ranking individuals within companies assess their subordinates based on what they perceive to be merit, but they’re usually picking up superficial cues. To social status, they respond strongly. The aversion to cognitive dissonance is sufficient that a person who reliably projects high social status will be viewed as competent and high-performing in any workplace. The problem is that this is hard to do. Everyone’s trying to project high status– it’s less taxing than projecting anxiety– but most people who play this game come across as low-status strivers. Also, groups define status in different ways and one who projects the wrong kind of status will draw resentment. To be coolly aristocratic might work in a London investment bank; it will inspire anger if you’re a foreman at a steel mill or a programmer in Silicon Valley. Since it takes a long time to discern what kind of social status a group values, and because these tokens are often hard to fudge, it’s more prudent to fall back on the second element: mood mirroring. When the boss seems angry, be angry. If the boss relaxes, relax– but not quite as much as he does, for that’s sometimes a trap. Forget what companies say about themselves being casual, fun, or (gag) family-like places to work. The old “shit rolls downhill” rules about not leaving before one’s boss, and not sitting while he stands, still apply in Corporate America.

If you seem relaxed at work, people assume you don’t care. If you’re the odd man out in the theatre of shared suffering, that’s points off. Substance doesn’t matter. Merit doesn’t matter. Merit? What, you think you’re still in college?

Often, this anxiety comes straight from the top. Corporate executives are like toddlers with guns. They can’t do anything for themselves, but they can cause incredible damage, quickly. As bad as it is to suffer their whims, there might be slight comfort (not much) in the observation that their position is also stressful. They hold notional responsibility, but can’t do any of the real work that powers the company. They’re constantly in a betting position with regard to the performance of the people below them– a performance they cannot measure, delivered by people whom they must trust but cannot. The personal stakes are low– if an executive fails and is fired, his buddies on the board will line up another job– but the anxiety is real. And, of course, executives take what anxiety they have and deliberately amplify it. As they see it, a display of a twitchy trigger finger will motivate the people at the bottom.

Thus, corporate capitalism will always be a game where one of the most important skills is the ability to display anxiety in deference to the nervous brats up top. Of course, one shouldn’t go overboard. A full-on panic attack is embarrassing. It suggests that one can’t handle the stress. One who wishes to maximize his corporate survival ought to display about 1.2 times the anxiety level of his immediate superior: not so much to become the long pole in the tent or seem unreliable, but enough to send the sycophantic message, “When you’re upset, I’m upset.”

To wind down (or avoid) a panic attack: find something (anything) to do, and do it calmly. Try to do it well.

To survive at the office: find something (anything) to do, and do it tensely. It doesn’t matter if you do it well.

One might see why office life wears people out. Most people can’t mirror anxiety without themselves becoming anxious. Making an angry face tends to make a person angry; the same applies to smiling, grimacing, and other expressions. Just holding one’s facial pose tends to create the emotion. Even professional actors do best “in character” and need time to recover from tense scenes, and that’s when they know the events and emotions aren’t real. (In the office, the fear often is real, because most people live on the wrong end of arbitrary but brutal power relationships.) Being around people who at least have to pretend to be anxious will make most people, in accord, actually anxious.

Of course, though I hold a low opinion of corporate executives, I don’t intend to assert that they want people to be running around with panic attacks. They probably don’t think about that issue at all. (It is rare for a corporate executive to think about anything other than his own enrichment.) That a small percentage of the population would react to standard, mandatory office anxiety in a painful and counterproductive way is, from their perspective, tolerable suffering since they are not the ones who suffer. It is not within their intent, though. They are ignorant assholes, but almost never deliberate oppressors.

This particular divergence between what is mentally healthy and what is necessary to survive the subordination to rich people that we call “work” is not a unique one, and I’d argue that it’s the tip of a seemingly unrelated but fascinating iceberg: the notion of minimalism.

I can’t do the entire topic justice, but what I’ve realized as I’ve gotten older is that people are generally happiest with few things of high quality, rather than a plethora of low-quality objects (“clutter” or “junk”). A sturdy small house might be better than a troublesome large one, since most people spend the bulk of their home time in a small number of places, while a house that’s falling apart is a constant source of cost and headaches. High-quality objects and experiences, even if few in number, make people happy. Unreliable junk makes people miserable, even if they have a lot of it.

Most people know that, I think. Further, when there are a small number of elements to contend with, one can transform them. Cold, if one is properly dressed and therefore in no danger from it, becomes invigorating. The difficulty and pain of physical exercise can be offset by the design and healthful simplicity of the task: run 10 miles, or deadlift 225 pounds, or swim 50 laps. People do these things for free, with no economic purpose, because they enjoy physical exertion. Yet I don’t know anyone who would move a typical apartment for free. That work is just unpleasant: packing boxes, awkward objects, possessions breaking. Likewise, virtually no one can “transform” a torrent of junk and clutter into something enjoyable.

Office work is the opposite of minimalism. In bulk, it forces workers to contend with low-quality communication, low-quality ideas, low-quality projects, low-quality power relationships; and, in most cases, a low quality of delivered work. It’s busy, but it’s inefficient and often produces things of low or even negative social value. This incessant junk generation isn’t merely tolerated, but celebrated. If people weren’t spewing garbage into the stream of ideas, practices, records and discourse; well then, how would anyone know that they were working?

George Carlin decried our society as one where people buy shit they don’t need with money they don’t have. That’s bad enough, but one can opt out. Workplace clutter, on the other hand? Ha. Good luck with that.

Just as office social demands push people to treat minor shortfalls or risks as if they were personal, existential crises, they deprive people of the right to recognize clutter as such. They force people to contend with, and generate, garbage so their managers can put together status reports on which they’ll be judged not on what was accomplished, but on the number of bullet points and words the report contains.

Largely, this exists because the best way to survive in a corporate environment is to flood the channel. Executives are always looking to cut costs; this usually means cutting people. Savvy corporates learn that it’s best to have complete opacity about how much one does and how much effort it takes to achieve it. Obviously, it’s no good to be seen as lazy; but it’s also damaging to be seen as hard-working, since it can present a person as over-eager, and it can set high expectations. You don’t want executives to know if you work a 2-hour day; you also don’t want them to know if you work a 14-hour day; or if you work a 14-hour Tuesday and a 2-hour Wednesday; or if you work exactly 8 hours every day. There is no upside in being surveilled by people with a hard-on for cutting (and, in many cases, externalizing) costs. Corporates with any sense of self-preservation will ensure that their superiors cannot pinpoint them in time and space.

 

In the cat-and-mouse game between executives looking to cut costs and workers trying to survive, one of the most common defenses is a channel-flooding attack. Refusing to communicate at all doesn’t work; it’s far more effective, in most cases, to flood executives with so much extraneous information that they can’t help but fall back on the default decision (to delay) with regard to anything that might cause harm.

Consequently, the main activity performed in corporate offices is not meaningful “work” but channel flooding. I don’t expect this to change any time soon.

But imagine how much better a world we’d have if this mandatory waste of time and emotional energy were eradicated.

The Seasonality of Workplace Conflict

January is supposed to mark the beginning of a new year, and yet as I get older, I’m shocked by how little changes from year to year. We don’t have flying cars, we still die of the same diseases at around the same age, and most of us still have to go to offices and perform bureaucratic chores for rich people in the name of “Work”. We had a rapid era of progress in society between 1945 and 2000 and then it just… ended. We don’t have more recessions than we used to (in fact, we have fewer) but we also fail to recover from them. Our economy continues to grow, though at a mediocre pace; our society is falling to pieces.

Every year, we each get closer to death, we burn a few billion barrels of oil, and our progress is, on the whole, disappointing. Our corporate masters get richer and their parties more lavish; the upper-middle class sees stagnation, while the middle and lower classes face outright decline. It’s an ugly picture. Yet people have to spend 2,000 hours per year pretending that something else is the case. They go to work and put forth effort as if they’re on a fast track to promotion and wealth; most of them (and all of their superiors) know they’re not. Social expectations force them to work as hard as they would if they were to be made CEOs tomorrow, even though everyone knows that won’t happen.

I’ve seen enough of the corporate world to know that there are three seasons in which conflict is more likely than in the rest of the year. One needs to watch out every day in the corporate world; not a day goes by where someone, somewhere, isn’t stabbed in the back. Yet, there are three periods that are dangerous enough to merit special concern: the post-holiday depths of winter, the first warm days in the spring, and those waning dog days of summer that linger on into an autumn of disappointment. The conflicts in each hunting season are different– so are those who tend to hunt and be hunted. It’s worth exploring each.

All of these assessments are based on an American work culture and a New York climate. It’s probable that they apply differently to other climates and cultures.

A Sky Blue As Death (Jan. 2 to Jan. 31)

The holidays end with a crash; what was once a proud and perfectly good young tree is left tossed into billowing darkness, landing with a thud on the ice-caked street, destined either for a landfill, or to be pulp for tomorrow’s TPS reports. People have tired of festivities and fake merriment, but it’s still going to be winter for two or three months.

I’ll discuss this later: poor people pin their futile hopes on the summer, while rich people pin theirs on the early winter: ski season, free-flowing spicy booze, the kiss-at-midnight of the romance novel. (The middle classes venerate autumn, reminiscent of the elite New England schools they want their kids to attend; hence, the “pumpkin spice” craze.) Disappointment abounds everywhere; it just comes at different times for different people.

Executives return to work on January 2nd, having failed to reconnect with their work-orphaned children and having been asked too many unwanted questions about their work (since they’d freely admit, after a few drinks, that they don’t know what they do, but that it’s clearly socially harmful). They’re annoyed by the failure of the holiday season, but also amped. It’s a New Year; a new chance.

Corporate executives are predatory creatures. Their primary goal is to take risks where the benefits accrue to them and the (probable) losses are externalized to someone else. Usually, their companies suffer, but this doesn’t bother them. If they’re any good at being executives, they’ll be promoted away from their messes before anyone notices what happened. January is the perfect time to make a mess and try to turn it into personal profit. If it doesn’t work, one has the whole rest of the year to clean it up. So, what do executives do? The only thing they know how to do: reorganize. This often involves firings, demotions, unwanted changes in responsibilities, and various forms of cost-cutting.

People tend to reflect most on their mortality at two times: their birthday, and the New Year. A year has passed, they think, and what have I done with it? Executives aren’t immune to this. They see VPs who’ve become SVPs and EVPs and C-Words in record time, or founders who’ve raised $100 million while they’re scrounging along on a Series A, and they compare their lap times to others, and they get pissed off. They take it out on their workers. Clearly, they reason, their inferior career results (relative to what they consider their peer group) are the fault of the people below them.

If there’s something good to be said for the January conflicts, it’s that they tend to stay within the first month of the year. The anger tends to flow downward from the top, which means that the people most pissed off can act quickly, and so it ends fast as well. Workers get fired and move on, and by the beginning of February, it’s over. The same can’t be said of the other two conflict seasons, which come at unpredictable times and tend to linger.

Slushy Times — Low Conflict (Feb. 1 to March 15)

After the January reorganizations, companies find a new, slightly lesser rhythm, and it becomes “the new normal”. The sky is gray, the sun is faint, and the world whirrs along, the season less oppressive each morning. Situation normal. Ah, back to work. Add a shot of espresso to my chai, please.

Then spring comes along and everything gets fucked up again.

Breakup Season (March 15 to May 15)

This is the only conflict season, of the three, that seems to be driven by weather rather than the cultural aspects we ascribe to seasons. Office people spend so much time indoors that climatic seasons aren’t that relevant. Holiday stress and post-holiday misery will be about the same in San Diego or Miami as in Boston or New York. Summer, as well, is cultural more than climatic; it’s the time of freedom for schoolchildren, the period of barbecues and long evenings, and in North America, the time of our high national holidays (May 5 in Mexico; July 1 in Canada; July 4 in the U.S.).

“Spring fever”, on the other hand, seems to come from the weather itself, and spring weather in the U.S. is notoriously unpredictable. March 15 might be 80 degrees; it might be the day of a blizzard.

In college, spring was “breakup season”. People tended to leave unsatisfying relationships in search of better partners. “Men shed their beards, women shed their men,” the saying was. It’s also a rutting period. “Alpha male” conflicts are common, and since men and women are far more similar than they are different, I imagine that alpha female conflicts occur with similar frequency. This “spring fever” period is short– perhaps three weeks– but it’s unpredictable when it will arrive. It may not exist in tropical climates; I’m not sure.

Winter conflicts tend to come from above: management is pissed off and acts; and, although the company is worse for it– because, please note, executives do things for the good of themselves and never that of the company– it’s over quickly. Summer conflicts tend to come from below; dashed proletarian hopes make August anything but. Springtime hostilities come from everywhere. No one is safe. As people come out of hibernation, they assess their social status and, if unhappy– which, in the artificial scarcity of office life, they always will be– they will fight over turf.

Not all of these conflicts are directed at improving salary or career potential. If two men are competing over a “work wife”, one of them will get the other fired. There’s no economic reasoning to this sort of thing, and that makes it dangerously unpredictable.

This is perhaps the meanest of the three conflict seasons, since it seems to lack purpose. Most of the fights, though their results are brutal and jobs are lost in them, are over nothing. Though 90 percent of reorganizations and layoffs are poorly thought out, ill-intended, and harmful, people understand that corporations actually need to do these things in order to survive. Winter conflicts tend to be impersonal: people get laid off, but they get severance. In the typical January conflict, no one’s trying to hurt anyone. Spring conflicts, in the workplace, have a personal flavor to them: people are trying to take their rivals down. They’ll fight over the silliest things; or they’ll fight over nothing. The least politically astute tend to fail hardest in this time, because they’re drawn into conflicts that seem to be about one thing but are, in fact, about something else. Efforts to resolve the notional problem often worsen the real one, as the makers of the problem profit when their rivals misestimate their true motives.

Intern Season — Low Conflict (May 16 to Aug. 15)

Spring’s conflicts die out, the fever breaks, and people find themselves exhausted enough to be satisfied, for a while, with their new positions in the pecking order. Those who’ve lost rank (if not fired) will start to look for other jobs, and those who’ve gained position will take a breather. As a result, there’s a low-conflict period in the early summer. People start to go on vacations.

This low-conflict period may be endangered. Why? As the American work culture gets meaner and stupider, taking vacation gets more dangerous. Startups offer “unlimited vacation” knowing people won’t take more than one week at a time. People eschew long vacations not because they’re scared of their bosses, but because they’re afraid of co-workers snatching turf while they’re gone: better responsibilities taken away, and worse ones dumped onto them. If American work culture continues as it has, and vacations go extinct as they have among Japanese salarymen, we can expect this to become a meaner and more bitter time, with people pissed off precisely because late spring used to be a relaxed season.

In the more prestigious companies, there’s another factor: interns. The company wants to put on a nice face. Elite college students, and recent graduates, have the backing of what’s effectively a union for smart people. If one Harvard or MIT kid has a bad experience, recruiting at the whole college will be more difficult for years. Corporations like to play nice during the summer, because they don’t want to look bad in front of the young people they need to recruit (bait, then switch) in order to get their grunt work done.

Failed Harvest (Aug. 16 to Nov. 15)

Ah, summer. Catching fireflies, building tree houses, telling ghost stories. Lemonade, water balloon fights, and freedom that lasts forever…

…then you grow up, and become an office drone, and summer is just shit. Hazy, humming, tepid shit.

As I said, the rich executive types tend to pin their false hope on the holiday season, while the poors and subordinates venerate the summer. It disappoints. It always does. The weekend gets rained out, or is too hot; that 78-degree sunny day happens on Tuesday and is therefore worthless. The stack of novels one intended to “get around to” remains unread, because who has time to read after capitalism eats its fill of one’s time and energy?

January conflicts tend to be top-down; executives are looking to make cuts and changes that will enable them, if random fluctuations turn in their favor, to claim credit. Summer conflicts tend to come from the bottom. The poors collectively find themselves thinking, Well, summer is over, and what a lousy one it was. This begins around August. Company-wide mailing lists and Slack chats blow up. Passive aggression mounts. Terminal middle managers, stewing about their lot, decide which peons they’re willing to sacrifice in the next “harvest”. One doesn’t feel the snappy bitterness of the executive-initiated January conflicts, or the aggressive head-butting of spring fever; this is more of a dull, dog-days warping, like train rails bent and made useless by heat. The intensity is low, but it goes on for a long time.

As the daylight wanes in August, people realize how little time they spent outdoors because they were cooped up in an office, because that’s what adults are supposed to do. September’s just sad; it used to be a time of new beginnings, and now it’s more-of-the-same with less daylight. Then comes October, when one sees the pretty foliage on the way to work and promptly forgets that it’s out there. Sneezy, drippy November whirls in and, in an open-plan office, everyone gets the sniffles.

Dismal Gray Merriment — Low Conflict (Nov. 16 to Jan. 1)

To be an office peon in November is truly depressing. It’s dark when one leaves work. Yet it’s a shared depression. It hits everyone the same. Summer is a distant memory (until it is spring and that failed summer was just yesterday) and the weather’s not that cold, but it’s not warm either. Then come holidays and travel (to the often colder Jobless Interior, where most people grew up, and which would be a decent place to live if there were, you know, still jobs there… like in the ’70s) and alcohol-fueled parties.

This is a period of low conflict. The holidays are distracting, and performance expectations are low. It’s unpleasant to have to go to work, because the sunny, warm afternoons are becoming rarer, but it’s not too bad to be there, because there aren’t a lot of executive hard-ons poking around either.

In American work culture, people try to take turf in a colleague’s absence. If someone goes to Australia for three weeks, he might come back to find that someone else has become his boss, or that some smiling dickhead took over his plum project. Yet, most absences in the holiday period are due to the mild but annoying illnesses that circulate in the winter. Because performance expectations are low toward the end of the year, it’s not a time when there’s high demand to go on vacation.

That most absences are due to transient illness makes it difficult for people who would vie for the absentee’s turf, because they don’t know how long a person will be out. It’s hard to campaign against someone when you don’t know how much time you have. Moreover, a young person who never takes vacation can attack someone who’s in another country, without fearing retaliation; but colds and flus and stomach bugs get everyone. It’s socially acceptable, in American work culture, to fuck someone over while he’s on vacation, but someone who steals from the sick will have it done right back to him when he gets the dog flu a month later.

Conclusion?

This topic deserves no conclusion. After all, the droning cycle of office competition– with petty motives; but, often, catastrophic results– does not end. If it has no conclusion, with one misspent year rolling soundlessly into the next, should an essay on the topic? What is there to say?

I describe here what is, not what should be. Office culture should not be. It’s a blight. It’s useless. It’s not even very productive; robots will obviate it soon, and any fight against them will be a losing one. Yet, so long as people must survive and endure it, there will be value available to the world in analyzing its ebbs and flows. That said, this cycle of winter bitterness, spring mania, summer disappointment, and autumn wretchedness serves no purpose. It is not natural; in failing to extinguish subordination to the rich as a necessity for survival, we as a species have created it. It sucks, and I do not expect it to change. Storms and droughts end, but their causes do not.

Happy New Year?

2017 Revealed Executive Predators in Entertainment; High Water Mark for Corporate Capitalism?

I made the joke earlier that, at the end of 2016, someone asked God to stop killing celebrities. She listened, and spent 2017 revealing the creeps.

This could be– I use the words could be, because predicting the future is impossible and the risk of future embarrassment is high when one tries to do so– the end of an era; it’s possible that 2017 is the high water mark for Corporate America.

I will probably never be a billionaire. It’s not my inclination. If I could get a reliable middle-class life by writing, and never have to subordinate in the false name of “work”, I’d probably do that instead. I care more about cultural influence and personal integrity than a bunch of expensive toys I’d get used to in fifteen minutes. Yet I know that others are not like me. There are people in the world who wish to dominate, to humiliate, and to grow in power without bound. It’s not material wealth that drives them; like anyone else would, they bore of the toys quickly. It’s the subordination of the world that energizes them: a private view of a well-bred, Ivy-educated girl willing to debase herself.

We can’t end sexual harassment in Corporate America without ending Corporate America. This is no obstacle; we should end them both.

Let’s take note of what we’re up against. Harvey Weinstein infamously said, “that was the culture then.” With more secrecy in it, it’s the culture now. Understand this: to the sorts of people who become corporate executives, there is no such thing as an “abuse of power”. What we call abuse, they call power. Their imagined right to use their social and economic position for sexual gratification is, for many of them, the executive perk they care the most about. They’ll give up the private flights and bully pulpits before they give up hope of debasing a woman (or, in some cases, a man) half their age.

Everyone gets funneled into the corporate game, and most decent people hate it. Some decent people beat the odds against them and succeed, and even after that, they hate it. Yet there are a few people who are energized by endless social competition, zero-sum status games, and the plethora of pointless nonsense that characterize high bureaucratic capitalism. They will win. The whole process takes energy from normal people until they cease to compete, but there are some who gain lift from it. They don’t mind “working” 100-hour weeks because the courtiers’ games are the only thing they care to do. They’ll always have an advantage. It cannot be fixed. Narcissists and psychopaths will, in the current system, inexorably win and drive out everyone else.

This is why I mock so-called “corporate feminism”, the Lean In nonsense. Of course, corporate feminism is not completely self-contradictory. In theory, a matriarchy could exist that is as oppressive as the patriarchy that exists today (which is, though this does not excuse bad behavior, relatively mild compared to the patriarchies of antiquity). Such a society could be just as corporate as this one. A vision of corporate dominion coupled with female supremacy would be internally consistent. But, to most people, feminism means female equality, not supremacy. Gender equality is not compatible with the corporate system as it exists today, because the projection of power matters more than merit, and more than anything else that might otherwise be a source of power; and society will always deem one gender more capable and powerful than the other (contrary to reality, which seems to have endowed the genders with roughly equal amounts of competence). The only major difference between men and women is that we look different (obvious and underwhelming, right?), but the post-truth corporate system runs on appearances.

Furthermore, the corporate system cannot survive without executive payoffs that decent people would consider distasteful and wrong. Seven- and eight-figure salaries are not enough. People get used to those, mighty quick. After three months, a person’s salary becomes, to him, an objective entitlement. Material wealth is not enough for these people; they must dominate.

One might say that if a model of society requires people to be mistreated, and if that mistreatment falls disproportionately on the half of the population that tends to hold less power for historical reasons, then we ought to scrap that model of society. I agree. Let’s end executive capitalism. In Scandinavia, managers work for companies; in the U.S., companies work for managers. Let’s become more civilized. Do we really need an overpaid, self-indulgent feudal overclass? I doubt it. Society would run better without those idiots in charge.

It is possible– I do not say certain or even likely, but possible— that 2017 represents a high water mark for the self-indulgent corporate capitalism that Donald Trump has exemplified for the past forty years. I’ve heard people crack jokes about “Hollywood executives”. Don’t make that mistake; it’s not just Hollywood. We hear the Hollywood stories, because those tend to have prettier women and uglier men involved, but this runs deep and for every creep who gets caught, there are twenty who’ve gotten away with it. Our system exists in its current form because the most powerful people go to work for reasons other than money (they’ve got enough). Take away their perceived “right” to abuse power, and they’ll lose motivation and drop out of the system, and that’ll be good for everyone. Our society is broken in more ways than I can count right now; we’ve got a lot of earnest work to do, in order to fix it.

Crazy Is Better Than Stupid

Something I’ve observed in the corporate world is that most people lose intelligence and creativity after 5 to 10 years. Sometimes they burn out even faster. The flame goes out; a robot stands where a person once was.

A few of us don’t go that way; we’re the rebellious ones who refuse to turn stupid. We rarely rise up the ranks. On the contrary, our self-protection often gets us disliked, excluded, and even fired.

This midlife cognitive decline is far from inevitable; it’s not biological. Our society romanticizes youth and exception, as if it were the norm to know everything at 25, but people who surround themselves with intelligent company, who continue to read and learn, and who find opportunities for meaningful contribution to the world around them will continue to grow quite late into life. It’s true that Keats and Galois peaked young; they also died young. The norm is to peak after 40, even in mathematics, which is notable (more in romance than truth) for its early blossoming. Writers, on average, seem to peak after 50. Some of the great ones started after 50.

In other words, people who specialize at getting smart continue to get smarter as they grow older. People who specialize in climbing organizational ladders were probably the smartest when they were in school.

My suspicion is that the midlife cognitive decline one observes in Corporate America is, at least in part, a low-grade and otherwise subclinical variety of depression. Is it the severe depression that requires electroconvulsive therapy and leads to catatonia and literal bed-shitting? No, of course not. It’s a rational adaptation to defeat. Though I imagine that major depression was always pathological, its milder cousin might be an adaptive, life-preserving response to transient low social status. Archaeological evidence and present-day observation suggest that violence related to social status killed about 1-in-5 adult men in the primitive world. Mild depression would keep one out of conflict. That dimming and shutting down saved lives. Is it useful now? I doubt it. It seems to have left us with a sleepy, bored, corporate-dominated society that can barely keep its own lights on.

In the business world, we see midlife cognitive decline not only among the vast majority who lose, but also in the small number of winners. Power gets to them and, though it has different effects, it seems just as capable of rendering formerly smart people stupid.

The rebels stay smart. The people who recognize the corporate game as pointless bullshit, who know that even if they get rich it’ll come with a sacrifice of identity that’s hardly worth it because the part of the person that matters will have died, they’re the ones who hit 30, 45, 60, and can still hold an interesting conversation. They’re not immune to depression, but they seem built to fight it. They don’t get stupid. Morose? Yeah. Anxious? Sure. Miserable? Sometimes.

If these rebels speak out, they’ll be labelled crazy. I shan’t unpack that word, nor shall I discuss mental illness and how poorly most people understand what it is and is not. There isn’t time for that, today. The accusation of craziness hurts us, not because we are insecure about our mental health, but because we are moralistic and self-critical, and our negative experiences pile on– that is the essence of any fight, and rebellion is no exception– and this causes a divergence between what we want to be and what we are.

We aren’t, in general, crazy. I’ll talk about that some other time. If we were, though, it’d still be better than turning stupid.

State of Society, 2017 (Part 1)

Writ large, what’s happening in the United States? How did we get from Roosevelt and Eisenhower to Trump? What’s going wrong, what’s going right; and how can we fix it?

The Disappointment

There’s been progress, for sure. We elected a black President, we’ve legalized gay marriage, and the computers we use as phones are superior to the mainframes of not long ago.

In terms of economic life, though, the 21st century has been one of disappointment. The poor are getting poorer. The middle class is getting less upwardly mobile. The upper-middle class is growing so out of touch that the lower-middle class hate-voted for a rich psychopath to spite us. The rich are not just getting richer, but greedier and stupider.

In the 1960s, the world economy grew at 5-6 percent per year, and U.S. research and development was a major reason for this. We led. After winning a war in the 1940s, we rebuilt the vanquished countries, instead of exploiting their position of weakness for some short-sighted, zero-sum gain. That approach worked so well that the countries disallowed (by the Soviet Union) from participating are still suffering for it. In truth, the main reason Eastern Europe is poor is not that “communism didn’t work” but, more specifically, that those countries weren’t allowed to participate in the Marshall Plan.

Yet, our country ran off the rails. It’s hard to say if it’s even capitalistic anymore. The Capitalist Party, the boardroom elite, seems no more devoted to capitalism and free markets than the Communist Party of Stalin’s Russia. They, in the Capitalist Party, collude with each other, they vote up each other’s “executive incentive plans” (and golden parachutes), and they generally have the system rigged so a social elite wins, irrespective of market forces or the decidedly middle-class notion of “merit”.

Subtly, in the 1970s, old-style Gilded Age elitism came back, with more force in the 1980s and ’90s, and now it dominates the business environment outright. In my view, this had little to do with traditional left-right politics; Reagan didn’t cause it, Clinton didn’t slow it, the Bushes were oblivious, and Obama couldn’t stop it.

Trump’s America wasn’t born overnight. Like Trump, it’s been around for decades, and it never really hid.

The Satanic Trinity

For now, let’s ignore the big cultural issues. Plenty of people do. The story of the failing American middle-class comes down to the Satanic Trinity: housing, healthcare, and education costs. No one really gets to avoid what’s happening on these fronts. They affect everyone. So, we’ll talk about all three.

Healthcare costs are spiraling for a simple reason: people want not to be in pain, not to die before their time, and not to inflict sadness and misery on people who depend on them. We have this fiction in mind of efficient markets where fair prices are discovered based on abstract principles: supply (how hard is it to make widgets?) and demand (what value can people extract from widgets?). Often, this works. In many cases, markets are the best solution to the pricing problem.

Here’s an issue, though: in bad times, value becomes binary. What’s the value of water? If you’re fifteen minutes away from dying of thirst, it’s infinite. If you’re healthy and live in normal conditions, it’s practically free. I’d argue that this applies even to money itself. When people feel they have enough, they spend it without much thought. When money’s tight, they watch every cent.

For most commodities, including money itself, the marginal value is either nearly zero or nearly infinite; and, it’s never good to be in the latter situation.

What fuels the hospital-industrial complex? Fear, obviously. People who can afford it will pay whatever it costs not to die in pain of a tooth infection that leads to sepsis. It’s hard to put a number on not dying in the streets of a preventable illness. Moreover, it functions as a middle-class inheritance tax. The rich, who bequeath $10-million estates to their heirs, are largely unaffected by medical bills. Middle-class people who die with $250,000 to $2 million in net worth? Often, their finances are wiped out by end-of-life medical bills. Their kids are too deep in grief to care about money right then; so, that’s the best time for the hospital-industrial complex to strike.

The hospital-industrial complex is, financially, not very different from another, much older and more nakedly barbaric, wealth transfer from semi-rich old people to unscrupulous young people: witch hunting. I don’t believe for a second that educated people in the 17th century actually believed that these old women were hanging out with Satan in the forests of Massachusetts or Bavaria. Witch hunting existed not because everyone believed in that nonsense, but because it was so profitable for the hunter (and the church) to seize the wealth of a person who’d amassed significant money, but who lacked the vigor (and, in a time when a woman’s testimony was given little value, male gender) to defend it.

The medical-industrial complex exists for one reason: old people have money they can’t use and that they have neither the vigor nor reason to defend.

Let’s go to tuition next.

First, let’s admit that college education always had an economic purpose. High-minded ideals? Bullshit. We might wish it were otherwise, but less than 1 percent of people can afford to take four years off from the subtle, gray-beige oppression of economic life.

What differentiated colleges from trade schools is the obliquity of the approach to the individual economic problem. Trade schools provide knowledge that’s profitable now. So long as the trade stays strong, the tradesman is well-off. That’s always been true and it’s still true. There are probably more six-figure plumbers than six-figure art history professors. The issue is that no one can predict which trades will be strong, 20 years out. Trucking is a solid middle-class job that might be automated out of existence in a decade. So-called “blue-collar” jobs are commodity work and no one pretends otherwise. Wages are at the mercy of the market.

The selling point of a college degree is insurance against the vicissitudes of commodity values. Officer of a horse-carriage company, in 1902? Well, you might lose that job soon; but, thanks to your college degree, you have all the upper-middle-class skills necessary to run a car company.

We’re now at a point where there are so many college degrees, and the quality of graduates is so variable, that a degree brings very little. Largely, its job-winning purpose is to wash off the negative social signal of not having one.

To be fair in this assessment, the quality of education that’s available has probably never been higher. Just as a $5 bottle of wine today is superior to what Napoleon drank, someone who attends a no-name school and actually does the work will get a better education, in most ways, than someone who went to Harvard in 1980. The Internet has changed so much; it makes the smart, smarter, and the dumb, dumber. (That’s a topic for another time.) I’d even argue that the objective differences between colleges have shrunk– the difference in quality between the elite colleges and the next tier, which was never all that much, is one-third of what it used to be– but the social ramifications have spread out.

I work in the technology industry. In Silicon Valley, there are three schools that matter: Stanford, MIT, and Harvard. It has nothing to do with lay prestige (“Ivy League”) nor with the quality of the schools themselves. In venture-funded software, even Yale (or Princeton, or Dartmouth) is indistinguishable from the University of Michigan, which is indistinguishable from Northern Illinois University in De Kalb, which is indistinguishable from a community college, which is indistinguishable from nine weeks at a “code boot camp”. It’s not about the quality of the educational programs. It’s about venture capital and it’s about connections. If you want to raise a $2,000,000 seed round at 22, go to Stanford and not Yale; in the tech industry, Yale is Siberia.

These prejudices vary by industry. Consider traditional publishing. (I mean, don’t consider it as a career, unless your uncle is a power agent; but let’s examine it.) Yale and Princeton have incredible pull, whereas Stanford is openly mocked; MIT might get you a 6-figure advance, but only if you’re writing science fiction. If you want the New York Times to review your memoir, I Took Classes, then go to Yale and not Stanford; in trade publishing, Stanford is Siberia.

I’m sure that Stanford’s English program is excellent… and, besides, one doesn’t become a great writer by taking classes, but by reading and writing, which you can do anywhere. I’m sure that Yale’s undergraduate computer science is very good. You can get a rock-solid education at a state flagship college, a top-50 or so liberal arts college, et cetera. If you have the work ethic, you don’t necessarily need to go to college; there’s plenty of excellent material online. It’s probably no surprise, then, that the tuition bubble is about exactly that. It’s about connections. It’s about extracting personal benefit from the low-level corruption that exists in almost any society.

So why are tuitions rising? Why do people pay $65,000 per year to go to college, in addition to hundreds of thousands spent on elite boarding schools, private admissions consultants, and donations? It’s not like connections “weren’t a thing” back in the 1980s. Small-scale corruption has always existed and always will. So what changed? Why is there a market for private college admissions counseling, to the tune of $50,000 per head?

It’s the panic purchase of credibility and connections, as the U.S. middle class dies.

We’ve covered two items of the Satanic Trinity: healthcare and tuition.

I suppose that there is some good news in both. With healthcare, the costs are usually exacted when we have no personal use for the money. There are exceptions. People get cancer at 47 (whoops!) but it’s rare. After all, just as most people in Vichy France didn’t die in camps, the majority of people won’t suffer egregiously from the U.S. medical-industrial complex and its extortive billing practices. Relax, you’ll be fine… well, 95 percent of you.

People live in terror that if they don’t toe the line and stay employed and insured, they’ll be left to die on a hospital doorstep. That happens, but, statistically, it’s far more likely that a person will suffer the opposite: being kept alive longer than they would want, had they their full faculties and the chance to decide in advance.

What about tuition? I suppose we can take solace in the fact that the spending is self-limiting to a degree. Colleges will still admit low- and middle-income students– it helps the rich feel better about themselves, and they can self-segregate by social class once behind the gates– and offer price reductions (to what people can actually just-barely afford) under the guise of “financial aid”. The prices will be calibrated to drain, but not break, the parents.

Behind healthcare billing, the soothing whisper is, “You don’t need this; you’re going to die soon.” Behind tuition, it’s, “Relax, this is what a good parent does; pay out, today, and your kids will be set for life.”

What about housing? Housing is… much more complex.

If you took an unweighted geographic average, you’d find that houses in the U.S. are reasonably priced. Cheap, even. There isn’t anything wrong, either, with most of the places where one can find affordable houses. I’d rather live in the mountains of North Carolina than some overpriced, depressing suburb in the Bay Area that’s three hours (accounting for traffic) from anything worth looking at (i.e. the parts of California that are actually beautiful). Houses are only obnoxiously expensive… where there are jobs.

Oops. This country wasn’t supposed to have a Jobless Interior.

This is where it gets super touchy. It brings in gender and race and politics– why do people in so-called “red” versus “blue” states seem to live in different realities?– and there’s no easy solution to it. I’m about to get into trouble here.

Let’s take a break. We’ll talk about housing in the next installment.