Smart People Are Not Ruining America

Someone sent me an article by David Brooks, “How We Are Ruining America“.

Brooks is a conservative columnist, and a capable writer with interesting ideas, but my issue with him is not that he’s conservative so much as that he conflates various elements of class in a muddled, short-sighted way. The “we” that is ruining America is the supposed intellectual elite, the sneering over-literate people who sometimes get too snobby.

Let me pull out this passage:

Recently I took a friend with only a high school degree to lunch. Insensitively, I led her into a gourmet sandwich shop. Suddenly I saw her face freeze up as she was confronted with sandwiches named “Padrino” and “Pomodoro” and ingredients like soppressata, capicollo and a striata baguette. I quickly asked her if she wanted to go somewhere else and she anxiously nodded yes and we ate Mexican.

First of all, I’m probably part of that educated upper-middle-class elite that people in the Jobless Interior blame for wrecking the country. (We didn’t do it. More on that.) Even still, I had to Google soppressata and capicollo. I do a lot of Googling in restaurants and, quite honestly, prefer to avoid eating at the expensive places. High IQ or no, I’m still a central Pennsylvania boy at heart. Also, what’s wrong with eating Mexican food? I’ll take a taco truck over a $13 sandwich place, any day of the week.

We have a problem in this country. The economic elite is destroying it, and the intellectual elite is largely powerless to stop the wreckage, and while there are many sources of our powerlessness, one of the main ones is that we get the bulk of the hate. The plebeians lump us all together, because the economic elite has told them to do so. They make no distinction between the magazine columnist, who can barely afford her studio in Brooklyn, and the private-jet billionaire who just fired them by changing numbers in a spreadsheet.

Brooks has some good points, and the essay that I linked to is worth reading, not because he’s right on every call, but because he’s not wrong. For example, he writes:

Over the past few decades, upper-middle-class Americans have embraced behavior codes that put cultivating successful children at the center of life. As soon as they get money, they turn it into investments in their kids.

As a person in his 30s, this resonates with me. To be blunt, I don’t share the same feel-good attitudes toward “family” and childbearing that are common in our culture. The biological drives are very strong, but these impulses perpetuate inequality. Once you have a child, you give up all hope of an enlightened, disinterested view toward matters of progeny. You become an extreme partisan, no matter how smart you are, because that’s how biology works. There are now a small number of people in the rising generation for whom you’d literally kill if it protected their safety, health, and future place in society. Moreover, if you fail to give them what are objectively unfair advantages, you will think of yourself as a failed parent forever. Continuing the species mandates that people parent; unfortunately, parenting mandates that people perpetuate crushing inequalities that, from a distance, are inexcusably unjust.

Let’s be honest: college admissions are about the parents, not the kids. Well-adjusted 18-year-olds aren’t thinking about which schools are targets for investment bankers. In fact, an 18-year-old who even knows what investment banking is… well, that kid has severe emotional problems. Investment banking is supposed to be a safety career, not a dream, and certainly not something that a teenager aspires to enter. There are good reasons for young kids to want to go to the best school possible, but careerism is not one of them.

What Brooks observes is true. I don’t think he gets to the core of why it’s true. Adults know that our society is in decline, which is why they’ll shank each other to get unfair advantages for their own kids. The dynamic isn’t new, but the intensity of it is. Three decades ago, you got farther with a degree from a state university in what is now the Jobless Interior than you get, today, with an Ivy League degree. Parents obsess over educational pedigree because they know how important it is to have connections when society goes to shit. When the land is dark and thin, merit is devalued and nostalgia wins. Elite graduates aren’t highly valued because they’re smarter than anyone else, but because their cultural and educational experiences are reminiscent of “the time before”.

Brooks also says:

Well-educated people tend to live in places like Portland, New York and San Francisco that have housing and construction rules that keep the poor and less educated away from places with good schools and good job opportunities.

These rules have a devastating effect on economic growth nationwide. Research by economists Chang-Tai Hsieh and Enrico Moretti suggests that zoning restrictions in the nation’s 220 top metro areas lowered aggregate U.S. growth by more than 50 percent from 1964 to 2009. The restrictions also have a crucial role in widening inequality. An analysis by Jonathan Rothwell finds that if the most restrictive cities became like the least restrictive, the inequality between different neighborhoods would be cut in half.

All true. All valid. Except, the emphasis is completely wrong. He implies that well-educated people are the problem. No. This is like the conservative contention that anti-vaxxers are liberal. Scientifically illiterate anti-intellectuals (on the left and right) are the problem, not leftists. Some of the NIMBYs are well-educated, and some are not.

The zoning/housing issue has little to do with educational pedigree. It’s generational. Boomers got into the housing market when prices were fair; then, they passed a bunch of self-serving legislation to thwart supply growth (as noted) and let a bunch of nonresident scumbags buy coastal real estate in order to spike land prices and apartment rents. Generation X was affected, but Millennials just got screwed. Further, Boomers have perpetuated a work culture based on hierarchy and socio-physical dominance, making it difficult to have a career in a company unless one works on-site in close proximity to the (very wealthy) people at the top. This creates abnormal demand for real estate in major cities, because people’s careers depend on them living there, even though the Internet was supposed to make location irrelevant. Consequently, we have a bipolar nation where one stretch of the country has affordable houses, even in beautiful locations, but offers no jobs; and the other offers jobs but no path to homeownership other than winning a hedge-fund or startup lottery.

Truth time

This is not a balanced country, politically speaking. First, while we have two parties, we’ve become polarized to such a point that most places suffer under a local one-party system. Sure, if you were to integrate over the country’s 3.8 million square miles, you might get a balanced picture. However, it’s much rarer these days, as opposed to fifty years ago, to see liberals on the Republican ticket in Massachusetts, or Democratic state legislatures in places like Texas. That’s bad for everyone. It means that there’s less competition in politics, especially at the local level, where important services are delivered. Gerrymandering is much at fault, and so is economic inequality, and I can’t cover these topics in depth. The point is: one-party systems suck, and the people in so-called “Red” and “Blue” States both deserve better.

Second, we’ve tilted far to the right over the past thirty years. The right wing has been winning, and the left flank sits in territory that would be conservative by European standards. Therefore, if you want to explore what’s wrong with the country, you cannot give equal time to left and right. Sure, there are plenty of stupid, obnoxious, and inflexible leftists who deserve some ridicule (much like their stupid, obnoxious, inflexible counterparts on the right) but the bulk of the damage has been done by one side. Three-sigma leftists are completely irrelevant in the U.S., while outlier right-wingers may not be in charge, but get an audience with those who are, and have been able to shape the future over the past forty years. The 3-sigma right-winger of 1970 wanted to reduce the inheritance tax; in 2010, it was eliminated.

To equate the sometimes-smug, deadpan-self-deprecating, left-of-center intellectual elite with the bullying, pilfering, warmongering, and environmentally perilous economic elite is… beyond irresponsible. Who does more damage, the Buzzfeed columnist who can barely afford his Manhattan rent and takes a pot shot at obese Wal-Mart customers, or the billionaires who are selling off the country?

I don’t know what “the cultural elite” is or whether I’m part of it. (After all, I had to Google capicollo.) Let’s say that I am; I’m close enough to take blame and dislike for it, at least. I’m a skilled writer; good enough with words to provoke such envious rage from Paul Graham that he had me banned from Hacker News and Quora. That’s something, right?

When I look around in my circle, I don’t see an exclusive “intellectual elite”. I see people from all sorts of backgrounds: black, Latino, transgender, Midwestern, Southern, European, Asian, sons of restaurant owners and daughters of coal miners. We accept people who are different from us. If you’re smart, no one cares where you’re from; we don’t even really care where you went to college, because it’s correlated with almost nothing after age 30. Most of the best writers and artists don’t have elite degrees at all.

For a contrast, how often do you see Davos Men hang around with anyone but other Davos Men? Never. How much do corporate executives care about people who weren’t born into their milieu? They don’t.

The intellectual elite is far more diverse in every dimension than the economic one. People who are interesting and curious don’t like boring people, but interesting people come from everywhere: even from dismal white towns in Appalachia. We may sneer at conservative, redneck culture, but we’re a lot more open to people from that sort of background than the economic elite. Why? The economic elite wants to hold power at all costs. It does so by creating artificial scarcities… real scarcities that actually hurt people.

Let’s take a concrete example. A white kid with a 1500 SAT, from a merely middle-middle-class background, in St. Louis… will not get into Stanford, because of that obnoxious extracurricular bar. I’m not going to defend legacy admissions, or the systematic preference for non-academic activities and traits that are highly class-correlated (e.g., interesting travel, recommendations from notable people, “achievements” that involved parental pull). If I were in charge, I’d argue for 100% academic admissions, but I’m not. Both the economic and intellectual elite seem to benefit from the socioeconomic garbage that infests college admissions.

I suppose that when people get bounced at this particular door, they feel like it’s “the intellectual elite” that’s keeping them out. After all, the admissions officer isn’t part of the economic elite, so that puts her in the other one by default, right? Here’s the thing, though. The admissions officer doesn’t enjoy rejecting people. She has to make painful close calls based on limited information. And she’d probably agree that she turns down more great students than she can admit, and ask the question: why does getting into Stanford matter so much in the first place? It’s not like the country only has ten good colleges.

It’s a good question. Why does getting into an exclusive college matter so much? It’s not the intellectual elite who are making elite degrees so essential in the corporate world (and so expensive). It’s the economic elite.

For the Baby Boom generation, college pedigree wasn’t nearly as big of a deal. You didn’t need to go to Stanford or Harvard to have a decent shot at a good first job, because good jobs were more plentiful back then. It’s an uglier picture now, but whose fault is that? Harvard professors didn’t take away the good jobs; corporate executives did.

There’s more disparity than overlap between the economic elite and the intellectual elite. People with the good fortune to be part of both are exceedingly rare. You can write that region as a set of measure zero, and not be that far off. Meanwhile, the interests of the two elites have diverged. We’re headed for conflict, and our society needs for the right side to win.

The intellectual elite wants what’s best for the world’s people. We’re haughty and occasionally imperious. Sometimes, we get things wrong. We dish out a lot of figurative shit to people who misuse the word “literally”. However, we still have a stake in the rest of the world because… well, because we come from it. No one is born an intellectual. That’s kind of the point; how would it be virtuous if one could be born with it? The economic elite, however, is hereditary both by construction and intent. They fancy themselves a superior species, and have the resources to isolate their progeny from the consequences of everything they do. They’ve receded from all forms of accountability (except for the final kind that may come if they destroy the planet). They’ve weakened national governments while advancing corporate hegemony. They’ve thrown social justice, cultural progress, and environmental sustainability under the bus just to get 8.0% richer each year rather than 7.5%.

There’s a tendency in American culture, out of some skewed interest in fairness, to represent the intellectual elite (conflated with the left, though that may not be fair to intellectuals or to leftists) and the corporate/economic elite (which is more authoritarian than conservative) in some kind of parity. Sure, those downsizing corporate executives are elitist jerks… but so are people who buy arugula at Whole Foods!

Above is one of those cases where attempted parity leads to absurdity. Since the autumn of 2016, I’ve felt it necessary to scream out against these sorts of false equivalences. They’re incendiary, incorrect, and dangerous. Why is there so much more hatred for a mostly-harmless intellectual elite than for a destructive, global economic one? I think that the answer’s obvious: availability. People in the Jobless Interior come in contact with those of us in the perceived intellectual or cultural elite. Many of us are from those places that have since become jobless, and are disturbed by what has happened. Meanwhile, the (justifiably) angry people in the Red States never meet the Davos Men and Sand Hill Road tech barons who are actually destroying their lives.

The proletarians get screwed over, and usually don’t know why it’s happening. For whatever reason, we get blamed, as if not only ought we to have protected them, but as if we built the processes that take their jobs away and immiserate their cities. I’d be willing to take all the blame, if it were due. The problem is that it’s incorrect to blame us. If you want to hate me for the books I read or words I use or food I eat, go ahead. Let’s not get distracted. We have a shared enemy. The country isn’t being destroyed by people using the word “intersectionality”. No, it’s being wrecked by the weakening of unions, corporate downsizing, accumulated environmental damage, rising anti-intellectualism, and creeping plutocracy. We have a real enemy and it’s time to put our (very mild) differences aside and fight.

Evil, and its relationship to the tech industry

Earlier tonight, I read something that I wrote on the Internet a few years ago. I won’t link to it. I regret it. It was an impulsive, not-very-coherent “wall of text” post on a message board. It disturbed me to read it and realize that I had written it.

The VC-funded tech industry, these days, swarms with talentless narcissists. In that world, you can find yourself face-to-face with raw evil: the kind that lacks form or purpose. It’s something that you don’t encounter in most industries, at least not in the same way. That might be the most understated occupational hazard of that industry.

For a contrast, let’s talk about finance. There’s a lot of greed, but Wall Street isn’t evil. In finance, people don’t go out of their way to ruin each other’s lives and careers. That sort of vindictive behavior is common in Silicon Valley. For all the claptrap about creating new wealth, the attitude revealed by techie behavior is zero-sum at best and blindly malicious at its worst.

A managing director at a bank would never resort to physical violence over a blog post that was critical of his employer. I know venture capitalists who have. It’s not an uncommon thing in Silicon Valley.

If I had to guess the difference, it’s that financiers are honest about why they go to work. They do it for the money, and they’ll admit as much. Their industry is amoral and has a few bad actors, but most of them are decent, ethical people. The techies, on the other hand, think they’re such a gift to humanity that any wrong behavior can be justified in terms of some distant-future greater good (e.g. the Singularity) in which they’ve placed a bizarre pseudo-religious faith.

I spent a lot of time (and millions of words) trying to fix the tech industry. That’s why I blogged. I failed, of course. I didn’t make a dent. The corporate software industry is as scummy, socially harmful, and downright evil in 2017 as it was in 2011, when I started writing about it.

Also, you have to be careful about fighting evil. When you stare it down, face on, you risk going nuts for a while. Evil people operate safely in mental spaces that good people cannot tolerate without getting warped.

You can come back from it, of course, but it takes time. A lot of time, and it’s exhausting.

Crossing the Equator 7: What Is Bad Writing?

Bad writing. I bring the topic up not to mock it, because mockery is rarely worth the time, and because most of the sins of bad writing have been committed by good writers too, either when they were inexperienced or in quick first drafts. It’s useful to explore the topic, though. What is bad writing, why does it exist, and why do so many people produce it? Even most intelligent people write more bad prose than good. Where does this come from?

Not (Necessarily) Bad Writing

Some tastes are arbitrary. Let’s take so-called “swear” words. Shit was once an unobjectionable term for feces; fuck, for copulation; and cunt, for the vulva. These words became objectionable because of the social classes and ethnicities of those who used them, centuries ago. Bloody is mildly profane in the UK, but laughable in the US. One of the worst German profanities translates as “pig-dog”, which would be insulting but not obscene in English.

Of course, sometimes profane words aren’t “bad words” at all. Sometimes, they’re excellent words. It depends on context.

In addition to these high-stakes word-choice issues, we have various shibboleths. Most people think that this sentence is grammatically incorrect.

There’s three people at the door.

Is it? Well, Shakespeare would have said no. If “is” must agree with the pronoun “there”, it checks out. “There”, in this context, is shorthand for “What is there”, which is always singular. “To be” can cross from singular to plural and there’s no consistent agreement on which side wins. Usually, it’s the prior/left side with which the verb must agree:

I found out that “she” was actually three people working shifts.

So, “there is three people at the door” is, although non-standard, defensible.

I grew up in Central Pennsylvania, so I frequently catch myself saying “Are you coming with?” instead of “Are you coming with me?” or “Are you coming along?” Prepositions are weird animals that make up their own rules and don’t transfer well across languages. Why is along better than with? It’s arbitrary.

Another Central Pennsylvania usage that is frequently called wrong: “needs fixed” as opposed to “needs to be fixed”. How bad is it, really? It saves two words and communicates the same idea. On that note, let’s talk about a word-saving usage that is without controversy but was probably considered wrong at one point: the modal verb used to.

I used to cook.

This is a fine, grammatically correct sentence. Everyone knows what it means. But, it probably made grammarians twitch at one point. It looks colloquial, imprecise, and incorrect, because used to has nothing to do with used or to.

If I had to guess, the used to modal verb came from the wordier “I am used to”, where used is a past participle and “am” is the archaic device where “to be” instead of “to have” is used for the tense (e.g. “I am come”, “she is gone”, “he is dead”; two of those live on as adjectives and are rarely thought of as participles). In Shakespeare’s time, you would say “I am born in 1983” rather than “I was born”. This still lives on in some of the Romance languages. I’d imagine that “I used to cook” is a shortening of “I have been used [for] cooking”. It’s politely servile in a way that, like “my lord”, is now anachronistic.

For another interesting note, many people believe that “will and shall” is a dead distinction. It’s not. It lives on, but with less rigidity. The contraction forms (“I’ll”, “he’ll”) are descendants of shall most of the time. People still say “I will” when they mean (according to the older rules) will and use the contraction when they mean shall. “I will go to the store tomorrow.” “If they can’t cure me, I’ll die.” There are exceptions, the most notable one being when people de-contract for imperative emphasis: “you will show up on time”. The commanding shall tends to be de-contracted to use will, while the matter-of-fact neutral shall (which was far more common than the commanding usage) is left contracted (“I’ll be at home tomorrow”).

Don’t try to argue that contractions are incorrect either. That’s bullshit. Shakespeare used more of them than we do today. Contractions are excellent.

In any case, when I talk about shitty writing, I’m not talking about “different than” or “try and” or even “towards”. Even “irregardless” is embarrassing, but it doesn’t really block communication or bore the reader or spawn undesirable resentments. It has two extra letters and it’s ugly, but people know what’s meant. I couldn’t care less about it. (Yes, that was intentional.)

For extra fun, let’s take “Where are you at?” Some people hate this. In the right place, it’s excellent. The at is superfluous, but it’s a jab. It isn’t uneducated; it’s exasperated. It’s jarring, but it’s supposed to be. There’s impatience in that usage.

Dangerous Good Writing and Rhetoric

There’s an amusing sub-category of writing that I’d like to talk about. There are places where good writing is more dangerous than shitty writing. Corporate America is one such place. For one thing, you might still get in trouble for using a contraction in a corporate memo. You don’t want a human touch; you want formality and stiffness.

It has come to our attention that you have been viewing inappropriate material during working hours. Under these circumstances, we cannot continue your employment.

Change “cannot” to “can’t”, and you add a slight bit of human touch. In this firing letter, though, that’s exactly what shouldn’t be there. The adverse decision must be presented as impersonal, civilized, and inevitable. You say “can’t” when you want to come across as a vulnerable human; you say “cannot” when you want to suggest an objective limitation that is out of your control.

One of the biggest differences between corporate writing and real writing is in the role of passive versus active voice. English teachers hate passive voice and strike it out with red ink. They’re right, if they’re teaching people to be writers. Novels are slowed the fuck down by passive voice. The ball being thrown by John puts focus in the wrong place, unless the narrator is a cat, because the cat’s eyes are glued to that ball. (You thought they were in the sockets, didn’t you?) Yet, in business writing, the passive voice is often mandatory. Use active voice, especially around the pronoun I, and you sound like you’re trying to be an impatient executive. If you’re not an executive, you can get in trouble for that.

Shitty writing thrives in the corporate world, and it’ll never go away. Executives can use active voice, but most people are not executives and will need to acquire bad habits if they want to be employable.

Let’s talk about rhetoric. This is such an abused word today. So many people complain about “politicians and their rhetoric” with a note of vomit on that last word. What is rhetoric? It can be quite beautiful. I use it all over the place, and most people do, often without realizing it. Rhetoric is the art of designed speech or writing. Thought was put into it, to make it more clear, persuasive, or invigorating. Marc Antony keeps coming back to “Brutus is an honorable man”, assassinating his character with the repetition. Parallelism (“see the sights, hear the sounds, smell the odors”) is rhetoric. It can be odious, or it can work very well. Some of its rules are odd but work, such as the principle of threes (tricolon). “Friends, Romans, countrymen” is far more effective with three synonyms than two or four. Why? I don’t know. There are many plausible theories, but no one really knows what is magical about three but not four.

Rhetoric has an aesthetic purpose and a voice. You can inflate yourself, or show humility, or form a sense of commonality. (“Who among us has not sinned?”) You can use the imperative mood liberally. Sometimes, you break rules or even use multiple layers of meaning. “Now is the winter of our discontent” is a great example. Let’s look at Richard III’s original speech:

Now is the winter of our discontent
Made glorious summer by this sun of York;
And all the clouds that lour’d upon our house
In the deep bosom of the ocean buried.

Contrary to how the opening line is remembered, Richard wasn’t declaring it to be the winter of discontent. In fact, he was saying that the sun/son of York, Edward IV, had ended it. However, he resented his brother Edward. So, Richard cleverly speaks well of Edward in a way that’s amenable to being taken out of context.

To see how hard this is to pull off, note what changes with the truncation. In the original form, he’s using “Now” to justify a word-order inversion that occurs in conditional statements, i.e. “Only after eating your vegetables may you have dessert.” He’s therefore saying that the winter of discontent is over. But, truncate it at the first line, and the function of “is” changes. In the full passage, “Now” modifies “is” to suggest progression (rather than equality, the usual function of to be verbs): the winter of discontent is over, and has been made glorious summer. (This also exists in computer science statements; X = X + 1 is invalid mathematics but an assignment statement, valid under a progressive interpretation of “=”.) After the truncation, “is” becomes the regular equality statement and “Now” becomes not a modifier but an operand. He’s equating “Now” to the “winter of our discontent”, and the meaning becomes opposite to what he’s formally saying. It’s brilliant.
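(For the programmers: here is a minimal Python sketch of that “progressive” reading of the equals sign, offered as my own aside rather than as part of the Shakespeare analysis.)

    # In mathematics, x = x + 1 has no solution; in code, "=" is assignment:
    # an instruction that advances the state of x, not an equation about x.
    x = 3
    x = x + 1   # read as "x becomes x + 1"
    print(x)    # prints 4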

There’s a beauty to rhetoric, but it injects a personal voice, and persuasive desire. It reminds the audience that there is a speaker. This is also an area where many fiction writers fail. Should a novelist use rhetoric? Yes. But, in general, it should be that of the character. (Omniscient POV, I won’t cover here.) Otherwise, it becomes author intrusion. Impressing readers with cumbersome locutions went out of style almost two centuries ago. It can still pass, but only when narrating in the voice of a certain kind of character, one so tedious in real life that it takes exceptional work to pull it off. Ignatius Reilly comes to mind.

In corporate prose, the objective is not a specific voice but no voice. The machine is supposed to look like a machine. Why? Because it’s not a machine. Every decision that “it” makes has human motivations behind it, but often those are socially unacceptable, and the people making those decisions are often self-serving scumbags. Therefore, corporations have to create an objective, mechanical voice that hides their true intentions. “I’m firing half of you and putting the budget into my ‘performance’ bonus” will get an executive’s car set on fire. Instead, it’s “Due to difficult business conditions, we have been forced into an uncomfortable blah blah bullshit blah.”

It’s a fun experiment to switch up and use active voice in business communication. I enjoy it. However, I’m also insane. You’ll be surprised how many people find that they “just don’t like him” (or her) where “him” (or her) equals you.

There are times, like switches of magnetic poles, when these expectations invert. For example, Donald Trump used a limited vocabulary and coarse style, presenting his more prepared, polished rival as “establishment” and therefore phony. She wasn’t. Her speaking style wasn’t corporate. It was precise, as you’d expect from a politics wonk. Trump managed to turn her style, which would usually be more authoritative and therefore superior, into a negative… and a lot of people “just didn’t like her”. (Okay, 90 percent of “just don’t like her” was sexism, just as the corporate world uses “culture fit” to justify its own sexism and racism; but the other 10 was an unforeseen switch in the rhetorical expectations of politicians.) In the corporate world, there are times of crisis in which active voice becomes preferable. In times of acute crisis, no one wants to hear “It has come to my attention”.

Corporate writing is also deliberately slow. This is because 95 percent of corporate writing exists to tell people why actions adverse to their interests have been taken. Good news is delivered verbally. Bad news is delivered in writing using templates of boring, cover-your-ass prose that unfolds slowly. Take this generic form rejection letter:

Dear Candidate,

Thank you for your interest in the position of Associate Hitman for the Global Company. We had a number of highly qualified candidates for this opening, and unfortunately we will not be moving forward with your application at this time. We wish you the best of luck with your job search.

Aside from the obvious bit of information (“no”) there is no information content, but the slow rolling is an expected bit of politeness. The passive voice encourages the recipient (*cough* rejectee) not to take the news personally, not that it matters if he does.

Shitty Writing

What is shitty writing and where does it come from? This is hard to answer.

Let’s talk about weeds. See, weeds don’t really exist. It’s not a botanical term. It’s a word that humans made up for plants they don’t like, or that are in the wrong part of the garden. It’s the same with writing and speech. Passive voice is expected in corporate communication. Never say “do” when you can say “deliver”. Always add authority to what you’re saying with the prefix, “At the end of the day”. That’s shitty writing, though. Let’s be honest about it. Outside of the intentionally soulless context of office writing, no one with a soul uses “deliver” intransitively unless talking about food. Pizza Hut delivers. If you’re a programmer, you write code. If you describe someone as “not delivering”, or if you “deliver solutions”, then fuck off and die.

Much shitty writing comes from the mismatch of styles. Office writing should be adverb-heavy, verbose, and, most importantly, non-committal enough to allow exits and bland enough to be safe even when read half-heartedly and taken (either by negligence or malice) out of context. For a contrast, fiction should be punchy. Characters should do things. It should rain. John should not “come to a point at which life processes cannot continue”; he should die. Different styles. Fragments OK. Adverbs are acceptable in fiction when they add precision but very bad when used for emphasis, insofar as they diminish authority, unless of course the author wants a less reliable narrator. If I sound inconsistent and full of myself, that’s because there are no rules. But, there are styles. Some work and some don’t.

The simplest kinds of bad writing (grammar errors, misspelled words) will tank an office memo or a novel, but for entirely different reasons. In the office memo, they add character that is not wanted. They suggest that an errant human, rather than the mechanical beast that is the company, wrote the memo. For the novel, readers want a human writer. There, the issue is that bad grammar slows the reader down. Not by much, I’ll point out. Reading is about 20 percent slower for the worst kinds of misspelled or grammatically awful writing as opposed to crisp, good writing. It might feel slower, in the same way that driving 50 mph on a 70 mph road feels like crawling, but it’s usually 10 to 20 percent. Now, in an office memo, that 20% difference wouldn’t matter, because office writing is supposed to be slow, vapid, and imperious with the reader’s time. It can kill a novel, though. If you write a 100,000-word novel in 120,000 words, you’re dead unless you’re an exceptional belletrist. Agents and editors have a hair-trigger sense of wasted words, and for good reason: wasted words suggest other weaknesses in the writing (or story) that are more subtle and require a long-form read (which agents don’t have the time for) to pick out. Small differences, information-theoretic margins of a few percent, make the difference between best-seller and perma-slush. If you’re a novelist, you want to have few grammatical errors because they slow the reader down with unimportant details… not because you’re trying to achieve a mechanical aesthetic. We get to the same general rule (“use good grammar most of the time”) along two very different paths.

Similarities between those two styles end there, though. Active voice or passive? Active for fiction, passive for office. How about rhetorical questions? Okay for a novel (suggestive of inner dialogue) but inadmissible in office prose. When can you break the rules? Even stiff business writing (which invented the non-word synergize) breaks rules of good writing all the time, but you have to know exactly which rules you can break.

A good novel convinces the reader to suspend disbelief and invest her time and emotional energy in a 100,000-word account of events that never happened. The promise is that this story, technically a lie, will tell a deeper truth than many of our actual experiences. It’s hard to convince a reader of one’s authorial stature; there are many who try, but don’t merit it. Rhetoric is a big part of that.

Business writing is anti-rhetorical. In part, it wants voicelessness because the American business environment is so militantly anti-intellectual, and voice is something that most businessmen can’t hack. (So, get it out of here! Burn it with fire!) Corporate writing is bland because bland writing doesn’t make middling minds insecure. The fiction writer must convince a reader to read the next thousand words of prose. She must motivate her readers to continue with the difficult activity of staring at patterns made with chemicals on decaying plant matter. Business writing, for a contrast, tries to remove convincing and the reader and the writer; everything must dissolve, and this document must be accepted as objective truth, freshly printed by the machine, with nothing that suggests voice or character because those introduce the subjective and intimidate the less intelligent.

Rhetoric, done well, can be beautiful. Almost every well-remembered line of prose or poetry had some rhetorical device, perhaps used subconsciously, behind it. Hemingway’s deliberate use of short, bare sentences (the man was not limited, and wrote some great long ones, too) is what rhetoricians call parataxis. It worked very well for him. Is all rhetoric good, though? No. In fact, much of the shitty writing that comes from competent grammarians and orthographers, who’ve mastered the basics but still inflict low-quality prose on us all, is… badly-deployed rhetoric.

Rhetoric tends to have music to it, and music is repetitive. Repetition can be obnoxious. Or, it can be memorable. Rhyming, in poetry and song, probably became fashionable for purely practical reasons: it made it easier for actors to remember their lines precisely. Rhyme and rhetoric have the same effect on readers. They make words and phrases memorable and quotable. That can work very well, or it can fail.

Let’s explore diacope. What’s diacope? It’s when you use a word or phrase twice, with an intervening element.

 It is what it is.

“Love,” she said. “Love.”

Tom only cares about what is good for Tom.

“You got me! Oh, you better believe you got me.”

“Bond. James Bond.”

There’s a “rule” of grammar or style, not really a rule, about never repeating words. (See how I repeated “rule”, and it worked?) Most languages can’t afford this, but English has a ridiculous number of words and so a lot of people go to ridiculous extents to avoid repetition. (That repetition of “ridiculous” didn’t work quite as well.) This aversion to repeated words can lead to actual errors, e.g. “amount” as a synonym for “number”, which it of course is not. The truth is: repeating words can be very powerful. Or, it can be clunky. It depends on what word is repeated and how it is used. It draws emphasis. You actually can start twenty sentences in a row with “I”. You should do that if you want to write a self-centered character in first-person. You can do that if you’re telling a single-person, direct story. You shouldn’t do that if you don’t know what you’re doing, though.

If you say, “She had a blue coat, a blue hat, and blue shoes”, you are drawing attention to “blue”. This may or may not be (fun tautology, there) what you want. It depends on context. Let’s say that it’s not what you want, and that this emphasis of blue is undesirable. Changing her hat to “azure” and shoes to “cerulean” isn’t going to fix the problem. It’ll make it worse.

Rhetoric is memorable. It’s catchy. It sticks out and can make a line quotable. Sometimes, it’s great writing. And, sometimes, it’s absolute shite. Bad writers often don’t know the difference. It can be hard to tell, because whether a rhetorical device works or whether it’s jarring or ugly usually depends on context. In fiction, it can depend on the character who is narrating. Some people have cliché minds and would totally narrate like this:

At the end of the day, Erika just wasn’t delivering. It was time to give her the axe. He would have to speak with the team about it on Monday, after the dust had settled. Next week, the team would need to fire on all cylinders.

If your POV character is a soulless corporate drone destined to plateau in middle management, that’s great writing. If you want the reader not to wish for your POV character to die in a copier fire, then it’s poor writing.

For this reason, it’s very hard to come up with snippets of bad writing. For anything that I can point at and say, “That’s bad writing”, there is a context in which it would be good writing. It takes a few hundred words to really know, and yet there’s a point where it becomes obvious. As in Jacobellis v. Ohio, I know it when I see it. Sometimes, the sin is author intrusion: a writer trying too hard to push a message or just trying too hard to be clever. Sometimes, it’s an introduction of one style or form into another that doesn’t work. It could be too many styles (flipping back and forth between business cruft writing and journalistic prose) or it could be the lack of one.

I think rhetoric is often at the core of it, though. Rhetoric accentuates. It adds a musical dimension. When used well, it’s powerful. When used sloppily, it’s terrible. Most people aren’t aware when they’re using it. That, I think, is the problem.

Overfitting

I’m going to bring in a concept from machine learning, which is overfitting. Machine learning, broadly speaking, is the attempt to simulate decisions considered intelligent (that is, those that traditionally required an expensive carbon-based organism instead of a machine to perform them), such as image recognition, by turning them into hard math problems that, while impossible to solve without data (a priori, as we say), become tractable given massive data sets, a few well-studied algorithms from operations research, and time. Explicitly programming a computer to recognize hand-written characters would be so time-consuming and error-prone (there are about a hundred thousand characters in the Unicode standard) that it wouldn’t be worth doing; it’s better to train a machine to learn from millions of labeled examples.

Of course, the machine isn’t actually intelligent. It’s just doing a very complex rote computation involving lots of data, and it can easily infer things that aren’t true. Incidental artifacts can be incorporated into the model. Let’s say that an agent is being trained to distinguish men from women based on facial photographs, but that the men’s and women’s pictures are taken in separate rooms with different lighting. Then, the machine might learn that men have brighter faces. It isn’t true, but the machine doesn’t know that. It’s very easy to build a machine learning system that learns everything about its training set, but does so by incorporating incidental artifacts of the data that don’t represent the real world, and therefore performs poorly on new cases. That’s called overfitting.
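To make that concrete, here is a minimal sketch of overfitting in Python; the straight-line-versus-degree-9-polynomial setup is my own toy illustration, not anything from the discussion above.

    import numpy as np

    # Toy data: the true relationship is a straight line plus a little noise.
    rng = np.random.default_rng(0)
    x_train = np.linspace(0, 1, 10)
    y_train = 2 * x_train + rng.normal(0, 0.1, size=10)
    x_test = np.linspace(0, 1, 200)
    y_test = 2 * x_test + rng.normal(0, 0.1, size=200)

    for degree in (1, 9):
        # A degree-9 polynomial can pass through all 10 training points,
        # memorizing the noise (the "lighting in the room") along with the signal.
        coeffs = np.polyfit(x_train, y_train, degree)
        train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: train error {train_err:.4f}, test error {test_err:.4f}")

    # Typically the degree-9 fit shows a near-zero training error but a larger
    # test error than the straight line: it has learned artifacts of the sample,
    # not the underlying relationship. That is overfitting.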

How does it apply to writing? Well, when we write, we draw on what stuck with us as readers. Those lines tend to be rhetorical. Behind most memorable lines is a rhetorical device. If these devices are taken into a context where they don’t belong, they fail. If they’re overused, they’re just clichés, even if they worked when originally deployed. They’re also hard to modify without breaking. Let me give a famous example, from A Tale of Two Cities.

It was the best of times, it was the worst of times

This is a great opening line. You can’t use it, because it’s been done. Now, let me just show how sensitive that line is to something that most of us don’t think about: inflection.

Let’s assume that the language English’ (pronounced “English-Prime”) is exactly like ours but with the words “best” and “worst” swapped in meaning. Nothing else changes. Now, in English’, could you start off with this? It would mean the same thing.

It was the worst of times, it was the best of times (English’)

I would say no. Here’s why. Even though “worst” in English’ means “best” in our English, you’re now inflecting downward, because that’s how the line is read. “It was the woooorst.” Bass. Hear those vibrations in “worst”? Then, you have “it was the best of times”, with the treble of “best”, but in a language where “best” is negative. What worked as an expository note on contradictory indicators at a time in history is, instead, made dissonant and sarcastic.

Actually, in English’, the words “best” and “worst” would be likely to fade for the same reason that “pulchritudinous” (a not-beautiful word meaning “beautiful”) has become uncommon.

In English’, the same exact opening line wouldn’t work and it has nothing to do with the words or their meanings, but with how we say them. What makes that line work is an artifact of English, in the same way that “veni vidi vici” exploits a Latin artifact for alliteration, but becomes the clunky tricolon “I came, I saw, I conquered” in English.

Bad writing, then, I would argue to be a form of overfitting. It’s when one takes an example of good writing, learns the rhetorical device, but ignores the artifacts that make it work. This is an error that we’ve all made. We take what’s memorable and don’t fully know why (when we’re inexperienced or immature and still figuring things out) and misuse it. The result is rhetoric out of place, often deployed without cognizance.

In my experience, as one who wrote a few million words of it before I wrote anything good, bad writing tends to be either inscrutable or too-obvious in its intentions. The obvious cases are the trying-too-hard examples. If someone goes into hard-core hypotaxis and drops 265 words to describe a character waking up and having breakfast, that’s archaic because people don’t write like that anymore. It may have been impressive in a time when books were so expensive and rare that you read every one you got your hands on, but in 2017, the reader feels that her time was wasted, and she goes off and starts something else. The inscrutable often comes from imprecision. A rhetorical device goes off, but it’s not clear whether it was meant to be there, or whether it planted itself via memetic infection and writer overfitting. Or, to be less pretentious about the whole thing, it’s “Did she mean to repeat that word, or was she in a loop?” I wrote a short story in high school where I used the word auspicious seven times in 2,900 words, and used ostensibly as a “smarter” synonym for obviously. Yuck.

On that, misuse of a “big” or “educated” word is just unforgivably terrible. It’s the penultimate sin of writing.

America’s 4th Phase

The United States, I would argue, has had three distinct phases: Citizen America, Producer America, and Consumer America. We’re heading into an unknown fourth one. In this light, it’s useful to understand the assets and drawbacks of each of the previous ones, as well as why each one faded and gave way to its successor.

Conveniently for generational theories like the Strauss-Howe model, each seemed to live for about ninety years. Of course, none of these had precise start or end dates, and they blur together at the edges. Citizen America didn’t “die”, but at a certain point in our history, we began to identify more as workers (producers) than as statesmen (citizens). Likewise, by mid-century we identified more as consumers than as workers, because we got wealthier.

Citizen America (ca. 1750 – 1845)

European philosophers like Rousseau, Voltaire, Locke and Hume argued for rational government. We should be governed, they argued, by laws rather than charismatic or religiously-ordained figures. The American Revolution was one attempt to achieve rational government; the French Revolution was another. Neither of these were perfect, but the attempts inspired a new attitude toward public service and government.

We had to decide, after the American Revolution, what kind of country we wanted to be. Hamilton had one vision; Jefferson had a different one. The Industrial Revolution was in its early phases, while slavery and western expansion became sources of conflict. Tensions grew between the established coastal rich (who started to perceive themselves as a new nobility) and the poorer people in the western hills. Immigration was a major point of contention. While there was nearly universal agreement on political equality for those deemed to be citizens, there was no agreement on who ought to be a citizen. Land-owners only, or all free people? Should slavery even be continued? Jefferson, most flagrantly, said that “all men are created equal” while fighting to retain ownership of slaves.

Citizen America allowed modern capitalism to flourish, but its culture was pre-capitalistic. Inspired by the Greeks and Romans, it held that public life was the highest virtue. For an aside, the insult idiot comes from the Greeks: it meant “private person” and referred to one whose concerns were solely commercial or parochial. Most philosophers and public figures, in the 18th century, believed that a person whose interests were solely mercantile deserved to live with the lower classes, no matter how rich he became. Poets and philosophers and statesmen, they felt, ought to outrank men of commerce. That is one thing that was good about that time: there was an esteem for intellectuals that has largely disappeared.

One of Citizen America’s fatal flaws was that most people couldn’t participate. Slaves, for example, were treated as non-citizens within their own country. Free blacks didn’t always fare better. Women couldn’t vote in most states. Jefferson’s vision of the agrarian farmer-intellectual, reciting Cicero as he tilled the fields, turned out not to be the most practical vision. Andrew Jackson brought forth an ugliness in our national character that was truer to the reality of the common working person.

The high point of Citizen America was around 1800, and the decline was obvious by the 1830s. Then came the Mexican War, the atrocious Dred Scott decision, and the Civil War. By this point, we were well into Producer America. The role of labor, and the social position of those who performed it, became the central question of our society. We still had foundational questions about the country, but they were largely tied to labor and the importance of those who did it.

Producer America (ca. 1845 – 1940)

The Industrial Revolution came into full swing. Technology enabled people to work more. While farmers had periods of toil and others of rest, factory workers could suffer 300-day working years and 16-hour days, thanks to artificial lighting. High immigration made for an overflowing pool of cheap wage labor. The state offered no checks against this. Smart workers realized that it was to their advantage to organize, although the legal status of unions wasn’t well-defined. It took a lot of fighting to get official legal recognition of the mere concept. Meanwhile, business corporations used violence to prevent labor from asserting itself. This was the era of the Pinkertons and the Triangle Shirtwaist Fire.

We went full-on into the Gilded Age, with the infamous political corruption and financial instability, bringing on the Long Depression (1873-98) and culminating in the Great Depression (1929-39). This was also a time of high ethnic strife: Northern Europeans versus Southern Europeans, natives versus immigrants, freed blacks versus working-class whites. History tells us that a corrupt upper class will often not find it difficult to divide the working people against each other. We saw a lot of that as the working classes fought for bare survival in tenement slums.

Still, there’s a nostalgia that people have toward Producer America. Like every age, it had its virtues. It’s the era of the Wild West and of steampunk, when people pickled their own vegetables and carved their own ice. It was easier, if one had the means, to enter business and stay there. If you were a middle-class male, you could get a job in business by asking for one, and be a full-fledged businessman after seven to ten years of clerking. Moreover, the consolidation of corporate power had only started. There were, for a fact that surprises most people, more American car companies in 1915 than there are today.

There was a maker culture much stronger than what exists now, but much of this was by necessity. One had to be skilled at repairing mechanical devices, or one would not have them, because they were expensive and most people were (by today’s standard) very poor. People in the northern states put trust in their neighbors, because of the severe winters. (One sees this today in Midwestern politeness; one could not afford to be a jerk in a challenging climate with 19th-century technology.) On the whole, Producer America was a difficult place to live, but there was plenty of work to be done. It kept people active.

In Producer America, people identified with their work and, increasingly, their social class. Laborers invented unions and white-collar skilled workers invented professional associations (which are unions by another name). From dress codes to punch clocks, most of our work culture was invented in this era.

Let’s talk about the Election of 2016. I did not vote for Donald Trump. I, however, recognized early on that he was consistently underestimated, and was in fact running an intelligent (if offensive and distasteful) campaign. I found myself repeatedly arguing that his slogan, “Make America Great Again”, was brilliant. No, it wasn’t dog-whistle racism that made the motto resonate. (There certainly were racists among his supporters, but racism wasn’t the only factor.) In fact, great was not the operative word, but make, coupled with the imperative mood. Trump’s subconscious promise was of a return to a time when people made tangible things and had jobs that mattered. Will he deliver on this promise? Can he? I have my doubts. Do we even want to get into coal mining again? Of course not. That doesn’t matter. What most people missed was that “Make America Great Again” wasn’t about racist or sexist nostalgia, but rather a deep longing to return to a time when human labor had esteem because it delivered tangible value. The fact that this required strong collective bargaining seems to have been lost on most of today’s right-wing populists.

Producer America was poor, beset by political corruption, and financially brittle. We had a quarter-century-long depression at the end of the 19th century. We had frequent financial panics, much worse than those that exist today. Banks often failed, zeroing savings accounts. A typical household earned less than $10,000 per year in today’s money. Let’s not romanticize this period of our history. People identified with their roles as workers, and with production, in large part because they were so poor. One’s job was the only source of income, esteem, and hope for a person, and often a meager source for all of those.

The system started to break down in the 1920s. The Great Depression wasn’t, in my view, caused by the Oct. 29, 1929 stock market crash. We had a worse one in 1987 and it didn’t even cause a mild recession. The 1929 crash was a symptom of something that had been building for some time. What tanked the economy, in the late 1920s, was ill-managed prosperity. By 1920, we were very good at making food. So, prices dropped. Seems like a good thing, right? More food and cheaper. Remember, of course, that a large proportion of Americans were directly involved in food production. By 1925, we had endemic rural poverty. Farmers who couldn’t afford the new technologies died out. Towns that served these farmers fell apart, too. By 1927, we were seeing weakness in industry. If farmers went out of business, the market for tractors went with them. Weakness in heavy industry was clear. 1929 wasn’t when the Depression started, but when it hit the cities and the richest people and it was recognized as a Depression. Producer America had broken down.

In that time, there was a widespread belief that poverty was a sign of personal moral failure. Poverty, in that view, was a bitter medicine that might impel a person to work harder, stop drinking, or be more frugal. Modern psychology tells us the opposite, but at the time, the Horatio Alger narrative and so-called Protestant work ethic dominated. What happened in the 1920s was that poverty spread out of control. It wasn’t the fault of the rural innkeeper that he had no business; the community that he served had no money. It took massive government intervention, catalyzed by an overseas war, to bring the economy back from its own wreckage.

When we re-emerged, we found ourselves in an era of higher complexity. We found ourselves reliant on governmental machinery designed by people with doctorate degrees in economics and operations research. Production of most goods had become too complicated for individuals to participate: airplanes and computers require massive infrastructure and human capital. People could (and still do, in 2017) build their own motorcycles and cars, but they don’t stand a chance of selling them. There are far more stable jobs repairing the machines that large corporations make than there are in direct competition with them.

In the U.S., the most controversial change to follow from Producer America’s insolvency was the expanded role of the federal government. We needed it in the 1930s to dampen the damage done by runaway capitalism, and in the 1940s to fight the Second World War. It’s important and necessary and we rely on it, but a lot of people remain uncomfortable about this bare fact. I’d bet that 90 percent of people, including many who complain about “big government”, like the services that government provides, but some people wish they could have late-life medical care and decent roads through other means. (I don’t consider this practical, but I’m not them.) To quote the Tea Party protesters, “keep your government hands off my Medicare”. The age of perfect self-reliance never existed. In fact, the supposedly rugged cowboys relied on the U.S. Government (which displaced the native population) quite heavily. By 1940, though, it had clearly and truly ended.

The upshot of this upheaval is that it worked well. We built the first society with a large middle class. We had rapid economic growth and technological advancement. A truck driver who lived in rural Michigan in 1950 lived better than a European viscount in 1910.

Consumer America (ca. 1940 – 20??)

The AMC series Mad Men showed the birth of Consumer America, for good and bad. We generated enough wealth that people could work less and spend more. People began to identify with their purchases more than their jobs.

At the same time, we started seeing a problem with “jobs”. They became something of a mess, because we started to suspect either that our working lives were suboptimal, or that we would be deprived of those jobs as soon as it was expedient. Both of these suspicions, felt acutely by individuals and dully by society, turned out to be right.

A fundamental problem with working for money follows. If your work has objective, legible value, someone will out-compete you at a cheaper price. Even if the low-price competition is unsustainable (dumping), it does not matter. The naive young person who burns out will be replaced, and so will the impoverished country that becomes less impoverished as work moves to it, but there will always be a cheaper alternative on offer, somewhere, to the employer. On the other hand, if the work is intangible (which is not to say that it’s not valuable), then one is reliant on a matrix of cultural, social, and generational support, skills, and infrastructure. What does it take to get paid for intangible work? Sales. Most people do not enjoy selling. In fact, they hate it, especially when it is their own work they must sell. Most people would rather take standard office jobs for reliable mediocre pay than put up with the constant humiliation, volatility, interpersonal rejection, and sheer chaos of having to sell themselves on a day-by-day basis.

We learned in the 1930s not to hang our incomes on the price of a commodity. This is especially true now, as commodities become cheaper. Rather, a worker survives by making his work intangible. The selling point of a college degree was that its economic value was independent of fluctuations in commodity prices. Oil prices might drop, companies might go bankrupt and zero their stock, but that college degree would never lose value. (Ha.) Management became the most coveted job, and it’s easy to see why. In commodity labor, it’s obvious if someone is bad at the job. If one person drills 20 holes per hour and another drills 15, the latter will be fired first. With management, the people who know if a manager is bad cannot say so, for fear of being fired themselves. The manager can always fall back on superior educational pedigree and higher social position. One-on-one, he has higher credibility and can use this to amass even more credibility. Eventually, we reached a state where the major leagues of management, called “executives”, not only take extreme salaries, but can transfer easily from one part of the economy to another. Getting fired, for an executive, is a paid vacation and a better next job. Sales and especially management have gained ground, and labor has lost it.

Under Consumer America, we became a society where most people go to work and don’t really do anything. The machines make stuff and other people, called customers, buy it, and the corporation functions as a self-reinforcing eddy driven by inertial factors like brand reputation and convenience.

In fact, we do a lot more making in 2017 than we did in our supposed high era of manufacturing. We’ve even become quite good at it, due to technology. We make better things. If one includes hobbyists, we probably make more creative things. The problem is that humans who actually make things, for commerce, face imminent loss of income at every moment. Most people can’t stand the stress. At some point, they’d rather give up on their dreams and become executives whose “products” are meetings and bad ideas.

In Consumer America, people seek social status through consumption, whether of college degrees or clothing or housing in fashionable places, because that’s how one gets a reliable income. Production is too dangerous a route to sustainable income, because one can always be outcompeted. One must, instead, demonstrate something that looks like superior taste, culture, or intelligence, and that is done through consumption.

The highest-ranking people in our societies are not elite producers, but elite consumers. The polished businessman suggests effortlessness in everything he does. That’s his charm. He wears a thousand-dollar suit that looks like the fabric has never been folded. He sells the dream that if others just follow his ethereal “vision”, they could also be entitled to the ultra-consumer life that he enjoys.

The most valued trait in our culture is called “celebrity”, which is a preternatural ability to consume attention. We’ve given up on the ability to evaluate what people produce, so we use their consumption as a proxy. We conflate price with value.

Where might this lead?

Breakdown

Labor seems to be at the center of each phase’s inevitable breakdown.

With Citizen America, the fatal flaw was slavery. Hamilton and Adams predicted that slavery would destroy the U.S., and it nearly did. Whatever one’s gripes may be with industrial capitalism, it was an improvement over five millennia of humans using violence to force unpaid labor out of other people.

Producer America, to a large extent, couldn’t handle its own success. There are plenty of bad things to say about industrial capitalism, but the fact is… it works. It would have seemed unfathomable that prosperity in agriculture would lead to a crippled economy and (overseas) to authoritarianism and war. Yet it did exactly that.

Consumer America seems to be headed down a familiar path. What happened to food prices in the 1920s is happening to all human labor. The “sharing economy” is a reinvention of what the early 20th century called “hobos”: itinerant workers taking what work they could.

Office workers like attorneys and software engineers might think they have little in common with 1920s farmers, but history will prove them wrong.

It’s hard to define a clear adversary. Some people attack “globalization”. The truth, however, is that foreign competition isn’t the greatest threat to American workers. To a large degree, I believe that the threat of foreign competition, especially when bandied about by management (say, when unions are under discussion), does more damage than the actuality of it. The greater threat is technology. It’s often ignored, because it doesn’t have a face, and because we all recognize both its necessity and inevitability.

There’s a lot of “othering” at the heart of the resurgent nationalistic populism that we’re seeing in the world’s working classes. You can other a person who looks different, who lives thousands of miles away, and who, some dickhead manager told you, is eager to take your job for one-fourth the price. You can’t other the phone, a supercomputer by the standards of 30 years ago, that sits in your pocket. So, we tend to ignore the dangers presented by technology. “Outsourcing” looks like something that we’ve seen for millennia: other tribes or nations, full of hungry people, threatening to conquer us. Technology doesn’t look like anything visceral. It appears non-threatening if it’s well-designed.

Should we dread technology? Yes and no. Automation is a double-edged sword, but it’s going to happen and there is no value in trying to prevent it. Governments should not try to preserve specific jobs in, say, coal mining. They should, however, attempt to prevent sudden losses of income and especially of worker leverage. I can’t emphasize this enough. Most people think their jobs are safe, and they’re wrong. What do they think those laid-off truck drivers are going to do? Many are going to retrain and contend for the supposedly safe jobs, like software engineering. If the labor market collapses, it will fall as one.

The bigger problem around technology is not automation. Automation’s desirable. Instead, the danger is that technology is often used toward bad ends. The lasting effect of the echo-dot-com boom isn’t a technical or political advancement (scientific progress seems, if anything, to be slowing down right now). Rather, it’s the increasing shift of power from employees to employers. Let’s take social media. I’ve been involved in hiring and I’ve seen people turned down for jobs or fired based on social media activity, sometimes quite anodyne. I’ve also seen people turned down because they didn’t have social media profiles, which was deemed “creepy”. If he didn’t have a Facebook or LinkedIn profile, what was he hiding? That’s right; someone was punished for not rendering personal information unto surveillance capitalism.

I once worked on a performance-management system for drivers. Most of the people who engineer such software believe that it’s harmless, and they are usually told by management that drivers appreciate the work. That’s often false. Such systems increase stress and even the probability of workplace violence. Technology often suits employers’ needs at the expense of employees. In one case that I know of, a few years back, a GPS monitoring system that was supposedly intended to improve gas mileage was actually used to catch drivers eating lunch off-route (either to go home, or to see their kids at school). That’s not increased efficiency. That’s being an evil, greedy fuckbag.

I worry much more about technology used toward evil ends than I do about automation. With automation, we need to be smart as a society and put the dividends back into the common good. That’s a hard problem, but it’s easy in comparison. A permanent shift in the power relationship between employers and employees, in the wrong direction, could render us unstable, impoverish the masses, and leave the country prone to populist or even fascist revolt.

Will Consumer America die soon? Yes. We’re seeing the early phases. It’s not pretty.

As I said, every job that provides direct salable value has a target on it, and most of those have been automated out of existence. The jobs that remain are in abstract work.

In an office, you have some people who run around and try to quantify the work of others: managers, HR executives, consultants, and the like who try to spot opportunities for cost-cutting. Those people produce little. Nine times out of ten, they’re externalizing costs and risks rather than eliminating them. 99 times out of 100, any assets “liberated” by cutting these costs are put into executive coffers instead of forward-thinking investment. Most of these cost-cutting wizards are worthless, but they have a lot of power. People fear them. They will drive abstract laborers toward concrete measurements. They’ll take the creative process of programming and split it up into 4-hour units called “story points”. What you end up with is a civil war where most of one side– the workers, playing defense because even if they wanted to play offense, they wouldn’t have time on top of their (increasing, with each layoff) assigned work– has no idea what is going on, or even that they’re in a civil war at all.

When this happens, employees lose. Costs are externalized or transmuted into risk rather than removed, so shareholders get a bunch of under-documented risk dumped on them as organizations become more brittle and shorter-lived. It looks like stagnation, but it’s actually a hollowing-out. For example, in the corporate world, workers face increased instability and expectations without fair compensation. Moreover, when companies implode (as has become common) they aren’t replaced with better ones. Whatever “tech startup” meant in the Silicon Valley heyday of 1970-1995, it now means “new company with worse health benefits and an ill-defined career path.”

This can have far-reaching social effects. The rich man’s habit of dividing poor blue-eyed man against poor brown-eyed man (or black man against white man, or man against woman) leads to misplaced resentments that stack up over time. You end up with a lot of people who are pissed off at the wrong people for an incoherent mess of reasons. Then you get right-wing populism, which we’ve seen flare up all over the world. Anger drives out the more subtle emotions, and eventually conflict reaches a boiling point.

Downfall

When did Consumer America start to decline? I think that the civic downfall began in the late 1970s. Studio 54 is emblematic: elitism became sexy again. We fully committed ourselves to the wrong path in the 1980s.

While this period of time is called “the Reagan Era”, I doubt that a single center-right politician, no matter how powerful and charismatic, can take singular blame. Did Ronald Reagan invent employee stack ranking? No, that was Jack Welch and Jeff Skilling. What went wrong was more about culture than politics, and it happened in other countries that didn’t have conservative leadership. Mean-spirited corporate behavior, not transient conservative politics, is what killed us in the 1980s.

Leftist leadership wouldn’t have prevented a devastating cultural phenomenon: the repolarization of the American elite. To understand this, we have to understand the history of our national elite. What was it, and how did it change?

For most of human history, most people who were rich either inherited or stole their wealth. It was rare that a rich person wasn’t a scumbag, bully, or crook. This is why Jesus could say what he did about the eye of a needle. With near-zero economic growth on a per-year basis, life was pretty much zero-sum. It was a reasonable presumption that most rich people prospered at others’ expense. Then, we came into a perceived Golden Age when this seemed to be less true: from about 1940 to 1980 in the developed world. It has often been considered a historical anomaly. It doesn’t have to be so.

Not only the Great Depression, but also the Second World War and the flirtations with extremism all over the world, convinced the American elite to slow down and be happy with what it had. They elected to get richer somewhat more slowly than others in society. Noblesse oblige. Inequality went down, but so did their risk of imminent overthrow. Perhaps without knowing it, they chose graceful relative decline as their survival strategy. It worked. They were plenty rich, throughout the 1950s and ’60s. They never stopped getting richer; they just slowed their pace and let everyone else catch up.

A CEO in the late 1970s made about $500,000 per year. His source of pride wasn’t his income but his stewardship of the company he ran. Even if it meant a personal cost to him, he’d do what he could to keep his people employed and happy. Companies invested in their people. There was a large middle class. If you were unemployed, you could call about a job at 10:30, interview over lunch with the CEO, and be hired by 2:00. What happened? Why did this country throw it all away?

Upper-class people who remembered the tumultuous 1930s and ’40s recognized that social stability and cultural advancement were more important than personal enrichment. Their kids didn’t. Their kids traveled abroad and came into contact with places where the old way reigned, and where feudal lords and scumbags still dominated the upper class. They met oil sheikhs who married 9-year-olds, third-world despots who could kill with impunity, and (after 1989) post-Soviet kleptocrats buying private islands and penthouse apartments all over the world. In comparison, the American rich were more restrained, more civilized, and also poorer. They still had to follow their country’s laws! They flew first class instead of private!

At some point, the American upper class desired to join the global elite. They sold the country out. They made it legal for nonresident foreigners, often of criminal origin, to buy real estate in Silicon Valley and Manhattan, creating permanent housing shortages. They created a culture in which labor is ill-viewed and consumption reigns.

We’re now back to a Gilded Age, but a global one. Whatever we learned in the 1930s and ’40s, we forgot. Filth floats to the top again. The hyper-consumptive global elite is in charge. Even national governments must often play by their rules, as they constantly threaten to move capital elsewhere if asked to pay their taxes.

Our global society is, because it is badly run, quite brittle.

We actually don’t have more recessions in this terrible new economy than we did in the old one. We have fewer, but they hit harder. In the old economy, you worried that recession might get you laid off. It might mean a tough year or two. In the new one, people worry that it will end their careers, because that happens a lot. People find themselves out of relevant work for two or three years and are replaced, when the situation improves, with a mix of unskilled young workers and better software off the shelf. For the bottom 98 percent of the labor market, each recession is more severe and each recovery is more jobless.

We are getting to a dangerous point. Let’s talk about various possible future outcomes.

Worst: Catastrophe

I make no predictions for the worst-case scenario. Climate change, international conflict, resource depletion, a successful Business Plot, even another 1918-like disease epidemic… there are a number of ways in which the U.S. could fail to survive the end of Consumer America, or in which it could be radically altered. Some of these catastrophes are more manageable than others. Some involve a painful decade and a recovery; others go into darker places. I can’t say too much here, because the nature of these events is that everything becomes unpredictable when one happens.

Catastrophic events are an ahistorical threat. The Yellowstone Supervolcano is unlikely to explode, but it doesn’t know or care about human generational cycles. What is different about this time, as opposed to others, is the brittleness of the American economic fabric.

Baseline: Renter America

Renter America is where we seem, in 2017, to be headed.

To sum it up, it’s a worse version of Consumer America. Life goes on, but people have less control over where and how they live. People continue to need these short-ended power relationships called “jobs”, and spend more time on busy work or protecting position than actually doing anything. Meaningful, productive work becomes a coveted, scarce resource and one must engage in political intrigue in order to get it.

In Renter America, people live increasingly on their reputations (which are easily controllable by corporate interests) rather than their skills. Jobs get harder to find and easier to lose. Long-term unemployment, financed by credit, becomes the norm. Corporate investment in individuals goes to zero, offset by escapist fantasies (e.g., Silicon Valley startups), and those who are wise enough to see through them– or, more to the point, old enough that they should see through them– are discarded. People lose a sense of ownership in their economic lives and become permanent, itinerant renters, ambling through life on credit and student loans they’ll never pay back. Homeownership and starting one’s own business become impossible for most people.

This is where we seem to be headed in 2017. There’s no sign of an imminent national catastrophe (although there are many risks) but there’s also little reason to have hope about our economic or political future.

What I question about Renter America is its stability. Material well-being doesn’t get worse in Renter America, but it ceases to improve and there is a loss of dignity and self-determination. People are forced to move, threatened with medical bills they can’t pay, put into jobs involving more busy work and less actual production or self-improvement, and generally kicked around more. Their jobs and lives become mindless and highly surveilled.

Renter America delivers mostly insult rather than injury. A few people die prematurely because they lack health insurance or work in dangerous jobs, but most people are afforded vaguely dissatisfying but semi-comfortable lives. The elite recedes into its own world where things still work: schools lead into jobs, jobs lead to skill growth and wealth, et cetera.

If it doles out only insult, will Renter America ever be overthrown? I’m not sure. We could see a widespread slacker culture, with the Japanese hikikomori or the European mileuristas and NEETs becoming the norm. I don’t see it as inevitable that a mediocre, boring future gets itself overthrown. I hope that I’m wrong.

Better: Patriot America

In the 1960s, national governments were perceived as being in cahoots with the global corporate elite. To a large extent, they were. Still, companies weren’t as malignant as they are now. Back then, private companies invested in their people, paid well, didn’t try to avoid paying taxes at all costs, and seemed at odds with neither the needs of government nor those of the people.

Conflict with the global corporate elite is possible. To the surprise of some, I believe that national governments will be our allies when this happens. They don’t like tax-cheating, rule-breaking criminals any more than we do. National governments don’t get everything right, but they’ve been left as the sole adults in the room. Who funds basic research? The age of Bell Labs and Xerox PARC ended a long time ago; the short-term optimizers won.

Patriot America would be a more inclusive reprisal of Citizen America, in which the defeat of the global corporate elite becomes a point of national pride. We could, for example, demand that all nonresident real estate owners sell within 14 days or forfeit their holdings. This would do a lot to make housing more affordable in places like New York and San Francisco. We could ramp up research funding for renewable energy and not only end our dependence on foreign oil, but take leadership on climate change as well. (I realize that, right now, it looks like we’re going in the opposite direction.) This is going to be unappealing to the anarchist element of the left, but it will first be through governments that people most effectively take on the global corporate elite.

This variety of patriotism isn’t exclusionary. Sam Adams was not patriotic at the expense of other nations, and neither should we be. Local and national governments will have to work together in order to defeat two major adversaries in the future. One is the environmental damage wrought by climate change. The other is the global corporate elite.

Patriotism is not an assertion of superiority over other nations. Intellectually, we all know that we aren’t superior because of where we were born. Rather, it’s an admission of one’s own limitations. No one can fix the world. It’s too big of a job. But people can work together to fix their communities, then their cities, and then their countries.

Destinations and Lessons

Are there other possible fourth phases? Of course there are. Renter America seems to be the baseline disappointing turn of events, and Patriot America is a broad sketch of something that might be better.

We ought to learn from the three previous incarnations of this country before we build the fourth. What worked, and what didn’t?

The virtue of Citizen America was its insistence on rational government. We now need a rational economy. Universal basic income is a start, but we also need meaningful work for people, and there is plenty of work that needs to be done. Additionally, we ought to recognize kinship with people in other countries. Patriotism shouldn’t be pride in what is, because intrinsic national superiority doesn’t exist, and that idea has done far too much harm already. It should be pride in what one does to make one’s community (whether local, national, or global) better.

What Producer America got right, although it took a long time, was that it eventually put dignity into work. It recognized the human need for a productive role. Also, work in that time was not the psychological monoculture of today’s office work. People did a lot of different things. We need to learn from that, and get away from the culture in which people are shoehorned into bland roles that are often substantially below their levels of ability.

Finally, let’s talk about Consumer America. In the 1950s, most people believed that we’d have a ten-hour workweek by now, and that economic scarcity would be nonexistent or trivial. Yes, if you were unemployed, you might have to wait two months longer to take your vacation to the Moon. Well, we’ve failed. In the 1980s, we allowed bad leadership to come in. It wasn’t our political leadership that shat the bed, though. It was our corporate leadership. In order to get the next iteration of this country right, we first have to take stock of what previous generations got so wrong.

Just as the noblesse oblige national elite of the Kennedy Era learned, from the Gilded Age, that a vicious unequal society would burn them in the end (as it did, in the 1930s) we will need for the current global corporate elite to learn a hard lesson. We’ll have to replace them with something else. In order for that “something else” to be anything better, though, we have to study our past.

Phishing/Hacking Attempt

In April, I got an email about a CTO-level position. It was a personalized message. The person writing it knew who I was and my capabilities. Naturally, I checked it out. It never hurts to talk to people. As is typical, a résumé/CV was not enough. I had to use that company’s web-portal. Okay, why not. I have time, says the dog.

I didn’t hear back. I should have suspected something, given the lack of response. Now, everyone gets rejected, even people like me. (Especially people like me.) There’s nothing odd about getting turned down. That said, above the VP level, you get a personal response and a truthful explanation of why you didn’t get the job. Usually, the reason is nothing personal (it could be, “the other candidate has 20 years more experience”) and you move on. If you don’t hear anything, at my level, it’s fishy. Or, should I say, phishy?

It was a fake job portal. The company that this attacker purported to be was not looking for a CTO. To be clear, they had no involvement in this and were professional in every way.

A few weeks later, someone tried to access my account on multiple cloud services using the password I used (I create a new one for every job site) and hundreds of variations thereof. I got calls about this. (No one got into anything.) These attempts came from a reputable technology company in the San Francisco Bay Area. I know exactly who they are and what they were after. They’re probably pissed off that they weren’t able to get into it.

At this time, that is all I intend to say.

Crossing the Equator 6: Villains in Fantasy Versus Real Life

I open Farisa’s Courage with the heroine running for her life. Her memory is breaking down (a consequence of her magic, when used too far) and she’s confused, desperate, exhausted. In an unknown city, feet and legs caked in miles of trail mud, she bangs on a stranger’s door. She’s forgotten several years of her life. By the time she reaches (transient) safety, she doesn’t know where (or even who) she is. (She recovers, of course.) Meanwhile, the antagonist doesn’t get much stage time in the early chapters. That’s intentional. It’s also unusual, per fantasy genre conventions.

Many fantasy novels open with the Big Bad Antagonist doing something terrible. He destroys a village, or he tortures a child. Often, no reason is given; of course the bad guy would do something bad. The dragon just likes gold, though she never spends it. The sixteen-eyed beholder has to slurk out of its dungeon, eat a peasant child, and then slurk back, because if the heroes merely sought it out and killed it for treasure or “experience points”, then they’d be the villains.

In Farisa’s Courage, the first book of the Antipodes series, the main antagonist is a corporation, the Global Company. They’re bad, but like business organizations in the real world, they’re reactive and effete. They do more damage (early in the story arc, that is) through incompetence than by intention. I open Courage with asymmetry. The heroine is in danger, but the antagonist is comfortable (and unaware that it is anyone’s antagonist). That’s how good and evil work in the real world.

Fifty years before, the “Globbies” were a corporate police firm. That exists in the real world; they’re the infamous Pinkertons, who are still around. The Globbies also had a flair for witch hunting (which also still exists, even if witches don’t). When Farisa’s story opens, they control 70 percent of the known world’s economy. (It’s a steampunk dystopia where the Pinkertons won, and evolved into something worse. Something similar almost happened here.) They don’t take much of an interest in Farisa. They know that she exists, and that she’s a mage, but they also know that magic is unreliable and dangerous. They have been through forty years’ worth of failed attempts to harness it. So, they don’t think much of her. They’re only interested in her because she’s been accused of a crime that she didn’t commit (and, in fact, they know that she’s innocent of it).

Farisa doesn’t see world-fucking evil from them in the first 200 pages. The reader sees the Company’s low-level, self-protective evil, but nothing threatening the end of the world. That’s intentional. You didn’t see world-fucking evil from the Nazis until Kristallnacht, either. They’d been around for almost twenty years by then.

Epic fantasy is often Manichaean. Good and evil exist as diametrical opposites. In the first or second chapter, the reader often sees the Big Bad doing some horrible thing. It has to be shown early who the Big Bad is. In my experience, though, evil doesn’t reveal itself until it needs to do so. There’s a potent asymmetry between good and evil. Good must act, and evil can wait. Good is desperate to survive, like a candle in a hurricane. It will rescue a child from a house fire (and fire, though dangerous, is not even evil). Evil can use slow corruption, hiding and waiting. It’s usually done in by its own complacency and arrogance, but that takes time.

Epic fantasy often wants symmetry. It wants evil that is as desperate to do harm (and to kill the heroes) as the heroes are desperate to survive. It wants evil that can’t plop down on its haunches and wait. It must burn that village! It must abduct that princess! In my experience, that’s a rare kind of evil, and evil itself is not all that rare.

This might explain why our culture is fascinated by serial killers. They’re very rare, but they show us a refreshingly different kind of evil from what lurks in corporate boardrooms. The serial killer is intense and desperate. Why does Vic the Biter eat the faces of human children? Because he’s an insane fucker, that’s why. His desperation mirrors that of the good. He’s fighting for survival, because his mind is broken, and eating children’s faces is the only thing that gives him respite from his own demons. His vampire-like hunger drives him to make mistakes that render him easy enough to capture that the story can be told in a two-hour movie or a 90,000-word novel. If there must be evil in our world, that’s the kind we want: a kind that is as desperate for its own survival as is good.

The desperate, belligerent kind of evil exists, but it’s not the kind that’s running the world. The Davos elites view the rest of us with phlegmatic contempt– they prefer not to think of us at all– but not burning hatred. Yes, the Davos Man would rape a child if there were a billion dollars in it; but, outside of that laughable, contrived scenario, he’d rather go back to his hotel room and sleep off his drunk.

Not to take the metaphor too far, but this mirrors the asymmetry between capital and labor; the former can wait, the latter must eat. Now, I don’t wish to say that capital is evil or that labor is good. Neither is true. The parallels between their struggles, though, I find worth noting.

Labor and capital both perceive themselves as at war with entropy, but one conflict has higher stakes. Labor must consume two thousand kilocalories of chemical energy per capita, per day, or burn itself to death. Capital issues weak complaints about meager stock market returns, and the declining quality of private boarding schools, and too many brown people at the country clubs. Labor is a stroke of bad luck from dying on the streets. Capital is slightly perturbed by the notion that things aren’t as good as they used to be, or could be, or are for someone else. It’s the same with good and evil. Good lives in constant warfare with selfishness, stupidity, disengagement, petty and grand malevolence, and myriad other entropic forces. Evil? Well, it rarely recognizes itself as evil, to start. When it’s losing, it’s a chaotic force. When it’s winning, it thinks as little as possible. It too has its slight unsettlements, but it rarely feels a need to fight against the world for its own survival.

It’s unfashionable, in the postmodern world, to believe that good and evil exist. Some view them as relics, like ethnic gods, that simpletons cling to: anyone who believes in them, the story goes, isn’t enlightened enough to see the complexities of human power struggles from all angles. I don’t know whether gods exist, but good and evil do. One issue is that they’re viewed as compact entities or forces rather than patterns of behavior. As “alignments”, they don’t exist. There’s no unifying banner of “Good”, nor one of “Evil”. Yet we experience good and evil in daily life, from the small to the large. Is a convicted murderer an evil person? Not necessarily. Prima facie, there’s a lot of context that we don’t know. He could be mentally ill. He could be innocent, or have killed in self-defense. We can agree, though, that murder is usually an evil act.

Good people value what is good, though we’re slow to find perfect agreement. There are good people with bad ideas. There are good people who’ve been infected with evil ideas. Most of our so-called “founding fathers” were racist, and racism is without a doubt one of the most evil ideas that humans have ever concocted. That aside, some of those men were arguably good, even heroic.

Evil does not, in general, value evil. German and Japanese authoritarians fought together, but each regarded the other as racially inferior. (Stalin was pretty vicious as well, but fought on our side.) Corporate executives and child molesters despise each other; you don’t see them seated together at Evil Conventions, because those don’t exist. Good values good, but evil doesn’t value evil. Evil values and seeks strength, and a position of strength is one from which one can wait.

I have actually battled evil, and suffered for it. I wrote hundreds of thousands (if not millions) of words on how to survive corporate fascism. I have exposed union-busting, labor law violations, and shady practices of all kinds in Silicon Valley. It has been an edifying (if expensive) ride. I’ve probably mentioned that Evil has won some of its battles. It may yet win the war.

In this light, we have to understand it. We have to know how it works, what it values and what it doesn’t, and why it wins. It wins because it can. It wins because, often, nothing needs to happen for it to win.

Does the hungry evil of the vampire or serial killer exist? It does, but it’s rare. The more prosaic boardroom variety of evil is far more common. Often, its most dangerous advantage is also its most boring one: patience. If it wants to do so, it can sit in its castle, and wait, and hope that we fuck up before it dies of its own ennui.

Swamp Baseball

My warning meant nothing | You’re dancing in quicksand…

— Tool, “Swamp Song”, 1993

Swamp Baseball is like regular baseball, but with a few changes:

  • You play in a muddy bog. Outfielders can fall into quicksand. The “run” to each base can take 30 seconds. Swimming is allowed, but bare-eyed (no goggles!) only.
  • The ball is covered in mud and will spin and fly unpredictably. Every pitch has its own character. Instead of bats, you break off tree branches and use those.
  • Each inning, you have to remove leeches. Whichever team has fewer leeches gets an additional run. Lampreys count as four leeches each. (This does make the game notably higher-scoring than regular baseball.)
  • Dangerous mosquitos are shipped in, if not already present; therefore, you will probably die of malaria (and, thus, be kicked out of the game with nothing to show for it) before you are 40.

Who wants to play Swamp Baseball? I’m guessing that the answer is “No one”. Nor would most people want to watch it as anything more than a novelty. We like to see humans play the sport in a more appropriate habitat. There’s nothing wrong with swamps. They’re good for the world. They just aren’t where we do our best running– or pitching or fielding or spectating. If you want to see baseball played in top form, you’ll go to a ballpark rather than a malarial bog. It may be, in the abstract, more of an accomplishment to score a home run in Swamp Baseball, but who cares?

In the career sense, I’ve played a lot of Swamp Baseball. I’ve become an expert on the topic. I used to have the leading blog on the ins and outs of Swamp Baseball: how it’s played, why it exists, and how not to lose too much. I’ve fought actual fascists in corporate environments and had my share of runs and outs, wins and losses.

Here’s the problem: no one cares about Swamp Baseball. Why should they? It’s a depressing, muddy sport where even the winners get their blood sucked out by leeches and lampreys. It doesn’t inspire. No one sees the guy who slides into home plate for a run, only to get his face ripped off by an alligator, and says, “I want to be like him when I grow up!”

Technology can be a creative force, and programming can be an intellectually thrilling activity. Getting a complicated machine learning system to compile, run, and produce right answers might be more exciting than the crack of a bat (says a guy who has no hope of being any kind of professional athlete). Like writing and mathematics, it’s one of the Great Games. Victories are hard-won but often useful and sometimes even profitable.

Yet, most programmers are going to be playing their sport in the swamps. There won’t be literal mud pits, but legacy code that management refuses to budget the time to fix. There won’t be literal lampreys and leeches but there will be middle managers and project managers trying to get the team to do more with less– bloodsuckers of a different kind. Just as all swamps are different, all corporate obstructions are unique.

Here’s the problem. Swamp Baseball can be fun in a perverse way, but it would fail as a watchable sport because one’s success has more to do with the terrain than the players or teams. Runner falls into a mud pit? Whoops, too bad! Fielder faints due to blood loss, thanks to leeches? Looks like the other team’s getting a run. Real baseball has boring terrain and lets the players write the story. Swamp Baseball has interesting terrain but no sport or art. If the sport existed, it would just be artificially hobbled people failing at everything because they’re in the wrong habitat.

Corporate life is, likewise, all about the swamps. The success or failure of a person’s career has nothing to do with batting or running or fielding, but with whether that person trips over an alligator on the way to first base… or whether the shortstop collapses because the lampreys and leeches have exsanguinated him in time. Sometimes the terrain wrecks you, and sometimes it wrecks everyone else and leaves you the winner, but… in the end, who cares?

Swamp Baseball wouldn’t get zero viewers, of course. Some people enjoy comic relief, which in this case is a euphemism for schadenfreude. It wouldn’t be respectable to watch it, nor to play it, but some people would watch, and, for enough money, some would play. Corporate life is the same. Its myriad dysfunctions and self-contradictions make for lots of entertainment, often at another’s literal and severe expense, but it’s fundamentally lowbrow.

That’s why I don’t like to write about corporate software engineering (or “the tech industry”) anymore. And if I stay in technology (which I intend to do) then I want to play the real game.