Crossing the Equator 7: What Is Bad Writing?

Bad writing. I bring the topic up not to mock bad writing, because it’s rarely worth the time, and also because most of the sins of bad writing have also been committed by good writers, either when they were inexperienced or in quick first drafts. It’s useful to explore the topic, though. What is bad writing, and why does it exist, and why do so many people produce it? Even most intelligent people write more bad prose than good. Where does this come from?

Not (Necessarily) Bad Writing

Some tastes are arbitrary. Let’s take so-called “swear” words. Shit was once an unobjectionable term for feces; fuck, for copulation; and cunt, for the vulva. These words became objectionable because of the social classes and ethnicities of those who used them, centuries ago. Bloody is mildly profane in the UK, but laughable in the US. One of the worst German profanities translates as “pig-dog”, which would be insulting but not obscene in English.

Of course, sometimes profane words aren’t “bad words” at all. Sometimes, they’re excellent words. It depends on context.

In addition to these high-stakes word-choice issues, we have various shibboleths. Most people think that this sentence is grammatically incorrect.

There’s three people at the door.

Is it? Well, Shakespeare would have said no. If “is” must agree with the pronoun “there”, it checks out. “There”, in this context, is shorthand for “What is there”, which is always singular. “To be” can cross from singular to plural and there’s no consistent agreement on which side wins. Usually, it’s the prior/left side with which the verb must agree:

I found out that “she” was actually three people working shifts.

So, “there is three people at the door” is, although non-standard, defensible.

I grew up in Central Pennsylvania, so I frequently catch myself saying “Are you coming with?” instead of “Are you coming with me?” or “Are you coming along?” Prepositions are weird animals that make up their own rules and don’t transfer well across languages. Why is along better than with? It’s arbitrary.

Another Central Pennsylvania usage that is frequently called wrong: “needs fixed” as opposed to “needs to be fixed”. How bad is it, really? It saves two words and communicates the same idea. On that note, let’s talk about a word-saving usage that is without controversy but was probably considered wrong at one point: the modal verb used to.

I used to cook.

This is a fine, grammatically correct sentence. Everyone knows what it means. But, it probably made grammarians twitch at one point. It looks colloquial, imprecise, and incorrect, because used to has nothing to do with used or to.

If I had to guess, the used to modal verb came from the wordier “I am used to”, where used is a past participle and “am” is the archaic device where “to be” instead of “to have” is used for the tense (e.g. “I am come”, “she is gone”, “he is dead”; two of those live on as adjectives and are rarely thought of as participles). In Shakespeare’s time, you would say “I am born in 1983” rather than “I was born”. This still lives on in some of the Romance languages. I’d imagine that “I used to cook” is a shortening of “I have been used [for] cooking”. It’s politely servile in a way that, like “my lord”, is now anachronistic.

For another interesting note, many people believe that “will and shall” is a dead distinction. It’s not. It lives on, but with less rigidity. The contraction forms (“I’ll”, “he’ll”) are descendants of shall most of the time. People still say “I will” when they mean (according to the older rules) will and use the contraction when they mean shall. “I will go to the store tomorrow.” “If they can’t cure me, I’ll die.” There are exceptions, the most notable one being when people de-contract for imperative emphasis: “you will show up on time”. The commanding shall tends to be de-contracted to use will, while the matter-of-fact neutral shall (which was far more common than the commanding usage) is left contracted (“I’ll be at home tomorrow”).

Don’t try to argue that contractions are incorrect either. That’s bullshit. Shakespeare used more than we do today. Contractions are excellent.

In any case, when I talk about shitty writing, I’m not talking about “different than” or “try and” or even “towards”. Even “irregardless” is embarrassing, but it doesn’t really block communication or bore the reader or spawn undesirable resentments. It has two extra letters and it’s ugly, but people know what’s meant. I couldn’t care less about it. (Yes, that was intentional.)

For extra fun, let’s take “Where are you at?” Some people hate this. In the right place, it’s excellent. The at is superfluous, but it’s a jab. It isn’t uneducated; it’s exasperated. It’s jarring, but it’s supposed to be. There’s impatience in that usage.

Dangerous Good Writing and Rhetoric

There’s an amusing sub-category of writing that I’d like to talk about. There are places where good writing is more dangerous than shitty writing. Corporate America is one such place. For one thing, you might still get in trouble for using a contraction in a corporate memo. You don’t want a human touch; you want formality and stiffness.

It has come to our attention that you have been viewing inappropriate material during working hours. Under these circumstances, we cannot continue your employment.

Change “cannot” to “can’t”, and you add a slight bit of human touch. In this firing letter, though, that’s exactly what shouldn’t be there. The adverse decision must be presented as impersonal, civilized, and inevitable. You say “can’t” when you want to come across as a vulnerable human; you say “cannot” when you want to suggest an objective limitation that is out of your control.

One of the biggest differences between corporate writing and real writing is in the role of passive versus active voice. English teachers hate passive voice and strike it out with red ink. They’re right, if they’re teaching people to be writers. Novels are slowed the fuck down by passive voice. The ball being thrown by John puts focus in the wrong place, unless the narrator is a cat, because the cat’s eyes are glued to that ball. (You thought they were in the sockets, didn’t you?) Yet, in business writing, the passive voice is often mandatory. Use active voice, especially around the pronoun I, and you sound like you’re trying to be an impatient executive. If you’re not an executive, you can get in trouble for that.

Shitty writing thrives in the corporate world, and it’ll never go away. Executives can use active voice, but most people are not executives and will need to acquire bad habits if they want to be employable.

Let’s talk about rhetoric. This is such an abused word today. So many people complain about “politicians and their rhetoric” with a note of vomit on that last word. What is rhetoric? It can be quite beautiful. I use it all over the place, and most people do, often without realizing it. Rhetoric is the art of designed speech or writing. Thought was put into it, to make it more clear, persuasive, or invigorating. Marc Antony keeps coming back to “Brutus is an honorable man”, assassinating his character with the repetition. Parallelism (“see the sights, hear the sounds, smell the odors”) is rhetoric. It can be odious, or it can work very well. Some of its rules are odd but work, such as the principle of threes (tricolon). “Friends, Romans, countrymen” is far more effective with three synonyms than two or four. Why? I don’t know. There are many plausible theories, but no one really knows what is magical about three but not four.

Rhetoric has an aesthetic purpose and a voice. You can inflate yourself, or show humility, or form a sense of commonality. (“Who among us has not sinned?”) You can use the imperative mood liberally. Sometimes, you break rules or even use multiple layers of meaning. “Now is the winter of our discontent” is a great example. Let’s look at Richard III’s original speech:

Now is the winter of our discontent
Made glorious summer by this sun of York;
And all the clouds that lour’d upon our house
In the deep bosom of the ocean buried.

Contrary to how the opening line is remembered, Richard wasn’t declaring it to be the winter of discontent. In fact, he was saying that the sun/son of York, Edward IV, had ended it. Of course, Richard resented his brother Edward. So, he cleverly speaks well of Edward in a way that’s amenable to being taken out of context.

To see how hard this is to pull off, note what changes with the truncation. In the original form, he’s using “Now” to justify a word-order inversion of the kind that follows a fronted condition or qualifier, e.g. “Only after eating your vegetables may you have dessert.” He’s therefore saying that the winter of discontent is over. But, truncate it at the first line, and the function of “is” changes. In the full passage, “Now” modifies “is” to suggest progression (rather than equality, the usual function of to be verbs): the winter of discontent is over, and has been made glorious summer. (This also exists in programming: X = X + 1 is invalid as mathematics, but it’s a valid assignment statement under a progressive interpretation of “=”.) After the truncation, “is” becomes the regular equality statement and “Now” becomes not a modifier but an operand. He’s equating “Now” to the “winter of our discontent”, and the meaning becomes the opposite of what he’s formally saying. It’s brilliant.
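
For the programmers in the audience, here is that progressive “=” as a minimal sketch in Python. The snippet is purely illustrative (the variable and values are arbitrary, not anything from the passage); the point is only that assignment advances a value, while equality asserts something that may simply be false.

# "=" as progression (assignment), not as a claim of mathematical equality.
x = 3
x = x + 1          # nonsense as an equation, but fine as "take x and advance it by one"
print(x)           # 4
# Equality is a separate operator, and the analogous claim is simply false:
print(3 == 3 + 1)  # False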

There’s a beauty to rhetoric, but it injects a personal voice and a persuasive desire. It reminds the audience that there is a speaker. This is also an area where many fiction writers fail. Should a novelist use rhetoric? Yes. But, in general, it should be that of the character. (I won’t cover omniscient POV here.) Otherwise, it becomes author intrusion. Impressing readers with cumbersome locutions went out of style almost two centuries ago. It can still pass, but only when narrating as a certain kind of character, one so tedious in real life that it takes exceptional work to pull it off. Ignatius Reilly comes to mind.

In corporate prose, the objective is not a specific voice but no voice. The machine is supposed to look like a machine. Why? Because it’s not a machine. Every decision that “it” makes has human motivations behind it, but often those are socially unacceptable, and the people making those decisions are often self-serving scumbags. Therefore, corporations have to create an objective, mechanical voice that hides their true intentions. “I’m firing half of you and putting the budget into my ‘performance’ bonus” will get an executive’s car set on fire. Instead, it’s “Due to difficult business conditions, we have been forced into an uncomfortable blah blah bullshit blah.”

It’s a fun experiment to switch up and use active voice in business communication. I enjoy it. However, I’m also insane. You’ll be surprised how many people find that they “just don’t like him” (or her) where “him” (or her) equals you.

There are times, like reversals of the magnetic poles, when these expectations invert. For example, Donald Trump used a limited vocabulary and coarse style, presenting his more prepared, polished rival as “establishment” and therefore phony. She wasn’t. Her speaking style wasn’t corporate. It was precise, as you’d expect from a politics wonk. Trump managed to turn her style, which would usually be more authoritative and therefore superior, into a negative… and a lot of people “just didn’t like her”. (Okay, 90 percent of “just don’t like her” was sexism, just as the corporate world uses “culture fit” to justify its own sexism and racism; but the other 10 percent was an unforeseen switch in the rhetorical expectations of politicians.) In the corporate world, there are times of crisis in which active voice becomes preferable. In times of acute crisis, no one wants to hear “It has come to my attention”.

Corporate writing is also deliberately slow. This is because 95 percent of corporate writing exists to tell people why actions adverse to their interests have been taken. Good news is delivered verbally. Bad news is delivered in writing using templates of boring, cover-your-ass prose that unfolds slowly. Take this generic form rejection letter:

Dear Candidate,

Thank you for your interest in the position of Associate Hitman for the Global Company. We had a number of highly qualified candidates for this opening, and unfortunately we will not be moving forward with your application at this time. We wish you the best of luck with your job search.

Aside from the obvious bit of information (“no”), there is no further information content, but the slow rolling is an expected bit of politeness. The passive voice encourages the recipient (*cough* rejectee) not to take the news personally, not that it matters if he does.

Shitty Writing

What is shitty writing and where does it come from? This is hard to answer.

Let’s talk about weeds. See, weeds don’t really exist. “Weed” isn’t a botanical term. It’s a word that humans made up for plants they don’t like, or that are in the wrong part of the garden. It’s the same with writing and speech. Passive voice is expected in corporate communication. Never say “do” when you can say “deliver”. Always add authority to what you’re saying with the prefix, “At the end of the day”. That’s shitty writing, though. Let’s be honest about it. Outside of the intentionally soulless context of office writing, no one with a soul uses “deliver” intransitively unless talking about food. Pizza Hut delivers. If you’re a programmer, you write code. If you “deliver solutions”, then fuck off and die.

Much shitty writing comes from the mismatch of styles. Office writing should be adverb-heavy and verbose and, most importantly, non-committal enough to allow exits and bland enough to be safe even when read half-heartedly and taken (either by negligence or malice) out of context. For a contrast, fiction should be punchy. Characters should do things. It should rain. John should not “come to a point at which life processes cannot continue”; he should die. Different styles. Fragments OK. Adverbs are acceptable in fiction when they add precision but very bad when used for emphasis, insofar as they diminish authority, unless of course the author wants a less reliable narrator. If I sound inconsistent and full of myself, that’s because there are no rules. But, there are styles. Some work and some don’t.

The simplest kinds of bad writing (grammar errors, misspelled words) will tank an office memo or a novel, but for entirely different reasons. In the office memo, they add character that is not wanted. They suggest that an errant human, rather than the mechanical beast that is the company, wrote the memo. For the novel, readers want a human writer. There, the issue is that bad grammar slows the reader down. Not by much, I’ll point out. Reading is about 20 percent slower for the worst kinds of misspelled or grammatically awful writing as opposed to crisp, good writing. It might feel slower, in the same way that driving 50 mph on a 70 mph road feels like crawling, but it’s usually 10 to 20 percent. Now, in an office memo, that 20 percent difference wouldn’t matter, because office writing is supposed to be slow, vapid, and imperious with the reader’s time. It can kill a novel, though. If you write a 100,000-word novel in 120,000 words, you’re dead unless you’re an exceptional belletrist. Agents and editors have a hair-trigger sense for wasted words, and for good reason: wasted words suggest other weaknesses in the writing (or story) that are more subtle and require a long-form read (which agents don’t have time for) to pick out. Small differences, information-theoretic margins of a few percent, make the difference between best-seller and perma-slush. If you’re a novelist, you want to have few grammatical errors because they slow the reader down with unimportant details… not because you’re trying to achieve a mechanical aesthetic. We get to the same general rule (“use good grammar most of the time”) along two very different paths.

Similarities between those two styles end there, though. Active voice or passive? Active for fiction, passive for office. How about rhetorical questions? Okay for a novel (suggestive of inner dialogue) but inadmissible in office prose. When can you break the rules? Even stiff business writing (which invented the non-word synergize) breaks rules of good writing all the time, but you have to know exactly which rules you can break.

A good novel convinces the reader to suspend disbelief and invest her time and emotional energy in a 100,000-word account of events that never happened. The promise is that this story, technically a lie, will tell a deeper truth than many of our actual experiences. It’s hard to convince a reader of one’s authorial stature; there are many who try, but don’t merit it. Rhetoric is a big part of that.

Business writing is anti-rhetorical. In part, it wants voicelessness because the American business environment is so militantly anti-intellectual, and voice is something that most businessmen can’t hack. (So, get it out of here! Burn it with fire!) Corporate writing is bland because bland writing doesn’t make middling minds insecure. The fiction writer must convince a reader to read the next thousand words of prose. She must motivate her readers to continue with the difficult activity of staring at patterns made with chemicals on decaying plant matter. Business writing, for a contrast, tries to remove convincing and the reader and the writer; everything must dissolve, and this document must be accepted as objective truth, freshly printed by the machine, with nothing that suggests voice or character because those introduce the subjective and intimidate the less intelligent.

Rhetoric, done well, can be beautiful. Almost every well-remembered line of prose or poetry had some rhetorical device, perhaps used subconsciously, behind it. Hemingway’s deliberate use of short, bare sentences (the man was not limited, and wrote some great long ones, too) is what rhetoricians call parataxis. It worked very well for him. Is all rhetoric good, though? No. In fact, much of the shitty writing that comes from competent grammarians and orthographers, who’ve mastered the basics but still inflict low-quality prose on us all, is… badly-deployed rhetoric.

Rhetoric tends to have music to it, and music is repetitive. Repetition can be obnoxious. Or, it can be memorable. Rhyming, in poetry and song, probably became fashionable for purely practical reasons: it made it easier for actors to remember their lines precisely. Rhyme and rhetoric have the same effect on readers. They make words and phrases memorable and quotable. That can work very well, or it can fail.

Let’s explore diacope. What’s diacope? It’s when you use a word or phrase twice, with an intervening element.

 It is what it is.

“Love,” she said. “Love.”

Tom only cares about what is good for Tom.

“You got me! Oh, you better believe you got me.”

“Bond. James Bond.”

There’s a “rule” of grammar or style, not really a rule, about never repeating words. (See how I repeated “rule”, and it worked?) Most languages can’t afford this, but English has a ridiculous number of words and so a lot of people go to ridiculous lengths to avoid repetition. (That repetition of “ridiculous” didn’t work quite as well.) This aversion to repeated words can lead to actual errors, e.g. “amount” as a synonym for “number”, which it of course is not. The truth is: repeating words can be very powerful. Or, it can be clunky. It depends on what word is repeated and how it is used. It draws emphasis. You actually can start twenty sentences in a row with “I”. You should do that if you want to write a self-centered character in first-person. You can also get away with it if you’re telling a single-person, direct story. You shouldn’t do that if you don’t know what you’re doing, though.

If you say, “She had a blue coat, a blue hat, and blue shoes”, you are drawing attention to “blue”. This may or may not be (fun tautology, there) what you want. It depends on context. Let’s say that it’s not what you want, and that this emphasis of blue is undesirable. Changing her hat to “azure” and shoes to “cerulean” isn’t going to fix the problem. It’ll make it worse.

Rhetoric is memorable. It’s catchy. It sticks out and can make a line memorable. Sometimes, it’s great writing. And, sometimes, it’s absolute shite. Bad writers often don’t know the difference. It can be hard to tell, because the determination of when a rhetorical device works and when it’s jarring or ugly is usually contextual. In fiction, it can depend on the character who is narrating. Some people have cliché minds and would totally narrate like this:

At the end of the day, Erika just wasn’t delivering. It was time to give her the axe. He would have to speak with the team about it on Monday, after the dust had settled. Next week, the team would need to fire on all cylinders.

If your POV character is a soulless corporate drone destined to plateau in middle management, that’s great writing. If you want the reader not to wish for your POV character to die in a copier fire, then it’s poor writing.

For this reason, it’s very hard to come up with snippets of bad writing. For anything that I can point at and say, “That’s bad writing”, there is a context in which it would be good writing. It takes a few hundred words to really know, and yet there’s a point where it becomes obvious. As in Jacobellis v. Ohio, I know it when I see it. Sometimes, the sin is author intrusion: a writer trying too hard to push a message or just trying too hard to be clever. Sometimes, it’s an introduction of one style or form into another that doesn’t work. It could be too many styles (flipping back and forth between business cruft writing and journalistic prose) or it could be the lack of one.

I think rhetoric is often at the core of it, though. Rhetoric accentuates. It adds a musical dimension. When used well, it’s powerful. When used sloppily, it’s terrible. Most people aren’t aware when they’re using it. That, I think, is the problem.

Overfitting

I’m going to bring in a concept from machine learning: overfitting. Machine learning, broadly speaking, is the attempt to simulate decisions considered intelligent (that is, those that traditionally required an expensive carbon-based organism instead of a machine to perform them), such as image recognition, by turning them into hard math problems that, while impossible without data (or, as we say, a priori), become tractable given massive data sets, a few well-studied algorithms from operations research, and time. Explicitly programming a computer to recognize hand-written characters would be so time-consuming and error-prone (there are about a hundred thousand characters in the Unicode standard) that it wouldn’t be worth doing; it’s better to train a machine to learn from millions of labeled examples.

Of course, the machine isn’t actually intelligent. It’s just doing a very complex rote computation involving lots of data, and it can easily infer things that aren’t true. Incidental artifacts can be incorporated into the model. Let’s say that an agent is being trained to distinguish men from women based on facial photographs, but that the men’s and women’s pictures are taken in separate rooms with different lighting. Then, the machine might learn that men have brighter faces. It isn’t true, but the machine doesn’t know that. It’s very easy to build a machine learning system that learns everything about its training set, but does so by incorporating incidental artifacts of the data that don’t represent the real world, and therefore performs poorly on new cases. That’s called overfitting.
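
If you want to watch overfitting happen, here is a minimal sketch in Python, using a toy setup of my own invention (a noisy sine curve and NumPy’s polynomial fitting; nothing here comes from a real system). The high-degree fit nails the data it was trained on and does worse on fresh data drawn from the same source.

import numpy as np
from numpy.polynomial import Polynomial

rng = np.random.default_rng(0)

def noisy_samples(n):
    # A toy "world": a sine curve plus measurement noise.
    x = np.linspace(0.0, 1.0, n)
    return x, np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=n)

x_train, y_train = noisy_samples(20)   # what the model gets to see
x_test, y_test = noisy_samples(20)     # fresh data it never saw

for degree in (1, 3, 12):
    model = Polynomial.fit(x_train, y_train, degree)   # fit on the training set only
    train_mse = np.mean((model(x_train) - y_train) ** 2)
    test_mse = np.mean((model(x_test) - y_test) ** 2)
    print(f"degree {degree:2d}: train error {train_mse:.3f}, test error {test_mse:.3f}")

# The degree-12 fit chases the noise in its particular sample (tiny training error)
# and does worse on new data: it has learned incidental artifacts of the sample
# rather than the underlying signal. That's overfitting.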

How does it apply to writing? Well, when we write, we draw on what stuck with us as readers. Those lines tend to be rhetorical. Behind most memorable lines is a rhetorical device. If these devices are taken into a context where they don’t belong, they fail. If they’re overused, they’re just clichés, even if they worked when originally deployed. They’re also hard to modify without breaking. Let me give a famous example, from A Tale of Two Cities.

It was the best of times, it was the worst of times

This is a great opening line. You can’t use it, because it’s been done. Now, let me just show how sensitive that line is to something that most of us don’t think about: inflection.

Let’s assume that the language English’ (pronounced “English-Prime”) is exactly like ours but with the words “best” and “worst” swapped in meaning. Nothing else changes. Now, in English’, could you start off with this? It would mean the same thing.

It was the worst of times, it was the best of times (English’)

I would say no. Here’s why. Even though “worst” in English’ means “best” in our English, you’re now inflecting downward, because that’s how the line is read. “It was the woooorst.” Bass. Hear those vibrations in “worst”? Then, you have “it was the best of times”, with the treble of “best”, but in a language where “best” is negative. What worked as an expository note on contradictory indicators at a time in history is, instead, made dissonant and sarcastic.

Actually, in English’, the words “best” and “worst” would be likely to fade for the same reason that “pulchritudinous” (a not-beautiful word meaning “beautiful”) has become uncommon.

In English’, the same exact opening line wouldn’t work and it has nothing to do with the words or their meanings, but with how we say them. What makes that line work is an artifact of English, in the same way that “veni vidi vici” exploits a Latin artifact for alliteration, but becomes the clunky tricolon “I came, I saw, I conquered” in English.

Bad writing, then, I would argue, is a form of overfitting. It’s when one takes an example of good writing, learns the rhetorical device, but ignores the artifacts that make it work. This is an error that we’ve all made. We take what’s memorable and don’t fully know why (when we’re inexperienced or immature and still figuring things out) and misuse it. The result is rhetoric out of place, often deployed without cognizance.

In my experience, as one who wrote a few million words of it before I wrote anything good, bad writing tends to be either inscrutable or too-obvious in its intentions. The obvious cases are the trying-too-hard examples. If someone goes into hard-core hypotaxis and drops 265 words to describe a character waking up and having breakfast, that’s archaic because people don’t write like that anymore. It may have been impressive in a time when books were so expensive and rare that you read every one you got your hands on, but in 2017, the reader feels that her time was wasted, and she goes off and starts something else. The inscrutable often comes from imprecision. A rhetorical device goes off, but it’s not clear whether it was meant to be there, or whether it planted itself via memetic infection and writer overfitting. Or, to be less pretentious about the whole thing, it’s “Did she mean to repeat that word, or was she in a loop?” I wrote a short story in high school where I used the word auspicious seven times in 2,900 words, and used ostensibly as a “smarter” synonym for obviously. Yuck.

On that, misuse of a “big” or “educated” word is just unforgivably terrible. It’s the penultimate sin of writing.

America’s 4th Phase

The United States, I would argue, has had three distinct phases: Citizen America, Producer America, and Consumer America. We’re heading into an unknown fourth one. In this light, it’s useful to understand the assets and drawbacks of each of the previous ones, as well as why each one faded and gave way to its successor.

Conveniently for generational theories like the Strauss-Howe model, each seemed to live for about ninety years. Of course, none of these had precise start or end dates, and they blur together at the edges. Citizen America didn’t “die”, but at a certain point in our history, we began to identify more as workers (producers) than as statesmen (citizens). Likewise, by mid-century we identified more as consumers than as workers, because we got wealthier.

Citizen America (ca. 1750 – 1845)

European philosophers like Rousseau, Voltaire, Locke and Hume argued for rational government. We should be governed, they argued, by laws rather than charismatic or religiously-ordained figures. The American Revolution was one attempt to achieve rational government; the French Revolution was another. Neither of these were perfect, but the attempts inspired a new attitude toward public service and government.

We had to decide, after the American Revolution, what kind of country we wanted to be. Hamilton had one vision; Jefferson had a different one. The Industrial Revolution was in its early phases, while slavery and western expansion became sources of conflict. Tensions grew between the established coastal rich (who started to perceive themselves as a new nobility) and the poorer people in the western hills. Immigration was a major point of contention. While there was nearly universal agreement on political equality for those deemed to be citizens, there was no agreement on who ought to be a citizen. Land-owners only, or all free people? Should slavery even be continued? Jefferson, most flagrantly, said that “all men are created equal” while fighting to retain ownership of slaves.

Citizen America allowed modern capitalism to flourish, but its culture was pre-capitalistic. Inspired by the Greeks and Romans, it held that public life was the highest virtue. For an aside, the insult idiot comes from the Greeks: it meant “private person” and referred to one whose concerns were solely commercial or parochial. Most philosophers and public figures, in the 18th century, believed that a person whose interests were solely mercantile deserved to live with the lower classes, no matter how rich he became. Poets and philosophers and statesmen, they felt, ought to outrank men of commerce. That is one thing that was good about that time: there was an esteem for intellectuals that has largely disappeared.

One of Citizen America’s fatal flaws was that most people couldn’t participate. Slaves, for example, were treated as non-citizens within their own country. Free blacks didn’t always fare better. Women couldn’t vote in most states. Jefferson’s vision of the agrarian farmer-intellectual, reciting Cicero as he tilled the fields, turned out not to be the most practical vision. Andrew Jackson brought forth an ugliness in our national character that was truer to the reality of the common working person.

The high point of Citizen America was around 1800, and the decline was obvious by the 1830s. Then came the Mexican War, the atrocious Dred Scott decision, and the Civil War. By this point, we were well into Producer America. The role of labor, and the social position of those who performed it, became the central question of our society. We still had foundational questions about the country, but they were largely tied to labor and the importance of those who did it.

Producer America (ca. 1845 – 1940)

The Industrial Revolution came into full swing. Technology enabled people to work more. While farmers had periods of toil and others of rest, factory workers could suffer 300-day working years and 16-hour days, thanks to artificial lighting. High immigration made for an overflowing pool of cheap wage labor. The state offered no checks against this. Smart workers realized that it was to their advantage to organize, although the legal status of unions wasn’t well-defined. It took a lot of fighting to get official legal recognition of the mere concept. Meanwhile, business corporations used violence to prevent labor from asserting itself. This was the era of the Pinkertons and the Triangle Shirtwaist Fire.

We went full-on into the Gilded Age, with the infamous political corruption and financial instability, bringing on the Long Depression (1873-98) and culminating in the Great Depression (1929-39). This was also a time of high ethnic strife: Northern Europeans versus Southern Europeans, natives versus immigrants, freed blacks versus working-class whites. History tells us that a corrupt upper class will often not find it difficult to divide the working people against each other. We saw a lot of that as the working classes fought for bare survival in tenement slums.

Still, there’s a nostalgia that people have toward Producer America. Like every age, it had its virtues. It’s the era of the Wild West and of steampunk, when people pickled their own vegetables and carved their own ice. It was easier, if one had the means, to enter business and stay there. If you were a middle-class male, you could get a job in business by asking for one, and become a full-fledged businessman after seven to ten years of clerking. Moreover, the consolidation of corporate power had only started. There were, for a fact that surprises most people, more American car companies in 1915 than there are today.

There was a maker culture much stronger than what exists now, but much of this was by necessity. One had to be skilled at repairing mechanical devices, or one would not have them, because they were expensive and most people were (by today’s standard) very poor. People in the northern states put trust in their neighbors, because of the severe winters. (One sees this today in Midwestern politeness; one could not afford to be a jerk in a challenging climate with 19th-century technology.) On the whole, Producer America was a difficult place to live, but there was plenty of work to be done. It kept people active.

In Producer America, people identified with their work and, increasingly, their social class. Laborers invented unions and white-collar skilled workers invented professional associations (which are unions by another name). From dress codes to punch clocks, most of our work culture was invented in this time.

Let’s talk about the Election of 2016. I did not vote for Donald Trump. I, however, recognized early on that he was consistently underestimated, and was in fact running an intelligent (if offensive and distasteful) campaign. I found myself repeatedly arguing that his slogan, “Make America Great Again”, was brilliant. No, it wasn’t dog-whistle racism that made the motto resonate. (There certainly were racists among his supporters, but racism wasn’t the only factor.) In fact, great was not the operative word, but make, coupled with the imperative mood. Trump’s subconscious promise was of a return to a time when people made tangible things and had jobs that mattered. Will he deliver on this promise? Can he? I have my doubts. Do we even want to get into coal mining again? Of course not. That doesn’t matter. What most people missed was that “Make America Great Again” wasn’t about racist or sexist nostalgia, but rather a deep longing to return to a time when human labor had esteem because it delivered tangible value. The fact that this required strong collective bargaining seems to have been lost on most of today’s right-wing populists.

Producer America was poor, beset by political corruption, and financially brittle. We had a quarter-century-long depression at the end of the 19th century. We had frequent financial panics, much worse than those that exist today. Banks often failed, zeroing savings accounts. A typical household earned less than $10,000 per year in today’s money. Let’s not romanticize this period of our history. People identified with their roles as workers, and with production, in large part because they were so poor. One’s job was the only source of income, esteem, and hope for a person, and often a meager source for all of those.

The system started to break down in the 1920s. The Great Depression wasn’t, in my view, caused by the Oct. 29, 1929 stock market crash. We had a worse one in 1987 and it didn’t even cause a mild recession. The 1929 crash was a symptom of something that had been building for some time. What tanked the economy, in the late 1920s, was ill-managed prosperity. By 1920, we were very good at making food. So, prices dropped. Seems like a good thing, right? More food and cheaper. Remember, of course, that a large proportion of Americans were directly involved in food production. By 1925, we had endemic rural poverty. Farmers who couldn’t afford the new technologies died out. Towns that served these farmers fell apart, too. By 1927, we were seeing weakness in industry. If farmers went out of business, the market for tractors went with them. Weakness in heavy industry was clear. 1929 wasn’t when the Depression started, but when it hit the cities and the richest people and it was recognized as a Depression. Producer America had broken down.

In that time, there was a widespread belief that poverty was a sign of personal moral failure. It was a bitter medicine that might impel a person to work harder, stop drinking, or be more frugal. Modern psychology tells us the opposite, but at the time, the Horatio Alger narrative and so-called Protestant work ethic dominated. What happened in the 1920s was that poverty spread out of control. It wasn’t the fault of the rural innkeeper that he had no business; the community that he served had no money. It took massive government intervention, catalyzed by an overseas war, to bring the economy back from its own wreckage.

When we re-emerged, we found ourselves in an era of higher complexity. We found ourselves reliant on governmental machinery designed by people with doctorate degrees in economics and operations research. Production of most goods had become too complicated for individuals to participate in: airplanes and computers require massive infrastructure and human capital. People could (and still do, in 2017) build their own motorcycles and cars, but they don’t stand a chance of selling them. There are far more stable jobs repairing the machines that large corporations make than there are in direct competition with them.

In the U.S., the most controversial change to follow from Producer America’s insolvency was the expanded role of the federal government. We needed it in the 1930s to dampen the damage done by runaway capitalism, and in the 1940s to fight the Second World War. It’s important and necessary and we rely on it, but a lot of people remain uncomfortable about this bare fact. I’d bet that 90 percent of people, including many who complain about “big government”, like the services that government provides, but some people wish they could have late-life medical care and decent roads through other means. (I don’t consider this practical, but I’m not them.) To quote the Tea Party protesters, “keep your government hands off my Medicare”. The age of perfect self-reliance never existed. In fact, the supposedly rugged cowboys relied on the U.S. Government (which displaced the native population) quite heavily. By 1940, though, whatever there was of it had clearly and truly ended.

The upshot of this upheaval is that it worked well. We built the first society with a large middle class. We had rapid economic growth and technological advancement. A truck driver who lived in rural Michigan in 1950 lived better than a European viscount in 1910.

Consumer America (ca. 1940 – 20??)

The AMC series, Mad Men, showed the birth of Consumer America, for good and bad. We generated enough wealth that people could work less and spend more. People began to identify with their purchases more than their jobs.

At the same time, we started seeing a problem with “jobs”. They became somewhat of a mess, because we started to suspect either that our working lives were suboptimal, or that we would be deprived of said jobs as soon as it were expedient. Both of these suspicions, felt acutely by individuals and dully by society, turned out to be right.

A fundamental problem with working for money follows. If your work has objective, legible value, someone will out-compete you at a cheaper price. Even if the low-price competition is unsustainable (dumping) it does not matter. The naive young person who burns out will be replaced, and so will the impoverished country that becomes less-impoverished as work moves to it, but there will always be one on offer, somewhere, to the employer. On the other hand, if the work is intangible (which is not to say that it’s not valuable) then one is reliant on a matrix of cultural, social, and generational support, skills, and infrastructure. What does it take to get paid for intangible work? Sales. Most people do not enjoy selling. In fact, they hate it, especially when it is their own work they must sell. Most people would rather take standard office jobs for reliable mediocre pay than put up with the constant humiliation, volatility, interpersonal rejection, and sheer chaos of having to sell themselves on a day-by-day basis.

We learned in the 1930s not to hang one’s income on the price of a commodity. This is especially true now, as commodities become cheaper. Rather, a worker survives by making his work intangible. The selling point of a college degree was that its economic value was independent of fluctuations in commodity prices. Oil prices might drop, companies might go bankrupt and zero their stock, but that college degree would never lose value. (Ha.) Management became the most coveted job, and it’s easy to see why. In commodity labor, it’s obvious if someone is bad at the job. If one person drills 20 holes per hour and another drills 15, the latter will be fired first. With management, the people who know if a manager is bad cannot say so, for fear of being fired themselves. The manager can always fall back on superior educational pedigree and higher social position. One-on-one, he has higher credibility and can use this to amass even more credibility. Eventually, we reached a state where the major leagues of management, called “executives”, not only take extreme salaries, but can transfer easily from one part of the economy to another. Getting fired, for an executive, is a paid vacation and a better next job. Sales and especially management have gained ground, and labor has lost it.

Under Consumer America, we became a society where most people go to work and don’t really do anything. The machines make stuff and other people called customers buy it, and the corporation functions as a self-reinforcing eddy driven by inertial factors like brand reputation and convenience.

In fact, we do a lot more making in 2017 than we did in our supposed high era of manufacturing. We’ve even become quite good at it, due to technology. We make better things. If one includes hobbyists, we probably make more creative things. The problem is that humans who actually make things, for commerce, face imminent loss of income at every moment. Most people can’t stand the stress. At some point, they’d rather give up on their dreams and become executives whose “products” are meetings and bad ideas.

In Consumer America, people seek social status through consumption, whether of college degrees or clothing or housing in fashionable places, because that’s how one gets a reliable income. Production is too dangerous a route to sustainable income, because one can always be outcompeted. One must, instead, demonstrate something that looks like superior taste, culture, or intelligence, and that is done through consumption.

The highest-ranking people in our societies are not elite producers, but elite consumers. The polished businessman suggests effortlessness in everything he does. That’s his charm. He wears a thousand-dollar suit that looks like the fabric has never been folded. He sells the dream that if others just follow his ethereal “vision”, they could also be entitled to the ultra-consumer life that he enjoys.

The most valued trait in our culture is called “celebrity”, which is a preternatural ability to consume attention. We’ve given up on the ability to evaluate what people produce, so we use their consumption as a proxy. We conflate price with value.

Where might this lead?

Breakdown

Labor seems to be at the center of each phase’s inevitable breakdown.

With Citizen America, the fatal flaw was slavery. Hamilton and Adams predicted that slavery would destroy the U.S., and it nearly did. Whatever one’s gripes may be with industrial capitalism, it was an improvement over five millennia of humans using violence to force unpaid labor out of other people.

Producer America, to a large extent, couldn’t handle its own success. There are plenty of bad things to say about industrial capitalism, but the fact is… it works. It would have seemed unfathomable that prosperity in agriculture would lead to a crippled economy and (overseas) to authoritarianism and war. Yet it did exactly that.

Consumer America seems to be headed down a familiar path. What happened to food prices in the 1920s is happening to all human labor. The “sharing economy” is a reinvention of what the early 20th century called “hobos”: itinerant workers taking what work they could.

Office workers like attorneys and software engineers might think they have little in common with 1920s farmers, but history will prove them wrong.

It’s hard to define a clear adversary. Some people attack “globalization”. The truth, however, is that foreign competition isn’t the greatest threat to American workers. To a large degree, I believe that the threat of foreign competition (especially when bandied about by management, such as when unions are under discussion) is much more of an issue than the actuality of it. The greater threat is technology. It’s often ignored, because it doesn’t have a face, and because we all recognize both its necessity and inevitability.

There’s a lot of “othering” at the heart of the resurgent nationalistic populism that we’re seeing in the world’s working classes. You can other a person who looks different, who lives thousands of miles away, and who (some dickhead manager told you) is eager to take your job for one-fourth the price. You can’t other the phone, a supercomputer by the standards of 30 years ago, that sits in your pocket. So, we tend to ignore the dangers presented by technology. “Outsourcing” looks like something that we’ve seen for millennia: other tribes or nations, full of hungry people, threatening to conquer us. Technology doesn’t look like anything visceral. It appears non-threatening if it’s well-designed.

Should we dread technology? Yes and no. Automation is a double-edged sword, but it’s going to happen and there is no value in trying to prevent it. Governments should not try to preserve specific jobs in, say, coal mining. They should, however, attempt to prevent sudden losses of income and especially of worker leverage. I can’t emphasize this enough. Most people think their jobs are safe, and they’re wrong. What do they think those laid-off truck drivers are going to do? Many are going to retrain and contend for the supposedly safe jobs, like software engineering. If the labor market collapses, it will fall as one.

The bigger problem around technology is not automation. Automation’s desirable. Rather, the danger is that technology is often used toward bad ends. The lasting effect of the echo-dot-com boom isn’t a technical or political advancement. Scientific progress seems to be slowing down right now. Rather, it’s the increasing shift of power from employees to employers. Let’s take social media. I’ve been involved in hiring and I’ve seen people turned down for jobs or fired based on social media activity, sometimes quite anodyne. I’ve also seen people turned down because they didn’t have social media profiles, which was deemed “creepy”. If he didn’t have a Facebook or LinkedIn profile, what was he hiding? That’s right; someone was punished for not rendering personal information unto surveillance capitalism.

I once worked on a performance-management system for drivers. Most of the people who engineer such software believe that it’s harmless, and are usually told by management that drivers appreciate the work. That’s often false. Such systems increase stress and even the probability of workplace violence. Technology often suits employers’ needs at the expense of employees. In one case that I know of, a few years back, a GPS monitoring system that was supposedly intended to improve gas mileage was actually used to catch drivers eating lunch off-route (either to go home, or see their kids at school). That’s not increased efficiency. That’s being an evil, greedy fuckbag.

I worry much more about technology used toward evil ends than I do about automation. With automation, we need to be smart as a society and put the dividends back into the common good. That’s a hard problem, but it’s easy in comparison. A permanent shift in the power relationship between employers and employees, in the wrong direction, could render us unstable, impoverish the masses, and leave the country prone to populist or even fascist revolt.

Will Consumer America die soon? Yes. We’re seeing the early phases. It’s not pretty.

As I said, every job that provides direct salable value has a target on it, and most of those have been automated out of existence. Jobs that remain are in abstract work.

In an office, you have some people who run around and try to quantify the work of others: managers, HR executives, consultants, and the like who try to spot opportunities for cost-cutting. Those people produce little. Nine times out of ten, they’re externalizing costs and risks rather than eliminating them. 99 times out of 100, any assets “liberated” by cutting these costs are put into executive coffers instead of forward-thinking investment. Most of these cost-cutting wizards are worthless, but they have a lot of power. People fear them. They will drive abstract laborers toward concrete measurements. They’ll take the creative process of programming and split it up into 4-hour units called “story points”. What you end up with is a civil war where most of one side (the workers, who are playing defense because, even if they wanted to play offense, they wouldn’t have time on top of their assigned work, which grows with each layoff) has no idea what is going on, or even that they’re in a civil war at all.

When this happens, employees lose. Costs are externalized or transmuted into risk rather than removed, so shareholders get a bunch of under-documented risk dumped on them as organizations become more brittle and shorter-lived. It looks like stagnation, but it’s actually a hollowing-out. For example, in the corporate world, workers face increased instability and expectations without fair compensation. Moreover, when companies implode (as has become common) they aren’t replaced with better ones. Whatever “tech startup” meant in the Silicon Valley heyday of 1970-1995, it now means “new company with worse health benefits and an ill-defined career path.”

This can have far-reaching social effects. The rich man’s habit of dividing poor blue-eyed man against poor brown-eyed man (or black man against white man, or man against woman) leads to misplaced resentments that stack up over time, and you have a lot of people who are pissed off at the wrong people for an incoherent mess of reasons. Then, you get right-wing populism, which we’ve seen flare up all over the world. Anger drives out the more subtle emotions, and eventually conflict reaches a boiling point.

Downfall

When did Consumer America start to decline? I think that the civic downfall began in the late 1970s. Studio 54 is emblematic: elitism became sexy again. We fully committed ourselves to the wrong path in the 1980s.

While this period of time is called “the Reagan Era”, I doubt that a single center-right politician, no matter how powerful and charismatic, can take singular blame. Did Ronald Reagan invent employee stack ranking? No, that was Jack Welch and Jeff Skilling. What went wrong was more about culture than politics, and it happened in other countries that didn’t have conservative leadership. Mean-spirited corporate behavior, not transient conservative politics, is what killed us in the 1980s.

Leftist leadership wouldn’t have prevented a devastating cultural phenomenon: the repolarization of the American elite. To understand this, we have to understand the history of our national elite. What was it, and how did it change?

For most of human history, most people who were rich either inherited or stole their wealth. It was rare that a rich person wasn’t a scumbag, bully, or crook. This is why Jesus could say what he did about the eye of a needle. With near-zero economic growth on a per-year basis, life was pretty much zero-sum. It was a reasonable presumption that most rich people prospered at others’ expense. Then, we came into a perceived Golden Age when this seemed to be less true: from about 1940 to 1980 in the developed world. It has often been considered a historical anomaly. It doesn’t have to be so.

Not only the Great Depression, but also the Second World War and the flirtations with extremism all over the world, convinced the American elite to slow down and be happy with what it had. They elected to get richer somewhat slower than others in society. Noblesse oblige. Inequality went down, but so did their risk of imminent overthrow. Perhaps without thinking of it in those terms, they chose graceful relative decline as their survival strategy. It worked. They were plenty rich, throughout the 1950s and ’60s. They never stopped getting richer; they just slowed their pace and let everyone else catch up.

A CEO in the late 1970s made about $500,000 per year. His source of pride wasn’t his income but his stewardship of the company he ran. Even if it meant a personal cost to him, he’d do what he could to keep his people employed and happy. Companies invested in their people. There was a large middle class. If you were unemployed, you could call about a job at 10:30, interview over lunch with the CEO, and be hired by 2:00. What happened? Why did this country throw it all away?

Upper-class people who remembered the tumultuous 1930s and ’40s recognized that social stability and cultural advancement were more important than personal enrichment. Their kids didn’t. Their kids traveled abroad and came in contact with countries where the old way reigned, and where feudal lords and scumbags still dominated the upper class. They met oil sheikhs who married 9-year-olds, third-world despots who could kill with impunity, and (after 1989) post-Soviet kleptocrats buying private islands and penthouse apartments all over the world. In comparison, the American rich were more restrained, more civilized, and also poorer. They still had to follow their country’s laws! They flew first class instead of private!

At some point, the American upper class desired to join the global elite. They sold the country out. They made it legal for nonresident foreigners, often of criminal origin, to buy real estate in Silicon Valley and Manhattan, creating permanent housing shortages. They created a culture in which labor is ill-viewed and consumption reigns.

We’re now back to a Gilded Age, but a global one. Whatever we learned in the 1930s and ’40s, we forgot. Filth floats to the top again. The hyper-consumptive global elite is in charge. Even national governments must often play by their rules, as they constantly threaten to move capital elsewhere if asked to pay their taxes.

Our global society is, because it is badly run, quite brittle.

We actually don’t have more recessions in this terrible new economy than we did in the old one. We have fewer, but they hit harder. In the old economy, you worried that recession might get you laid off. It might mean a tough year or two. In the new one, people worry that it will end their careers, because that happens a lot. People find themselves out of relevant work for two or three years and are replaced, when the situation improves, with a mix of unskilled young workers and better software off the shelf. For the bottom 98 percent of the labor market, each recession is more severe and each recovery is more jobless.

We are getting to a dangerous point. Let’s talk about various possible future outcomes.

Worst: Catastrophe

I make no predictions for the worst-case scenario. Climate change, international conflict, resource depletion, a successful Business Plot, even another 1918-like disease epidemic… there are a number of ways in which the U.S. could not survive the end of Consumer America, or in which it could be radically altered. Some of these catastrophes are more manageable than others. Some involve a painful decade and a recovery; others go into darker places. I can’t say too much here, because the nature of these events is that everything becomes unpredictable when one happens.

Catastrophic events are an ahistorical threat. The Yellowstone Supervolcano is unlikely to explode, but it doesn’t know or care about human generational cycles. What is different about this time, as opposed to others, is the brittleness of the American economic fabric.

Baseline: Renter America

Renter America is where we seem, in 2017, to be headed.

To sum it up, it’s a worse version of Consumer America. Life goes on, but people have less control over where and how they live. People continue to need these short-ended power relationships called “jobs”, and spend more time on busy work or protecting position than actually doing anything. Meaningful, productive work becomes a coveted, scarce resource and one must engage in political intrigue in order to get it.

In Renter America, people live increasingly on their reputations (which are easily controllable by corporate interests) rather than their skills. Jobs get harder to find and easier to lose. Long-term unemployment, financed by credit, becomes the norm. Corporate investment in individuals goes to zero, its absence papered over by escapist fantasies (e.g., Silicon Valley startups), and those who are wise enough to see through them– or, more to the point, old enough that they should see through them– are discarded. People lose a sense of ownership in their economic lives and become permanent, itinerant renters, ambling through life on credit and student loans they'll never pay back. Homeownership and starting one's own business become impossible for most people.

This is where we seem to be headed in 2017. There’s no sign of an imminent national catastrophe (although there are many risks) but there’s also little reason to have hope about our economic or political future.

What I question about Renter America is its stability. Material well-being doesn’t get worse in Renter America, but it ceases to improve and there is a loss of dignity and self-determination. People are forced to move, threatened with medical bills they can’t pay, put into jobs involving more busy work and less actual production or self-improvement, and generally kicked around more. Their jobs and lives become mindless and highly surveilled.

Renter America delivers mostly insult rather than injury. A few people die prematurely because they lack health insurance or work in dangerous jobs, but most people are afforded vaguely dissatisfying but semi-comfortable lives. The elite recedes into its own world where things still work: schools lead into jobs, jobs lead to skill growth and wealth, et cetera.

If it doles out only insult, will Renter America ever be overthrown? I'm not sure. We could see a widespread slacker culture, with the Japanese hikikomori and the European mileuristas and NEETs becoming the norm. I don't see it as inevitable that a mediocre, boring future gets itself overthrown. I hope that I'm wrong.

Better: Patriot America

In the 1960s, national governments were perceived as being in cahoots with the global corporate elite. To a large extent, they were. Companies weren't as malignant then as they are now, though. Back then, private companies invested in their people, paid well, didn't try to avoid paying taxes at all costs, and seemed at odds with neither the needs of government nor those of the people.

Conflict with the global corporate elite is possible. To the surprise of some, I believe that national governments will be our allies when this happens. They don't like tax-cheating, rule-breaking criminals any more than we do. National governments don't get everything right, but they've been left as the sole adults in the room. Who funds basic research? The age of Bell Labs and Xerox PARC ended a long time ago; short-term optimizers won.

Patriot America would be a more inclusive reprisal of Citizen America, in which the defeat of the global corporate elite becomes a point of national pride. We could, for example, demand that all nonresident real estate owners sell within 14 days or forfeit their holdings. This would do a lot to make housing more affordable in places like New York and San Francisco. We could ramp up research funding for renewable energy and not only end our dependence on foreign oil, but take leadership on climate change as well. (I realize that, right now, it looks like we’re going in the opposite direction.) This is going to be unappealing to the anarchist element of the left, but it will first be through governments that people most effectively take on the global corporate elite.

This variety of patriotism isn’t exclusionary. Sam Adams was not patriotic at the expense of other nations, and neither should we be. Local and national governments will have to work together with each other in order to defeat two major adversaries in the future. One is the environmental damage wrought by climate change. The other is the global corporate elite.

Patriotism is not an assertion of superiority over other nations. Intellectually, we all know that we aren’t superior because of where we were born. Rather, it’s an admission of one’s own limitations. No one can fix the world. It’s too big of a job. But people can work together to fix their communities, then their cities, and then their countries.

Destinations and Lessons

Are there other possible fourth phases? Of course there are. Renter America seems to be the baseline disappointing turn of events, and Patriot America is a broad sketch of something that might be better.

We ought to learn from the three previous incarnations of this country before we build the fourth. What worked, and what didn’t?

The virtue of Citizen America was its insistence on rational government. We now need a rational economy. Universal basic income is a start, but we also need meaningful work for people, and there is plenty of work that needs to be done. Additionally, we ought to recognize kinship with people in other countries. Patriotism shouldn’t be pride in what is, because intrinsic national superiority doesn’t exist, and that idea has done far too much harm already. It should be pride in what one does to make one’s community (whether local, national, or global) better.

What Producer America got right, although it took a long time, was that it eventually put dignity into work. It recognized the human need for a productive role. Also, work in that time was not the psychological monoculture of today’s office work. People did a lot of different things. We need to learn from that, and get away from the culture in which people are shoehorned into bland roles that are often substantially below their levels of ability.

Finally, let’s talk about Consumer America. In the 1950s, most people believed that we’d have a ten-hour workweek by now, and that economic scarcity would be nonexistent or trivial. Yes, if you were unemployed, you might have to wait two months longer to take your vacation to the Moon. Well, we’ve failed. In the 1980s, we allowed bad leadership to come in. It wasn’t our political leadership that shat the bed, though. It was our corporate leadership. In order to get the next iteration of this country right, we first have to take stock of what previous generations got so wrong.

Just as the noblesse oblige national elite of the Kennedy Era learned, from the Gilded Age, that a vicious, unequal society would burn them in the end (as it did, in the 1930s), we will need the current global corporate elite to learn a hard lesson. We'll have to replace them with something else. In order for that "something else" to be anything better, though, we have to study our past.

Phishing/Hacking Attempt

In April, I got an email about a CTO-level position. It was a personalized message. The person writing it knew who I was and my capabilities. Naturally, I checked it out. It never hurts to talk to people. As is typical, a résumé/CV was not enough. I had to use that company’s web-portal. Okay, why not. I have time, says the dog.

I didn't hear back. I should have suspected something, given the lack of response. Now, everyone gets rejected, even people like me. (Especially people like me.) There's nothing odd about getting turned down. That said, above the VP level, you get a personal response and a truthful explanation of why you didn't get the job. Usually, the reason is impersonal (it could be, "the other candidate has 20 years more experience") and you move on. If you don't hear anything, at my level, it's fishy. Or, should I say, phishy?

It was a fake job portal. The company that this attacker purported to be was not looking for a CTO. To be clear, they had no involvement in this and were professional in every way.

A few weeks later, someone tried to access my account on multiple cloud services using the password I used (I create a new one for every job site) and hundreds of variations thereof. I got calls about this. (No one got into anything.) These attempts came from a reputable technology company in the San Francisco Bay Area. I know exactly who they are and what they were after. They’re probably pissed off that they weren’t able to get into it.

At this time, that is all I intend to say.

Crossing the Equator 6: Villains in Fantasy Versus Real Life

I open Farisa's Courage with the heroine running for her life. Her memory is breaking down (a consequence of her magic, when pushed too far) and she's confused, desperate, exhausted. In an unknown city, feet and legs caked in miles of trail mud, she bangs on a stranger's door. She's forgotten several years of her life. By the time she reaches (transient) safety, she doesn't know where (or even who) she is. (She recovers, of course.) Meanwhile, the antagonist doesn't get much stage time in the early chapters. That's intentional. It's also unusual, by fantasy genre conventions.

Many fantasy novels open with the Big Bad Antagonist doing something terrible. He destroys a village, or he tortures a child. Often, no reason is given; of course the bad guy would do something bad. The dragon just likes gold, though she never spends it. The sixteen-eyed beholder has to slurk out of its dungeon, eat a peasant child, and then slurk back, because otherwise, if the heroes sought it out and killed it for treasure or "experience points", they'd be the villains.

In Farisa’s Courage, the first book of the Antipodes series, the main antagonist is a corporation, the Global Company. They’re bad, but like business organizations in the real world, they’re reactive and effete. They do more damage (early in the story arc, that is) through incompetence than by intention. I open Courage with asymmetry. The heroine is in danger, but the antagonist is comfortable (and unaware that it is anyone’s antagonist). That’s how good and evil work in the real world.

Fifty years before, the "Globbies" were a corporate police firm. Such firms exist in the real world; the most infamous, the Pinkertons, are still around. The Globbies also had a flair for witch hunting (which also still exists, even if witches don't). When Farisa's story opens, they control 70 percent of the known world's economy. (It's a steampunk dystopia where the Pinkertons won, and evolved into something worse. Something similar almost happened here.) They don't take much of an interest in Farisa. They know that she exists, and that she's a mage, but they also know that magic is unreliable and dangerous. They have been through forty years' worth of failed attempts to harness it. So, they don't think much of her. They're only interested in her because she's been accused of a crime that she didn't commit (and, in fact, they know that she's innocent of it).

Farisa doesn’t see world-fucking evil from them in the first 200 pages. The reader sees the Company’s low-level, self-protective evil, but nothing threatening the end of the world. That’s intentional. You didn’t see world-fucking evil from the Nazis until Kristallnacht, either. They’d been around for almost twenty years by then.

Epic fantasy is often Manichaean. Good and evil exist as diametrical opposites. In the first or second chapter, the reader often sees the Big Bad doing some horrible thing. It has to be shown early who the Big Bad is. In my experience, though, evil doesn't reveal itself until it needs to do so. There's a potent asymmetry between good and evil. Good must act, and evil can wait. Good is desperate to survive, like a candle in a hurricane. It will rescue a child from a house fire (and fire, though dangerous, is not even evil). Evil can use slow corruption, hiding and waiting. It's usually done in by its own complacency and arrogance, but that takes time.

Epic fantasy often wants symmetry. It wants evil that is as desperate to do harm (and to kill the heroes) as the heroes are desperate to survive. It wants evil that can’t plop down on its haunches and wait. It must burn that village! It must abduct that princess! In my experience, that’s a rare kind of evil, and evil itself is not all that rare.

This might explain why our culture is fascinated by serial killers. They’re very rare, but they show us a refreshingly different kind of evil from what lurks in corporate boardrooms. The serial killer is intense and desperate. Why does Vic the Biter eat the faces of human children? Because he’s an insane fucker, that’s why. His desperation mirrors that of the good. He’s fighting for survival, because his mind is broken, and eating children’s faces is the only thing that gives him respite from his own demons. His vampire-like hunger drives him to make mistakes that render him easy enough to capture that the story can be told in a two-hour movie or a 90,000-word novel. If there must be evil in our world, that’s the kind we want: a kind that is as desperate for its own survival as is good.

The desperate, belligerent kind of evil exists, but it’s not the kind that’s running the world. The Davos elites view the rest of us with phlegmatic contempt– they prefer not to think of us at all– but not burning hatred. Yes, the Davos Man would rape a child if there were a billion dollars in it; but, outside of that laughable, contrived scenario, he’d rather go back to his hotel room and sleep off his drunk.

Not to take the metaphor too far, but this mirrors the asymmetry between capital and labor; the former can wait, the latter must eat. Now, I don't wish to say that capital is evil or that labor is good. Neither's true. The parallels between their struggles, though, I find worth noting.

Labor and capital both perceive themselves as at war with entropy, but one conflict has higher stakes. Labor must take in two thousand kilocalories of chemical energy per person per day or burn itself to death. Capital issues weak complaints about meager stock market returns, and the declining quality of private boarding schools, and too many brown people at the country clubs. Labor is a stroke of bad luck away from dying on the streets. Capital is slightly perturbed by the notion that things aren't as good as they used to be, or could be, or are for someone else. It's the same with good and evil. Good lives in constant warfare with selfishness, stupidity, disengagement, petty and grand malevolence, and myriad other entropic forces. Evil? Well, it rarely recognizes itself as evil, to start. When it's losing, it's a chaotic force. When it's winning, it thinks as little as possible. It too has its slight unsettlements, but rarely feels a need to fight against the world for its own survival.

It's unfashionable, in the postmodern world, to believe that good and evil exist. Some view them as relics, like ethnic gods, that simpletons cling to because they're not enlightened enough to see the complexities of human power struggles from all angles. I don't know whether gods exist, but good and evil do. An issue is that they're viewed as compact entities or forces rather than patterns of behavior. As "alignments", they don't exist. There's no unifying banner of "Good", nor one of "Evil". Yet, we experience good and evil in daily life, from the small to the large. Is a convicted murderer an evil person? Not necessarily. Prima facie, there's a lot of context that we don't know. He could be mentally ill. He could be innocent, or have killed in self-defense. We can agree, though, that murder is usually an evil act.

Good people value what is good, though we’re slow to find perfect agreement. There are good people with bad ideas. There are good people who’ve been infected with evil ideas. Most of our so-called “founding fathers” were racist, and racism is without a doubt one of the most evil ideas that humans have ever concocted. That aside, some of those men were arguably good, even heroic.

Evil does not, in general, value evil. German and Japanese authoritarians fought together, but each regarded the other as racially inferior. (Stalin was pretty vicious as well, but fought on our side.) Corporate executives and child molesters despise each other; you don't see them seated together at Evil Conventions, because those don't exist. Good values good, but evil doesn't value evil. Evil values and seeks strength, and a position of strength is one from which one can wait.

I have actually battled evil, and suffered for it. I wrote hundreds of thousands (if not millions) of words on how to survive corporate fascism. I have exposed union-busting, labor law violations, and shady practices of all kinds in Silicon Valley. It has been an edifying (if expensive) ride. I’ve probably mentioned that Evil has won some of its battles. It may yet win the war.

In this light, we have to understand it. We have to know how it works, what it values and what it doesn’t, and why it wins. It wins because it can. It wins because often it wins if nothing happens.

Does the hungry evil of the vampire or serial killer exist? It does, but it’s rare. The more prosaic boardroom variety of evil is far more common. Often, the most dangerous thing about it is its most boring advantage. If it wants to do so, it can sit in its castle, and wait, and hope that we fuck up before it dies of its own ennui.

Swamp Baseball

My warning meant nothing | You’re dancing in quicksand…

— Tool, “Swamp Song”, 1993

Swamp Baseball is like regular baseball, but with a few changes:

  • You play in a muddy bog. Outfielders can fall into quicksand. The “run” to each base can take 30 seconds. Swimming is allowed, but bare-eyed (no goggles!) only.
  • The ball is covered in mud and will spin and fly unpredictably. Every pitch has its own character. Instead of bats, you tear off tree branches and use those.
  • Each inning, you have to remove leeches. Whichever team has fewer leeches gets an additional run. Lampreys count as four leeches each. (This does make the game notably higher-scoring than regular baseball.)
  • Dangerous mosquitos are shipped in, if not already present; therefore, you will probably die of malaria (and, thus, be kicked out of the game with nothing to show for it) before you are 40.

Who wants to play Swamp Baseball? I'm guessing that the answer is "No one". Nor would most people want to watch it as anything more than a novelty. We like to see humans play the sport in a more appropriate habitat. There's nothing wrong with swamps. They're good for the world. They just aren't where we do our best running– or pitching or fielding or spectating. If you want to see baseball played in top form, you'll go to a ballpark rather than a malarial bog. It may be, in the abstract, more of an accomplishment to hit a home run in Swamp Baseball, but who cares?

In the career sense, I’ve played a lot of Swamp Baseball. I’ve become an expert on the topic. I used to have the leading blog on the ins and outs of Swamp Baseball: how it’s played, why it exists, and how not to lose too much. I’ve fought actual fascists in corporate environments and had my share of runs and outs, wins and losses.

Here’s the problem: no one cares about Swamp Baseball. Why should they? It’s a depressing, muddy sport where even the winners get their blood sucked out by leeches and lampreys. It doesn’t inspire. No one sees the guy who slides into home plate for a run, only to get his face ripped off by an alligator, and says, “I want to be like him when I grow up!”

Technology can be a creative force, and programming can be an intellectually thrilling activity. Getting a complicated machine learning system to compile, run, and produce right answers might be more exciting than the crack of a bat (says a guy who has no hope of being any kind of professional athlete). Like writing and mathematics, it’s one of the Great Games. Victories are hard-won but often useful and sometimes even profitable.

Yet, most programmers are going to be playing their sport in the swamps. There won’t be literal mud pits, but legacy code that management refuses to budget the time to fix. There won’t be literal lampreys and leeches but there will be middle managers and project managers trying to get the team to do more with less– bloodsuckers of a different kind. Just as all swamps are different, all corporate obstructions are unique.

Here’s the problem. Swamp Baseball can be fun in a perverse way, but it would fail as a watchable sport because one’s success has more to do with the terrain than the players or teams. Runner falls into a mud pit? Whoops, too bad! Fielder faints due to blood loss, thanks to leeches? Looks like the other team’s getting a run. Real baseball has boring terrain and lets the players write the story. Swamp Baseball has interesting terrain but no sport or art. If the sport existed, it would just be artificially hobbled people failing at everything because they’re in the wrong habitat.

Corporate life is, likewise, all about the swamps. The success or failure of a person’s career has nothing to do with batting or running or fielding, but whether that person trips over an alligator or not on the way to first base… or whether the shortstop collapses because the lampreys and leeches have exsanguinated him in time. Sometimes the terrain wrecks you, and sometimes it wrecks everyone else and leaves you the winner, but… in the end, who cares?

Swamp Baseball wouldn't get zero viewers, of course. Some people enjoy comic relief, which in this case is a euphemism for schadenfreude. It wouldn't be respectable to watch it, nor to play it, but some people would watch and, for enough money, some would play. Corporate life is the same. Its myriad dysfunctions and self-contradictions make for lots of entertainment, often at another's literal and severe expense, but it's fundamentally lowbrow.

That’s why I don’t like to write about corporate software engineering (or “the tech industry”) anymore. And if I stay in technology (which I intend to do) then I want to play the real game.

Crossing the Equator 5: Natural Writers

There are a large number of intelligent, well-intended people who “might write a novel” someday. Spoiler alert: they won’t.

I'm not trashing them. The world needs more readers, much more than it needs more writers. The reason why most of those people will never "write their novel" is that they're not weird enough. They're doctors who read beach books. They're professors who read a sci-fi book every now and then. There's nothing wrong with that. I wish they had more time to read and bought more books, because writers don't make enough money, but I'm not here to criticize them. A world in which everyone had the inclination to be a writer wouldn't work. The job isn't for everyone. At 33 years old, I've got a good sense of what I can and cannot do. It's not a stretch to say that I'll never put my hands inside a man's chest and manipulate an intact heart, an innocent human life on the line. I'm very glad people exist who can do that job. I'm happy to have them make more money than I do. It just isn't something I will ever do, unless something bizarre happens, like an apocalypse that demands I fill the role.

Natural writers

I’m going to talk about natural writers. It’s a term that I invented, although it’s a statistical certainty that it’s been coined somewhere else with a different definition.

First of all, it’s not really about natural talent in any form that would pop up in school. The general intelligence (“IQ”) necessary is significant but not astronomical. If I had to guess, I’d put it around the 95th percentile: IQ 125 to 130. More can help or hurt. Higher intelligence can make the research and self-editing aspects of writing go by faster. On the other hand, many highly intelligent people have no aesthetic sense or, worse yet, have the “four-wheel drive” problem of getting stuck in more inaccessible places. The ability to write great literature may be rare, and the inclination certainly is, but that’s not because of an IQ-related barrier.

One in twenty-five people might have the raw intelligence necessary to be a good writer, but I’d guess that one person in about a thousand has the necessary skills. The skills are hard to learn.

First, you have to read and write millions of words. Also, you have to read the whole quality spectrum, from excellent canonical work down to Internet forum comments. If you want to be able to write dialogue, you need to have an ear for how people talk at differing levels of education and in various emotional states. None of us speak with perfect grammar, and some are worse than others.

Further, to be able to write something interesting, you have to read and learn all kinds of random stuff– literature, history, philosophy, science– that most people (including, to my surprise upon becoming an adult, most expensively-educated people) stop caring about once they stop getting graded on it. You don't need to know more words than the average reader– you can do a hell of a lot with the 10,000-or-so most common words– but you have to know them more deeply than most people do. You have to know how to communicate complex ideas efficiently, but also when not to transmit complexity at all. If your novel has someone eating a sausage, you have to know when and when not to write about how it was made.

Rarest yet is the inclination to be a writer. On this topic, there's a joke that programmers pass around:

HER DIARY:

Tonight, I thought my husband was acting weird. […] I thought he was upset at the fact that I was a bit late, but he made no comment on it. Conversation wasn’t flowing, so I suggested that we go somewhere quiet so we could talk. He agreed, but he didn’t say much. I asked him what was wrong; He said, ‘Nothing’. […] He just sat there quietly, and watched TV. […] I still felt that he was distracted, and his thoughts were somewhere else. He fell asleep – I cried. I don’t know what to do. I’m almost sure that his thoughts are with someone else. My life is a disaster.

HIS DIARY:

My code is broken, can’t figure out why.

To make it clear, I don’t advocate alienating one’s bed partner. That’s bad. Writing is a marathon, not a sprint, and it’s no excuse for being an asshole. That said, the reality of being a creative person is that we’re not accessible on demand. This is why we hate open-plan offices, which emphasize availability at the expense of productivity. When we’re mentally or emotionally all in, it’s hard to turn our minds off. I’ve watched every episode of Silicon Valley this season, but at about 40 percent comprehension because so much of my mind is on Farisa’s Courage.

Every writer will see where I’m going. One could replace “My code is broken” with “Chapter 6 needs to be rewritten” or “I’m not sure I have Erika’s motivation worked out” or (never listen to this voice) “a ‘real writer’ would slash this to pieces”. Among natural writers, we’ve all been there. We’ve all been out to dinner and thinking about how we’re going to resolve a plot issue or flesh out a character.

That said, a percentage of commercially successful writers are not natural writers. There are some who clearly are– Stephen King comes to mind. He's a writer. Others don't enjoy writing at all. I'd never name people, but it's been confessed to me. They make a lot of money doing it, and they'll keep writing as long as that's the case, but they're not the sorts of oddballs who'd be tweaking manuscripts at 9:30 on a Saturday night. They write because it pays, and because (at least in commercial terms) they're good at it. Some hate it, some tolerate it, and some enjoy it– but not to the extent of a compulsive natural writer. You don't have to be a natural writer to find commercial success as a writer, but I don't know why you'd try. The odds and effort, even with talent and inclination, are worse than in business. Natural writers, on the other hand, loathe the corporate world– truth-seekers don't like jobs that require defending lies– and often find themselves without other options.

There's a Boomer misconception that to be good at something, you have to love it in the way that natural writers love writing. It's not true. We all know that there are people who love to write, but will never be good at it no matter what they do. By the same token, there are those who can write engaging stories but don't enjoy doing it. They might love reading, and discussing their books, and having written, but the slightly-masochistic act of forcing their brains to come out with thousands of words of coherent prose per day is something they put up with, because it pays– in some cases, very well. It shouldn't surprise anyone that, in a world where so many people go to jobs they hate for less than $50,000 per year, there are others who'll do jobs they don't enjoy for more.

The truth is that there are a lot of people who want to “be a writer”– and who have an unrealistic sense of what that means– but very few who actually want to write.

Legacy, Talent, and 45 degrees

I’m focusing on the natural writer, but the more general concept here is of the natural artist. Writing has a special place, and I’ll explore it, but the arts in general attract provocateurs. Why is that? What’s the connection between creating aesthetically pleasing objects and wanting to troll people? It isn’t at all obvious that one should exist. It does, though. I’ve met some brilliant writers and artists, and they’re almost all weird.

I have a theory that most of human politics and economic struggles can be expressed in terms of Legacy versus Talent.

In the abstract, most economic commodities aren't very different. If you have $15 in your bank account but a Manhattan penthouse, you're a millionaire. If you have a low income and net worth but your family can set you up with a high-paying corporate job, you're a rich person. We don't care, then, about the differences between silver and oil and paper cash and electronic wealth. There are two abstract commodities that really matter: Legacy and Talent. The exchange rate between the two is a fundamental indicator of a society's state. When Legacy trades high against Talent, social mobility is low and aristocracy sets in. When Talent trades high, skilled people can do very well on their abilities alone.

Legacy includes wealth, social relationships including the formal ones called “jobs”, credentials, and interpersonal connections. It’s the stuff that some people got and some people didn’t– for reasons that feel random and unfair, and are mostly related to the mischievous conduct of previous generations. Some people got lucky, and some got fucked over. Chances are, most of my readers are not in the “fucked over” category, although it often feels that way, subjectively. I’ll get to that.

Talent here includes natural abilities, skills, and the inclination to work hard. Again, some people were born with a lot of it and some people got screwed. It’s hard to make the case that possessing Talent conveys any sort of moral virtue. I’d love to be able to make that case, because there’d be personal benefit in it, but there are plenty of capable people who are also terrible.

As social forces, Legacy and Talent are always at odds. One is the past trying to preserve its longevity, and the other is the future pounding against the walls of an egg.

Here’s where it gets political, and perhaps controversial. People can, approximately speaking, be ranked for where they stand in each. Sally might not conceive of herself as “94th-percentile Talent” and “33rd-percentile Legacy”, but she knows that she’s smarter than her workplace assumes her to be. If she’s young, she may see upward mobility in the school system. If she’s old, she’ll probably get bitter because she feels like she’s surrounded by relative idiots.

In reality, Legacy and Talent ranks don't exist in an exact form, but most people have some sense of where they stand. Plot Talent on the vertical (Y) axis and Legacy on the horizontal (X) axis, and then draw the Y = X line, at a 45-degree angle to the axes.

You can often predict people's political biases according to where they are in relation to that line. "Left-siders" have more Talent than Legacy. They want transformation. They'll challenge existing systems. They're not happy with the role that society has given them. "Right-siders" have more Legacy than Talent. They support the status quo. That does not mean that they'll be economic conservatives. In fact, in an authoritarian leftist society, they'd be loyal communists.

Do people know where they stand? What about the Dunning-Kruger Effect? To be honest, I think that people are close enough to knowing for the errors to cancel out. Yes, plenty of people think that they’re smarter than they actually are, and that might create a left-sider bias. On the other hand, there are plenty of people who think their houses, diplomas, and social connections– all forms of Legacy– are worth more than they really are. Individuals may get their positions wrong all the time but, on balance, I think the errors cancel out.

On that 45-degree line, you have self-described “moderates” who are suspicious of left-siders and right-siders both. They complain about the (right-sider) coal miner wearing a “MAGA” hat who doesn’t really deserve that six-figure job he has because his grandfather was in the union. They also complain about “entitled” left-sider Millennials who don’t enjoy being slotted into subordinate roles of the corporate hierarchy.

Natural artists are constitutional left-siders. They’ll reject any role, high or low, that society tries to put them in. Even if it puts them at the pinnacle, they often hate the idea that there is a top, not to mention the moral compromises that come with being and staying there. It’s not about social rankings for them, and it’s definitely not about money. It’s about creative control and life on their own terms.

Not all talented artists are natural artists. You see this when a promising young artist or writer turns into a hack after becoming famous and being invited into the Manhattan elite. There are intelligent people among our society’s corporate elite, but curiosity is frowned upon and if you spend too much time around them, you’ll end up as anti-intellectual as they are. For all their claims of sophistication, the people in the upper echelons are provincial and allergic to novelty unless it fits a narrow script.

For an example in the technology business, look at Paul Graham. He wrote transformative (if silly and hyperbolic) essays when he was young, but he hasn’t had an interesting idea in more than 10 years. Why? Well, most people turn back into rubes when they get the power or wealth they crave.

Text

What is it, in this exposition, that’s particular to writers? In particular, I’m talking about novelists more than screenwriters, but screenwriters more than visual artists. There’s a spectrum in human creative work between the cryptic and the ostensible (both terms that I’ll define in a minute) and I intend to explore it. What’s special about text? I contend that on the spectrum between the ultimately cryptic and the supremely ostensible, text has a privileged position.

Visual and auditory arts, as experienced by the average person, are ostensible. I’m not a painter, and I’m not aware of the dialogue between different pieces of art, but I can tell a beautiful painting from an ugly one. Now, that ugly one might be brilliant in a way that my rube mind– I believe I am a natural writer, but I’m still a rube in most ways, like anyone else– can’t comprehend. I don’t know. I do know that I like Beethoven and Mozart. I don’t always know why I like them, and I can’t defend the sophistication of my tastes, but I do.

Computer programming, on the other hand, is very cryptic. I'm a good programmer. Actually, that's an understatement, but let's start there. I can post twenty lines of "great code" and it'll mean nothing to the average person. In fact, it'll mean nothing to the average programmer. (In fact, it doesn't mean much, because the difference between beautiful and ugly code doesn't matter on twenty-line toy examples.) What I mean to say is that most people (including users of software that I write) will have to take it on faith, for now (or not, because I really have no say in what they think of me), that I'm a good programmer, because I can't show it in 20 lines of code. To tell a good programmer from a bad one, you need thousands of lines of code, and you need to look very closely at them.

Prose writing lives between those extremes. It's possible to make writing more ostensible by overusing emphasis and Gratuitous Capitalization, but good writers often avoid that. Our tool is text. We'll sometimes use italics. You can use them to inflect dialogue and give it a different meaning (e.g. "I didn't ask you to come" means "I asked someone to come, but not you"). You can use them for internal monologue. You have to use them for book titles. You shouldn't use them all the time. Text, preferably unadorned and linear, is our tool. We try to do as much with it as we can.

A picture is sub-linear in terms of the expected viewer effort as a function of how much is presented. A painting might have had hundreds of hours of effort put into it, but the goal is for the average viewer to think, "Yeah, that looks nice." The clouds and the mountains and the tiny details matter, but no one expects the viewer to check out every one. That probably has something to do with ostensibility: you can show "100 times more painting" (whatever that means) and the viewer doesn't have to do 100 times more work.

On the other hand, software is super-linear. A 100-line program is more than twice as complex as a 50-line program. Actually, the 100-line program has the potential to be way more than twice as complex. There are problems where the amount of effort required to understand a computational object grows more than exponentially as a function of its size and complexity. (It gets worse than that, in fact.) Reasoning about what an arbitrary piece of code will do is, in the general case, mathematically impossible.

The cryptic nature of code is a source of pain for programmers, because it denies us the chance to prove ourselves without demanding additional work of those who might evaluate what we produce. Programmers' immediate bosses rarely know who their best programmers are. Opinions of peers and clients carry some signal, but the only way to judge a programmer is to read the work, and the super-linear scaling of software system complexity makes that extremely difficult. Many of the complaints of programmers about their industry come from being introverts in an industry where observable final quality (e.g. website performance, lack of errors) is objective, but where the quality question around a specific artifact is so hard to evaluate that, in practice, it's never done at an individual level. Therefore, personal attribution of responsibility for (often, brutally objective) events comes down to social skills. Programmers hate that. The cryptic nature of what we do doesn't make us geniuses and wizards. It puts us at a social disadvantage.

What about text? Is a novel sub-linear or super-linear? Or is it exactly linear?

Readers want and expect to expend linear effort. It’s quite possible to shove more-than-linear effort into prose, by creating layers of context that require looking back and even forward through the text. It’s not what they want, though, because they want to forget that they’re reading text at all.

In return, they want above-linear payoff in exchange for their efforts. If a 100,000-word novel doesn’t deliver more than twice as much enjoyment as a 50,000-word novel, it’s too long. Now, I won’t pretend that reading enjoyment or literary complexity can be quantified mathematically. I doubt that they can. My strongest suggestion is that text endures because it demands linear effort as a function of what’s presented. Text presents a challenge. How can a writer using a flat medium create pictures out of nothing?

After all, that’s what we as writers do. We make things called settings, characters, and plots out of thin air. Formally, they don’t exist. You can’t point to page 179 in Crime and Punishment and say “that’s where the plot is”. We create images out of some 30,000-ish symbols called “words”. We don’t always understand how they form. “Forks and knives” is different from “knives and forks”, even though both phrases denote the same thing.

Let's get specific: In October, a girl sits under the orange tree. Most people have an image in mind already. How old is the girl? She could be six or sixteen or even twenty-six (we'll side-step the political correctness issue around calling an adult woman a "girl"). What about the orange tree? I never said that the leaves were orange. A reader from Massachusetts assumes that based on the cue, "October". A reader in a tropical climate might think that she's sitting under a fruit tree. If that detail doesn't matter, I've done my job. I don't need the reader to picture the same mountain or bar or tree that I have. If the detail matters (her age probably matters) then I've under-described. Efficiency matters too, though. If we already know that the setting is Massachusetts, it might be better to say "orange tree" than "tree with orange leaves". Or maybe not. It probably depends on the implied third person, the girl.

The visual artist might present as much detail (a few megabytes) as the novelist. However, the painter has no illusion of a viewer who'll look at every brush stroke, or recognize that a new pigment was invented, or understand the brilliance behind a new shadowing technique. If no one likes the painting enough to give it a second glance, that's on the artist. On the other hand, a computer program at a few megabytes may well be beyond our comprehension. Most real-world software systems can only be understood by running them and seeing what happens. (Billions of dollars are bet on such systems every day. Scared? You should be.) I've reviewed lots of source code, command several hundred dollars per hour to do that work, and I'm arguably worth it; and even my abilities are pedestrian. A one-character change can make the difference between a running program and a catastrophic failure and, when it comes to reasoning about what a piece of code will actually do, we're all out of our depth.

Text is linear. The act of reading is linear, unless we expect readers to continually look back for context, and that's not being a very kind writer. Complexities emerge from this flat array of symbols. Characters and plots and settings and philosophies that wouldn't otherwise exist (and, from first principles, don't exist) emerge, almost magically. We paint pictures with words, sometimes few of them. This is hard to do well. It takes a lifetime to get sort-of decent at it, and there are a lot of ways to mess it up. Don't believe me? Here's an example: "John went downstairs after getting out of bed and waking up." Logically, there's nothing wrong with it. It's not an incorrect sentence, but its reverse chronology is jarring to the reader. It doesn't paint an image, because its order of presentation goes the wrong way. It reminds the reader that she's reading writing– most likely, an amateur's writing.

The linearity of text is a major reason why great writers are averse to, for example, overusing emphasis. The need to draw in a not-yet-committed reader with a “hook” in the first chapters, we accept with some grudge. However, we’re not going to highlight words like one weird trick, because we expect the reader to give every word the respect that it deserves. We’ll use emphasis for semantic benefit and structure, but the novelist doesn’t care much about the enjoyment of the skimmer.

All together

I’ve covered a lot of territory, so far. Some people are ill-adjusted, cranky, and creative enough to be natural artists. Similarly, a larger tension exists within our society around what should be the exchange rate between Legacy and Talent. In practice, extreme left-siders and right-siders are viewed with suspicion, and natural artists are constitutional left-siders who will reject any role that society tries to shoehorn them into. Natural artists and writers and philosophers, like Buddha, are just as likely to walk away from a throne as a cubicle.

There's a sociological element to the struggle of the artist or writer, of course. How creative work is evaluated has a lot to do with the social status of the person who created it. Every creative person finds this infuriating, but it's not going to change. In software, the effect of this is huge. Not only are software artifacts difficult to evaluate for work quality, but people have to get approval for their projects before any finished work exists. Socioeconomic status doesn't matter that much, because intra-office political status dominates, but the result is a similar injustice, one that dominates the character of the entire industry. Just getting a company to use the right programming language for a project can be a herculean battle that ages a person five years in a month.

Writers aren’t as bad off as programmers in this regard, but are worse off than visual artists. Musical or visual talent is obvious in a way that literary talent isn’t. You can size up a painting quickly, but you have to actually read a novel (a 3- to 24-hour investment) to know if it’s any good. Sometimes, it’s not obvious whether the writer did a good job until the whole thing has been read.

Now, here's the paradoxical and frustrating thing for a writer: it's almost impossible to get a blind read. If you're seen as a nobody, you can be immensely talented and still struggle to get an open-minded read. Literary agents aren't exactly looking for new talent to "discover". They're booked. You might hear back in 9 months if the agent's intern flagged your manuscript as worth a serious read. So, if you have low social status, your work will be prematurely rejected. It doesn't get better if you have high social status. You get an audience, and you get easy publication, but nothing prevents you from embarrassing yourself. You'll get, from most people, the same superficial read that you'd get as a nobody, but you'll be prematurely accepted. (You may get some resentful lashing-out, but that's another form of acceptance.) Impostor syndrome doesn't go away in high-status, famous people. It gets worse as they realize that they can say or publish anything and have it called brilliant.

Visual artists know that their job is done or not done in the first few seconds. If the painting looks bad, it doesn’t matter if the brushwork is brilliant. Computer programmers accept that, with high likelihood, no one will ever trudge through their code to understand the details, because the point of software is not the code but what it does when a machine runs it. The important feedback is usually automatic: does the code run, does it run fast, and does it get the right answer? A writer, though? Writers have to wait for an audience. And they want impartial, honest audiences that are blind to their social status. For that reason, writers especially will live outside of, and at odds with, whatever socioeconomic topography the human world wishes to inflict.

Crossing the Equator 4: Small x Large, Publishing, and St. Petersburg Math

From the title, it doesn’t sound like this is an essay about writing. It is. More generally, it’s about successes and failures in publicity, and the mathematics involved.

Pop quiz: What’s a small number times a large number?

  • a large number.
  • a small number.
  • I have no idea.

There is a right answer. It’s “I have no idea.” It’s a poorly specified question. Yet, predicting the performance of creative work requires us to multiply small numbers by large ones. Let’s say that you’re a writer. What’s the probability that a person who goes into a bookstore looking to buy one book chooses yours? It’s small. It’s much less than 1 percent. What about the number of people who go into bookstores? It’s large. What does this mean? Well, it could mean anything.

I’ve spent more than a decade studying the economics of creativity. It pays– and it costs. The expected value (or mean return) of a creative effort is quite high. There are two problems:

  • Variance: most people would rather have $100,000 than a 50% chance at $200,000 or a 0.01% chance at $1 billion, even if all have the same average payoff. Most creative endeavors have high variance and $0.00 is not an uncommon result. If this is hobby writing, that’s fine. If one relies on creativity for an income, it’s dangerous.
  • Value capture: people who are good at creating value tend to be below average in the social skills involved in capturing value. So, the hardest-working and best people see most of their efforts enrich other people. This is a depressing social problem that I don’t expect to see solved in my lifetime.

Probability

I’ll do my best to keep this discussion accessible. This is about psychology, more than math. In general, people are bad with probabilities less than 10%.

Another pop quiz: which of the following probabilities is closest to that of being killed by a shark on a beach trip?

  • 0.01% (10,000-to-1)?
  • 0.001% (100,000-to-1)?
  • 0.0001% (1-million-to-1)?
  • 0.0000001% (1-billion-to-1)?

The answer is… the last. It's very rare. Beach traffic kills far more people than sharks. You're probably sucking down more micromorts if you lie on the beach and drink than if you get in the ocean, so long as you know how to swim. As humans, we're bad when it comes to low probabilities. That's part of why lotteries make so much money. We don't have a resonant model for probabilities like "1 in 160 million". When we multiply an out-of-context large event by a small probability, we have no intuition for what the result should be.

Why is publishing so hard?

I’m putting a finishing polish on Farisa’s Courage. This has put a lot of my mental attention on figuring out the publishing process. It’s not to be taken lightly. Self-publishing and traditional publishing both have pitfalls. They’re both very hard to do well.

The median outcome, for people who try either approach, is near zero. Most people who try to find agents never will. Most self-published books sell fewer than 100 copies. It's the upside potential that keeps them going. Very few people make money on books. Some will improve their careers in other ways, and some view writing as a public service. However, we ought to be frank about the fact that the median book fares poorly. Perhaps it deserves to do so, because it's easy to throw one word after another until one has 100,000 or so, but writing a good book is hard.

If I got Farisa's Courage in front of a "Big 5" editor (chosen at random) in New York, I'd give it a very high chance (over 95 percent) that she recognizes it as publishable, professional-quality work. That doesn't mean that I don't get rejected. Publishers can't afford to take every good manuscript. What about the likelihood that she picks it up and reads it for pleasure? Probably 1 percent. It might not be a genre that she cares about. She might be distracted and put the book down at page 20, never to pick it up again. That happens with books. To be honest, that 1 percent is a good number. I'm rating myself favorably, there. Anyone who could sell a book to 1 percent of the reading public would be set for life.

The painful truth about trade publishing isn’t that rejection is common. It’s the realization that it ought to be more common. Let me explain. Editors will go to bat for books they read and fall in love with, but that’s rare. It’s rare for all of us. Getting a trade publisher doesn’t mean that anyone loves your book. It could mean that, but it could also mean that you’re a “might surprise” and you’ll get no promotion. If you fail to earn out your (paltry) advance, you’ll probably never sell another book, even if it’s not your fault. The goal isn’t to “get published”, because much of what’s published is in the “might surprise” category that the publisher doesn’t really believe in, but to get a real deal (i.e., “all in” versus “might surprise”) that comes with a six-figure promotion package. That matters a lot more than an advance, which is just a signal. Even with an excellent book, the probability that someone has the reaction necessary to get that treatment is small.

I start reading about 120 books each year. I complete about 40. Perhaps 5 leave that kind of "this is so awesome" impression that I will tell others about them. Keep in mind that these are published books, often from highly selective houses. Those 115 others weren't bad books. A publisher thought that they were good, and so did I, and I may have still felt that way after finishing. It just wasn't enough to make me recommend them. Someone else fell in love with them; just not me. That's about 4%, and nothing was wrong with the other 96%.

When an editor decides whether to publish a book, she has to guess at how many other people will have that falling-in-love reaction… or at least enough short-term limerence (or curiosity) to buy it. It’s a rare thing, but there are a lot of people in the world. So it’s a “small times large” calculation and, as humans, we often have no idea what we’re doing. Who can tell the difference between a book that 0.25% of people will fall in love with (enough to recommend it to their friends) and one that 5% will feel that way about? The former is a publishable book of above-average quality. The latter is the next Harry Potter.

When it comes to word of mouth, which is the only sustainable way to get a book out there, it's the extreme but infrequent reactions that drive sales. A "7" and a "2" reaction are the same: no movement. But look at the 50 Shades series. People hate those books. The writing's bad and the characters are terrible. Yet enough people really love them that they've sold hundreds of millions of copies.

St. Petersburg paradox

A lottery ticket that pays out $100 million with 1-in-200-million odds is worth 50 cents. A stock that has a 50 percent chance of being worth $78 tomorrow and a 50 percent chance of being worth $76 is worth $77. That’s expected value: 0.5 * 76 + 0.5 * 78 = 77.

Okay, so let’s say that we have a lottery ticket with a base payoff of $1. The bearer flips a fair coin and the payoff doubles for each head, until a tail comes up. So, if he flips three heads, then a tail, it pays off $8. Here’s a random sample of 20 payouts:

$4 $1 $2 $16 $1 $2 $1 $2 $2 $1 $1 $128 $8 $1 $4 $1 $1 $1 $2 $1

We see that the ticket's worth at least $1, and probably not more than $20. In these twenty trials, we made $180, so we might estimate an expected value of $9.00. However, most of that value is in one trial where we got very lucky and made $128. Take that out, and we're at about $2.74. So what's it worth?

If one ticket is for sale, most people would pay about $5 for it. They’d lose 7/8 of the time, but win big when they win.

I ran a simulation: 20 trials in which I pooled together 100 St. Petersburg tickets. The median value was $501, or $5.01 per ticket. With 10,000 in each packet, the median payoff was $7.66 each. With 1,000,000 in each, the median was $9.56 each. In other words, the more tickets you can afford to buy, the more that they’re worth.
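
For anyone who wants to poke at those numbers, here is a minimal sketch of the kind of simulation I mean, in Python. The function names are mine and the packet sizes and trial count match the figures above; exact results will vary from run to run, since the whole point is that rare jackpots dominate.

    import random
    import statistics

    def ticket():
        """One St. Petersburg ticket: $1 base payoff, doubled for each head before the first tail."""
        payout = 1
        while random.random() < 0.5:  # heads: keep doubling
            payout *= 2
        return payout

    def median_per_ticket(tickets_per_packet, trials=20):
        """Median per-ticket value across `trials` packets of pooled tickets."""
        totals = [sum(ticket() for _ in range(tickets_per_packet)) for _ in range(trials)]
        return statistics.median(totals) / tickets_per_packet

    for n in (100, 10_000, 1_000_000):  # the million-ticket case takes a little while to run
        print(f"{n} tickets per packet: median ${median_per_ticket(n):.2f} per ticket")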

This is counterintuitive. We see it because improbable high-impact events (like that $128 above) dominate.

To a large extent, publishing is driven by the St. Petersburg phenomenon. Some readers might love a book but not tell anyone about it. Others will tweet to their 50,000 followers. Some reviews will sell 20 books and others will sell 200,000. Making it harder for writer and publisher alike, no one knows what the rules are.

Some writers might bemoan the death of the advance. Writers used to be able to live off advances. Now, advances are low and mostly a source of anxiety, because even paltry advances are not always earned-out. Why are advances disappearing? Publishers no longer know what sells books. (Also, publishers lose if bookstores screw up and can’t move product, but that’s not new.) They used to know exactly whose numbers to dial when they thought they might have a best-seller on their hands. Now, it seems to be out of their control. Books with million-dollar advances and top-shelf reviews can flop.

A good publisher will deliver reviews, radio and TV appearances, and opportunities to speak, all without the author paying out of pocket. (That package is increasingly rare. If you’re not getting it, self-publish. Trade is only worth it if the house is all-in on you; otherwise, keep the rights. You might want them.) Promotion and intangibles still matter. They just seem to matter less.

What does this have to do with St. Petersburg math? Well, let’s say that you’ve written an excellent book and you convince 20 people to buy it. What happens then? Probably, nothing. Your book is great, but that “I love this and will tweet incessantly about it” reaction is uncommon. If it happens for 2 percent of readers (and that would be a great book), there’s a 67 percent chance that it happens for none of your 20. Of those 20, you might get four who each sell one more book and one who sells three. That’s seven more. They might move two more copies. From that base of 20, you’ve sold a total of 29 copies. Now, let’s say that you convince 2,000 people to buy it (or give 2,000 copies away). Your likelihood of reaching a “super-spreader” is much higher. Then you can set off a word-of-mouth explosion, reach escape velocity, et cetera.
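That 67 percent figure is just 0.98 raised to the 20th power, and the same arithmetic shows why a base of 2,000 behaves so differently. A quick sketch, using the hypothetical 2 percent super-spreader rate from the paragraph above:

    # Chance that none of the first n buyers is a super-spreader,
    # if each one independently is one with probability p.
    def prob_no_superspreader(n, p=0.02):
        return (1 - p) ** n

    print(prob_no_superspreader(20))    # about 0.67
    print(prob_no_superspreader(2000))  # about 3e-18: effectively zero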

In the old world of trade publishing, we knew who those super-spreaders were. They worked in prestigious houses, or they were reviewers at the New York Times, or they were agents who drank with the big fish. If you kept getting rejected, you saw them as choke-points, but once you got in, they were your biggest allies and the only people you needed to please. Those still exist, but they don’t have the propulsion that they once did. This is probably good for the world. Readers matter more. However, it makes for a more confusing world– and nobody really knows how it works.

The value of trade publishing was having a super-spreader in your corner. St. Petersburg math tells us that if you sell 50 copies of an excellent book, you’re still probably going to be forgotten, because super-spreaders are rare and you probably won’t reach one. If you sell 50,000 copies of a shitty book, you’ll be a blip that fades out (celebrity books tend to do this). On the other hand, if you sell 50,000 copies of a great book, it’s only a matter of time before you sell 50,000 more. You might be able to reach escape velocity with a smaller print run than that. There’s a critical point in initial propulsion (and, as importantly, initial social proof) that makes the difference between obscurity and success.

The death (?) of trade publishing

Is trade publishing going to die? No. It will evolve. I think that it’s going to be smaller and a lot more selective.

In every guild-like industry, the profits are made on people in their mid-career years. You lose money on novices: you train them and pay them while they’re not yet producing salable work. You make some money from advanced professionals, but they’re free agents who can command market rate. Publishing used to have a similar model. A house knew that a six-figure marketing campaign behind a first-time author wouldn’t pay for itself until her third or fourth book.

As in the rest of the corporate world, conditions have become somewhat better for free agents but worse for those expecting employers (or, in this case, publishers) to invest in them. The publisher’s not going to throw a six-figure marketing campaign behind a first-time author, out of fear that she’ll bounce to another publisher offering a better rate for her second book. Those relationships don’t seem to exist anymore.

So, how is trade publishing going to change? I think that we’ll see fragmentation as self-publication becomes more common, and as the infrastructure connecting readers and writers directly matures. Mass-market fiction authors who can sell millions and have turned their work into utilities will still find trade publishers. We’ll also see trade publishers holding on to historical nonfiction and biography, where the research and fact-checking demands are high and the validation of being traditionally published still matters. Established literary authors will probably continue to be traditionally published, even at a loss for the house. This is probably the last generation in which a first-time author can get a traditional deal. Look at where the system is now. Getting the right to submit to a publisher requires finding an agent, and agents don’t even read manuscripts anymore; interns, who function like agents for agents, do that. It’s decentralized and unplanned, but it’s a lot of bureaucracy. It’s still navigable, but just barely, and a lot of talented writers are looking at what it offers (small advances, little promotion) and deciding that it isn’t worth it.

That’s not to say that one shouldn’t pursue trade publishing and query agents. For one thing, there’s still a prestige benefit. That, actually, might grow. It’s a lot more competitive to get into Harvard now than it was in 1987. What does that mean? It means that everyone who went there in ’87 had his stock go up. Likewise, as trade publishing contracts, the prestige of having used it successfully might increase. Further, when it works for an author, it works very well. A deal that comes with a six-figure promotion budget is usually worth taking.

The standard package that most authors get isn’t worth giving up the rights for. Let me explain why. Most trade-published books are just over the line and get a “might surprise” package. The publisher isn’t going to promote those books. In fact, the house may ask for a marketing plan in addition to a manuscript. The author does the work. Essentially, it’s self-publishing with important rights given up and shitty royalties, in exchange for a paltry advance. The only reason authors take such deals is that the publishing process is so exhausting and time-consuming that they’re just relieved to be done with it.

If you’re in the “might surprise” category, publishers demand that you have a social media presence, because they’re not going to market the book. Now, social media marketing is going to do, in the 2020s, what traditional marketing did in the 2010s. That is, it’ll lose effectiveness. What is unlikely to lose effectiveness is giving high-quality stuff away: gift copies, lowered prices, free chapters, and search-engine indexing if free material is posted online. If you use trade publishing, you can’t do any of that. You’ve given up those rights. You can’t offer your e-book at 99 cents on your protagonist’s birthday. Your strongest option is to ask people in 140 characters to buy a rectangular solid made of plant matter and chemicals.

Ten years ago, few talented authors self-published. These days, most say that they will self-publish if they can’t find a trade publisher. Over time, self-publishing (which may be renamed to entrepreneurial publishing, because to do it right, one has to do things like hire professional editors and cover designers, and that’s business) will likely be the default way for talented authors to break out. Trade publishing will be for victory laps only.