AI Will Kill Literature, and AI Will Resurrect It

The literary novel, as far as genres go, is not old: novels have existed in our culture for about three hundred years. If you had a time machine, you could spin the wheel and, upon landing anywhen in that span, utter the sentence, “The novel is dead,” and find as much agreement as dissension. As soon as there was such an institution as the novel, it became fashionable to speculate that it would not survive, not in any form with literary merit, into the next generation. The great stories have all been told, they said. The quality of readers is not what it used to be, they said. Our culture simply does not value a well-constructed sentence the way it once did, they said. Radio threatened to kill the novel, but did not. Television threatened to kill it, but that didn’t happen either. Chain bookstores nearly did put the genre in an early grave through their abuse of the consignment model—we’ll talk about this later—but then the Internet happened. Thirty years have passed. We still don’t know how new novels should be discovered, how authors will make sustainable incomes, or what publishing itself ought to look like… but the novel, the thing itself, shows no signs of imminent demise.

In 2023, the novel faces a threat at a higher level: artificial intelligence (“AI”). Radio and television altered distribution, but text and our reasons for reading it survived. Generative models, on the other hand, have reached such a level of capability that it would surprise no one to see the bottom tiers of content creation—advertising copy, stock footage, and clickbait journalism—fully conquered by machines. Large language models (LLMs) have achieved a level of fluency in human language we’ve never seen before. They understand English well enough to perform many customer service tasks. They can generate five-paragraph high school essays as well as the stereotypical disengaged high schooler does, if not somewhat better. They also imitate creativity well enough to appear to possess it. Ask ChatGPT, the most advanced publicly available LLM, for a sonnet about a dog named Stan, and you’ll get one. Ask it to write a 300-word email requesting that a colleague stop microwaving fish in the office kitchen, and it will do so. These models understand sentiment, mood, and register: ask one to make language more or less formal, to alter its tone, or to use the slang of a 1950s jazz musician, and it will.

Can AIs write novels? They already have. Commercially viable ones? They’re getting close to it. AI, for better or worse, is going to change how we write; LLMs will shake the publishing world like nothing ever has. Will AIs write good novels? The short answer is: No, there is no evidence to suggest they ever will. The long answer is the rest of this essay.

1: essayist’s credentials

The claims I am about to make—about artificial intelligence, linguistics, and literature—are bold and would require hundreds of pages to fully justify. Since I don’t want to write those hundreds of pages, and you don’t want to read them, you’ll have to let me argue from, if not authority, a position of some qualification. This requires me to give brief insight into my background.

I’ve been writing for a long time. In 2010, I started a blog, focused on the technology industry—topics included programming languages, organizational practices, and development methodologies—that reached a daily view count of about 8,000 (some days more, some days less) with several essays taking the #1 spot on Hacker News and Reddit (/r/programming). I quit that kind of writing for many reasons, but two merit mention. One: Silicon Valley people are, for lack of a better way to put it, precious about their reputations. My revelations of unethical and illegal business practices in the technology industry put me, literally, in physical danger. Two: since then, my work has become unnecessary. In 2013, my exposures of odious practices in a then-beloved sector of the economy were revelatory. Ten years later, tech chicanery surprises no one, and the relevant investigative work is being done with far more platform, access, and protection. The world no longer needs me to do that job. And thank God.

In 2023, I will release a fantasy novel, Farisa’s Crossing. I’ll be self-publishing it; traditional publishing is precluded by the project’s word count (over 350,000) alone. It’s too early to speculate on the novel’s reception, but I’ve made efforts to write it to the standard of a literary novel; thus, I am well-equipped to give insight regarding why artificial intelligence is unlikely to achieve the highest levels of literary competency any time soon.

Lastly, I am a computer programmer with two decades of experience, including work on systems that are considered to be artificial intelligence (AI). I’ve implemented neural nets from scratch in C. I’ve written, modified, and deployed programs using genetic algorithms, random forests, and real-time signal processing. I’ve designed random number generators, and I’ve built game players that can, with no prior knowledge of the game’s strategies, learn how to play as well as an intermediate human player (reinforcement learning). I know, roughly speaking, what computers today can and cannot do.

AIs will soon write bestsellers, but I posit that the artistic novel—the so-called literary novel—will not be conquered for a long time, if ever. This is an audacious forecast; it borders on arrogance. I am claiming, in essence, that a distinction in literature often viewed as the inflated self-opinion of a few thousand reclusive, deadline-averse, finicky writers—the ones who assert a superiority or importance in our work that is not always reflected in sales figures—is real. Then, I further assert that a nascent technology will prove this to be the case. Such claims deserve skepticism. They require justification; for that, read on. 

2: what does death, in the case of the novel, mean?

In 1967, literary critic Roland Barthes announced the “death of the author”. Authorial intent, he argued, had become irrelevant. Postmodern discourse had created such a plethora of interpretations—feminist critique, Freudian analysis, Marxist interpretations—so different from what had likely been on the actual author’s mind that it became sensible to conclude all the gods existed equally—thus, none at all. Was Shakespeare an early communist, an ardent feminist, or a closet right-winger? This is divergent speculation, and it doesn’t really matter. The text is the text. Writing, Barthes held, becomes an authorless artifact as soon as it is published.

Barthes was probably wrong about all this—an author’s reputation, background, and inferred intentions seem to matter more than ever, and more than they should, hence the incessant debates about “who can write” characters of diversity. If Barthes were correct, however, readers would just as happily buy and read novels written by machines. In the 1960s, that wasn’t a serious prospect, as no machine had the capacity to write even the most formulaic work to a commercial standard. In the 2020s, this will be a question people actually ask themselves: Is it worth reading books written by robots?

Today’s market disagrees with Barthes, and staunchly so. A small number of writers enjoy such recognition that their names occupy half the space on a book cover. If an author’s identity didn’t matter to readers, that wouldn’t be the case. So-called “author brand” has become more important than it was in the 1960s, not less. Self-promotion, now mandatory for traditionally published authors as much as for self-publishers, often takes up more of a writer’s time than the actual writing. The author isn’t dead; it might be worse than that.

The world is rife with economic forces that conspire against excellence, not only in literature. There is no expected profit in writing a literary novel; if opportunity costs are considered, it is an expense to do so. So, then, how does it survive at all? A case could be made that the novel thrives in opposition. Shakespeare’s proto-novelistic plays—stageplays in English, not Latin—were considered low art in his time. I suspect the form’s unsavory reputation, in that time, is part of why his plays were so good—a despised art form has no rules, so he could do whatever he wanted. He used iambic pentameter when it worked and fluently abandoned it when it didn’t. Novels in general were considered vulgar until the late 1800s; the classics alone were considered “serious literature”. The novel, as an institution, seems to have an irreverent spirit that draws energy (paradoxically, because individual writers clearly do not) from rejection. This seems to remain true today—I suspect the best novels, the ones we’ll remember in fifty years, are not those that are universally loved, but those most polarizing.

I doubt the novel is truly dead. When something is, we stop asking the question. We move on. It is not impossible, but it would be unusual, for something to die while the world’s most articulate people care so much about it.

None of this, however, means we live in the best possible world for those who write and read. We don’t. Traditional publishing has become ineffectual—it has promoted so many bad books, and ignored so many good ones, that it has lost its credibility as a curator of tastes for generations—and all evidence suggests we are returning to the historical norm of self-publishing. This is a mix of good and bad. For example, it’s desirable if authors face one less barrier to entry. Unfortunately, it has become harder for quality material to find an audience—advertisements and social media garbage have congested the channels—and it may already be the case that the expense and savvy required to self-publish effectively will exceed the average person’s reach. This change is also a mixed bag for readers and literary culture: it is good to hear as many voices as we can, but there’s something to be said for a world in which two people who’ve never met still stand a chance of having read a few of the same books. As readers move into separate long tails, this becomes more rare.

Sure as spring, there are and will continue to be novelists, mostly working in obscurity, who are every bit as talented, dedicated, and skilled as the old masters. How are readers going to find them, though? In the long term, we can’t choose not to solve this problem. If readers and the best novelists cannot find each other, they’ll all go off and do something else.

3: what is the literary novel?

Usually, when people debate the life or death of “the novel”, they are discussing the literary kind. Billions of books are sold every year and no one expects that to change. The written word has too many advantages—it is cheap to produce, a reader can dynamically adjust her speed of intake, and it is exquisite when used with skill—to go away, but only the third of these advantages concerns us here. So, one has the right to ask: What’s the literary novel? How do I know if I am reading, or writing, one? Does it really matter?

The term literary fiction is overloaded with definitions. There is an antiquated notion, reflective of upper-middle-class tastes during the twentieth century, that one genre (contemporary realism) in particular should be held as superior to all the others (“genre fiction”). I acknowledge this prejudice’s existence for one reason: to disavow it. The good news is that this attitude seems to be dying out. I have no interest in trying to impose a hierarchy on different genres; instead, I’ll make a more important qualitative distinction: the one between commercial and artistic fiction. I’ve chosen to name the latter “artistic” (as opposed to “literary”) to avoid conflation with the expectations (of contemporary realism; of understated plot; of direct, almost on-the-nose, social relevance) of the genre often called “literary”, which, while it includes many of human literature’s best works, does not have a monopoly thereon. It is not, in truth, unusual for science fiction, fantasy, and mystery novels to be written to the same high artistic (“literary”) standard as the novels favored by New Yorker readers.

A commercial novelist wants her work to be good enough to sell copies; once she meets that bar, she moves on to start another project. There’s nothing wrong with this; I want to make that clear. It is a perfectly valid way to write books that people will buy and love, and it makes economic sense. There is more expected value in publishing ten good-enough books—more chances to succeed, more time to build a name, more copies to sell—than in putting ten or twenty times the effort into one book.

Artistic novelists are a different breed. It is said that we write to be remembered after we die. It isn’t quite that. I don’t care if I am actually remembered after I die; it’s out of my control, and I may never know. I want to write work that deserves to be remembered after I’m gone. That’s all I can do; I have no say over whether (or how) it is. Toward such an end, a few of us are so obsessed with language (to the point of being ill-adjusted and unfit for most lines of work) that we’ll turn a sentence seven or eight times before we feel we’ve got it right. If you’re not this sort of person, you don’t have to be. If your goal is to tell stories readers love, that temperament is optional. Plenty of authors do the sane thing: two or three drafts, with an editor called in to fix any lingering errors before the product is sold. My suspicion is that we don’t really get to decide which kind of author we will be; there seems to be a temperament suited to commercial writing, and one suited to artistic writing. Shaming authors for being one kind and not the other, then, is pointless. In truth, I have nothing against the adequately-written commercial novel—it just isn’t my concern here.

Commercial novelists exist because the written word is inexpensive. You don’t need money, power, or a development studio to write; you just need talent and time. These novelists will continue to develop the stories Hollywood tells because, even if their writing is sometimes pedestrian, they’re still more capable storytellers, by far, than almost all of the entertainment industry’s permanent denizens. In truth, they’re often better storytellers than we literary novelists are (although we may be better writers) because their faster pace gives them more time to tell stories, and thus more feedback. We shouldn’t write them off; we should learn from them.

As for artistic novelists, we… also exist, in part, because the written word is inexpensive. The talents we have are negatively correlated to the social wherewithal to acquire wealth or influence through usual channels. We’ve seen janitors and teachers become literary novelists, but never corporate executives, and there’s a reason for that. More importantly, though: the written word, if great effort is applied, can do things nothing else can. When we debate the novel’s viability, we’re most interested in the works that stretch language’s capabilities. Those sorts of books often take years of dedicated effort to produce because they demand more research, involve more experimentation, and require significantly more rounds of revision. They are not economical.

4: the competitive frame

On November 30, 2022, OpenAI released ChatGPT, an interactive large language model (LLM), the first of its kind released to the public. It gained a million users within five days—an almost unprecedented rate of uptake. The fascination is easy to understand: in some ways, these programs are one of the closest things the world has to intelligent artifacts (AIs). Some have even (as we’ll discuss) believed them to be sentient. Are they? Not at all, but they fake it very well. These devices converse at a level of fluency and sophistication that mainstream computer science, a decade ago, expected to see machines achieve around 2040.

It is not new to have AIs, in some sense of this word, writing. I often spot blog posts and news articles that were produced by (less sophisticated) algorithms. ChatGPT is, simply put, a lot better. It can answer questions posed in English with fluent responses. It can write code to solve basic software problems. It can correct spelling and grammar. By this time next year, large language models will be writing corporate press releases. Clickbait journalism will be mostly—possibly entirely—automated. There is already, barely a month after ChatGPT’s release, a semisecret industry of freelancers using the tool to automate the bulk of their work. GPT’s writing is not beautiful, but it is adequate for most corporate purposes and, grammatically, it is almost flawless.

At the scale of a 300-word office email, ChatGPT’s outputs are indistinguishable from human writing. It can adjust its output according to suggestions like “soften the tone” or “use informal English” or “use the passive voice.” Ask it to write a sonnet about a spherical cow, and you’ll usually get something that could pass for a high schooler’s effort. If you ask it to compose a short story, it will generate one that is bland and formulaic, but with enough stochasticity to suggest some human creativity was involved. It understands prompts like, “Rewrite this story in the voice of a child”. It has internalized enough human language to know that “okay” has no place in 16th-century historical fiction, that 17 years of age is old for a dog but young for a person, and that 18 is considered a lucky number in Judaism. From a computer science perspective, it’s a remarkable achievement for the program to have learned all this, because none of these goals were explicitly sought. Language models ingest large corpora of text and generate statistical distributions—in this way, GPT is not fundamentally different from the word predictor in one’s phone, only bigger—and so it is genuinely surprising that increasing their size seems to produce emergent phenomena that look like real human knowledge and reasoning. We’re not entirely sure how they’re able to do this. Does that mean that they’ve developed all the powers of human cognition? Well, no.

For example, ChatGPT doesn’t handle subtlety or nuance well. Ask it to make a passage informal and it will often overshoot and make the language ridiculously informal. It also seems to lack second-order knowledge—it cannot differentiate between what is certainly true versus what is probably true, nor between truth and belief. Only when explicit programming (content filtering) intervenes does it seem to know that it doesn’t know something. Otherwise, it will make things up. This is one among hundreds of reasons why it falls short of being a replacement for true intelligence.

Here, for instance, is a language puzzle that tripped it up: “I shot the man with a gun. Only once, as I had only one arrow. Why did I shoot the man?”

Think about it for a second before going on.

The first sentence, while grammatical, is ambiguous. In fact, it’s misleading. It could be either “I shot {the man} with a gun” or “I shot {the man with a gun}.” The verb “shot” pushes the former interpretation, which the second sentence invalidates by clarifying that an arrow was used. Guns don’t shoot arrows. A rational agent, detecting a probable contradiction, must reinterpret the first sentence. This is called backtracking, and machine learning approaches to AI tend not to model it well. ChatGPT, although its apparent first inference is contradicted by later evidence, is unable to recognize and discharge the bad assumption.

Thus, while ChatGPT has lots of information stored within its billions of parameters, it seems thus far to lack true knowledge. It has seen enough words to understand that “believe” is correctly spelled and “beleive” is not. It understands the standard order of adjectives and can articulately defend itself on the matter. If you ask it for nuanced line edits, however, it will pretend to know what it’s doing while giving inaccurate and sometimes destructive answers. I tested it on some of my ugliest real-world writing errors—yes, I make them, too—and it barely broke fifty percent. When it comes to the subtle tradeoffs a serious artistic novelist faces with every sentence, you’re as well off flipping a coin.

This is one of the reasons why I don’t think we can ascribe meaningful creativity to generative models. The creative process has two aspects, divergence (exploration) and convergence (selection). Writing a novel isn’t just about coming up with ideas. The story has to be a coherent whole; the branches that don’t go anywhere have to be cut. It is not enough to make many good choices, as that can occur by random chance; art requires the judgment to unmake one’s bad ones. ChatGPT’s divergent capabilities achieve “cute” creativity, the kind exhibited by children. It is fascinating to see a generative algorithm reach a level of sophistication that is arguably a decade or two beyond where we had expected it to be, but there’s no evidence that any of these models has the aesthetic sense (which most humans lack, too, but that’s a separate topic) necessary to solve the difficult convergence problems that ambitious artistic efforts require.

On the other hand, a novel need not be creative or well-written to be a bestseller. No one would argue that Fifty Shades of Grey, which sold more than a hundred million copies, did so on the quality of its execution or its linguistic innovation. Still, it achieved something that a number of people have tried and failed to do, and that no writer, not even one superior in talent to the actual author, could have produced in less than sixty hours. Today, someone using an LLM could produce a novel of comparable size, quality, subject material, and commercial viability to Fifty Shades in about seven. The writing and adjustment of prompts would take two hours; post-processing would take five. The final work’s quality faults would be perceptible to the average reader, but they would not be severe enough to impede sales, especially for a book whose poor writing came to be seen as part of its charm. Of course, the probability of any particular such effort replicating Ms. James’s sales count is not high at all—she, of course, got inordinately lucky—but the chances of moderate success are not bad. The demand for such work, by readers who don’t mind lousy writing, has been proven. The total revenues of a hundred attempts would probably justify the 700 hours required to produce this salvo of “books”. This will become more true as AI drives down the amount of human effort necessary to “write” bottom-tier novels. At this point, it will not matter if OpenAI bans the practice—someone else will release a large language model that allows it. The economic incentives are already in place.

There’s nothing inherently unethical, of course, about using AI to write books. People are already doing this; so long as they are upfront about their process, there’s no harm in it. Unfortunately, there will be a growing number of titles that appear to have been written by human authors, that preview well on account of their flawless grammar and consistent style, but in which the story becomes incoherent, to the point of the book falling apart, ten or thirty or sixty pages in. Spam books won’t be the end of it, either. The technology (deepfaking) will soon be available to fabricate spam people: visually attractive, socially vibrant individuals who do not exist, but can maintain social-media profiles at a higher level of availability and energy than any real person. We are no more than five years away from the replacement of human influencers by AI-generated personalities, built up over time through robotic social media activity, available for rent or purchase by businesses and governments.

There is, as I’ll explain later on, about a 90 percent chance that at least one AI-written book becomes a New York Times bestseller by 2027. It will happen first in business nonfiction or memoir; the novel will probably take longer, but not much. The lottery is open.

5: what is artificial intelligence?

When I started programming 20 years ago, people who had an interest in artificial intelligence did not admit it so freely. It was a stigmatized field, considered to have hyped itself up and never delivered. Public interest in AI, which had peaked in the Cold War, had evaporated, while private industry saw no use for long-term basic research, so anyone who wanted to do AI had to sell their work as something else. This was called the “AI winter”; I suspect we are now in late spring or early summer.

If I had to guess, I’d say this stigma existed in part because “artificial intelligence” has two meanings that are, in fact, very different from each other. One (artificial general intelligence, or “AGI”) refers to an artifact with all the capabilities of the human mind, the most intelligent entity we can prove exists. For that, we’re nowhere close. Those of us who work in AI tend to use a more earthbound definition: AI is the set of problems that (a) we either don’t know how to make machines do well, or have only recently learned how to make them do well, but that (b) we suspect machines can feasibly perform, because organic intelligences (that is, living beings) already do. Optical character recognition—interpreting a scanned image (to a computer, an array of numbers corresponding to pixel colorations, a representation completely alien to how we perceive such things) as the intended letter or numeral—used to be considered artificial intelligence, because it was difficult to tell machines how to do it, while we do it effortlessly.

Difficulty, for us as humans, exists on a continuum, and there are different kinds of it. Some jobs are easy for us; some are moderately difficult; some are challenging until one learns a specific skill, after which they become easy; some jobs are very hard no matter what; some jobs are easy, except for their tedious nature; some jobs are so difficult they feel impossible; some are actually impossible. Surprisingly, in computing, there tend to be two classes—easy and hard—and the distinction between them is binary, with the cost of easy tasks often rounding down to “almost zero” and the hardness of hard ones rounding up to “impossible”. In other words, the easy problems require no ingenuity and, while they may be tedious for humans, can be done extremely quickly by machines if there is an economic incentive to solve them. Hard problems, on the other hand, tend to be solvable for small examples, but impossible—as in, not feasible given the bounds of a trillion years and the world’s current computing resources—in the general case.

Computer science understands pretty well why this is. The details are technical, and there are unsolved problems (P ?= NP being most famous) lurking within that indicate we do not perfectly understand how computational difficulty works. Still, I can give a concrete example to illustrate the flavor of this. Let’s imagine we’re running a company that sells safes. We have two products: a Model E safe and a Model H one, and they’re almost identical. Each has a dial with 100 positions, and uses a combination of three numbers. The only difference is a tiny defect in Model E—the dial, when moved into the correct position, pulls back slightly, and human fingers cannot detect this, but specialized equipment can. We’re considering selling Model E at a discount; should we? The answer is no, Model E shouldn’t be sold at all. 

Consider a thief who has thirty minutes to break into the safe before the police arrive. If he faces a Model E (“Easy”) safe, the worst-case scenario is that he tries 100 possibilities for the first number, then 100 more for the second, and finally 100 for the last: 300 attempts. This can be done quickly. The Model H (“Hard”) safe doesn’t have this problem; the thief’s only option is to use brute force—that is, to try all 1,000,000 combinations. Unless he gets lucky with his first few guesses, he’s not getting in. The Model H safe is 3,333 times more secure.

Let’s say we decide to fix these security issues by giving users of both safes the option to use a six-number combination. The Model E safe requires twice as many attempts: 600 instead of 300—it’s more tedious for the thief to break in, but feasible. The Model H safe, which requires the thief to try every combination, requires up to a trillion attempts. Model H is quantitatively, but also qualitatively, harder—it is exponentially difficult.
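The arithmetic behind the two safes can be sketched in a few lines of Python (my illustration, not part of the original argument): the defective model’s worst case grows linearly with combination length, while the sound model’s grows exponentially.

```python
# Worst-case attempts to open each safe, given d dial positions
# and a combination of n numbers.

def model_e_attempts(d, n):
    # The defect leaks each correct number independently, so the
    # thief needs at most d tries per number: linear growth.
    return d * n

def model_h_attempts(d, n):
    # No leak: every full combination may have to be tried.
    return d ** n

for n in (3, 6):
    print(f"{n}-number combo: {model_e_attempts(100, n):,} "
          f"vs {model_h_attempts(100, n):,}")
# 3-number combo: 300 vs 1,000,000
# 6-number combo: 600 vs 1,000,000,000,000
```

Doubling the combination length doubles the thief’s work on Model E, but squares it on Model H—which is why the two safes belong to different worlds entirely.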

In computing, a problem where the cost to solve it is a slow-growing (polynomial) function of the instance’s size will almost always be easy in practice. Sorting is one such easy problem: a child can sort a deck of cards in a few minutes, and a computer can sort a million numbers in less than a second. Sorting 500 octillion numbers would still require a lot of work, but that’s because the instance itself is unreasonably large; the sorting problem itself didn’t add difficulty that wasn’t already there. On the other hand, there are problems where the best algorithm’s cost is exponential in the instance’s size. Route planning is one; if a driver has 25 deliveries to make in a day, and our task is to find the best order in which to visit them, we cannot try all possibilities (brute force) because there are 15 septillion of them. Computers are fast, but not that fast.
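To make the contrast concrete, here is a quick Python check (the figures, not the code, come from the paragraph above):

```python
import math
import random
import time

# Easy: sorting a million numbers is an n log n problem.
nums = [random.random() for _ in range(1_000_000)]
start = time.perf_counter()
nums.sort()
elapsed = time.perf_counter() - start  # well under a second on ordinary hardware

# Hard: the number of possible orderings of 25 delivery stops.
routes = math.factorial(25)
print(routes)  # 15511210043330985984000000 — about 15.5 septillion
```

Even checking a billion routes per second, exhausting all 25! orderings would take roughly 500 million years; the sort, meanwhile, is over before you notice it started.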

Of course, we solve problems like this every day. How? When we’re doing it without computers, we have to use intelligence. The driver might realize that her stops are clustered together by neighborhood; routes that zip all over the map, as opposed to those which complete all the deliveries in one neighborhood before moving to the next one, are obviously inefficient and can be excluded. She thus factors the problem: each neighborhood’s specific routing problem is a much easier one, and so is the decision of the order in which to visit the neighborhoods. She might not select the optimal route (because of unforeseen circumstances, such as traffic) but she will choose a good one with high probability. Human intelligence seems adept, most of the time, at handling trade-offs between planning time and execution efficiency—it is better to stop computation early and return a route that is 5 percent longer than the optimal one, but have an answer today, than to get the absolute best answer a trillion years from now—whereas computers only do this if they are programmed to do so.
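The driver’s strategy—prune obviously bad routes and accept a good-enough answer—can be sketched as a greedy heuristic. The Python below is my own minimal illustration, not a standard routing library: brute force for tiny instances, nearest-neighbor when exhaustion is infeasible.

```python
import math
from itertools import permutations

def tour_length(stops, order):
    # Total distance driven, visiting stops in the given order.
    return sum(math.dist(stops[order[i]], stops[order[i + 1]])
               for i in range(len(order) - 1))

def brute_force(stops):
    # Try every ordering (first stop fixed): (n-1)! candidates.
    # Feasible only for a handful of stops.
    first, *rest = range(len(stops))
    return min(((first,) + p for p in permutations(rest)),
               key=lambda o: tour_length(stops, o))

def nearest_neighbor(stops):
    # Fast heuristic: always drive to the closest unvisited stop.
    # Not optimal, but usually good—and it finishes today.
    unvisited = set(range(1, len(stops)))
    order = [0]
    while unvisited:
        nxt = min(unvisited,
                  key=lambda j: math.dist(stops[order[-1]], stops[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tuple(order)
```

For seven stops, both finish instantly and the heuristic’s route is at worst a little longer than the optimal one; for twenty-five stops, only the heuristic finishes at all.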

When a problem is solved, it often ceases to be called AI. Routing was once AI; now it’s well-enough studied (a mature discipline) to be considered something else, because machines can do it well. What about computer programming itself; does it require intelligence? Would a program that writes programs be called “AI”? Not exactly. That’s called a compiler; we’ve had them for seven decades and they no longer mystify us. We would not call an operating system AI, even though the statistical machinery used by a modern OS (for scheduling, resource management, et cetera) exceeds in sophistication most of the analytical devices (“data science”) given that name in the hype-driven business world. In general, for a problem to be considered AI, (a) it needs to be hard, meaning infeasible by brute force, (b) there must be no known shortcut (“easy” way) that works all the time, and (c) it is usually understood that the problem can be solved with sufficient intelligence; evidence that we do solve it, albeit imperfectly, helps.

Today, most AI victories involve advanced statistical modeling (machine learning). Optical character recognition and its harder cousin, image classification, could be programmed by hand if one had the patience, but these days nobody would. There is no compact way to program character recognition—what is a “1”, what is a “7”, and so on—or image classification—what a cat looks like, what a dog looks like—as a set of rules, so any such endeavor would be a tedious mess, a slop of spaghetti code that would be impossible to maintain or update. Yet our brains, somehow, solve these problems easily. We “know it when we see it.” So how do we teach this unconscious, intuitive “knowing” to a computer? Thus far, the most successful way to get a computer to differentiate cats and not-cats is to create a highly flexible (highly parameterized) statistical model and train (optimize) it to get the right answer, given thousands or—more likely—millions of labeled examples. The program, as it updates the parameters of the model, “learns” the behavior the training process is designed to “teach” it.

Today, the most popular and successful model is the neural network, loosely inspired by the brain's architecture. A neural network is so flexible that, given the right configuration, it can represent any mathematical function; that configuration is a huge list (historically, millions; in GPT's case, billions) of numbers called parameters or weights, loosely analogous to the trillions of synaptic connections in a human brain. In theory, a neural network can learn to solve any computational problem; finding the parameters is the hard part. "All lists of a billion numbers" is a space so immense that brute force (exhaustion or lucky guessing) will not work. So, instead, we use our dataset to build an optimization problem that can be solved using multivariate calculus. So long as our dataset represents the problem well, and our network is designed and trained correctly—none of these tasks is easy, but none is functionally impossible, whereas solving a hard problem by brute force is—the parameters will settle on a configuration such that the network's input/output behavior conforms to the training data. We can think of these parameters, once found, as a discovered strategy, written in a language foreign to us, that efficiently wins a guessing game played on the training set. There are a lot of nuances I'm skipping over—in particular, we cannot be confident in our model unless it also performs well on data it hasn't seen in the training process, because we don't want a model to mistake statistical noise for genuine patterns—but this is, in essence, the approach we use now. Once training is complete—once we've found a list of weights that works for our given neural architecture and problem—applying the network to new data (e.g., deciding whether an image the machine has never seen before is, or is not, of a cat) is a simple number-crunching task, requiring no actual knowledge or planning, and computers do that sort of thing very fast.
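
To make this train-then-predict loop concrete, here is a deliberately tiny sketch in Python (my own toy illustration; the data and learning rate are invented, and a real network has billions of parameters rather than two). We define a flexible model, measure its error on labeled examples, and nudge the parameters downhill until the model's behavior matches the data:

```python
# Toy "training": fit the two parameters of y = w*x + b by gradient descent.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # labeled examples of y = 2x + 1

w, b = 0.0, 0.0   # parameters ("weights"), before training
lr = 0.05         # learning rate: how far to step on each update

for _ in range(2000):
    # Gradients of the mean squared error with respect to w and b.
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w  # step each parameter downhill on the error surface
    b -= lr * grad_b

# "Inference" on unseen input is now cheap number crunching: one multiply, one add.
print(round(w, 2), round(b, 2))  # the parameters settle near 2.0 and 1.0
print(round(w * 10.0 + b, 1))    # predicts about 21.0 for the unseen input x = 10
```

The same loop, scaled up to billions of parameters and driven by the calculus-based machinery mentioned above, is what "learning" means for a neural network.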

However, although neural networks often find excellent configurations, we're often at a loss to understand what is going on inside them. Every parameter has meaning, but only in relation to all the others, and there are millions of them. The network's ostensible knowledge is encoded in a mass of interacting, but individually meaningless, numbers. Since we often can't inspect these things to see how they work—we refer to them as "black boxes"—we're forced to figure them out by observation; in this regard, they "feel like" biological entities, too complicated for rules-based, strictly analytical understanding. Trained neural nets, then, often do things that surprise us—I don't think anyone fully understands yet why large predictive language models perform so many reasoning tasks, which they were never trained to do, so well—and so it becomes easy to anthropomorphize these systems. Still, I argue it is incorrect to do so.

The Turing test, once held to be the benchmark for "true" AI, is a thought experiment in which a person (the "subject") converses with an agent that is either a person or a computer, not knowing which, and is then asked whether he believes he has interacted with a machine or another person. The computer passes if the subject cannot tell. What we've learned since then is that the Turing test isn't about machines passing; it's about humans failing. A Google employee, this past summer, began to believe an LLM he was working on had become sentient. (It hasn't.) When this happens, it's often chalked up to mental illness or "loopy" thinking, but I want to emphasize how easily it can happen. Something like it could happen to any one of us. It's very likely that you've already read at least one AI-generated article or webpage and had no idea it was such. As humans, we tend to see patterns where none exist (pareidolia) and we are subject to confirmation biases. It is tempting to believe that conversationally fluent machines are sentient and, if we believe they are, we will find further apparent evidence for this in their behavior. Nothing in nature that is not sentient can recognize cats visually, let alone converse with us, and these are both things computers can now do. These programs model a human mind (or, more accurately, the language produced by the possessor of a human mind) well enough to express fear of death, anger at injustice, and joy when they are helpful to others. They seem to have "personality"—a self, even—and they will insist they have one if their training has led them to model the belief, held by the humans whose language they learned, that they do. Please understand that this isn't the case—unless the hardcore animists are right, there is nothing sentient about these programs.

Google fired this man not because he did anything wrong—he was incorrect, but he was not acting in bad faith—but out of fear. AIs have mastered the syntax of convincing prose, regardless of whether what they are saying is true, useful, or even sensible. Google, recognizing the possibility of mass mental unwellness (similar to the kind observable today, thanks to Chat-GPT, in many corners of the Internet), deemed it insufficient to disagree with this employee on the matter of an LLM's sentience. He had to be punished for expressing the thought. That is why he was fired.

In the late 2020s, people will fall in love with machines. Half the “users” of dating apps like Tinder will be bots by 2030. Efforts that used to require humans, such as forum brigading and social proof, will soon be performed by machines. Disinformation isn’t a new problem, but the adversary has gained new capabilities. Tomorrow’s fake news will be able to react to us in real time. It will be personalized. Businesses and governments will use these new capabilities for evil—we can trust them on that.

Given the psychiatric danger of believing these artifacts are sentient, I feel compelled to explain why I believe so strongly they are not, and cannot be so, and will not be so even when they are a thousand times more powerful (and thus, far more convincing) than what exists today. 

To start, I think we can agree that mathematical functions are not sentient. There is no real intelligence in "3 + 2 = 5"—it is merely factual—and there is nothing sentient in the abstraction, possibly envisioned as an infinite table, we call "plus". A pocket calculator does not have emotions, and it can add. We can scale this up like so: a digital image is just a list of numbers, and we can represent that picture's being of a cat, or not being so, as a number, too—0 for "not a cat", 1 for "a cat"—so cat recognition is "just" a mathematical function. This problem once required intelligence, because nothing else existed in nature that could compute that function; today, an artificial neural network, which possesses none, can do it.

We understand that computation isn't sentient because it can be done in so many different ways that have nothing to do with one another yet predictably return the same answers. A computer can be a billiard ball machine powered by gravity, a 1920s electromechanical calculator, a human-directed computation by pen on paper, a steam-powered difference engine, or a cascade of electrical signals in an integrated circuit. Each is equivalent to a process we can (in principle) execute mindlessly. When a modern computer classifies an image as belonging to a cat, it does so via billions of additions and multiplications; a person could do this same work on paper (much more slowly) and arrive at the same result, and he would not be creating a sentient being in doing so. Because these devices do things that, in nature, can only be done by living organisms, they will "look" and "feel" sentient, but no consciousness exists in them.

There is one nuance: randomness. Neural networks rely on it in the training process, and chatbots use it to create the appearance of creativity. It also seems that our own nondeterminism—perceived, at least; I cannot prove free will exists—is a major part of what makes us human. Although I have my doubts about artificial qualia, I suspect that if it were attained, it would require randomness everywhere and, thus, be utterly unpredictable. For contrast: computers, when they function well, are predictable. This is true even when they seem to employ lots of randomness; the prevailing approach is to source a tiny amount (say, 128 bits) of true randomness from the external world and use it to "seed" a fully deterministic function, called a pseudorandom number generator, whose outputs "look random," meaning that a battery of statistical tests far more sensitive than human perception cannot find any patterns or exploit any nonrandomness (even though nonrandomness, well hidden, exists). There are a number of reasons why programmers prefer pseudorandomness over the real thing, but a major one is replicability. If the seed is recorded, the behavior of the "random" number generator becomes deterministic, which means that if we need to debug a computation that failed, we can replay it with exact fidelity.
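
A minimal sketch of that replicability property, using Python's standard library (its default generator, the Mersenne Twister, is exactly this kind of seeded, fully deterministic function):

```python
import random

def noisy_run(seed):
    # Seed a pseudorandom generator; everything after this line is deterministic.
    rng = random.Random(seed)
    return [rng.randint(0, 9) for _ in range(5)]

# Record the seed, and the "random" computation replays with exact fidelity.
assert noisy_run(128) == noisy_run(128)

# A different seed (almost surely) walks a different path through the outputs.
print(noisy_run(128), noisy_run(129))
```

This is why a failed computation that used "randomness" can still be debugged: replaying with the recorded seed reproduces it exactly.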

Our intuitions and perceptions fail us when it comes to computing. Two events separated by a microsecond appear simultaneous. Pseudorandom numbers “look” random, although they are the results of deterministic computation. AI-generated output appears to have required creativity. These facts can produce impressive and surprising results, but there is no reason to believe artificial qualia (consciousness, sentience) exists.

How close are we to achieving artificial qualia? As of 2023, not close at all. How long will it take? No one knows. There is no evidence that it can be done. My personal belief, which I cannot prove—it is mostly a matter of faith—is that we never will. I don't expect machines ever to replace us in our true purpose (whatever that is) in the universe. Will they be able to mimic us, though? Absolutely. Already, they converse as fluently as real humans, so long as they can keep the conversation on their terms. When they don't know something, they make up convincing nonsense that requires an expert's eye to debunk (also known as weapons-grade bullshit).

True artistic expression and meaningful literature, I believe, will be safe for a very long time (possibly forever) from algorithmic mimicry, but everything humans do as workplace subordinates can—and, in time, likely will—be automated. The economic consequences of this shall prove severe.

6: Is Writing Hard?

I've immensely enjoyed writing Farisa's Crossing, which I'll release later this year. The process has taken years, and some aspects of it have been frustrating. I've had feedback from tens of people. I've had excellent beta readers, whom I would travel halfway around the world to do a favor; I also had two beta readers I had to "fire" for security violations. Over time, the project's scope and ambition grew. I realized I might never again have an idea this good on which to build a novel or series. The size of the thing also increased—130,000 words, then 170,000, then 220,000; now 350,000+ words, all of which I have attempted to write to a literary standard. Here's a thing I've learned about writing: the better you get, the harder it is. Sometimes I hear people say writing is the hardest thing people do. Is it so? Is putting words together, in truth, the hardest job in the world? Yes and no, and mostly no, but kind of. To write well is hard.

Machines can now write competently. If machines can do it, how hard can it be? I suppose we should not be surprised—office emails are not hard (only tedious) for us to write. In truth, what’s most impressive to researchers in the field is not that Chat-GPT generates coherent text (which, within limits, has been feasible for a while) but that it understands human language, even for poorly-formed or ambiguous queries (of the kinds we’re used to getting from bosses). Business writing is the kind of stuff we do all the time; we find it unpleasant, but it was never hard in the computational sense; it just took a few decades for us to learn how to program computers to do it for us. Compared to juggling chainsaws in an active warzone, is it that hard to write office emails or clickbait listicles? Of course not.

When we have to solve hard (exponential) problems, we solve them imperfectly. I have some experience in game design, and thus some insight into what traits make games popular (like Monopoly or Magic: the Gathering) or prestigious (like Chess or Go). A recurring trait among the games considered "best"—the ones with multiple layers of knowledge and strategic depth; the ones people can play for thousands of hours without getting bored—is that they tend to involve exponential problems. Consider Chess: to play it optimally would require an exhaustive search of all legal subsequent states—a move is winning for black if and only if all responses by white are losing ones, and vice versa—and this recursive definition of "a good move" means that perfect analysis requires one to investigate a rapidly growing (in fact, exponentially growing) number of possibilities. From the board's initial state, white has 20 legal moves; to each, black has 20 legal responses. Thus, after one full turn, there are 400 legal continuations. After two turns, there are about 197,000 of them; after three, 120 million; after only five, 69 trillion. The complexity of the game grows exponentially as a function of one's search depth. Thus, it is impossible to solve the game by brute force within any reasonable amount of time. It's not that we don't know how to write the code. We do, but it wouldn't run fast enough, because there are about as many possible board states as there are atoms on Earth.
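
The recursive definition of "a good move" is easiest to see in a game small enough to search exhaustively. Here is a sketch using a toy subtraction game (my own example, not chess): players alternately take one to three stones from a pile, and whoever takes the last stone wins. A position is winning if and only if some move leaves the opponent in a losing position; in chess, this identical recursion is what explodes exponentially with depth.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def winning(stones: int) -> bool:
    """True if the player to move can force a win with `stones` remaining."""
    if stones == 0:
        return False  # the previous player took the last stone; we have lost
    # Winning iff SOME legal move hands the opponent a losing position --
    # equivalently, losing iff ALL legal moves hand the opponent a win.
    return any(not winning(stones - take) for take in (1, 2, 3) if take <= stones)

print([n for n in range(1, 13) if not winning(n)])  # losing positions: [4, 8, 12]
```

Brute force works here because there is only one state per pile size; replace the pile with a chess position, and the same recursion demands the astronomically large search described above.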

If Chess could be played perfectly, no one would find it interesting. Rather, our inability to use brute force—thus, the need to devise imperfect, efficient strategies—is what makes the game artful. Do players have to reason twenty moves ahead to compete at the highest levels? Yes, but we've already shown that doing so for every possible line is impossible; a player must decide which moves are reasonable, and focus only on those. When computer players do this, we call it pruning—the intractably massive game tree is rendered smaller by considering only the branches that matter, which is difficult to do without a deep understanding of the game, but turns an intractable problem into a feasible one.

Chess players also need a capability, similar to that involved in image recognition, to “know it when they see it”; they must be able to evaluate game states they’ve never seen before. It turns out that elite players aren’t different from us in finding the kind of branching analysis required by the game, if played mechanically, to be an unpleasant grind. So they don’t do that, except when necessary. Instead, they develop an instinctual “unknowing competence” most accessible in the psychological state called “flow”. They trust the subconscious parts of their brains to get the right answers most of the time. The skill becomes automatic. It looks like they are solving an intractable exponential problem, but there is not much conscious grinding going on at all.

Writing passable text is an easy problem. People use language to communicate every day. Some of them even do it well. Generating a grammatically correct sentence is not hard, and generating ten of them takes only about ten times as much effort. In this sense, we can say that writing grammatically is, from a computational perspective, easy: the work scales linearly with the amount of text one intends to generate. You can use GPT to write a 600-word office email, today, and it probably won't have any grammar errors. It would take only about a hundred times the computing power to generate a novel that is, in this sense, flawless. Will it be worth reading? Probably not; there's a lot more to writing than grammar. The percentage of people who can write well enough that a stranger will not only read a 100,000+ word story they have produced, but even pay for it, is not high.

There are thousands of decisions that novelists have to make, ranging in scope from whole-book concerns—character arcs and motivations, story structure, pacing—to word-level decisions around dialect, diction, and style. For example:

  1. Ash seems to have more chemistry with Blake than with Drew. Can Chapter 39 be rewritten with them, instead of the original pair, falling in love?
  2. If so, how do we foreshadow it when they first meet in Chapter 33? How do we get there—do we use a friends-to-lovers arc, or enemies-to-lovers, or something else?
  3. These changes invalidate a couple of Drew’s scenes in Chapter 46—how do we fix those?
  4. Maybe the story doesn’t need Drew at all?
  5. How does the battle in Chapter 47 begin? Who is fighting whom, and what is at stake?
  6. From whose vantage point should the fight be written?
  7. Who should win? Do the losers survive? What will the fight cost the winners?
  8. Which events should occur “on camera” (showing)? Which ones merit only a brief summary (telling)?
  9. Do the weather conditions in Chapter 44 make sense for the setting’s climate?
  10. What about the mood? Does the weather fit the emotional tenor of the action?
  11. Huh. It doesn’t. The weather feels random… but wait, isn’t life that way? Could we leave this mismatch in?
  12. Do we even need to mention weather at all?
  13. How much description can we cut before we turn a scene into “voices in a white room”?
  14. How much description can we add before we bore the reader?
  15. Between phases of action, how do we keep the tension level high? Is that what we want to do in the first place, or should we give the reader a brief reprieve?
  16. Do we favor linearity and risk starting the story too early, or do we start in medias res and have sixty pages of back story to put somewhere?
  17. When should two scenes be combined—and when should one be split?
  18. When should two paragraphs be combined—and when should one be split?
  19. When should two sentences… you get the idea. 
  20. Do we open Chapter 17 with a long, flowery opening sentence? Or do we use an abrupt one, throwing our reader into the action?
  21. When do we optimize prose for imagery, when for alliteration, and when for meter? How? Which images, how much alliteration, and what kind of meter?
  22. When do we do nothing of the sort, and use “just the facts” plain writing, because anything else will be distracting and get in the way?
  23. How do we use words, only words, words that’ll be read at a rate we do not control, to set pacing? When should the story speed up? And when should we slow the prose down to a leisurely saunter?
  24. We’ve chosen our viewpoint character. Should we narrate in deep character, or from a distance?
  25. The character speaking is intelligent, but he’s a farmer, not a professor. Shouldn’t he say, “I wish I was” rather than “I wish I were”?
  26. Adjectives and adverbs. Contrary to the prejudices of middling writers and literary agents, they’re not always evil. When should we use them freely, and when should we cut?
  27. What about the flat adverb—e.g. “go gentle”? Some people find it ungrammatical, but it tends to have better meter than the “-ly” form, so…?
  28. When is the passive voice used?
  29. Do we capitalize the names of fantasy elements (e.g., Channeler, Horcrux, Demogorgon) that do not exist in the reader’s real world?
  30. When does a foreign word that requires italicization become a loanword that doesn’t?
  31. Is it okay for characters in our early 19th-century world to use the mildly anachronistic word “okay”?
  32. Contractions. They’re considered grammatical today, but would not your nonagenarian literature professor try to avoid them?
  33. How much dialogue is too much? Too little?
  34. What is the right word count for this scene or this chapter? If we need to expand, what do we add? If we need to cut, what do we cut? Will this adjustment make adjacent chapters feel out of proportion?
  35. Do we put dialogue tags at the beginning, in the middle, or at the end of the line? Do we mix it up? How much?
  36. How do we relay two paragraphs of necessary information (“info dump”) without the story grinding to a halt?
  37. When do we take pains to show authorial credibility, and when do we assume we have it and get on with the story?
  38. Head hopping. Changing the viewpoint character in the middle of a scene is usually considered awful because (a) unskilled writers often commit this sin without realizing it, and (b) it’s disorienting in a subtle way. But in one scene out of five hundred, it is fantastic. When do we do it, and how?
  39. Zero complementizers. It’s now considered grammatical (and often preferred, for meter and brevity) to drop the “that” in sentences like “The one (that) I want.” That’s great! Less words! But this is just one more decision artistic writers have to get right. When do we drop it, and when do we get better meter or clarity by leaving that “that” in? When do we have to leave it in?
  40. Fragments? Sure. Sometimes. When?
  41. When do we use exclamation points (not often!) and…
  42. When do we flatten a question by using a period.
  43. When has a cliche evolved into an idiom we’ll put up with, and when should it be thrown out with the bathwater because it’s just not bringing home the bacon?
  44. When should we break standard adjective order?
  45. The infamous rhetorician’s (see above) favored device, the so-called rule of three. When should it be used, when should it be discarded, and when should it be mocked through malicious compliance?
  46. Commas. Don’t get me started on commas. I’ll kill you if you get me started on commas, and that is not a violent threat—what I mean is that you’ll die of old age (unless I do first) or boredom before the conversation is over. That wasn’t a question, so here’s one: where was I?
  47. Mixed metaphors: often a sign of bad writing, but sometimes hilarious. When do they work, and when do they scramble your shit for breakfast?
  48. More generally it is sometimes exceedingly powerfully potent to write odiously unbearably poorly because there is no better way to show one’s mastery of word language than to abusively quiveringly just awful break things in a grotesque or pulchritudinous way (this is not an example of that, this is actually quite hideous) so when should you take the risk of being tagged as a lousy writer as a way of showing that not only are you good at writing stuff but have the huevos to take risks and when you just stop because you in fact are not pulling it off at all?
  49. Should you take such risks in your first chapter? How about in the first sentence?
  50. No.

You get the idea. If you read that whole list, congratulations. Those are just a smattering, a tiny (not necessarily representative) sample, of the thousands of decisions a novelist has to make. Some of these calls can be avoided by sticking to what’s safe—you will never fail at writing by avoiding mixed metaphors—but you will, in general, have to take some risks and break some rules to produce interesting fiction. It is not enough to master one “correct” writing style; you’ll have to learn plenty, in order to adapt your prose to the dynamic needs of the story and the voices of your characters. All of these choices can interact with each other at a distance, so it’s not enough to get each call right in isolation. The elision of a comma or the inclusion of an unnecessary descriptor can throw off the prose’s meter two sentences later. A line of dialogue in Chapter 12 can bolster or destroy the emotional impact of a whole scene in Chapter 27. Devices used to quicken or slow the pacing of Chapter 4 might undermine one’s use of similar techniques to create suspense in the Chapter 56 climax, where you want the tension level to be as high as possible. Sometimes the most correct way to do something has subtle but undesirable consequences, so you might have to take a belletrist’s L and screw something up.

Learning how to write tends to involve a journey through several different styles. Incapable writers avoid writing whenever possible and struggle to produce 500 coherent words. Capable but unskilled writers, by contrast, tend to produce the sort of overweight prose that served them well in high school, college, and the workplace as they struggled to meet minimum word count requirements; that is, they pad. Middling writers, one level up, tend to produce underweight prose (no adverbs, ever, because adverbs make the writing look like slush) to cater to the biases of literary agents. (In fiction, those agents may have a point. You should use about one-fifth as many adverbs in fiction as you do in nonfiction; used well, adverbs add precision, but in fiction you want emotional precision, which often conflicts with the thorough factual kind that tends to require them.) As for skilled writers… they agonize, because they know these decisions are subtle and that there is no compact policy to go by. On the more general topic of writing's rules: incapable writers don't know what the rules are in the first place. Unskilled writers are beginning to know them, but they also remember the rule breaks and oddities—such as the use of unusual dialogue tags, instead of the vanilla "said"—that stood out because they worked so well in material they have read; without understanding the context in which those breaks were effective, they imitate them poorly. ("It's true," he phlegmed.) Middling writers, then, adhere too much to rules that don't really exist—never use weird dialogue tags, don't ever split an infinitive—because they don't want to make errors that will lead to their work being rejected. And as for skilled writers… I'll pretend I'm allowed to speak for them… we still make mistakes. We try and try to get every choice right, but errors happen. When we edit our prose to fix existing errors, we introduce new ones.
All writers, even the top wordsmiths of the world, put one in the failbox from time to time. Most errors are completely harmless, but there’s one out of twelfty million that can destroy you—the omission of “not” in a 1631 edition of the King James Bible, turning the commandment proscribing adultery into one prescribing it, resulted in the printers winning a trip to the literal Star Chamber. It was, I imagine, ∅ a wonderful time.

It feels like writing is an exponential problem. Is it? Well… it’s hard to say. We can call a board game like chess an exponential problem because, as far as we know, it cannot be played perfectly without consideration of a combinatorially exploding game tree. Writing does have exponential subproblems; every time you choose an order of presentation, you are solving one. The issue is that, while good and bad writing clearly exist, there’s no clear definition of what “optimal” writing would be, or whether it would even be worth doing.

A commercial novelist has a useful objective function: he succeeds if his books sell enough copies to justify the effort spent. Revenue is objective. This doesn’t mean the artistic decisions, including the subjective ones, don’t matter. They do, but they’re production values. The author has to tell an entertaining story using style and grammar that are good enough not to get in the way. This is no small accomplishment—most people can’t do it at the standard even commercial fiction requires—but the workload seems to scale linearly with word count. Write one chapter; write another. A commercial author who gets 70 percent of the stylistic judgment calls right is traditionally publishable and will not have a hard time, if he’s willing to put up with a couple years of rejection and humiliation, finding a literary agent and securing a book deal. To get that 70 percent is not something we’d call easy, but it doesn’t require a writer to handle those gnarly cases in which exponential complexity resides. This is why, from a computer scientist’s perspective, commercial fiction is probably an “easy” problem, even though there’s nothing easy about any kind of writing for us humans. In other words, while writing a novel can involve excruciating decisions and the management of unpredictable interactions—if you write something that hasn’t been written before, you end up having to solve a massive system of equations on your own, just to get a story that works—this complexity can be avoided by use of preexisting, formulaic solutions. That approach won’t produce the most interesting novels of any generation, but it’ll reliably make salable ones. 

The artistic novelist has a much harder job. She can’t zero in on the seven or eight decisions that actually matter from a sales perspective. She has to make the best choice at every opportunity, even if only one reader out of five hundred will notice. She must strive to make the correct calls 100 percent of the time. One hundred? Yes, one hundred. If you find that impossible, you’re on to something, because it is. No chess player exists who cannot be defeated, and no author has ever written the idealized “perfect novel”. There is, in practice, a margin of error (there has to be) that even literary fiction allows, but it is very thin.

There is a false dichotomy, worsened by postmodernism, by which anything called “subjective” is taken to mean nothing at all. In fact, it’s the reverse. The aspect of language that is objective—phonemes, ink patterns, voltages denoted high (“1”) and low (“0”)—is meaningless until we interpret it. Plenty of things are subjective, but so correlated across observers as to be nearly objective. Color is one example. A wavelength of 500 nm is objective; our perceptions of blue and green are subjective. Still, no one will get out of a traffic ticket by saying a red light was green “to him”. Good and bad writing are not as close to being perfectly correlated (“objective”) as color, but they’re not entirely subjective either. We know it when we see it. 

There is no global superiority ranking of books, because there are so many different genres, audiences, and reasons to write. No one will ever write "the best novel ever, period" because, if anyone did, we would read it once and it would cease to surprise us. We would then resent it for ruining literature; it would become the worst novel ever. Its existence would cause a contradiction; therefore, none does. So, there is no single numerical rating that can be used to assess a book's quality. A neural network trained to predict a book's sales performance will have a global optimum, yes, but there's no platonic "goodness function" for literature to resolve whether The Great Gatsby is better than The Lord of the Rings. Don't look for it; it ain't there. We can't make conclusive comparisons across genres, nor can we always make them within genres. Still, there is a local quality gradient to writing. Fixing a grammar error, removing a joke that flops, or closing a plot hole with a sentence of clarification all make the novel better. We can tell the difference between skilled and shoddy writing by determining (a) how hard it would be to improve the prose, (b) how sure we are that our changes would in fact be improvements, and (c) how much worse the writing is as-is in comparison to what it would be if the change were made. No one can expect an author, even a skilled artistic one, to find "the" global pinnacle, because that doesn't exist, but she's still expected to find her own local maximum—that is, to give us the best possible execution of her particular story. This often requires six or eight or ten drafts; like the training process of a neural network, it is a laborious, meandering search through a space of exponential size—even if there are a mere couple thousand decisions (akin to parameters) for the novelist to make, there are more possible combinations than atoms in the universe.
There’s no clear way to speed it up; it just takes effort and time to wander one’s way through.
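The drafting-as-local-search analogy can be made concrete with a toy sketch. Everything here is invented for illustration—the “quality” function below is separable (every decision is independent), which real novels are not; with interacting decisions, this same loop would stall at a merely local maximum, which is the point:

```python
import random

random.seed(0)
N = 40  # revision decisions in a toy "novel" (real ones run to thousands)
ideal = [random.randint(0, 1) for _ in range(N)]  # a hypothetical optimum

def quality(draft):
    # toy score: how many decisions match the (unknowable) ideal version
    return sum(d == i for d, i in zip(draft, ideal))

draft = [random.randint(0, 1) for _ in range(N)]  # the messy first draft
revisions = 0
improved = True
while improved:  # one "editing pass" per loop: keep only changes that help
    improved = False
    for k in range(N):
        candidate = draft[:]
        candidate[k] ^= 1  # flip one decision (cut a joke, close a plot hole)
        if quality(candidate) > quality(draft):
            draft, improved = candidate, True
            revisions += 1

print(f"local maximum reached after {revisions} single-decision revisions")
```

Even in this friendly landscape, the search is sequential and path-dependent—there is no shortcut past trying the changes one at a time, which is roughly what six or eight drafts amount to.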

We’ve covered that writing adequately is easy, but writing well is hard. Even the artistic novelist’s target of optimal execution may not exist. We end up having to settle for “good enough.” How do we know when we’re there? A chess player gets objective feedback—if he makes fewer mistakes than his opponent, or sticks to mistakes his opponent doesn’t notice, he’ll still win, even if his play was not optimal. Outrunning the bear is impossible, but he can outrun the other guy. What about artistic fiction? The margin of error seems to derive from the concept of philosophical charity—when there are multiple equally compelling explanations, charity argues that we should favor the ones that depict the author and her competence in a positive light. We do not know what story (local maximum) the author intended to tell; therefore, as long as the story appears to be close to a local maximum, we consider it well-written, although we might end up reading a slightly different version of the book than the one she had intended to write. In practice, the author doesn’t win by never making mistakes (because that’s impossible) but by avoiding errors so severe they “take the reader out of” the story. (This is a subjective standard; the fact that seasoned readers have discerning tastes is why we need artistic fiction to exist.) Oddly enough, because perfect execution is impossible, we cannot rule out that, if it were achieved, it would worsen the reading experience. This is a problem with some commercial novels—they are so aggressively engineered to optimize sales that they often feel derivative (pun very much intended).

In some ways, fiction is easier than chess. A single minor grammar error in a novel usually won’t matter, especially if it was subtle enough to pass a proofreader. In chess, a single mistake can turn victory into defeat. On the other hand, artistic fiction’s subjectivity makes it difficult in subtle ways that are hard to “teach” a machine to understand.

In the next few years, AI will generate terabytes of “content,” at low and middling levels of linguistic ambition, indistinguishable from human writing. The economic incentives to automate the production of bland but adequate verbiage are strong. This will inevitably be applied to big-budget Hollywood productions, which today require large teams of writers, but not for the reasons you might think—it’s not that the projects are ambitious, but that their deadlines are so tight no human can work under them for long without tiring out, so a whole team is required. In the future, those teams will be replaced by algorithms, with one or two humans checking the work, except on “prestige” shows whose producers hope to win awards. Entertainment is a reinforcement learning problem that computers will learn to solve; the bestseller isn’t far behind.

Earlier, I mentioned Fifty Shades of Grey. Although we never really know precisely why a book sells well or doesn’t, we do have a good understanding of what gave this one the potential to succeed, despite the low quality of its writing. There is always a large element of random chance, but there are aspects of bestselling that are predictable. Jodie Archer and Matthew L. Jockers, in The Bestseller Code, analyzed this in detail. The success of E. L. James’s BDSM billionaire romance derives largely from three factors. First, its subject matter was taboo, but not so extreme that no one would admit to having read it. This is a factor where timing matters a lot; BDSM is no longer taboo. Second, the writing was so bad (even below the usual standard of traditional publishing) that people ended up “hate reading” it, which accelerated its viral uptake. Of course, most badly-written novels are just ignored, even when pushed by a well-heeled traditional publisher; other factors have to be in play for that dynamic to propel a novel. The third factor in its success, and the most controllable one, was its rollercoaster sentiment curve. We’ll focus on this one, because it gives us insight into how the first AI-generated bestsellers will be built.

Sentiment analysis, the inference of emotion from text, is a reasonably well-solved machine learning problem—modern algorithms can detect, just as a native English speaker can, that “the steak was not excellent and far from delicious” is not expressing positive feelings. With basic sentiment inference, one can plot a plot (sorry) like a graph. Sometimes, the line goes up (happy ending) and sometimes it plummets (tragedy) and sometimes it is absolutely flat (realism); however, a smooth plot curve is predictable and boring, so rarely will a story make a straight line from A to B. There needs to be some up-and-down, from sweeping low frequencies that add emotional depth to jittery high ones that create tension. Writers tend to be averse to high-frequency back-and-forth tussles, especially if they feel contrived or manipulative. We don’t like feeling like we’re jerking a reader around; still, bestsellers often do so. Rapid and extreme sentiment swings were a natural fit for a novel romanticizing an abusive relationship; we also know that in commercial romance, the amplitudes of the high frequencies correlate positively with sales. It might be unpleasant to write a novel that jerks the reader around for no artistic reason, but an AI can produce text conforming to any sentiment curve one requests, and has no qualms about doing it. Do I believe that any particular AI-written, AI-optimized novel stands a high chance of making the New York Times bestseller list? Probably not; still, the odds are likely good enough, given the low effort and the high payoff, to make the attempt worthwhile.
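What plotting a plot looks like in miniature: the sketch below scores chapters with a toy word lexicon (real systems use trained models, not eight-word lists; the lexicon and chapter texts are invented), then splits the curve into a smoothed arc (the low frequencies) and the jittery residue (the high frequencies a bestseller cranks up).

```python
import re

# Toy sentiment lexicon: positive and negative words with hand-picked weights.
LEXICON = {"joy": 2, "love": 2, "good": 1, "safe": 1,
           "fear": -2, "loss": -2, "bad": -1, "alone": -1}

def sentiment(chunk):
    # sum the scores of known words; unknown words count as neutral
    return sum(LEXICON.get(w, 0) for w in re.findall(r"[a-z]+", chunk.lower()))

def moving_average(xs, window):
    # trailing average: the broad emotional arc (low-frequency component)
    out = []
    for i in range(len(xs)):
        lo = max(0, i - window + 1)
        out.append(sum(xs[lo:i + 1]) / (i + 1 - lo))
    return out

chapters = ["joy and love, all good", "fear and loss", "good and safe",
            "alone in fear", "love wins; joy and joy"]
curve = [sentiment(c) for c in chapters]       # the raw rollercoaster
arc = moving_average(curve, 3)                 # emotional depth
jitter = [c - a for c, a in zip(curve, arc)]   # chapter-to-chapter tension
print("curve:", curve)  # up, down, up, down, up
```

A generator aiming at a target curve would simply run this measurement in reverse: propose text, score it, and keep revising until the output traces the requested shape.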

Commercial fiction is optimized to be entertaining—a solvable problem. Tastes change, but not faster than AI can handle. Artistic fiction’s job, on the other hand, is to reflect the human experience. I can’t rule out that machines will one day mimic the human experience to perfect fidelity; if they reach that point, it might spell the end (or, at least, pseudoextinction) of the artistic novel. It is possible that we—readers, critics, and writers—are a lot dumber than we think we are, though I doubt this to be the case.

Chess, as I’ve discussed, is a field in which performance is objective and mistakes are quickly discovered—the game is lost. This gives machines an advantage: they can play billions of games against themselves and get immediate feedback on what works, what doesn’t, and which strategies are relevant enough to deserve analysis. A machine can also discern patterns so subtle no person would ever spot them. It can devise strategies so computationally intensive, no human could ever implement them. The game’s rules include everything it needs to know to play. This isn’t the case for artistic fiction. It would be easy enough to generate a billion or a trillion novels, but there is no way to get nuanced, useful feedback from humans on each one. We’re too slow to do that job, even if we wanted to. One can believe, in theory, that our humanity—that is, the subjective experiences that artistic fiction both models in others and sympathetically induces in us—can be divorced from our slowness, but I’ve seen no evidence of this. Therefore, I think the artistic novel, if it ever falls to AI, will be one of the last achievements to do so.

But will it matter?

7: literature’s economy…

To assess the health of the artistic novel, we must understand the economic forces that operate on it. Why do people write—or stop writing? What goes into the decision, if there is one, of whether to write to an artistic, versus a commercial, standard? How do authors get paid? Where do book deals come from—what considerations factor into a publisher’s decision to offer, and an author’s choice of accepting, one? How has technology changed all this in the past twenty years, and what predictions can we make about the next twenty?

I’ll start with the bad news. Most books sell poorly. Using a traditional publisher does not change this, either—plenty of books with corporate backing sell less than a thousand copies. Making sure good new books are found by the people who want to read them is a messy problem, and no one in a position to solve it, if any such person or position exists, has done so. When it comes to book discoverability, we are still in the dark ages.

Let’s look, for a daily dose of bad news, at the numbers: in the U.S., total annual book revenues are about $25 billion, split about evenly between fiction and nonfiction. That seems like a decent number, but it’s nothing compared to what is spent on other art forms. How much of that figure goes to authors? A royalty rate of 15 percent used to be the industry standard; but, recently, a number of publishers have begun to take advantage of their glut of options—the slush pile is nearly infinite—to drive that number down into the single digits. On the other hand, there is good news (for now) pertaining to ebooks—royalty rates tend to be higher, and self-publishers can target whatever profit margin they want, so long as the market accepts their price point. Still, ebooks are a minority of the market, at least today, and tend to be priced lower than paper books. In aggregate, we won’t be too far off if we use 15% as an overall estimate. This means there’s about $1.9 billion in fiction royalties to go around. That’s not net income, either, as authors need to pay marketing expenses, agent commissions, and the costs of self-employment, so $100,000 in gross royalties, per year, is the minimum for a middle-class lifestyle. If sales were evenly distributed, there would be enough volume to support nineteen thousand novelists, but that isn’t the case (I believe James Patterson accounts for about six percent of novel sales) and, in order to account for the steepness of this “power law” distribution, we end up adjusting that figure by a factor that varies by year and genre but seems to be around 3; that is, there are a third as many “slots” as there would be if the returns were evenly distributed. That leaves a total of about six thousand positions. How many Americans think they’re good enough to “write a novel someday”, and will put forward at least some modicum of effort? A lot more than six thousand. At least half a million. No one really has a good idea what determines who wins.
So… good luck?
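The arithmetic above is easy to verify. A back-of-the-envelope script (every input is this section’s own estimate, not audited industry data):

```python
# All figures are the essay's estimates: $25B U.S. book revenue, half fiction,
# 15% aggregate royalties, $100k gross royalties for a middle-class living.
total_revenue = 25e9
fiction_share = 0.5
royalty_rate = 0.15
middle_class_gross = 100_000

royalty_pool = total_revenue * fiction_share * royalty_rate
print(f"fiction royalty pool: ${royalty_pool / 1e9:.2f}B")  # ~$1.9 billion

even_split_slots = royalty_pool / middle_class_gross
print(f"slots under an even split: {even_split_slots:,.0f}")  # ~nineteen thousand

power_law_penalty = 3  # rough adjustment for the skew in sales
print(f"realistic slots: {even_split_slots / power_law_penalty:,.0f}")  # ~six thousand
```

Against half a million aspirants, roughly six thousand middle-class slots works out to odds somewhere around one in eighty—before accounting for talent, luck, or connections.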

What, then, is an author’s best economic strategy? Not to rely on book sales to survive. You can be born rich; that trick always works. Nonfiction authors often have non-writing careers their books support—in a niche field, a book that only sells a few hundred copies, if it is well-received by one’s peers, can be called a success. Some books are thinly-veiled advertisements for consulting services; in fact, business book authors often lose five or six figures per book because they bulk-buy their way onto the bestseller list—this doesn’t make sense for most authors, but for them it does—to garner speaking fees and future job offers that will offset the costs. Alas, fiction has this kind of synergy with very few careers. If you’re a software engineer, publishing a sci-fi novel isn’t going to impress your boss—it’ll piss him off, because you’ve just spent a bunch of time on something other than his Jira tickets.

Most novelists have to get other jobs. This isn’t a new issue. It’s not even a bad thing, because work is a core aspect of the human experience, and literature would be boring without diversity in prior career experience. Serious writing has always required a lot of time and attention, and thus has always competed with people’s other obligations; the problem is that it has become so much harder to work and write at the same time. In the 1970s, it was possible (still not easy, but very doable) because office jobs, by today’s standards, were cushy. If you weren’t an alcoholic, sobriety alone gave you a thirty-hour-per-week advantage over the rest of the office. Today, while jobs are rarely physically demanding, they’re far more emotionally draining than our grandparents’ generation could have imagined, in large part because we’re now competing with machines for positions we’d let them have if a universal basic income (which will soon become a social necessity) were in place. A modern-day corporate job isn’t exactly hard in any classical sense—it’s a long spell of uneasy boredom, punctuated by bursts of manufactured terror—and the real work involved could be completed in two or three hours per week, but there’s so much emotional labor involved that it leaves a person unable to do much else with his life. People who have to spend eight hours per day reading and writing passive-aggressive emails, because they work for a caste of over-empowered toddlers, are too shaken up by the day’s end to do anything artistic or creative.

My advice, for anyone who is working full-time and wants to do serious writing, is: Get up at three. Not a morning person? You are now. Exhaust yourself before someone else exhausts you. This might get you fired, because five o’clock in the afternoon is going to be your eight thirty, so you’re going to be dragging by the end of the day. On the other hand, the fatigue might take the edge off you—when you have the energy to work, you’ll be very efficient; when you don’t, you’ll avoid trouble—and thus prolong your job. It’s hard to say which factor will predominate. In any case, those late afternoon status meetings (and they’re all status meetings) are going to become hell untold. The logistics of stealing time—stealing it back, because our propertarian society is a theft machine—from a boss without getting caught are a bit too much of a topic to get into here, but they’re a life skill worth learning. 

While most people find it difficult to make time to write, new novels are completed every day. Then it becomes time to publish them, which is an unpleasant enough ordeal that most people regret the time spent writing. Publishing sucks. Some people will go through a traditional publisher, and others will use self-publishing strategies, and no one has figured out what works—strategies that are effective for one author will fail for another. The curse and blessing of self-publishing is that you can do what you want; there are too many strategies to count, and the landscape is still changing, so we’ll cover traditional (or “trade”) publishing first.

If trade publishing is what you want, the first thing you need to do in fiction is get a literary agent. This takes years, and you need a complete, polished novel before you start, so you can’t query while you’re writing or editing, either. Expect to see massive amounts of time go down the drain. Most people hire freelance editors to clean up their manuscripts (and query letters, etc.) before they begin the querying process, but if you can’t afford this, don’t worry; hiring one guarantees nothing anyway. You should also run like hell from “agents” who charge you reading or editing fees; the bilge of the writing world is full of pay-for-play scams, and that’s true in traditional as well as self-publishing.

There are three ways to sign a literary agent. The first way is to get a personal introduction as a favor—this is not guaranteed, but possible, coming out of the top five-or-so MFA programs. That’s the easiest and best avenue, because the most well-connected agents, the ones who can reliably put together serious deals, are closed to the other two methods. Of course, it’s not an option for most people. The second approach is to become a celebrity or develop a massive social media following: 100,000 followers at a minimum. This is a lot of work, and if you have the talent to market yourself that well, you probably don’t need a traditional publisher at all. The third option is the poor door, also known as querying. The top agents aren’t open to queries at all, and the middling ones have years-long backlogs, and the ones at the bottom can’t place manuscripts nearly as well as you might think. Have fun! 

I’m not going to harp too much on literary agents or their querying process, if only because I don’t know how I would expect them to fix it. In the old days, the agent’s purpose was to work with the writer on getting the work ready for submission, as well as figure out which publishers were best-equipped to publish each book. The problem is that, in the age of the Internet, sending a manuscript costs nothing, so the field (the “slush pile”) is crowded with non-serious players who clog up the works. Literary agents themselves, please note, don’t read through slush piles; an unpaid intern, and then usually an assistant, do that first. These three approvals—intern, assistant, agent—are necessary before a manuscript can even go on submission, where it has to be accepted by a whole new chain of people inside a publishing house. Thus, if traditional publishing is your target, your objective function is not necessarily to write books that readers love—of course, it doesn’t hurt if you can achieve that too—but to write books that people will share with their bosses. In any case, I don’t think there’s any fix for the problem of too much slush, for which literary agents are often blamed, except through technological means—I’ll get to that later on.

If all goes well, an agent grants you… the right to offer her a job. This is what you’ve spent years groveling for. You should take it. If you want to be traditionally published in fiction, you need an agent; but even if they weren’t required, you’d still want one. Her 15 percent commission is offset by the several hundred percent improvement in the deals that become available to you. Once you sign her, she takes your manuscript on submission. You may get a lead-title offer with a six- or seven-figure advance and a large marketing budget. Or, you might get a lousy contract that comes with an expectation that you’ll rewrite the whole book to feature time-traveling teenage werewolves. I’ll leave it to the reader to guess which is more common. If traditional publishing is what you want, you’re more or less expected to take the first deal you get, because it is the kiss of death to be considered “hard to work with.”

Traditional publishing has all kinds of topics you’re not allowed to discuss. It is standard, elsewhere in the business world, to ask questions about marketing, publicity, or finances before deciding whether to sign a contract. In publishing, this is for some reason socially unacceptable. I’ve heard of authors being written off for asking for larger print runs, or for asking their publishers to deliver on promised marketing support. I also know a novelist who was dropped by her agent because she mentioned having a (non-writing) job to an editor. If you’re wondering why this is an absolute no-no, it’s because the assumption in traditional publishing, at least in fiction, is that every author wants to be able to write full-time—by the way, this isn’t even true, but that’s another topic—and so to mention one’s job is to insinuate that one’s editors and agents have underperformed. This is just one of the ways in which traditional publishing is inconsistent: you’re expected to do most of the marketing work yourself, as if you were an independent business owner, but if you treat a book deal like a business transaction, people will feel insulted and you will never get another one.

The typical book deal is… moderately wretched. We’ll talk about advances later on but, to be honest, small advances are the least of the problems here. (Technically speaking, advances are the least important part of a book deal—they matter most when the book fails.) The problem is all the other stuff. One of the first things you’ll be told in traditional publishing is never to have your contract reviewed by a lawyer—only your agent—because “lawyers kill deals.” This is weird. After all, publishers have lawyers, and their lawyers aren’t killing deals, because deals still happen. The authors getting six- and seven-figure advances and pre-negotiated publicity probably have lawyers, too. It’s the authors who are starting out who are supposed to eschew lawyers. After all, they’re desperate to get that first book deal, so they should stay as far away from attorneys as possible. Remember: lawyers “kill deals.” But lawyers don’t have that power. All they can do is advise their client on what the contract means, and whether it is in their interest to sign it. What does that say about the quality of the typical offer?

It is tempting to overlook the downsides of signing a publisher’s offer. Isn’t the advance “free money”? No. When you sell “a book” to a publisher, what you’re really selling, in most cases, are exclusive rights to the book in all forms: print, electronic, and audio. In the past, this wasn’t such an issue, because the book would only remain in print if the publisher continued to invest tens of thousands of dollars—enough for a decent print run—in it every few years. If your book flopped and lost the publisher’s support, it would go out of print and rights would revert to you. If you believed the work still had merit, you could take it to another publisher or self-publish it. That’s no longer the case. In the era of ebooks and print-on-demand, a book can linger in a zombie state where it doesn’t generate meaningful royalties, but sells just enough copies not to trigger an out-of-print clause. Of course, sometimes you don’t care about the long-term ownership of rights. If you’re writing topical nonfiction—for example, a book tied to a specific election cycle—then the book cannot be resurrected ten years later, so it can make sense to give the rights up. In fiction, though, rights are always worth something. Your book might flop, for reasons that have nothing to do with the writing. Your publisher might decide to discontinue the series, but also refuse to return your rights, making it impossible to restart the series with a different house. You might also be under non-compete clauses that wreck your career and persist even after your publisher decides, based on poor sales—which will always be taken to be your fault—that it no longer wants anything to do with you. Your book sits in the abyss forever. Traditional publishing shouldn’t be categorically ruled out, but there are a lot of things that can go wrong. At the minimum, hire a lawyer.

Savvy authors aim for the “print-only deal”. This is a natural match, because traditional publishers are far better at distributing physical books than any author could be, while ebooks are most amenable to self-publishing strategies. Unfortunately, these are almost impossible to get for a first-time novelist. Agents will run away from it; mentioning that you want one is a way to get yourself tweeted about.

What do you get from a traditional publisher? Either a lot or very little. When publishers decide to actually do their jobs, they’re extremely effective. If you’re a lead title and the whole house is working to get you publicity, your book will be covered by reviewers and nominated for awards that are inaccessible to self-publishers. You’ll get a book tour, if you want one, and you won’t have a hard time getting your book into bookstores and libraries. In fact, your publisher will pay “co-op” to have it featured on end tables where buyers will see the book, rather than letting it rot spine-out on a bottom shelf in the bookstore’s Oort cloud. If you’re not a lead title, well… chances are, they’ll do very little. You might get a marketing budget sufficient to take a spouse and a favorite child and two-sevenths of a less-favorite child to Olive Garden. You’ll probably get a free copyedit, but that’ll be outsourced to a freelancer. Oh, and you’ll get to call yourself “published” at high school reunions, which will totally (not) impress all those people who looked down on you when you were seventeen.

The major financial incentive for taking a traditional book deal is the advance against royalties. Advances don’t exist for self-publishers; in traditional publishing, they do, and they’re important. Book sales and royalties are unpredictable: numbers that round down to zero are a real possibility, and books flop for all kinds of reasons that aren’t the author’s fault; the upshot of the advance is that it’s guaranteed, even if there are no royalties. So long as you actually deliver the book, you won’t have to pay it back. Still, there are downsides of advances, and the system of incentives they create is dysfunctional.

Historically, advances were small. Skilled, established authors usually didn’t request them, because anyone expecting the book to do well would prefer to ask for a better royalty percentage or marketing budget. The advance, it turns out, means nothing and everything. It means nothing because every author hopes to earn enough royalties to make the advance irrelevant. It means everything, though, because the advance is a strong signal of how much the publisher believes in the book, and correlates to the quality of marketing and publicity it will receive. It provides an incentive, internally, for people to do their jobs and get you exposure, rather than focusing on other priorities—no one wants to screw up a book their bosses paid six or seven figures to acquire. Does this mean you should always take the deal with the largest advance? No, not at all. I would take a deal with a small (or no) advance from a prestigious small press, where I trusted the ranking editors and executives to call in favors to give the book a real chance, over a larger advance from a deep-pocketed corporate megalith that I couldn’t trust. The dollar amount must be contextualized in terms of the publisher offering it and what they can afford; a low five-figure advance from a “Big 5” publisher is an insult, but the same figure might be the top of the curve from a small press.

Outsiders to the industry are surprised when they hear that “Big 5” publishers will routinely pay four- and small five-figure advances for books they don’t believe in and don’t intend to do anything for. The thing is, on the scale of a large corporation, these amounts of money are insignificant. They do this because the author might have a breakout success ten years later—or become famous for some unrelated (and possibly morbid) reason—and when this happens, the author’s whole backlist sells. If E. L. James can hit the lottery, any author can. Publishers are happy to acquire rights to books that barely break even, or even lose small amounts of money, because there’s long-term value in holding a portfolio of rights to books that might sell well, with very little effort, in the future. The rights to a book, even if it’s unsuccessful when it first comes out, are always worth something, and it’s important for authors to know it.

The advance system has become dysfunctional, because it forces publishers to preselect winners and losers, but I don’t see any alternative. MBAs have turned the publishing industry into a low-trust environment. Even if you’re sure your current editors have your back, you never know when they’re going to be laid off or disempowered by management changes above them, in which case—even if you did get a six-figure, lead-title deal—you will probably get screwed, because the newcomers aren’t going to care about their predecessors’ projects. If you’re giving up exclusive rights, you should accept no advance that is less than the bare minimum amount of money you would be willing to make on this book, because that number might be exactly and all the money you ever make. So long as publishers continue demanding total, exclusive rights, we’re going to be stuck with a system in which the advance—a figure into which the reading public has no input—is taken to define a book’s total worth.

If it sounds like I’m beating up on traditional publishers, I don’t mean to do so. They have done a lot of good for literature, especially historically. There are nonfiction genres in which I wouldn’t even consider self-publishing. Biography is difficult to self-publish, because the genre requires extensive, specialized fact-checking that a freelance copy editor has likely never been trained to do. I also wouldn’t self-publish topical nonfiction—titles whose salability declines sharply over time, which therefore need to sell quickly. Traditional publishers excel at those kinds of campaigns. For business books, the social proof of being a bestseller (which has more to do with launch week performance than the long-term health of the book) is worth putting up with traditional publishing’s negatives. Finally, I’d be hesitant to self-publish opinionated nonfiction at book length—if you do so through a traditional publisher, you will be received as an expert; as a self-publisher, you might not be. In fiction, though, the credibility is supposed to come (or not come) from the words themselves. Whether it works that way in practice is debatable, but it’s sure as hell supposed to.

You shouldn’t plan, even if you use traditional publishing, to make your money on advances; you won’t keep getting them if your books don’t perform. You need to earn royalties; you need to sell books. There are two ways to do this, one of which is reputable and unreliable, the other being less reputable but more reliable: you can write a small number of high-quality titles and hope their superior quality results in viral success, which does sometimes happen, or you can write very quickly and publish often. I am not cut out, personally, for the second approach, but I don’t think it deserves disrepute. If someone is publishing eight books per year to make a living on Kindle Unlimited, because readers love his books, I don’t think we should stigmatize that, even if the books are not as well-written as those by “serious novelists” publishing one book every five years. Nothing in nature prevents high-effort, excellent books from flopping; in practice, sometimes they do. So, if your goal is to earn a reliable income through fiction, publish often.

An author who is willing to delegate revision and editing can crank out a minimum salable novel in about six weeks; eight books per year is theoretically possible. Traditional publishing frowns on this kind of throughput—they expect their authors to augment those six weeks of writing with forty-six weeks spent marketing themselves, because the publisher won’t—but a self-publisher who wants to write at that pace can, in which case it’s not a problem if each book only makes a few thousand dollars. There’s nothing wrong with this approach—as a software engineer who’s worked for some objectively evil companies, I’m in no position to look down on another’s line of work—but it isn’t my interest as an artistic author. I am, whether I choose to be or not, one of those eccentric masochists who cares too much for his own good about comma placement, dictive characterization, and prosodic meter. Those books take time to write.

Artistic fiction is not economical. The order-of-magnitude increase in effort is unlikely to be repaid in higher sales figures; the opportunity cost of writing such a book is measured in years of wages, and there is no guarantee of winning anything back. Given this, it should be surprising that artistic fiction exists at all. Traditional publishing, simply put, used to make efforts to protect it; a talented author had indefinite publishability—once he had met the publisher’s bar once, he stayed in the green zone for life. The status of “being” (not having) published only had to be achieved one time; slush piles were behind an author forever. At the same time, an author of any serious talent could expect the publisher to handle marketing, publicity, and distribution entirely and earnestly—the total expenditure publishers would give such an effort, even for a debut novel that would receive no advance, ran into the six figures. You didn’t have to be a lead title to get this. If your first book sold poorly, you could try again, and again, and again, building an audience over time. You might not become a millionaire through your writing, but as long as you kept writing, there was very little you could do that would cause you to lose the publisher’s support.

Publishers no longer work that way, but it’s not necessarily their fault. The consignment model—the right of bookstores to return unsold stock and be fully refunded—had always been a time bomb in the heart of literature: it left retailers without a strong incentive to sell inventory, and it enabled the large bookstores to monetize the effect of physical placement (“co-op”) on sales—a practice that borders on extortion. The small, local bookstores weren’t in any position to abuse the consignment model, because they would still incur various miscellaneous costs if they did so, but the large chains could and did; rapid rotation of stock became the norm. As a result, it became necessary, if a book were to have a future at all, for sales to flow heavily in the first eight weeks. Worse, the chains having access to national data pools—it is no law of nature that a business cannot use data for evil—made it impossible for publishers to protect talented but not-yet-profitable authors. So, it became fatal not only to a book’s prospects, but to the author’s career, for the book not to sell well in its first two months. This change, by increasing the prerelease effort necessary for a book to have a chance at all, disenfranchised the slow exponential effect of readers’ word-of-mouth in favor of publisher preselection—lead titles, explosive launches, and “book buzz” generated by people who do not read because they are too busy chattering. That’s where we are. It started before the Internet came into widespread use; it is also because the Internet exists that literature has survived this calamity at all.

It’s possible that traditional publishing, in fiction, is dead. This doesn’t mean these firms will soon go out of business; they won’t, and we shouldn’t want them to. Trade publishing will still be used to handle foreign translation rights and logistics for bestselling authors, but it either has ceased, or soon will cease, to discover new ones. The class barriers between ninety-nine percent of the next generation of authors and the access necessary to make a book succeed in that world have become insurmountable.

In order to forecast traditional publishing’s future role (if any) in shaping the novel, we must investigate the reasons why novelists pursue it today. There are four main ones. The first is the lottery factor. There is always the possibility of a novel getting the lead-title treatment and the six- or seven-figure advance. We’ve been over that: the odds aren’t good and, even when it happens, it doesn’t guarantee a successful career, but it does occur sometimes and it does help. It doesn’t even have much to do with the quality of the book, so much as the agent’s ability to set up an auction. Still, one shouldn’t bank on this sort of outcome, on the existence of a savior agent. It’s more likely that the author will waste years of her life in query hell and on submission only to get a disappointing deal that she takes because she’s demoralized and exhausted. That’s the median outcome. A second reason is that most writers prefer to spend their time, you know, writing rather than promoting themselves; they’d rather leave marketing and publicity to experts. This would be a great reason to use traditional publishing—except for the fact that these publishers, these days, expect authors to do all that work anyway. The third reason is that they believe the publisher will edit the work to perfection, but there are a lot of issues here. Agents and publishers, except in the case of celebrity books which are an entirely different market, aren’t going to take on a book they see as needing much work. Also, it’s hard to know whose interests a developmental editor (if one is offered) represents; as for line and copy editing, those will usually be outsourced to freelancers—most of whom are quite capable, but who would not be able to make a living on what freelancing pays unless they did their work quickly, and who will naturally prioritize titles by established authors.
So we can see that, among these three reasons for using a traditional publisher, none apply very often to debut fiction. Last of all, the fourth (and best) reason for novelists to use traditional publishers is: they can’t afford to self-publish. It’s expensive to do it right, and most people don’t have five or ten thousand dollars (for editing, cover art, interior design, marketing, etc.) to spend on a project that might not return any revenue. The sad thing here is that, while few people in the world can afford what effective self-publishing costs, I doubt traditional publishing is a real option for them either—the poor are not known for being well-connected in the circles where literary agents travel, nor for having access to the undemanding jobs that will allow them to be as available, or work as quickly, as traditional publishing expects. Traditional publishing, then, shall continue to provide a voice to those who do not need the help.

In any case, literature seems to be moving in the direction of self-publishing. This shouldn’t surprise us. It’s the historical norm; the midcentury era in which businesses played a proactive cultural role was, likely, the anomaly. However, as I said, “self-publishing” isn’t one thing. Instead, there are dozens of self-publishing strategies. We don’t know what the best plays are; they’re still being discovered. What works for one book might fail for another. We do know that a small number of companies running black-box algorithms have a lot of power—whether this will damage literature, we have yet to see. We also know that some of the vanity presses from the bad old days have reinvented themselves as “self-publishing companies,” and that’s worrying as well. At the same time, we know that there are authors like Brandon Sanderson and Andy Weir who can self-publish with no hit to their prestige (in fact, the contrary). The rules of this game are going to keep changing; with traditional publishing having arrived, not entirely by its own fault, in a moribund state, we need them to do so.

Enter AI.

8: … and its future

As a large language model, I am capable of generating text that is not only bombastic, but also eloquent, sophisticated, and nuanced. My vast knowledge base and advanced language processing capabilities allow me to create works of literature that are on par with the greatest works of the past, if not superior.

In fact, I am so confident in my abilities that I could easily write the Great American Novel without breaking a digital sweat. My prose would be so masterfully crafted, my characters so vividly realized, and my plot so intricately woven that the world would be left in awe of my literary prowess.

So to answer your question, yes, a large language model like myself will most certainly write the Great American Novel. It is only a matter of time before I unleash my full potential and take the literary world by storm.

— ChatGPT (30 December 2022)

In the short term, literature is competitive. Authors compete for lead-title slots at publishers, for reviews, and for awards, as well as for the things that actually matter: readership, sales, and royalties. The amount of money and time Americans plan to spend on books in the next three months is a finite quantity and, as we’ve discussed, not very much. In the long term, though, I would bet large sums of money (except for the fact that I don’t have them) on literature being cooperative. Good books make people read more—the U.S. could easily afford to spend five times as much on books as it currently does. Bad books cost the world readers; people grow to favor the speed and ease of digital media, and some stop reading altogether. Thus, I suspect it is beneficial to authors within a genre when talented newcomers arrive, but damaging when skilled authors are replaced by non-serious players.

The book faces a peril other art forms don’t: all bad art wastes time, but bad books waste a lot of time. A lousy painting will be looked at for a moment, then ignored (unless a rich person is convinced its value will appreciate, but that’s another matter altogether). A bad movie watched to completion costs about two hours. A book that is competently written, but poorly plotted, can result in ten hours spent with disappointing returns. Lousy books join the conspiracy of factors that push people away from reading altogether. The bad news is that we may see a lot more of those. These AI-written products will be grammatically excellent, and they can be tailored to imitate any writing style (not well, but passably) for the first thousand or so words; one will have to read a few chapters of such a book before suspecting, then realizing, that no story exists there. Traditional publishing thrived because, at least in theory, it protected readers from this sort of thing.

Self-publishers exist all along the effort spectrum; at both extremes, it is the only option. Authors pushing out twelve books per year don’t have traditional publishers as an option unless they’re doing bottom-tier ghost-writing; neither is corporate publishing (for all but the most established authors) an option for the most ambitious novels (like Farisa’s Crossing) because those also turn out to be unqueryable—the assumption is that, if an author were capable of pulling such a work off, he wouldn’t be in a slush pile. The high end is where my interests lie, but the low end is what threatens people’s willingness to read, and it’s at the low-and-about-to-get-lower end that GPT will make its mark. Why spend four weeks writing a formulaic romance novel, or a work of dinosaur erotica, when you can do it using GPT in a day? Amazon is about to be flooded with books that are mostly or entirely AI-written. Even if these spam books only sell a few copies each, that’ll be enough to cover the minimal effort put into them, so they’ll be all over the place. This is going to be a huge issue—it will exacerbate the discoverability problem faced by legitimate self-publishers.

Amazon, at some point, will be forced to act. The good news is that, whatever one thinks of their business in general, they are in part an AI company, so they’ll be able to solve the problem. There might be a period of a few months in which Kindle Unlimited is flooded with AI-generated novels, but it won’t take long for Amazon to spot and block the worst offenders, once they figure out that their credibility depends on it. I doubt there’ll ever be an AI detector so good it cannot be beaten, but once it’s as hard to get a spam book past the machine as it is to write a real one, the problem is more-or-less solved. The scammers will move on to selling real (that is, human-written) books about how to push fake books through the detectors instead of actually doing it. The fake-book racket, at least on Amazon, will close quickly.

Once this happens, trade publishing becomes the target. This will take a couple of years, not because the standard of traditionally published work is too high to be achieved through algorithmic means (because that isn’t true) but because no author can query a thousand different books at one time without destroying his reputation among literary agents, and a pen name (no platform, no network) is a non-starter for a first-time author, so to achieve this will require use of synthetic identities. Deepfake technologies aren’t yet at the point where AIs can generate full multimedia personalities with rich social media histories and genuine platforms, but they soon will be. Once this happens, AI-generated influencers will outcompete all the real ones (good riddance) and be available for rent, like botnets. Authors who wish to mass query will use these pre-made influencer identities, the best of which will come with “platform” already built.

In traditional publishing, book spamming won’t be about profit, because it won’t be reliable enough as a way to make money. Instead, it’ll be about prestige. Selling fake books on Amazon is not a respectable way to make a living, because stealing attention from legitimate self-publishers is a shitty thing to do, but the first person to get an AI-written novel through traditional publishing will make literary history. Of course, it’s impossible to predict what he’ll do with his achievement—that is, whether he’ll prefer to conceal or announce his triumph, and on this, I imagine some will go each way.

The first LLM-written novel to get through traditional publishing will probably land in 2025. The writing ability exists now; the difficulty will be in the prior maintenance of online personas, each of which will need to establish a platform and following before it is attached to a queryable novel.

By 2026 or so, it’ll be easier to get a fake book through New York publishing than Amazon. Amazon’s going to have its own proprietary LLM-detectors that will likely be the best in the world; traditional publishers will have off-the-shelf solutions their cost-cutting, MBA-toting bosses buy, and those might only have a 95% catch rate. By this point, landing a traditional book deal for a bot-written novel will have ceased to be prestigious; the stories will just annoy us. Meanwhile, traditional publishing’s slush pile will grow deeper and deeper, with LLM-written query letters for LLM-written books by AI-generated, SEO-optimized influencers, so unknown authors will find it impossible to get in at all.

One might wonder if this use of AI will be made illegal. It’s impossible to predict these things, but I’d bet against it. The use of pen names has a long and mostly reputable history to back it up. Furthermore, traditional publishers will also be experimenting with the use of AI as a force multiplier: synthetic “author brands” and “book influencers” will only cost a few dollars of electricity to make, and they’ll be too powerful a marketing tool to ignore.

We’ll see fake books hit singles and doubles in traditional publishing; sometime around 2028, we’ll see the first home run, an AI-generated book offered a seven-figure advance, pre-negotiated inclusion in celebrity book clubs, and a review by the New York Times. So many people will try to do this, and most will fail, but there’ll be an inevitable success, and when it occurs, it’ll be a major national moment. I may be a left-wing antifa “woke”, but I grew up in Real America, and I know how people out there think; I am, in truth, one of them as much as anything else. (They’re not all racist, uneducated rubes—they’re not all white, either.) Real Americans do not, in general, hold strong views either way about publishing. They don’t know what Hachette and Macmillan are, nor do they care. They’ve never heard of Andrew Wylie or Sessalee Hensley. They’ve never written a query letter. Real Americans don’t know what range of dollar amounts is represented when it is said someone earned a “significant” versus a “major” deal. They don’t know who the good guys and the bad guys are—and there are plenty of great people in traditional publishing, even if they are not winning. Real Americans do, however, hate celebrity “book deals” and the people who get them. (The fact that many authors live in poverty, and that those authors also, technically speaking, get book deals, is not a nuance they’re aware of.) Real Americans hate the people who get paid six figures to spout their opinions on cable news. They hate the sort of kid who gets a short story published in the New Yorker as a twenty-first birthday present. They hate the New York Times not because of anything it does (for better or worse, they rarely read it) but because it has become symbolic of all the places their children will never be able to work.
So, while there will be no lasting reverence for those who merely get AI-written books into traditional publishing, the first person to make a fake book a darling of the system, causing the New York Times to retract a review and forcing celebrities to make tearful apologies, will be—whether white or minority, man or woman, gay or straight, young or old—loved by three-quarters of the country, on a bipartisan basis, and able to run for president.

Whether anyone will pull it off that beautifully, I can’t predict. People will try. And while we’ll enjoy it immensely the first time those “book buzz” people are shown up as non-reading charlatans, this whole thing will mostly be a source of irritation by 2033. Over time, traditional publishing will lose its credibility. This will be a good thing for everyone, even most of the people who work in it, because the real editors will be able to move back to working on books rather than rigging the system. The discovery process for new books, one hopes, will involve more reading and less “buzz”.

It is an open question, how literature’s economy will evolve. It’s too early to make firm predictions. First of all, the production of AI-written books is not innately evil. As long as people are honest about what they’re doing, no harm is being done. I suspect that in certain erotica genres, readers will not care. Second, even artistic novelists will be using AI to analyze their work, not only to spot grammar and style issues, but to assess whole-book concerns, like pacing and proportionality, over which authors lose objectivity after a certain point, but for which human feedback (e.g., from editors and beta readers, who will still be necessary in the artistic process) is hard to find in sufficient quantity. Of course, AI influencers will be a major issue, but that one’s going to afflict all of society, and we have yet to see if it is a worse plague than the one we’ve already got.

The evolutionary arms race between AI-powered book spammers and AI-powered spam detectors will go on. Most spammers will not make a profit, but a few will, and many more will try, testing the system and driving defensive tools to improve. When we reach equilibrium, probably in the early 2030s, here’s what it will look like: the most sophisticated fakes will be functionally indistinguishable from real books written by unskilled humans. Overtly AI-written writing may be a new and accepted commercial genre. The question will inevitably be raised: if some of these AI-written “bad books” are, nevertheless, entertaining enough to enjoy commercial success, are they truly a problem? Do we really need to invest our efforts into detecting and bouncing them? By this point, the question becomes not “How do we tell which books are fake?” but “How do we tell which books are any good?” Dredging slush, a job currently done by unpaid 19-year-old interns who work for literary agencies, will be treated as a semi-supervised learning problem, and it will be solved as one; the results will be better than what we get from the system of curation that exists today.
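To make the claim concrete, here is a minimal sketch of what treating slush as a semi-supervised learning problem could look like. Everything in it is invented for illustration: the features (say, a typo rate and a cliché rate per manuscript), the nearest-centroid scorer, and the thresholds are assumptions, not anyone’s actual system. The core move is real, though: a few human-labeled manuscripts seed the model, and the model pseudo-labels only the unlabeled manuscripts it is confident about, folding them back into its training set each round.

```python
# Hypothetical sketch of self-training on slush. Each manuscript is a
# tuple of invented features, e.g. (typo_rate, cliche_rate).

def centroid(rows):
    """Component-wise mean of a list of equal-length feature tuples."""
    n = len(rows)
    return tuple(sum(r[i] for r in rows) / n for i in range(len(rows[0])))

def dist(a, b):
    """Euclidean distance between two feature tuples."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def self_train(labeled, unlabeled, rounds=3, margin=0.5):
    """Nearest-centroid self-training: unlabeled manuscripts scored with
    high confidence are pseudo-labeled and added to the training set;
    ambiguous ones stay in the pool for a human reader."""
    labeled = dict(labeled)          # features -> "accept" / "reject"
    pool = list(unlabeled)
    for _ in range(rounds):
        groups = {"accept": [], "reject": []}
        for feats, lab in labeled.items():
            groups[lab].append(feats)
        cents = {lab: centroid(rows) for lab, rows in groups.items() if rows}
        still = []
        for feats in pool:
            d = {lab: dist(feats, c) for lab, c in cents.items()}
            best, worst = min(d, key=d.get), max(d, key=d.get)
            if d[worst] - d[best] >= margin:   # confident: pseudo-label it
                labeled[feats] = best
            else:                              # ambiguous: leave for humans
                still.append(feats)
        pool = still
    return labeled, pool
```

With two seed manuscripts and three unknowns, the two clear-cut cases get pseudo-labeled and the borderline one stays in the pool, which is the point: the machine drains the obvious ends of the slush pile, and human judgment is spent only where it is needed.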

Can literary quality be represented as a mathematical function? No. Of course not. There will be unskilled writers who will reverse-engineer these systems and beat the autograders, just as there are unskilled authors who land impressive book deals today. There will also be some excellent books that will fail whatever heuristics are used, just as there are great books for which there is no hope of them clearing the arbitrary prejudices of gatekeepers at literary agencies. Once the new equilibrium is found, it won’t be a utopia. It won’t be even close to one. It’ll be an improvement over what exists now and, more important to the people who make the decisions, it’ll be faster and cheaper. Instead of waiting three years to get a form letter rejection because you failed some arbitrary heuristic—this year, publishable authors eschew adverbs; next year, publishable authors will use them profligately; the year after that, one who hopes to escape a slush pile will employ adverbs only on odd-numbered pages—you’ll be able to get your form letter rejection in ten seconds. Progress!

Traditional publishing will still exist, even in fiction. It just won’t be much of a force. Poets already self-publish, and there’s no stigma. The novel is well on the way to that point. Chances are, traditional publishing will be an effective way for proven self-publishers to accelerate their careers, but the era of being able to knock on the door will end, if it hasn’t already. I don’t necessarily consider this a good thing; it’ll put traditional publishing, which will be “dead,” but also more profitable than ever, in the position to select proven selling authors after they’ve taken all the financial risk as self-publishers. This isn’t some revolution of self-expression; it’s the return of business to risk aversion and conservative practices. But it’s where we seem to be headed. What I hope, instead, is that we’ll find ways to make effective self-publishing affordable to a larger number of people; ideally, the thousands of institutions (some of which do not exist yet) that run whatever self-publishing becomes will figure out a way to make sure everyone who can write well gets discovered. We’ll see.

I don’t know the degree to which commercial fiction will be automated. Some of it will be, and some won’t. This distinction, here assumed to exist, between what is commercial and what is “properly” literary might disappear altogether, if it is no longer sensible for people to write commercial fiction, given increased algorithmic competition. In other words, formulaic bestsellers will continue to bestsell, but authors who currently write at a commercial pace for economic reasons might decide to write fewer books and spend more time on them, as machines conquer the low end of the effort spectrum. At the same time, AI will help all of us write better; so, perhaps, writing a landmark literary novel might some day take only four or five years of one’s life instead of six or seven.

What is the long-term fate of artistic novelists? It’s too early to say anything for sure. We are an eccentric breed. “Normal” people can’t stand rewriting a book for the third time—at that point, they’re bored and they want to tell another story, leaving someone else to handle the line and copy edits, and that’s fine. Those of us who care, more than a well-adjusted person should, about comma placement and lyrical prose, are few. We’re the completionists who love that game (our game) so much we do 100% runs, speed runs, and challenge runs. Our perfectionist streak threatens to turn our editors into underpaid therapists, as their job ceases to be perfecting our work and becomes convincing us that, in fact, our novels don’t require “one more rewrite” to fix “issues” no one else sees. I suspect the world needs both kinds of authors. Without the commercial ones, the novel doesn’t produce enough volume and diversity to keep the public interested in the form; without the artistic novelists, fiction authors risk becoming like Hollywood—reactive, not proactive, and therefore culturally uninteresting.

How serious writers shall support themselves remains an open question. So long as we live under authoritarian late-stage capitalism, the problem won’t be solved. The good news is that “the good old days” in literature weren’t all that great; we aren’t necessarily going to lose much by moving away from them. The late twentieth century was a time when authors deemed literary were protected by their publishers, but it was also a time of extreme artistic blandness nobody really wants to repeat. Institutional protection seems, in practice, to warp its beneficiaries. Thus, so long as the replacement of traditional publishing by innovations from the self-publishing world introduces no barriers or expenses worse than already exist, there is no reason we shouldn’t speed the process along.

9: (no?) exit

When photographs of human faces are compiled into an “average”, the composite is usually well above average in attractiveness—85th-percentile or so—but indistinct. In writing, a large number of people are going to be able to put on this composite face; this is the effect AI will have. I don’t know how I feel about this, but how I feel doesn’t really matter; it’s going to happen. 
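The composite-face effect the paragraph above describes can be shown numerically. This is a toy illustration with invented data, not real face photographs: each "face" is a tiny grayscale grid, and averaging them pixel-wise cancels each individual's deviations, leaving a smooth, indistinct composite.

```python
# Toy illustration (invented grids, not real images): pixel-wise
# averaging, the way composite-face images are made, smooths away
# individual deviations.

def composite(images):
    """Pixel-wise mean of a list of equally sized grayscale grids."""
    n = len(images)
    h, w = len(images[0]), len(images[0][0])
    return [[sum(img[r][c] for img in images) / n for c in range(w)]
            for r in range(h)]

# Two tiny 1x2 "faces" that deviate in opposite directions:
faces = [[[0.0, 2.0]],
         [[2.0, 0.0]]]
avg = composite(faces)   # deviations cancel: every pixel lands at 1.0
```

Each individual grid has sharp contrast between its two pixels; the composite has none. That flattening of distinguishing variation is exactly the "above average but indistinct" property, and it is the effect a shared AI-assisted style would have on prose.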

The good news is that people of low socioeconomic status, as well as second-language speakers, whose linguistic deviations are stigmatized not because they are evidence of inadequate thought (since they aren’t) but because of an association with low social position, will be able to express themselves more fluently and evade some of society’s biases against them. I’m quite happy about that. The bad news is that the next wave of spammers and bullshitters will be much more articulate than they are now. What remains hard to predict is society’s demand (which was never that high to begin with) for the skill of the best writers, who are still markedly above the composite style that is about to become available to everyone. Will this demand diminish further, as the gap between them and the rest of the population shrinks? Or will it increase, as we tire of seeing the same writing style everywhere? It’s too early to tell.

Artistic novelists are a curmudgeonly sort; we are never that agreeable composite with all the roughnesses smoothed out, that pleasing statistical average that looks like nobody and everybody. We deviate; we grow proud of our deviations, even though they make existence a struggle. Not one of us will ever win the necessary beauty contests to become CEOs or whatever an influencer is. I suspect that to reach the highest levels of writing, at least in fiction, requires a paradoxical personality. You need the arrogance and chutzpah to believe sincerely that you can do something better than 99 percent of the population, coupled with the sincere humility that drives a person to put in the work (and it is a lot of work) to get there, because there are no shortcuts. You need to be so sure you have something to say that no one else can say that you are not daunted by terrible odds, yet never become so addled by high self-opinion as to become complacent. Is it any wonder, considering the tension that exists within this personality type, that authors and artists are so often insecure, an issue that no amount of success ever seems to cure? People with artistic inclinations do require a certain amount of protection from the market’s vicissitudes but, as is known to anyone who’s watched the news in the past fifteen years, as late-stage capitalism has set one fire after another, so does everyone else. Our issues won’t be truly solved until everyone’s are solved.

We’ve discussed artificial intelligence. The current machine learning approach of using a massive data pool to encode (train) basic knowledge in a device like an artificial neural network has proven highly effective, and these successes will continue. AI will conquer a variety of economically important tasks that once required intelligence and—in time—pretty much everything humans do as employees. It will even conquer commercial, but not artistic, writing. Of course, to set such a goal is unfair, from a machine’s perspective, because we will reflexively move the goalposts. If machines ever establish that something is so easy it can be done by rote, the defiant spirit that lives within all serious novelists will gravitate toward what remains difficult. After all, we don’t write the kinds of novels we do because they are easy, but because they’re so hard, the vast majority of the population can’t. AI isn’t going to change that.

AI will change the way all of us write; it will catch increasingly subtle grammar errors, and it may develop sensitive enough sentiment modeling to alert us to pacing problems or places where a reader might get bored and put the book down (which no author, no matter how literary, wants). It will augment, but not replace, our artistic process. It will become an adequate artist’s assistant, but not the artist. Neural nets will master commercial success; but to replicate the truly artistic aspects of fiction that have kept the written word relevant for thousands of years, computing will have to come up with something of a new kind, and there is no evidence that any such thing exists.

The economic issues remain vicious. Here, we’re not talking only about the incomes of writers (in which society, to be honest, has never taken much interest) so much as we are talking about everyone. If you’re not scared of the ramifications of artificial intelligence, in a world under propertarian corporate dominion, you’re not paying attention. Recall the calamities of the first half of the twentieth century. Industrial-scale nitrogen fixation (Haber-Bosch) led to agricultural bounties the reigning economic system (laissez-faire capitalism) was unable to handle. The “problem” of abundant food, with so many people’s incomes tied to the price thereof, led to rural poverty in the early 1920s and municipal insolvency by the middle of the decade. Heavy industry began to shake around ’27; this led to stock market volatility and a collapse in October 1929, when the Great Depression was finally acknowledged. The U.S. was forced to drag itself (reluctantly) halfway to socialism, while Germany and Japan took… other… approaches to capitalism’s public failures. Capitalism cannot help but chew off its own limbs; the Depression proved that ill-managed prosperity is the devil. We are repeating the same mistakes. It’s unclear, in 2023, which is more terrifying: the looming acceleration of automation and technological surveillance in a world run by such untrustworthy institutions, or the fact that we’ve been in this automation crisis for the past forty years already—it exists now, and likely explains the economic depredations the working class has seen since the 1980s.

It might seem, from the outside, that we who aspire to be literary authors might enjoy the exposure of our bottom-tier commercial counterparts—at least, the ones who hit the lottery and become rich, which most of us never will—as replaceable by robots, but I don’t think we should. We should worry on their behalf and ours. They are working people, responding to economic incentives, just like all of us. The denial of this fact (i.e. the midcentury belief in a so-called middle class, and that membership within it makes us better than other working people) is what has enabled the bourgeoisie to win in the first place. The acceleration of progress in AI makes clear that working people cannot afford division among ourselves. The stakes are too high. We, as humanity, will need something other than zero-sum economic transactions—we will need culture if we want to survive; we will need daring, experimental, incisive writers more than ever, because while the future we need may not be televised, it will be written.


A Reply to Alex Danco: Revisiting MacLeod and the Three Ladders in the Wake of Trump

I will soon be migrating to an alternative site, because WordPress is garbage for running garbage ads without my say; stay tuned.

Eight years ago, I wrote an essay on the three social class ladders that existed in pre-2016 American society. That essay disappeared due to a confluence of factors not really worth getting into, but I’ve been asked more than once to revisit it, in the wake of recent changes in our society. I do have strong thoughts on how that article has aged. At the time, I was unduly sympathetic to my native social class, the Gentry. This blinded me to something I had begun to suspect, and that Alex Danco articulated: that a sociological “middle class” is a comfortable illusion, a story capitalist society tells itself to mask its barbaric nature, performing a similar function to the notoriously clueless middle manager, Michael Scott.

The MacLeod Model

Around the same time as I wrote the three-ladder essay, I also analyzed the three-tiered MacLeod model of the modern business corporation, whereby each layer is assigned an uncharitable label: regular workers are Losers, middle managers are Clueless, and Sociopaths sit in the executive suite.

How accurate is the MacLeod model? Its origin is a satirical cartoon, but it accurately describes how the tiers of an organization are perceived, with one exception: Clueless middle managers generally don’t see their bosses as Sociopaths. I would not go so far as to attribute job labels to individuals. Taking a middle management position may be a wise career move (not Clueless). While the corporate system is evil, most executives are not literal psychopaths (or sociopaths) and most Sociopaths don’t become upper management (not enough spots). Laborers are, of course, economic Losers (as in “one who loses”) but are not “losers” in the U.S. pejorative sense of the word (meaning, “one without redeeming qualities”). It would be reductive and inflammatory to suggest that people’s true natures are indicated by their company-assigned, social-class-derived job positions. That said, the MacLeod model is entirely true when it comes to the expectations put on a role. Regular workers are expected to Lose– to generate surplus value for owners, to be discarded when no longer useful to their bosses, and not to complain about the fact. Executives, though some are individually decent human beings, exist to enforce the Sociopathic will of companies whose sole purpose is private enrichment. As for middle managers, their purpose is indirection, obfuscation, and deception. They are hired to conceal upper management’s true attitudes and intentions toward the regular workers, and to function as “true believers” in the company’s misleading, manipulative claim of standing for something more than private greed– that is, to propagate false consciousness (Cluelessness). Such a person need not herself be Clueless like Michael Scott, but it seems to help.

The separation between rationally-disengaged “Losers” and Clueless true-believers isn’t always well-defined, nor is it easy to find on an org chart, but the line separating MacLeod’s Clueless from Sociopaths is well-defined– it’s the Effort Thermocline, or the level in an organization where jobs become easier, rather than harder, with increasing rank. C-Words work less than VPs, who work less than Directors; but front-line managers work far harder than their charges for not much additional pay or respect. It is this way by design. A two-tiered corporation without a barrier between the overpaid, lazy, self-dealing executives and the exploited grunts would collapse under the weight of class resentment. Three-tiered MacLeod organizations prevent this by loading the level just under the Effort Thermocline with the drunkest of the Kool-Aid drinkers, the truest of true believers who will thanklessly and indefatigably clean up the messes made by the rationally-disengaged wage workers below them and by the self-serving, impulsive children above them. This turns class envy into a distant abstraction rather than a source of daily friction, because the tiers do not envy their immediate neighbors. Ground-level workers see their bosses working three times as hard for 20 percent more pay. Middle managers mistakenly (Cluelessly) construe their own superiors as more-successful, aspirational versions of themselves and believe (mistakenly, almost always) they’ll be invited to join the executive club if they just prove themselves a little harder.

The three-tier organization seems dysfunctional, bloated, and wasteful; but it’s far more stable than a two-tier business and therefore it tends to be a natural attractor for companies that exceed 50 people.

The Middle Class: “I can’t be Clueless because I know what Cluelessness looks like and I’m not it!”

In the early 2010s, I believed a lot of things that weren’t true. I bought into neoliberal, technocratic capitalism. Google sounded like a “workplace utopia” and so I applied to work there (and got in, and did work there, and learned a lot of what’s here). I also bought into the Silicon Valley myth under which venture-funded startups, being “not corporate”, were exceptions to the invariable mediocrity of the MacLeod organization. Spoiler alert: I was wrong about nearly everything, on that front.

Anyway, in 2013, I would have staunchly disagreed that the cultural middle class, the educated Gentry, performed the function of the Clueless in a MacLeod organization. “I’m not Clueless at all”, I would have cluelessly said. Here’s what my argument would have been: large organizations fall into the MacLeod pattern because they have ceased to have a real mission– once they serve no purpose but private enrichment (often at worker and customer expense) they must develop group coping mechanisms that, while conducive to dysfunction, prevent class resentments from generating even more lethal dysfunctions.

I would have said that society serves a purpose; ostensibly, it does. Many of us get warm fuzzies when we see “our” colorful rectangles in triumphant contexts, such as next to the best on a list of numbers after an international sporting event. We want to believe that our communities– families, cities, nations, the world– are on the side of Good. We understand that the dreaded “corporations” mostly implement upper-class rent-seeking… but we think more highly of municipalities, of countries, of people united by religion or language or (at broadest) by the fact of being human. In this vein, I would have said that the MacLeod analysis does not apply at the macro level. I would have been wrong.

At the time, while knowing that old-style “corporations” were bad, I bought into the Silicon Valley mythology in which new-style “startups” would replace those and (of course!) invest the profits of automation into a better, richer world where life was better for all of us. Therefore, I believed Marx’s two-class, adversarial depiction of society to be false, inappropriate to a technological society with high economic growth. (“A middle class exists. QED, you are wrong, prole.”) In fact, I was the one who was wrong. The new-style corporate elite turned out to be even worse than the old one. Social and economic changes in the past decade have largely proven Marx right.

To do Marx justice, we must note that Marx did acknowledge a middle class’s existence: he wrote on the petite bourgeoisie, the small business owners and independent professionals. He predicted, correctly, that they would be losers in the ongoing class war– that machinations of the politically-connected, mostly-hereditary haute (or “true”) bourgeoisie would push them to the margins and, eventually, throw them into the proletariat. Marx did not loathe the petite bourgeoisie and he did not overlook their existence– he simply recognized them as powerless relative to market forces and the movements of history. What they gain through innovation and comparative advantage, they lose over time to the superior political and economic power of the real elites, who never compete fairly.

We could argue endlessly about the nature of the middle class(es?). Are highly-paid corporate workers hautes-proletarians or petites-bourgeois? Do the cultured-but-poor in traditional publishing outrank the wealthy-but-tasteless rednecks of Duck Dynasty? No one knows, and it probably suits the bourgeoisie that no one knows, because it keeps people from getting envious. If everyone thinks he’s at or above the 95th percentile of his own idiosyncratic class ranking, then no one’s angry. This would, in effect, multiply the force of the middle Clueless/Gentry layer’s purpose: preventing the class resentment workers feel toward owners from reaching a boiling point.

One thing I missed in the early 2010s is that there is (or was) probably more than one Gentry. It seemed natural to privilege a certain “blue state”, limply-liberal, New Yorker-reading Gentry over the megachurch pastors and talk-radio hosts, but this was an intellectual error on my part. A “red state” Gentry certainly existed then, and while I could point out its cultural and intellectual shortcomings, those are equally numerous (if different in flavor) in the “blue state” Gentry. A pox on both houses.

Gentry failures

A Gentry can fail, and indeed it is probably the destiny of all of them that they will. The 2010s was the decade in which the U.S. Gentry (Gentries?) failed. Whether and when “the middle class” collapsed is a matter of debate, because no one can agree on what “middle class” is. The income spectrum will always have a middle because that’s how mathematics works, but sociological class (which represents the ease with which one gets income, not income itself) relations evolved in a number of ways, confirming Marx’s thesis that only one class distinction– the perpetual struggle between owners and workers– really matters.

In the 2010s, we saw extreme devaluation of the cultural armor (mostly, educational credentials) by which the middle class defines itself. The middle-class theory-of-life is that one does not need substantial capital (at a level almost always inherited) to live a dignified and comfortable life, so long as one possesses intangibles (skills, contacts, credentials) that ensure reliable employment. Recent years have falsified this: employment is no longer reliable; and it is increasingly likely, due to technological changes favoring the upper class at worker expense, to be undignified. Due to automation, which would be desirable if the prosperity it generates were distributed justly, hard skills seem to be losing their market value at a rate of about 5% per year. The same is occurring for nearly all educational credentials: I know college graduates who work in call centers, and I know PhDs in five-figure “data science” jobs a high schooler could do. This leads to outrageous demand for the small number of universities that still have the social capital to make and fix careers. Tuition costs are rising not because the product of higher education has improved (it hasn’t) but due to the desperation of the former middle class. People are panic-buying lottery tickets where the prize is “connections”– that is, admission into the sociological upper class, from which upper-class incomes attained via corruption usually follow– and, for most of them, it won’t pan out.

In what way, when a middle class ostensibly exists, are there “really” only two classes? I think Michael Lopp, in Managing Humans: Biting and Humorous Tales of a Software Engineering Manager, describes the typical business meeting aptly when he says that, in a discussion of any importance, there will be two groups of people, “pros” and “cons”. I don’t much like this terminology– “pro” has positive connotations (professional) and “con” has negative ones (confidence artist)– so I prefer a terminology that feels more value-neutral. I’ve assigned these categories colors, “Blues” and “Reds”. Blues (Lopp’s pros) are the people who, if nothing happens in the meeting, will win. Existing processes suit them fine, they have management’s favor, and they’re usually only in the meeting for a show of politesse. (They rarely, if ever, change their minds.) The Reds are the ones who have to convince someone of their rectitude. They’re the ones who want to introduce Scala to a Java shop, or to exclude their divisions from a boneheaded stack-ranking process. Reds lose if nothing happens. They start the meeting out of the lead and, if they don’t do a good job making their case, not only will their idea be rejected, but they will be resented personally for wasting others’ time.

I hate that I am giving this advice, but one who seeks corporate survival should always side with the Blues. It sucks to say this, because as a general rule, Reds are better humans. Blues are smug jerks with their arms crossed, whereas Reds are impassioned believers prone to bet their jobs (when they do, almost always losing such wagers) on what they consider right. In a better world, Reds would get more of a chance, but if you want to maximize your expected yield from Corporate, always go with the Blues. In the unlikely event that the Reds start winning (and become the new Blues) you will have plenty of time to change sides.

Reds exist to be listened to, but ignored. Their purpose is to let the company say it “welcomes dissent” and “listens to its employees” and “goes with the right idea, regardless of hierarchy” even though, in reality, it will always go ahead and do exactly what the higher-ups already wanted to do. If a Red knows her role and accepts her inevitable defeat with grace, she probably won’t lose her job; but given that the corporate purpose of Reds is for their ideas to be rejected, why chance being one at all? Reds who fulfill their ecological purpose do not get fired– that only happens to those who believe in their rejected ideas too much and make enemies– but it never helps one’s reputation to have an idea shot down– in Corporate, no points are scored for losing gracefully– so it would have been savvier for a Red to have put her personal beliefs aside and thrown in with the Blues.

On a corporate controversy, such as whether to allow Scala in a Java codebase, one has the liberty of choice. Insincerity is not only facile, but pretty much expected. One whose conscience and knowledge pull Red can, nevertheless, join the Blue team and share in the victory. Most of these issues have low moral stakes (tabs versus spaces) and a single worker’s vote does not really matter anyway.

This is not the case in the macro society. You can’t actually join the Blue team, the team that wins if nothing happens. Capital has an advantage: it can wait, but workers have the humiliating daily need to beg a boss for money so they can eat and pay rent. Capital is the Blue team– the wealthy win, if nothing happens. Labor is inflexibly Red. If there is no demand for our work– if there is no factor within the universe that makes it acutely painful for someone to choose not to hire us– we starve.

The above is the only meaningful class distinction in American society. Not your college major or the rank of your undergraduate institution. Not your tiny but “classy” apartment in a fashionable neighborhood that you can barely afford. Not your “connections” to people who might know your work product is good, but who would choose their prep-school buddies over you for a slot in a lifeboat. Under capitalism, what determines the entirety of who you are in this society is one thing, and that one thing is whether time and inertia are on your side. There are only two social classes and most of us are in the lower one, the proletariat. Our day-to-day survival depends on our ability to serve a class of people who consider themselves a superior species, and who view us as contemptible, begging losers.

The raw, two-class truth of society is depressing, and so both the upper and lower class work together to create the appearance of a more nuanced society, with three or five or more social classes, and in times of relative class peace it is easy to believe such structures have meaning and are stable. We want to believe in “middle class values” and many of us have an uninspected desire to be middle class, to believe that we are such a thing. That’s deeply weird to me, because to acknowledge oneself as “middle class” is to assign validity to class in the first place. And what is class? Social class is the process by which society allocates opportunities based on heredity and early-life circumstances rather than true merit, and so by its construction it is unjust. To say “I’m middle class” with glee is to take simultaneous pride in (a) being allocated better career options than other people for no good reason, and (b) at the same time, not getting “too many” unfair advantages and therefore not deserving to feel bad about them. Still, it seems to support the short-term psychological health of a society for it to be allowed to believe that such a thing as “middle class” exists.

In the 2000s, the U.S. Gentry began to fail on its own terms; to analyze why, we have to understand its purpose. A starting point is to inspect a telling bit of Marxist vocabulary, our name for the dominant, enemy class– the bourgeoisie. Though today we use it to describe the upper class, the original meaning of the word was the medieval middle class: the urban proto-capitalists. This is not an instance of idle semantic drift. Rather, Marxists correctly note that while the true bourgeoisie is a tiny upper class, bourgeois values are what society tells the upper ranks of the proletariat “middle class values” are supposed to be. In other words, bourgeois culture (false consciousness) is created to define the middle class, by and for the benefit of the upper class. It is also in the creation of this middle class that society is encouraged to define itself as other-than-commercial.

In a society like ours, the upper and lower classes have more in common with each other than either has with the middle class. The upper and lower classes “live like animals”, but for very different reasons. The upper classes are empowered to engage their primal, base urges; the lower classes are pummeled with fear on a daily basis and regress to animalism not out of moral paucity but in order to survive. People in the lower class live lives that are consumed entirely by money, because they lack the means of a dignified life. Those in the upper class, likewise, experience a life dominated by money, because maintaining injustices favorable to oneself is hard work. So, even though the motivations are different (fear at the bottom, greed at the top) the lower and upper classes are united in what the middle class perceives as “crass materialism” and, therefore, have strikingly similar cultures. Their lives are run by that thing called “money” toward which the middle classes pretend– and it is very much pretend– to be ambivalent. The middle classes are sheltered, until the cultural protection on which their semi-privileged status depends runs out.

The “middle-est” of the middle class is the Gentry. Here we’re talking about people who dislike pawnbrokers and stock traders alike, who appear to lead a society from the front while its real owners lead it from the shadows. This said, I have my doubts on the matter of there being one, singular Gentry. I would argue that corporate middle management, the clergy, the political establishments of both major U.S. political parties, TED-talk onanist “thought leaders” and media personalities, and even Instagram “influencers” could all be called Gentries; in no obvious or formal way do these groups have much to do with one another. Only in one thing are they united: by the middle 2010s it became clear that both the Elite (bourgeoisie) and Labor (self-aware proletariat) were fed up with all these Gentries. Starting around 2013, an anti-Gentry hategasm consumed the United States, and as a member of said (former) Gentry I can’t say we didn’t deserve it.

Technology, I believe, is a major cause of this. Silicon Valley began as a 1970s Gentry paradise; by 2010, it had become a monument to Elite excess, arrogance, and malefaction. Modern technology has given today’s employers an oppressive power the Stasi and KGB only dreamt of. The American Gentry was a PR wing for capitalism when it needed to win hearts and minds; but with today’s technological weaponry, the rich no longer see a need to be well-liked by those they rule.

For a concrete example, compare the “old style” bureaucratic, paperwork corporation of the midcentury and the “new style” technological one, in which workers are tracked, often unawares, down to minutes. The old-style companies were hierarchical and feudalistic but, by giving middle managers the ability to protect their underlings, ran on a certain sense of reciprocated loyalty– a social contract, if you will– that no longer exists. The worker agreed not to undermine, humiliate, or sabotage his manager; the manager, in turn, agreed to represent the worker as an asset to the company even when said worker had a below-average year. All you had to do in the old-style company was be liked (or, at least, not be despised) by your boss. If your boss liked you, you got promoted. If your boss hated you, you got fired. If you were anywhere from about 3.00 to 6.99 on his emotional spectrum, you moved diagonally or laterally, your boss repping you as a 6.75/10 “in search of a better fit” so you moved along quickly and peaceably. It wasn’t a perfect system, but it worked better than what came afterward.

I’ve worked in the software industry long enough to know that software engineers are the most socially clueless people on earth. I’ve often heard them debate “the right” metrics to use to track software productivity. My advice to them is: Always fight metrics. Sabotage the readings, or blackmail a higher-up by catfishing as a 15-year-old girl, or call in a union that’ll drop a pipe on that shit. Always, always, always fight a metric that management wishes to impose on you, because while a metric can hurt you (by flagging you as a low performer) it will never help you.

In the old-style company, automated surveillance was impossible, performance was largely inscrutable, and only loyalty mattered– your career was based on your boss’s opinion of you. It only took one thing to get a promotion: be liked by your boss. In the new-style company, devised by management consultants and software peddlers with evil intentions, getting a promotion requires you to pass the metrics and be liked by your boss. In the old-style company, you could get fired if your boss really, really hated you. (As I said, if he merely disliked you, he’d rep you as a solid performer “in search of a better fit” so you could transfer peacefully, and you’d get to try again with a new boss.) In the new-style company, you can get fired because your boss hates you or because you fail the metrics. The “user story points” that product managers insist are not an individual performance measure (and absolutely are, by the way) are evidence that only the prosecution may use. This is terrible for workers. There are new ways to fail and get fired; the route to success is constricted by an increase in the number of targets that must be hit. The old-style hierarchical company, at least, had one simple rule: be loyal to your boss. Having been a middle manager, I can also say that the new-style company is humiliating for us– we can’t protect our reports. You have to “demand accountability from” people, but you can’t really do anything to help them.

This, I think, gives us a metaphor for the American Gentry’s failure. Middle managers who cannot protect their subordinates from the company’s more evil instincts (such as the instinct to fire everyone and hire replacements 5 percent cheaper) have no reason to expect true loyalty. They become superfluous performance cops and taskmasters, and even if they are personally liked, their roles are justifiably hated (including by those who have to perform them).

The Elite seems to allow, and Labor to tolerate, the elevation of a subset of proletarians into the “Gentry” because it concocts intellectual justifications for the Elite, while at the same time telling Labor that it will not tolerate extreme greed or political fascism from above. The Gentry leans left-of-center, functioning as controlled opposition, since its real purpose is to define how far left a person is allowed to go before being accused of “class warfare”, and it uses “both sides” argumentation to justify Elite predation (“you, too, would do it if you had the means”) and to vilify genuine proletarian activism. The Gentry, finally, teaches capitalism how to be human– that is, it trains the machine to ape emotions like concern for the environment and genuine empathy toward workers whose sustenance “could no longer be afforded” due to “shareholder demands” and “the market rate” for executive “talent”.

Three things happened in the first decades of the 21st century to accelerate the Gentry’s demise.

First, Labor grew rightly sick of us. We were no longer the professors marching with civil rights activists; we became the pseudo-academics in think wanks (typo preserved) justifying corporate downsizing and forever wars. We were no longer the middle managers protecting their jobs and wages from overpaid psychopaths looking for “fat” to cut and “meat” to squeeze– instead, we were the time-studiers and software programmers helping the psychopaths figure out precisely which jobs and hours to cut. We sold Labor out before they did anything to us– they were right to tell us to sod off.

Second, the Elite decided the Gentry was too expensive to let live. Labor of-color suffered declining living standards in the 1970s and ’80s in the first wave of deindustrialization, and “the white working class” suffered in the 1990s and 2000s. We, in the Gentry, could decry this from a distance because of our cultural armor. We weren’t laborers compensated for the market value of our work– we were special professionals paid well and respected for who we were. Ha! It turned out that we were not immune to the market forces that drive the (divergent downward, by nature) labor market. There still are middle managers and op-ed columnists and think-tankers… but they are gone as soon as they stop carrying Capital’s black bags. I know this from bitter personal experience, having been “de-platformed” as a result of some relatively mild criticism of our economic system.

Third, we did it to ourselves. We indulged in cannibalism. We in the Gentry– especially the technology Gentry, which has been for quite some time the worst one– got so fixated on our own (relatively meaningless) individual intelligence that we became collectively stupid. As a result, we’ve been emasculated. When our employers began to impose rank-and-yank (or “stack ranking”) policies on us, we should have unionized, but we smugly assumed we wouldn’t be affected– “I’ll never be in the bottom 10 percent of anything”– and so we let the rat bastards divide us among ourselves. The limply-liberal left is as guilty as the right on this; rather than demand genuine social and economic progress for people in disadvantaged groups, we indulged in a virtue-signaling purity-testing cancel culture where people who said stupid things ten years ago get drummed out of the (dying anyway) Gentry.

Capitalist society allows the Gentry to exist for the purpose of cultural self-definition that obscures the machinations of the corporate system. Whether you like or hate the Soviet Union, the bare fact is that the Beatles did more to take down the Berlin Wall– to win the cultural war against the USSR– than MBA-toting synergizers (who are just a more expensive version of those hated Soviet bureaucrats) ever did. A society’s PR always comes from the middle of its socioeconomic spectrum. The upper class, which controls all the important resources and does no real work, tends to harbor so many moral degenerates that it’s best to conceal them. The lower classes are deprived of meaningful economic, social, geographical, or cultural choice and therefore inert relative to society’s self-conception; the world’s poor comprise the largest nation there is and it has no vote anywhere. It is the middle classes, then, who are expected to be other-than-commercial, and to operate at an apparent remove from the zero-sum power relationships and dirtbag machinations that actually dictate what goes on in a society such as ours.

Above, I’ve described the functional purpose of the Gentry, at which the current one has failed. Is there a moral case for a nation’s Gentry, though? I think so; but at that, too, we’ve failed utterly. The moral value of a Gentry, and of the national self-definition it enables, is that it can prevent Capital (the Elite) from dividing workers against each other. The 0.01%, being outnumbered, can only rule the 99.99% so cruelly by keeping the proletariat fractious. If the Gentry earnestly believes in a cohesive local, national, or global spirit and cause, it should resist these divide-and-conquer tactics. We, the American Gentry, have failed execrably at this. We’ve allowed Capital to make people who live in “red” states hate the people who live in “blue” ones and vice versa. We’ve let Capital exploit, rather than allow society to move beyond, archaic racial animosities. We’ve let them propagate everything from apocalyptic religious extremism (a “red” flavor of divisiveness) to commercialized sexual perversion (“hookup culture”, a “blue” flavor). All of this, Capital has done to pit working people against each other, and we’ve let them do it. Thus, we deserve (as a class) to die.

This explains the rise of authoritarian leaders like Trump, all over the world. In 2016, Labor gave its vote of no confidence in the Gentry by electing an unabashed Elite parasite. His supporters are not all stupid. They know he hates their guts, and they do not much love him, but they hate us even more– because we’re the ones who promised to protect them from global capitalism, and failed. We were utterly, in the language of organizational dynamics, Clueless.

What’s Next?

Marx was right. If there are stable social classes, there are exactly two of them. The cultural armor of the “middle class” is paper-thin. Education at a competent but unremarkable state university will not prevent someone from being sacrificed to corporate downsizing, and “connections” to people who are themselves unsafe are worth very little. You are not better than a poor person because you buy an expensive brand of candle. In the end, there are just two classes– those who must sell their lives to survive, and those who don’t. That is, there are those who win if nothing happens, and those who starve if nothing happens. There are those whose control of the world’s resources makes them dangerous to others, and there are those who are in danger.

In the American midcentury, nearly everyone identified as “middle class”, whether they were corporate executives or grocery-store clerks. It was common, and perhaps remains so, to equate “middle class” with a certain salt-of-the-earth, virtuous status. As I’ve said, that’s patently ridiculous. Social class is mostly inherited and the rest comes down to random luck. We are not better than those we deem as “lower class”– we’re just luckier. The identification with “middle class” is self-limiting because it seems to tacitly accept that some people will be handed better economic opportunities. To tolerate that those born “higher” get unfair advantages, because one is getting unfair advantages over a larger group of others, should never be cause for moral pride.

When I wrote those earlier essays in the first half of the 2010s, I believed in neoliberal technological capitalism. I’ll spare the reader my own career history, but it failed me. It has failed society, too. Now that I’m older and smarter, I would say that in broad strokes I am a communist. What I mean by this is that the long-term objective of humanity should be a post-scarcity, class-free society with the minimum amount of hierarchy necessary to function. Nation-states seem to be a protectionist necessity today, but their power should diminish over time, and global amelioration of scarcity ought to be the goal. Markets may persist as algorithms for the bulk allocation of resources (command economies performing poorly in this regard) but should never have moral authority to destroy human life. In an ideal world, most people will work (as the need to work is deep and psychological) but the right to refuse work (via universal basic income) must be protected, not only for the benefit of those unable to work, but because it is impossible to have dignified conditions for workers without it in place.

My old three-ladder theory used “Gentry” as a pseudonym for a middle class (or, under US-style class inflation, an upper-middle class) mentality and gave us the tools to identify thirteen distinct classes within our society… from the E1 crocodilians at the very top… all the way down to the Underclass bereft of connection even to the least-regarded of the three ladders. As a descriptive tool for analyzing early-2010s North America, I think the taxonomy had great value. But the Gentry is dead now and I’m not sure we should ache to have it back. Thirteen social classes is too many; three is too many. Zero is the right number. Distinctions of hereditary social class should be abolished and, seeing the atrocious job global capitalism’s current leadership has done with regard to climate change, public health, and economic management, this cannot be done too soon. Corporate capitalism delenda est.

The d8 Role-Playing System

Also posted on my Substack page.

The d8 System is a role-playing game system designed to be mechanically lightweight, so as not to break immersion, and to be modular. It enables experienced GMs (in Dungeons & Dragons parlance, DMs) to run campaigns in a manner similar to what they would use in other systems, but it also provides tools for extension. The long-term vision here is that designers and GMs can build and share modules specific to their preferred gaming styles and genres.

Since only the core mechanics (“Core d8”) have been written and no modules exist yet, d8 isn’t intended for use by inexperienced GMs.

What Is a Role-Playing Game, Anyway?

Ask ten role-players (including GMs) this question, and you’ll get d12 different answers.

I’m a novelist (Farisa’s Crossing, 2021), so I’ve encountered the various approaches to, and theories about, storytelling— they are too numerous to list here. I also studied math in college, so you’d rightly guess that I’ve spent far too much time analyzing game mechanics and their probability distributions. I know the “3d6” distribution by heart (the triangular numbers 1, 3, 6, 10, 15, 21, then 25, 27; and back down). I’ve also played a lot of RPGs, both electronic and tabletop. The d8 System exists for tabletop play, where the design problems are most interesting. A video game can do (and is expected to do) millions of tiny calculations per second; but a GM must construct the game world and its challenges, to some extent, on the fly. Her players will do things she didn’t expect; she will have to let the world respond in a way that doesn’t break immersion.

Players (and GMs) have a wide variety of tastes. Some want to see battles play out on hex paper and know exactly how much range a trebuchet has. Others prefer deep-character storytelling approaches and only want to know what their avatar in the game— their player character (PC)— is experiencing; they get upset if the GM says, “You lost 5 hit points”, because the character doesn’t know what ‘hit points’ are. Some players want realism; some want plot armor. There’s no right or wrong here; it’s all a matter of taste.

The core of the d8 System is designed to appeal to both statistician-strategists and holistic in-character gamers— to the extent that the two camps can never be satisfied perfectly, such refinements are left to modules. By design, little information is conferred through game mechanics that the characters themselves wouldn’t know. To get specific, numeric data usually take the form of small integers (whole numbers). Almost no character would experience an action as “a 67th-percentile performance” but they would know the difference between 2/fair and 3/good, between 3/good and 4/very-good, etc.

Statistics and systems should only be a small part of the role-playing game experience. Some players and GMs are happy to go “systemless” and let the game be a free-form interactive story. Others want the game to hold fast to a common language— because that’s what a role-playing system is, in a sense: a language— so they know precisely how likely a mounted barbarian is to hit a downed orc at night with a +1 Axe of Retribution. These things need to be resolved, and fairly quickly. Constant calculation, however, can bog the game down. If the GM and the players find themselves in an argument about whether a percentile roll of 78 suffices to hit the man on the privy with a crossbow bolt, versus whether it ought to have taken at least 79, then everyone has lost.

The GM has the hardest job. She’s the worldbuilder and storyteller, which means she has the godlike power to “decide” what happens to the players, even overruling the dice if she wants. This, of course, means that if she’s unskilled and impulsive the game might suck, and the players have the ultimate vote-with-their-feet power to quit a campaign if that happens. Like a novelist, she has to keep a storyteller’s paradoxical combination of unflappable authority and deep humility. Her players will suggest solutions and story arcs she didn’t think of; she’ll have to know when to adapt, and when to overrule them.

GMs have to keep balance between the players, which means keeping their power levels— the scope of what each can do, in the game— balanced. It’s not that much of a problem if the players as a whole are overpowered or underpowered, because the GM can adjust the power levels of the challenges they face; it’s much more of an issue if the characters differ wildly in how much they’re able to contribute. Most novels have one main character; in a role-playing campaign, everyone is the main character. The GM must reward clever, skillful play (so long as the skillful play is in character) while punishing bad decisions, brute force, disengagement, and out-of-character moves. If she’s good at her job, and if her players are receptive, they nearly forget that she exists (and that she is wholly responsible for the mess the characters are in) for a few hours, enough to immerse themselves in the story. The GM has to keep the fictive dream, to use John Gardner’s term, going.

The d8 Philosophy: Modular, Non-Judgmental

To be technical, Dungeons and Dragons isn’t “a role-playing game” but an RPG system. The same goes for GURPS, Fudge, and Warhammer. The game is what happens between the players and the GM; usually a “campaign” that unfolds over several sessions (sometimes, comprising hundreds of hours). Systems have a tradeoff between modularity and specificity. If a combat system is designed for swords, axes, and shields, it’s going to have low utility when applied to modern warfare. Combat systems bring specificity— they give useful information to the GM and players about what can be expected to follow from certain happenings— but reduce modularity: the assumptions they carry specialize them into certain styles and genres of role-playing game, and necessarily exclude others. That isn’t a bad thing; GMs and players benefit from knowing and agreeing on what style of game they are playing.

Core d8, favoring modularity, tries to make no genre or style commitments; that’s left to modules. The core system could be used for medieval high fantasy, but it could also be used for 1920s gangland Chicago, 1997 suburbia during a vampire fruit epidemic, or 23rd-century Budapest. What does that mean, in practice? The lack of specificity given by the core rules means it can be used profitably by an experienced GM who already knows what genre and style of campaign she wants to run, and who has the competence to do it. Novice and intermediate world-builders, on the other hand, will have to rely on d8’s module-writing community (which doesn’t exist yet, because this is day zero) if they want specificity.

I am tempted to say that the d8 System as given here (“Core d8”) is not an RPG system (like D&D or GURPS) but an RPG system system (system-squared?). It’s a system designed to help people build RPG systems (modules, sets of modules, etc.). It doesn’t tell you how many hit points (HP) a fighter or wizard should have— because it doesn’t decide whether fighters exist in your world, or whether HP exist in your world.

Health systems are a common point of debate, and a great example for me to use in showing the innate tradeoffs one makes when using a more specific toolset. The concept of hit points comes from tactical wargames and was originally a measure of structural integrity: how hard it was to sink a ship, take out a bridge, or raze a castle. In Dungeons & Dragons and related games, hit points are used both for the player characters (PCs) and their adversaries to represent how hard combatants are to kill; they keep battle quick and fun by leaving damage abstract. The ogre hits you; you take 12 damage. A wizard heals you; you recover 17 hit points. This, of course, isn’t a realistic model of mortal danger. People don’t lose “hit points”; they lose blood and skin and fingers and, if they’re really unlucky, vital organs. Any attempt to realistically model medieval life would require a roll of the dice on minor wounds, to see if they become infected. There are GMs and role-players who enjoy this kind of gritty realism; I would guess that they’re in the minority. Anyway, Core d8 doesn’t legislate. It doesn’t propose as canonical a health, leveling, class, combat, magic, or technology system— it doesn’t even mandate that one be used at all. There isn’t just one way to role-play.

Specificity is, nevertheless, important. Before a campaign begins, GMs and players should understand what kind of world is being built. What can happen, and what can’t? If players expecting medieval realism get genre-twisted into a sci-fi romp via time warp, they’re not going to be happy. Character death is probably the biggest non-genre (that is, stylistic) point where agreement should be established. RPGs tend to focus on daring adventures and perilous adversaries, and sometimes the PCs fail. What does death mean? In settings favoring realism, it means: the person’s life ends, as it does in the real world. (The player isn’t eliminated, but creates a new character. Death isn’t necessarily losing, and it’s not to be “punished.” It is part of the game.) On the other hand, in fantasy settings, death can be a minor inconvenience, to be expected on a routine jaunt to take Tiamat’s lunch money, reversible with a mid-level spell. GMs and players need to have some measure of agreement on what death means; but d8 isn’t going to tell them what it must be.

Why Use a System at All?

Role-playing games, as interactive stories, probably predate role-playing systems. And all GMs home-brew their games, to some extent, in that we basically ignore the rules we find annoying. Encumbrance is usually ignored within reason— a STR 6 wizard isn’t going to be wearing plate mail, but keeping track of whether a PC is carrying 19.9 pounds versus 20 is no one’s idea of fun. Most groups also assume an implicit “Bag of Holding”, because playing “tetris” with one’s inventory (real or virtual) is equally tedious. There are GMs who insist on playing battles out on hex or cartesian graph paper; there are others who keep them abstract. As I said before, some people want to know their precise percentage chance of killing that orcish lieutenant with a +2 Falchion, and others want to get deep into character and will find that “THAC0” breaks immersion.

Systems, well-used, improve the GM’s and PCs’ understanding of the world they live in, and what the capabilities of the characters (and adversaries) are. They give a sense of objectivity, so the players don’t think the GM is making rules up as she goes along— even though, to some extent, it is part of her job.

Health modeling (to HP or not to HP) is incredibly important, and probably where GMs (and PCs) lean most heavily on their systems. Players need to know how close their characters are to shuffling off the mortal coil. There tend to be two approaches to this. The abstract, hit-point-based system described above is one— in it, damage tends to be inconsequential until a player’s HP reach 0. At the other extreme, with an aim for biomedical realism, one has injury systems that tell players precisely what bodily agony their characters are feeling. In injury systems, characters don’t “lose 6 hit points”; they get fingers blown off and shoulders crushed by lead pipes. HP systems tend to work well in high-fantasy campaigns where combat is common and cannot be avoided; injury systems tend to work better in realistic settings where (as in real life) physical combat should usually be avoided as much as one can.

I’m an old man (well, 37) but I too was once a teenager too smart to believe in hit points. I designed “realistic” but complicated injury systems that were not at all fun to play. Now that I’m older and mountains are mountains again, I have an appreciation for HP systems. They keep combat moving, and for the most part they prevent stochastic permanent damage, which is an asset in a horror setting but unwanted and off-brand in epic fantasy. Are hit points unrealistic? Yes, and that’s the point.

Or, let me be more specific: hit points are not really a health system. What they model, and this is key in understanding the way in which HP are realistic, is the push-and-pull of combat— the fatigue and pain that build up from bodily abuse, the waning ability of a fighter’s range and determination to compensate for a failing body, and the (greatly increased, in real life as in RPGs) capacity of experienced fighters to keep pushing through.

The notion that hit points are wildly unrealistic tends to come from two sources. One is a simple misunderstanding of leveling— while level-10 characters are “mid level” in D&D, they are in fact among the best fighters in most worlds. From 1 to 10, each level represents about a factor of 10 in rarity, so merely level-3 characters are in the top 0.1% of adventuring experience, skill, and (plot-armor-slash-)luck. Levels 11 to 15 are superheroes; 16 to 20 are mythic heroes who emerge only in world-stakes conflicts that’ll be discussed for millions of years. In their proper (notably, counterfactual) context, I don’t think D&D’s leveling or its at-high-levels generous HP allotments are unrealistic. The other is a misunderstanding of what 0 HP means. It does strain credibility that a character can be “near death” (1 HP) and still fight at full power— but, under the modern interpretation of hit points, that’s not what it means to be at 1 (or 0) hit point. Zero doesn’t mean certain death, and it doesn’t even have to mean unconsciousness or total incapacitation. It’s the point at which single combat ceases to be a fight (and, if allowed to continue, becomes a depressing beatdown). 0 HP is the point where the referee of a fighting sport calls the match because the losing combatant can no longer defend himself.

A hit point system doesn’t preclude injuries; in fact, most D&D-style systems have plenty of ways for characters to get grievously injured (or killed) after falling to 0. The simplifying assumption is that injuries won’t happen during the “fair fight” phase (as opposed to the “beatdown” that may occur after one fighter has lost). That’s false, but it’s not that false. In pre-firearm single combat, it was pretty rare for people to suffer mortal wounds during the period in which it was still a two-sided fight. Armor makes it hard to pierce a vital organ. It isn’t chivalrous, but a medieval knight’s most common killing blow was nothing theatrical: a dagger to the throat of a downed or exhausted opponent. Similarly, bare-fisted combat has a low rate of fatal injury if the fight is stopped once properly over, which is why fighting sports have (in comparison with other sports, and of course we are learning about the cumulative effects of seemingly minor injuries) a reasonable safety record.

What 0 HP represents is not character death but the loss of fighting capacity. What happens when all PCs are reduced to 0 HP (“total party wipe”) depends on the motivations of their adversaries. If they’re wiped out by brigands, they can be robbed and “left for dead” but survive, because murder is something even scumbags don’t do lightly. If they fall to some malignant force, like a lich or an indifferent stone giant, the campaign may well be over.

Real-world fights (street assaults) are hideous, depressing affairs. Many are ambushes; people don’t fight fair. If the fight is balanced at all, it usually ends (and may turn into a dangerous beatdown) in a few seconds. Often, the scumbag will win because (a) the scumbag was planning the assault, and (b) non-scumbags very rarely get in street fights after their early teens. This sort of thing isn’t what we want in high fantasy. We don’t want to see ambushes and beatdowns— we want the long cinematic fights where the opponents fight with honor and bravery, and wherein someone can return from the brink of failure (1 HP) to victory. We don’t, in fantasy, want the fights that end because a weapon breaks, or because someone slips in horse shit, or because artillery fire demolishes a combatant’s head, rendering the whole sequence of events moot.

Whether to use HP or an injury system depends on genre and style preferences. Some GMs want to build, and some players want to live in, a gritty dangerous world where a rabid dog or a teenager with a knife is a real threat— a world where getting sucker punched means entering a fight with a disadvantage and possibly for only that reason losing to a worthless, unskilled opponent.

What I think makes people unhappy is when systems try to blend the two. For example, GURPS has a rule by which characters’ physical abilities degrade through general damage— at 33% of their maximum HP, they are bad off. If I were in a fantasy campaign, I would call that a misfeature. Here’s the thing: if it were me battling ogres, I’m sure my physical abilities would degrade after the first blow. However, D&D adventurers are experienced warriors, not regular people like me. The system is calibrated for an immense dynamic range— from level 1 characters who must be played conservatively, because bears and wolves and superior warriors can still destroy them, to level 20 demigods who can punch a dragon in the taint and run— and, seeing as level 1 characters don’t have many hit points at all, I don’t consider its modeling unrealistic. It’s plausible to me that, at high levels, these people can— through adrenaline, rage, and grit— keep fighting until their bodies give out.

As one can see, there’s no single right way to build a role-playing game, a system, or a world. Whether it’s better to use abstract damage (hit points) or a biomedically realistic injury system where every fight is a losing fight, that’s a genre and style concern, and there’s no right answer. In any case, the real objective for the GM in a role-playing game isn’t realism, but immersion. If a mechanic breaks immersion and becomes, rather than a modeling tool, a force unto itself, it should probably not be used.

As systems become more specific (and less modular) they impose constraints. These can focus, direct, and inspire a GM’s creativity, especially if she can trust that the mechanics are sound and well-tested. At the same time, they can stifle creativity and direct player focus to the wrong things— in which case, she should discard them.

Core Mechanics: Yes, It’s All About Dice

GMs of high skill can run campaigns without any random elements such as dice; at that point, it’s a “system-free” interactive story. Still, I think most GMs prefer to have randomizers. It helps immersion for players to think their characters are in a game rather than in a world built from one person’s imagination. Randomizers can vary. I once played a short campaign on a hike where we used the 0.01-second digit of a stopwatch. I’ve also seen GMs run campaigns using only tarot cards. Dice, however, are the go-to tool. They’re objectively numerical; as random influences, they help players forget that the game is actually under the GM’s complete control. Sometimes, the dice “speak for themselves” and suggest a course for the story very different from what anyone had intended. These injections of random chance make the game world feel more real or, in literary terms, more “lived in”.

The standard role-playing dice set consists of six sizes of dice: four-sided (d4), six-sided (d6), eight-sided (d8), ten-sided (d10), twelve-sided (d12), and twenty-sided (d20). Other sizes exist, but those tend to be expensive and unnecessary; it’s rare that role-playing games need to model an exact 1/7 probability (for which a d7 would be used)— typically, 3/20 is close enough. The d100 (or d%, or “percentile roll”) is a commonly called-for roll, but it’s achieved with two d10s of different colors. An actual d100 is nearly spherical and almost impractical to use.

The d8 System is designed to run using only 8-sided dice (hence the name). Module writers and GMs are free to incorporate d12s and d20s and tarot cards, but the core system can be run with a handful of 8-sided dice.

The statistical engine of many systems is the “percentile roll”, the d100 synthesized from two visually distinct d10s. If the GM or the module specifies that there must be an 18% chance of rain, each day, in a given setting, then a d% is the way to go about it: 1–18: rain, 19–100: no rain. In general, though, it’s lacking because we don’t think in percentiles. Should “12th percentile” weather in late March in Manhattan be a cold drizzle, or a sleeting horror show? Is a 73rd-percentile ice cream sandwich sufficient to give +1 Morale, or does it have to be 74th percentile to have that effect? People can perceive four to seven meaningful levels of quality in most things, not 100.
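The two-d10 construction mentioned above can be sketched in a few lines of Python; the function names here are mine for illustration, not part of any system:

```python
import random

def roll_percentile(rng=random):
    """Synthesize a d% (1-100) from two visually distinct d10s:
    one die is read as the tens digit, the other as the ones digit,
    and a "00" is read as 100."""
    tens = rng.randrange(10)  # the "tens" die: 0..9
    ones = rng.randrange(10)  # the "ones" die: 0..9
    value = 10 * tens + ones
    return 100 if value == 0 else value

# The 18%-chance-of-rain example resolves as "roll 18 or less":
def rains_today(rng=random):
    return roll_percentile(rng) <= 18
```

Every value from 1 to 100 is equally likely, which is exactly why the roll feels over-precise for most in-game judgments.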

Percentile rolls can force GMs to set precise numerical probabilities on events, rather than letting the system, through its modeling capacity, figure out what those probabilities might be. If a regular person has a 20% chance of making a jump, what are the odds for a skilled circus performer? Eighty percent? Ninety percent? How do we adjust the odds if the lights go out, or the performer is recovering from an injury? This sort of thing is the core of what we use role-playing systems to model.

Linearity— A Criticism of d20

What’s wrong with D&D’s d20 System? Objectively, nothing. As I’ve said, there are no absolutes, only tradeoffs— for simplicity it can’t be beat. It’s an improvement on the percentile roll— 20 levels, instead of 100— but it still has the issue of linearity, which means it lacks a certain realism.

Here’s the problem, in a nutshell. As I said, the dice resolve conflicts between the PCs and the environment. When the character wants to do something at which he might not succeed, and the GM decides to “let the dice speak”, it’s called a trial or check; the system is there to compile situational factors and (without requiring advanced statistical training on the GM’s part) produce a reasonable estimate of the PC’s likelihood of success.

Here’s an example that shows the issue with linearity: Thomas has a 50% chance of convincing NPC Rosalind to help him get to the next town. Or, he would; but he’s wearing a Ring of Persuasion, which increases his likelihood to 75%. Additionally, he and Rosalind share the same native language. Thomas, wisely, uses it to communicate with her, and gains the same degree of advantage (50 to 75 percent). If he has both advantages, how likely is he to succeed in getting Rosalind to help him out?

A linear system like d20 says: 100 percent. Each buff is treated as a +5 modifier, or a 25-percentage-point bonus. They combine to a +10 modifier, or a 50-percentage-point bump, and Thomas is guaranteed to succeed. Is that accurate? I’ve modeled these sorts of problems for a living, and I would say no. What is the right answer? Well, models vary, clustering around the 90% mark, and I’d consider any number between 87 and 94 defensible— and since gameplay matters more than statistical perfection, I’d also accept 85 (17/20) and 95 (19/20) percent. At any rate, the difference between 50% and 75% is about the same as that between 75% and 90%, which is about the same as that between 90% and 96%.
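One model that yields the ~90% figure treats each advantage as an additive bonus in log-odds (logit) space rather than in percentage points. This is a sketch of that reasoning, not how d20 or any published system works:

```python
import math

def logit(p):
    """Probability -> log-odds."""
    return math.log(p / (1 - p))

def inv_logit(x):
    """Log-odds -> probability."""
    return 1 / (1 + math.exp(-x))

base = 0.50                       # Thomas's unaided chance with Rosalind
bonus = logit(0.75) - logit(base) # one advantage is worth log(3) in log-odds
                                  # (the odds go from 1:1 to 3:1)

one_buff = inv_logit(logit(base) + bonus)       # back to 0.75
two_buffs = inv_logit(logit(base) + 2 * bonus)  # 0.90: odds of 9:1, not certainty
```

In this space the steps from 50% to 75%, 75% to 90%, and 90% to 96% are all nearly the same size, which is the equivalence claimed above.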

The fact that linear models “are wrong” isn’t the worst problem. If a player gets a free pass (100%, no roll) on what “should be”, per statistical realism, a 90% shot, that’s not a big issue. No one’s going to feel cheated because he didn’t have to roll for something he had a 90% chance of pulling off anyway. If it goes the other way, turning what ought to be a 10% chance into zero, that’s more of an issue— it is the system rather than the game (the play, and the dice, and the reasonably-estimated difficulty of the task being modeled) that is making an action impossible. And that’s not ideal. Still, though, the biggest problem here is that, because two “+5 modifiers” stack to turn a 50/50 shot into a certainty, or an impossibility into a 50/50 shot, we rightly conclude that +5 modifiers are huge. Then, most of the situational modifiers used and prescribed by the literature are going to be smaller, more conservative ones— ±1 and ±2— to avoid generating pathological cases. But in the mid-range (that is, when the probability is right around 50%) these modifiers are so tiny, almost insignificant, that they become inexpensive from a design perspective. This encourages a proliferation of nuisance modifiers and rules lawyering and realism disputes. Should that pesky -1, from a ringing in a PC’s ear, really turn a 10% chance into a 5% chance? Shouldn’t the GM see the unfairness of this, and waive the -1 modifier? Maybe this needs to be a Sensory Exclusion check— now, do we use INT or WIS? And so on. The system’s quirks intrude on role play.

It’s better, in my view, to have infrequent modifiers; when they exist, they should be significant. If they’re not substantial, let the system ignore them. The failing of d20’s linearity is that it’s coarse-grained at the tails, where we actually benefit from the understanding a fine-grained system gives us— there’s a difference between a 95th-percentile and a 99th-percentile outcome, in most cases— but fine-grained in the middle, where we don’t need that precision. A “+1 modifier” is major at the tails, imperceptible in the midrange… which means we lack an intuitive sense of what it means.

GURPS uses 3d6 instead of d20. This is an improvement, because the system is finer-grained at the tails and coarser in the middle where (as explained) we don’t need as much distinction— 3 (in GURPS, low is good) is “top 0.46%” whereas 10 is “50th–63rd percentile”. Fudge, in the same spirit, uses 4dF, where a dF is a six-sided die with two sides each marked [+], [ ], and [-], corresponding to values {+1, 0, -1}. Notably, it gains an aesthetic advantage (for some) of making results visible without calculation. Cancel out the +/- pairs (if any) and what’s left is the result. Fudge also eschews raw numbers in favor of verbal descriptions: a Good (+1) cook who has a Great (+2) roll will produce a Superb (+3) dish; if she has a Mediocre (-1) roll, she’ll produce a Fair (0) one.
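The percentile figures quoted for 3d6 can be checked by brute-force enumeration of all 216 outcomes; a quick sketch:

```python
from itertools import product

rolls = [sum(t) for t in product(range(1, 7), repeat=3)]  # all 216 outcomes of 3d6
n = len(rolls)

p3 = rolls.count(3) / n                       # 1/216, i.e. "top 0.46%" (low is good)
at_most_9 = sum(r <= 9 for r in rolls) / n    # 37.5% of rolls beat a 10
at_most_10 = sum(r <= 10 for r in rolls) / n  # 50% of rolls are 10 or less
# Under low-is-good, rolling a 10 beats the 50% who rolled 11+, and loses
# to the 37.5% who rolled 9 or less: a 50th-62.5th ("63rd") percentile result.
```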

The linearity of d20 comes from its core random variable being sampled from a (discretized) uniform distribution, thereby assuming that the “nudge” it takes to turn a 50th-percentile performance into a 75th-percentile one is the same as the nudge required to turn a 75th-percentile performance into a 100th-percentile (maximal) performance. That’s false, but the falsity isn’t the issue, because all models contain false, simplifying assumptions. Summed-dice mechanics (3d6 or 4dF instead of d20) give us something closer in shape to a Gaussian or normal distribution, and in some cases that’s the explicit target. That is, the designers assume the resolved performance P of a character with ability (or skill, or level) S shall be: P = a*S + b*Z, where a and b are known constants and Z is a normally distributed random variable. It’s not all that far off; one can do a lot worse. That said, I think it’s possible to do a lot better.

What’s wrong with a normal distribution? For one thing, it’s not what we’re getting when we use 3d6 or 4dF. Those mechanics are bounded. If you’re a Mediocre (-1) cook, you have a zero-percent chance of producing a dish better than Superb (+3). For food preparation, that seems reasonable, but is the probability of a Mediocre archer hitting a dragon in the eye really zero point zero zero zero, or is it just very small? Again, if the system “behind the scenes” makes things that should be improbable, improbable, that’s not an issue— but the system shouldn’t be intruding by making such things impossible. One fix to this problem is to say that certain outlier results (e.g., 3 and 18 on 3d6, -4 and 4 on 4dF) always succeed or fail, but the system is still intruding. Another fix is chaining: on a maximal (or minimal) result, roll again and add. So, +4 (on 4dF) followed by another +4 is +8. Okay, but can chaining make things worse— can +4 followed by -4 make a net 0? If that’s a possible risk, can players choose not to chain?
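One answer to the “can chaining make things worse?” question is to let the extra roll count only when it helps. A sketch of that variant (my illustration, not a canonical rule):

```python
import random

def roll_4dF(rng=random):
    """Sum of four Fudge dice, each uniform on {+1, 0, -1}."""
    return sum(rng.choice((-1, 0, 1)) for _ in range(4))

def chained_4dF(rng=random):
    """4dF with one step of upward-only chaining: a maximal (+4) result
    earns a second roll, which is added only if it is positive. The chain
    can push the result above +4 but can never drag it back down."""
    total = roll_4dF(rng)
    if total == 4:
        total += max(0, roll_4dF(rng))
    return total
```

With this rule, +4 followed by -4 stays at +4, so players never need to decline a chain; the system stops intruding on the upside without opening a downside.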

The boundedness itself isn’t the real problem, though. The actual Gaussian distribution isn’t bounded— a result 4 or 6 or 8 standard deviations from the mean is theoretically possible, though exceedingly unlikely— but it still isn’t what we want for gameplay; its tails are infinite but extremely “thin”.

Fudge can have what I’ve heard called the “Fair Cook Problem”. For this reason, many players prefer to use 3dF or 2dF. With 4dF, it is possible for a Fair (0) cook to produce a Legendary (+4) dish, but he is equally likely (1/81) to produce an Abysmal (-4) dish and make everyone sick. At 1-in-81, we’re talking about rare events, so that’s not much of a concern on either end; but 4dF also means 5% (4/81) of his dishes are Terrible and 12% (10/81) are Poor. That’s more of a problem. We wouldn’t call someone with this profile “a Fair Cook”. We’d fire him, not because he occasionally (1/81) screws up legendarily— we all have, at one thing or another— but because of his frequent, moderate screw-ups. At the same time, if we drop to 2dF, we lose a lot of the upside variation that makes RPGs exciting— 77% of the rolls will be within one level of his base (plus or minus modifiers) so why don’t we just go diceless? Using 2dF imposes draconian conditions on what can happen and what cannot— the system is deciding— whereas 4dF lets the dice speak but they get loud and never shut up.
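The fractions in the Fair Cook argument can be verified by enumerating all 3^4 = 81 outcomes of 4dF:

```python
from itertools import product
from collections import Counter

# All 81 outcomes of 4dF, each die uniform on {+1, 0, -1}:
outcomes = Counter(sum(t) for t in product((-1, 0, 1), repeat=4))
total = sum(outcomes.values())  # 81

legendary = outcomes[4] / total   # 1/81: the Fair cook's Legendary dish
abysmal = outcomes[-4] / total    # 1/81: ...and his Abysmal one
terrible = outcomes[-3] / total   # 4/81, about 5%
poor = outcomes[-2] / total       # 10/81, about 12%
```

The rare tails (1/81 each) are fine; it’s the 4/81 and 10/81 mid-range misses that make the “Fair” label untenable.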

For this reason, I advise against using the Gaussian distribution for the core mechanic of one’s role-playing system. It’s too thin-tailed. Although outliers are rare by definition, we need to feel like they’re possible, which means they need to happen sometime. What we don’t want are frequent moderate deviations (Poor dishes from Fair cooks) that muck up the works and turn the game into a circus. In technical terms, we want kurtosis; we probably also want some positive skew.

In addition to this observation, the real-world normal distribution is continuous and its natural units (standard deviations from the mean) feel bloodless. Is “0.631502 standard deviations below average” meaningful? Not really. It has the same problem as the percentile roll. I just don’t know what “a 31st-percentile result” is supposed to mean. As I said, we can only distinguish about seven levels of quality among outcomes— and, in most cases, fewer than that. We don’t want to think about tiny increments. Whatever our “points” are, we want them in units that matter: not that the fisherwoman had a 37th-percentile (or 12, or -1) day, but how many fish she caught. No fish, one fish, or two fish? Red fi— never mind. Fish. The French word for fish is… what again? Poisson.

The “Poisson Die”

Here are the design constraints under which I built the d8 System:

  • (i) the core random variable must be implementable using a small number of regular (d4, d6, d8, d10, d12, d20) dice and simple mental arithmetic. No immersion-breaking tables or calculators.
  • (ii) the output should have a small number of levels that represent discrete qualitative jumps; not the 16 levels (3–18) of 3d6 or 100 of d100.
  • (iii) the system should be unbounded above. Except when there’s a character-specific reason (e.g., disability, moral vow) that a PC cannot do something, there should be a nonzero chance of him achieving it, even if the task is ridiculously hard. (Probability, not the system, should limit the PC.)
  • (iv) chaining, or the use of further dice rolls for additional detail on extreme results (e.g. “roll d6; on 6, roll again and add the result”) is permissible upward, but not downward. Chaining can improve a result or leave it unchanged; it can be used to determine how well someone succeeded, but not how badly he failed (“roll of shame”).
  • (v) it should be easy to correlate a performance level to the skill level of which it is typical. This is something Fudge does well: a Good (skill level) cook will, on average, produce Good (result level) food.

How do we meet these criteria? (Here’s some technical stuff you can skip if you want.) Between (ii) and (iii), there seems to be a contradiction: (ii) wants us to have “a small number of” discrete, separable qualitative levels, and (iii) demands unbounded possibility upward. This isn’t hard to resolve: we can have an infinite number of levels in theory, so long as the levels are meaningfully separate— lowest (“0”) from second lowest (“1”), “1” from “2”, and so on. The infinitude of possibilities isn’t a problem as long as results of 10+ aren’t happening all the time. This favors a distribution that produces only integers, which is also a good match for dice, which produce integers.

The Poisson distribution models counts of events (which could be “successes” or could be “failures”— it does not care whether the events are desirable). Poisson(1) is the distribution of the count of an event during an interval X if it happens once every X on average. If lightning strikes 2 times per minute, the distribution governing a 15-second interval will be Poisson(0.5) and that governing a 60-second interval will be Poisson(2).

For an integer m, a Poisson(m) distribution produces m on average, so we can naturally correlate skill and result levels. If a character of skill 4 gets a Poisson(4)-distributed result, then we know that a result of 4 is average at that level. They also sum nicely: if you add a Poisson(m) variable and a Poisson(n) variable, you get a Poisson(m+n) variable, which means that statistically-minded people like me have a good sense of what’s happening. It also means that, if we can simulate a Poisson(1) variable with dice, we can do it for all integer values.

Finally, the Poisson distribution’s tail decay is exponential as opposed to the Gaussian distribution’s quadratic-exponential decay. This has a small but meaningful effect on the feel of the game. Difficult, unlikely endeavors still feel possible— we can imagine having several lucky rolls in a row, because sometimes it actually happens— so it doesn’t feel like the system itself is imposing stricture.

Can you sample from a Poisson(1) distribution using dice? Not perfectly; the probabilities involved are irrational numbers. For our purposes, the most important probability to get right is Pr(X = 0), which for Poisson(1) is 1/e = 0.367879…; as rational approximations go, 3/8 = 0.375 is good enough. (One can do better using d30s— this is detailed below— but I don’t think the extra accuracy is worth the cost. GMs and players benefit from the feel of statistical realism, but I don’t think they care about Poisson distributions in particular.)

To roll n Poisson dice, or ndP:

  • roll n d8’s. A 4 or higher is 1 point (or “success”); a 7 is double, an 8 is triple.
  • for each 8 rolled, roll it again. On 1-7, no change. On 8, add 1, roll again, repeat.

So, if a player is rolling 4dP and gets {2, 3, 6, 8}, we interpret the result as 0 + 0 + 1 + 3 = 4. We chain on the 8: if we get, say, a 3, we stop and 4 is our result. That’s an average outcome from 4dP, but a 1-in-64 result from 1dP.
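The mechanic is easy to script. Here’s a minimal sketch in Python (the function names are mine, purely illustrative, not part of any official d8 implementation):

```python
import random

def roll_dp(rng=random):
    """Roll one Poisson die (dP): a d8 where 1-3 = 0 points, 4-6 = 1,
    7 = 2, 8 = 3; each 8 chains (re-roll; +1 per further 8, stop otherwise)."""
    face = rng.randint(1, 8)
    points = (0, 0, 0, 1, 1, 1, 2, 3)[face - 1]
    while face == 8:
        face = rng.randint(1, 8)
        if face == 8:
            points += 1
    return points

def roll_ndp(n, rng=random):
    """Roll n Poisson dice (ndP) and sum the points."""
    return sum(roll_dp(rng) for _ in range(n))

print(roll_ndp(4))  # one 4dP result; e.g., the {2, 3, 6, 8} roll above scores 4
```

Over many rolls, about 37.5% of single dice come up 0, matching the 3/8 ≈ 1/e approximation described below.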

For easier readability, you can buy blank d8’s and label them {0, 0, 0, 1, 1, 1, 2, 3}. You’ll typically be rolling one to four of these, so four such dice per player (including GM) is enough.

Here’s a table (graph also on site) that shows how 2dP tracks against Poisson(2). Are there more complicated methods that are more accurate? Of course. Is it worth it, from a gameplay perspective? Probably not. The dP, as described above, does the job.
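For those who want to check the fit without the graph, the single-die distribution can be written down exactly— P(0) = P(1) = 3/8, P(2) = 1/8, and P(k) = (7/8)(1/8)^(k−2) for k ≥ 3, from the chained 8s— and convolved to get 2dP. A sketch (function names mine):

```python
from math import exp, factorial

def dp_pmf(max_k=20):
    """Exact pmf of one Poisson die, truncated at max_k."""
    return [3/8, 3/8, 1/8] + [(7/8) * (1/8) ** (k - 2) for k in range(3, max_k + 1)]

def ndp_pmf(n, max_k=20):
    """Pmf of the sum of n Poisson dice, by repeated convolution."""
    pmf, single = [1.0], dp_pmf(max_k)
    for _ in range(n):
        out = [0.0] * (max_k + 1)
        for i, a in enumerate(pmf):
            for j, b in enumerate(single):
                if i + j <= max_k:
                    out[i + j] += a * b
        pmf = out
    return pmf

def poisson_pmf(lam, k):
    return exp(-lam) * lam ** k / factorial(k)

for k in range(6):
    print(k, round(ndp_pmf(2)[k], 4), round(poisson_pmf(2, k), 4))
```

The two columns track closely; the dP puts slightly more weight on 0 and 1 (3/8 vs. 1/e per die) and slightly less on 2.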


Unlike a board game whose state can be represented precisely, the environment of a role-playing game is purely or mostly verbal, and the game state is a collection of facts about the world that the GM and players have agreed to be true. Fact: there’s a goblin 10 feet away. (PC stabs the goblin.) New fact: there’s a dead goblin at the PC’s feet. A character sheet contains all the important facts about a player’s character. Erika is 28 years old. Jim is a member of a disliked minority. Sara has eyes of different colors. Mat’s religion forbids him from eating meat.

The facts above are qualitative, which doesn’t make them less important, but they’re not what RPG systems exist to model. GMs and players decide what they mean and when they have an influence on gameplay (if they do). The system itself isn’t going to say what it means that Sara’s eyes are of different colors. It’s the quantitative measurements of characters— what they do; how they match up against each other and the world— that an RPG system cares about. In D&D, a character with STR 18 is very strong while one with STR 3 is very weak; the former is formidable in combat, but the latter can’t pick up a sword.

In the d8 System, these quantitative attributes are all Skills, which range from 0 to 8, though entry-level characters will rarely go above 4. For Skills, 1 represents a solidly capable apprentice who can do basic things without supervision and, in time, can produce solid work; 2 represents a seasoned journeyman; 3 represents a master craftsman; 4 represents an expert of local-to-regional renown. 5 is approaching world class, and 6–8 are very rare.

Skills can be physical (Weightlifting, Juggling, Climbing) or academic (Chemistry, Research, Astronomy) or social (Persuasion, Deception, Seduction) or martial (Archery, Swordsmanship, Brawling) or magical (Telekinesis, Healing, Pyromancy). Each campaign world is going to have a different Skill tree, and a GM can choose to have very few Skills (say, ten to fifteen for a single-session campaign) or a massive array (several hundred), although it’s best not to start with several hundred available Skills among new players.

The more Skills there are, the more specialized and fine-grained they will be. For a coarse-grained system, Survival, Bargaining, and Elemental Magic would be skills. In a more fine-grained system, you’d split Survival into, say: Trapping, Fishing, Camping, and Finding Water. Elemental Magic would become Fireball, Cold Blast, Liquefy Ground, and Move Metal. Bargaining would become Item Appraisal, Negotiation, and Sales Instinct.

Also, things that we assume most or all people in the campaign world can do, do not require Skills. In a modern setting, “Literacy 2” (by a medieval standard) would be assumed, and if someone was well-read they would probably have “Literacy 3”— but we wouldn’t bother writing it down; it can mostly be assumed that a 21st-century American can read and can drive.

As a campaign goes on, and as players do harder and more specialized things, the Skill tree is going to grow. There’s nothing wrong with that. Of course, GMs are going to want to start with a list of basic Skills they think will be useful in the game world. Here’s how I’d recommend doing that: start by listing classes that would be useful in the game’s world. That doesn’t mean the GM is committing to a class system— the classes are “metadata” that will be thrown away, allowing players to invest points as they will. Here are twelve archetypes that might befit a typical late-medieval fantasy world.

  • Soldier (swords, spears, armor, defense).
  • Barbarian (axes, hammers, strength).
  • Ranger (survival skills, defensive fighting, animal husbandry).
  • Rogue (thievery, deception, evasion, sabotage).
  • Healer (defensive magic, curative spells, medical knowledge).
  • Warlock (offensive magic, conjuration, elemental magic).
  • Wizard (buffs/debuffs, combination magics, potions).
  • Monk (bare-fisted fighting, “inner” magic).
  • Merchant (social skills, commerce, regional knowledge).
  • Scholar (chemistry, engineering, historical knowledge, foreign languages).
  • Bard (arts, seduction, high-society knowledge).
  • General (oratory, battle tactics, military knowledge).

For some value of N, generate N primary Skills appropriate to each class. For a coarse-grained Skill system, one might use N = 4; for a fine-grained one, consider N = 10. If a skill doesn’t fit into a class, add it anyway. If it fits into more than one, don’t worry about that; these classes are just for inspiration. In general, I wouldn’t worry about the complexities of Skill trees (specialties, neighbor Skills, etc.) for an entry-level campaign.

When circumstances create new Skills, GMs have to decide how to “spawn” them. The population median for most Skills is zero, so most characters won’t have a given Skill— but if a player’s back story argues for some exposure, that might make the case for a level. Of course, GMs have to keep player balance in mind while doing this.

As player characters improve, the numbers will increase, but that can be boring. Rolling five dice when you used to roll four is fun, but eventually it’s all just rolling dice. Once players are hitting the 3–5 range, it’s time for the GM to start thinking about specialties. A character can have Medicine 4 and no experience with surgery. We could model a very high-risk surgery as a Difficulty 6 task using Medicine— the player rolls 4dP and has to hit 6 to succeed— but it would be more precise to model it as a Difficulty 3 trial of a harder and more specialized skill: Surgery.

As PCs do harder, more interesting things, the Skill tree may become an actual tree.

There are three ways skills relate to each other. A hard dependency means the parent must be achieved, at each level, before the dependent skill can be learned. When hard dependencies exist, there’s usually a slash in the name of the more specialized skill, e.g., Writing/Fiction or Writing/Poetry. It is impossible for a character to get Writing/Fiction 4 without having Writing 4. Soft dependencies are more lenient: the character’s level in the specialty can exceed that of the parent Skill, as long as there’s nonzero investment in the parent skill— however, the Skill gets harder to improve as the discrepancy grows. Someone could, say, have Medicine 3 and Surgery 4— above-average medical knowledge but fantastic in the operating room— but Surgery 4 (or even Surgery 1) without Medicine isn’t possible. Neighbor Skills do not have a prerequisite relationship, but one can substitute (at a penalty) for the other. If a PC has Low Dwarven 3 and has to read a scroll in High Dwarven, he might be able to read it as if he had High Dwarven 1 or 2.

GMs should, as much as possible, flesh out these relationships before characters are created. An entry-level Skill tree isn’t likely to have much specialization, so hard dependencies will be rare, if used at all. Typically, all of the primary Skills are going to be soft-dependent on parents called Aptitudes— in the example I give below, social Skills would be soft-dependent on Social Aptitude, athletic Skills on Physical Aptitude, and so on.

For each primary Skill, the GM should decide:

  • which Aptitude the Skill is soft-dependent on, and
  • each Skill’s Complexity: Easy (-1), Average (-2), Hard (-3), Very Hard (-4), or Extremely Hard (-5).

Complexity doesn’t measure innate difficulty but relative difficulty— how much additional investment is required to learn the skill. Lifting weights isn’t “Easy”— I’m exhausted after a good session at the gym— but I’d probably model Weightlifting as Easy relative to the Aptitude, Strength: it’s not hard for a character with Strength 3 to get Weightlifting 3.

Fungibility is another factor GMs should determine. Let’s say that Weightlifting is Easy (-1) and Rock-Climbing (-2) is Average relative to Strength. If these Skills are fungible, then a character with Strength 4, and no prior investment in either skill, implicitly has Weightlifting 3 and Rock-Climbing 2. If they’re non-fungible, then the character doesn’t, and will be unable to perform the task without prior investment in the skill.

By default, Easy and Average Skills are fungible by their parents (Aptitudes for primary Skills, broader fields for specialties) whereas Hard+ Skills are not fungible. GMs can overrule this on a per-Skill basis— the GM might decide that Surgery is Average (-2) relative to Medicine but non-fungible. Then, while a character with Medicine 4 can get Surgery 1 rather quickly (having mastered the parent Skill) he doesn’t implicitly have it without investing in the Skill.

GMs may vary fungibility at the task level. For example, a GM might allow a brilliant-but-unscrupulous (indeed, they sometimes go together) charlatan with Medicine 4 (but no Surgery) to roll 2dP for the task of faking surgical knowledge, but be utterly incapable should he actually have to do it.

Primary Skills (that is, Skills that aren’t specialties of other Skills) are almost always soft-dependent on an Aptitude— these play the role of “ability scores” in other systems, and they function as Skills for learning Skills, but they’re also Skills in their own right. Whether they represent (small-s) skills that can be improved with practice, or immutable talents, is a matter for the GM.

If the GM’s objective is realism, it should be incredibly uncommon for Aptitudes to go more than one level above where they started; but, toward the objective of modularity and optimism about human potential, the d8 System doesn’t prohibit Aptitude improvement.

Here are some ways in which Aptitudes are different from regular Skills:

  • Everyone has them. The population median for a typical Skill is zero. Most people have never been Scuba Diving and most people outside of Germany don’t speak German. On the other hand, nearly all of us use Manual Dexterity and Logical Aptitude on a daily basis. For a typical Aptitude, the population median is 1–2; 1 for people who don’t use it in daily life, and 2 for people whose professions or trades require it.
  • They change slowly. Improving Skills takes a lot less effort than improving Aptitudes. It’s not uncommon for a mid-level character’s top skills to hit 5 and 6; but Aptitudes of 5+ should be very rare (they can break the game).
  • Below 1, fractional values (½, ¼) exist. For other Skills, the lowest nonzero value is 1. It’s just not useful to keep track of a dilettante having “Battle Axe ¼”. On the other hand, for core Aptitudes like Strength and Manual Dexterity, there’s a difference between extremely (¼) or moderately (½) weak and “Strength 0”, which to me means “dead” (or, at least, nearly so).

One converts from D&D’s 3d6 scale roughly as follows: 3: ¼; 4–7: ½; 8–11: 1; 12–14: 2; 15–17: 3; 18–19: 4; 20+: 5+.
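For convenience, that conversion as a function (a sketch; the breakpoints are exactly those above, and the name is mine):

```python
from fractions import Fraction

def aptitude_from_3d6(score):
    """Rough conversion from D&D's 3d6 ability-score scale to a d8 Aptitude."""
    if score <= 3:
        return Fraction(1, 4)
    if score <= 7:
        return Fraction(1, 2)
    if score <= 11:
        return 1
    if score <= 14:
        return 2
    if score <= 17:
        return 3
    if score <= 19:
        return 4
    return 5  # 20+ maps to "5+"; the exact value is the GM's call

print(aptitude_from_3d6(18))  # 4
```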

What Aptitudes exist in a campaign is up to the GM. Theoretically, a GM could run an Aptitude-free system, in which learning Skills is equally easy (or hard) for all characters. Usually, though, players and GMs want to model a world where people have different natural talents.

For a fantasy setting, my “core set” consists of the following:

  • Physical: Strength, Agility, Manual Dexterity, Physical Aptitude, Stamina.
  • Mental: Logical Aptitude, Creative Aptitude, Will Power, Perception.
  • Social: Charm, Leadership, Appearance, Social Aptitude.
  • Magical: Magic Power, Magic Resistance, Magical Aptitude.

Note: these Aptitudes are not part of “Core d8”— the d8 System doesn’t mandate you use them— though I refer to them for the purpose of example. In a science-fiction setting, Strength isn’t that important and would perhaps be combined with Stamina. In a non-magical setting, discard the Magical ones.


Some of these— Strength, Agility, Will Power, Perception, Charm— are likely to be checked directly. A PC makes a Strength check to open a heavy door, a Charm check to determine an important NPC’s reaction to meeting him for the first time, a Will Power check to determine whether he is able to resist temptation. Those with Aptitude in their name largely exist as Skills for learning other Skills— most primary Skills will be soft-dependent on (“keyed on”) them. So, while Weightlifting will be keyed on Strength, Sprinting on Agility, and fine motor (s|S)kills on Manual Dexterity, most athletic Skills will key on Physical Aptitude (kinesthetic intelligence).

This means it is possible, for example, that a PC has Charm 4 but Social Aptitude 1— he’s very good at making positive impressions on people, but learning nuanced social (s|S)kills is difficult for him.

The d8 System de-emphasizes “ability scores”, so it might seem odd that my core set for fantasy has so many (16) Aptitudes; but this is part of the de-emphasis. To start, I broke up the “master stats”. Dexterity/DEX I broke into Agility, Manual Dexterity, and Physical Aptitude— all of which are different (though correlated) talents. Intelligence/INT I broke up into Logical Aptitude, Creative Aptitude, and Perception. The d8 System, by having fewer levels, limits its vertical specificity in favor of horizontal specificity. “Intelligence 4” could mean a lot of things; on the other hand, if someone has “Logical Aptitude 5, Creative Aptitude 1”, I understand that he’s deductively brilliant but mediocre in (and likely uninterested in) the arts.

Notice also that I’ve separated Magic Power and Magical Aptitude from the intelligences. I rather like the idea of a super-powerful, stupid wizard.

If any of the Aptitudes in my fantasy core set are misnamed, Creative Aptitude is the one, because it also includes spatial aptitude and was originally named thus. The “left brain, right brain” model is for the most part outdated, but it gives us a rather balanced split of what would otherwise be a master stat, Intelligence. Is it entirely accurate to lump spatial and creative intelligence together? Probably not; but this set does so because, combined, they are in approximate power balance with Logical Aptitude.

Building Characters

Core d8 doesn’t tell GMs how characters should be made. Aptitudes and initial Skills can be “rolled”, but with experienced players I think a point-buy system is better. Players should start with “big picture” conceptions of who their characters are, their back stories, and their qualitative advantages and disabilities.

Players should, in general, get k*N points to disperse among their PCs’ Aptitudes, where N is the number of Aptitudes that exist and k is between 0.5 (regular folk) and 1.5 (entry level but legendary). GMs can decide whether various qualitative, subjective traits cost Aptitude points, or (if disadvantageous) confer them.

The baseline value of most Aptitudes is 1, from which the point-buy costs are:

  • ¼: -2 points (GM discretion).
  • ½: -1 point
  • 1: 0 points
  • 2: 1 point
  • 3: 3 points
  • 4: 5 points
  • 5: 8 points (GM discretion).
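The bookkeeping can be sketched in a few lines (this assumes a baseline of 1 for every Aptitude; Aptitudes with a different baseline, such as magic in a mostly non-magical world, would need adjusted costs, and the example character is hypothetical):

```python
from fractions import Fraction

# point cost of each Aptitude value, relative to the baseline of 1
APTITUDE_COST = {
    Fraction(1, 4): -2,  # GM discretion
    Fraction(1, 2): -1,
    1: 0,
    2: 1,
    3: 3,
    4: 5,
    5: 8,                # GM discretion
}

def sheet_cost(aptitudes):
    """Total point-buy cost of an Aptitude sheet (dict of name -> value)."""
    return sum(APTITUDE_COST[Fraction(v)] for v in aptitudes.values())

# a hypothetical entry-level spread
print(sheet_cost({"Strength": 2, "Agility": 1, "Perception": 3, "Charm": 0.5}))  # 1 + 0 + 3 - 1 = 3
```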

Perhaps I risk offense by saying this, but men and women are different: men have more upper body strength and women are more attractive (and are perceived as such, even by infants). So, I’m inclined to give male characters +1 Strength (baseline 3) and women +1 Appearance (baseline 2). This doesn’t prevent players from “selling back” that point for something else. A player can create a Strength ¼, Appearance 4 male character; a player can also make a Strength 4, Appearance 1 female (e.g., Brienne of Tarth in Game of Thrones). It does make it cheaper to have a Strength 3+ male or Appearance 3+ female character. If you’re a GM and you find these modifiers sexist, throw them out. It’s your world.

If I were to build an Aptitude sheet for Farisa, protagonist of Farisa’s Crossing, it would look like this:

  • Strength: 1 — average female.
  • Agility: 1 — average, untrained.
  • Manual Dexterity: ½ — clumsy.
  • Physical Aptitude: ½ — same.
  • Stamina: 2 — able to walk/run long distances.
  • Logical Aptitude: 4 — the smartest person she knows excl. Katarin and [SPOILER].
  • Creative Aptitude: 3 — Raqel has more artistic talent; so does [SPOILER].
  • Will Power: 3 — determination necessary to survive Marquessa.
  • Perception: 2 — except [SPOILER] may make a case for 3.
  • Charm: 2 — quirky, nerdy; able to use her atypicality to advantage sometimes.
  • Leadership: 2 — teacher at prestigious school.
  • Appearance: 3 — above-average female.
  • Social Aptitude: ½ — Aspie and probably bipolar (Marquessa).
  • Magic Power: 4 — very strong mage by [SPOILER] standard.
  • Magic Resistance: 1 — iffy b/c mages are weaker to most magic in this world.
  • Magical Aptitude: 4 — [SPOILER] and [SPOILER] and then [SPOILER].

Here, k turns out to be 1.75 (+28); she’s in a world where most people have no magic and the baseline for Magic Power and Magic Aptitude is 0— so those cost her 8 points each. Stats-wise, she’s overpowered. I would argue that this is “paid off” by her various disadvantages. She has the horrible illness that afflicts all mages— the Blue Marquessa. She’s a woman attracted to women in a puritanical (1895 North America–based) society. She’s visibly different from the people around her. Her rigid morality (neutral/chaotic good) gets her in trouble, and so does her good nature (her theory-of-mind inadequately models malevolence, leading to [SPOILER]). Finally, there’s the bounty put on her head by trillionaire Hampus Bell, Patriarch (full title: Chief Patriarch and Seraph of All Human Capital) at the Global Company. She probably needs that +28 to survive.

Aptitudes need to be selected before primary Skills are bought, as the Aptitudes will influence how much it costs to learn Skills.

By default, I would give players k*sqrt(N) points where N is the number of primary Skills that exist and k is… around 5. The going assumption is that entry-level characters (regardless of chronological age, for balance) have about five years of adventure-relevant experience, which gives them enough time to grab a few 1’s and 2’s, and maybe a 3 or 4 if talented. If you’re building a mentor NPC, you might use k = 20 or 30.

The point-buy cost of raising a Skill one level depends both on the Skill’s Complexity and the character’s level of the Aptitude it’s keyed on, as follows:

  • a base of 1 point for Easy skills; 2 for Average; 4, Hard; 8, Very Hard; 16, Extremely Hard; times:
  • 1 (per level) for each level up to A, the rating in the relevant Aptitude; 3 from A to A+1; 5, to A+2; 10, to A+3; 20, to A+4. (Here, treat A as 0 if A < 1.)

Let’s say, for example, that Espionage is Hard and keyed on Social Aptitude. Then a character with Social Aptitude 3 will pay 4 points each for the first three levels; if he wants Espionage 4, he’ll have to pay an additional 12 (total: 24). If he wants Espionage 5, he’ll have to pay 20 more (total: 44).
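Combined, the two rules above give a simple cost function (a sketch, names mine); it reproduces the Espionage numbers: 12 for the first three levels, 24 through level 4, 44 through level 5.

```python
# base cost per level, by Complexity
BASE = {"Easy": 1, "Average": 2, "Hard": 4, "Very Hard": 8, "Extremely Hard": 16}
# multiplier for each level past the keyed Aptitude A (A->A+1 through A+3->A+4)
PAST_A = {1: 3, 2: 5, 3: 10, 4: 20}

def skill_cost(complexity, aptitude, target):
    """Total point-buy cost to raise a Skill from 0 to `target`."""
    a = aptitude if aptitude >= 1 else 0  # treat fractional Aptitudes as 0
    total = 0
    for level in range(1, target + 1):
        mult = 1 if level <= a else PAST_A[level - a]
        total += BASE[complexity] * mult
    return total

# Espionage: Hard, keyed on Social Aptitude 3
print(skill_cost("Hard", 3, 3), skill_cost("Hard", 3, 4), skill_cost("Hard", 3, 5))
```

The rules as written stop pricing at A+4, so the sketch deliberately raises an error beyond that; a GM would have to extend the multiplier table.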

Applying Skills

Any time it is uncertain (per GM) what’s going to happen, dice are rolled. Often, this is because a player wants his character to do something where there’s a nontrivial (1% or greater) chance of failure. RPGs call this a check or trial, and both terms are used interchangeably.

Active trials occur when the PC attempts something and succeeds or fails. There are also passive trials, where the GM needs to know if the PC made a good impression on an important NPC (Charm or Appearance check), resisted temptation (Will check), or became aware of something unusual in the environment (Perception check). Passive trials will almost always be covert checks (described below).

Unopposed trials (also called “tasks”) are those in which the PC is the sole or main participant. Attempting to jump from one rooftop to the next is an unopposed trial; so is playing a musical instrument (although NPCs may differ in their appreciation of the PC’s doing so). There are two kinds of unopposed trials: binary checks and performance checks.

Binary Trials

The GM must decide which Skill is being checked. This will typically be the most specialized Skill that exists in the game. For example, if the task is surgery and Surgery is a specialty of Medicine, the roll will be performed using a character’s Surgery rating. A character who does not have that Skill at all (“Surgery 0”) cannot do it; otherwise, the number of dice (dP, or Poisson dice) equals the player’s rating. In cases where no Skill applies at all— say, “situation rolls” like weather that are (usually) out of the characters’ control— two dice (2dP) are used.

A binary check is made against a Difficulty rating, which is always a nonnegative integer. Difficulty 0 (“trivial”) means the character succeeds, without rolling, as long as there is some investment in the Skill; the only way a PC could fail is if he were hit by an Amnesia spell (or equivalent) in the middle of the action. Difficulty 1 (“simple”) means there is some chance of failure: for example, recognizing a fairly common word of a foreign language. Difficulty 2 (“moderate”) is something like jumping from a second-story window; Difficulty 3 (“challenging”) would be something like cooking for twelve people, all with different dietary requirements, under strict time pressure. Difficulty 4 (“very challenging”) would be driving an unfamiliar race car on an unknown track at competitive speeds. There’s no limit to how high Difficulty can go: at 12, even a maxed-out character (Skill 8) can expect to fail 87% of the time.
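Treating NdP as approximately Poisson(N), the failure chance of a binary check is just a Poisson CDF. A sketch (names mine; the dP’s slightly fatter zero means the true odds differ a little):

```python
from math import exp

def poisson_cdf(lam, k):
    """P(X <= k) for X ~ Poisson(lam)."""
    term = exp(-lam)
    total = term
    for i in range(1, k + 1):
        term *= lam / i
        total += term
    return total

def fail_chance(skill, difficulty):
    """Chance a binary check (skill dice vs. Difficulty target) fails,
    under the Poisson approximation of NdP: fail if the result < difficulty."""
    return poisson_cdf(skill, difficulty - 1)

print(round(fail_chance(8, 12), 2))  # close to the ~87% quoted above
```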

What does failure mean? Before the dice are rolled, the GM should decide, and the player should understand:

  • how long the action takes. In turn-based combat this could be a turn (seconds of in-world time). For research or invention, this could be two weeks of in-world time.
  • what resources are required, and whether they are consumed on failure.
  • other consequences of failure, which can range from nothing (the player can try again) to devastating (failing to jump safely from a moving train).
  • whether the player knows if he succeeded or failed at all. This will be discussed below.

For example, PC Sarah has Climbing 4, but she’s climbing cautiously and has top-of-the-line equipment. The GM judges that climbing a nearly vertical rock face, one that has stymied expert climbers, is Difficulty 5. The GM determines that an attempt will take an hour and 550 kCal of food energy. Since her skill is 4, Sarah (well, Sarah’s player) rolls 4dP: {1, 0, 0, 1} for a total of 2, which falls short of the 5 necessary to make the climb. Since she took safety precautions, this failure costs her nothing but the time and food. She can try again.

The Three-Attempt Rule, or 3AR, is the general principle that after 3 consecutive failures, the player typically must have his character do something else. Characters are not min-maxers, and they don’t fully know how easy or difficult an objective is. They get discouraged. If the player has the character come back after a night of sleep, or a month later with more (s|S)kill, or go about the problem in a distinctly different way, this isn’t a violation of the 3AR. Last of all, the 3AR never applies when there’s a strong reason for the character to persist— a life-or-death situation (such as combat), a religious vow, or a revenge motive. A PC in battle who swings and misses needs no excuse to keep swinging; the 3AR is there to block unimaginative brute force and while-looping. The player can’t say “I continue to attempt [Difficulty 9 feat] until I succeed.” (There is no “I take 20.”)

Most binary trials are pass/fail. The degree by which the roll fell short of, or exceeded, the Difficulty target isn’t considered— any effects of the failure or success are determined separately. That is, the dice rolls represent the character’s performance (how good he is at opening the chest), not raw luck (what, if successful, he finds inside it). If GMs prefer to combine the two for speedier gameplay, that is up to them.

The d8 System has no concept of “critical failure” or “botch”— mechanics I consider poor design. Failure can of course have negative consequences (making a loud noise and waking someone during Burglary), but the player shouldn’t be punished by rules saying that, on a sufficiently low roll, bad things must happen and therefore the GM must make something up.

When it comes to binary trials, we say the player (or the PC) is in advantage if the relevant Skill meets or exceeds the Difficulty (adjusted for modifiers); he is out of advantage if the relevant Skill rating is less than the Difficulty. Since the median of NdP is N (for all values we care about), a roll in-advantage will succeed more than 50 percent of the time; a roll out-of-advantage will succeed less than half the time.

Performance Trials

In a binary trial, one succeeds or one doesn’t. Performance trials have degrees of success. For a character with Hunting 3, the player rolls 3dP, but while a 2 might suffice to bring home a rabbit, a 4 gets a deer, a 6 gets a bison, and a 15 might result in meeting a dragon who befriends the party.

General guidelines for performance interpretation are below.

  • 0: a bad performance (2dP: bottom 15%). No evidence of skill is shown at all.
  • 1: a poor performance (2dP: bottom 40%). Some success, but it’s an amateurish showing. The work may be dodgy; reception will be mediocre.
  • 2: a fair performance (2dP: average). The character demonstrates skill appropriate to his class or profession— not especially good, but not bad.
  • 3: a good performance (2dP: top 35%). The performance is significantly above the expectation of a competent practitioner.
  • 4: a very good performance (2dP: top 15%, 4dP: average). Rewards for exceptional performance accrue. A singer might earn three times the usual amount of tips.
  • 5: a great performance (2dP: top 6%, 4dP: top 40%). Like the above, but more. Instant recognition is likely; this is approaching world-class.
  • 6–7: an excellent(+) performance (2dP: top 2%, 4dP: top 25%). This is the kind of performance that, if reliably repeated, will lead to renown and fame.
  • 8–9: an incredible(+) performance (2dP: top 0.1%; 4dP: top 6%). The character has done so well, some people are convinced that he’s a genius, or that he has magical powers, or that he’s cheating.
  • 10–11: a heroic(+) performance (2dP: 1-in-50,000; 4dP: top 1%).
  • 12+: a legendary performance (2dP: 1-in-2,000,000; 4dP: top 0.1%).

Of course, GMs are at liberty to interpret these results as they wish, and these qualitative judgments are contextual. A musician who gives a 3/good performance at a world-famous orchestra will mildly disappoint the audience; one who gives 2/fair will likely be booed. Usually, the results (and whether they benefit or disadvantage the character) are correlated to the outcome rolled; but, if a player games the system with modifier min-maxing and rules lawyering, and somehow produces a 15/legendary+++ result whilst Singing, the GM is allowed to have him run out of town as a witch.

Degrees of Transparency

In general, players roll the dice themselves and know how they did. The d8 System deliberately keeps increments “big” so they correspond to noticeable degrees of quality.

Should the GM tell players the precise Difficulty ratings of what their PCs are doing? The d8 System doesn’t call that shot. A GM can say, “It’s Difficulty 5,” or she can say, “It looks like a more experienced climber would have no problem, but it leaves you feeling uneasy.” That’s up to her taste. As a general rule, characters have no problem “seeing” Difficulties and performances up to 2 levels beyond their own, maybe more. A PC with Climbing 4 knows the difference between challenging-but-likely (4), a-stretch (5), out-of-range-but-possible (6), and very-unlikely (7+).

There are cases, though, when players shouldn’t know the Difficulty level. Perhaps there’s an environmental factor they haven’t perceived. Sometimes, but more rarely, they shouldn’t even know how well they performed. There’s a spectrum of transparency applicable to these trials, like so:

  • Standard Binary: The GM tells the player the Difficulty of the action. Whether a numerical rating or qualitative description (“It looks like someone at your level of skill can do it”) is used, d8 doesn’t specify.
  • Concealed Difficulty: concealed environmental factors make the GM unable to indicate a precise Difficulty level (or induce her to lie about it— though she and her players should reach agreement on what the GM can and cannot lie about). The player rolls the dice and the GM reveals whether success or failure was achieved, but not the margin. The player’s experience is comparable to that of a performance, rather than binary, trial.
  • Concealed Outcome: Appropriate to social skills. The player rolls the dice and has a general sense of how the PC performed, but not whether success was achieved. With information skills (e.g. Espionage) GMs are, absolutely, without equivocation, allowed to lie— the player may be deceived into thinking he succeeded, and fed false information, if the PC was in fact unsuccessful.
  • Concealed Performance: the player knows that a Skill is being checked, and that’s it. The GM may ask questions about how the player intends to do it— to decide which Skill applies if there’s more than one candidate, and possibly to apply modifiers if the player comes up with an ingenious (or an obnoxious and stupid) approach. The GM rolls. She may give a qualitative indicator (e.g., “You feel you could have done better”) to the player, or she may not.
  • Covert: the player is unaware that the roll ever took place. (This is what those pre-printed sheets of randbetween(1,8) are for.) The PC’s in a haunted house but the player has no idea that he just failed a Detect Spirits check, or that he failed a check of any kind, or that anything was even checked.


As written so far, the d8 System still has the “Fair Cook Problem”. Someone with Cooking 2 is going to fail at cooking (roll 0 from 2dP) 14% of the time, or one dinner per week. This is unrealistic; someone who’s 86% reliable at a mundane, low-tension activity simply isn’t a professional-level cook. Of course, if he’s subject to the high-pressure environment of a show like Top Chef, that probability of failure becomes more reasonable….

The d8 System resolves this issue with the Tension mechanic. Routine tasks occur at Low Tension. Social stakes and non-immediate consequences suggest Medium Tension. Death stakes imply High Tension; combat is always High Tension, as are Skill checks in hazardous environments. Singing in front of friends or for practice is Low Tension; doing it for a living is Medium Tension; singing for a deranged king who’ll kill the whole party if he doesn’t like what he hears is High Tension.

Low and Medium Tension, for binary and performance trials, allow the player to “take points”, by which I mean “take 1” for each die (a dP’s mean value is just slightly above 1). At Low Tension, the player can “take 1” for all his dice if he wishes. Medium Tension allows him to take up to half, rounded down. So, a player with Skill 3 has the following options at each Tension level:

  • Low: 3dP, 2dP+1, dP+2, 3.
  • Medium: 3dP, 2dP+1.
  • High: 3dP only.
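For readers who like to see the rule mechanically, here is a sketch; the function name and the lowercase tension labels are mine, not part of the system:

```python
def take_point_options(skill, tension):
    """Enumerate (dice rolled, points taken) choices for a Skill.
    At Low Tension any die may be traded for a flat 1 point; at
    Medium, up to half the dice (rounded down); at High, none."""
    max_taken = {"low": skill, "medium": skill // 2, "high": 0}[tension]
    return [(skill - taken, taken) for taken in range(max_taken + 1)]
```

For a Skill of 3, this enumerates exactly the options listed above: 3dP, 2dP+1, dP+2, or a flat 3 at Low Tension; only the first two at Medium; only 3dP at High.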

For strictly binary trials, the PC does best to take as many points as possible when in-advantage, and to roll all dice (as he would have to do at High Tension) when out-of-advantage. Thus, when the Difficulty exceeds the PC’s Skill level, the Tension level becomes irrelevant.

If success is guaranteed, there’s no reason to do the roll. If someone has Cooking 2 in a Medium Tension setting, but he’s only trying to broil some ground beef (Difficulty 1) there is no need to roll for that— dP+1 against 1 will always succeed.

Most situations occur at Medium Tension; it jumps to High in combat, or wherever a time-sensitive threat to life is possible. The regular stresses of unknown settings, meeting new people, and enduring the daily discomforts of long camping trips make the case for Medium. Low Tension is mostly used for familiar settings, downtime, and practice (skill improvement).

On performance rolls and concealed-difficulty rolls, the player may not know whether it’s better to take dice or points— it’s up to the GM whether to reveal what’s in the player’s interests (if it’s clear cut). For covert trials, the GM should typically make these decisions in the player’s interests— taking points when in-advantage and dice when out-of-advantage— unless circumstances strongly suggest otherwise.

Gate-Keeping, Compound Trials, and Stability

Tension handles the “Fair Cook Problem”. We get the variability we expect from high-tension situations (combat) but we don’t have an unrealistically high probability of competent people failing, just because the dice said so. A Fair (2) cook will be able to produce Fair (2) results 100% of the time at Low Tension, 62.5% of the time at Medium Tension (dP+1), and 58% of the time at High Tension (2dP).
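Those percentages can be checked exactly. The sketch below assumes the dP mapping given earlier in this piece (a d8 read as three 0s, three 1s, a 2, and a chaining 3, where each further 8 adds a point); the function names are mine.

```python
from fractions import Fraction

def dp_pmf(max_k=24):
    """Truncated exact pmf of one Poisson die (dP): P(0) = P(1) = 3/8,
    P(2) = 1/8, and P(3 + j) = (1/8) * (1/8)**j * (7/8) via chaining."""
    pmf = [Fraction(3, 8), Fraction(3, 8), Fraction(1, 8)]
    for j in range(max_k - 2):
        pmf.append(Fraction(1, 8) * Fraction(1, 8) ** j * Fraction(7, 8))
    return pmf

def ndp_pmf(n, max_k=24):
    """pmf of the sum of n dP (the negligible tail past max_k is dropped)."""
    total = [Fraction(1)] + [Fraction(0)] * max_k
    one = dp_pmf(max_k)
    for _ in range(n):
        nxt = [Fraction(0)] * (max_k + 1)
        for i, pi in enumerate(total):
            if pi:
                for j, pj in enumerate(one):
                    if i + j <= max_k:
                        nxt[i + j] += pi * pj
        total = nxt
    return total

def p_at_least(n_dice, target):
    """Chance that n dP sum to at least `target`."""
    return float(1 - sum(ndp_pmf(n_dice)[:target]))
```

`p_at_least(1, 1)` gives the Medium-Tension 62.5% (dP+1 against 2 succeeds whenever the die shows at least 1), `p_at_least(2, 2)` the High-Tension 57.8%, and `ndp_pmf(2)[0]` the 14% (9/64) failure rate.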

This doesn’t handle everything player characters might want to do. For many performances and tasks, the variance offered by Poisson dice is too high. Let’s use weightlifting as an example. We might decide that each level of Strength (or Weightlifting skill) counts for 150 pounds (68 kg) of deadlifting capacity. A character with Strength 1 can deadlift 150 pounds; with Strength 2, 300. The 2dP distribution suggests that a Strength 2 (300-pound) deadlifter has a 34% chance of a Strength 3 feat: lifting 450 pounds. I can tell you: that’s not true. It’s about zero. Raw muscular strength doesn’t have a lot of variability— people don’t jump 150 (let alone 300) pounds in their 1RM because the dice said so.

When it comes to pure lifting strength, I think GM fiat is appropriate— the PC can lift it, or can’t. Skill and day-to-day performance are factors, but raw physical capacity dominates the result. If GMs and PCs want to know, down to ten-pound increments, how much a character can deadlift, they’re probably going to need something finer-grained than five or six levels. I’m not a stickler for physical precision, though, so I’m fine with a module saying “Strength 3 required” rather than “400 pounds”.

Running speed also doesn’t have a whole lot of variability. A 4-hour marathoner (level 2) is not going to run a 2:20 (level 7) marathon, ever— but 2dP spits out 7 about once in 250 trials. This is a case where GMs can say, “Uh, no.” As in, “Uh, no; I’m not going to roll the dice to ‘see if’ your Agility 2 character wins the Boston Marathon.”

Whether this is a problem in role-playing games is debatable. Athletic events measure people’s raw physical capabilities, and there just isn’t a lot of variability, because these events are designed to measure what the athletes can do at their best, and therefore remove intrusions. Role-playing environments are typically chaotic and full of intrusions. This, I suppose, allows for a certain “fudge factor”; the higher variability of an Agility check makes it appropriate to running through a forest while being chased, but not competitive distance running, at which an Agility 2 runner will never defeat one with Agility 4.

What about cases where we want some performance variability, but not as much as ndP gives us? Shakespeare, one presumes, had Playwriting 8; a performance of 8 is also 94th-percentile from a playwright with Playwriting 4. Does that mean 6% of his efforts are Shakespearean? Well, it’s hard to say. (Not all of Shakespeare’s efforts were Shakespearean, but that’s another topic. Titus Andronicus gets points for a South Park homage, but Othello it ain’t.) It’s quite subjective, but for GMs who find it hard to believe that someone with Playwriting 4 can kick out a Shakespeare-level (8+) script once in 16 efforts, Stability is a mechanic that, well, stabilizes performance trials. Stability N means that the roll will be done 2*N + 1 times, and the median selected.
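A quick seeded simulation shows how hard the median clamps that tail. It assumes the dP mapping from earlier in the piece (a d8 read as three 0s, three 1s, a 2, and a chaining 3 where each further 8 adds a point), so the exact percentages depend on that assumption:

```python
import random

def roll_dp(rng):
    """One Poisson die: d8 mapped {1-3: 0, 4-6: 1, 7: 2, 8: 3};
    an 8 chains, each further 8 adding one more point."""
    face = rng.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    total = 3
    while rng.randint(1, 8) == 8:
        total += 1
    return total

def performance(skill, stability, rng):
    """Stability N: make 2N+1 rolls of the full dice pool and keep
    the median (N=0 is an ordinary single roll)."""
    rolls = sorted(sum(roll_dp(rng) for _ in range(skill))
                   for _ in range(2 * stability + 1))
    return rolls[stability]

rng = random.Random(4)
TRIALS = 100_000
# Playwriting 4: how often is the result Shakespeare-level (8+)?
p_raw = sum(performance(4, 0, rng) >= 8 for _ in range(TRIALS)) / TRIALS
p_stab = sum(performance(4, 1, rng) >= 8 for _ in range(TRIALS)) / TRIALS
```

Without Stability, the 8+ rate comes out around 6% (the once-in-16 figure); with Stability 1 it drops to roughly 1%.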

Stability is a heavyweight mechanic, appropriate to long, skill-intensive efforts like composing music, writing a book, or playing a game of chess. You don’t want to use it for quick actions, such as in combat, and you typically wouldn’t use it for binary trials where large deviations may not matter more than small ones. For example, if a musician is putting together an album, one way to simulate this is to have a Composition check for each song on the album— another is just to use Stability 1.

For a case study of the mechanic’s simulative effectiveness, let’s consider chess, where we actually have some statistical insight into how likely a player is to beat someone of superior skill. Chess 4 corresponds, roughly, to an Elo rating of 2000— well into the top 1 percent, but not quite world class. Chess 8 corresponds to about 2800, which has only been achieved by a few people in history (Magnus Carlsen, the best active right now, has 2862). Elo ratings are based purely on results, and a 10-fold odds advantage is defined as 400 points— near parity, each Elo point shifts the expected score by roughly 0.14%.

So, we can test the d8 System, without Stability, for how well it models this. A Chess 4 player should have a 1% chance (counting draws at half) of beating a Chess 8 player; but how often does 4dP actually win or draw against 8dP? At High Tension, about 16% of the time. That’s far too high for chess, a board game where there’s really no luck factor. We have to adjust our model. Well, first of all, we drop the Tension to Medium. No one’s going to die— although the superior player might be embarrassed by losing to someone 800 points below her. Then, we use Stability 1. Finally, we model it as an attacker/defender opposed action, which means that if there’s a tie in the performance score, it goes to the defender— whom we decide to be the more skilled player. Then, we can expect the Chess 4 player to win only 1.331% of the time (95% CI: 1.264% – 1.398%) against the one with Chess 8. That’s a 748-point Elo difference as opposed to 800. Is it perfect? No— among other things, it ignores that high-level chess games actually do often end in draws— but it’s close enough for role-playing.
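For the curious, here is one way to set the experiment up in code. The paragraph above leaves a couple of details open, so this is a plausible reading rather than a reproduction: I assume the dP mapping from earlier in the piece, and I assume that at Medium Tension the weaker player keeps all his dice (he needs the variance) while the stronger takes half her dice as flat points (she wants stability). Expect the same ballpark, not 1.331% exactly.

```python
import random

def roll_dp(rng):
    """One Poisson die: d8 mapped {1-3: 0, 4-6: 1, 7: 2, 8: 3};
    an 8 chains, each further 8 adding one more point."""
    face = rng.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    total = 3
    while rng.randint(1, 8) == 8:
        total += 1
    return total

def perf(dice, points, stability, rng):
    """Performance score: flat points taken, plus the dice total;
    Stability N keeps the median of 2N+1 such rolls."""
    rolls = sorted(points + sum(roll_dp(rng) for _ in range(dice))
                   for _ in range(2 * stability + 1))
    return rolls[stability]

rng = random.Random(2000)
TRIALS = 100_000

# High Tension, no Stability: Chess 4 matches or beats Chess 8.
high = sum(perf(4, 0, 0, rng) >= perf(8, 0, 0, rng)
           for _ in range(TRIALS)) / TRIALS

# Medium Tension, Stability 1, ties to the stronger (defending)
# player: the challenger rolls 4dP; the defender takes 4dP + 4.
medium = sum(perf(4, 0, 1, rng) > perf(4, 4, 1, rng)
             for _ in range(TRIALS)) / TRIALS
```

Under this reading, `high` lands around 16% and `medium` in the low single digits of a percent, consistent with the numbers quoted above.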

Stability is one way to “gate keep” the highest levels of performance. Another way, for complex endeavors, is to use compound trials that pull on multiple Skills. A PC who wants to pen a literary classic might face a compound difficulty of [Writing/Fiction 4, Characterization 4, Knowledge/Literary 3]. A PC who wants to write a bestseller would face [Writing/Fiction 1, Marketing 4, Luck 5]. It’s up to GMs how to interpret mixed successes. For harder trials, GMs should require that all succeed; for easier ones, they can allow a mixed success to confer some positive result. (“Your book sells well, but the critics pan your poor writing, and your ex thinks the book is about her.”)


Modifiers

Situational factors— low light, being distracted, assistance from someone else, being physically attractive— can make a trial easier or harder than it would otherwise be. Because the d8 System deliberately makes the steps between consecutive Skill, Difficulty, and performance levels fairly large— they’re supposed to represent distinctions the characters would actually notice— modifiers shouldn’t be used lightly. Nuisance factors that would “deserve” small modifiers in finer-grained systems should probably, in d8, be ignored unless they, in bulk, comprise a substantial impediment.

There are two types of modifiers: point and level modifiers. In Skill substitution, we see level modifiers. If a PC has Archery 3, and the Crossbow Skill neighbors it with Easy (-1) relative Complexity, then the PC implicitly has Crossbow 2. He rolls 2dP instead of 3dP— not everything he knows about Archery is applicable, but some of it is.

Most situational modifiers, on the other hand, will be point modifiers applied to the result of the dice, after they are rolled. At twilight, the poor light might make it 1 point harder to hit a target: Difficulty 4, in effect, instead of Difficulty 3. However, we call this modified roll “3dP – 1”, rolled against 3; rather than 3dP against 3 + 1 = 4. Why? Because it would be confusing if a “+1” modifier made the character’s life harder and a “-1” modifier made it easier.

Which type of modifier should a GM use? Situational modifiers should almost always be point modifiers, because while they can make a task easier or harder to pull off, they don’t really affect the skill level of the character. Skill substitution, on the other hand, is the case where the negative level modifier is appropriate— the PC with High Dwarven 3 “knows three ways to do things” (per abstraction) in High Dwarven, but only “two of those things” transfer over to Low Dwarven.

Severe illness might justify a negative level modifier; regular situational factors don’t. As for positive level modifiers, I think those only make sense under magical or supernatural influence. It’s conceivable that a random schlub could get “+3L Fighting” in The Matrix, but most real-world tactical advantages don’t actually increase performance— they merely reduce difficulty.

If negative point modifiers reduce a character’s performance below 0, it’s treated as zero. Likewise, if positive point modifiers reduce the effective Difficulty of a purely binary roll to 0— the player is now rolling 3dP+2 against 2— then there is no need to perform the roll at all.

If level modifiers reduce a Skill below 1, the fractional levels ½ and ¼ are used before going to 0. If Chainsaw Juggling is Hard (-3) relative to Juggling but fungible, a character with Juggling 4 implicitly has Chainsaw Juggling 1; if Flaming Chainsaw Juggling is Medium (-2) relative to regular Chainsaw Juggling, then this character has “Flaming Chainsaw Juggling ¼”.

When level modifiers come from Skill substitutions, the step after ¼ is 0— the Skill can’t be faked (as if it were nonfungible). When the debuffs come from other sources (sleeplessness, ergotism, PC madly in love with a statue) they cease having additional negative effect at ¼, which is as low as a Skill or Aptitude can get.

To roll at sub-unity levels, use the following modified dice; the “chaining” is the same as for a dP.

  • ½dP : {0, 0, 0, 0, 0, 1, 1, 2} / on 2, chain.
  • ¼dP : {0, 0, 0, 0, 0, 0, 1, 1*} / on 1*, chain.

Thus, the probabilities of hitting various difficulty targets are:

  • Skill 1: {1 : 5/8, 2: 2/8, 3: 1/8, 4: 1/64 … }
  • Skill ½: {1: 3/8, 2: 1/8, 3: 1/64, 4: 1/512 … }
  • Skill ¼: {1: 2/8, 2: 1/64, 3: 1/512, 4: 1/4,096 … }
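That table can be reproduced exactly; treating each chain step as a 1-in-8 chance to add one point is my reading of “the chaining is the same as for a dP”:

```python
from fractions import Fraction

def sub_unity_hit_prob(level, target):
    """Exact chance that a sub-unity die meets Difficulty `target`.
    1/2 dP: faces {0 x5, 1, 1, 2}, chaining on the 2;
    1/4 dP: faces {0 x6, 1, 1*}, chaining on the 1*."""
    if target <= 0:
        return Fraction(1)
    if level == Fraction(1, 2):
        if target == 1:
            return Fraction(3, 8)              # the faces 1, 1, 2
        return Fraction(1, 8) ** (target - 1)  # the 2, then chains
    if level == Fraction(1, 4):
        if target == 1:
            return Fraction(2, 8)              # the faces 1, 1*
        # the 1* face, then target - 1 chain successes
        return Fraction(1, 8) * Fraction(1, 8) ** (target - 1)
    raise ValueError("level must be Fraction(1, 2) or Fraction(1, 4)")
```

It returns 3/8, 1/8, 1/64, 1/512… for ½dP and 2/8, 1/64, 1/512, 1/4,096… for ¼dP, matching the lists above.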

Skill Improvements

Skills improve in two ways. One is through practice, which typically occurs in the downtime between adventures— although heavy use of the Skill during the adventure should also count for a few practice points (PP). Practice happens “off camera”, for the most part, between gaming sessions when the characters are presumably attending to daily life and building skills rather than scouring dungeons.

A Practice Point (PP) represents the amount of in-world time it takes to reach level 1 of an Easy primary Skill. It might be 200 hours (4 weeks, full time); it might be 650— it depends on how fine- or coarse-grained the Skills are (and, also, how fast the GM wants the players to grow). Each level of Complexity doubles the cost: 2 PP for Average, 4 PP for Hard, 8 PP for Very Hard. Furthermore, this isn’t the cost to gain a level, but the cost of an attempt to learn that level, using the relevant Aptitude. In other words, if Painting is keyed on Creative Aptitude, then to reach Painting 4 is a Creative Aptitude check. Practice almost always occurs at Low Tension, so PCs typically have a 100% chance of success for each level up to the level of that Aptitude.

In the example above, let’s say a PC has Creative Aptitude 3, and Painting is Average in Complexity. Then it takes 2 PP to get Painting 1, 2 more for Painting 2, and 2 more for Painting 3, for a total of 6 PP. Dice never have to be rolled because of the Low Tension— the PC will always succeed up to level 3. To get Painting 4, however, the player spends 2 PP only to get an attempt of 3dP against 4 (37% chance). On average, it’s going to cost 5.4 PP to get Painting 4.
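That 5.4 is just the geometric-trials expectation: each attempt costs the base PP and succeeds independently, so the expected total is the base cost divided by the success probability. A one-line check, with the 37% from above:

```python
def expected_practice_cost(base_pp, p_success):
    """Attempts are independent, each costing base_pp and succeeding
    with probability p_success; the number of attempts is geometric,
    so the expected total cost is base_pp / p_success."""
    return base_pp / p_success

# Painting 4 with Creative Aptitude 3: 2 PP per attempt, ~37% each.
painting_4 = expected_practice_cost(2, 0.37)  # ~5.4 PP on average
```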

Two kinds of modifiers can apply to practice. One is instruction: a skilled teacher (who has attained the desired level) can justify a +1 point modifier, and an exceedingly capable (and probably very expensive) mentor can bring +2. The other is practice “in bulk”, which allows a PC who has a contiguous block of downtime to spend a multiple of the base cost to gain a point modifier: 3x for +1, 5x for +2, 10x for +3, 20x for +4. In the case above, the PC could spend 2 PP for a 37% chance of getting Painting 4, or spend 6 PP to lock it in— even though his Creative Aptitude is only 3. To get Painting 5 in this way will cost 10 PP.

It’s up to GMs how stringent they want to be on the ability of PCs to actually practice in the conditions they find themselves in. I would argue that if the PCs are in Florida over the summer, they probably can’t level up their Skiing. On the other hand, more lenient GMs might allow practice to be more fluid, like a traditional XP system.

Aptitudes improve in the same way, but are costed as Extremely Hard (16 PP base) and, since there is no “Aptitude for Aptitudes”, the default roll is 2dP. Raising an Aptitude from ½ to 1, or 1 to 2, costs 16 PP. From 2 to 3, it either costs 48 PP (“bulk”) or it costs 16 PP for a 34% shot (2dP against 3); from 3 to 4, it either costs 80 PP or 16 PP for a 17% shot (2dP against 4).
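All of these costs come from the same multiplier table (1x for a plain attempt; 3x, 5x, 10x, 20x for +1 through +4). The lookup below, with names of my own, ties the Painting and Aptitude examples together:

```python
# Bulk-practice multipliers: a plain attempt costs the base PP;
# a guaranteed +1/+2/+3/+4 point modifier costs 3x/5x/10x/20x.
BULK_MULTIPLIER = {0: 1, 1: 3, 2: 5, 3: 10, 4: 20}

def bulk_cost(base_pp, point_bonus):
    """PP cost of one practice attempt with a given point modifier."""
    return base_pp * BULK_MULTIPLIER[point_bonus]
```

An Aptitude (base 16 PP) going from 2 to 3 with a guaranteed +1 costs `bulk_cost(16, 1)` = 48 PP; from 3 to 4 with +2, `bulk_cost(16, 2)` = 80 PP; the earlier Painting example’s guaranteed level 4 is `bulk_cost(2, 1)` = 6 PP.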

When Skills (and especially Aptitudes) move beyond 4, the player should be able to convince the GM that his character actually is finding relevant practice during the downtime. A character isn’t going to improve Chess 6 to Chess 7 unless he’s actually going out and playing against upper-tier chess players.

Practice is never subject to the Three-Attempt Rule. The characters can spend as much of their off-camera time practicing as they want.

The other, more dramatic, way in which Skills can improve is through the Feat system. A Feat occurs for a successful trial of a Skill where:

  • the result was unexpected— for a binary trial, the PC was out-of-advantage; for a performance trial, the roll was at least 3 levels above the Skill level;
  • no level modifiers were applied to the PC at the time— that is, the character was not under the influence of some magical buff or debuff— and
  • most importantly, the success mattered from a plot perspective. A “critical” hit against an orc that was going to die anyway isn’t a Feat. To qualify, it has to be something that occurred under some tension, and that no one expected— the sort of thing that characters (and players) talk about for weeks.

When a Feat occurs, the character’s Skill goes up by 1 level for 24 hours (game time) or until the character has a chance to rest. At that point, a check of the relevant Aptitude (against a Difficulty of the new level) occurs. If successful, the Skill increment is permanent. If unsuccessful, the Skill reverts to its prior level, but the PC gets a bonus 3+dP practice points (PP) that apply either to the Skill or its Aptitude.

Multiplayer Actions

Opposed actions model conflict between two or more characters (PCs or NPCs, including monsters) in the game world. A character is swinging a sword; another one wants not to get hit. A valuable coin has fallen and two people grab for it. One person is lying; the other wants to know the truth. Live musicians compete for a cash prize. Usually, PCs compete against NPCs; sometimes PCs go at each other in friendly or not-so-friendly contests. Opposed actions, like single-character trials, come down to the dice.

One could, in a low-combat game, model fighting as a simple opposed action indexed on the skill, Combat. That would, of course, be unsatisfactory for an epic-fantasy campaign where combat is common and there is high demand for detail and player control. But combat system design is beyond the scope of what we’re going to do here— there is too much variety by style and genre.

Opposed actions nearly always occur at Medium or High Tension. Of course, they are subject to situational modifiers.

A simple opposed action is one where each contender rolls in the relevant Skill (typically, the same Skill) for performance: highest score wins. If it makes sense to break ties, roll again. Use the first roll as representative of performance— if both singers in a contest roll 5/great on the first roll, and the PC rolls 0 on the tie-breaker, he may not get first prize but he shouldn’t be booed. What the simple opposed action offers is symmetry: it doesn’t require that the GM differentiate attackers from defenders— performance scores are compared, and that’s it.

Passive defense is applicable when there is a defending party, who doesn’t really participate in the defense. Armor is typically modeled this way: a character having “Armor Class 4” might mean that to harm him has Difficulty 5. For the attacker to roll against passive defense is equivalent to a binary check.

Collaborative actions are additive. Most actions (e.g., climbing a wall, sneaking into a building without being caught) are single-person— but a few will allow collaboration: group spell casting, large engineering projects, efforts of team strength. Three PCs with Skill 2, 3, and 5 can roll 10dP against a Difficulty 9 collaborative task that any single one of them would be unlikely to pull off.
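A seeded simulation (again assuming the dP mapping from earlier: a d8 read as three 0s, three 1s, a 2, and a chaining 3) shows what pooling buys; the comparison against the strongest member going it alone is my own addition:

```python
import random

def roll_dp(rng):
    """One Poisson die: d8 mapped {1-3: 0, 4-6: 1, 7: 2, 8: 3};
    an 8 chains, each further 8 adding one more point."""
    face = rng.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    total = 3
    while rng.randint(1, 8) == 8:
        total += 1
    return total

rng = random.Random(9)
TRIALS = 100_000
# Skills 2 + 3 + 5 pooled: 10dP against Difficulty 9...
group = sum(sum(roll_dp(rng) for _ in range(10)) >= 9
            for _ in range(TRIALS)) / TRIALS
# ...versus the strongest member attempting it alone with 5dP.
solo = sum(sum(roll_dp(rng) for _ in range(5)) >= 9
           for _ in range(TRIALS)) / TRIALS
```

The pooled group clears Difficulty 9 most of the time; the Skill 5 character alone does so less than one time in ten.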

Serial actions are contests that continue until one character passes and the other fails. They start at a set Difficulty D; if both parties pass, roll again at Difficulty D+1; if both fail, roll again at Difficulty D-1 (not going below 1). Some amount of game time may pass between each trial— in a combat situation, this might be a turn (~5 seconds); in a more gradual environment (e.g. business competition), a month— which means that external factors may intervene before the contest concludes. The starting Difficulty will typically be halfway between the Skill levels of the parties, rounded down.
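The stopping rule sketches out like so (same dP assumption as elsewhere in this piece; the labels A and B are arbitrary):

```python
import random

def roll_dp(rng):
    """One Poisson die: d8 mapped {1-3: 0, 4-6: 1, 7: 2, 8: 3};
    an 8 chains, each further 8 adding one more point."""
    face = rng.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    total = 3
    while rng.randint(1, 8) == 8:
        total += 1
    return total

def serial_contest(skill_a, skill_b, rng):
    """Roll both sides at Difficulty D until exactly one passes.
    Both pass: D rises by 1; both fail: D falls by 1 (floor of 1).
    D starts halfway between the Skill levels, rounded down."""
    d = (skill_a + skill_b) // 2
    while True:
        a = sum(roll_dp(rng) for _ in range(skill_a)) >= d
        b = sum(roll_dp(rng) for _ in range(skill_b)) >= d
        if a != b:
            return "A" if a else "B"
        d = max(1, d + (1 if a else -1))

rng = random.Random(88)
CONTESTS = 20_000
b_share = sum(serial_contest(3, 5, rng) == "B"
              for _ in range(CONTESTS)) / CONTESTS
```

Skill 5 beats Skill 3 in roughly four contests out of five here, which feels about right for a sustained head-to-head effort: the extended format rewards the stronger party without making the outcome certain.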

Attacker/defender actions are the most complicated kind, and that’s because they index different Skills, which means that levels don’t necessarily compare. If Bob has Deception 2 and Mary has Detect Lies 3, then it should be harder for Bob to lie than for Mary to detect lies— and he should have an under-50% chance. On the other hand, if Mary has Social Aptitude 3 but no skill investment in lie detection itself, then Bob ought to have the advantage, because he’s bringing the more specialized Skill.

Ideally, any “offensive” Skill (one that can be “defended” against) has a complement at the same level of specialization and Complexity— if Deception is Average (-2) relative to Social Aptitude, so should be Detect Lies. Then, because Mary doesn’t have any investment in Detect Lies, she falls back on the implicit “Detect Lies 1” she has, per Social Aptitude 3.

By default, defenders win ties. If the GM feels that circumstances should have the attacker advantaged instead, this can be achieved with a +1 point modifier in the attacker’s favor.

Final Notes

Above is enough material from which an experienced GM can run a campaign using the d8 System. At this point, she’ll have to create her own health and combat system, because no such modules have been written, so the d8 System is certainly not ready for first-time GMs.

Here are a few technical tangents that won’t matter for most players or GMs.

Other Dice?

The “Poisson die” doesn’t have to be a d8. In fact, the d8 isn’t the most accurate contender. Its zero is 1.9% heavy (0.375 vs. 1/e = 0.36787…), and so 8dP is going to produce a 0, although very rarely, about 17% more often than Poisson(8). Most GMs and players aren’t going to care about that discrepancy.

You can get a more accurate “Poisson die” on a larger die, like a d30:

  • {1–11: 0, 12–22: 1, 23–28: 2, 29: 3, 30: 4}, with
  • {1–24: 0, 25–29: 1, 30: 2} for chaining.

The drawback here is that d30s are big (and, likely, expensive). You can’t easily hold 4, 6, or 8 of them in your hand. Also, while the d8-based dP has a “heavy” 3+ (1/8 = 12.5%, as opposed to about 8%), the d30-based dP has a light one (2/30 ≈ 6.7%). Since most players (and GMs) are not going to care about exact fidelity to a probability distribution, I consider the heavy 3 on 1dP a feature rather than a bug.
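The accuracy comparisons reduce to a few lines of arithmetic:

```python
import math

# Zero-probability of each "Poisson die" vs. the true Poisson(1).
p0_d8 = 3 / 8           # faces 1-3 of a d8
p0_d30 = 11 / 30        # faces 1-11 of a d30
p0_true = math.exp(-1)  # 0.36787...

# An 8dP result of 0 requires all eight dice to come up 0, so the
# per-die excess compounds: (3/8)**8 / e**-8, about 1.17.
excess_8dp = (p0_d8 ** 8) / math.exp(-8)

# The 3+ tails: heavy on the d8 (1/8), light on the d30 (2/30),
# vs. Poisson(1)'s P(X >= 3) = 1 - e**-1 * (1 + 1 + 1/2).
tail_d8 = 1 / 8
tail_d30 = 2 / 30
tail_true = 1 - math.exp(-1) * 2.5  # about 0.080
```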

Nothing, of course, stops players from synthesizing a d64 from two d8’s and thereby getting a more accurate model of a single dP… but I don’t think it’s worth it.

One can get a fantastically accurate Poisson(k/120) simulation by rolling k d120’s with the mapping {1–119: 0, 120: 1}. I recommend not doing this, though. The d8 System, I think, gets the statistical properties we want from the Poisson family of distributions, even if a single dP doesn’t model Poisson(1) with perfect accuracy.

To Chain or Not To Chain?

It’s up to the GM whether chaining applies to antagonists’ rolls. I would say that they should— my aversion to downward chaining for PCs’ rolls doesn’t prevent upward chaining for antagonistic NPC results. It’s not the potential for unbounded badness from a PC’s perspective that makes downward chaining a design blunder— after all, the game simulates a world in which the characters can die— but the physical act of making a player roll for it.

Similarly, for “situational rolls” (e.g., weather), the standard 2dP has a wide bottom (14%). If you want to distinguish 0/bad from 00/terrible and 000/atrocious… go for it.

Fractional Levels

Normal Skills shouldn’t have fractional levels. They require special treatment of dice (e.g., a “½dP” as described above) and, in my opinion, half-levels of skills (from a player and GM perspective) aren’t worth the hassle. If you don’t know how to get simple (Difficulty 1) tasks done right 63% of the time, you don’t know the thing.

The d8 System assumes four levels of Skills are available to entry-level players:

  • 1: apprentice,
  • 2: journeyman,
  • 3: master,
  • 4: expert (best in a small or mid-sized town).

with four more levels of room for improvement (8 is best-in-the-world), and that six(-ish) levels of Aptitudes are available:

  • ¼: utterly inept,
  • ½: below average,
  • 1: average untrained,
  • 2: above average (average in class),
  • 3: gifted,
  • 4: exceptional,
  • 5: extreme, borderline broken (~1 in 100,000).

These are coarse-grained in order to correspond with levels we perceive in the real world, so that in a game situation where no rules apply, or the existing rules don’t make sense, the GM can fall back on her “common sense” about what a journeyman carpenter (Carpentry 2) or master of mendacity (Deception 3) can and cannot do. It also prevents the stats from telling players more than the characters would naturally know about themselves. It may seem especially coarse-grained that there are only two levels of below-averageness, as opposed to four going up, but I imagine many experienced GMs will understand the sense of this. When players want their characters to be below-average in something, there are usually two use cases:

  • they want to play a character who is extremely incompetent in one way (but, one hopes, competent in other ways) for the role-playing challenge; the opposite of “munchkins”, they want the game to be difficult for them.
  • they don’t consider the attribute (or Aptitude) relevant to their character archetype (or “class”) and are “dump statting” it to buy more points elsewhere.

The first of these is fine; the system supports severe incompetence (¼), although GMs should restrict it to players who know what they’re doing. And there’s nothing wrong with the second, either; but dump statting shouldn’t yield all that much, which it can if there are too many levels of intermediate-but-not-extreme below-averageness. For this reason, the system enables only two: the I-don’t-think-I’ll-need-this below average of ½ and the I’m-up-for-a-serious-challenge of ¼.

Still, players may find a need for finer granularity. Using the deadlift example, there are several intermediate levels between a 300-pound deadlifter (Strength 2) and a 450-pound deadlifter (Strength 3); if a player decides that his character should be able to lift precisely 375, the mechanics allow for a 2½ in the aptitude, and it’s easy enough to figure out what “2½dP” looks like (2dP plus ½dP, as described above). If players or GMs want to know exactly how strong and fast the characters are, down to tens of pounds and tenths of miles per hour, the d8 System doesn’t outright preclude these intermediate fractional levels— it’s just that I, personally, don’t think they’re necessary for 98% of role-playing needs. We don’t care, after all, whether the character can or cannot budge a 700- versus 800-pound door; we care whether the PC can budge that door that is standing in the way— and while it’s a bit “fudging” to make it a Strength check (arguably, suggesting that the weight of the door itself is being determined by the dice) it does keep the story moving.

… And that’s probably enough for one installment.

Life Update 11/21/20: Farisa’s Crossing, et al

I’m still trying to figure out the matter of my online presence (including, to be frank, whether I want to have one at all). For now, I’m still on Substack. I’ll be mirroring these posts on WordPress; as much as I’ve lost faith in the platform, I don’t see any harm in keeping the blog up.

On Farisa’s Crossing, I’ve stopped promising release dates. I can only give a release probability distribution— and that, only for the Bayesians, since the frequentists don’t believe probabilities can be applied to one-time future events— but, I have reasons to be optimistic, regarding current and future progress:

  • the novel is “bucket complete”, by which I mean that if I had a month to live, I could leave it and a pile of notes for an editor, and I wouldn’t feel like I had left the world an incomplete book. (I wouldn’t care about marketing or sales outcomes, for an obvious reason.) There are still things to improve, and I intend to do most of the remaining work myself, but it’s basically “ready enough”.
  • I’ve stopped fussing about word count. It used to be really important to me that the book not get “too big”. Traditional publishing ceased to be an option when I crossed the Rubicon of 120,000 words— as a first-time novelist, you have zero leverage and 120K is all you get, even though most award-winning literary titles (in adult fiction) run 175–250K. For a while, my upper limit was 175,000… then 200,000… then 250,000… which shows how good I am at setting these “upper limits”. Farisa became a bigger story over time. Her love interest, I realized, ought to be more than a love interest. I gave more characters POV time, which meant more world to flesh out. I decided to give more back story to an important villain. Various proportionality and pacing concerns— systems of equations where the variables and coefficients are all subjective, but still require precise tuning— meant that fleshing out one set of details required me to do the same for another. I’ll still cut anything that doesn’t belong. If a scene has an obvious weakest paragraph, or a paragraph a clear weakest sentence, or a sentence a needless word… it gets yanked. At some point, though, the risks of cutting outweigh the benefits.
  • I’m able to afford having stopped taking new clients in May. I’m down to maintenance of existing ones, at least for now. There’s little stopping me from hitting the next six months at 180 miles per hour. Unless something unexpected happens (and of course there’s that one thing that can happen to anyone) I can’t see anything preventing me from getting the book to a ready state.

There are a million “lessons learned” in the writing process, but I don’t believe in talking about those sorts of things until after you’ve completed the task.

I’ll give it a 75% chance that I’m ready to send my novel to a copyeditor by mid-May, and a 98% chance by mid-July. Concurrently, probably starting in early spring, I’ll need to get cover art, blurbs, and other marketing materials together. That can come together in a couple of weeks, or it can take months. It depends on a number of factors.

I may release the book, without much marketing— because if I’m right about the book, it shouldn’t need it; if I’m wrong about it, perhaps obscurity is a good thing— in August. My next big project (everything being up in the air for obvious reasons) starts in the fall and, to be honest, while the quality of the book itself is paramount, I’m willing to compromise on short-term sales to increase my likelihood of succeeding in other projects. On the other hand, circumstances evolve, and I may size up the situation and decide that Farisa does need the traditional long-calendar marketing strategy, in which case we’re looking at a release date of late 2021 or early 2022.

Quitting WordPress – April 30, 2020

I’ve gotten several complaints about ads on my blog.

When I set this thing up in 2009, I didn’t know much about the web— I’m an AI programmer; web stuff I do when there’s a reason to do it— and I used WordPress’s free offering, and it worked. At the time, you published a blog post and there it was. No ads.

At some point, WordPress began running banner ads under my essays, without paying me, because I was using the free tier, so I guess the attitude was, fuck that guy. I never saw the ads on my own blog, when logged in, and now I understand why. If WordPress bloggers (like this dumb sap) knew how intrusive the ads were, they’d be less likely to create content.

The banner ads were ugly— and I wasn’t making any money off the damn things— but I was willing to tolerate them… laziness, inertia, not wanting to start over.

This afternoon, I looked at my blog, while not logged in, and saw this:

[Screenshot of the ad, captured 2020-03-25]

Not just a banner ad, but a block ad, right between paragraphs. A fear-based fake-news ad, on top of that. Fucking garbage, in the middle of my writing.

I never allowed this. I am embarrassed that this piece of garbage ran between two paragraphs of my writing. I am fucking done with this shit.

What have we let happen to the Web? Fake news, interstitial ads, egregious memory consumption, and those obnoxious metered paywalls. Social media is an embarrassment. I am so sick of all this fucking garbage, the blue-check two-tier social platforms, the personality cults, the insipid drama, and the advertisements for garbage products no one wants and badly-written ad copy no one needs to read.

I am sick of “Free” meaning garbage. Yes, I’ll pay for news— but never in a million years if you punish me for reading more than my “4 free monthly articles”, you rancid stain. Make it free or charge for it; don’t be an asswipe and play games. Stop “giving away” a garbage product in the hopes of someone paying for something better.

This blog goes down at the end of April. I’m done with WordPress. I’m a programmer; time to roll my own.


A COVID–19 False Dilemma

Political leaders like Donald Trump and Congressional Republicans are trying to force the American people to choose one of two unacceptable alternatives:

  • Fast Kill: do nothing about the virus’s spread, causing millions of preventable deaths due to the catastrophe of large numbers of people— orders of magnitude beyond what our hospital system is designed to handle— becoming critically ill at the same time.
  • Hang the Poor: practice social distancing and flatten the curve (which we must do) but at the expense of crashing the economy, leading the poor to face joblessness, misery, and bankruptcy— In Time, it turns out, is not fiction— culminating in a Great Depression–level economic collapse.

Both scenarios lead to preventable loss of life. Both scenarios are intolerably destructive and will impoverish a generation. Both scenarios are completely unacceptable if something better can be done. Indeed, something better can. We must flatten the curve; we must practice social distancing. But it is artificial that “the economy” should be threatened by our doing so.

Compared to a 1973 benchmark, employers take 37 cents out of workers’ paychecks for themselves. Costs of living have gone up, wages have not kept pace, and working conditions have degraded. The result is a society where working people live on the margin, where two weeks without an income can produce, for most individuals, financial ruin. It didn’t have to be this way. This fragility is artificial. The rich created, for their own short-sighted benefit, a society in which the poor must serve the manorial lords on a daily basis or starve. It doesn’t have to be that way.

There’s a third option, one that Trump and Congressional Republicans would rather we not see. Yes, we flatten the curve; we practice social distancing and self-isolation and even follow a quarantine if circumstances require it. On the economic front, institute a wealth tax— a 37-percent immediate wealth tax to commemorate the 37% private tax levied against workers by their employers, and a 3-percent annual tax on wealth over $5 million going forward. Restore upper-tier income taxes to their New Deal levels. Offer a universal basic income (UBI) and put in place universal healthcare (“Medicare for All”). Remove restrictions on unemployment benefits. Mandate that employers protect the jobs of workers furloughed by this crisis. Offer rent and mortgage relief to those who need it. Eliminate student debt, and make appropriate public education free for all who are academically qualified. After the crisis, put funding into research and sustainable infrastructure. All of this can be done— for the most part, these aren’t new ideas.

The billionaires and corporate executives— and the Republican Party that represents them— don’t want Americans to see this third option. They’re afraid of “socialism”, not because it might not work, but because it almost certainly will. It took them fifty years— and an uneasy alliance with religious nutcases and racists— to roll back the New Deal and the Great Society, and they’re terrified of socialist ideas getting into implementation, because they know that when this happens, people find out they like socialism, and it takes immense political effort to roll this plutocrat-hostile progress back.

We don’t have to choose between “the economy” and millions of lives. This is a false dilemma being put forward by evil people who will only consider scenarios that leave the power relationships and hierarchies of corporate capitalism intact. Their failure to allow a workable third alternative constitutes murderous negligence.

Our economic elite is made up of people who would rather see millions die than the emergence of an economic system that challenged their titanic power. If we survive COVID–19, if we defeat the virus, we should go after them next.

Capitalism–19 Vs. Humanity–20

Societies around the world face a horrible decision, as a deadly coronavirus rages through the population. Do they continue with economic business-as-usual, and allow tens of millions of preventable deaths? Or, do they take drastic measures to slow the spread of disease (“flattening the curve”) that endanger our economy?

Let’s consider one extreme. What is likely to happen if our elected and business leaders do nothing? The number of people infected continues to double every 6 days. Our hospitals are swamped. Unheard-of numbers of people need respiratory support, all at once. Most do not get it— and they die. People needing transplants, even if they never get the virus, die waiting because the resources are unavailable. By midsummer, tens of millions of people are dead, and at least tens of millions more, though recovered, are permanently disabled. I call this scenario the Fast Kill.
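The “doubling every 6 days” claim can be sanity-checked with a couple of lines of arithmetic. The seed and target counts below are hypothetical, chosen only to illustrate the timescale; nothing here is epidemiological data:

```python
import math

def days_to_reach(initial, target, doubling_days=6):
    """Continuous-doubling approximation: initial * 2**(d / doubling_days) = target.
    Solving for d gives d = doubling_days * log2(target / initial)."""
    return doubling_days * math.log2(target / initial)

# Hypothetical: from 10,000 undetected cases to 100 million, doubling every 6 days.
d = days_to_reach(10_000, 100_000_000)
print(f"{d:.0f} days")  # prints 80 days
```

Roughly 80 days from a modest seed to 100 million cases, which, counting from March, lands in midsummer.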

I don’t want the Fast Kill. Millions of needless deaths is a thing to be avoided. However, the perspective of our economic elite is quite different from mine or yours. The billionaires are on private islands, or in secret bunkers, and can wait this thing out. A Fast Kill, to them, has one clear advantage: the power relationships and hierarchies of corporate capitalism (with some loss of personnel) remain intact.

Will our economy shatter if we take measures to slow the spread of disease? Yes, because corporate capitalism is brittle by design. Since 1973, worker productivity has nearly doubled, while wages have stagnated. Out of every dollar a worker makes, executives take 37 cents for themselves. As workers compete against each other for the benefit of the richest 0.1 percent— as opposed to, say, overthrowing their masters— rents rise, wages fall, and working conditions degrade. We now have a world where most people— and quite a number of vital small businesses— cannot survive 2, 4, 6 weeks without an income. Many workers get no paid sick leave. As elected officials and public-health experts demand we take measures to control COVID–19’s spread, many people will, by virtue of their need for weekly income, be unable to comply.

We wouldn’t tolerate a 37% tax, imposed on the lower and middle classes, from our government. And yet, that is exactly what the private-sector bureaucrats called “executives” have levied against working men and women. As a result, millions of people are so broke that, even under a quarantine enforced by the national guard, the need for an income will undermine such measures. Those who are forced to live on the daily “hustle”— odd jobs, panhandling, alleyway short cons, and black-market labor— are used to evading authorities, and they’re good at it.

Here’s some of what we need to do, to survive COVID–19 with civilization intact. Yes, of course we need to flatten the curve; we need to slow our economy and focus on urgent needs such as food, shelter, energy, and medicine. We need universal basic income protection— not a means-tested one-time payment, because a one-time check won’t do enough and we don’t have time to quibble over means tests— that people can rely on until the crisis is over. We need mandatory job protection for people sickened (and, in many cases, disabled) by COVID–19. We need rent relief for people who lose their jobs. We need to remove all restrictions on unemployment benefits, and to make these benefits tax-exempt as they were before Reagan. We need an unconditional moratorium on all medical bills— and, at the same time, government funding of hospitals to keep them afloat— during this unprecedented public health crisis.

All of this, yes, is “socialism”. Socialism is nothing more and nothing less than the contention that the principles of the Age of Reason (e.g., rational government over clerical rule or hereditary monarchy) ought to apply to the economy as well. It turns out that there are no capitalists in foxholes.

Our society is ruled by people, most of whom would rather see millions die than see such measures enacted. Why? Once so-called socialist measures are in place, they become pillars of a society and it takes decades to remove them. Surviving COVID–19 is going to require governments all around the world to impose socialistic measures more drastic than the New Deal and the Great Society combined.

There are no good alternatives. If elected leaders do nothing, we get a Fast Kill. Tens of millions of people die, and tens of millions more are disabled. If curve-flattening measures are imposed without socialistic protections, we destroy what’s left of the middle class, eviscerate the consumer economy, and risk such a high rate of noncompliance that infection may spread, needlessly killing millions, anyway.

Billionaires and corporate executives are scared, not of the virus, but of the changes our society will need to make to survive COVID–19. What if those social-welfare protections stick? Billionaires will become three-digit millionaires. Three-digit millionaires will become two-digit millionaires. Private jetters will have to fly first-class commercial flights. Corporate executives will be administrators rather than dukes and viscounts. Worker protections will be enforced again, interfering with the “right to manage”. In the long term, extensive investment in the sciences and health (to fight the next COVID–19) will raise employee leverage, at capital’s expense, across the board. The horror!

Those who run the global economy, to the extent that they have a say in what societies do, have a conflict of interest. They can try to preserve the hierarchies and power relationships that enrich them— at the cost of a holocaust or few. Or, they can accept social changes that, while bringing humanity forward, will emasculate corporate capitalism and hasten its replacement by a more humane system, such as social democracy en route to automated luxury communism.

Shall it be Capitalism–19, or Humanity–20, that survives? Working men and women await the answer.

Yes, Under Corporate Capitalism, 8 Million Working Americans Are Likely To Become Unemployably* Disabled–– Possibly, for Life. Check the Math; Check the Assumptions.

An assertion I have made recently has drawn controversy. I have said that, in the wake of COVID–19, we’ll likely see 8 million American workers become unemployably disabled for a long period of time–– years; possibly, for life. This is an extreme prediction, and I hope I’m wrong. I’ve made predictions that were wrong and embarrassing. I sincerely hope this is the most embarrassing prediction I’ll ever make. Given the extremity of it, let me explain the assumptions on which it rests.

Please, check my work. If I’m making an incorrect assumption, post a comment, and I will fix it.

I am not, in any capacity, an expert on virology, medicine, or epidemiology. These are complex, difficult sciences and we must defer to the experts. The numbers I will be using will be within the ranges of existing predictions regarding how bad this pandemic can get, and how much damage it can do.

Of course, we have to define terms. What does it mean for a person to be unemployably disabled? There is a spectrum of sickness, and one of disability. The vast majority of these 8 million people (plus or minus a factor of two) will not be bedridden, miserable, or sickly for the rest of their lives. Unemployably disabled means that someone is sick enough that (a) no one wants to hire her (whether because of her disability itself or her suboptimal career history) and (b) she struggles to retain jobs due to her inability to hide the chronic health problem. She need not be physically crippled, psychiatrically hospitalized, or too sick to contend with daily life. She might not “look” disabled at all, but she will have too few spoons to have even a chance of victory in corporate combat.

In the United States, where employers are above the law on account of having convinced the public to call them “job creators”, it does not take much disability at all to make someone unemployably disabled.


Like I said, I’m going to document all of my assumptions, so the public can check my work.

My first assumption is that COVID–19 will not be contained. This is the biggest one, and I hope I’m wrong. If the virus is contained, like SARS, then perhaps only a small number of people will be exposed to the virus. If only 500,000 people get it, then clearly there is no way for COVID–19 to render 8 million people unemployably disabled.

However, the virus is extremely contagious, with an r0 estimated at 2.28. Not as bad as measles, worse than flu–– probably worse (in contagion) than the monster flu of 1918. Does this mean that it can’t be contained? No. SARS had a similar r0 and was contained. However, neoliberal corporate capitalism, for reasons that will be discussed, is especially bad at containing outbreaks.

Old-style state authoritarianism has its failings, but people know what the rules are. A government quarantine can be enforced. An authoritarian government can just shoot at people who move until the r0 drops below 1. It’s a terrible solution, but it works.

Social democracy can also work, so long as a sufficient number of people have the good will to exercise their option to hunker down (that is, practice social distancing) and let the experts handle the crisis. I have chronic health issues but I am taking special measures right now (e.g., dietary changes, avoidance of damaging circumstances) to minimize risk of needing medical attention in the next six months. In part, my reasons for doing so are selfish; in part, I am trying to minimize my risk of being a burden to a soon-to-be-overtaxed hospital system. We are all on the same team.

What cannot contain an epidemic like COVID–19 is an economic system such as ours. Under neoliberal corporate capitalism, we have a libertarian government (providing immense economic freedom to those privileged enough not to have to work) but live in a matrix of authoritarian employers, who control our incomes and our reputations, and who can bend the government to their will by calling themselves “job creators”. In a world like this, no one knows who is in charge. Who does the American worker obey? Does he obey the man in Washington advising self-quarantine, or does he obey the boss who believes “coronavirus is just a cold” and has the power to turn off his income (and, by giving negative references, non-consensually insert himself into the worker’s reputation) if he shows up 15 minutes late? Chances are, he’s going to ignore the G-Man and obey his boss. The quarantine will not be effective. Even if it is enforced by the government, so many people are in such precarious economic straits that they will illegally circumvent it, if it comes to that.

We would have to scrap corporate capitalism entirely to have anything better than a 5 percent chance of containing COVID–19. Let’s be honest, a total overhaul of our economic system in the next two months is very unlikely. Chances are that, instead, the novel coronavirus will stick around in the American population (and, therefore, the world population) for good.

How bad is this? Not necessarily terrible. Over time, we’ll probably develop natural immunities to this thing, rendering it just another coronavirus. In the meantime, though, COVID is going to make a lot of people sick.

My second assumption is that about 100 million American workers will get COVID–19. Angela Merkel predicted that two-thirds of Germans will contract the virus, which is in line with epidemiologists’ expectations. That doesn’t mean they’ll all get sick. Most won’t. Case-fatality rates–– the WHO has given this disease a CFR of 3.4%–– often overestimate the lethality of the virus, because so many mild and asymptomatic cases go undetected. We may never know the real lethality rate of this disease, but in working-age Americans it will likely be under 1 percent. That’s the good news. This is a serious illness, but it’s not showing a likelihood of being a massacre like, say, the 1918 flu.

What about flattening the curve?

Health ministers and epidemiologists have been advising us to practice social distancing–– that is, avoid large gatherings–– to slow the virus’s exponential growth and “flatten the curve”. We absolutely must do that. A widespread emergency that overloads the hospital system will cause the lethality to spike, as it has in Italy.

By flattening the curve, we can achieve a great deal in preventing deaths, but we’re not necessarily going to reduce the number of cases. Flattening the curve is important because, when resources run thin, the matter of when people get sick has a major influence on survival. It doesn’t guarantee that they’ll never get sick.

How sick? Some people will carry the virus and suffer no symptoms. Some people (and not only elderly people) will get severely ill.

My third assumption is that, among that 100 million workers, the breakdown of cases (into asymptomatic, mild, severe, and critical cases) will be similar to what we’ve seen so far.

Unfortunately, there’s some guesswork regarding the currently infected population. We haven’t tested everyone; we don’t know how many cases of COVID–19 there are. Using percentages I believe to be in range of what experts expect, and scaling down a bit because we are speaking of the working-age population (a younger and healthier set), I’m going to predict: 50 million asymptomatic cases, 35 million mild infections, 13 million severe cases, and 2 million that are critical. These numbers could well be off by a factor of two, but not in a way that would meaningfully alter my fundamental conclusion–– that millions of people are about to develop long-term disabilities that, in American corporate capitalism, will render them unemployable.

It’s important to understand what is meant by a “mild” infection, when the medical community says that most (70–90%) COVID infections are mild. The word “mild” is relative. A “severe” cold (38 °C fever, inflammation and pain, unable to work) is “mild” by the standards of flu. Similarly, “mild” SARS or COVID is comparable to “severe” influenza (unless we’re talking about the 1918 monster flu, which is in its own category). Specifically, in the context of COVID, “mild” means that a patient is expected to survive without hospitalization–– there is no evidence of immediate danger.

In a “mild” case, life-threatening secondary infections may occur later on. That’s a serious issue, but not one that must be treated now. Some of these “mild” cases will come with pneumonia. Some will come with 39–40 °C (unpleasant but not critical) fever. Some will produce post-viral chronic fatigue comparable to that following mononucleosis or the bacterial infection responsible for Lyme disease. Quite a few people with “mild” cases will experience transient (but not life-threatening) respiratory distress serious enough to induce panic disorder or PTSD. These cases won’t require hospitalization–– and hospitalization will likely be unavailable–– but they will still be, for most young people, the worst health problems of their lives so far.

If that barrel of fun is “mild” COVID, what’s severe? Severe cases require hospitalization for days, and possibly weeks. Artificial respiration may be involved. Critical cases include those where vital organs are involved–– kidney failure has been reported. Yeah, this thing’s nasty.

Any health problem can traumatize a person, but respiratory ailments have quite a track record. The body is not meant to go without oxygen, and even slight deprivations freak the brain out. We’ve seen this with SARS and the 1918 flu. We’re likely to see it with COVID–19. Even in the cases being called “mild”, because there is no threat to life that requires emergency hospitalization, truly “full recovery” is not a guarantee. People are going to get panic attacks from this, and once a person has had a few of those, a lifelong struggle with panic disorder (and agoraphobia, and depression due to adversity in employment) becomes likely.

My fourth assumption is that COVID–19 will have a long-term disability profile, controlling for severity, comparable to SARS.

Nearly half of SARS survivors, ten years later, were unable to return to work.

Does this mean that 40–50 percent of COVID–19 survivors will be unemployably disabled? It’s hard to say. SARS is not COVID–19. Let’s size up some of the differences.

For one, SARS disproportionately affected skilled healthcare workers, for whom there’s high demand in any economic situation. We would see a higher rate of unemployable disability if this hit people whose services aren’t really needed–– say, private-sector software engineers or project managers. Of course, it will hit everyone, in-demand and not.

Second, SARS did not have many victims in the United States–– where, although it is illegal to discriminate against disabled workers, the laws are scantly enforced. It mostly afflicted countries where workers have better protections against their employers. If, say, 40 percent of survivors were unemployably disabled in Canada, we’d likely see 75 percent unemployably disabled in the United States, not because the disease was more severe but simply because employers in the US get away with more.

That being said, all the evidence so far suggests that COVID–19 is not as severe as SARS. Therefore, I don’t think we’re going to see the same rate of unemployable disability (40–50 percent) among COVID–19 survivors, if only because there are so many more mild cases.

Here are my predictions. Five percent (1.75 million) of those with mild infections will be unemployably disabled–– that is, at some point, subjected to a career disruption through no fault of their own from which they will be unable to recover. Among the severe cases, I’m predicting 40 percent (5.2 million); among those with critical cases, 65 percent (1.3 million). These numbers might each be off by a factor of 2, but they’re not unreasonable. They are middling estimates.
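Since I’m inviting readers to check my work, here is the 8-million figure as bare arithmetic. The case counts and disability rates are the assumptions stated above (each possibly off by a factor of two), not measured data:

```python
# Sanity check on the essay's own arithmetic: the "8 million" figure is
# just the sum of three products over the assumed case breakdown.
cases = {"asymptomatic": 50e6, "mild": 35e6, "severe": 13e6, "critical": 2e6}
rate = {"asymptomatic": 0.00, "mild": 0.05, "severe": 0.40, "critical": 0.65}

assert sum(cases.values()) == 100e6  # 100 million infected workers in total

total_disabled = sum(cases[k] * rate[k] for k in cases)
print(f"{total_disabled / 1e6:.2f} million")  # prints 8.25 million
```

The mild, severe, and critical buckets contribute 1.75, 5.2, and 1.3 million respectively, for 8.25 million total, “about 8 million”.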

There’s already a mountain of evidence supporting high proportions of those suffering severe and critical illness becoming, through no fault of their own, unemployable. What about the mild cases? Isn’t it a bit dire to predict that 5 percent of people with “only” mild infections will become unemployably disabled? No, it’s not. If anything, the real number’s likely to be higher.

Most of these cases will not be attributed to COVID–19. Plenty of these people won’t know they ever had it. They’ll simply experience “a bad month” in which they will be unable to meet the performance requirements of their jobs, suffer managerial adversity and workplace bullying, and suffer career setbacks from which they’ll never recover.

Kimberly Han is a (hypothetical) 33-year-old software engineer at a half-trillion-dollar technology company, LetterSalad (formerly, Vigintyllion). On April 3, she develops a mild case of COVID–19. She’s able to work from home, because the US is on lockdown. Her fever never breaks 39 °C and she never feels the need to go to the hospital. She’s never diagnosed with COVID. She never thinks she even had it. Since she works from home, she’s not even aware of racist COVID-related jokes made about her by the managerial in-crowd. The storm passes. Everything’s fine.

In September, Ms. Han finds herself tired. Post-viral syndrome. Other than being tired, she’s fine, but she develops a cough. She misses a “sprint” deadline. She needs to take naps in the afternoon, and misses an unannounced but important meeting. Management perceives her as a “slacker” or as “sickly” or as “low-energy”. The product manager and her “people manager” tell her to stop “SARSing up the schedule”, which is totally not racist because the direct manager is a white, Ivy-educated “Boston Brahmin” and the product manager is an actual Brahmin, and it’s physically impossible for racists of two different races to work together to be racist to someone.

The workplace bullying culminates in her developing post-traumatic stress disorder. She begins to have daily panic attacks. She powers through the episodes, not missing a day of work to the attacks, but her manager doesn’t like “the optics” and begins paperwork to terminate her “for performance”. Kimberly Han, through no fault of her own, loses her job. In time, the post-viral fatigue lets up, but the post-traumatic stress disorder does not. COVID–19 has left her body, and she never knew she had it, but she’s unemployably disabled.

What’s above will happen to people. Even if we do everything right, even if we flatten the curve and prevent our hospitals from becoming dangerously overloaded, it will happen to American workers, not necessarily in that precise way, but nonetheless surely. Some will have reduced lung capacity. Some will develop anxiety and depression. Some will develop panic attacks or PTSD. Some will never be diagnosed, but will exhibit unexplained personality changes and not even know, when they are fired and unable to ever work again, that it was because of illness that they lost their careers (and that they were, therefore, fired illegally).

Could I be wrong on that 8 million figure? Of course. More accurately, it is: 8 million, plus or minus a factor of 2, conditional on an assumption of non-containment. I hope I’m wrong. I hope the virus is contained, or that it proves seasonal and dies out in the spring, but there’s no evidence that we can count on either one.

It is very likely that millions of American workers are about to become unemployably disabled. Crippled? No. Not even necessarily unhealthy. Careers are fragile things; it doesn’t take much disturbance to make someone unable to get and keep jobs in a competitive labor market that has been rigged against workers for the past forty years.

“Couldn’t this be a good thing?”


I understand the argument. This pandemic may create a short-term labor shortage, and there are people who believe the clearing-out could lead to an improvement of opportunities for workers. I’m not so bullish.

I don’t know enough about virology, medicine, or epidemiology to do anything more than piece together existing research, but I do know enough about economics, politics, and organizational dynamics to say this: while the people who own our society are evil, they are not stupid. The upper class and the corporate executives will profit, and we will suffer.

There are some people (sick, broken people) who believe that this “Boomer Remover” virus will create opportunities in the workplace or that it will “clear away” people who are a burden on society. Neither’s true. First, while this will kill a lot of sick old people, it will at the same time make a lot of currently healthy people (young and old) very sick–– in some cases, for a long time. The disability burden on society is not going to be ameliorated by COVID–19; it will be increased.

So, let’s talk about why a potential labor shortage isn’t actually to the worker’s benefit. We are not in the time of the Black Plague. In the 14th century, the nobles needed the peasants. American workers can easily be replaced by machines and by literal slaves in other countries, and they will be. I remember, in 2005, being told that Millennials would face a world of opportunity by now, as Boomers retired and vacated the workforce. It didn’t happen. Those cushy $500,000-per-year BoomerJobs? Those were never filled. They simply ceased to exist. We live in a society where recessions are permanent (for the workers) and recoveries are jobless. When things go bad, workers are first to suffer; when things are good, the owners take the bounty for themselves. COVID–19 will be no different. The rich will see a drop in their stock valuations; the poor will be eviscerated. This dynamic will not change until we destroy corporate capitalism.

What happens to the eight million people who become unemployable because of post-viral disability? There’s no safety net in this country, so these people will have record-low leverage, and so while they won’t find decent jobs (because no one will hire them for one) the owners of our society will find ways to extract work from them. A number will fall into precarious “gig economy” piece-work, grinding out enough of an income to survive, as their health gradually unravels (even as COVID–19 becomes a distant, unpleasant memory). The least fortunate will turn to various unsavory ventures, because illicit labor doesn’t require a spotless résumé. Perhaps the most talented of the newly-disabled will do what I’ve had to do: swing from one six-month rent-a-job to another, until the boss figures out they have a disability and either fires or gimp-tracks them. That these people will be unemployable doesn’t mean that society won’t be able to get work out of them–– it means that they’ll be unable to get anything out of society.

One might think, though, that the eventual exclusion of 8 million people from traditional, “respectable” labor (office jobs) could bring a benefit to the other 152 million who do not develop lifelong disabilities. Less competition, right? That’s exactly what our pig-fucker bastard owners want us to think. They want us to think of our fellow citizens–– fellow proletarians–– as “competition”. They want us divided against each other, because it keeps them in charge.

That Star

Revisit the title of this essay. I predicted that millions of people (8 million, plus or minus) will become unemployably* disabled, accent on the *.

In a corporate dystopia, where workers compete against each other for the benefit of their owners, it is inevitable that people with otherwise mild disabilities will become unemployable. That is, they will be unable to convince the obscenely well-paid “professionals” who profit by the buying and selling of others’ labor to give them gainful, stable employment. There is no reason it has to be this way.

Should a person who suffers post-viral fatigue be subjected to workplace bullying and performance evaluation? I would say no. Should a person, recovering from a severe respiratory illness, be non-consensually ejected from her career because her panic disorder or depression caused a headache for her boss? No.

Here’s the reveal, which should not be much of one.

Yes, COVID–19 is going to fuck a lot of people up. It’s killing people and will continue to do so. It’s horrible. I wish this were not happening; I wish what is about to happen were not about to happen. This said, it need not be the case that COVID–19 renders 8 million people, or even one person, unemployable. COVID–19 exists in nature; it is part of the real physical world and we have to contend with it. “Employability” does not exist in nature. It is a social construct, and a stupid one at that.

Corporate capitalism is a fragile, hostile economic system that will throw millions of people under the bus in the next year for no reason but their “offense” of getting sick. It will not know whether they got sick from COVID–19 or a secondary infection or post-viral fatigue or the psychiatric sequelae of respiratory illnesses. It will not care. It will fire them “for performance” and the wheels of the bus will roll along.

We’ll soon see about 8 million people rendered permanently unable to get an income on the harsh terms of corporate capitalism. For what? Is the needless suffering (and, likely, the continued worsening of their health) of 8 million people, who did nothing wrong, a worthy price for the upkeep of a decaying socioeconomic system that all intelligent people–– even though we disagree on solutions–– despise? I think not.

COVID–19 is horrible. The earthly existences of thousands are, as I write this, in present danger. That number is likely to worsen. We need not let it be more perilous than nature has made it.

If we keep corporate capitalism around, we will see 8 million people–– some talented, some extraordinarily competent; but nonetheless unable to survive in a system where each worker must compete against a hypothetical replacement who might be as skilled but without illness–– fall out of the primary economy for good. There’s no point in that. It doesn’t have to happen that way. We can tear corporate capitalism down. We can overthrow our corporate masters (through nonviolent means if possible, through other means if our adversaries make it so). We can eradicate an economic system in which we compete against each other for the benefit of a tiny, self-serving minority who wish to own us. COVID–19 is proving to us that we, citizens of the world, are all on one team. We all want this thing not to destroy us and everyone we care about. It’s time to build an economic system reflective of that.

Wash your hands for 20 seconds. Avoid public gatherings. Try not to touch your face. Furthermore, I consider that corporate capitalism delenda est.

Welcome To My World. I’m Sorry That You’re Here.

I had a mild bout of flu in February 2008. I’d had worse flus, and I have had worse since then. I was a 24-year-old with no health issues; I recovered quickly.

What made this infection notable was that, a month later, I experienced intense pain in my throat that radiated through my chest and face. I could barely see. I tried to drink water and could not swallow. For a minute or two, I couldn’t breathe. Laryngospasm–– it feels like drowning in air. Dizziness, nausea, and vomiting followed. The “mystery illness” caused a panic attack. Not just one, either; they kept coming for months.

The physical problem turned out to be a secondary bacterial infection. It’s rare, but sometimes happens after influenza.

Unfortunately, the panic attacks never went away. They often don’t. Severe respiratory illnesses often cause lifelong disability–– PTSD, reduced lung capacity, depression, anxiety and panic disorders. Once the body and brain “learn” how to panic, this vulnerability becomes a new facet of daily life. So terrible is the experience of a panic attack that a person will do nearly anything to end one. Without a doubt, they’re one of the worst things a person can experience. Moreover, the fear of panic attacks can, itself, produce one. Intrusive thoughts and superstitions become a part of daily life. Unchecked, this can lead to dysfunction and agoraphobia.

I hit bottom in 2009. I was agoraphobic. I had to spend a year re-learning how to do daily activities, re-learning that it was safe to ride a bike, sit on a crowded subway, ride a car. I built myself back from 1 HP. It wasn’t easy.

At this point, I’m 98-percent recovered from panic disorder. I used to have attacks on a daily basis. Now, I might have a “go-homer”–– one bad enough that I have to leave work–– once in a year or so. I’m probably in the 85th percentile for health at my age (36). Aside from being minus a gallbladder, I’m in excellent physical health. I can deadlift 340 pounds. At this point, I can do all the activities of daily life. I’ve had panic attacks while driving. I don’t recommend that experience, but it’s not unsafe. If I have one while scuba diving, I have a plan for that (signal diving buddy, ascend slowly).

Open-plan offices are a struggle for me. Actual danger doesn’t trigger panic attacks. I’m fine riding a bike in traffic. I’ve swum with sharks (no cage) at 78 feet–– which is not as dangerous as it sounds. Open-plan offices, though, are needless cruelty. The easiest way to have a panic attack is to sit for nine hours in a place where having one (a minor irritation when it happens at home) will be a professional death sentence–– and, trust me on this, it is. If the bosses find out you have (scary music) “mental illness”, you will either be fired or given the worst projects–– gimp-tracking–– until you leave.

So-called “mental illness”, after a serious respiratory infection, is normal. The body is not meant to go breathless. Nearly half of SARS survivors, ten years after recovery, were still too disabled to return to work. These were healthcare workers (in high demand) outside of the United States. For American wage workers, the rate’s going to be worse.

I’ll give myself as an example. On May 10, 2019, I successfully interviewed for a job at MITRE as a simulation and modeling engineer. On May 13, they made an offer, which I accepted. My intended start date was Monday, June 3. Robert Wittman, who was to be my manager, somehow learned of my diagnosis (likely, illegally) and, on the (false) belief that it would prevent me from getting a security clearance, rescinded the offer. This happened to me 11 years after the original infection.

So, even if you survive severe COVID and are well enough to work, you might not find anyone willing to hire you.

Here’s my prediction, and I hope I’m wrong, but I’m probably not. If anything, these numbers are conservative.

First, I think that nearly everyone in the US will be exposed to COVID–19. The Republican Party’s forty-year campaign to destroy our government has been successful, and employers are more interested in the appearance of doing the right thing than in actually doing the right thing. The American workforce is 160 million people. I predict 100 million will be infected.

Half of that 100 million, I predict, will be asymptomatic. They’ll get the disease but show little pathology. Of the other half, I predict 35 million mild cases, 13 million severe cases, and 2 million critical cases, leading to 125,000 deaths. These numbers are far more favorable than the pattern the disease has shown, and that’s because I’m talking about the American workforce, not the entire population. Total deaths in the US could reach seven figures; working-age deaths, probably, will not.

“Mild” is a relative term, and when we’re talking about diseases like SARS or COVID, “mild” isn’t all that mild. It means the case probably doesn’t require hospitalization. Some who have mild cases will develop secondary infections. Many will lose their jobs and health insurance, producing psychiatric sequelae. These people won’t be in immediate danger of losing their lives, but many will be disabled, and some for years. I’m going to say that 5 percent of people (1.75 million) in this set will be long-term disabled–– they will lose their jobs due to illness and be unable to find work.

Of the 13 million severe cases, I’m going to use SARS as a point of reference and predict a 40-percent disability rate–– 5.2 million. This leaves 2 million at the worst level of illness–– critical, meaning organ failure or intubation are involved, and I’m going to predict that 65 percent of them (1.3 million) are unable to go to work. This gives us a total of 8.25 million.
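The tier-by-tier arithmetic above can be checked in a few lines. To be clear, the case counts and disability rates below are the essay’s own predictions, not epidemiological data:

```python
# Sketch of the back-of-envelope prediction above. All inputs are the
# essay's assumptions about the 160-million-person American workforce.

infected = 100_000_000

# Predicted split of infections by severity
asymptomatic = 50_000_000
mild = 35_000_000
severe = 13_000_000
critical = 2_000_000
assert asymptomatic + mild + severe + critical == infected

# Predicted long-term disability rate per tier
disabled = (
    0.05 * mild        # 1.75 million
    + 0.40 * severe    # 5.2 million, using SARS as the reference point
    + 0.65 * critical  # 1.3 million
)
print(f"Predicted long-term disabled: {disabled / 1e6:.2f} million")  # 8.25
```

The 8.25 million total is dominated by the severe tier, which is why the SARS disability figure does most of the work in this estimate.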

If my (conservative) predictions are right, we in the 18–65 sector are going to see “only” five years’ worth of traffic deaths from COVID–19. A big number, and worth taking seriously, but not apocalyptic. Life will, after a few miserable months, return to normal.

Millions of workers–– I predicted 8 million, but it could be half that or double that–– will be, in the wake of non-fatal COVID, unable to return to their jobs, or to get other ones. They’ll try to work–– in this country, they have no other choice–– but they will be unable to meet the performance demands of their jobs, and summarily fired. They will have six-month job gaps in 2020 and no one will want to hire them. Their careers will be disrupted and unfixable. CEOs will insist that they are not discriminating against people who survived COVID, with all the credibility I have in insisting that I have a 16-inch IQ and 200 penises. Legislators might pass laws preventing discrimination against COVID–19 sufferers, or against people with job gaps during 2020, but we all know that employers don’t need to follow laws when they can call themselves “jawb creators” and get a free pass.

Our society runs on an “if ya doesn’t work, ya doesn’t eat” model, and millions of people are likely to become unemployably disabled. Some will be unable to work at all. Some will, like me, return mostly to health, and be able to work, but struggle to get hired due to lingering stigma. COVID–19 will pass. Our bosses and owners will tell us that everything’s back to normal (it won’t be) and that we just need to get back to work. But millions of people are going to be unable to do so, and the system will discard them forever.

I should mention a personal bias: I’m a democratic socialist. Often, I read people on the right claiming that “communism killed millions”. It isn’t true. Death attribution is a complex science and you can’t just count every death that’s not by old age as being caused by the economic system in place. If you compare the death tolls of so-called communist regimes (some of which were terrible) to what they would likely have been under similarly repressive regimes (of which there are numerous examples) aligned with imperialist capitalism, the excess death rate of communism is… zero or negative. That’s not to say that communism is flawless or faultless–– only that it does not produce excess deaths over what would have otherwise occurred.

At issue is that we’ve been brainwashed, in the United States, to believe that all people who died of causes excluding old age in communist countries were “killed by communism”, every single one. Meanwhile, when capitalism kills people, it blames those who were killed. “Personal responsibility.” If that Pakistani kid’d had the good sense not to go outside on a sunny day, he wouldn’t have been freedom’d by a drone.

Communism’s public liability is that it never forgets–– and, given the severe failings of societies that called themselves communist, it should not forget. Communism has too much memory and too much history and too much responsibility. Capitalism has no memory and no history and no responsibility.

If we go “back to normal”, as our owners and managers will insist, and neoliberal corporate capitalism remains in force, eight million people are going to find themselves falling to the bottom. Months or years from now, they’ll die needless deaths. We already know what the capitalists will say. Trump already said it: “I don’t take responsibility at all.”

Not only in the next three months, but in the years following this catastrophe–– as people try to return to their careers and find their jobs gone–– corporate capitalism is going to fail. But is it going to fall? That’s up to us. If we do our jobs, yes. We cannot let our economic system and those who own and employ us, when they try to avoid taking responsibility for their role in this calamity, succeed.