A Reply to Alex Danco: Revisiting MacLeod and the Three Ladders in the Wake of Trump

I will soon be migrating to an alternative site, because WordPress is garbage for running garbage ads without my say; stay tuned.

Eight years ago, I wrote an essay on the three social class ladders that existed in pre-2016 American society. That essay disappeared due to a confluence of factors not really worth getting into, but I’ve been asked more than once to revisit it, in the wake of recent changes in our society. I do have strong thoughts on how that article has aged. At the time, I was unduly sympathetic to my native social class, the Gentry. This blinded me to something I had begun to suspect, and that Alex Danco articulated– that a sociological “middle class” is a comfortable illusion, a story capitalist society tells itself to mask its barbaric nature, performing a similar function to the notoriously clueless middle manager, Michael Scott.

The MacLeod Model

Around the same time as I wrote the three-ladder essay, I also analyzed the three-tiered MacLeod model of the modern business corporation, whereby each layer is assigned an uncharitable label: regular workers are Losers, middle managers are Clueless, and Sociopaths sit in the executive suite.

How accurate is the MacLeod model? Its origin is a satirical cartoon, but it accurately describes how the tiers of an organization are perceived, with one exception: Clueless middle managers generally don’t see their bosses as Sociopaths. I would not go so far as to attribute job labels to individuals. Taking a middle management position may be a wise career move (not Clueless). While the corporate system is evil, most executives are not literal psychopaths (or sociopaths) and most Sociopaths don’t become upper management (not enough spots). Laborers are, of course, economic Losers (as in “one who loses”) but are not “losers” in the U.S. pejorative sense of the word (meaning, “one without redeeming qualities”). It would be reductive and inflammatory to suggest that people’s true natures are indicated by their company-assigned, social-class-derived job positions. That said, the MacLeod model is entirely true when it comes to the expectations put on a role. Regular workers are expected to Lose– to generate surplus value for owners, to be discarded when no longer useful to their bosses, and not to complain about the fact. Executives, though some are individually decent human beings, exist to enforce the Sociopathic will of companies whose sole purpose is private enrichment. As for middle managers, their purpose is indirection, obfuscation, and deception. They are hired to conceal upper management’s true attitudes and intentions toward the regular workers, and to function as “true believers” in the company’s misleading, manipulative claim of standing for something more than private greed– that is, to propagate false consciousness (Cluelessness). Such a person need not herself be Clueless like Michael Scott, but it seems to help.

The separation between rationally-disengaged “Losers” and Clueless true-believers isn’t always well-defined, nor is it easy to find on an org chart, but the line separating MacLeod’s Clueless from Sociopaths is well-defined– it’s the Effort Thermocline, or the level in an organization where jobs become easier, rather than harder, with increasing rank. C-Words work less than VPs, who work less than Directors; but front-line managers work far harder than their charges for not much additional pay or respect. It is this way by design. A two-tiered corporation without a barrier between the overpaid, lazy, self-dealing executives and the exploited grunts would collapse under the weight of class resentment. Three-tiered MacLeod organizations prevent this by loading the level just under the Effort Thermocline with the drunkest of the Kool-Aid drinkers, the truest of true believers who will thanklessly and indefatigably clean up the messes made by the rationally-disengaged wage workers below them and by the self-serving, impulsive children above them. This turns class envy into a distant abstraction rather than a source of daily friction, because the tiers do not envy their immediate neighbors. Ground-level workers see their bosses working three times as hard for 20 percent more pay. Middle managers mistakenly (Cluelessly) construe their own superiors as more-successful, aspirational versions of themselves and believe (mistakenly, almost always) they’ll be invited to join the executive club if they just prove themselves a little harder.

The three-tier organization seems dysfunctional, bloated, and wasteful; but it’s far more stable than a two-tier business and therefore it tends to be a natural attractor for companies that exceed 50 people.

The Middle Class: “I can’t be Clueless because I know what Cluelessness looks like and I’m not it!”

In the early 2010s, I believed a lot of things that weren’t true. I bought into neoliberal, technocratic capitalism. Google sounded like a “workplace utopia” and so I applied to work there (and got in, and did work there, and learned a lot of what’s here). I also bought in to the Silicon Valley myth under which venture-funded startups, being “not corporate”, were exceptions to the invariable mediocrity of the MacLeod organization. Spoiler alert: I was wrong about nearly everything, on that front.

Anyway, in 2013, I would have staunchly disagreed that the cultural middle class, the educated Gentry, performed the function of the Clueless in a MacLeod organization. “I’m not Clueless at all”, I would have cluelessly said. Here’s what my argument would have been: large organizations fall into the MacLeod pattern because they have ceased to have a real mission– once they serve no purpose but private enrichment (often at worker and customer expense) they must develop group coping mechanisms that, while conducive to dysfunction, prevent class resentments from generating even more lethal dysfunctions.

I would have said that society serves a purpose; ostensibly, it does. Many of us get warm fuzzies when we see “our” colorful rectangles in triumphant contexts, such as next to the best on a list of numbers after an international sporting event. We want to believe that our communities– families, cities, nations, the world– are on the side of Good. We understand that the dreaded “corporations” mostly implement upper-class rent-seeking… but we think more highly of municipalities, of countries, of people united by religion or language or (at broadest) by the fact of being human. In this vein, I would have said that the MacLeod analysis does not apply at the macro level. I would have been wrong.

At the time, while knowing that old-style “corporations” were bad, I bought into the Silicon Valley mythology in which new-style “startups” would replace those and (of course!) invest the profits of automation into a better, richer world where life was better for all of us. Therefore, I believed Marx’s two-class, adversarial depiction of society to be false, inappropriate to a technological society with high economic growth. (“A middle class exists. QED, you are wrong, prole.”) In fact, I was the one who was wrong. The new-style corporate elite turned out to be even worse than the old one. Social and economic changes in the past decade have largely proven Marx right.

To do Marx justice, we must note that Marx did acknowledge a middle class’s existence: he wrote on the petite bourgeoisie, the small business owners and independent professionals. He predicted, correctly, that they would be losers in the ongoing class war– that machinations of the politically-connected, mostly-hereditary haute (or “true”) bourgeoisie would push them to the margins and, eventually, throw them into the proletariat. Marx did not loathe the petite bourgeoisie and he did not overlook their existence– he simply recognized them as powerless relative to market forces and the movements of history. What they gain through innovation and comparative advantage, they lose over time to the superior political and economic power of the real elites, who never compete fairly.

We could argue endlessly about the nature of the middle class(es?). Are highly-paid corporate workers hautes-proletarians or petites-bourgeois? Do the cultured-but-poor in traditional publishing outrank the wealthy-but-tasteless rednecks of Duck Dynasty? No one knows, and it probably suits the bourgeoisie that no one knows, because it keeps people from getting envious. If everyone thinks he’s at or above the 95th percentile of his own idiosyncratic class ranking, then no one’s angry. This, in effect, amplifies the purpose of the middle (Clueless/Gentry) layer: preventing class resentments felt by workers toward owners from reaching a boiling point.

One thing I missed in the early 2010s is that there is (or was) probably more than one Gentry. It seemed natural to privilege a certain “blue state”, limply-liberal, New Yorker-reading Gentry over the megachurch pastors and talk-radio hosts, but this was intellectually errant on my part. A “red state” Gentry certainly existed then, and while I could point out its cultural and intellectual shortcomings, those are equally numerous (if different in flavor) in the “blue state” gentry. A pox on both houses.

Gentry failures

A Gentry can fail, and indeed it is probably the destiny of all of them that they will. The 2010s was the decade in which the U.S. Gentry (Gentries?) failed. Whether and when “the middle class” collapsed is a matter of debate, because no one can agree on what “middle class” is. The income spectrum will always have a middle because that’s how mathematics works, but sociological class (which represents the ease with which one gets income, not income itself) relations evolved in a number of ways, confirming Marx’s thesis that only one class distinction — the perpetual struggle between owners and workers– really matters.

In the 2010s, we saw extreme devaluation of the cultural armor (mostly, educational credentials) by which the middle class defines itself. The middle-class theory-of-life is that one does not need substantial capital (at a level almost always inherited) to live a dignified and comfortable life, so long as one possesses intangibles (skills, contacts, credentials) that ensure reliable employment. Recent years have falsified this: employment is no longer ever reliable; and it is increasingly likely, due to technological changes favoring the upper class at worker expense, to be undignified. Due to automation, which would be desirable if the prosperity it generates were distributed justly, hard skills seem to be losing their market value at a rate of about 5% per year. The same is occurring for nearly all educational credentials: I know college graduates who work in call centers, and I know PhDs in five-figure “data science” jobs a high schooler could do. This leads to outrageous demand for the small number of universities that still have the social capital to make and fix careers. Tuition costs are rising not because the product of higher education has improved (it hasn’t) but due to the desperation of the former middle class. People are panic-buying lottery tickets where the prize is “connections”– that is, admission into the sociological upper class, from which upper-class incomes attained via corruption usually follow– and, for most of them, it won’t pan out.

In what way, when a middle class ostensibly exists, are there “really” only two classes? I think Michael Lopp, in Managing Humans: Biting and Humorous Tales of a Software Engineering Manager, describes the typical business meeting aptly when he says that, in a discussion of any importance, there will be two groups of people, “pros” and “cons”. I don’t much like this terminology– “pro” has positive connotations (professional) and “con” has negative ones (confidence artist)– so I prefer to go with a terminology that feels more value-neutral. I’ve assigned these categories colors, “Blues” and “Reds”. Blues (Lopp’s pros) are the people who, if nothing happens in the meeting, will win. Existing processes suit them fine, they have management’s favor, and they’re usually only in the meeting for a show of politesse. (They rarely, if ever, change their minds.) The Reds are the ones who have to convince someone of their rectitude. They’re the ones who want to introduce Scala to a Java shop, or to exclude their divisions from a boneheaded stack-ranking process. Reds lose if nothing happens. They start the meeting out-of-the-lead and, if they don’t do a good job making their case, not only will their idea be rejected, but they will be resented personally for wasting others’ time.

I hate that I am giving this advice, but one who seeks corporate survival should always side with the Blues. It sucks to say this, because as a general rule, Reds are better humans. Blues are smug jerks with their arms crossed, whereas Reds are impassioned believers prone to bet their jobs (when they do, almost always losing such wagers) on what they consider right. In a better world, Reds would get more of a chance, but if you want to maximize your expected yield from Corporate, always go with the Blues. In the unlikely event that the Reds start winning (and become the new Blues) you will have plenty of time to change sides.

Reds exist to be listened-to, but ignored. Their purpose is to let the company say it “welcomes dissent” and “listens to its employees” and “goes with the right idea, regardless of hierarchy” even though it, in reality, will always go ahead and do exactly what the higher-ups already wanted to do. If a Red knows her role and accepts her inevitable defeat with grace, she probably won’t lose her job; but given that the corporate purpose of Reds is for their ideas to be rejected, why chance being one at all? Reds who fulfill their ecological purpose do not get fired– that only happens to those who believe in their rejected ideas too much and make enemies– but it never helps one’s reputation to have an idea shot down– in Corporate, no points are scored for losing gracefully– so it would have been savvier for a Red to have put her personal beliefs aside and thrown in with the Blues.

On a corporate controversy, such as whether to allow Scala in a Java codebase, one has the liberty of choice. Insincerity is not only facile, but pretty much expected. One whose conscience and knowledge pull Red can, nevertheless, join the Blue team and share in the victory. Most of these issues have low moral stakes (tabs versus spaces) and a single worker’s vote does not really matter anyway.

This is not the case in the macro society. You can’t actually join the Blue team, the team that wins if nothing happens. Capital has an advantage: it can wait, but workers have the humiliating daily need to beg a boss for money so they can eat and pay rent. Capital is the Blue team– the wealthy win, if nothing happens. Labor is inflexibly Red. If there is no demand for our work– if there is no factor within the universe that makes it acutely painful for someone to choose not to hire us– we starve.

The above is the only meaningful class distinction in American society. Not your college major or the rank of your undergraduate institution. Not your tiny but “classy” apartment in a fashionable neighborhood that you can barely afford. Not your “connections” to people who might know your work product is good, but who would choose their prep-school buddies over you for a slot in a lifeboat. Under capitalism, what determines the entirety of who you are in this society is one thing, and that one thing is whether time and inertia are on your side. There are only two social classes and most of us are in the lower one, the proletariat. Our day-to-day survival depends on our ability to serve a class of people who consider themselves a superior species, and who view us as contemptible, begging losers.

The raw, two-class truth of society is depressing, and so both the upper and lower class work together to create the appearance of a more nuanced society, with three or five or more social classes, and in times of relative class peace it is easy to believe such structures have meaning and are stable. We want to believe in “middle class values” and many of us have an uninspected desire to be middle class, to believe that we are such a thing. That’s deeply weird to me, because to acknowledge oneself as “middle class” is to assign validity to class in the first place. And what is class? Social class is the process by which society allocates opportunities based on heredity and early-life circumstances rather than true merit, and so by its construction it is unjust. To say “I’m middle class” with glee is to take simultaneous pride in (a) being allocated better career options than other people for no good reason, and (b) at the same time, not getting “too many” unfair advantages and therefore not deserving to feel bad about them. Still, it seems to support the short-term psychological health of a society for it to be allowed to believe that such a thing as “middle class” exists.

In the 2000s, the U.S. Gentry began to fail on its own terms; to analyze why, we have to understand its purpose. A starting point is to inspect a telling bit of Marxist vocabulary, our name for the dominant, enemy class– the bourgeoisie. Though today we use it to describe the upper class, the original meaning of the word was the medieval middle class: the urban proto-capitalists. This is not an instance of idle semantic drift. Rather, Marxists correctly note that while the true bourgeoisie is a tiny upper class, bourgeois values are what society tells the upper ranks of the proletariat “middle class values” are supposed to be. In other words, bourgeois culture (false consciousness) is created to define the middle class, by and for the benefit of the upper class. It is also in the creation of this middle class that society is encouraged to define itself as other-than-commercial.

In a society like ours, the upper and lower classes have more in common with each other than either has with the middle class. The upper and lower classes “live like animals”, but for very different reasons. The upper classes are empowered to engage their primal, base urges; the lower classes are pummeled with fear on a daily basis and regress to animalism not out of moral paucity but in order to survive. People in the lower class live lives that are consumed entirely by money, because they lack the means of a dignified life. Those in the upper class, likewise, experience a life dominated by money, because maintaining injustices favorable to oneself is hard work. So, even though the motivations are different (fear at the bottom, greed at the top) the lower and upper classes are united in what the middle class perceives as “crass materialism” and, therefore, have strikingly similar cultures. Their lives are run by that thing called “money” toward which the middle classes pretend– and it is very much pretend– to be ambivalent. The middle classes are sheltered, until the cultural protection, on which their semi-privileged status depends, runs out.

The “middle-est” of the middle class is the Gentry. Here we’re talking about people who dislike pawnbrokers and stock traders alike, who appear to lead a society from the front while its real owners lead it from the shadows. This said, I have my doubts on the matter of there being one, singular Gentry. I would argue that corporate middle management, the clergy, the political establishments of both major U.S. political parties, TED-talk onanist “thought leaders” and media personalities, and even Instagram “influencers” could all be called Gentries; in no obvious or formal way do these groups have much to do with one another. Only in one thing are they united: by the middle 2010s it became clear that both the Elite (bourgeoisie) and Labor (self-aware proletariat) were fed up with all these Gentries. Starting around 2013, an anti-Gentry hategasm consumed the United States, and as a member of said (former) Gentry I can’t say we didn’t deserve it.

Technology, I believe, is a major cause of this. Silicon Valley began as a 1970s Gentry paradise; by 2010, it had become a monument to Elite excess, arrogance, and malefaction. Modern technology has given today’s employers an oppressive power the Stasi and KGB only dreamt of. The American Gentry was a PR wing for capitalism when it needed to win hearts and minds; but with today’s technological weaponry, the rich no longer see a need to be well-liked by those they rule.

For a concrete example, compare the “old style” bureaucratic, paperwork corporation of the midcentury and the “new style” technological one, in which workers are tracked, often unawares, down to minutes. The old-style companies were hierarchical and feudalistic but, by giving middle managers the ability to protect their underlings, ran on a certain sense of reciprocated loyalty– a social contract, if you will– that no longer exists. The worker agreed not to undermine, humiliate, or sabotage his manager; the manager, in turn, agreed to represent the worker as an asset to the company even when said worker had a below-average year. All you had to do in the old-style company was be liked (or, at least, not be despised) by your boss. If your boss liked you, you got promoted. If your boss hated you, you got fired. If you were anywhere from about 3.00 to 6.99 on his emotional spectrum, you moved diagonally or laterally, your boss repping you as a 6.75/10 “in search of a better fit” so you moved along quickly and peaceably. It wasn’t a perfect system, but it worked better than what came afterward.

I’ve worked in the software industry long enough to know that software engineers are the most socially clueless people on earth. I’ve often heard them debate “the right” metrics to use to track software productivity. My advice to them is: Always fight metrics. Sabotage the readings, or blackmail a higher-up by catfishing as a 15-year-old girl, or call in a union that’ll drop a pipe on that shit. Always, always, always fight a metric that management wishes to impose on you, because while a metric can hurt you (by flagging you as a low performer) it will never help you. In the old-style company, automated surveillance was impossible and performance was largely inscrutable and only loyalty mattered– your career was based on your boss’s opinion of you. It only took one thing to get a promotion: be liked by your boss. In the new-style company, devised by management consultants and software peddlers with evil intentions, getting a promotion requires you to pass the metrics and be liked by your boss. In the old-style company, you could get fired if your boss really, really hated you. (As I said, if he merely disliked you, he’d rep you as a solid performer “in search of a better fit” so you could transfer peacefully, and you’d get to try again with a new boss.) In the new-style company, you can get fired because your boss hates you or because you fail the metrics. The “user story points” that product managers insist are not an individual performance measure (and absolutely are, by the way) are evidence that only the prosecution may use. This is terrible for workers. There are new ways to fail and get fired; the route to success is constricted by an increase in the number of targets that must be hit. The old-style hierarchical company, at least, had simple rules: be loyal to your boss. Having been a middle manager, I can also say that the new-style company is humiliating for us– we can’t protect our reports. You have to “demand accountability from” people, but you can’t really do anything to help them.

This, I think, gives us a metaphor for the American Gentry’s failure. Middle managers who cannot protect their subordinates from the company’s more evil instincts (such as the instinct to fire everyone and hire replacements 5 percent cheaper) have no reason to expect true loyalty. They become superfluous performance cops and taskmasters, and even if they are personally liked, their roles are justifiably hated (including by those who have to perform them.)

The Elite seems to allow, and Labor to tolerate, the elevation of a subset of proletarians into the “Gentry” because it concocts intellectual justifications for the Elite, while at the same time telling Labor that it will not tolerate extreme greed or political fascism from above. It leans left-of-center, functioning as controlled opposition, since its real purpose is to define how far left a person is allowed to go before being accused of “class warfare”, and it uses “both sides” argumentation to justify Elite predation (“you, too, would do it if you had the means”) and to vilify genuine proletarian activism. The Gentry, finally, teaches capitalism how to be human– that is, it trains the machine to ape emotions like concern for the environment and genuine empathy toward workers whose sustenance “could no longer be afforded” due to “shareholder demands” and “the market rate” for executive “talent”.

Three things happened in the first decades of the 21st century to accelerate the Gentry’s demise.

First, Labor grew rightly sick of us. We were no longer the professors marching with civil rights activists; we became the pseudo-academics in think wanks (typo preserved) justifying corporate downsizing and forever wars. We were no longer the middle managers protecting their jobs and wages from overpaid psychopaths looking for “fat” to cut and “meat” to squeeze– instead, we were the time-studiers and software programmers helping the psychopaths figure out precisely which jobs and hours to cut. We sold Labor out before they did anything to us– they were right to tell us to sod off.

Second, the Elite decided the Gentry was too expensive to let live. Labor of-color suffered declining living standards in the 1970s and ’80s in the first wave of deindustrialization, and “the white working class” suffered in the 1990s and 2000s. We, in the Gentry, could decry this from a distance because of our cultural armor. We weren’t laborers compensated for the market value of our work– we were special professionals paid well and respected for who we were. Ha! It turned out that we were not immune to the market forces that drive the (divergent downward, by nature) labor market. There still are middle managers and op-ed columnists and think-tankers… but they are gone as soon as they stop carrying Capital’s black bags. I know this from bitter personal experience, having been “de-platformed” as a result of some relatively mild criticism of our economic system.

Third, we did it to ourselves. We indulged in cannibalism. We in the Gentry– especially the technology Gentry, which has been for quite some time the worst one– got so fixated on our own (relatively meaningless) individual intelligence that we became collectively stupid. As a result, we’ve been emasculated. When our employers began to impose rank-and-yank (or “stack ranking”) policies on us, we should have unionized, but we smugly assumed we wouldn’t be affected– “I’ll never be in the bottom 10 percent of anything”– and so we let the rat bastards divide us among ourselves. The limply-liberal left is as guilty as the right on this; rather than demand genuine social and economic progress for people in disadvantaged groups, we indulged in a virtue-signaling purity-testing cancel culture where people who said stupid things ten years ago get drummed out of the (dying anyway) Gentry.

Capitalist society allows the Gentry to exist for the purpose of cultural self-definition that obscures the machinations of the corporate system. Whether you like or hate the Soviet Union, the bare fact is that the Beatles did more to take down the Berlin Wall– to win the cultural war against the USSR– than MBA-toting synergizers (who are just a more expensive version of those hated Soviet bureaucrats) ever did. A society’s PR always comes from the middle of its socioeconomic spectrum. The upper class, which controls all the important resources and does no real work, tends to harbor so many moral degenerates that it’s best to conceal them. The lower classes are deprived of meaningful economic, social, geographical, or cultural choice and therefore inert relative to society’s self-conception; the world’s poor comprise the largest nation there is and it has no vote anywhere. It is the middle classes, then, who are expected to be other-than-commercial, and to operate at an apparent remove from the zero-sum power relationships and dirtbag machinations that actually dictate what goes on in a society such as ours.

Above, I’ve described the functional purpose of the Gentry, at which the current one has failed. Is there a moral case for a nation’s Gentry, though? I think so; but we’ve failed utterly at it. The moral value of a Gentry, and of the national self-definition it enables, is that it can prevent Capital (the Elite) from dividing workers against each other. The 0.01%, being outnumbered, can only rule the 99.99% so cruelly by keeping the proletariat fractious. If the Gentry earnestly believes in a cohesive local, national, or global spirit and cause, it should resist these divide-and-conquer tactics. We, the American Gentry, have failed execrably at this. We’ve allowed Capital to make people who live in “red” states hate the people who live in “blue” ones and vice versa. We’ve let Capital exploit, rather than allow society to move beyond, archaic racial animosities. We’ve let them propagate everything from apocalyptic religious extremism (a “red” flavor of divisiveness) to commercialized sexual perversion (“hookup culture”, a “blue” flavor). All of this, Capital has done to pit working people against each other, and we’ve let them do it. Thus, we deserve (as a class) to die.

This explains the rise of authoritarian leaders like Trump, all over the world. In 2016, Labor gave its vote of no confidence in the Gentry by electing an unabashed Elite parasite. His supporters are not all stupid. They know he hates their guts, and they do not much love him, but they hate us even more– because we’re the ones who promised to protect them from global capitalism, and failed. We were utterly, in the language of organizational dynamics, Clueless.

What’s Next?

Marx was right. If there are stable social classes, there are exactly two of them. The cultural armor of the “middle class” is paper-thin. Education at a competent but unremarkable state university will not prevent someone from being sacrificed to corporate downsizing, and “connections” to people who are themselves unsafe are worth very little. You are not better than a poor person because you buy an expensive brand of candle. In the end, there are just two classes– those who must sell their lives to survive, and those who don’t. That is, there are those who win if nothing happens, and those who starve if nothing happens. There are those whose control of the world’s resources makes them dangerous to others, and there are those who are in danger.

In the American midcentury, nearly everyone identified as “middle class”, whether they were corporate executives or grocery-store clerks. It was common, and perhaps remains so, to equate “middle class” with a certain salt-of-the-earth, virtuous status. As I’ve said, that’s patently ridiculous. Social class is mostly inherited and the rest comes down to random luck. We are not better than those we deem as “lower class”– we’re just luckier. The identification with “middle class” is self-limiting because it seems to tacitly accept that some people will be handed better economic opportunities. To tolerate that those born “higher” get unfair advantages, because one is getting unfair advantages over a larger group of others, should never be cause for moral pride.

When I wrote those earlier essays in the first half of the 2010s, I believed in neoliberal technological capitalism. I’ll spare the reader my own career history, but it failed me. It has failed society, too. Now that I’m older and smarter, I would say that in broad strokes I am a communist. What I mean by this is that the long-term objective of humanity should be a post-scarcity, class-free society with the minimum amount of hierarchy necessary to function. Nation-states seem to be a protectionist necessity today, but their power should diminish over time, and global amelioration of scarcity ought to be the goal. Markets may persist as algorithms for the bulk allocation of resources (command economies performing poorly in this regard) but should never have moral authority to destroy human life. In an ideal world, most people will work (as the need to work is deep and psychological) but the right to refuse work (via universal basic income) must be protected, not only for the benefit of those unable to work, but because it is impossible to have dignified conditions for workers without it in place.

My old three-ladder theory used “Gentry” as a pseudonym for a middle class (or, under US-style class inflation, an upper-middle class) mentality and gave us the tools to identify thirteen distinct classes within our society… from the E1 crocodilians at the very top… all the way down to the Underclass bereft of connection even to the least-regarded of the three ladders. As a descriptive tool for analyzing early-2010s North America, I think the taxonomy had great value. But the Gentry is dead now and I’m not sure we should ache to have it back. Thirteen social classes is too many; three is too many. Zero is the right number. Distinctions of hereditary social class should be abolished and, seeing the atrocious job global capitalism’s current leadership has done with regard to climate change, public health, and economic management, this cannot be done too soon. Corporate capitalism delenda est.

The d8 Role-Playing System

Also posted on my Substack page.

The d8 System is a role-playing game system designed to be mechanically lightweight, so as not to break immersion, and to be modular. It enables experienced GMs (in Dungeons & Dragons parlance, DMs) to run campaigns in a manner similar to what they would use in other systems, but it also provides tools for extension. The long-term vision here is that designers and GMs can build and share modules specific to their preferred gaming styles and genres.

Since only the core mechanics (“Core d8”) have been written and no modules exist yet, d8 isn’t intended for use by inexperienced GMs.

What Is a Role-Playing Game, Anyway?

Ask ten role-players (including GMs) this question, and you’ll get d12 different answers.

I’m a novelist (Farisa’s Crossing, 2021) so I’ve encountered the various approaches to, and theories about, storytelling— they are too numerous to list here. I also studied math in college, so you’d rightly guess that I’ve spent far too much time analyzing game mechanics and their probability distributions. I know the “3d6” distribution by heart (the counts of ways climb the triangular numbers 1, 3, 6, 10, 15, 21, then 25 and 27, and fall back down symmetrically). I’ve also played a lot of RPGs, both electronic and tabletop. The d8 System exists for tabletop play, where the design problems are most interesting. A video game can do (and is expected to do) millions of tiny calculations per second; but a GM must construct the game world and its challenges, to some extent, on the fly. Her players will do things she didn’t expect; she will have to let the world respond in a way that doesn’t break immersion.
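For the curious, that shape is easy to check by brute force. Here is a small Python sketch (purely illustrative; nothing in the d8 System requires it) that enumerates all 216 outcomes of 3d6:

```python
from collections import Counter
from itertools import product

# Enumerate all 6^3 = 216 outcomes of 3d6 and count the ways to reach each sum.
dist = Counter(sum(roll) for roll in product(range(1, 7), repeat=3))
for total in sorted(dist):
    print(total, dist[total], f"{dist[total] / 216:.2%}")
# Counts: 1, 3, 6, 10, 15, 21, 25, 27, 27, 25, 21, 15, 10, 6, 3, 1
```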

Players (and GMs) have a wide variety of tastes. Some want to see battles play out on hex paper and know exactly how much range a trebuchet has. Others prefer deep-character storytelling approaches and only want to know what their avatar in the game— their player character (PC)— is experiencing; they get upset if the GM says, “You lost 5 hit points”, because the character doesn’t know what ‘hit points’ are. Some players want realism; some want plot armor. There’s no right or wrong here; it’s all a matter of taste.

The core of the d8 System is designed to appeal both to statistician-strategists and to holistic in-character gamers— to the extent that both camps can never be satisfied perfectly, such refinements are left to modules. By design, little information is conferred through game mechanics that the characters themselves wouldn’t know. To get specific, numeric data usually take the form of small integers (whole numbers). Almost no character would experience an action as “a 67th-percentile performance” but they would know the difference between 2/fair and 3/good, between 3/good and 4/very-good, etc.

Statistics and systems should only be a small part of the role-playing game experience. Some players and GMs are happy to go “systemless” and let the game be a free-form interactive story. Others want the game to hold fast to a common language— because that’s what a role-playing system is, in a sense: a language— so they know precisely how likely a mounted barbarian is to hit a downed orc at night with a +1 Axe of Retribution. These things need to be resolved, and fairly quickly. Constant calculation, however, can bog the game down. If the GM and the players find themselves in an argument about whether a percentile roll of 78 suffices to hit the man on the privy with a crossbow bolt, versus whether it ought to have taken at least 79, then everyone has lost.

The GM has the hardest job. She’s the worldbuilder and storyteller, which means she has the godlike power to “decide” what happens to the players, even overruling the dice if she wants. This, of course, means that if she’s unskilled and impulsive the game might suck, and the players have the ultimate vote-with-their-feet power to quit a campaign if that happens. Like a novelist, she has to keep a storyteller’s paradoxical combination of unflappable authority and deep humility. Her players will suggest solutions and story arcs she didn’t think of; she’ll have to know when to adapt, and when to overrule them.

GMs have to keep balance between the players, which means keeping their power levels— the scope of what each can do, in the game— balanced. It’s not that much of a problem if the players as a whole are overpowered or underpowered because the GM can adjust the power levels of the challenges they face; it’s much more of an issue if the characters differ wildly in how much they’re able to contribute. Most novels have one main character; in a role-playing campaign, everyone is the main character. The GM must reward clever, skillful play (so long as the skillful play is in character) while punishing bad decisions, brute force, disengagement, and out-of-character moves. If she’s good at her job, and if her players are receptive, they nearly forget that she exists (and that she is wholly responsible for the mess the characters are in) for a few hours, enough to immerse themselves in the story. The GM has to keep the fictive dream, to use John Gardner’s term, going.

The d8 Philosophy: Modular, Non-Judgmental

To be technical, Dungeons and Dragons isn’t “a role-playing game” but an RPG system. Same with GURPS, Fudge, and Warhammer. The game is what happens between the players and the GM; usually a “campaign” that unfolds over several sessions (sometimes, comprising hundreds of hours). Systems have a tradeoff between modularity and specificity. If a combat system is designed for swords, axes, and shields, it’s going to have low utility when applied to modern warfare. Combat systems bring specificity— they give useful information to the GM and players about what can be expected to follow from certain happenings— but reduce modularity: the assumptions they carry specialize them into certain styles and genres of role-playing game, and necessarily exclude others. That isn’t a bad thing; GMs and players benefit from knowing and agreeing on what style of game they are playing.

Core d8, favoring modularity, tries to make no genre or style commitments; that’s left to modules. The core system could be used for medieval high fantasy, but it could also be used for 1920s gangland Chicago, 1997 suburbia during a vampire fruit epidemic, or 23rd-century Budapest. What does that mean, in practice? The lack of specificity given by the core rules means it can be used profitably by an experienced GM who already knows what genre and style of campaign she wants to run, and who has the competence to do it. Novice and intermediate world-builders, on the other hand, will have to rely on d8’s module-writing community (which doesn’t exist yet, because this is day zero) if they want specificity.

I am tempted to say that the d8 System as given here (“Core d8”) is not an RPG system (like D&D or GURPS) but an RPG system system (system-squared?) It’s a system designed to help people build RPG systems (modules, sets of modules, etc.) It doesn’t tell you how many hit points (HP) a fighter or wizard should have— because it doesn’t decide whether fighters exist in your world, or whether HP exist in your world.

Health systems are a common point of debate, and a great example for me to use in showing the innate tradeoffs one makes when using a more specific toolset. The concept of hit points comes from tactical wargames and was originally a measure of structural integrity: how hard it was to sink a ship, take out a bridge, or raze a castle. In Dungeons & Dragons and related games, hit points are used both for the player characters (PCs) and their adversaries to represent how hard combatants are to kill; they keep battle quick and fun by leaving damage abstract. The ogre hits you; you take 12 damage. A wizard heals you; you recover 17 hit points. This, of course, isn’t a realistic model of mortal danger. People don’t lose “hit points”; they lose blood and skin and fingers and, if they’re really unlucky, vital organs. Any attempt to realistically model medieval life would require a roll of the dice on minor wounds, to see if they become infected. There are GMs and role-players who enjoy this kind of gritty realism; I would guess that they’re in the minority. Anyway, Core d8 doesn’t legislate. It doesn’t propose as canonical a health, leveling, class, combat, magic, or technology system— it doesn’t even mandate that one be used at all. There isn’t just one way to role-play.

Specificity is, nevertheless, important. Before a campaign begins, GMs and players should understand what kind of world is being built. What can happen, and what can’t? If players expecting medieval realism get genre-twisted into a sci-fi romp via time warp, they’re not going to be happy. Character death is probably the biggest non-genre (that is, stylistic) point where agreement should be established. RPGs tend to focus on daring adventures and perilous adversaries, and sometimes the PCs fail. What does death mean? In settings favoring realism, it means: the person’s life ends, as it does in the real world. (The player isn’t eliminated, but creates a new character. Death isn’t necessarily losing, and it’s not to be “punished.” It is part of the game.) On the other hand, in fantasy settings, death can be a minor inconvenience, to be expected on a routine jaunt to take Tiamat’s lunch money, reversible with a mid-level spell. GMs and players need to have some measure of agreement on what death means; but d8 isn’t going to tell them what it must be.

Why Use a System at All?

Role-playing games, as interactive stories, probably predate role-playing systems. And all GMs home-brew their games, to some extent, in that we basically ignore the rules we find annoying. Encumbrance is usually ignored within reason— a STR 6 wizard isn’t going to be wearing plate mail, but keeping track of whether a PC is carrying 19.9 pounds versus 20 is no one’s idea of fun. Most groups likewise assume an implicit “Bag of Holding”, because playing “tetris” with one’s inventory (real or virtual) is no fun either. There are GMs who insist on playing battles out on hex or cartesian graph paper; there are others who keep them abstract. As I said before, some people want to know their precise percentage chance of killing that orcish lieutenant with a +2 Falchion, and others want to get deep into character and will find “THAC0” to break immersion.

Systems, well-used, improve the GM’s and PCs’ understanding of the world they live in, and what the capabilities of the characters (and adversaries) are. They give a sense of objectivity, so the players don’t think the GM is making rules up as she goes along— even though, to some extent, it is part of her job.

Health modeling (to HP or not to HP) is incredibly important, and probably where GMs (and PCs) lean most heavily on their systems. Players need to know how close their characters are to shuffling off the mortal coil. There tend to be two approaches to this. The abstract, hit-point-based system described above is one— in it, damage tends to be inconsequential until a player’s HP reach 0. At the other extreme, with an aim for biomedical realism, one has injury systems that tell players precisely what bodily agony their characters are feeling. In injury systems, characters don’t “lose 6 hit points”; they get fingers blown off and shoulders crushed by lead pipes. HP systems tend to work well in high-fantasy campaigns where combat is common and cannot be avoided; injury systems tend to work better in realistic settings where (as in real life) physical combat should usually be avoided as much as one can.

I’m an old man (well, 37) but I too was once a teenager too smart to believe in hit points. I designed “realistic” but complicated injury systems that were not at all fun to play. Now that I’m older and mountains are mountains again, I have an appreciation for HP systems. They keep combat moving, and for the most part they prevent stochastic permanent damage, which is an asset in a horror setting but unwanted and off-brand in epic fantasy. Are hit points unrealistic? Yes, and that’s the point.

Or, let me be more specific: hit points are not really a health system. What they model, and this is key in understanding the way in which HP are realistic, is the push-and-pull of combat— the fatigue and pain that build up from bodily abuse, the waning ability of a fighter’s range and determination to compensate for a failing body, and the (greatly increased, in real life as in RPGs) capacity of experienced fighters to keep pushing through.

The notion that hit points are wildly unrealistic tends to come from two sources. One is a simple misunderstanding of leveling— while level-10 characters are “mid level” in D&D they are in fact among the best fighters in most worlds. From 1 to 10, each level represents about a factor of 10 in rarity, so merely level-3 characters are in the top-0.1% of adventuring experience, skill, and (plot-armor-slash-)luck. Levels 11 to 15 are superheroes; 16 to 20 are mythic heroes who emerge only in world-stakes conflicts that’ll be discussed for millions of years. In their proper (notably, counterfactual) context, I don’t think D&D’s leveling or its at-high-levels generous HP allotments are unrealistic. The other is a misunderstanding of what 0 HP means. It does strain credibility that a character can be “near death” (1 HP) and still fight at full power— but, under the modern interpretation of hit points, that’s not what it means to be at 1 (or 0) hit point. Zero doesn’t mean certain death, and it doesn’t even have to mean unconsciousness or total incapacitation. It’s the point at which single combat ceases to be a fight (and, if let to continue, becomes a depressing beatdown). 0 HP is the point where the referee of a fighting sport calls the match because the losing combatant can no longer defend himself.

A hit point system doesn’t preclude injuries; in fact, most D&D-style systems have plenty of ways for characters to get grievously injured (or killed) after falling to 0. The simplifying assumption is that injuries won’t happen during the “fair fight” phase (as opposed to the “beatdown” that may occur after one fighter has lost). That’s false, but it’s not that false. In pre-firearm single combat, it was pretty rare for people to suffer mortal wounds during the period in which it was still a two-sided fight. Armor makes it hard to pierce a vital organ. It isn’t chivalrous, but a medieval knight’s most common killing blow was nothing theatrical: a dagger to the throat of a downed or exhausted opponent. Similarly, bare-fisted combat has a low rate of fatal injury if the fight is stopped once properly over, which is why fighting sports have (in comparison with other sports, and of course we are learning about the cumulative effects of seemingly minor injuries) a reasonable safety record.

What 0 HP represents is not character death but the loss of fighting capacity. What happens when all PCs are reduced to 0 HP (“total party wipe”) depends on the motivations of their adversaries. If they’re wiped out by brigands, they can be robbed and “left for dead” but survive, because murder is something even scumbags don’t do lightly. If they fall to some malignant force, like a lich or an indifferent stone giant, the campaign may well be over.

Real-world fights (street assaults) are hideous, depressing affairs. Many are ambushes; people don’t fight fair. If the fight is balanced at all, it usually ends (and may turn into a dangerous beatdown) in a few seconds. Often, the scumbag will win because (a) the scumbag was planning the assault, and (b) non-scumbags very rarely get in street fights after their early teens. This sort of thing isn’t what we want in high fantasy. We don’t want to see ambushes and beatdowns— we want the long cinematic fights where the opponents fight with honor and bravery, and wherein someone can return from the brink of failure (1 HP) to victory. We don’t, in fantasy, want the fights that end because a weapon breaks, or because someone slips in horse shit, or because artillery fire demolishes a combatant’s head, rendering the whole sequence of events moot.

Whether to use HP or an injury system depends on genre and style preferences. Some GMs want to build, and some players want to live in, a gritty dangerous world where a rabid dog or a teenager with a knife is a real threat— a world where getting sucker punched means entering a fight with a disadvantage and possibly for only that reason losing to a worthless, unskilled opponent.

What I think makes people unhappy is when systems try to blend the two. For example, GURPS has a rule by which characters’ physical abilities degrade through general damage— at 33% of their maximum HP, they are bad off. If I were in a fantasy campaign, I would call that a misfeature. Here’s the thing: if it were me battling ogres, I’m sure my physical abilities would degrade after the first blow. However, D&D adventurers are experienced warriors, not regular people like me. The system is calibrated for an immense dynamic range— from level 1 characters who must be played conservatively, because bears and wolves and superior warriors can still destroy them, to level 20 demigods who can punch a dragon in the taint and run— and, seeing as level 1 characters don’t have many hit points at all, I don’t consider its modeling unrealistic. It’s plausible to me that, at high levels, these people can— through adrenaline, rage, and grit— keep fighting until their bodies give out.

As one can see, there’s no single right way to build a role-playing game, a system, or a world. Whether it’s better to use abstract damage (hit points) or a biomedically realistic injury system where every fight is a losing fight, that’s a genre and style concern, and there’s no right answer. In any case, the real objective for the GM in a role-playing game isn’t realism, but immersion. If a mechanic breaks immersion and becomes, rather than a modeling tool, a force unto itself, it should probably not be used.

As systems become more specific (and less modular) they impose constraints. These can focus, direct, and inspire a GM’s creativity, especially if she can trust that the mechanics are sound and well-tested. At the same time, they can stifle creativity and direct player focus to the wrong things— in which case, she should discard them.

Core Mechanics: Yes, It’s All About Dice

GMs of high skill can run campaigns without any random elements such as dice; at that point, it is a “system-free” interactive story. Still, I think most GMs prefer to have randomizers. It helps immersion for players to think their characters are in a game rather than in a world built from one person’s imagination. Randomizers can vary. I once played a short campaign on a hike where we used the 0.01-second digit of a stopwatch. I’ve also seen GMs run campaigns using only tarot cards. Dice, however, are the go-to tool. They’re objectively numerical; as random influences, they help players forget that the game is actually under the GM’s complete control. Sometimes, the dice “speak for themselves” and suggest a course for the story very different from what anyone had intended. These injections of random chance make the game world feel more real or, in literary terms, more “lived in”.

The standard role-playing dice set consists of six sizes of dice, with four (d4), six (d6), eight (d8), ten (d10), twelve (d12), and twenty (d20) sides. Other sizes exist, but those tend to be expensive and unnecessary; it’s rare that role-playing games need to model an exact 1/7 probability (for which a d7 would be used)— typically, 3/20 is close enough. The d100 (or d%, or “percentile roll”) is a commonly called-for roll, but it’s achieved with two d10s of different colors. An actual d100 is nearly spherical and almost impractical to use.
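A note on reading the percentile roll, since conventions vary slightly from table to table: one d10 is designated the tens die and the other the ones die, with a double zero read as 100. A quick sketch of that convention (illustrative only, not part of d8’s core mechanics):

```python
import random

def percentile_roll(rng=random) -> int:
    """Read a d% from two distinguishable d10s (faces 0-9)."""
    tens = rng.randrange(10)   # the designated "tens" die
    ones = rng.randrange(10)   # the designated "ones" die
    value = 10 * tens + ones
    return 100 if value == 0 else value   # 00 is conventionally read as 100

# Example from the text below: an 18% daily chance of rain.
it_rains = percentile_roll() <= 18
```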

The d8 System is designed to run using only 8-sided dice (hence the name). Module writers and GMs are free to incorporate d12s and d20s and tarot cards, but the core system can be run with a handful of 8-sided dice.

The statistical engine of many systems is the “percentile roll”, the d100 synthesized from two visually distinct d10s. If the GM or the module specifies that there must be an 18% chance of rain, each day, in a given setting, then a d% is the way to go about it: 1–18: rain, 19–100: no rain. In general, though, it’s lacking because we don’t think in percentiles. Should “12th percentile” weather in late March in Manhattan be a cold drizzle, or a sleeting horror show? Is a 73rd-percentile ice cream sandwich sufficient to give +1 Morale, or does it have to be 74th percentile to have that effect? People can perceive four to seven meaningful levels of quality in most things, not 100.

Percentile rolls can force GMs to set precise numerical probabilities on events, rather than letting the system, through its modeling capacity, figure out what those probabilities might be. If a regular person has a 20% chance of making a jump, what are the odds for a skilled circus performer? Eighty percent? Ninety percent? How do we adjust the odds if the lights go out, or the performer is recovering from an injury? This sort of thing is the core of what we use role-playing systems to model.

Linearity— A Criticism of d20

What’s wrong with D&D’s d20 System? Objectively, nothing. As I’ve said, there are no absolutes, only tradeoffs— for simplicity it can’t be beat. It’s an improvement on the percentile roll— 20 levels, instead of 100— but it still has the issue of linearity, which means it lacks a certain realism.

Here’s the problem, in a nutshell. As I said, the dice resolve conflicts between the PCs and the environment. When the character wants to do something at which he might not succeed, and the GM decides to “let the dice speak”, it’s called a trial or check; the system is there to compile situational factors and (without requiring advanced statistical training on the GM’s part) find a reasonable answer of the PC’s likelihood to succeed.

Here’s an example that shows the issue with linearity: Thomas has a 50% chance of convincing NPC Rosalind to help him get to the next town. Or, he would; but he’s wearing a Ring of Persuasion, which increases his likelihood to 75%. Additionally, he and Rosalind share the same native language. Thomas, wisely, uses it to communicate with her, and gains the same degree of advantage (50 to 75 percent). If he has both advantages, how likely is he to succeed in getting Rosalind to help him out?

A linear system like d20 says: 100 percent. Each buff is treated as a +5 modifier, or a 25-percentage-point bonus. They combine to a +10 modifier, or a +50% bump, and Thomas is guaranteed to succeed. Is that accurate? I’ve modeled these sorts of problems for a living, and I would say no. What is the right answer? Well, models vary, clustering around the 90% mark, and I’d consider any number between 87 and 94 defensible— and since gameplay matters more than statistical perfection, I’d also accept 85 (17/20) and 95 (19/20) percent. At any rate, the difference between 50% and 75% is about the same as that between 75% and 90%, which is about the same as that between 90% and 96%.
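Where does that ~90% figure come from? One simple way to get it, and this is only an illustrative model rather than anything Core d8 prescribes, is to combine advantages on a log-odds scale instead of adding percentage points. Each Ring-of-Persuasion-sized advantage then multiplies the odds by three, so two of them take even odds (1:1) to 9:1, which is 90%:

```python
from math import exp, log

def logit(p: float) -> float:
    """Probability -> log-odds."""
    return log(p / (1 - p))

def inv_logit(x: float) -> float:
    """Log-odds -> probability."""
    return 1 / (1 + exp(-x))

base = 0.50                          # Thomas's unaided chance with Rosalind
buff = logit(0.75) - logit(base)     # one advantage is worth ~1.1 in log-odds

print(inv_logit(logit(base) + buff))      # 0.75: one advantage
print(inv_logit(logit(base) + 2 * buff))  # 0.90: both advantages, not 1.00
```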

The fact that linear models “are wrong” isn’t the worst problem. If a player gets a free pass (100%, no roll) on what “should be”, per statistical realism, a 90% shot, that’s not a big issue. No one’s going to feel cheated because he didn’t have to roll for something he had a 90% chance of pulling off anyway. If it goes the other way, turning what ought to be a 10% chance into zero, that’s more of an issue— it is the system rather than the game (the play, and the dice, and the reasonably-estimated difficulty of the task being modeled) that is making an action impossible. And that’s not ideal. Even still, though, the biggest problem here is that, because two “+5 modifiers” stack to turn a 50/50 shot into a certainty, or an impossibility into a 50/50 shot, we rightly conclude that +5 modifiers are huge. Then, most of the situational modifiers used and prescribed by the literature are going to be smaller, more conservative ones— ±1 and ±2— to avoid generating pathological cases. But in the mid-range (that is, when the probability is right around 50%) these modifiers are so tiny, almost insignificant, that they become inexpensive from a design perspective. This encourages a proliferation of nuisance modifiers and rules lawyering and realism disputes. Should that pesky -1, from a ringing in a PC’s ear, really turn a 10% chance into a 5 percent chance? Shouldn’t the GM see the unfairness of this, and waive the -1 modifier? Maybe this needs to be a Sensory Exclusion check— now, do we use INT or WIS? And so on. The system’s quirks intrude on role play.

It’s better, in my view, to have infrequent modifiers; when they exist, they should be significant. If they’re not substantial, let the system ignore them. The failing of d20’s linearity is that it’s coarse-grained at the tails where we actually benefit from the understanding a fine-grained system gives us— there’s a difference between a 95th percentile and 99th percentile outcome, in most cases— but fine-grained in the middle where we don’t need that precision. A “+1 modifier” is major at the tails, imperceptible in the midrange… which means we lack an intuitive sense of what it means.

GURPS uses 3d6 instead of d20. This is an improvement, because the system is finer-grained at the tails and coarser in the middle where (as explained) we don’t need as much distinction— 3 (in GURPS, low is good) is “top 0.46%” whereas 10 is “50th–63rd percentile”. Fudge, in the same spirit, uses 4dF, where a dF is a six-sided die with two sides each marked [+], [ ], and [-], corresponding to values {-1, 0, 1}. Notably, it gains an aesthetic advantage (for some) of making results visible without calculation. Cancel out the +/- pairs (if any) and what’s left is the result. Fudge also eschews raw numbers in favor of verbal descriptions: a Good (+1) cook who has a Great (+2) roll will produce a Superb (+3) dish; if she has a Mediocre (-1) roll, she’ll produce a Fair (0) one.

The linearity of d20 comes from its core random variable being sampled from a (discretized) uniform distribution, thereby assuming that the “nudge” it takes to turn a 50th-percentile performance into a 75th-percentile one is the same as what’s required to turn a 75th-percentile performance into a 100th-percentile (maximal) one. That’s false, but the falsity isn’t the issue because all models contain false, simplifying assumptions. Summed-dice mechanics (3d6 or 4dF instead of d20) give us something closer in shape to a Gaussian or normal distribution, and in some cases that’s the explicit target. That is, the designers assume the resolved performance P of a character with ability (or skill, or level) S shall be: P = a*S + b*Z, where a and b are known constants and Z is a normally distributed random variable. It’s not all that far off; one can do a lot worse. That said, I think it’s possible to do a lot better.

What’s wrong with a normal distribution? For one thing, it’s not what we’re getting when we use 3d6 or 4dF. Those mechanics are bounded. If you’re a Mediocre (-1) cook, you have a zero-percent chance of producing a dish better than Superb (+3). For food preparation, that seems reasonable, but is the probability of a Mediocre archer hitting a dragon in the eye really zero point zero zero zero, or is it just very small? Again, if the system “behind the scenes” makes things that should be improbable, improbable, that’s not an issue— but the system shouldn’t be intruding by making such things impossible. One fix to this problem is to say that certain outlier results (e.g., 3 and 18 on 3d6, -4 and 4 on 4dF) always succeed or fail, but the system is still intruding. Another fix is chaining: on a maximal (or minimal) result, roll again and add. So, +4 (on 4dF) followed by another +4 is +8. Okay, but can chaining make things worse— can +4 followed by -4 make a net 0? If that’s a possible risk, can players choose not to chain?

The boundedness itself isn’t the real problem, though. The actual Gaussian distribution isn’t bounded— a result 4 or 6 or 8 standard deviations from the mean is theoretically possible, though exceedingly unlikely— but it still isn’t what we want for gameplay; its tails are infinite but extremely “thin”.

Fudge can have what I’ve heard called the “Fair Cook Problem”. For this reason, many players prefer to use 3dF or 2dF. With 4dF, it is possible for a Fair (0) cook to produce a Legendary (+4) dish, but he is equally likely (1/81) to produce an Abysmal (-4) dish and make everyone sick. At 1-in-81, we’re talking about rare events, so that’s not much of a concern on either end; but 4dF also means 5% (4/81) of his dishes are Terrible and 12% (10/81) are Poor. That’s more of a problem. We wouldn’t call someone with this profile “a Fair Cook”. We’d fire him, not because he occasionally (1/81) screws up legendarily— we all have, at one thing or another— but because of his frequent, moderate screw-ups. At the same time, if we drop to 2dF, we lose a lot of the upside variation that makes RPGs exciting— 77% of the rolls will be within one level of his base (plus or minus modifiers) so why don’t we just go diceless? Using 2dF imposes draconian conditions on what can happen and what cannot— the system is deciding— whereas 4dF lets the dice speak but they get loud and never shut up.
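
The figures above are easy to check by brute force; here’s a short sketch that enumerates all 81 outcomes of 4dF.

```python
# Enumerate all 3^4 = 81 outcomes of 4dF and tabulate the results.
from itertools import product
from collections import Counter
from fractions import Fraction

counts = Counter(sum(roll) for roll in product([-1, 0, 1], repeat=4))

for result in range(-4, 5):
    p = Fraction(counts[result], 81)
    print(f"{result:+d}: {p} ({float(p):.1%})")

# -4 and +4 each come up 1/81 (~1.2%); -3 comes up 4/81 (~5%); -2, 10/81 (~12%).
```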

For this reason, I advise against using the Gaussian distribution for the core mechanic of one’s role-playing system. It’s too thin-tailed. Although outliers are rare by definition, we need to feel like they’re possible, which means they need to happen sometime. What we don’t want are frequent moderate deviations (Poor dishes from Fair cooks) that muck up the works and turn the game into a circus. In technical terms, we want kurtosis; we probably also want some positive skew.

In addition to this observation, the real-world normal distribution is continuous and its natural units (standard deviations from the mean) feel bloodless. Is “0.631502 standard deviations below average” meaningful? Not really. It has the same problem as the percentile roll. I just don’t know what “a 31st-percentile result” is supposed to mean. As I said, we can only distinguish about seven levels of quality among outcomes— and, in most cases, fewer than that. We don’t want to think about tiny increments. Whatever our “points” are, we want them in units that matter: not that the fisherwoman had a 37th-percentile (or 12, or -1) day, but how many fish she caught. No fish, one fish, or two fish? Red fi— never mind. Fish. The French word for fish is… what again? Poisson.

The “Poisson Die”

Here are the design constraints under which I built the d8 System:

  • (i) the core random variable must be implementable using a small number of regular (d4, d6, d8, d10, d12, d20) dice and simple mental arithmetic. No immersion-breaking tables or calculators.
  • (ii) the output should have a small number of levels that represent discrete qualitative jumps; not the 16 levels (3–18) of 3d6 or 100 of d100.
  • (iii) the system should be unbounded above. Except when there’s a character-specific reason (e.g., disability, moral vow) that a PC cannot do something, there should be a nonzero chance of him achieving it, even if the task is ridiculously hard. (Probability, not the system, should limit the PC.)
  • (iv) chaining, or the use of further dice rolls for additional detail on extreme results (e.g. “roll d6; on 6, roll again and add the result”) is permissible upward, but not downward. Chaining can improve a result or leave it unchanged; it can be used to determine how well someone succeeded, but not how badly he failed (“roll of shame”).
  • (v) it should be easy to correlate a performance level to the skill level of which it is typical. This is something Fudge does well: a Good (skill level) cook will, on average, produce Good (result level) food.

How do we meet these criteria? (Here’s some technical stuff you can skip if you want.) Between (ii) and (iii), there seems to be a contradiction; (ii) wants us to have “a small number of” discrete separable qualitative levels, and (iii) demands unbounded possibility upward. This isn’t hard to resolve: we can have an infinite number of levels in theory, so long as the levels are meaningfully separate— lowest (“0”) from second lowest (“1”), “1” from “2”, and so on. The infinitude of possibilities isn’t a problem as long as 10+ aren’t happening all the time. This favors a distribution that produces only integers, which is also a good match for dice, which produce integers.

The Poisson distribution models counts of events (which could be “successes” or could be “failures”— it does not care if the events are desirable). Poisson(1) is the distribution of counts of an event during an interval X if it happens once every X on average. If lightning strikes 2 times per minute, the distribution governing a 15-second interval will be Poisson(0.5) and that governing a 60-second interval will be Poisson(2).
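
For the statistically inclined, the Poisson probability mass function is Pr(X = k) = lambda^k * e^(-lambda) / k!, and the lightning example works out as follows. Nothing here is specific to the d8 System.

```python
# The standard Poisson pmf, Pr(X = k) = lam**k * exp(-lam) / k!, applied to the
# lightning example: a 15-second window (lam = 0.5) and a 60-second window (lam = 2).
from math import exp, factorial

def poisson_pmf(lam, k):
    return lam ** k * exp(-lam) / factorial(k)

for lam in (0.5, 2):
    print(lam, [round(poisson_pmf(lam, k), 3) for k in range(5)])
# lam = 0.5: [0.607, 0.303, 0.076, 0.013, 0.002]
# lam = 2:   [0.135, 0.271, 0.271, 0.18, 0.09]
```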

For an integer m, a Poisson(m) distribution produces m on average, so we can naturally correlate skill and result levels. If a character of skill 4 gets a Poisson(4)-distributed result, then we know that a result of 4 is average at that level. They also sum nicely: if you add a Poisson(m) variable and a Poisson(n) variable, you get a Poisson(m+n) variable, which means that statistically-minded people like me have a good sense of what’s happening. It also means that, if we can simulate a Poisson(1) variable with dice, we can do it for all integer values.

Finally, the Poisson distribution’s tail decay is exponential as opposed to the Gaussian distribution’s quadratic-exponential decay. This has a small but meaningful effect on the feel of the game. Difficult, unlikely endeavors still feel possible— we can imagine having several lucky rolls in a row, because sometimes it actually happens— so it doesn’t feel like the system itself is imposing stricture.

Can you sample from a Poisson(1) distribution using dice? Not perfectly; the probabilities involved are irrational numbers. For our purposes, the most important probability to get right is Pr(X = 0), which for Poisson(1) is 1/e = 0.367879…; as rational approximations go, 3/8 = 0.375 is good enough. (One can do better using d30s— this is detailed below— but I don’t think the extra accuracy is worth the cost. GMs and players benefit from the feel of statistical realism, but I don’t think they care about Poisson distributions in particular.)

To roll n Poisson dice, or ndP:

  • roll n d8’s. A 4 or higher is 1 point (or “success”); a 7 is double, an 8 is triple.
  • for each 8 rolled, roll it again. On 1-7, no change. On 8, add 1, roll again, repeat.

So, if a player is rolling 4dP and gets {2, 3, 6, 8}, we interpret the result as 0 + 0 + 1 + 3 = 4; we chain on the 8: if we get, say, a 3, we stop and 4 is our result. That’s an average outcome from 4dP, but a 1-in-64 result from 1dP.

For easier readability, you can buy blank d8’s and label them {0, 0, 0, 1, 1, 1, 2, 3}. You’ll typically be rolling one to four of these, so four such dice per player (including GM) is enough.

Here’s a table (graph also on site) that shows how 2dP tracks against Poisson(2). Are there more complicated methods that are more accurate? Of course. Is it worth it, from a gameplay perspective? Probably not. The dP, as described above, does the job.
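
For readers who want to generate that comparison themselves, here’s a sketch that computes the exact dP distribution (with the chained tail truncated at a point where the leftover probability is negligible) and prints 2dP next to Poisson(2).

```python
# Exact dP distribution (d8: 1-3 -> 0, 4-6 -> 1, 7 -> 2, 8 -> 3, chaining on 8s),
# truncated at MAX, convolved with itself, and compared against Poisson(2).
from math import exp, factorial

MAX = 20

def one_dp():
    p = [0.0] * (MAX + 1)
    p[0], p[1], p[2] = 3/8, 3/8, 1/8
    for extra in range(MAX - 2):
        p[3 + extra] = (1/8) * (1/8) ** extra * (7/8)   # an 8, then `extra` more 8s
    return p

def convolve(a, b):
    out = [0.0] * (MAX + 1)
    for i, pa in enumerate(a):
        for j, pb in enumerate(b):
            if i + j <= MAX:
                out[i + j] += pa * pb
    return out

two_dp = convolve(one_dp(), one_dp())
for k in range(7):
    print(k, round(two_dp[k], 3), round(2 ** k * exp(-2) / factorial(k), 3))
# 2dP vs Poisson(2): Pr(0) is 0.141 vs 0.135, Pr(2) is 0.234 vs 0.271,
# Pr(4) is 0.108 vs 0.09. Close enough for gameplay purposes.
```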

Skills

Unlike a board game whose state can be represented precisely, the environment of a role-playing game is usually strictly or mostly verbal, and the game state is a collection of facts about the world that the GM and PC have agreed to be true. Fact: there’s a goblin 10 feet away. (PC stabs the goblin.) New fact: There’s a dead goblin at the PC’s feet. A character sheet contains all the important facts about a player’s character. Erika is 28 years old. Jim is a member of a disliked minority. Sara has eyes of different colors. Mat’s religion forbids him from eating meat.

The facts above are qualitative, which doesn’t make them less important, but they’re not what RPG systems exist to model. GMs and players decide what they mean and when they have an influence on gameplay (if they do). The system itself isn’t going to say what it means that Sara’s eyes are of different colors. It’s the quantitative measurements of characters— what they do; how they match up against each other and the world— that an RPG system cares about. In D&D, a character with STR 18 is very strong while one with STR 3 is very weak; the former is formidable in combat, but the latter can’t pick up a sword.

In the d8 System, these quantitative attributes are all Skills, which range from 0 to 8, but entry-level characters will rarely go above 4. For Skills, 1 represents a solidly capable apprentice who can do basic things without supervision and, in time, can produce solid work; 2 represents a seasoned journeyman; 3 represents a master craftsman; 4 represents an expert of local-to-regional renown. 5 is approaching world class, and 6–8 are very rare.

Skills can be physical (Weightlifting, Juggling, Climbing) or academic (Chemistry, Research, Astronomy) or social (Persuasion, Deception, Seduction) or martial (Archery, Swordsmanship, Brawling) or magical (Telekinesis, Healing, Pyromancy). Each campaign world is going to have a different Skill tree, and a GM can choose to have very few Skills (say, ten to fifteen for a single-session campaign) or a massive array (several hundred), although it’s best not to start with several hundred available Skills among new players.

The more Skills there are, the more specialized and fine-grained they will be. For a coarse-grained system, Survival, Bargaining, and Elemental Magic would be skills. In a more fine-grained system, you’d split Survival into, say: Trapping, Fishing, Camping, and Finding Water. Elemental Magic would become Fireball, Cold Blast, Liquefy Ground, and Move Metal. Bargaining would become Item Appraisal, Negotiation, and Sales Instinct.

Also, things that we assume most or all people in the campaign world can do, do not require Skills. In a modern setting, “Literacy 2” (by a medieval standard) would be assumed, and if someone was well-read they would probably have “Literacy 3”— but we wouldn’t bother writing it down; it can mostly be assumed that a 21st-century American can read and can drive.

As a campaign goes on, and as players do harder and more specialized things, the Skill tree is going to grow. There’s nothing wrong with that. Of course, GMs are going to want to start with a list of basic Skills they think will be useful in the game world. Here’s how I’d recommend doing that: start by listing character classes that would fit the game’s world. That doesn’t mean the GM is committing to a class system— the classes are “metadata” that will be thrown away, allowing players to invest points as they will. Here are twelve archetypes that might befit a typical late-medieval fantasy world.

  • Soldier (swords, spears, armor, defense).
  • Barbarian (axes, hammers, strength).
  • Ranger (survival skills, defensive fighting, animal husbandry).
  • Rogue (thievery, deception, evasion, sabotage).
  • Healer (defensive magic, curative spells, medical knowledge).
  • Warlock (offensive magic, conjuration, elemental magic).
  • Wizard (buffs/debuffs, combination magics, potions).
  • Monk (bare-fisted fighting, “inner” magic).
  • Merchant (social skills, commerce, regional knowledge).
  • Scholar (chemistry, engineering, historical knowledge, foreign languages).
  • Bard (arts, seduction, high-society knowledge).
  • General (oratory, battle tactics, military knowledge).

For some value of N, generate N primary Skills appropriate to each class. For a coarse-grained Skill system, one might use N = 4; for a fine-grained one, consider N = 10. If a skill doesn’t fit into a class, add it anyway. If it fits into more than one, don’t worry about that; these classes are just for inspiration. In general, I wouldn’t worry about the complexities of Skill trees (specialties, neighbor Skills, etc.) for an entry-level campaign.

When circumstances create new Skills, GMs have to decide how to “spawn” them. The population median for most Skills is zero, so most characters won’t have the new Skill at all— but if a player’s back story argues for some exposure, that might make the case for a level. Of course, GMs have to keep player balance in mind while doing this.

As player characters improve, the numbers will increase, but that can be boring. Rolling five dice when you used to roll four is fun, but eventually it’s all just rolling dice. Once players are hitting the 3–5 range, it’s time for the GM to start thinking about specialties. A character can have Medicine 4 and no experience with surgery. We could model a very high-risk surgery as a Difficulty 6 task using Medicine— the player rolls 4dP and has to hit 6 to succeed— but it would be more precise to model it as a Difficulty 3 trial of a harder and more specialized skill: Surgery.

As PCs do harder, more interesting things, the Skill tree may become an actual tree.

There are three ways skills relate to each other. A hard dependency means the parent must be achieved, at each level, before the dependent skill can be learned. When hard dependencies exist, there’s usually a slash in the name of the more specialized skill, e.g., Writing/Fiction or Writing/Poetry. It is impossible for a character to get Writing/Fiction 4 without having Writing 4. Soft dependencies are more lenient: the character’s level in the specialty can exceed that of the parent Skill, as long as there’s nonzero investment in the parent skill— however, the Skill gets harder to improve as the discrepancy grows. Someone could, say, have Medicine 3 and Surgery 4— above-average medical knowledge but fantastic in the operating room— but Surgery 4 (or even Surgery 1) without Medicine isn’t possible. Neighbor Skills do not have a prerequisite relationship but one can substitute (at a penalty) for the other. If a PC has Low Dwarven 3 and has to read a scroll in High Dwarven, he might be able to read it as if he had High Dwarven 1 or 2.

GMs should, as much as possible, flesh out these relationships before characters are created. An entry-level Skill tree isn’t likely to have much specialization, so hard dependencies will be rare, if used at all. Typically, all of the primary Skills are going to be soft-dependent on parents called Aptitudes— in the example I give below, social Skills would be soft-dependent on Social Aptitude, athletic Skills on Physical Aptitude, and so on.

For each primary Skill, the GM should decide:

  • which Aptitude the Skill is soft-dependent on, and
  • each Skill’s Complexity: Easy (-1), Average (-2), Hard (-3), Very Hard (-4), or Extremely Hard (-5).

Complexity doesn’t measure innate difficulty but relative difficulty— how much additional investment is required to learn the skill. Lifting weights isn’t “Easy”— I’m exhausted after a good session at the gym— but I’d probably model Weightlifting as Easy relative to the Aptitude, Strength: it’s not hard for a character with Strength 3 to get Weightlifting 3.

Fungibility is another factor GMs should determine. Let’s say that Weightlifting is Easy (-1) and Rock-Climbing is Average (-2) relative to Strength. If these Skills are fungible, then a character with Strength 4, and no prior investment in either skill, implicitly has Weightlifting 3 and Rock-Climbing 2. If they’re non-fungible, then the character doesn’t, and will be unable to perform the task without prior investment in the skill.

By default, Easy and Average Skills are fungible by their parents (Aptitudes for primary Skills, broader fields for specialties) whereas Hard+ Skills are not fungible. GMs can overrule this on a per-Skill basis— the GM might decide that Surgery is Average (-2) relative to Medicine but non-fungible. Then, while a character with Medicine 4 can get Surgery 1 rather quickly (having mastered the parent Skill) he doesn’t implicitly have it without investing in the Skill.
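
To make the defaults concrete, here’s a small helper that reflects my reading of the rules above (and of the level-modifier section later on): Easy and Average Skills are fungible unless the GM says otherwise, Hard and worse are not, and an implicit level that lands below 1 steps down through ½ and ¼. The function name and the exact sub-1 handling are my own interpretation, not canon.

```python
# A sketch of implicit Skill levels derived from a parent Skill or Aptitude.
# Defaults assumed here: Easy/Average are fungible, Hard and worse are not.
from fractions import Fraction

COMPLEXITY = {"Easy": -1, "Average": -2, "Hard": -3, "Very Hard": -4, "Extremely Hard": -5}

def implicit_level(parent_level, complexity, fungible=None):
    """Level a character implicitly has in a Skill, from its parent alone."""
    if fungible is None:
        fungible = complexity in ("Easy", "Average")     # default rule
    if not fungible:
        return 0
    raw = parent_level + COMPLEXITY[complexity]
    if raw >= 1:
        return raw
    return {0: Fraction(1, 2), -1: Fraction(1, 4)}.get(raw, 0)  # 1 -> 1/2 -> 1/4 -> 0

print(implicit_level(4, "Easy"))                      # Strength 4 -> Weightlifting 3
print(implicit_level(4, "Average"))                   # Strength 4 -> Rock-Climbing 2
print(implicit_level(4, "Average", fungible=False))   # Surgery: 0 without investment
print(implicit_level(4, "Hard", fungible=True))       # Juggling 4 -> Chainsaw Juggling 1
```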

GMs may vary fungibility at the task level. For example, a GM might allow a brilliant-but-unscrupulous (indeed, they sometimes go together) charlatan with Medicine 4 (but no Surgery) to roll 2dP for the task of faking knowledge, but be utterly incapable should he actually have to do it.

Primary Skills (that is, Skills that aren’t specialties of other Skills) are almost always soft-dependent on an Aptitude— these play the role of “ability scores” in other systems, and they function as Skills for learning Skills, but they’re also Skills in their own right. Whether they represent (small-s) skills that can be improved with practice, or immutable talents, is a matter that depends on the GM.

If the GM’s objective is realism, it should be incredibly uncommon for Aptitudes to go more than one level above where they started; but, toward the objective of modularity and optimism about human potential, the d8 System doesn’t prohibit Aptitude improvement.

Here are some ways in which Aptitudes are different from regular Skills:

  • Everyone has them. The population median for a typical Skill is zero. Most people have never been Scuba Diving and most people outside of Germany don’t speak German. On the other hand, nearly all of us use Manual Dexterity and Logical Aptitude on a daily basis. For a typical Aptitude, the population median is 1–2; 1 for people who don’t use it in daily life, and 2 for people whose professions or trades require it.
  • They change slowly. Improving Skills takes a lot less effort than improving Aptitudes. It’s not uncommon for a mid-level character’s top skills to hit 5 and 6; but Aptitudes of 5+ should be very rare (they can break the game).
  • Below 1, fractional values (½, ¼) exist. For other Skills, the lowest nonzero value is 1. It’s just not useful to keep track of a dilettante having “Battle Axe ¼”. On the other hand, for core Aptitudes like Strength and Manual Dexterity, there’s a difference between extremely (¼) or moderately (½) weak and “Strength 0”, which to me means “dead” (or, at least, nearly so).

One converts from D&D’s 3d6 scale roughly as follows: 3: ¼; 4–7: ½; 8–11: 1; 12–14: 2; 15–17: 3; 18–19: 4; 20+: 5+.

What Aptitudes exist in a campaign is up to the GM. Theoretically, a GM could run an Aptitude-free system, in which learning Skills is equally easy (or hard) for all characters. Usually, though, players and GMs want to model a world where people have different natural talents.

For a fantasy setting, my “core set” consists of the following:

  • Physical: Strength, Agility, Manual Dexterity, Physical Aptitude, Stamina.
  • Mental: Logical Aptitude, Creative Aptitude, Will Power, Perception.
  • Social: Charm, Leadership, Appearance, Social Aptitude.
  • Magical: Magic Power, Magic Resistance, Magical Aptitude.

Note: these Aptitudes are not part of “Core d8”— the d8 System doesn’t mandate that you use any of them— though I refer to them for the purpose of example. In a science-fiction setting, Strength is less important than it would be in a classical epic fantasy, and might be combined with Stamina. In a non-magical world, the magic-related Aptitudes have no use and should be discarded.

Some of these— Strength, Agility, Will Power, Perception, Charm— are likely to be checked directly. A PC makes a Strength check to open a heavy door, a Charm check to determine an important NPC’s reaction to meeting him for the first time, a Will Power check to determine whether he is able to resist temptation. Those with Aptitude in their name largely exist as Skills for learning other Skills— most primary Skills will be soft-dependent on (“keyed on”) them. So, while Weightlifting will be keyed on Strength, Sprinting on Agility, and fine motor (s|S)kills on Manual Dexterity, most athletic Skills will key on Physical Aptitude (kinesthetic intelligence).

This means it is possible, for example, that a PC has Charm 4 but Social Aptitude 1— he’s very good at making positive impressions on people, but learning nuanced social (s|S)kills is difficult for him.

The d8 System de-emphasizes “ability scores”, so it might seem odd that my core set for fantasy has so many (16) Aptitudes; but this is part of the de-emphasis. To start, I broke up the “master stats”. Dexterity/DEX, I broke into Agility, Manual Dexterity, and Physical Aptitude— all of which are different (though correlated) talents. Intelligence/INT I broke up into Logical Aptitude, Creative Aptitude and Perception. The d8 system, by having fewer levels, limits its vertical specificity in favor of horizontal specificity. “Intelligence 4” could mean a lot of things; on the other hand, if someone has “Logical Aptitude 5, Creative Aptitude 1”, I understand that he’s deductively brilliant but mediocre (and likely uninterested) in the arts.

Notice also that I’ve separated Magic Power and Magical Aptitude from the intelligences. I rather like the idea of a super-powerful, stupid wizard.

If any of the Aptitudes in my fantasy core set are misnamed, Creative Aptitude is the one, because it also includes spatial aptitude and was originally named thus. The “left brain, right brain” model is for the most part outdated, but it gives us a rather balanced split of what would otherwise be a master stat, Intelligence. Is it entirely accurate to lump spatial and creative intelligence together? Probably not; but this set does so because, combined, they are in approximate power balance with Logical Aptitude.

Building Characters

Core d8 doesn’t tell GMs how characters should be made. Aptitudes and initial Skills can be “rolled”, but with experienced players I think a point-buy system is better. Players should start with “big picture” conceptions of who their characters are, their back stories, and their qualitative advantages and disabilities.

Players should, in general, get k*N points to disperse among their PCs’ Aptitudes, where N is the number of Aptitudes that exist and k is between 0.5 (regular folk) and 1.5 (entry level but legendary). GMs can decide whether various qualitative, subjective traits cost Aptitude points, or (if disadvantageous) confer them.

The baseline value of most Aptitudes is 1, from which the point-buy costs are:

  • ¼: -2 points (GM discretion).
  • ½: -1 point
  • 1: 0 points
  • 2: 1 point
  • 3: 3 points
  • 4: 5 points
  • 5: 8 points (GM discretion).

Perhaps I risk offense by saying this, but men and women are different: men have more upper body strength and women are more attractive (and are perceived as such, even by infants). So, I’m inclined to give male characters +1 Strength (baseline 3) and women +1 Appearance (baseline 2). This doesn’t prevent players from “selling back” that point for something else. A player can create a Strength ¼, Appearance 4 male character; a player can also make a Strength 4, Appearance 1 female (e.g., Brienne of Tarth in Game of Thrones). It does make it cheaper to have a Strength 3+ male or Appearance 3+ female character. If you’re a GM and you find these modifiers sexist, throw them out. It’s your world.

If I were to build an Aptitude sheet for Farisa, protagonist of Farisa’s Crossing, it would look like this:

  • Strength: 1 — average female.
  • Agility: 1 — average, untrained.
  • Manual Dexterity: ½ — clumsy.
  • Physical Aptitude: ½ — same.
  • Stamina: 2 — able to walk/run long distances.
  • Logical Aptitude: 4 — the smartest person she knows excl. Katarin and [SPOILER].
  • Creative Aptitude: 3 — Raqel has more artistic talent; so does [SPOILER].
  • Will Power: 3 — determination necessary to survive Marquessa.
  • Perception: 2 — though [SPOILER] may make a case for 3.
  • Charm: 2 — quirky, nerdy; able to use her atypicality to advantage sometimes.
  • Leadership: 2 — teacher at prestigious school.
  • Appearance: 3 — above-average female.
  • Social Aptitude: ½ — Aspie and probably bipolar (Marquessa).
  • Magic Power: 4 — very strong mage by [SPOILER] standard.
  • Magic Resistance: 1 — iffy b/c mages are weaker to most magic in this world.
  • Magical Aptitude: 4 — [SPOILER] and [SPOILER] and then [SPOILER].

Here, k turns out to be 1.75 (+28); she’s in a world where most people have no magic and the baseline for Magic Power and Magical Aptitude is 0— so those cost her 8 points each. Stats-wise, she’s overpowered. I would argue that this is “paid for” by her various disadvantages. She has the horrible illness that afflicts all mages— the Blue Marquessa. She’s a woman attracted to women in a puritanical (1895 North America–based) society. She’s visibly different from the people around her. Her rigid morality (neutral/chaotic good) gets her in trouble, and so does her good nature (her theory-of-mind inadequately models malevolence, leading to [SPOILER]). Finally, there’s the bounty put on her head by trillionaire Hampus Bell, Patriarch (full title: Chief Patriarch and Seraph of All Human Capital) at the Global Company. She probably needs that +28 to survive.

Aptitudes need to be selected before primary Skills are bought, as the Aptitudes will influence how much it costs to learn Skills.

By default, I would give players k*sqrt(N) points where N is the number of primary Skills that exist and k is… around 5. The going assumption is that entry-level characters (regardless of chronological age, for balance) have about five years of adventure-relevant experience, which gives them enough time to grab a few 1’s and 2’s, and maybe a 3 or 4 if talented. If you’re building a mentor NPC, you might use k = 20 or 30.

The point-buy cost of raising a Skill one level depends both on the Skill’s Complexity and the character’s level of the Aptitude it’s keyed on, as follows:

  • a base of 1 point for Easy skills; 2 for Average; 4, Hard; 8, Very Hard; 16 Extremely Hard; times:
  • 1 (per level) for each level up to A, the rating in the relevant Aptitude; 3 from A to A+1; 5, to A+2; 10, to A+3; 20, to A+4. (Here, treat A as 0 if A < 1.)

Let’s say, for example, that Espionage is Hard and keyed on Social Aptitude. Then a character with Social Aptitude 3 will pay 4 points each to get the first levels; if he wants Espionage 4, he’ll have to pay an additional 12 (total: 24). If he wants Espionage 5, he’ll have to pay 20 more (total: 44).
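
If it helps to see that arithmetic spelled out, here’s a sketch of the cost rule as I read it; the function names are mine, and the multiplier ladder stops where the text does (A+4).

```python
# A sketch of the point-buy cost rule above, assuming the per-level multipliers
# are 1 up to the relevant Aptitude rating A, then 3, 5, 10, 20 for A+1 .. A+4.
BASE = {"Easy": 1, "Average": 2, "Hard": 4, "Very Hard": 8, "Extremely Hard": 16}
ABOVE_APTITUDE = [3, 5, 10, 20]   # multipliers for A+1, A+2, A+3, A+4

def level_cost(level, complexity, aptitude):
    a = aptitude if aptitude >= 1 else 0          # treat sub-1 Aptitudes as 0
    mult = 1 if level <= a else ABOVE_APTITUDE[level - a - 1]  # errors past A+4
    return BASE[complexity] * mult

def total_cost(target, complexity, aptitude):
    return sum(level_cost(lvl, complexity, aptitude) for lvl in range(1, target + 1))

# Espionage (Hard) with Social Aptitude 3:
print(total_cost(3, "Hard", 3))  # 12
print(total_cost(4, "Hard", 3))  # 24
print(total_cost(5, "Hard", 3))  # 44
```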

Applying Skills

Any time it is uncertain (per GM) what’s going to happen, dice are rolled. Often, this is because a player wants his character to do something where there’s a nontrivial (1% or greater) chance of failure. RPGs call this a check or trial, and both terms are used interchangeably.

Active trials occur when the PC attempts something and succeeds or fails. There are also passive trials, where the GM needs to know if the PC made a good impression on an important NPC (Charm or Appearance check), resisted temptation (Will check), or became aware of something unusual in the environment (Perception check). Passive trials will almost always be covert checks (described below).

Unopposed trials (also called “tasks”) are those in which the PC is the sole or main participant. Attempting to jump from one rooftop to the next is an unopposed trial; so is playing a musical instrument (although NPCs may differ in their appreciation of the PC’s doing so). There are two kinds of unopposed trials: binary checks and performance checks.

Binary Trials

The GM must decide which Skill is being checked. This will typically be the most specialized Skill that exists in the game. For example, if the task is surgery and Surgery is a specialty of Medicine, the roll will be performed using a character’s Surgery rating. A character who does not have that Skill at all (“Surgery 0”) cannot do it; otherwise, the number of dice (dP, or Poisson dice) equals the player’s rating. In cases where no Skill applies at all— say, “situation rolls” like weather that are (usually) out of the characters’ control— two dice (2dP) are used.

A binary check is made against a Difficulty rating, which is always a nonnegative integer. Difficulty 0 (“trivial”) means the character succeeds (without rolling dice) as long as there is some investment in the Skill. There’s no need to roll; the only way a PC could fail is if he were hit by an Amnesia spell (or equivalent) in the middle of the action. Difficulty 1 (“simple”) means there is some chance of failure; for example, recognizing a fairly common word of a foreign language. Difficulty 2 (“moderate”) is something like jumping from a second-story window; Difficulty 3 (“challenging”) would be something like cooking for twelve people, all with different dietary requirements, under strict time pressure. Difficulty 4 (“very challenging”) would be driving an unfamiliar race car on an unknown track at competitive speeds. There’s no limit to how high Difficulty can go. At 12, even a maxed-out character (Skill 8) can expect to fail 87% of the time.

What does failure mean? Before the dice are rolled, the GM should decide, and the player should understand:

  • how long the action takes. In turn-based combat this could be a turn (seconds of in-world time). For research or invention, this could be two weeks of in-world time.
  • what resources are required, and whether they are consumed on failure.
  • other consequences of failure, which can range from nothing (the player can try again) to devastating (failing to jump safely from a moving train).
  • whether the player knows if he succeeded or failed at all. This will be discussed below.

For example, PC Sarah has Climbing 4, but she’s climbing cautiously and has top-of-the-line equipment. The GM judges that climbing a nearly vertical rock face, one that has stymied expert climbers, is a Difficulty 5. The GM determines that an attempt will take an hour and 550 kCal of food energy. Since her skill is 4, Sarah (well, Sarah’s player) rolls 4dP: {1, 0, 0, 1} for a total of 2, which falls short of the 5 necessary to make the climb. Since she took safety precautions, the result of this failure is: no cost but the time and food. She can try again.
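
A quick Monte Carlo sketch, using the Poisson-die procedure from earlier, gives a sense of how often climbs like Sarah’s succeed, and checks the Difficulty 12 figure quoted above.

```python
# Simulate NdP binary trials: Sarah's 4dP against Difficulty 5, and the claim
# that a Skill 8 character fails a Difficulty 12 task about 87% of the time.
import random

def roll_dp():
    face = random.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    value = 3
    while random.randint(1, 8) == 8:   # chain on further 8s
        value += 1
    return value

def success_rate(skill, difficulty, trials=200_000):
    return sum(sum(roll_dp() for _ in range(skill)) >= difficulty
               for _ in range(trials)) / trials

print(success_rate(4, 5))    # ~0.39: Sarah makes a given attempt a bit under 40% of the time
print(success_rate(8, 12))   # ~0.13, i.e. roughly the 87% failure rate cited above
```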

The Three-Attempt Rule, or 3AR, is the general principle that after 3 consecutive failures, the player typically must have his character do something else. Characters are not min-maxers, and they don’t fully know how easy or difficult an objective is. They get discouraged. If the player has the character come back after a night of sleep, or a month later with more (s|S)kill, or goes about the problem in a distinctly different way, this isn’t a violation of the 3AR. Last of all, the 3AR never applies when there’s a strong reason for the character to persist— a life-or-death situation (such as combat), a religious vow, or a revenge motive. A PC in battle who swings and misses needs no excuse to keep swinging; but the 3AR is there to block unimaginative brute force and while-looping; the player can’t say “I continue to attempt [Difficulty 9 feat] until I succeed.” (There is no, “I take 20.”)

Most binary trials are pass/fail. The degree by which the roll fell short of, or exceeded, the Difficulty target isn’t considered— any effects of the failure or success are determined separately. That is, the dice rolls represent the character’s performance (how good he is at opening the chest) but not raw luck (what he finds, if successful, inside it); if GMs prefer to combine the two for speedier gameplay, that is up to them.

The d8 System has no concept of “critical failure” or “botch”, a mechanic I consider to be poor design. Failure can of course have negative consequences (making a loud noise and waking someone during Burglary) but the player shouldn’t be punished with rolls so low that the rules say bad things must happen and the GM must therefore make something up.

When it comes to binary trials, we say the player (or the PC) is in advantage if the relevant Skill meets or exceeds the Difficulty (adjusted for modifiers); he is out of advantage if the relevant Skill rating is less than the Difficulty. Since the median of NdP is N (for all values we care about) a roll in-advantage will succeed more than 50 percent of the time; a roll out-of-advantage will succeed less than half the time.

Performance Trials

In a binary trial, one succeeds or one doesn’t. Performance trials have degrees of success. For a character with Hunting 3, the player rolls 3dP, but while a 2 might suffice to bring home a rabbit, a 4 gets a deer, a 6 gets a bison, and a 15 might result in meeting a dragon who befriends the party.

General guidelines for performance interpretation are below.

  • 0: a bad performance (2dP: bottom 15%). No evidence of skill is shown at all.
  • 1: a poor performance (2dP: bottom 40%). Some success, but it’s an amateurish showing. The work may be dodgy; reception will be mediocre.
  • 2: a fair performance (2dP: average). The character demonstrates skill appropriate to his class or profession— not especially good, but not bad.
  • 3: a good performance (2dP: top 35%). The performance is significantly above the expectation of a competent practitioner.
  • 4: a very good performance (2dP: top 15%, 4dP: average). Rewards for exceptional performance accrue. A singer might earn three times the usual amount of tips.
  • 5: a great performance (2dP: top 6%, 4dP: top 40%). Like the above, but more. Instant recognition is likely to accrue; this is approaching world-class.
  • 6–7: an excellent(+) performance (2dP: top 2%, 4dP: top 25%). This is the kind of performance that, if reliably repeated, will lead to renown and fame.
  • 8–9: an incredible(+) performance (2dP: top 0.1%; 4dP: top 6%). The character has done so well, some people are convinced that he’s a genius, or that he has magical powers, or that he’s cheating.
  • 10–11: a heroic(+) performance (2dP: 1-in-50,000; 4dP: top 1%).
  • 12+: a legendary performance (2dP: 1-in-2,000,000; 4dP: top 0.1%).

Of course, GMs are at liberty to interpret these results as they wish, and these qualitative judgments are contextual. A musician who gives a 3/good performance at a world-famous orchestra will mildly disappoint the audience; one who gives 2/fair will likely be booed. Usually, the results (and whether they benefit or disadvantage the character) are correlated to the outcome rolled; but, if a player games the system with modifier min-maxing and rules lawyering, and somehow produces a 15/legendary+++ result whilst Singing, the GM is allowed to have him run out of town as a witch.

Degrees of Transparency

In general, players roll the dice themselves and know how they did. The d8 System deliberately keeps increments “big” so they correspond to noticeable degrees of quality.

Should the GM tell players the precise Difficulty ratings of what their PCs are doing? The d8 System doesn’t call that shot. A GM can say, “It’s Difficulty 5” or she can say, “It looks like a more experienced climber would have no problem, but it leaves you feeling uneasy.” That’s up to their tastes. As a general rule, characters have no problem “seeing” Difficulties and performances 2 levels beyond their own, maybe more. A PC with Climbing 4 knows the difference between challenging-but-likely (4) and a-stretch (5) and out-of-range-but-possible (6) and “very unlikely” (7+).

There are cases, though, when players shouldn’t know the Difficulty level. Perhaps there’s an environmental factor they haven’t perceived. Sometimes, but more rarely, they shouldn’t even know how well they performed. There’s a spectrum of transparency applicable to these trials, like so:

  • Standard Binary: The GM tells the player the Difficulty of the action. Whether a numerical rating or qualitative description (“It looks like someone at your level of skill can do it”) is used, d8 doesn’t specify.
  • Concealed Difficulty: there are concealed environmental factors that make the GM unable to indicate a precise Difficulty level (or that induce the GM to lie about it— though she and her players should reach agreement on what the GM can and cannot lie about.) The player rolls the dice and the GM reveals whether success or failure is achieved, but not the margin. The player’s experience is comparable to that of a performance, rather than binary, trial.
  • Concealed Outcome: Appropriate to social skills. The player rolls the dice and has a general sense of how the PC performed, but not whether success was achieved. With information skills (e.g. Espionage) GMs are, absolutely, without equivocation, allowed to lie— the player may be deceived into thinking he succeeded, and fed false information, if the PC was in fact unsuccessful.
  • Concealed Performance: the player knows that a Skill is being checked, and that’s it. The GM may ask questions about how the player intends to do it— to decide which Skill applies if there’s more than one candidate, and possibly to apply modifiers if the player comes up with an ingenious (or an obnoxious and stupid) approach. The GM rolls. She may give a qualitative indicator (e.g., “You feel you could have done better”) to the player, or she may not.
  • Covert: the player is unaware that the roll ever took place. (This is what those pre-printed sheets of randbetween(1,8) are for.) The PC’s in a haunted house but the player has no idea that he just failed a Detect Spirits check, or that he failed a check of any kind, or that anything was even checked.

Tension

As written so far, the d8 System still has the “Fair Cook Problem”. Someone with Cooking 2 is going to fail at cooking (roll 0 from 2dP) 14% of the time, or one dinner per week. This is unrealistic; someone who’s 86% reliable at a mundane, low-tension activity simply isn’t a professional-level cook. Of course, if he’s subject to the high-pressure environment of a show like Top Chef, that probability of failure becomes more reasonable….

The d8 System resolves this issue with the Tension mechanic. Routine tasks occur at Low Tension. Social stakes and non-immediate consequences suggest Medium Tension. Death stakes imply High Tension; combat is always High Tension, as are Skill checks in hazardous environments. Singing in front of friends or for practice is Low Tension; doing it for a living is Medium Tension; singing for a deranged king who’ll kill the whole party if he doesn’t like what he hears is High Tension.

Low and Medium Tension, for binary and performance trials, allow the player to “take points”, by which I mean “take 1” for each die (a dP’s mean value is just slightly above 1). At Low Tension, the player can “take 1” for all his dice if he wishes. Medium Tension allows him to take up to half, rounded down. So, a player with Skill 3 has the following options at each Tension level:

  • Low: 3dP, 2dP+1, dP+2, 3.
  • Medium: 3dP, 2dP+1.
  • High: 3dP only.

For strictly binary trials, the PC does best to take as many points as possible when in advantage, and to roll all dice (as he would have to do, at High Tension) when out-of-advantage. Thus, when the Difficulty exceeds the PC’s Skill level, the Tension level becomes irrelevant.
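
Here’s a small check of that claim, using exact dP probabilities, for the Skill 3 character from the list above against one Difficulty he’s in advantage on (2) and one he isn’t (4).

```python
# Success odds for each Tension option of a Skill 3 character, from exact dP
# probabilities (the chained tail above 6 is truncated; it changes nothing here).
from itertools import product

dp = {0: 3/8, 1: 3/8, 2: 1/8, 3: 7/64, 4: 7/512, 5: 7/4096, 6: 7/32768}

def success(dice, points, difficulty):
    total = 0.0
    for combo in product(dp.items(), repeat=dice):
        value = points + sum(face for face, _ in combo)
        prob = 1.0
        for _, p in combo:
            prob *= p
        if value >= difficulty:
            total += prob
    return total

for difficulty in (2, 4):
    for dice, points in [(3, 0), (2, 1), (1, 2), (0, 3)]:   # High ... Low Tension options
        print(difficulty, f"{dice}dP+{points}", round(success(dice, points, difficulty), 3))
# Against Difficulty 2, taking more points never hurts (dP+2 and a flat 3 always
# succeed); against Difficulty 4, rolling all three dice gives the best odds (~0.37).
```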

If success is guaranteed, there’s no reason to do the roll. If someone has Cooking 2 in a Medium Tension setting, but he’s only trying to broil some ground beef (Difficulty 1) there is no need to roll for that— dP+1 against 1 will always succeed.

Most situations occur at Medium Tension; it jumps to High in combat or wherever a time-sensitive threat to life is possible. The regular stresses of unknown settings, meeting new people, and enduring the daily discomforts of long camping trips make the case for Medium. Low Tension is mostly used for familiar settings, downtime, and practice (skill improvement).

On performance rolls and concealed-difficulty rolls, the player may not know whether it’s better to take dice or points— it’s up to the GM whether to reveal what’s in the player’s interests (if it’s clear cut). For covert trials, the GM should typically make these decisions in the player’s interests— taking points when in-advantage and dice when out-of-advantage— unless circumstances strongly suggest otherwise.

Gate-Keeping, Compound Trials, and Stability

Tension handles the “Fair Cook Problem”. We get the variability we expect from high-tension situations (combat) but we don’t have an unrealistically high probability of competent people failing, just because the dice said so. A Fair (2) cook will be able to produce Fair (2) results 100% of the time at Low Tension, 62.5% of the time at Medium Tension (dP+1), and 58% of the time at High Tension (2dP).
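
Those percentages follow directly from the only two facts needed here, Pr(dP = 0) = Pr(dP = 1) = 3/8:

```python
# The Fair (2) cook hitting a Fair (2) result at each Tension level.
p0 = p1 = 3/8

low    = 1.0                          # take 2 points: automatic
medium = 1 - p0                       # dP+1 >= 2, i.e. the die shows at least 1
high   = 1 - (p0 * p0 + 2 * p0 * p1)  # 2dP >= 2, i.e. the pair shows neither 0 nor 1

print(low, medium, high)   # 1.0 0.625 0.578125
```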

This doesn’t handle everything player characters might want to do. For many performances and tasks, the variance offered by Poisson dice is too high. Let’s use weightlifting as an example. We might decide that each level of Strength (or Weightlifting skill) counts for 150 pounds (68 kg) of deadlifting capacity. A character with Strength 1 can deadlift 150 pounds; with Strength 2, 300. The 2dP distribution suggests that a Strength 2 (300-pound) deadlifter has a 34% chance of a Strength 3 feat: lifting 450 pounds. I can tell you: that’s not true. It’s about zero. Raw muscular strength doesn’t have a lot of variability— people don’t jump 150 (let alone 300) pounds in their 1RM because the dice said so.

When it comes to pure lifting strength, I think GM fiat is appropriate— the PC can lift it, or he can’t. Skill and day-to-day performance are factors, but raw physical capacity dominates the result. If GMs and PCs want to know, down to ten-pound increments, how much a character can deadlift, they’re probably going to need something finer-grained than five or six levels. I’m not a stickler for physical precision, though, so I’m fine with a module saying “Strength 3 required” rather than “400 pounds”.

Running speed also doesn’t have a whole lot of variability. A 4-hour marathoner (level 2) is not going to run a 2:20 (level 7) marathon, ever— but 2dP spits out 7 about once in 250 trials. This is a case where GMs can say, “Uh, no.” As in, “Uh, no; I’m not going roll the dice to ‘see if’ your Agility 2 character wins the Boston Marathon.”

Whether this is a problem in role-playing games is debatable. Athletic events measure people’s raw physical capabilities, and there just isn’t a lot of variability, because these events are designed to measure what the athletes can do at their best, and therefore remove intrusions. Role-playing environments are typically chaotic and full of intrusions. This, I suppose, allows for a certain “fudge factor”; the higher variability of an Agility check makes it appropriate to running through a forest while being chased, but not competitive distance running, at which an Agility 2 runner will never defeat one with Agility 4.

What about cases where we want some performance variability, but not as much as ndP gives us? Shakespeare, one presumes, had Playwriting 8; a performance of 8 is also 94th-percentile from a playwright with Playwriting 4. Does that mean that 6% of his efforts are Shakespearean? Well, it’s hard to say. (Not all of Shakespeare’s efforts were Shakespearean, but that’s another topic. Titus Andronicus gets points for a South Park homage, but Othello it ain’t.) It’s quite subjective, but for GMs who find it hard to believe that someone with Playwriting 4 can kick out a Shakespeare-level (8+) script once in 16 efforts, Stability is a mechanic that, well, stabilizes performance trials. Stability N means that the roll will be done 2*N + 1 times, and the median selected.

Stability is a heavyweight mechanic, appropriate to long, skill-intensive efforts like composing music, writing a book, or playing a game of chess. You don’t want to use it for quick actions, such as in combat, and you typically wouldn’t use it for binary trials where large deviations may not matter more than small ones. For example, if a musician is putting together an album, one way to simulate this is to have a Composition check for each song on the album— another is just to use Stability 1.

For a case study of the mechanic’s simulative effectiveness, let’s consider chess, where we actually have some statistical insight into how likely a player is to beat someone of superior skill. Chess 4 corresponds, roughly, to an Elo rating of 2000— well into the top 1 percent, but not quite world class. Chess 8 corresponds to about 2800, which has only been achieved by a few people in history (Magnus Carlsen, the best active right now, has 2862). Elo ratings are based purely on results, and a 10-fold odds advantage is called 400 points— in other words, each Elo point represents about a 0.58% increase in the odds of winning.

So, we can test the d8 System, without Stability, for how well it models this. A Chess 4 player should have a 1% chance (counting draws at half) of beating a Chess 8 player; but how often does 4dP actually win or draw against 8dP? At High Tension, about 16% of the time. That’s far too high for chess, a board game where there’s really no luck factor. We have to adjust our model. Well, first of all, we drop the Tension to Medium. No one’s going to die— although the superior player might be embarrassed by losing to someone 800 points below her. Then, we use Stability 1. Finally, we model it as an attacker/defender opposed action, which means that if there’s a tie in the performance score, it goes to the defender— whom we decide to be the more skilled player. Then, we can expect the Chess 4 player to win only 1.331% of the time (95% CI: 1.264% – 1.398%) against the one with Chess 8. That’s a 748-point Elo difference as opposed to 800. Is it perfect? No— among other things, it ignores that high-level chess games actually do often end in draws— but it’s close enough for role-playing.
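
Since the text doesn’t spell out how each player uses the Medium-Tension option, here’s a Monte Carlo sketch under one plausible reading: the stronger (defending) player takes her full point allowance (4dP+4), the weaker player gambles on all of his dice (4dP), each resolution is rolled three times with the median kept (Stability 1), and ties go to the defender. Under those assumptions the estimate lands near the figure cited above; change the assumptions and the number will move.

```python
# Chess 4 vs Chess 8: Medium Tension, Stability 1, opposed action, tie to defender.
# The strategy choices (defender takes 4 points, challenger rolls all dice) are
# my assumption, not something the d8 rules dictate.
import random

def roll_dp():
    face = random.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    value = 3
    while random.randint(1, 8) == 8:
        value += 1
    return value

def stabilized(dice, points, stability=1):
    rolls = sorted(points + sum(roll_dp() for _ in range(dice))
                   for _ in range(2 * stability + 1))
    return rolls[stability]              # the median of 2N+1 resolutions

def weaker_win_rate(trials=500_000):
    wins = 0
    for _ in range(trials):
        challenger = stabilized(4, 0)    # Chess 4, rolling 4dP
        defender   = stabilized(4, 4)    # Chess 8, taking 4 points at Medium Tension
        if challenger > defender:        # a tie goes to the defender
            wins += 1
    return wins / trials

print(weaker_win_rate())   # roughly 0.013 under these assumptions
```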

Stability is one way to “gate keep” the highest levels of performance. Another way, for complex endeavors, is to use compound trials that pull on multiple Skills. A PC who wants to pen a literary classic might face a compound difficulty of [Writing/Fiction 4, Characterization 4, Knowledge/Literary 3]. A PC who wants to write a bestseller would face [Writing/Fiction 1, Marketing 4, Luck 5]. It’s up to GMs how to interpret mixed successes. For harder trials, GMs should require that all succeed; for easier ones, they can allow a mixed success to confer some positive result. (“Your book sells well, but the critics pan your poor writing, and your ex thinks the book is about her.”)

Modifiers

Situational factors— low light, being distracted, assistance from someone else, being physically attractive— make tasks easier or harder than they would otherwise be. Because the d8 System deliberately makes steps between consecutive Skill, Difficulty, and performance levels fairly large— they’re supposed to represent distinctions the characters would actually notice— modifiers shouldn’t be used lightly. Nuisance factors that would “deserve” small modifiers in finer-grained systems should probably, in d8, be ignored unless they, in bulk, comprise a substantial impediment.

There are two types of modifiers: point and level modifiers. In Skill substitution, we see level modifiers. If a PC has Archery 3, and the Crossbow Skill neighbors it with Easy (-1) relative Complexity, then the PC implicitly has Crossbow 2. He rolls 2dP instead of 3dP— not everything he knows about Archery is applicable, but some of it is.

Most situational modifiers, on the other hand, will be point modifiers applied to the result of the dice, after they are rolled. At twilight, the poor light might make it 1 point harder to hit a target: Difficulty 4, in effect, instead of Difficulty 3. However, we call this modified roll “3dP – 1”, rolled against 3; rather than 3dP against 3 + 1 = 4. Why? Because it would be confusing if a “+1” modifier made the character’s life harder and a “-1” modifier made it easier.

Which type of modifier should a GM use? Situational modifiers should almost always be point modifiers, because while they can make a task easier or harder to pull off, they don’t really affect the skill level of the character. Skill substitution, on the other hand, is the case where the negative level modifier is appropriate— the PC with High Dwarven 3 “knows three ways to do things” (per abstraction) in High Dwarven, but only “two of those things” transfer over to Low Dwarven.

Severe illness might justify a negative level modifier; regular situational factors don’t. As for positive level modifiers, I think those only make sense under magical or supernatural influence. It’s conceivable that a random schlub could get “+3L Fighting” in The Matrix, but most real-world tactical advantages don’t actually increase performance— they merely reduce difficulty.

If negative point modifiers reduce a character’s performance below 0, it’s treated as zero. Likewise, if positive point modifiers reduce the effective Difficulty of a purely binary roll to 0— the player is now rolling 3dP+2 against 2— then there is no need to perform the roll at all.

If level modifiers reduce a Skill below 1, the fractional levels ½ and ¼ are used before going to 0. If Chainsaw Juggling is Hard (-3) relative to Juggling but fungible, a character with Juggling 4 implicitly has Chainsaw Juggling 1; if Flaming Chainsaw Juggling is Average (-2) relative to regular Chainsaw Juggling, then this character has “Flaming Chainsaw Juggling ¼”.

When level modifiers come from Skill substitutions, the step after ¼ is 0— the Skill can’t be faked (as if it were nonfungible). When the debuffs come from other sources (sleeplessness, ergotism, PC madly in love with a statue) they cease having additional negative effect at ¼, which is as low as a Skill or Aptitude can get.

To roll at sub-unity levels, use the following modified dice; the “chaining” is the same as for a dP.

  • ½dP : {0, 0, 0, 0, 0, 1, 1, 2} / on 2, chain.
  • ¼dP : {0, 0, 0, 0, 0, 0, 1, 1*} / on 1*, chain.

Thus, the probabilities of hitting various difficulty targets are:

  • Skill 1: {1 : 5/8, 2: 2/8, 3: 1/8, 4: 1/64 … }
  • Skill ½: {1: 3/8, 2: 1/8, 3: 1/64, 4: 1/512 … }
  • Skill ¼: {1: 2/8, 2: 1/64, 3: 1/512, 4: 1/4,096 … }

Skill Improvements

Skills improve in two ways. One is through practice, which typically occurs in the downtime between adventures— although heavy use of the Skill during the adventure should also count for a few practice points (PP). Practice happens “off camera”, for the most part, between gaming sessions when the characters are presumably attending to daily life and building skills rather than scouring dungeons.

A Practice Point (PP) represents the amount of in-world time it takes to reach level 1 of an Easy primary Skill. It might be 200 hours (4 weeks, full time); it might be 650— it depends on how fine- or coarse-grained the Skills are (and, also, how fast the GM wants the players to grow). Each level of Complexity doubles the cost: 2 PP for Average, 4 PP for Hard, 8 PP for Very Hard. Furthermore, this isn’t the cost to gain a level, but the cost of an attempt to learn that level, using the relevant Aptitude. In other words, if Painting is keyed on Creative Aptitude, then to reach Painting 4 is a Creative Aptitude check. Practice almost always occurs at Low Tension, so PCs typically have a 100% chance of success for each level up to the level of that Aptitude.

In the example above, let’s say a PC has Creative Aptitude 3, and Painting is Average in Complexity. Then it takes 2 PP to get Painting 1, 2 more for Painting 2, and 2 more for Painting 3, for a total of 6 PP. Dice never have to be rolled because of the Low Tension— the PC will always succeed up to level 3. To get Painting 4, however, the player spends 2 PP only to get an attempt of 3dP against 4 (37% chance). On average, it’s going to cost 5.4 PP to get Painting 4.
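
The 5.4 figure is just the 2 PP attempt cost divided by the chance each attempt succeeds, which comes out near the quoted number:

```python
# Expected PP cost of the Painting 4 attempt: 2 PP per try, success when 3dP >= 4.
# The chained tail above 6 is dropped; it changes nothing at this precision.
from itertools import product

dp = {0: 3/8, 1: 3/8, 2: 1/8, 3: 7/64, 4: 7/512, 5: 7/4096, 6: 7/32768}

p_success = sum(pa * pb * pc
                for (a, pa), (b, pb), (c, pc) in product(dp.items(), repeat=3)
                if a + b + c >= 4)

print(round(p_success, 3))       # ~0.374
print(round(2 / p_success, 2))   # ~5.35 PP, on average, for the Painting 4 step
```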

Two kinds of modifiers can apply to practice. The first is instruction: a skilled teacher (who has attained the desired level) can justify a +1 point modifier, and an exceedingly capable (and probably very expensive) mentor can bring +2. The other is practice “in bulk”, which allows a PC who has a contiguous block of downtime to spend a multiple of the base cost to gain a point modifier: 3x for +1, 5x for +2, 10x for +3, 20x for +4. In the case above, the PC could spend 2 PP for a 37% chance of getting Painting 4, or spend 6 PP to lock it in— even though his Creative Aptitude is only 3. To get Painting 5 in this way will cost 10 PP.
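
As a quick arithmetic check of the Painting example (a sketch only; the 2 PP base cost and the 37% figure are taken straight from the text):

    # A quick check of the Painting example (all numbers taken from the text).
    pp = 2                   # Average Complexity: 2 PP per attempt
    p4 = 0.37                # 3dP (Creative Aptitude 3) against Difficulty 4

    to_level_3 = 3 * pp      # levels 1-3 are automatic at Low Tension
    gamble_4 = pp / p4       # expected cost of repeated 37% attempts
    bulk_4 = 3 * pp          # 3x bulk practice: +1 point modifier, guaranteed

    print(to_level_3, round(gamble_4, 1), bulk_4)   # 6 5.4 6

On average, the repeated-attempt gamble is slightly cheaper than the bulk option; what the bulk option buys is certainty.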

It’s up to GMs how stringent they want to be on the ability of PCs to actually practice in the conditions they find themselves in. I would argue that if the PCs are in Florida over the summer, they probably can’t level up their Skiing. On the other hand, more lenient GMs might allow practice to be more fluid, like a traditional XP system.

Aptitudes improve in the same way, but are costed as Extremely Hard (16 PP base) and, since there is no “Aptitude for Aptitudes”, the default roll is 2dP. Raising an Aptitude from ½ to 1, or from 1 to 2, costs 16 PP. From 2 to 3, it either costs 48 PP (“bulk”) or it costs 16 PP for a 34% shot (2dP against 3); from 3 to 4, it either costs 80 PP or 16 PP for a 17% shot (2dP against 4).
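
The same comparison for Aptitudes, using the percentages quoted above (again, a back-of-the-envelope sketch rather than anything official):

    # Aptitude improvements: Extremely Hard, 16 PP base, default roll of 2dP.
    # Comparing the guaranteed "bulk" price with the expected cost of repeated
    # 16-PP attempts, using the success chances quoted above.
    base = 16
    print(3 * base, round(base / 0.34))   # 2 -> 3: 48 PP bulk vs ~47 PP expected
    print(5 * base, round(base / 0.17))   # 3 -> 4: 80 PP bulk vs ~94 PP expected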

When Skills (and especially Aptitudes) move beyond 4, the player should be able to convince the GM that his character actually is finding relevant practice during the downtime. A character isn’t going to improve Chess 6 to Chess 7 unless he’s actually going out and playing against upper-tier chess players.

Practice is never subject to the Three-Attempt Rule. The characters can spend as much of their off-camera time practicing as they want.

The other, more dramatic, way in which Skills can improve is through the Feat system. A Feat occurs for a successful trial of a Skill where:

  • the result was unexpected— for a binary trial, the PC was out-of-advantage; for a performance trial, the roll was at least 3 levels above the Skill level;
  • no level modifiers were applied to the PC at the time— that is, the character was not under the influence of some magical buff or debuff— and
  • most importantly, the success mattered from a plot perspective. A “critical” hit against an orc that was going to die anyway isn’t a Feat. To qualify, it has to be something that occurred under some tension, and that no one expected— the sort of thing that characters (and players) talk about for weeks.

When a Feat occurs, the character’s Skill goes up by 1 level for 24 hours (game time) or until the character has a chance to rest. At that point, a check of the relevant Aptitude— Difficulty set at the new level— occurs. If the check is successful, the Skill increment is permanent. If unsuccessful, the Skill reverts to its prior level, but the PC gets a bonus 3+dP practice points (PP) that apply either to the Skill or its Aptitude.

Multiplayer Actions

Opposed actions model conflict between two or more characters (PCs or NPCs, including monsters) in the game world. A character is swinging a sword; another one wants not to get hit. A valuable coin has fallen and two people grab for it. One person is lying; the other wants to know the truth. Live musicians compete for a cash prize. Usually, PCs compete against NPCs; sometimes PCs go at each other in friendly or not-so-friendly contests. Opposed actions, like single-character trials, come down to the dice.

One could, in a low-combat game, model fighting as a simple opposed action indexed on the skill, Combat. That would, of course, be unsatisfactory for an epic-fantasy campaign where combat is common and there is high demand for detail and player control. But combat system design is beyond the scope of what we’re going to do here— there is too much variety by style and genre.

Opposed actions nearly always occur at Medium or High Tension. Of course, they are subject to situational modifiers.

A simple opposed action is one where each contender rolls in the relevant Skill (typically, the same Skill) for performance: highest score wins. If it makes sense to break ties, roll again. Use the first roll as representative of performance— if both singers in a contest roll 5/great on the first roll, and the PC rolls 0 on the tie-breaker, he may not get first prize but he shouldn’t be booed. What the simple opposed action offers is symmetry: it doesn’t require the GM to differentiate attackers from defenders— performance scores are compared, and that’s it.

Passive defense is applicable when there is a defending party, who doesn’t really participate in the defense. Armor is typically modeled this way: a character having “Armor Class 4” might mean that to harm him has Difficulty 5. For the attacker to roll against passive defense is equivalent to a binary check.

Collaborative actions are additive. Most actions (e.g., climbing a wall, sneaking into a building without being caught) are single-person— but a few will allow collaboration: group spell casting, large engineering projects, efforts of team strength. Three PCs with Skill 2, 3, and 5 can roll 10dP against a Difficulty 9 collaborative task that any single one of them would be unlikely to pull off.
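
Here is a rough simulation of that collaborative roll. The dP mapping ({1–3: 0, 4–6: 1, 7: 2, 8: 3}) follows from the probabilities given earlier; the chaining rule (each further 8 on an extra d8 adds +1) is my assumption, consistent with the tables above.

    import random

    def dp(rng):
        # One dP on a d8: faces 1-3 -> 0, 4-6 -> 1, 7 -> 2, 8 -> 3.
        # Assumed chaining: on an 8, keep rolling the d8; each further 8 adds +1.
        face = rng.randrange(1, 9)
        value = 0 if face <= 3 else 1 if face <= 6 else 2 if face == 7 else 3
        while face == 8:
            face = rng.randrange(1, 9)
            if face == 8:
                value += 1
        return value

    def collaborative_success(rng, skills, difficulty):
        # Collaborative actions are additive: pool the Skill levels into one roll.
        return sum(dp(rng) for _ in range(sum(skills))) >= difficulty

    rng = random.Random(0)
    trials = 200_000
    p = sum(collaborative_success(rng, [2, 3, 5], 9) for _ in range(trials)) / trials
    print(round(p, 2))   # roughly two in three: far better than any one of them alone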

Serial actions are contests that continue until one character passes and the other fails. They start at a set Difficulty D; if both parties pass, roll again at Difficulty D+1; if both fail, roll again at Difficulty D-1 (not going below 1). Some amount of game time may pass between each trial— in a combat situation, this might be a turn (~5 seconds); in a more gradual environment (e.g. business competition) this might be a month— which means that external factors may intervene before the contest concludes. The starting value of the Difficulty will typically be halfway between the Skill levels of the parties, rounding down.
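
A sketch of a serial contest, reusing the dp() helper and import from the collaborative-action sketch above. The binary reading of each round (both sides roll their Skill against the current Difficulty) is my interpretation of the rule.

    def serial_contest(rng, skill_a, skill_b, difficulty):
        # Each round, both sides make a binary check against the current
        # Difficulty. Double passes raise it by 1; double failures lower it
        # by 1 (never below 1); the contest ends when exactly one side passes.
        d = max(1, difficulty)
        rounds = 0
        while True:
            rounds += 1
            a = sum(dp(rng) for _ in range(skill_a)) >= d
            b = sum(dp(rng) for _ in range(skill_b)) >= d
            if a != b:
                return ("A" if a else "B"), rounds
            d = d + 1 if a else max(1, d - 1)

    rng = random.Random(1)
    # Skill 4 vs. Skill 2, starting halfway between (rounded down): Difficulty 3.
    print(serial_contest(rng, 4, 2, (4 + 2) // 2))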

Attacker/defender actions are the most complicated kind, and that’s because they index different Skills, which means that levels don’t necessarily compare. If Bob has Deception 2 and Mary has Detect Lies 3, then it should be harder for Bob to lie than for Mary to detect lies— and he should have an under-50% chance. On the other hand, if Mary has Social Aptitude 3 but no skill investment in lie detection itself, then Bob ought to have the advantage, because he’s bringing the more specialized Skill.

Ideally, any “offensive” Skill (one that can be “defended” against) has a complement at the same level of specialization and Complexity— if Deception is Average (-2) relative to Social Aptitude, so should be Detect Lies. Then, because Mary doesn’t have any investment in Detect Lies, she falls back on the implicit “Detect Lies 1” she has, per Social Aptitude 3.

By default, defenders win ties. If the GM feels that circumstances should have the attacker advantaged instead, this can be achieved with a +1 point modifier in the attacker’s favor.
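
Putting the attacker/defender pieces together, here is a rough check of the Bob-and-Mary example, again reusing the dp() helper from above; the exact percentages depend on the chaining assumption.

    def attacker_wins(rng, attacker_skill, defender_skill):
        # Opposed roll between two different Skills; by default, defenders win
        # ties, so the attacker needs a strictly higher performance score.
        a = sum(dp(rng) for _ in range(attacker_skill))
        d = sum(dp(rng) for _ in range(defender_skill))
        return a > d

    rng = random.Random(2)
    trials = 200_000
    # Bob (Deception 2) against Mary's trained Detect Lies 3: well under 50%.
    print(sum(attacker_wins(rng, 2, 3) for _ in range(trials)) / trials)
    # Bob against Mary's implicit Detect Lies 1 (fallback from Social Aptitude 3):
    # Bob, bringing the more specialized Skill, is now favored (roughly 60%).
    print(sum(attacker_wins(rng, 2, 1) for _ in range(trials)) / trials)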

Final Notes

Above is enough material for an experienced GM to run a campaign using the d8 System. At this point, she’ll have to create her own health and combat systems, because no such modules have been written; for that reason, the d8 System is certainly not ready for first-time GMs.

Here are a few technical tangents that won’t matter for most players or GMs.

Other Dice?

The “Poisson die” doesn’t have to be a d8. In fact, the d8 isn’t the most accurate contender. Its zero is 1.9% heavy (0.375 vs 1/e = 0.36787…) and so 8dP will produce a 0, a rare event in the first place, about 15% more often than Poisson(8) would. Most GMs and players aren’t going to care about that discrepancy.

You can get a more accurate “Poisson die” on a larger die, like a d30:

  • {1–11: 0, 12–22: 1, 23–28: 2, 29: 3, 30: 4}, with
  • {1–24: 0, 25–29: 1, 30: 2} for chaining.

The drawback here is that d30s are big (and, likely, expensive). You can’t easily hold 4, 6, or 8 of them in your hand. Also, while the d8-based dP has a “heavy” 3+ (1/8 = 12.5%, as opposed to about 8%), the d30-based dP has a light one (2/30 = 6.6%). Since most players (and GMs) are not going to care about exact fidelity to a probability distribution, I consider the heavy 3 on 1dP a feature rather than a bug.

Nothing, of course, stops players from synthesizing a d64 from two d8’s and therefore getting a more accurate model of a single dP… but I don’t think it’s worth it.

One can get a fantastically accurate Poisson(k/120) simulation by rolling k d120’s with the mapping {1–119: 0, 120: 1}. I recommend not doing this, though. The d8 System, I think, gets the statistical properties we want from the Poisson family of distributions, even if a single dP doesn’t model Poisson(1) with perfect accuracy.
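
For the curious, here is a small script comparing the single-die approximations against Poisson(1). The d8 and d30 mappings are the ones given above, with chaining folded into the top bucket; the columns are the chances of hitting Difficulty 1, 2, and 3.

    from math import exp, factorial

    def tail(pmf):
        # P(X >= k) for each k, given a pmf listed from k = 0 upward.
        return [sum(pmf[i:]) for i in range(len(pmf))]

    poisson1 = [exp(-1) / factorial(k) for k in range(8)]
    d8_dp = [3/8, 3/8, 1/8, 1/8]                # single-d8 mapping; "3" bucket is really 3+
    d30_dp = [11/30, 11/30, 6/30, 1/30, 1/30]   # single-d30 mapping from above

    for name, pmf in [("Poisson(1)", poisson1), ("d8 dP", d8_dp), ("d30 dP", d30_dp)]:
        print(f"{name:11s}", [round(p, 3) for p in tail(pmf)[1:4]])
    # Poisson(1)  [0.632, 0.264, 0.08]
    # d8 dP       [0.625, 0.25, 0.125]    <- the "heavy" 3+
    # d30 dP      [0.633, 0.267, 0.067]   <- the "light" 3+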

To Chain or Not To Chain?

It’s up to the GM whether chaining applies to antagonists’ rolls. I would say that they should— my aversion to downward chaining for PCs’ rolls doesn’t prevent upward chaining for antagonistic NPC results. It’s not the potential for unbounded badness from a PC’s perspective that makes downward chaining a design blunder— after all, the game simulates a world in which the characters can die— but the physical act of making a player roll for it.

Similarly, for “situational rolls” (e.g., weather), the standard 2dP has a wide bottom (14%). If you want to distinguish 0/bad from 00/terrible and 000/atrocious… go for it.

Fractional Levels

Normal Skills shouldn’t have fractional levels. They require special treatment of dice (e.g., a “½dP” as described above) and, in my opinion, half-levels of skills (from a player and GM perspective) aren’t worth the hassle. If you don’t know how to get simple (Difficulty 1) tasks done right 63% of the time, you don’t know the thing.

The d8 System assumes four levels of Skills are available to entry-level players:

  • 1: apprentice,
  • 2: journeyman,
  • 3: master,
  • 4: expert (best in a small or mid-sized town).

with four more levels of room for improvement… 8 is best-in-the-world… and that six(-ish) levels of Aptitudes are available:

  • ¼: utterly inept,
  • ½: below average,
  • 1: average untrained,
  • 2: above average (average in class),
  • 3: gifted,
  • 4: exceptional,
  • 5: extreme, borderline broken (~1 in 100,000).

These are coarse-grained in order to correspond with levels we perceive in the real world, so that in a game situation where no rules apply, or the existing rules don’t make sense, the GM can fall back on her “common sense” about what a journeyman carpenter (Carpentry 2) or master of mendacity (Deception 3) can and cannot do. But also, it prevents the stats from telling players more than the characters would naturally know about themselves. It may seem especially coarse-grained that I provide only two levels of below-averageness, as opposed to four going up, but I imagine many experienced GMs will understand the sense of this. When players want their characters to be below-average in something, there are usually two use cases:

  • they want to play a character who is extremely incompetent in one way (but, one hopes, competent in other ways) for the role-playing challenge; the opposite of “munchkins”, they want the game to be difficult for them.
  • they don’t consider the attribute (or Aptitude) relevant to their character archetype (or “class”) and are “dump statting” it to buy more points elsewhere.

The first of these is fine; the system supports severe incompetence (¼), although GMs should restrict it to players who know what they’re doing. And there’s nothing wrong with the second, either; but dump statting shouldn’t yield all that much, which it can if there are too many levels of intermediate-but-not-extreme below-averageness. For this reason, the system enables only two: the I-don’t-think-I’ll-need-this below average of ½ and the I’m-up-for-a-serious-challenge of ¼.

Still, players may find a need for finer granularity. Using the deadlift example, there are several intermediate levels between a 300-pound deadlifter (Strength 2) and a 450-pound deadlifter (Strength 3); if a player decides that his character should be able to lift precisely 375, the mechanics allow for a 2½ in the Aptitude, and it’s easy enough to figure out what “2½dP” looks like (2dP plus ½dP, as described above). If players or GMs want to know exactly how strong and fast the characters are, down to tens of pounds and tenths of miles per hour, the d8 System doesn’t outright preclude these intermediate fractional levels— it’s just that I, personally, don’t think they’re necessary for 98% of role-playing needs. We don’t care, after all, whether the character can or cannot budge a 700- versus 800-pound door; we care whether the PC can budge the door that is standing in the way— and while it’s a bit of a “fudge” to make it a Strength check (arguably, suggesting that the weight of the door itself is being determined by the dice), it does keep the story moving.

… And that’s probably enough for one installment.

Life Update 11/21/20: Farisa’s Crossing, et al

I’m still trying to figure out the matter of my online presence (including, to be frank, whether I want to have one at all). For now, I’m still on Substack. I’ll be mirroring these posts on WordPress; as much as I’ve lost faith in the platform, I don’t see any harm in keeping the blog up.

On Farisa’s Crossing, I’ve stopped promising release dates. I can only give a release probability distribution— and that, only for the Bayesians, since the frequentists don’t believe probabilities can be applied to one-time future events— but I have reasons to be optimistic regarding current and future progress:

  • the novel is “bucket complete”, by which I mean that if I had a month to live, I could leave it and a pile of notes for an editor, and I wouldn’t feel like I had left the world an incomplete book. (I wouldn’t care about marketing or sales outcomes, for an obvious reason.) There are still things to improve, and I intend to do most of the remaining work myself, but it’s basically “ready enough”.
  • I’ve stopped fussing about word count. It used to be really important to me that the book not get “too big”. Traditional publishing ceased to be an option when I crossed the Rubicon of 120,000 words. As a first-time novelist, you have zero leverage and 120K is all you get, even though most award-winning literary titles (in adult fiction) run 175–250K. For a while, my upper limit was 175,000… 200,000… 250,000… which shows how good I am at setting these “upper limits”. Farisa became a bigger story over time. Her love interest, I realized, ought to be more than a love interest. I gave more characters POV time, which meant more world to flesh out. I decided to give more back story to an important villain. Various proportionality and pacing concerns— systems of equations where the variables and coefficients are all subjective, but still require precise tuning— meant that fleshing out one set of details required me to do the same for another. I’ll still cut anything that doesn’t belong. If a scene has an obvious weakest paragraph, or a paragraph a clear weakest sentence, or a sentence a needless word… it gets yanked. At some point, though, the risks of cutting outweigh the benefits.
  • I can afford to have stopped taking new clients, which I did in May. I’m down to maintenance of existing ones, at least for now. There’s little stopping me from hitting the next six months at 180 miles per hour. Unless something unexpected happens (and of course there’s that one thing that can happen to anyone) I can’t see anything preventing me from getting the book to a ready state.

There are a million “lessons learned” in the writing process, but I don’t believe in talking about those sorts of things until after you’ve completed the task.

I’ll give it a 75% chance that I’m ready to send my novel to a copyeditor by mid-May, and a 98% chance by mid-July. Concurrently, probably starting in early spring, I’ll need to get cover art, blurbs, and other marketing materials together. That can come together in a couple of weeks, or it can take months. It depends on a number of factors.

I may release the book, without much marketing— because if I’m right about the book, it shouldn’t need it; if I’m wrong about it, perhaps obscurity is a good thing— in August. My next big project (everything being up in the air for obvious reasons) starts in the fall and, to be honest, while the quality of the book itself is paramount, I’m willing to compromise on short-term sales to increase my likelihood of succeeding in other projects. On the other hand, circumstances evolve, and I may size up the situation and decide that Farisa does need the traditional long-calendar marketing strategy, in which case we’re looking at a release date of late 2021 or early 2022.

Quitting WordPress – April 30, 2020

I’ve gotten several complaints about ads on my blog.

When I set this thing up in 2009, I didn’t know much about the web— I’m an AI programmer; web stuff I do when there’s a reason to do it— and I used WordPress’s free offering, and it worked. At the time, you published a blog post and there it was. No ads.

At some point, WordPress began running banner ads under my essays, without paying me, because I was using the free tier, so I guess the attitude was, fuck that guy. I never saw the ads on my own blog, when logged in, and now I understand why. If WordPress bloggers (like this dumb sap) knew how intrusive the ads were, they’d be less likely to create content.

The banner ads were ugly— and I wasn’t making any money off the damn things— but I was willing to tolerate them… laziness, inertia, not wanting to start over.

This afternoon, I looked at my blog, while not logged in, and saw this:

[Screenshot, 2020-03-25: an ad block inserted into the middle of the post.]

Not just a banner ad, but a block ad, right between paragraphs. A fear-based fake-news ad, on top of that. Fucking garbage, in the middle of my writing.

I never allowed this. I am embarrassed that this piece of garbage ran between two paragraphs of my writing. I am fucking done with this shit.

What have we let happen to the Web? Fake news, interstitial ads, egregious memory consumption, and those obnoxious metered paywalls. Social media is an embarrassment. I am so sick of all this fucking garbage, the blue-check two-tier social platforms, the personality cults, the insipid drama, and the advertisements for garbage products no one wants and badly-written ad copy no one needs to read.

I am sick of “Free” meaning garbage. Yes, I’ll pay for news— but never in a million years if you punish me for reading more than my “4 free monthly articles”, you rancid stain. Make it free or charge for it; don’t be an asswipe and play games. Stop “giving away” a garbage product in the hopes of someone paying for something better.

This blog goes down at the end of April. I’m done with WordPress. I’m a programmer; time to roll my own.

–30–

A COVID–19 False Dilemma

Political leaders like Donald Trump and Congressional Republicans are trying to force the American people to choose one of two unacceptable alternatives:

  • Fast Kill: do nothing about the virus’s spread, causing millions of preventable deaths due to the catastrophe of large numbers of people— orders of magnitude beyond what our hospital system is designed to handle— becoming critically ill at the same time.
  • Hang the Poor: practice social distancing and flatten the curve (which we must do) but at the expense of crashing the economy, leading the poor to face joblessness, misery, and bankruptcy— In Time, it turns out, is not fiction— culminating in a Great Depression–level economic collapse.

Both scenarios lead to preventable loss of life. Both scenarios are intolerably destructive and will impoverish a generation. Both scenarios are completely unacceptable if something better can be done. Indeed, something better can. We must flatten the curve; we must practice social distancing. But, it is artificial that “the economy” should be threatened by our doing so.

Compared to a 1973 benchmark, employers take 37 cents out of every dollar of workers’ paychecks for themselves. Costs of living have gone up, wages have not kept pace, and working conditions have degraded. The result is a society where working people live on the margin, where two weeks without an income can produce, for most individuals, financial ruin. It didn’t have to be this way. This fragility is artificial. The rich created, for their own short-sighted benefit, a society in which the poor must serve the manorial lords on a daily basis or starve. It doesn’t have to be that way.

There’s a third option, one that Trump and Congressional Republicans would rather us not see. Yes, we flatten the curve; we practice social distancing and self-isolation and even follow a quarantine if circumstances require it. On the economic front, institute a wealth tax— a 37-percent immediate wealth tax to commemorate the 37% private tax levied against workers by their employers, and a 3-percent annual tax on wealth over $5 million going forward. Restore upper-tier income taxes to their New Deal levels. Offer a universal basic income (UBI) and put in place universal healthcare (“Medicare for All”). Remove restrictions on unemployment benefits. Mandate that employers protect the jobs of workers furloughed by this crisis. Offer rent and mortgage relief to those who need it. Eliminate student debt, and make appropriate public education free for all who are academically qualified. After the crisis, put funding into research and sustainable infrastructure. All of this can be done— for the most part, these aren’t new ideas.

The billionaires and corporate executives— and the Republican Party that represents them— don’t want Americans to see this third option. They’re afraid of “socialism”, not because it might not work, but because it almost certainly will. It took them fifty years— and an uneasy alliance with religious nutcases and racists— to roll back the New Deal and the Great Society, and they’re terrified of socialist ideas getting into implementation, because they know that when this happens, people find out they like socialism, and it takes immense political effort to roll this plutocrat-hostile progress back.

We don’t have to choose between “the economy” and millions of lives. This is a false dilemma being put forward by evil people who will only consider scenarios that leave the power relationships and hierarchies of corporate capitalism intact. Their failure to allow a workable third alternative constitutes murderous negligence.

Our economic elite is made up of people who would rather see millions die than the emergence of an economic system that challenged their titanic power. If we survive COVID–19, if we defeat the virus, we should go after them next.

Capitalism–19 Vs. Humanity–20

Societies around the world face a horrible decision, as a deadly coronavirus rages through the population. Do they continue with economic business-as-usual, and allow tens of millions of preventable deaths? Or, do they take drastic measures to slow the spread of disease (“flattening the curve”) that endanger our economy?

Let’s consider one extreme. What is likely to happen if our elected and business leaders do nothing? The number of people infected continues to double every 6 days. Our hospitals are swamped. Unheard-of numbers of people need respiratory support, all at once. Most do not get it— and they die. People needing transplants, even if they never get the virus, die waiting because the resources are unavailable. By midsummer, tens of millions of people are dead, and at least tens of millions more, though recovered, are permanently disabled. I call this scenario the Fast Kill.

I don’t want the Fast Kill. Millions of needless deaths is a thing to be avoided. However, the perspective of our economic elite is quite different from mine or yours. The billionaires are on private islands, or in secret bunkers, and can wait this thing out. A Fast Kill, to them, has one clear advantage: the power relationships and hierarchies of corporate capitalism (with some loss of personnel) remain intact.

Will our economy shatter if we take measures to slow the spread of disease? Yes, because corporate capitalism is brittle by design. Since 1973, worker productivity has nearly doubled, while wages have stagnated. Out of every dollar a worker makes, executives take 37 cents for themselves. As workers compete against each other for the benefit of the richest 0.1 percent— as opposed to, say, overthrowing their masters— rents rise, wages fall, and working conditions degrade. We now have a world where most people— and quite a number of vital small businesses— cannot survive 2, 4, 6 weeks without an income. Many workers get no paid sick leave. As elected officials and public-health experts demand we take measures to control COVID–19’s spread, many people will, by virtue of their need for weekly income, be unable to comply.

We wouldn’t tolerate a 37% tax, imposed on the lower and middle classes, from our government. And yet, that is exactly what the private-sector bureaucrats called “executives” have levied against working men and women. As a result, millions of people are so broke that, even under a quarantine enforced by the national guard, the need for an income will undermine such measures. Those who are forced to live on the daily
“hustle”— odd jobs, panhandling, alleyway short cons, and black-market labor— are used to evading authorities, and they’re good at it.

Here’s some of what we need to do, to survive COVID–19 with civilization intact. Yes, of course we need to flatten the curve; we need to slow our economy and focus on urgent needs such as food, shelter, energy, and medicine. We need universal basic income protection— not a means-tested one-time payment, because a one-time check won’t do enough and we don’t have time to quibble over means tests— that people can rely on until the crisis is over. We need mandatory job protection for people sickened (and, in many cases, disabled) by COVID–19. We need rent relief for people who lose their jobs. We need to remove all restrictions on unemployment benefits, and to make these benefits tax-exempt as they were before Reagan. We need an unconditional moratorium on all medical bills— and, at the same time, government funding of hospitals to keep them afloat— during this unprecedented public health crisis.

All of this, yes, is “socialism”. Socialism is nothing more and nothing less than the contention that the principles of the Age of Reason (e.g., rational government over clerical rule or hereditary monarchy) ought to apply to the economy as well. It turns out that there are no capitalists in foxholes.

Our society is ruled by people, most of whom would rather see millions die than see such measures enacted. Why? Once so-called socialist measures are in place, they become pillars of a society and it takes decades to remove them. Surviving COVID–19 is going to require governments all around the world to impose socialistic measures more drastic than the New Deal and the Great Society combined.

There are no good alternatives. If elected leaders do nothing, we get a Fast Kill. Tens of millions of people die, and tens of millions more are disabled. If curve-flattening measures are imposed without socialistic protections, we destroy what’s left of the middle class, eviscerate the consumer economy, and risk such a high rate of noncompliance that infection may spread, needlessly killing millions, anyway.

Billionaires and corporate executives are scared, not of the virus, but of the changes our society will need to make to survive COVID–19. What if those social-welfare protections stick? Billionaires will become three-digit millionaires. Three-digit millionaires will become two-digit millionaires. Private jetters will have to fly first-class commercial flights. Corporate executives will be administrators rather than dukes and viscounts. Worker protections will be enforced again, interfering with the “right to manage”. In the long term, extensive investment in the sciences and health (to fight the next COVID–19) will raise employee leverage, at capital’s expense, across the board. The horror!

Those who run the global economy, to the extent that they have a say in what societies do, have a conflict of interest. They can try to preserve the hierarchies and power relationships that enrich them— at the cost of a holocaust or few. Or, they can accept social changes that, while bringing humanity forward, will emasculate corporate capitalism and hasten its replacement by a more humane system, such as social democracy en route to automated luxury communism.

Shall it be Capitalism–19, or Humanity–20, that survives? Working men and women await the answer.

Yes, Under Corporate Capitalism, 8 Million Working Americans Are Likely To Become Unemployably* Disabled–– Possibly, for Life. Check the Math; Check the Assumptions.

An assertion I have made recently has drawn controversy. I have said that, in the wake of COVID–19, we’ll likely see 8 million American workers become unemployably disabled for a long period of time–– years; possibly, for life. This is an extreme prediction, and I hope I’m wrong. I’ve made predictions that were wrong and embarrassing. I sincerely hope this is the most embarrassing prediction I’ll ever make. Given the extremity of it, let me explain the assumptions on which it rests.

Please, check my work. If I’m making an incorrect assumption, post a comment, and I will fix it.

I am not, in any capacity, an expert on virology, medicine, or epidemiology. These are complex, difficult sciences and we must defer to the experts. The numbers I will be using will be within the ranges of existing predictions regarding how bad this pandemic can get, and how much damage it can do.

Of course, we have to define terms. What does it mean for a person to be unemployably disabled? There is a spectrum of sickness, and one of disability. The vast majority of these 8 million people (plus or minus a factor of two) will not be bedridden, miserable, or sickly for the rest of their lives. Unemployably disabled means that someone is sick enough that (a) no one wants to hire her (whether because of her disability itself or her suboptimal career history) and (b) she struggles to retain jobs due to her inability to hide the chronic health problem. She need not be physically crippled, psychiatrically hospitalized, or too sick to contend with daily life. She might not “look” disabled at all, but she will have too few spoons to have even a chance of victory in corporate combat.

In the United States, where employers are above the law on account of having convinced the public to call them “job creators”, it does not take much disability at all to make someone unemployably disabled.

Assumptions

Like I said, I’m going to document all of my assumptions, so the public can check my work.

My first assumption is that COVID–19 will not be contained. This is the biggest one, and I hope I’m wrong. If the virus is contained, like SARS, then perhaps only a small number of people will be exposed to the virus. If only 500,000 people get it, then clearly there is no way for COVID–19 to render 8 million people unemployably disabled.

However, the virus is extremely contagious, with an r0 estimated at 2.28. Not as bad as measles, worse than flu–– probably worse (in contagion) than the monster flu of 1918. Does this mean that it can’t be contained? No. SARS had a similar r0 and was contained. However, neoliberal corporate capitalism, for reasons that will be discussed, is especially bad at containing outbreaks.

Old-style state authoritarianism has its failings, but people know what the rules are. A government quarantine can be enforced. An authoritarian government can just shoot at people who move until the r0 drops below 1. It’s a terrible solution, but it works.

Social democracy can also work, so long as a sufficient number of people have the good will to exercise their option to hunker down (that is, practice social distancing) and let the experts handle the crisis. I have chronic health issues but I am taking special measures right now (e.g., dietary changes, avoidance of damaging circumstances) to minimize risk of needing medical attention in the next six months. In part, my reasons for doing so are selfish; in part, I am trying to minimize my risk of being a burden to a soon-to-be-overtaxed hospital system. We are all on the same team.

What cannot contain an epidemic like COVID–19 is an economic system such as ours. Under neoliberal corporate capitalism, we have a libertarian government (providing immense economic freedom to those privileged enough not to have to work) but live in a matrix of authoritarian employers, who control our incomes and our reputations, and who can bend the government to their will by calling themselves “job creators”. In a world like this, no one knows who is in charge. Who does the American worker obey? Does he obey the man in Washington advising self-quarantine, or does he obey the boss who believes “coronavirus is just a cold” and has the power to turn off his income (and, by giving negative references, non-consensually insert himself into the worker’s reputation) if he shows up 15 minutes late? Chances are, he’s going to ignore the G-Man and obey his boss. The quarantine will not be effective. Even if it is enforced by the government, so many people are in such precarious economic straits that they will illegally circumvent it, if it comes to that.

We would have to scrap corporate capitalism entirely to have anything better than a 5 percent chance of containing COVID–19. Let’s be honest, a total overhaul of our economic system in the next two months is very unlikely. Chances are that, instead, the novel coronavirus will stick around in the American population (and, therefore, the world population) for good.

How bad is this? Not necessarily terrible. Over time, we’ll probably develop natural immunities to this thing, rendering it just another coronavirus. In the meantime, though, COVID is going to make a lot of people sick.

My second assumption is that about 100 million American workers will get COVID–19. Angela Merkel predicted that two-thirds of Germans will contract the virus, which is in line with epidemiologists’ expectations. That doesn’t mean they’ll all get sick. Most won’t. Case-fatality rates–– the WHO has given this disease a CFR of 3.4%–– often overestimate the lethality of the virus, because so many mild and asymptomatic cases go undetected. We may never know the real lethality rate of this disease, but in working-age Americans it will likely be under 1 percent. That’s the good news. This is a serious illness, but it’s not showing a likelihood of being a massacre like, say, the 1918 flu.

What about flattening the curve?

Health ministers and epidemiologists have been advising us to practice social distancing–– that is, avoid large gatherings–– to slow the virus’s exponential growth and “flatten the curve”. We absolutely must do that. A widespread emergency that overloads the hospital system will cause the lethality to spike, as it has in Italy.

By flattening the curve, we can achieve a great deal in preventing deaths, but we’re not necessarily going to reduce the number of cases. Flattening the curve is important because, when resources run thin, the matter of when people get sick has a major influence on survival. It doesn’t guarantee that they’ll never get sick.

How sick? Some people will carry the virus and suffer no symptoms. Some people (and not only elderly people) will get severely ill.

My third assumption is that, among that 100 million workers, the breakdown of cases (into asymptomatic, mild, severe, and critical cases) will be similar to what we’ve seen so far.

Unfortunately, there’s some guesswork regarding the currently infected population. We haven’t tested everyone; we don’t know how many cases of COVID–19 there are. Using percentages I believe to be in range of what experts expect, and scaling down a bit because we are speaking of the working-age population (a younger and healthier set) I’m going to predict: 50 million asymptomatic cases, 35 million mild infections, 13 million severe cases, and 2 million that are critical. These numbers could well be off by a factor of two, but not in a way that would meaningfully alter my fundamental conclusion–– that millions of people are about to develop long-term disabilities that, in American corporate capitalism, will render them unemployable.

It’s important to understand what is meant by a “mild” infection, when the medical community says that most (70–90%) COVID infections are mild. The word “mild” is relative. A “severe” cold (38 °C fever, inflammation and pain, unable to work) is “mild” by the standards of flu. Similarly, “mild” SARS or COVID is comparable to “severe” influenza (unless we’re talking about the 1918 monster flu, which is in its own category). Specifically, in the context of COVID, “mild” means that a patient is expected to survive without hospitalization–– there is no evidence of immediate danger.

In a “mild” case, life-threatening secondary infections may occur later on. That’s a serious issue, but not one that must be treated now. Some of these “mild” cases will come with pneumonia. Some will come with 39–40 °C (unpleasant but not critical) fever. Some will produce post-viral chronic fatigue comparable to that following mononucleosis or the bacterial infection responsible for Lyme disease. Quite a few people with “mild” cases will experience transient (but not life-threatening) respiratory distress serious enough to induce panic disorder or PTSD. These cases won’t require hospitalization–– and hospitalization will likely be unavailable–– but they will still be, for most young people, the worst health problems of their lives so far.

If that barrel of fun is “mild” COVID, what’s severe? Severe cases require hospitalization for days, and possibly weeks. Artificial respiration may be involved. Critical cases include those where vital organs are involved–– kidney failure has been reported. Yeah, this thing’s nasty.

Any health problem can traumatize a person, but respiratory ailments have quite a track record. The body is not meant to go without oxygen, and even slight deprivations freak the brain out. We’ve seen this with SARS and the 1918 flu. We’re likely to see it with COVID–19. Even in the cases being called “mild”, because there is no threat to life that requires emergency hospitalization, truly “full recovery” is not a guarantee. People are going to get panic attacks from this, and once a person has had a few of those, a lifelong struggle with panic disorder (and agoraphobia, and depression due to adversity in employment) becomes likely.

My fourth assumption is that COVID–19 will have a long-term disability profile, controlling for severity, comparable to SARS.

Nearly half of SARS survivors, ten years later, were unable to return to work.

Does this mean that 40–50 percent of COVID–19 survivors will be unemployably disabled? It’s hard to say. SARS is not COVID–19. Let’s size up some of the differences.

For one, SARS disproportionately affected skilled healthcare workers, for whom there’s high demand in any economic situation. We would see a higher rate of unemployable disability if this hit people whose services aren’t really needed–– say, private-sector software engineers or project managers. Of course, it will hit everyone, in-demand and not.

Second, SARS did not have many victims in the United States–– where, although it is illegal to discriminate against disabled workers, the laws are scantly enforced. It mostly afflicted countries where workers have better protections against their employers. If, say, 40 percent of survivors were unemployably disabled in Canada, we’d likely see 75 percent unemployably disabled in the United States, not because the disease was more severe but simply because employers in the US get away with more.

That being said, all the evidence so far suggests that COVID–19 is not as severe as SARS. Therefore, I don’t think we’re going to see the same rate of unemployable disability (40 – 50 percent) among COVID–19 survivors, if only because there are so many more mild cases.

Here are my predictions. Five percent (1.75 million) of those with mild infections will be unemployably disabled–– that is, at some point, subjected to a career disruption through no fault of their own from which they will be unable to recover. Among the severe cases, I’m predicting 40 percent (5.2 million); among those with critical cases, 65 percent (1.3 million). These numbers might each be off by a factor of 2, but they’re not unreasonable. They are middling estimates.
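
For anyone checking the arithmetic (and only the arithmetic; the rates and case counts themselves are the estimates stated above, each possibly off by a factor of two), the case split works out as follows:

    # Checking the arithmetic behind the 8-million figure, using the case split
    # and the disability rates stated above (stated estimates, not measurements).
    mild, severe, critical = 35e6, 13e6, 2e6
    disabled = 0.05 * mild + 0.40 * severe + 0.65 * critical
    print(disabled)   # 8,250,000 -- i.e., roughly 8 million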

There’s already a mountain of evidence supporting high proportions of those suffering severe and critical illness becoming, through no fault of their own, unemployable. What about the mild cases? Isn’t it a bit dire to predict that 5 percent of people with “only” mild infections will become unemployably disabled? No, it’s not. If anything, the real number’s likely to be higher.

Most of these cases will not be attributed to COVID–19. Plenty of the people won’t know they ever had it. They’ll simply experience “a bad month” in which they will be unable to meet the performance requirements of their jobs, suffer managerial adversity and workplace bullying, and suffer career setbacks from which they’ll never recover.

Kimberly Han is a (hypothetical) 33-year-old software engineer at a half-trillion-dollar technology company, LetterSalad (formerly, Vigintyllion). On April 3, she develops a mild case of COVID–19. She’s able to work from home, because the US is on lockdown. Her fever never breaks 39 °C and she never feels the need to go to the hospital. She’s never diagnosed with COVID. She never thinks she even had it. Since she works from home, she’s not even aware of racist COVID-related jokes made about her by the managerial in-crowd. The storm passes. Everything’s fine.

In September, Ms. Han finds herself tired. Post-viral syndrome. Other than being tired, she’s fine, but she develops a cough. She misses a “sprint” deadline. She needs to take naps in the afternoon, and misses an unannounced but important meeting. Management perceives her as a “slacker” or as “sickly” or as “low-energy”. The product manager and her “people manager” tell her to stop “SARSing up the schedule”, which is totally not racist because the direct manager is a white, Ivy-educated “Boston Brahmin” and the product manager is an actual Brahmin, and it’s physically impossible for racists of two different races to work together to be racist to someone.

The workplace bullying culminates in her developing post-traumatic stress disorder. She begins to have daily panic attacks. She powers through the episodes, not missing a day of work to the attacks, but her manager doesn’t like “the optics” and begins paperwork to terminate her “for performance”. Kimberly Han, through no fault of her own, loses her job. In time, the post-viral fatigue lets up, but the post-traumatic stress disorder does not. COVID–19 has left her body and she is unaware that she ever had it, but she is unemployably disabled.

What’s above will happen to people. Even if we do everything right, even if we flatten the curve and prevent our hospitals from becoming dangerously overloaded, it will happen to American workers, not necessarily in that precise way, but nonetheless surely. Some will have reduced lung capacity. Some will develop anxiety and depression. Some will develop panic attacks or PTSD. Some will never be diagnosed but exhibit unexplained personal changes, and not even know, when they are fired and unable to ever work again, that it was because of illness that they lost their careers (and that they were, therefore, fired illegally).

Could I be wrong on that 8 million figure? Of course. More accurately, it is: 8 million, plus or minus a factor of 2, conditional on an assumption of non-containment. I hope I’m wrong. I hope the virus is contained, or that it proves seasonal and dies out in the spring, but there’s no evidence that we can count on either one.

It is very likely that millions of American workers are about to become unemployably disabled. Crippled? No. Not even necessarily unhealthy. Careers are fragile things; it doesn’t take much disturbance to make someone unable to get and keep jobs in a competitive labor market that has been rigged against workers for the past forty years.

“Couldn’t this be a good thing?”

No.

I understand the argument. This pandemic may create a short-term labor shortage, and there are people who believe the clearing-out could lead to an improvement of opportunities for workers. I’m not so bullish.

I don’t know enough about virology, medicine, or epidemiology to do anything more than piece together existing research, but I do know enough about economics, politics, and organizational dynamics to say this: while the people who own our society are evil, they are not stupid. The upper class and the corporate executives will profit, and we will suffer.

There are some people (sick, broken people) who believe that this “Boomer Remover” virus will create opportunities in the workplace or that it will “clear away” people who are a burden on society. Neither’s true. First, while this will kill a lot of sick old people, it will at the same time make a lot of currently healthy people (young and old) very sick–– in some cases, for a long time. The disability burden on society is not going to be ameliorated by COVID–19; it will be increased.

So, let’s talk about why a potential labor shortage isn’t actually to the worker’s benefit. We are not in the time of the Black Plague. In the 14th century, the nobles needed the peasants. American workers can easily be replaced by machines and by literal slaves in other countries, and they will be. I remember, in 2005, being told that Millennials would face a world of opportunity by now, as Boomers retired and vacated the workforce. It didn’t happen. Those cushy $500,000-per-year BoomerJobs? Those were never filled. They simply ceased to exist. We live in a society where recessions are permanent (for the workers) and recoveries are jobless. When things go bad, workers are first to suffer; when things are good, the owners take the bounty for themselves. COVID–19 will be no different. The rich will see a drop in their stock valuations; the poor will be eviscerated. This dynamic will not change until we destroy corporate capitalism.

What happens to the eight million people who become unemployable because of post-viral disability? There’s no safety net in this country, so these people will have record-low leverage, and so while they won’t find decent jobs (because no one will hire them for one) the owners of our society will find ways to extract work from them. A number will fall into precarious “gig economy” piece-work, grinding out enough of an income to survive, as their health gradually unravels (even as COVID–19 becomes a distant, unpleasant memory). The least fortunate will turn to various unsavory ventures, because illicit labor doesn’t require a spotless résumé. Perhaps the most talented of the newly-disabled will do what I’ve had to do: swing from one six-month rent-a-job to another, until the boss figures out they have a disability and either fires or gimp-tracks them. That these people will be unemployable doesn’t mean that society won’t be able to get work out of them–– it means that they’ll be unable to get anything out of society.

One might think, though, that the eventual exclusion of 8 million people from traditional, “respectable” labor (office jobs) could bring a benefit to the other 152 million who do not develop lifelong disabilities. Less competition, right? That’s exactly what our pig-fucker bastard owners want us to think. They want us to think of our fellow citizens–– fellow proletarians–– as “competition”. They want us divided against each other, because it keeps them in charge.

That Star

Revisit the title of this essay. I predicted that millions of people (8 million, plus or minus) will become unemployably* disabled, accent on the *.

In a corporate dystopia, where workers compete against each other for the benefit of their owners, it is inevitable that people with otherwise mild disabilities will become unemployable. That is, they will be unable to convince the obscenely well-paid “professionals” who profit by the buying and selling of others’ labor to give them gainful, stable employment. There is no reason it has to be this way.

Should a person who suffers post-viral fatigue be subjected to workplace bullying and performance evaluation? I would say no. Should a person, recovering from a severe respiratory illness, be non-consensually ejected from her career because her panic disorder or depression caused a headache for her boss? No.

Here’s the reveal, which should not be much of one.

Yes, COVID–19 is going to fuck a lot of people up. It’s killing people and will continue to do so. It’s horrible. I wish this were not happening; I wish what is about to happen were not about to happen. This said, it need not be the case that COVID–19 renders 8 million people, or even one person, unemployable. COVID–19 exists in nature; it is part of the real physical world and we have to contend with it. “Employability” does not exist in nature. It is a part of a social construct and a stupid one at that.

Corporate capitalism is a fragile, hostile economic system that will throw millions of people under the bus in the next year for no reason but their “offense” of getting sick. It will not know whether they got sick from COVID–19 or a secondary infection or post-viral fatigue or the psychiatric sequelae of respiratory illnesses. It will not care. It will fire them “for performance” and the wheels of the bus will roll along.

We’ll soon see about 8 million people rendered permanently unable to, on the harsh terms of corporate capitalism, get an income. For what? Is the needless suffering (and, likely, the continuing worsening of their health) of 8 million people, who did nothing wrong, a worthy price for the upkeep of a decaying socioeconomic system that all intelligent people–– even though we disagree on solutions–– despise? I think not.

COVID–19 is horrible. The earthly existences of thousands are, as I write this, in present danger. That number is likely to worsen. We need not let it be more perilous than nature has made it.

If we keep corporate capitalism around, we will see 8 million people–– some talented, some extraordinarily competent; but nonetheless unable to survive in a system where each worker must compete against a hypothetical replacement who might be as skilled but without illness–– fall out of the primary economy for good. There’s no point in that. It doesn’t have to happen that way. We can tear corporate capitalism down. We can overthrow our corporate masters (through nonviolent means if possible, through other means if our adversaries make it so). We can eradicate an economic system in which we compete against each other for the benefit of a tiny, self-serving minority who wish to own us. COVID–19 is proving to us that we, citizens of the world, are all on one team. We all want this thing not to destroy us and everyone we care about. It’s time to build an economic system reflective of that.

Wash your hands for 20 seconds. Avoid public gatherings. Try not to touch your face. Furthermore, I consider that corporate capitalism delenda est.

Welcome To My World. I’m Sorry That You’re Here.

I had a mild bout of flu in February 2008. I’d had worse flus, and I have had worse since then. I was a 24-year-old with no health issues; I recovered quickly.

What made this infection notable was that, a month later, I experienced intense pain in my throat that radiated through my chest and face. I could barely see. I tried to drink water and could not swallow. For a minute or two, I couldn’t breathe. Laryngospasm–– it feels like drowning in air. Dizziness, nausea, and vomiting followed. The “mystery illness” caused a panic attack. Not just one, either; they kept coming for months.

The physical problem turned out to be a secondary bacterial infection. It’s rare, but sometimes happens after influenza.

Unfortunately, the panic attacks never went away. They often don’t. Severe respiratory illnesses often cause lifelong disability–– PTSD, reduced lung capacity, depression, anxiety and panic disorders. Once the body and brain “learn” how to panic, this vulnerability becomes a new facet of daily life. So terrible is the experience of a panic attack that a person will do nearly anything to end one. Without a doubt, they’re one of the worst things a person can experience. Moreover, the fear of panic attacks can, itself, produce one. Intrusive thoughts and superstitions become a part of daily life. Unchecked, this can lead to dysfunction and agoraphobia.

I hit bottom in 2009. I was agoraphobic. I had to spend a year re-learning how to do daily activities, re-learning that it was safe to ride a bike, sit on a crowded subway, ride a car. I built myself back from 1 HP. It wasn’t easy.

At this point, I’m 98-percent recovered from panic disorder. I used to have attacks on a daily basis. Now, I might have a “go-homer”–– one bad enough that I have to leave work–– once in a year or so. I’m probably in the 85th percentile for health at my age (36). Aside from being minus a gallbladder, I’m in excellent physical health. I can deadlift 340 pounds. I can do all the activities of daily life. I’ve had panic attacks while driving. I don’t recommend that experience, but it’s not unsafe. If I have one while scuba diving, I have a plan for that (signal diving buddy, ascend slowly).

Open-plan offices are a struggle for me. Actual danger doesn’t trigger panic attacks. I’m fine riding a bike in traffic. I’ve swum with sharks (no cage) at 78 feet–– which is not as dangerous as it sounds. Open-plan offices, though, are needless cruelty. The easiest way to have a panic attack is to sit for nine hours in a place where having one (a minor irritation when it happens at home) will be a professional death sentence–– and, trust me on this, it is. If the bosses find out you have (scary music) “mental illness”, you will either be fired or given the worst projects–– gimp-tracking–– until you leave.

So-called “mental illness”, after a serious respiratory infection, is normal. The body is not meant to go breathless. Nearly half of SARS survivors, ten years after recovery, were still too disabled to return to work. These were healthcare workers (in high demand) outside of the United States. For American wage workers, the rate’s going to be worse.

I’ll give myself as an example. On May 10, 2019, I successfully interviewed for a job at MITRE as a simulation and modeling engineer. On May 13, they made an offer, which I accepted. My intended start date was Monday, June 3. Robert Wittman, who was to be my manager, somehow learned of my diagnosis (likely, illegally) and, on the (false) belief that it would prevent me from getting a security clearance, rescinded the offer. This happened to me 11 years after the original infection.

So, even if you survive severe COVID and are well enough to work, you might not find anyone willing to hire you.

Here’s my prediction, and I hope I’m wrong, but I’m probably not. If anything, these numbers are conservative.

First, I think that nearly everyone in the US will be exposed to COVID–19. The Republican Party’s forty-year campaign to destroy our government has been successful, and employers are more interested in the appearance of doing the right thing than in actually doing the right thing. The American workforce is 160 million people. I predict 100 million will be infected.

Half of that 100 million, I predict, will be asymptomatic. They’ll get the disease but show little pathology. Of the other half, I predict 35 million mild cases, 13 million severe cases, and 2 million critical cases, leading to 125,000 deaths. These numbers are far more favorable than the pattern the disease has shown, and that’s because I’m talking about the American workforce, not the entire population. Total deaths in the US could reach seven figures; working-age deaths, probably, will not.

“Mild” is a relative term, and when we’re talking about diseases like SARS or COVID, “mild” isn’t all that mild. It means the case probably doesn’t require hospitalization. Some who have mild cases will develop secondary infections. Many will lose their jobs and health insurance, producing psychiatric sequelae. These people won’t be in immediate danger of losing their lives, but many will be disabled, and some for years. I’m going to say that 5 percent of people (1.75 million) in this set will be long-term disabled–– they will lose their jobs due to illness and be unable to find work.

Of the 13 million severe cases, I’m going to use SARS as a point of reference and predict a 40-percent disability rate–– 5.2 million. This leaves 2 million at the worst level of illness–– critical, meaning organ failure or intubation is involved–– and I’m going to predict that 65 percent of them (1.3 million) are unable to return to work. This gives us a total of 8.25 million long-term disabled workers.
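For anyone who wants to check or adjust the arithmetic, here is a minimal sketch in Python. The case counts and disability rates are simply the assumptions stated above–– my guesses, not measured data:

```python
# Rough arithmetic behind the disability estimate above.
# All inputs are this post's assumptions, not epidemiological data.

cases = {                           # symptomatic cases and assumed long-term disability rates
    "mild":     (35_000_000, 0.05),
    "severe":   (13_000_000, 0.40),
    "critical": ( 2_000_000, 0.65),
}

total_disabled = 0
for severity, (count, disability_rate) in cases.items():
    disabled = count * disability_rate
    total_disabled += disabled
    print(f"{severity:>8}: {count:>12,} cases -> {disabled:>12,.0f} long-term disabled")

print(f"{'total':>8}: {total_disabled:>27,.0f} workers unable to return to work")
```

Run as written, this reproduces the figures above: 1.75 million, 5.2 million, and 1.3 million, for a total of 8.25 million. Change the rates if you think my assumptions are too pessimistic (or not pessimistic enough).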

If my (conservative) predictions are right, we in the 18–65 sector are going to see “only” five years’ worth of traffic deaths from COVID–19. A big number, and worth taking seriously, but not apocalyptic. Life will, after a few miserable months, return to normal.

Millions of workers–– I predicted 8 million, but it could be half that or double that–– will be, in the wake of non-fatal COVID, unable to return to their jobs, or to get other ones. They’ll try to work–– in this country, they have no other choice–– but they will be unable to meet the performance demands of their jobs, and will be summarily fired. They will have six-month job gaps in 2020 and no one will want to hire them. Their careers will be disrupted, and the damage will be unfixable. CEOs will insist that they are not discriminating against people who survived COVID, with all the credibility I have in insisting that I have a 16-inch IQ and 200 penises. Legislators might pass laws preventing discrimination against COVID–19 sufferers, or against people with job gaps during 2020, but we all know that employers don’t need to follow laws when they can call themselves “jawb creators” and get a free pass.

Our society runs on an “if ya doesn’t work, ya doesn’t eat” model, and millions of people are likely to become unemployably disabled. Some will be unable to work at all. Some will, like me, return mostly to health, and be able to work, but struggle to get hired due to lingering stigma. COVID–19 will pass. Our bosses and owners will tell us that everything’s back to normal (it won’t be) and that we just need to get back to work. But millions of people are going to be unable to do so, and the system will discard them forever.

I should mention a personal bias: I’m a democratic socialist. Often, I read people on the right claiming that “communism killed millions”. It isn’t true. Death attribution is a complex science and you can’t just count every death that’s not by old age as being caused by the economic system in place. If you compare the death tolls of so-called communist regimes (some of which were terrible) to what they would likely have been under similarly repressive regimes (of which there are numerous examples) aligned with imperialist capitalism, the excess death rate of communism is… zero or negative. That’s not to say that communism is flawless or faultless–– only that it does not produce excess deaths over what would have otherwise occurred.

At issue is that we’ve been brainwashed, in the United States, to believe that all people who died of causes excluding old age in communist countries were “killed by communism”, every single one. Meanwhile, when capitalism kills people, it blames those who were killed. “Personal responsibility.” If that Pakistani kid’d had the good sense not to go outside on a sunny day, he wouldn’t have been freedom’d by a drone.

Communism’s public-relations liability is that it never forgets–– and, given the severe failings of societies that called themselves communist, it should not forget. Communism has too much memory and too much history and too much responsibility. Capitalism has no memory and no history and no responsibility.

If we go “back to normal”, as our owners and managers will insist, and neoliberal corporate capitalism remains in force, eight million people are going to find themselves falling to the bottom. Months or years from now, they’ll die needless deaths. We already know what the capitalists will say. Trump already said it: “I don’t take responsibility at all.”

Not only in the next three months, but in the years following this catastrophe–– as people try to return to their careers and find their jobs gone–– corporate capitalism is going to fail. But is it going to fall? That’s up to us. If we do our jobs, yes. We cannot let our economic system, and those who own and employ us, succeed in dodging responsibility for their role in this calamity.

Farisa’s Crossing Final Round Beta Reading

On January 3, 2020, I’ll be opening up a round of beta reading for Farisa’s Crossing. Since I intend this to be the last round I do– I would like to begin serialization in April 2020, with the entire book published by February 2021– there’ll be more slots than in previous rounds.

What does a beta reader do? You’re not asked to copyedit the manuscript; copyeditors do a much more intensive read and that’s a paid service. As a beta reader, your job is to read the manuscript as you would any other book, and give feedback on what works for you and what doesn’t. You’re not expected to do more than that. It’s a time commitment of about an hour per week over 3–4 months.

Tentatively, I’m looking for 10–12 readers, preferably a diverse set with regard to age, gender, sexual orientation, and disability, since the book has major LGBT characters as well as characters with disabilities.

More to follow. For now, if you’re interested in being a beta reader for an epic fantasy novel, my email address is michael.o.church at Google’s email service.