The d8 Role-Playing System

Also posted on my Substack page.

The d8 System is a role-playing game system designed to be mechanically lightweight, so as not to break immersion, and to be modular. It enables experienced GMs (in Dungeons & Dragons parlance, DMs) to run campaigns in a manner similar to what they would use in other systems, but it also provides tools for extension. The long-term vision here is that designers and GMs can build and share modules specific to their preferred gaming styles and genres.

Since only the core mechanics (“Core d8”) have been written and no modules exist yet, d8 isn’t intended for use by inexperienced GMs.

What Is a Role-Playing Game, Anyway?

Ask ten role-players (including GMs) this question, and you’ll get d12 different answers.

I’m a novelist (Farisa’s Crossing, 2021), so I’ve encountered the various approaches to, and theories about, storytelling— they are too numerous to list here. I also studied math in college, so you’d rightly guess that I’ve spent far too much time analyzing game mechanics and their probability distributions. I know the “3d6” distribution by heart (out of 216 outcomes, the counts rise 1, 3, 6, 10, 15, 21, 25, 27, then mirror back down). I’ve also played a lot of RPGs, both electronic and tabletop. The d8 System exists for tabletop play, where the design problems are most interesting. A video game can do (and is expected to do) millions of tiny calculations per second; but a GM must construct the game world and its challenges, to some extent, on the fly. Her players will do things she didn’t expect; she will have to let the world respond in a way that doesn’t break immersion.

Players (and GMs) have a wide variety of tastes. Some want to see battles play out on hex paper and know exactly how much range a trebuchet has. Others prefer deep-character storytelling approaches and only want to know what their avatar in the game— their player character (PC)— is experiencing; they get upset if the GM says, “You lost 5 hit points”, because the character doesn’t know what ‘hit points’ are. Some players want realism; some want plot armor. There’s no right or wrong here; it’s all a matter of taste.

The core of the d8 System is designed to appeal to statistician-strategists as well as to holistic in-character gamers— to the extent that the two camps can never both be satisfied perfectly, such refinements are left to modules. By design, little information is conferred through game mechanics that the characters themselves wouldn’t know. To get specific, numeric data usually take the form of small integers (whole numbers). Almost no character would experience an action as “a 67th-percentile performance”, but they would know the difference between 2/fair and 3/good, between 3/good and 4/very-good, etc.

Statistics and systems should only be a small part of the role-playing game experience. Some players and GMs are happy to go “systemless” and let the game be a free-form interactive story. Others want the game to hold fast to a common language— because that’s what a role-playing system is, in a sense: a language— so they know precisely how likely a mounted barbarian is to hit a downed orc at night with a +1 Axe of Retribution. These things need to be resolved, and fairly quickly. Constant calculation, however, can bog the game down. If the GM and the players find themselves in an argument about whether a percentile roll of 78 suffices to hit the man on the privy with a crossbow bolt, versus whether it ought to have taken at least 79, then everyone has lost.

The GM has the hardest job. She’s the worldbuilder and storyteller, which means she has the godlike power to “decide” what happens to the players, even overruling the dice if she wants. This, of course, means that if she’s unskilled and impulsive the game might suck, and the players have the ultimate vote-with-their-feet power to quit a campaign if that happens. Like a novelist, she has to keep a storyteller’s paradoxical combination of unflappable authority and deep humility. Her players will suggest solutions and story arcs she didn’t think of; she’ll have to know when to adapt, and when to overrule them.

GMs have to keep balance between the players, which means keeping their power levels— the scope of what each can do, in the game— balanced. It’s not that much of a problem if the players as a whole are overpowered or underpowered because the GM can adjust the power levels of the challenges they face; it’s much more of an issue if the characters differ wildly in how much they’re able to contribute. Most novels have one main character; in a role-playing campaign, everyone is the main character. The GM must reward clever, skillful play (so long as the skillful play is in character) while punishing bad decisions, brute force, disengagement, and out-of-character moves. If she’s good at her job, and if her players are receptive, they nearly forget that she exists (and that she is wholly responsible for the mess the characters are in) for a few hours, enough to immerse themselves in the story. The GM has to keep the fictive dream, to use John Gardner’s term, going.

The d8 Philosophy: Modular, Non-Judgmental

To be technical, Dungeons and Dragons isn’t “a role-playing game” but an RPG system. Same with GURPS, Fudge, and Warhammer. The game is what happens between the players and the GM; usually a “campaign” that unfolds over several sessions (sometimes comprising hundreds of hours). Systems have a tradeoff between modularity and specificity. If a combat system is designed for swords, axes, and shields, it’s going to have low utility when applied to modern warfare. Combat systems bring specificity— they give useful information to the GM and players about what can be expected to follow from certain happenings— but reduce modularity: the assumptions they carry specialize them into certain styles and genres of role-playing game, and necessarily exclude others. That isn’t a bad thing; GMs and players benefit from knowing and agreeing on what style of game they are playing.

Core d8, favoring modularity, tries to make no genre or style commitments; that’s left to modules. The core system could be used for medieval high fantasy, but it could also be used for 1920s gangland Chicago, 1997 suburbia during a vampire fruit epidemic, or 23rd-century Budapest. What does that mean, in practice? The lack of specificity given by the core rules means it can be used profitably by an experienced GM who already knows what genre and style of campaign she wants to run, and who has the competence to do it. Novice and intermediate world-builders, on the other hand, will have to rely on d8’s module-writing community (which doesn’t exist yet, because this is day zero) if they want specificity.

I am tempted to say that the d8 System as given here (“Core d8”) is not an RPG system (like D&D or GURPS) but an RPG system system (system-squared?). It’s a system designed to help people build RPG systems (modules, sets of modules, etc.). It doesn’t tell you how many hit points (HP) a fighter or wizard should have— because it doesn’t decide whether fighters exist in your world, or whether HP exist in your world.

Health systems are a common point of debate, and a great example for me to use in showing the innate tradeoffs one makes when using a more specific toolset. The concept of hit points comes from tactical wargames and was originally a measure of structural integrity: how hard it was to sink a ship, take out a bridge, or raze a castle. In Dungeons & Dragons and related games, hit points are used both for the player characters (PCs) and their adversaries to represent how hard combatants are to kill; they keep battle quick and fun by leaving damage abstract. The ogre hits you; you take 12 damage. A wizard heals you; you recover 17 hit points. This, of course, isn’t a realistic model of mortal danger. People don’t lose “hit points”; they lose blood and skin and fingers and, if they’re really unlucky, vital organs. Any attempt to realistically model medieval life would require a roll of the dice on minor wounds, to see if they become infected. There are GMs and role-players who enjoy this kind of gritty realism; I would guess that they’re in the minority. Anyway, Core d8 doesn’t legislate. It doesn’t propose as canonical a health, leveling, class, combat, magic, or technology system— it doesn’t even mandate that one be used at all. There isn’t just one way to role-play.

Specificity is, nevertheless, important. Before a campaign begins, GMs and players should understand what kind of world is being built. What can happen, and what can’t? If players expecting medieval realism get genre-twisted into a sci-fi romp via time warp, they’re not going to be happy. Character death is probably the biggest non-genre (that is, stylistic) point where agreement should be established. RPGs tend to focus on daring adventures and perilous adversaries, and sometimes the PCs fail. What does death mean? In settings favoring realism, it means: the person’s life ends, as it does in the real world. (The player isn’t eliminated, but creates a new character. Death isn’t necessarily losing, and it’s not to be “punished.” It is part of the game.) On the other hand, in fantasy settings, death can be a minor inconvenience, to be expected on a routine jaunt to take Tiamat’s lunch money, reversible with a mid-level spell. GMs and players need to have some measure of agreement on what death means; but d8 isn’t going to tell them what it must be.

Why Use a System at All?

Role-playing games, as interactive stories, probably predate role-playing systems. And all GMs home-brew their games, to some extent, in that we basically ignore the rules we find annoying. Encumbrance is usually ignored within reason— a STR 6 wizard isn’t going to be wearing plate mail, but keeping track of whether a PC is carrying 19.9 pounds versus 20 is no one’s idea of fun. Most GMs also grant an implicit “Bag of Holding”, because playing “Tetris” with one’s inventory (real or virtual) isn’t fun either. There are GMs who insist on playing battles out on hex or Cartesian graph paper; there are others who keep them abstract. As I said before, some people want to know their precise percentage chance of killing that orcish lieutenant with a +2 Falchion, and others want to get deep into character and will find that “THAC0” breaks immersion.

Systems, well-used, improve the GM’s and PCs’ understanding of the world they live in, and what the capabilities of the characters (and adversaries) are. They give a sense of objectivity, so the players don’t think the GM is making rules up as she goes along— even though, to some extent, it is part of her job.

Health modeling (to HP or not to HP) is incredibly important, and probably where GMs (and PCs) lean most heavily on their systems. Players need to know how close their characters are to shuffling off the mortal coil. There tend to be two approaches to this. The abstract, hit-point-based system described above is one— in it, damage tends to be inconsequential until a player’s HP reach 0. At the other extreme, with an aim for biomedical realism, one has injury systems that tell players precisely what bodily agony their characters are feeling. In injury systems, characters don’t “lose 6 hit points”; they get fingers blown off and shoulders crushed by lead pipes. HP systems tend to work well in high-fantasy campaigns where combat is common and cannot be avoided; injury systems tend to work better in realistic settings where (as in real life) physical combat should usually be avoided as much as one can.

I’m an old man (well, 37) but I was once a teenager too smart to believe in hit points. I designed “realistic” but complicated injury systems that were not at all fun to play. Now that I’m older and mountains are mountains again, I have an appreciation for HP systems. They keep combat moving, and for the most part they prevent stochastic permanent damage, which is an asset in a horror setting but unwanted and off-brand in epic fantasy. Are hit points unrealistic? Yes, and that’s the point.

Or, let me be more specific: hit points are not really a health system. What they model, and this is key in understanding the way in which HP are realistic, is the push-and-pull of combat— the fatigue and pain that build up from bodily abuse, the waning ability of a fighter’s range and determination to compensate for a failing body, and the (greatly increased, in real life as in RPGs) capacity of experienced fighters to keep pushing through.

The notion that hit points are wildly unrealistic tends to come from two sources. One is a simple misunderstanding of leveling— while level-10 characters are “mid level” in D&D, they are in fact among the best fighters in most worlds. From 1 to 10, each level represents about a factor of 10 in rarity, so merely level-3 characters are in the top 0.1% of adventuring experience, skill, and (plot-armor-slash-)luck. Levels 11 to 15 are superheroes; 16 to 20 are mythic heroes who emerge only in world-stakes conflicts that’ll be discussed for millions of years. In their proper (notably, counterfactual) context, I don’t think D&D’s leveling or its at-high-levels generous HP allotments are unrealistic. The other is a misunderstanding of what 0 HP means. It does strain credibility that a character can be “near death” (1 HP) and still fight at full power— but, under the modern interpretation of hit points, that’s not what it means to be at 1 (or 0) hit point. Zero doesn’t mean certain death, and it doesn’t even have to mean unconsciousness or total incapacitation. It’s the point at which single combat ceases to be a fight (and, if allowed to continue, becomes a depressing beatdown). 0 HP is the point where the referee of a fighting sport calls the match because the losing combatant can no longer defend himself.

A hit point system doesn’t preclude injuries; in fact, most D&D-style systems have plenty of ways for characters to get grievously injured (or killed) after falling to 0. The simplifying assumption is that injuries won’t happen during the “fair fight” phase (as opposed to the “beatdown” that may occur after one fighter has lost). That’s false, but it’s not that false. In pre-firearm single combat, it was pretty rare for people to suffer mortal wounds during the period in which it was still a two-sided fight. Armor makes it hard to pierce a vital organ. It isn’t chivalrous, but a medieval knight’s most common killing blow was nothing theatrical: a dagger to the throat of a downed or exhausted opponent. Similarly, bare-fisted combat has a low rate of fatal injury if the fight is stopped once properly over, which is why fighting sports have (in comparison with other sports, and of course we are learning about the cumulative effects of seemingly minor injuries) a reasonable safety record.

What 0 HP represents is not character death but the loss of fighting capacity. What happens when all PCs are reduced to 0 HP (“total party wipe”) depends on the motivations of their adversaries. If they’re wiped out by brigands, they can be robbed and “left for dead” but survive, because murder is something even scumbags don’t do lightly. If they fall to some malignant force, like a lich or an indifferent stone giant, the campaign may well be over.

Real-world fights (street assaults) are hideous, depressing affairs. Many are ambushes; people don’t fight fair. If the fight is balanced at all, it usually ends (and may turn into a dangerous beatdown) in a few seconds. Often, the scumbag will win because (a) the scumbag was planning the assault, and (b) non-scumbags very rarely get in street fights after their early teens. This sort of thing isn’t what we want in high fantasy. We don’t want to see ambushes and beatdowns— we want the long cinematic fights where the opponents fight with honor and bravery, and wherein someone can return from the brink of failure (1 HP) to victory. We don’t, in fantasy, want the fights that end because a weapon breaks, or because someone slips in horse shit, or because artillery fire demolishes a combatant’s head, rendering the whole sequence of events moot.

Whether to use HP or an injury system depends on genre and style preferences. Some GMs want to build, and some players want to live in, a gritty dangerous world where a rabid dog or a teenager with a knife is a real threat— a world where getting sucker punched means entering a fight with a disadvantage and possibly for only that reason losing to a worthless, unskilled opponent.

What I think makes people unhappy is when systems try to blend the two. For example, GURPS has a rule by which characters’ physical abilities degrade through general damage— at 33% of their maximum HP, they are badly off. If I were in a fantasy campaign, I would call that a misfeature. Here’s the thing: if it were me battling ogres, I’m sure my physical abilities would degrade after the first blow. However, D&D adventurers are experienced warriors, not regular people like me. The system is calibrated for an immense dynamic range— from level 1 characters who must be played conservatively, because bears and wolves and superior warriors can still destroy them, to level 20 demigods who can punch a dragon in the taint and run— and, seeing as level 1 characters don’t have many hit points at all, I don’t consider its modeling unrealistic. It’s plausible to me that, at high levels, these people can— through adrenaline, rage, and grit— keep fighting until their bodies give out.

As one can see, there’s no single right way to build a role-playing game, a system, or a world. Whether it’s better to use abstract damage (hit points) or a biomedically realistic injury system where every fight is a losing fight, that’s a genre and style concern, and there’s no right answer. In any case, the real objective for the GM in a role-playing game isn’t realism, but immersion. If a mechanic breaks immersion and becomes, rather than a modeling tool, a force unto itself, it should probably not be used.

As systems become more specific (and less modular) they impose constraints. These can focus, direct, and inspire a GM’s creativity, especially if she can trust that the mechanics are sound and well-tested. At the same time, they can stifle creativity and direct player focus to the wrong things— in which case, she should discard them.

Core Mechanics: Yes, It’s All About Dice

GMs of high skill can run campaigns without any random elements such as dice; at that point, it is a “system-free” interactive story. Still, I think most GMs prefer to have randomizers. It helps immersion for players to think their characters are in a game rather than in a world built from one person’s imagination. Randomizers can vary. I once played a short campaign on a hike where we used the 0.01-second digit of a stopwatch. I’ve also seen GMs run campaigns using only tarot cards. Dice, however, are the go-to tool. They’re objectively numerical; as random influences, they help players forget that the game is actually under the GM’s complete control. Sometimes, the dice “speak for themselves” and suggest a course for the story very different from what anyone had intended. These injections of random chance make the game world feel more real or, in literary terms, more “lived in”.

The standard role-playing dice set consists of six sizes of dice: four-sided (d4), six (d6), eight (d8), ten (d10), twelve (d12), and twenty (d20). Other sizes exist, but those tend to be expensive and unnecessary; it’s rare that role-playing games need to model an exact 1/7 probability (for which a d7 would be used)— typically, 3/20 is close enough. The d100 (or d%, or “percentile roll”) is a commonly called-for roll, but it’s achieved with two d10s of different colors. An actual d100 is nearly spherical and almost impractical to use.
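To make the two-d10 construction concrete, here is a minimal sketch (assuming d10 faces read 0–9, with the double-zero result read as 100, which is the usual convention):

```python
import random

def percentile_roll(rng=random):
    """Roll d% using two distinguishable d10s with faces 0-9.

    One die supplies the tens digit, the other the ones digit;
    a result of 00 is read as 100, giving a uniform roll of 1-100.
    """
    tens = rng.randrange(10)   # read as 00, 10, ..., 90
    units = rng.randrange(10)  # read as 0-9
    result = tens * 10 + units
    return 100 if result == 0 else result

# Sanity check: every value 1-100 appears over many rolls.
counts = {n: 0 for n in range(1, 101)}
for _ in range(100_000):
    counts[percentile_roll()] += 1
```

The only subtlety is the 00/0 reading; everything else is a straight base-10 composition of the two dice.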

The d8 System is designed to run using only 8-sided dice (hence the name). Module writers and GMs are free to incorporate d12s and d20s and tarot cards, but the core system can be run with a handful of 8-sided dice.

The statistical engine of many systems is the “percentile roll”, the d100 synthesized from two visually distinct d10s. If the GM or the module specifies that there must be an 18% chance of rain, each day, in a given setting, then a d% is the way to go about it: 1–18: rain, 19–100: no rain. In general, though, it’s lacking because we don’t think in percentiles. Should “12th percentile” weather in late March in Manhattan be a cold drizzle, or a sleeting horror show? Is a 73rd-percentile ice cream sandwich sufficient to give +1 Morale, or does it have to be 74th percentile to have that effect? People can perceive four to seven meaningful levels of quality in most things, not 100.

Percentile rolls can force GMs to set precise numerical probabilities on events, rather than letting the system, through its modeling capacity, figure out what those probabilities might be. If a regular person has a 20% chance of making a jump, what are the odds for a skilled circus performer? Eighty percent? Ninety percent? How do we adjust the odds if the lights go out, or the performer is recovering from an injury? This sort of thing is the core of what we use role-playing systems to model.

Linearity— A Criticism of d20

What’s wrong with D&D’s d20 System? Objectively, nothing. As I’ve said, there are no absolutes, only tradeoffs— for simplicity it can’t be beat. It’s an improvement on the percentile roll— 20 levels, instead of 100— but it still has the issue of linearity, which means it lacks a certain realism.

Here’s the problem, in a nutshell. As I said, the dice resolve conflicts between the PCs and the environment. When the character wants to do something at which he might not succeed, and the GM decides to “let the dice speak”, it’s called a trial or check; the system is there to compile situational factors and (without requiring advanced statistical training on the GM’s part) find a reasonable estimate of the PC’s likelihood of success.

Here’s an example that shows the issue with linearity: Thomas has a 50% chance of convincing NPC Rosalind to help him get to the next town. Or, he would; but he’s wearing a Ring of Persuasion, which increases his likelihood to 75%. Additionally, he and Rosalind share the same native language. Thomas, wisely, uses it to communicate with her, and gains the same degree of advantage (50 to 75 percent). If he has both advantages, how likely is he to succeed in getting Rosalind to help him out?

A linear system like d20 says: 100 percent. Each buff is treated as a +5 modifier, or a 25-percentage-point bonus. They combine to a +10 modifier, or a +50% bump, and Thomas is guaranteed to succeed. Is that accurate? I’ve modeled these sorts of problems for a living, and I would say no. What is the right answer? Well, models vary, clustering around the 90% mark, and I’d consider any number between 87 and 94 defensible— and since gameplay matters more than statistical perfection, I’d also accept 85 (17/20) and 95 (19/20) percent. At any rate, the difference between 50% and 75% is about the same as that between 75% and 90%, which is about the same as that between 90% and 96%.
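The ~90% figure can be reproduced with one defensible model: stack each advantage multiplicatively in odds space. This is an illustration of the statistics, not a rule from d8 or any other system, and the function name and interface are my own.

```python
def combine_odds(base_p, *advantage_ps):
    """Stack independent advantages multiplicatively in odds space.

    Each advantage is expressed as the probability it would yield on
    its own from a 50/50 baseline, so a 50%->75% buff carries an odds
    ratio of 3. (An illustrative logistic model, not an official rule.)
    """
    odds = base_p / (1 - base_p)
    for p in advantage_ps:
        odds *= p / (1 - p)  # odds ratio relative to the 1:1 baseline
    return odds / (1 + odds)

# One buff (Ring of Persuasion OR shared language): 50% -> 75%.
# Both buffs together: 50% -> 90%, not 100%.
p_two_buffs = combine_odds(0.5, 0.75, 0.75)
```

The same model turns 75% into 90%, and 90% into roughly 96%, matching the spacing of probabilities described in the text.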

The fact that linear models “are wrong” isn’t the worst problem. If a player gets a free pass (100%, no roll) on what “should be”, per statistical realism, a 90% shot, that’s not a big issue. No one’s going to feel cheated because he didn’t have to roll for something he had a 90% chance of pulling off anyway. If it goes the other way, turning what ought to be a 10% chance into zero, that’s more of an issue— it is the system rather than the game (the play, and the dice, and the reasonably-estimated difficulty of the task being modeled) that is making an action impossible. And that’s not ideal. Even still, though, the biggest problem here is that, because two “+5 modifiers” stack to turn a 50/50 shot into a certainty, or an impossibility into a 50/50 shot, we rightly conclude that +5 modifiers are huge. Then, most of the situational modifiers used and prescribed by the literature are going to be smaller, more conservative ones— ±1 and ±2— to avoid generating pathological cases. But in the mid-range (that is, when the probability is right around 50%) these modifiers are so tiny, almost insignificant, that they become inexpensive from a design perspective. This encourages a proliferation of nuisance modifiers and rules lawyering and realism disputes. Should that pesky -1, from a ringing in a PC’s ear, really turn a 10% chance into a 5% chance? Shouldn’t the GM see the unfairness of this, and waive the -1 modifier? Maybe this needs to be a Sensory Exclusion check— now, do we use INT or WIS? And so on. The system’s quirks intrude on role play.

It’s better, in my view, to have infrequent modifiers; when they exist, they should be significant. If they’re not substantial, let the system ignore them. The failing of d20’s linearity is that it’s coarse-grained at the tails, where we actually benefit from the understanding a fine-grained system gives us— there’s a difference between a 95th-percentile and 99th-percentile outcome, in most cases— but fine-grained in the middle, where we don’t need that precision. A “+1 modifier” is major at the tails, imperceptible in the midrange… which means we lack an intuitive sense of what it means.

GURPS uses 3d6 instead of d20. This is an improvement, because the system is finer-grained at the tails and coarser in the middle where (as explained) we don’t need as much distinction— 3 (in GURPS, low is good) is “top 0.46%” whereas 10 is “50th–63rd percentile”. Fudge, in the same spirit, uses 4dF, where a dF is a six-sided die with two sides each marked as: [+], [ ], and [-], corresponding to values {-1, 0, 1}. Notably, it gains an aesthetic advantage (for some) of making results visible without calculation. Cancel out the +/- pairs (if any) and what’s left is the result. Fudge also eschews raw numbers in favor of verbal descriptions: a Good (+1) cook who has a Great (+2) roll will produce a Superb (+3) dish; if she has a Mediocre (-1) roll, she’ll produce a Fair (0) one.
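The 3d6 percentile claims are easy to verify by enumerating all 216 outcomes (a quick sketch; “percentile” here counts lower rolls as better, per GURPS):

```python
from itertools import product

# Exhaustive distribution of 3d6: 216 equally likely outcomes.
counts = {}
for dice in product(range(1, 7), repeat=3):
    s = sum(dice)
    counts[s] = counts.get(s, 0) + 1

total = 6 ** 3
p3 = counts[3] / total  # a natural 3: 1/216, i.e. "top 0.46%"

# With low good, a roll of 10 beats every roll of 11+ (50% of rolls)...
beats = sum(c for s, c in counts.items() if s > 10) / total
# ...and the band it occupies extends up to rolls of 10 or worse (62.5%).
band_top = sum(c for s, c in counts.items() if s >= 10) / total
```

So a 10 sits in roughly the 50th–63rd percentile band, as stated.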

The linearity of d20 comes from its core random variable being sampled from a (discretized) uniform distribution, thereby assuming that the “nudge” it takes to turn a 50th-percentile performance into a 75th-percentile one is the same as the nudge required to turn a 75th-percentile performance into a 100th-percentile (maximal) performance. That’s false, but the falsity isn’t the issue because all models contain false, simplifying assumptions. Summed-dice mechanics (3d6 or 4dF instead of d20) give us something closer in shape to a Gaussian or normal distribution, and in some cases that’s the explicit target. That is, the designers assume the resolved performance P of a character with ability (or skill, or level) S shall be: P = a*S + b*Z, where a and b are known constants and Z is a normally distributed random variable. It’s not all that far off; one can do a lot worse. That said, I think it’s possible to do a lot better.

What’s wrong with a normal distribution? For one thing, it’s not what we’re getting when we use 3d6 or 4dF. Those mechanics are bounded. If you’re a Mediocre (-1) cook, you have a zero-percent chance of producing a dish better than Superb (+3). For food preparation, that seems reasonable, but is the probability of a Mediocre archer hitting a dragon in the eye really zero point zero zero zero, or is it just very small? Again, if the system “behind the scenes” makes things that should be improbable, improbable, that’s not an issue— but the system shouldn’t be intruding by making such things impossible. One fix to this problem is to say that certain outlier results (e.g., 3 and 18 on 3d6, -4 and 4 on 4dF) always succeed or fail, but the system is still intruding. Another fix is chaining: on a maximal (or minimal) result, roll again and add. So, +4 (on 4dF) followed by another +4 is +8. Okay, but can chaining make things worse— can +4 followed by -4 make a net 0? If that’s a possible risk, can players choose not to chain?
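For concreteness, here is a minimal sketch of upward-only chaining (an “exploding” die), using a d6 for illustration: on the maximal face, roll again and add, so the result is unbounded above but can never get worse.

```python
import random

def exploding_d6(rng=random):
    """Upward-only chaining ('exploding') on a d6: on a 6, roll
    again and add. Chaining can only improve a result, never worsen
    it, so the distribution is unbounded above with a floor of 1.

    Note: totals that are exact multiples of 6 are impossible,
    since a 6 always triggers another roll of at least 1.
    """
    total = 0
    while True:
        roll = rng.randint(1, 6)
        total += roll
        if roll != 6:
            return total

rolls = [exploding_d6() for _ in range(20_000)]
```

This is the upward variant only; a symmetric downward chain (the “can +4 followed by -4 make a net 0?” case) would need a separate rule, which is exactly the design question raised above.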

The boundedness itself isn’t the real problem, though. The actual Gaussian distribution isn’t bounded— a result 4 or 6 or 8 standard deviations from the mean is theoretically possible, though exceedingly unlikely— but it still isn’t what we want for gameplay; its tails are infinite but extremely “thin”.

Fudge can have what I’ve heard called the “Fair Cook Problem”. For this reason, many players prefer to use 3dF or 2dF. With 4dF, it is possible for a Fair (0) cook to produce a Legendary (+4) dish, but he is equally likely (1/81) to produce an Abysmal (-4) dish and make everyone sick. At 1-in-81, we’re talking about rare events, so that’s not much of a concern on either end; but 4dF also means 5% (4/81) of his dishes are Terrible and 12% (10/81) are Poor. That’s more of a problem. We wouldn’t call someone with this profile “a Fair Cook”. We’d fire him, not because he occasionally (1/81) screws up legendarily— we all have, at one thing or another— but because of his frequent, moderate screw-ups. At the same time, if we drop to 2dF, we lose a lot of the upside variation that makes RPGs exciting— 77% of the rolls will be within one level of his base (plus or minus modifiers) so why don’t we just go diceless? Using 2dF imposes draconian conditions on what can happen and what cannot— the system is deciding— whereas 4dF lets the dice speak but they get loud and never shut up.
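The 4dF figures quoted above can be checked by brute force over all 3⁴ = 81 outcomes (a quick verification sketch):

```python
from itertools import product
from fractions import Fraction

# Each dF face is -1, 0, or +1 with equal probability.
faces = (-1, 0, 1)
counts = {}
for dice in product(faces, repeat=4):
    s = sum(dice)
    counts[s] = counts.get(s, 0) + 1

total = Fraction(3 ** 4)  # 81 equally likely outcomes
p_legendary = Fraction(counts[4]) / total   # +4 result
p_terrible = Fraction(counts[-3]) / total   # -3 result
p_poor = Fraction(counts[-2]) / total       # -2 result
```

The counts come out to 1/81 for +4 and -4, 4/81 for -3 (Terrible), and 10/81 for -2 (Poor), matching the fractions in the text.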

For this reason, I advise against using the Gaussian distribution for the core mechanic of one’s role-playing system. It’s too thin-tailed. Although outliers are rare by definition, we need to feel like they’re possible, which means they need to happen sometime. What we don’t want are frequent moderate deviations (Poor dishes from Fair cooks) that muck up the works and turn the game into a circus. In technical terms, we want kurtosis; we probably also want some positive skew.

In addition to this observation, the real-world normal distribution is continuous and its natural units (standard deviations from the mean) feel bloodless. Is “0.631502 standard deviations below average” meaningful? Not really. It has the same problem as the percentile roll. I just don’t know what “a 31st-percentile result” is supposed to mean. As I said, we can only distinguish about seven levels of quality among outcomes— and, in most cases, fewer than that. We don’t want to think about tiny increments. Whatever our “points” are, we want them in units that matter: not that the fisherwoman had a 37th-percentile (or 12, or -1) day, but how many fish she caught. No fish, one fish, or two fish? Red fi— never mind. Fish. The French word for fish is… what again? Poisson.

The “Poisson Die”

Here are the design constraints under which I built the d8 System:

  • (i) the core random variable must be implementable using a small number of regular (d4, d6, d8, d10, d12, d20) dice and simple mental arithmetic. No immersion-breaking tables or calculators.
  • (ii) the output should have a small number of levels that represent discrete qualitative jumps; not the 16 levels (3–18) of 3d6 or 100 of d100.
  • (iii) the system should be unbounded above. Except when there’s a character-specific reason (e.g., disability, moral vow) a PC cannot do something, there should be a nonzero chance of him achieving it, even if the task is ridiculously hard. (Probability, not the system, should limit the PC.)
  • (iv) chaining, or the use of further dice rolls for additional detail on extreme results (e.g. “roll d6; on 6, roll again and add the result”) is permissible upward, but not downward. Chaining can improve a result or leave it unchanged; it can be used to determine how well someone succeeded, but not how badly he failed (“roll of shame”).
  • (v) it should be easy to correlate a performance level to the skill level of which it is typical. This is something Fudge does well: a Good (skill level) cook will, on average, produce Good (result level) food.

How do we meet these criteria? (Here’s some technical stuff you can skip if you want.) Between (ii) and (iii), there seems to be a contradiction; (ii) wants us to have “a small number of” discrete separable qualitative levels, and (iii) demands unbounded possibility upward. This isn’t hard to resolve: we can have an infinite number of levels in theory, so long as the levels are meaningfully separate— lowest (“0”) from second lowest (“1”), “1” from “2”, and so on. The infinitude of possibilities isn’t a problem as long as 10+ aren’t happening all the time. This favors a distribution that produces only integers, which is also a good match for dice, which produce integers.

The Poisson distribution models counts of events (which could be “successes” or could be “failures”— it does not care if the events are desirable). Poisson(1) is the distribution of counts of an event during an interval X if it happens once every X on average. If lightning strikes 2 times per minute, the distribution governing a 15-second interval will be Poisson(0.5) and that governing a 60-second interval will be Poisson(2).

For an integer m, a Poisson(m) distribution produces m on average, so we can naturally correlate skill and result levels. If a character of skill 4 gets a Poisson(4)-distributed result, then we know that a result of 4 is average at that level. They also sum nicely: if you add a Poisson(m) variable and a Poisson(n) variable, you get a Poisson(m+n) variable, which means that statistically-minded people like me have a good sense of what’s happening. It also means that, if we can simulate a Poisson(1) variable with dice, we can do it for all integer values.

Finally, the Poisson distribution’s tail decay is exponential as opposed to the Gaussian distribution’s quadratic-exponential decay. This has a small but meaningful effect on the feel of the game. Difficult, unlikely endeavors still feel possible— we can imagine having several lucky rolls in a row, because sometimes it actually happens— so it doesn’t feel like the system itself is imposing stricture.

Can you sample from a Poisson(1) distribution using dice? Not perfectly; the probabilities involved are irrational numbers. For our purposes, the most important probability to get right is Pr(X = 0), which for Poisson(1) is 1/e = 0.367879…; as rational approximations go, 3/8 = 0.375 is good enough. (One can do better using d30s— this is detailed below— but I don’t think the extra accuracy is worth the cost. GMs and players benefit from the feel of statistical realism, but I don’t think they care about Poisson distributions in particular.)

To roll n Poisson dice, or ndP:

  • roll n d8’s. A 4 or higher is 1 point (or “success”); a 7 is double, an 8 is triple.
  • for each 8 rolled, roll it again. On 1-7, no change. On 8, add 1, roll again, repeat.

So, if a player is rolling 4dP and gets {2, 3, 6, 8}, we interpret the result as 0 + 0 + 1 + 3 = 4. We chain on the 8; if we get, say, a 3, we stop, and 4 is our result. That’s an average outcome from 4dP, but a 1-in-64 result from 1dP.
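The procedure above is simple enough to sketch in code. Here is a minimal simulation (the function name and structure are mine, not part of Core d8), useful for checking that the die behaves as intended:

```python
import random

def roll_dP(n, rng):
    """Roll n Poisson dice (ndP): each is a d8 where 1-3 = 0 points,
    4-6 = 1, 7 = 2, 8 = 3; each 8 chains upward (reroll; another 8
    adds 1 point and chains again)."""
    total = 0
    for _ in range(n):
        face = rng.randint(1, 8)
        if face <= 3:
            continue
        elif face <= 6:
            total += 1
        elif face == 7:
            total += 2
        else:  # face == 8: triple, plus the upward chain
            total += 3
            while rng.randint(1, 8) == 8:
                total += 1
    return total

rng = random.Random(42)
rolls = [roll_dP(1, rng) for _ in range(200_000)]
# Pr(0) should come out near 3/8 = 0.375, and the mean slightly
# above 1 (the chain adds 1/56 per die on average).
print(sum(r == 0 for r in rolls) / len(rolls))
print(sum(rolls) / len(rolls))
```

Seeding the generator makes the check reproducible; at the table, of course, you roll real dice.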

For easier readability, you can buy blank d8’s and label them {0, 0, 0, 1, 1, 1, 2, 3}. You’ll typically be rolling one to four of these, so four such dice per player (including GM) is enough.

Here’s a table (graph also on site) that shows how 2dP tracks against Poisson(2). Are there more complicated methods that are more accurate? Of course. Is it worth it, from a gameplay perspective? Probably not. The dP, as described above, does the job.
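If you want to generate that comparison yourself, the exact distribution of a dP die is easy to write down— Pr(0) = Pr(1) = 3/8, Pr(2) = 1/8, and Pr(3 + k) = (1/8)(1/8)^k(7/8) for the chained values— and two dice are a convolution. A short script (helper names are mine; the chain is truncated where its probability is negligible):

```python
import math

def dP_pmf(max_value=20):
    """Exact pmf of one dP die, indices 0..max_value."""
    pmf = [3/8, 3/8, 1/8]
    k = 0
    while len(pmf) <= max_value:
        pmf.append((1/8) * (1/8)**k * (7/8))  # an 8, then k chained 8s
        k += 1
    return pmf

def convolve(p, q):
    """Distribution of the sum of two independent variables."""
    out = [0.0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

one = dP_pmf()
two = convolve(one, one)  # distribution of 2dP
for k in range(7):
    poisson = math.exp(-2) * 2**k / math.factorial(k)
    print(f"{k}: 2dP={two[k]:.4f}  Poisson(2)={poisson:.4f}")
```

The largest discrepancy is at zero: 2dP gives (3/8)² ≈ 0.141 against Poisson(2)’s e⁻² ≈ 0.135, a gap of about half a percentage point.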

Skills

Unlike a board game whose state can be represented precisely, the environment of a role-playing game is usually strictly or mostly verbal, and the game state is a collection of facts about the world that the GM and PC have agreed to be true. Fact: there’s a goblin 10 feet away. (PC stabs the goblin.) New fact: There’s a dead goblin at the PC’s feet. A character sheet contains all the important facts about a player’s character. Erika is 28 years old. Jim is a member of a disliked minority. Sara has eyes of different colors. Mat’s religion forbids him from eating meat.

The facts above are qualitative, which doesn’t make them less important, but they’re not what RPG systems exist to model. GMs and players decide what they mean and when they have an influence on gameplay (if they do). The system itself isn’t going to say what it means that Sara’s eyes are of different colors. It’s the quantitative measurements of characters— what they do; how they match up against each other and the world— that an RPG system cares about. In D&D, a character with STR 18 is very strong while one with STR 3 is very weak; the former is formidable in combat, but the latter can’t pick up a sword.

In the d8 System, these quantitative attributes are all Skills, which range from 0 to 8, but entry-level characters will rarely go above 4. For Skills, 1 represents a solidly capable apprentice who can do basic things without supervision and, in time, can produce solid work; 2 represents a seasoned journeyman; 3 represents a master craftsman; 4 represents an expert of local-to-regional renown. 5 is approaching world class, and 6–8 are very rare.

Skills can be physical (Weightlifting, Juggling, Climbing) or academic (Chemistry, Research, Astronomy) or social (Persuasion, Deception, Seduction) or martial (Archery, Swordsmanship, Brawling) or magical (Telekinesis, Healing, Pyromancy). Each campaign world is going to have a different Skill tree, and a GM can choose to have very few Skills (say, ten to fifteen for a single-session campaign) or a massive array (several hundred), although it’s best not to start with several hundred available Skills among new players.

The more Skills there are, the more specialized and fine-grained they will be. For a coarse-grained system, Survival, Bargaining, and Elemental Magic would be Skills. In a more fine-grained system, you’d split Survival into, say: Trapping, Fishing, Camping, and Finding Water. Elemental Magic would become Fireball, Cold Blast, Liquefy Ground, and Move Metal. Bargaining would become Item Appraisal, Negotiation, and Sales Instinct.

Also, things that we assume most or all people in the campaign world can do, do not require Skills. In a modern setting, “Literacy 2” (by a medieval standard) would be assumed, and if someone was well-read they would probably have “Literacy 3”— but we wouldn’t bother writing it down; it can mostly be assumed that a 21st-century American can read and can drive.

As a campaign goes on, and as players do harder and more specialized things, the Skill tree is going to grow. There’s nothing wrong with that. Of course, GMs are going to want to start with a list of basic Skills they think will be useful in the game world. Here’s how I’d recommend doing that: start by listing classes that would be useful in the game’s world. That doesn’t mean the GM is committing to a class system— the classes are “metadata” that will be thrown away, allowing players to invest points as they will. Here are twelve archetypes that might befit a typical late-medieval fantasy world.

  • Soldier (swords, spears, armor, defense).
  • Barbarian (axes, hammers, strength).
  • Ranger (survival skills, defensive fighting, animal husbandry).
  • Rogue (thievery, deception, evasion, sabotage).
  • Healer (defensive magic, curative spells, medical knowledge).
  • Warlock (offensive magic, conjuration, elemental magic).
  • Wizard (buffs/debuffs, combination magics, potions).
  • Monk (bare-fisted fighting, “inner” magic).
  • Merchant (social skills, commerce, regional knowledge).
  • Scholar (chemistry, engineering, historical knowledge, foreign languages).
  • Bard (arts, seduction, high-society knowledge).
  • General (oratory, battle tactics, military knowledge).

For some value of N, generate N primary Skills appropriate to each class. For a coarse-grained Skill system, one might use N = 4; for a fine-grained one, consider N = 10. If a Skill doesn’t fit into a class, add it anyway. If it fits into more than one, don’t worry about it; these classes are just for inspiration. In general, I wouldn’t worry about the complexities of Skill trees (specialties, neighbor Skills, etc.) for an entry-level campaign.

When circumstances create new Skills, GMs have to decide how to “spawn” them. The population median for most Skills is zero, so most characters won’t have a newly spawned Skill at all— but if players’ back stories argue for some exposure, that might make the case for a level. Of course, GMs have to keep player balance in mind while doing this.

As player characters improve, the numbers will increase, but that can be boring. Rolling five dice when you used to roll four is fun, but eventually it’s all just rolling dice. Once players are hitting the 3–5 range, it’s time for the GM to start thinking about specialties. A character can have Medicine 4 and no experience with surgery. We could model a very high-risk surgery as a Difficulty 6 task using Medicine— the player rolls 4dP and has to hit 6 to succeed— but it would be more precise to model it as a Difficulty 3 trial of a harder and more specialized skill: Surgery.

As PCs do harder, more interesting things, the Skill tree may become an actual tree.

There are three ways skills relate to each other. A hard dependency means the parent must be achieved, at each level, before the dependent skill can be learned. When hard dependencies exist, there’s usually a slash in the name of the more specialized skill, e.g., Writing/Fiction or Writing/Poetry. It is impossible for a character to get Writing/Fiction 4 without having Writing 4. Soft dependencies are more lenient: the character’s level in the specialty can exceed that of the parent Skill, as long as there’s nonzero investment in the parent skill— however, the Skill gets harder to improve as the discrepancy grows. Someone could, say, have Medicine 3 and Surgery 4— above-average medical knowledge but fantastic in the operating room— but Surgery 4 (or even Surgery 1) without Medicine isn’t possible. Neighbor Skills do not have a prerequisite relationship, but one can substitute (at a penalty) for the other. If a PC has Low Dwarven 3 and has to read a scroll in High Dwarven, he might be able to read it as if he had High Dwarven 1 or 2.

GMs should, as much as possible, flesh out these relationships before characters are created. An entry-level Skill tree isn’t likely to have much specialization, so hard dependencies will be rare, if used at all. Typically, all of the primary Skills are going to be soft-dependent on parents called Aptitudes— in the example I give below, social Skills would be soft-dependent on Social Aptitude, athletic Skills on Physical Aptitude, and so on.

For each primary Skill, the GM should decide:

  • which Aptitude the Skill is soft-dependent on, and
  • each Skill’s Complexity: Easy (-1), Average (-2), Hard (-3), Very Hard (-4), or Extremely Hard (-5).

Complexity doesn’t measure innate difficulty but relative difficulty— how much additional investment is required to learn the skill. Lifting weights isn’t “Easy”— I’m exhausted after a good session at the gym— but I’d probably model Weightlifting as Easy relative to the Aptitude, Strength: it’s not hard for a character with Strength 3 to get Weightlifting 3.

Fungibility is another factor GMs should determine. Let’s say that Weightlifting is Easy (-1) and Rock-Climbing is Average (-2) relative to Strength. If these Skills are fungible, then a character with Strength 4, and no prior investment in either skill, implicitly has Weightlifting 3 and Rock-Climbing 2. If they’re non-fungible, then the character doesn’t, and will be unable to perform the task without prior investment in the skill.

By default, Easy and Average Skills are fungible by their parents (Aptitudes for primary Skills, broader fields for specialties) whereas Hard+ Skills are not fungible. GMs can overrule this on a per-Skill basis— the GM might decide that Surgery is Average (-2) relative to Medicine but non-fungible. Then, while a character with Medicine 4 can get Surgery 1 rather quickly (having mastered the parent Skill) he doesn’t implicitly have it without investing in the Skill.
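As a sketch, the default fungibility rule can be encoded like this. The function, its argument names, and the numeric encoding of Complexity (1 = Easy through 5 = Extremely Hard) are mine, not part of Core d8, and the interaction between fungibility and partial investment is left unspecified in the text, so I simply return the invested level when there is one:

```python
def effective_rating(invested, parent, complexity, fungible=None):
    """Effective level at which a character can attempt a Skill.

    complexity: 1=Easy, 2=Average, 3=Hard, 4=Very Hard, 5=Extremely Hard.
    By default, Easy/Average Skills are fungible by the parent rating;
    Hard and above are not. GMs can overrule via `fungible`.
    """
    if invested > 0:
        return invested          # prior investment stands on its own
    if fungible is None:
        fungible = complexity <= 2
    if not fungible:
        return 0                 # no investment, no attempt
    return max(parent - complexity, 0)

# Strength 4, no investment: Weightlifting (Easy) -> 3,
# Rock-Climbing (Average) -> 2, matching the example above.
print(effective_rating(0, 4, 1))  # 3
print(effective_rating(0, 4, 2))  # 2
```

The non-fungible Surgery ruling above corresponds to passing `fungible=False`, which yields 0 regardless of Medicine.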

GMs may vary fungibility at the task level. For example, a GM might allow a brilliant-but-unscrupulous (indeed, they sometimes go together) charlatan with Medicine 4 (but no Surgery) to roll 2dP for the task of faking knowledge, but be utterly incapable should he actually have to do it.

Primary Skills (that is, Skills that aren’t specialties of other Skills) are almost always soft-dependent on an Aptitude— these play the role of “ability scores” in other systems, and they function as Skills for learning Skills, but they’re also Skills in their own right. Whether they represent (small-s) skills that can be improved with practice, or immutable talents, is a matter left to the GM.

If the GM’s objective is realism, it should be incredibly uncommon for Aptitudes to go more than one level above where they started; but, toward the objective of modularity and optimism about human potential, the d8 System doesn’t prohibit Aptitude improvement.

Here are some ways in which Aptitudes are different from regular Skills:

  • Everyone has them. The population median for a typical Skill is zero. Most people have never been Scuba Diving and most people outside of Germany don’t speak German. On the other hand, nearly all of us use Manual Dexterity and Logical Aptitude on a daily basis. For a typical Aptitude, the population median is 1–2; 1 for people who don’t use it in daily life, and 2 for people whose professions or trades require it.
  • They change slowly. Improving Skills takes a lot less effort than improving Aptitudes. It’s not uncommon for a mid-level character’s top skills to hit 5 and 6; but Aptitudes of 5+ should be very rare (they can break the game).
  • Below 1, fractional values (½, ¼) exist. For other Skills, the lowest nonzero value is 1. It’s just not useful to keep track of a dilettante having “Battle Axe ¼”. On the other hand, for core Aptitudes like Strength and Manual Dexterity, there’s a difference between extremely (¼) or moderately (½) weak and “Strength 0”, which to me means “dead” (or, at least, nearly so).

One converts from D&D’s 3d6 scale roughly as follows: 3: ¼; 4–7: ½; 8–11: 1; 12–14: 2; 15–17: 3; 18–19: 4; 20+: 5+.
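That conversion is mechanical enough to express as a throwaway helper (the function is mine, not part of Core d8; it returns 5 for 20+, which the text marks as “5+”, i.e., GM’s call):

```python
def aptitude_from_3d6(score):
    """Rough conversion from D&D's 3d6 ability scale to a d8 Aptitude."""
    if score <= 3:
        return 0.25
    if score <= 7:
        return 0.5
    if score <= 11:
        return 1
    if score <= 14:
        return 2
    if score <= 17:
        return 3
    if score <= 19:
        return 4
    return 5  # 20+: 5 or higher, at the GM's discretion

print(aptitude_from_3d6(10), aptitude_from_3d6(18))
```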

What Aptitudes exist in a campaign is up to the GM. Theoretically, a GM could run an Aptitude-free system, in which learning Skills is equally easy (or hard) for all characters. Usually, though, players and GMs want to model a world where people have different natural talents.

For a fantasy setting, my “core set” consists of the following:

  • Physical: Strength, Agility, Manual Dexterity, Physical Aptitude, Stamina.
  • Mental: Logical Aptitude, Creative Aptitude, Will Power, Perception.
  • Social: Charm, Leadership, Appearance, Social Aptitude.
  • Magical: Magic Power, Magic Resistance, Magical Aptitude.

Note: these Aptitudes are not part of “Core d8”— the d8 System doesn’t mandate you use them— though I refer to them for the purpose of example. In a science-fiction setting, Strength isn’t that important and would perhaps be combined with Stamina. In a non-magical setting, discard the Magical ones.

Some of these— Strength, Agility, Will Power, Perception, Charm— are likely to be checked directly. A PC makes a Strength check to open a heavy door, a Charm check to determine an important NPC’s reaction to meeting him for the first time, a Will Power check to determine whether he is able to resist temptation. Those with Aptitude in their name largely exist as Skills for learning other Skills— most primary Skills will be soft-dependent on (“keyed on”) them. So, while Weightlifting will be keyed on Strength, Sprinting on Agility, and fine motor (s|S)kills on Manual Dexterity, most athletic Skills will key on Physical Aptitude (kinesthetic intelligence).

This means it is possible, for example, that a PC has Charm 4 but Social Aptitude 1— he’s very good at making positive impressions on people, but learning nuanced social (s|S)kills is difficult for him.

The d8 System de-emphasizes “ability scores”, so it might seem odd that my core set for fantasy has so many (16) Aptitudes; but this is part of the de-emphasis. To start, I broke up the “master stats”. Dexterity/DEX, I broke into Agility, Manual Dexterity, and Physical Aptitude— all of which are different (though correlated) talents. Intelligence/INT I broke up into Logical Aptitude, Creative Aptitude and Perception. The d8 system, by having fewer levels, limits its vertical specificity in favor of horizontal specificity. “Intelligence 4” could mean a lot of things; on the other hand, if someone has “Logical Aptitude 5, Creative Aptitude 1”, I understand that he’s deductively brilliant but mediocre (and likely uninterested) in the arts.

Notice also that I’ve separated Magic Power and Magical Aptitude from the intelligences. I rather like the idea of a super-powerful, stupid wizard.

If any of the Aptitudes in my fantasy core set are misnamed, Creative Aptitude is the one, because it also includes spatial aptitude and was originally named thus. The “left brain, right brain” model is for the most part outdated, but it gives us a rather balanced split of what would otherwise be a master stat, Intelligence. Is it entirely accurate to lump spatial and creative intelligence together? Probably not; but this set does so because, combined, they are in approximate power balance with Logical Aptitude.

Building Characters

Core d8 doesn’t tell GMs how characters should be made. Aptitudes and initial Skills can be “rolled”, but with experienced players I think a point-buy system is better. Players should start with “big picture” conceptions of who their characters are, their back stories, and their qualitative advantages and disabilities.

Players should, in general, get k*N points to disperse among their PCs’ Aptitudes, where N is the number of Aptitudes that exist and k is between 0.5 (regular folk) and 1.5 (entry level but legendary). GMs can decide whether various qualitative, subjective traits cost Aptitude points, or (if disadvantageous) confer them.

The baseline value of most Aptitudes is 1, from which the point-buy costs are:

  • ¼: -2 points (GM discretion).
  • ½: -1 point
  • 1: 0 points
  • 2: 1 point
  • 3: 3 points
  • 4: 5 points
  • 5: 8 points (GM discretion).
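The table above assumes a baseline of 1; totaling a sheet is then a dictionary lookup. A small helper (mine, not part of Core d8)— note that Aptitudes with a different baseline, such as Magic Power in a mostly non-magical world or the optional gender modifiers below, shift these costs and need separate bookkeeping:

```python
# Point-buy costs relative to the default baseline of 1.
APTITUDE_COST = {0.25: -2, 0.5: -1, 1: 0, 2: 1, 3: 3, 4: 5, 5: 8}

def aptitude_budget_spent(sheet):
    """Total points spent on a {aptitude_name: level} dict,
    assuming every Aptitude has the default baseline of 1."""
    return sum(APTITUDE_COST[level] for level in sheet.values())

# A modest entry-level spread: 1 + 0 + 3 - 1 = 3 points.
print(aptitude_budget_spent({"Strength": 2, "Agility": 1,
                             "Perception": 3, "Charm": 0.5}))
```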

Perhaps I risk offense by saying this, but men and women are different: men have more upper body strength and women are more attractive (and are perceived as such, even by infants). So, I’m inclined to give male characters +1 Strength (baseline 3) and women +1 Appearance (baseline 2). This doesn’t prevent players from “selling back” that point for something else. A player can create a Strength ¼, Appearance 4 male character; a player can also make a Strength 4, Appearance 1 female (e.g., Brienne of Tarth in Game of Thrones). It does make it cheaper to have a Strength 3+ male or Appearance 3+ female character. If you’re a GM and you find these modifiers sexist, throw them out. It’s your world.

If I were to build an Aptitude sheet for Farisa, protagonist of Farisa’s Crossing, it would look like this:

  • Strength: 1 — average female.
  • Agility: 1 — average, untrained.
  • Manual Dexterity: ½ — clumsy.
  • Physical Aptitude: ½ — same.
  • Stamina: 2 — able to walk/run long distances.
  • Logical Aptitude: 4 — the smartest person she knows excl. Katarin and [SPOILER].
  • Creative Aptitude: 3 — Raqel has more artistic talent; so does [SPOILER].
  • Will Power: 3 — determination necessary to survive Marquessa.
  • Perception: 2 — though [SPOILER] may make a case for 3.
  • Charm: 2 — quirky, nerdy; able to use her atypicality to advantage sometimes.
  • Leadership: 2 — teacher at prestigious school.
  • Appearance: 3 — above-average female.
  • Social Aptitude: ½ — Aspie and probably bipolar (Marquessa).
  • Magic Power: 4 — very strong mage by [SPOILER] standard.
  • Magic Resistance: 1 — iffy b/c mages are weaker to most magic in this world.
  • Magical Aptitude: 4 — [SPOILER] and [SPOILER] and then [SPOILER].

Here, k turns out to be 1.75 (+28); she’s in a world where most people have no magic and the baseline for Magic Power and Magic Aptitude is 0— so those cost her 8 points each. Stats-wise, she’s overpowered. I would argue that this is “paid for” by her various disadvantages. She has the horrible illness that afflicts all mages— the Blue Marquessa. She’s a woman attracted to women in a puritanical (1895 North America–based) society. She’s visibly different from the people around her. Her rigid morality (neutral/chaotic good) gets her in trouble, and so does her good nature (her theory-of-mind inadequately models malevolence, leading to [SPOILER]). Finally, there’s the bounty put on her head by trillionaire Hampus Bell, Patriarch (full title: Chief Patriarch and Seraph of All Human Capital) at the Global Company. She probably needs that +28 to survive.

Aptitudes need to be selected before primary Skills are bought, as the Aptitudes will influence how much it costs to learn Skills.

By default, I would give players k*sqrt(N) points where N is the number of primary Skills that exist and k is… around 5. The going assumption is that entry-level characters (regardless of chronological age, for balance) have about five years of adventure-relevant experience, which gives them enough time to grab a few 1’s and 2’s, and maybe a 3 or 4 if talented. If you’re building a mentor NPC, you might use k = 20 or 30.

The point-buy cost of raising a Skill one level depends both on the Skill’s Complexity and the character’s level of the Aptitude it’s keyed on, as follows:

  • a base of 1 point for Easy skills; 2 for Average; 4, Hard; 8, Very Hard; 16 Extremely Hard; times:
  • 1 (per level) for each level up to A, the rating in the relevant Aptitude; 3 from A to A+1; 5, to A+2; 10, to A+3; 20, to A+4. (Here, treat A as 0 if A < 1.)

Let’s say, for example, that Espionage is Hard and keyed on Social Aptitude. Then a character with Social Aptitude 3 will pay 4 points each to get the first levels; if he wants Espionage 4, he’ll have to pay an additional 12 (total: 24). If he wants Espionage 5, he’ll have to pay 20 more (total: 44).
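The two-part rule above reduces to a short function. This sketch (names are mine, not part of Core d8) reproduces the Espionage example— 12, 24, and 44 points for levels 3, 4, and 5— and raises an error beyond A+4, since the text doesn’t price levels past that point:

```python
COMPLEXITY_BASE = {"Easy": 1, "Average": 2, "Hard": 4,
                   "Very Hard": 8, "Extremely Hard": 16}

def skill_cost(complexity, aptitude, target_level):
    """Total point-buy cost to raise a Skill from 0 to target_level.

    Per level: the Complexity base cost, times 1 up to A (the rating
    in the relevant Aptitude), then 3, 5, 10, 20 for A+1 .. A+4.
    A is treated as 0 if below 1.
    """
    base = COMPLEXITY_BASE[complexity]
    a = aptitude if aptitude >= 1 else 0
    multipliers = {1: 3, 2: 5, 3: 10, 4: 20}
    total = 0
    for level in range(1, target_level + 1):
        if level <= a:
            total += base
        else:
            steps_above = level - a
            if steps_above > 4:
                raise ValueError("levels beyond A+4 are not priced")
            total += base * multipliers[steps_above]
    return total

# Espionage: Hard, keyed on Social Aptitude 3.
print(skill_cost("Hard", 3, 3))  # 12
print(skill_cost("Hard", 3, 4))  # 24
print(skill_cost("Hard", 3, 5))  # 44
```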

Applying Skills

Any time it is uncertain (per GM) what’s going to happen, dice are rolled. Often, this is because a player wants his character to do something where there’s a nontrivial (1% or greater) chance of failure. RPGs call this a check or trial, and both terms are used interchangeably.

Active trials occur when the PC attempts something and succeeds or fails. There are also passive trials, where the GM needs to know if the PC made a good impression on an important NPC (Charm or Appearance check), resisted temptation (Will check), or became aware of something unusual in the environment (Perception check). Passive trials will almost always be covert checks (described below).

Unopposed trials (also called “tasks”) are those in which the PC is the sole or main participant. Attempting to jump from one rooftop to the next is an unopposed trial; so is playing a musical instrument (although NPCs may differ in their appreciation of the PC’s doing so). There are two kinds of unopposed trials: binary checks and performance checks.

Binary Trials

The GM must decide which Skill is being checked. This will typically be the most specialized Skill that exists in the game. For example, if the task is surgery and Surgery is a specialty of Medicine, the roll will be performed using a character’s Surgery rating. A character who does not have that Skill at all (“Surgery 0”) cannot do it; otherwise, the number of dice (dP, or Poisson dice) equals the player’s rating. In cases where no Skill applies at all— say, “situation rolls” like weather that are (usually) out of the characters’ control— two dice (2dP) are used.

A binary check is made against a Difficulty rating, which is always a nonnegative integer. Difficulty 0 (“trivial”) means the character succeeds (without rolling dice) as long as there is some investment in the Skill. There’s no need to roll; the only way a PC could fail is if he were hit by an Amnesia spell (or equivalent) in the middle of the action. Difficulty 1 (“simple”) means there is some chance of failure; for example, recognizing a fairly common word of a foreign language. Difficulty 2 (“moderate”) is something like jumping from a second-story window; Difficulty 3 (“challenging”) would be something like cooking for twelve people, all with different dietary requirements, under strict time pressure. Difficulty 4 (“very challenging”) would be driving an unfamiliar race car on an unknown track at competitive speeds. There’s no limit to how high Difficulty can go. At 12, even a maxed-out character (Skill 8) can expect to fail 87% of the time.
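A binary check, then, is just “NdP meets or beats the Difficulty,” and the long-shot odds are easy to estimate by simulation. A sketch (function names are mine) that checks the maxed-out-character claim:

```python
import random

def roll_dP(n, rng):
    """ndP: d8 with 1-3 = 0, 4-6 = 1, 7 = 2, 8 = 3; 8s chain upward."""
    total = 0
    for _ in range(n):
        face = rng.randint(1, 8)
        total += (0, 0, 0, 1, 1, 1, 2, 3)[face - 1]
        if face == 8:
            while rng.randint(1, 8) == 8:
                total += 1
    return total

def success_rate(skill, difficulty, trials=200_000, seed=1):
    """Estimated probability that a binary check succeeds."""
    rng = random.Random(seed)
    wins = sum(roll_dP(skill, rng) >= difficulty for _ in range(trials))
    return wins / trials

# Skill 8 against Difficulty 12: success lands somewhere near 0.13,
# i.e., failure in the high 80s percent, matching the text.
print(f"{success_rate(8, 12):.3f}")
```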

What does failure mean? Before the dice are rolled, the GM should decide, and the player should understand:

  • how long the action takes. In turn-based combat this could be a turn (seconds of in-world time). For research or invention, this could be two weeks of in-world time.
  • what resources are required, and whether they are consumed on failure.
  • other consequences of failure, which can range from nothing (the player can try again) to devastating (failing to jump safely from a moving train).
  • whether the player knows if he succeeded or failed at all. This will be discussed below.

For example, PC Sarah has Climbing 4, but she’s climbing cautiously and has top-of-the-line equipment. The GM judges that climbing a nearly vertical rock face, one that has stymied expert climbers, is a Difficulty 5. The GM determines that an attempt will take an hour and 550 kCal of food energy. Since her skill is 4, Sarah (well, Sarah’s player) rolls 4dP: {1, 0, 0, 1} for a total of 2, which falls short of the 5 necessary to make the climb. Since she took safety precautions, the result of this failure is: no cost but the time and food. She can try again.

The Three-Attempt Rule, or 3AR, is the general principle that after 3 consecutive failures, the player typically must have his character do something else. Characters are not min-maxers, and they don’t fully know how easy or difficult an objective is. They get discouraged. If the player has the character come back after a night of sleep, or a month later with more (s|S)kill, or go about the problem in a distinctly different way, this isn’t a violation of the 3AR. Last of all, the 3AR never applies when there’s a strong reason for the character to persist— a life-or-death situation (such as combat), a religious vow, or a revenge motive. A PC in battle who swings and misses needs no excuse to keep swinging; but the 3AR is there to block unimaginative brute force and while-looping; the player can’t say “I continue to attempt [Difficulty 9 feat] until I succeed.” (There is no, “I take 20.”)

Most binary trials are pass/fail. The degree by which the roll fell short of, or exceeded, the Difficulty target isn’t considered— any effects of the failure or success are determined separately. That is, the dice rolls represent the character’s performance (how good he is at opening the chest) but not raw luck (what he finds, if successful, inside it). If GMs prefer to combine the two for speedier gameplay, that is up to them.

The d8 System has no concept of “critical failure” or “botch”, which I consider to be poor design. Failure can of course have negative consequences (making a loud noise and waking someone during Burglary) but the player shouldn’t be punished for such low rolls that the rules say bad things must happen and therefore the GM must make something up.

When it comes to binary trials, we say the player (or the PC) is in advantage if the relevant Skill meets or exceeds the Difficulty (adjusted for modifiers); he is out of advantage if the relevant Skill rating is less than the Difficulty. Since the median of NdP is N (for all values we care about) a roll in-advantage will succeed more than 50 percent of the time; a roll out-of-advantage will succeed less than half the time.
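The median claim can be checked exactly by convolving the single-die distribution (helper names are mine; the chain is truncated where its probability is negligible):

```python
def dP_pmf(max_total=40):
    """Exact pmf of one dP die; truncation error is astronomically small."""
    pmf = [3/8, 3/8, 1/8]
    k = 0
    while len(pmf) <= max_total:
        pmf.append((1/8) * (7/8) * (1/8)**k)  # an 8, then k chained 8s
        k += 1
    return pmf

def convolve(p, q, cap):
    out = [0.0] * (cap + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= cap:
                out[i + j] += a * b
    return out

CAP = 40
one = dP_pmf(CAP)
dist = [1.0]  # zero dice: always 0
for n in range(1, 9):
    dist = convolve(dist, one, CAP)
    p_at_least_n = sum(dist[n:])
    print(n, round(p_at_least_n, 3))  # stays above 0.5 for N = 1..8
```

For one die, Pr(1dP ≥ 1) = 5/8 = 0.625; for two, Pr(2dP ≥ 2) = 37/64 ≈ 0.578; the in-advantage edge shrinks as N grows but does not vanish.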

Performance Trials

In a binary trial, one succeeds or one doesn’t. Performance trials have degrees of success. For a character with Hunting 3, the player rolls 3dP, but while a 2 might suffice to bring home a rabbit, a 4 gets a deer, a 6 gets a bison, and a 15 might result in meeting a dragon who befriends the party.

General guidelines for performance interpretation are below.

  • 0: a bad performance (2dP: bottom 15%). No evidence of skill is shown at all.
  • 1: a poor performance (2dP: bottom 40%). Some success, but it’s an amateurish showing. The work may be dodgy; reception will be mediocre.
  • 2: a fair performance (2dP: average). The character demonstrates skill appropriate to his class or profession— not especially good, but not bad.
  • 3: a good performance (2dP: top 35%). The performance is significantly above the expectation of a competent practitioner.
  • 4: a very good performance (2dP: top 15%, 4dP: average). Rewards for exceptional performance accrue. A singer might earn three times the usual amount of tips.
  • 5: a great performance (2dP: top 6%, 4dP: top 40%). Like the above, but more. Instant recognition is likely to accrue; this is approaching world-class.
  • 6–7: an excellent(+) performance (2dP: top 2%, 4dP: top 25%). This is the kind of performance that, if reliably repeated, will lead to renown and fame.
  • 8–9: an incredible(+) performance (2dP: top 0.1%; 4dP: top 6%). The character has done so well, some people are convinced that he’s a genius, or that he has magical powers, or that he’s cheating.
  • 10–11: a heroic(+) performance (2dP: 1-in-50,000; 4dP: top 1%).
  • 12+: a legendary performance (2dP: 1-in-2,000,000; 4dP: top 0.1%).
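The percentile bands quoted in parentheses can be verified from the exact distribution. A short check (helper names are mine; the chain is truncated where negligible):

```python
def dP_pmf(cap=60):
    """Exact pmf of one dP die, indices 0..cap."""
    pmf = [3/8, 3/8, 1/8]
    k = 0
    while len(pmf) <= cap:
        pmf.append((1/8) * (7/8) * (1/8)**k)
        k += 1
    return pmf

def convolve(p, q, cap=60):
    out = [0.0] * (cap + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= cap:
                out[i + j] += a * b
    return out

one = dP_pmf()
two = convolve(one, one)
four = convolve(two, two)
tail = lambda pmf, k: sum(pmf[k:])

# 2dP: Pr(0) ≈ 0.14 ("bottom 15%"), Pr(>=3) ≈ 0.34 ("top 35%"),
# Pr(>=5) ≈ 0.06 ("top 6%"); 4dP: Pr(>=8) ≈ 0.05 ("top 6%").
print(f"2dP: Pr(0)={two[0]:.3f}  Pr(>=3)={tail(two, 3):.3f}  "
      f"Pr(>=5)={tail(two, 5):.3f}")
print(f"4dP: Pr(>=5)={tail(four, 5):.3f}  Pr(>=8)={tail(four, 8):.3f}")
```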

Of course, GMs are at liberty to interpret these results as they wish, and these qualitative judgments are contextual. A musician who gives a 3/good performance at a world-famous orchestra will mildly disappoint the audience; one who gives 2/fair will likely be booed. Usually, the results (and whether they benefit or disadvantage the character) are correlated to the outcome rolled; but, if a player games the system with modifier min-maxing and rules lawyering, and somehow produces a 15/legendary+++ result whilst Singing, the GM is allowed to have him run out of town as a witch.

Degrees of Transparency

In general, players roll the dice themselves and know how they did. The d8 System deliberately keeps increments “big” so they correspond to noticeable degrees of quality.

Should the GM tell players the precise Difficulty ratings of what their PCs are doing? The d8 System doesn’t call that shot. A GM can say, “It’s Difficulty 5” or she can say, “It looks like a more experienced climber would have no problem, but it leaves you feeling uneasy.” That’s up to her taste. As a general rule, characters have no problem “seeing” Difficulties and performances 2 levels beyond their own, maybe more. A PC with Climbing 4 knows the difference between challenging-but-likely (4) and a-stretch (5) and out-of-range-but-possible (6) and “very unlikely” (7+).

There are cases, though, when players shouldn’t know the Difficulty level. Perhaps there’s an environmental factor they haven’t perceived. Sometimes, but more rarely, they shouldn’t even know how well they performed. There’s a spectrum of transparency applicable to these trials, like so:

  • Standard Binary: The GM tells the player the Difficulty of the action. Whether a numerical rating or qualitative description (“It looks like someone at your level of skill can do it”) is used, d8 doesn’t specify.
  • Concealed Difficulty: there are concealed environmental factors that make the GM unable to indicate a precise Difficulty level (or that induce the GM to lie about it— though she and her players should reach agreement on what the GM can and cannot lie about). The player rolls the dice and the GM reveals whether success or failure was achieved, but not the margin. The player’s experience is comparable to that of a performance trial, rather than a binary one.
  • Concealed Outcome: Appropriate to social skills. The player rolls the dice and has a general sense of how the PC performed, but not whether success was achieved. With information skills (e.g. Espionage) GMs are, absolutely, without equivocation, allowed to lie— the player may be deceived into thinking he succeeded, and fed false information, if the PC was in fact unsuccessful.
  • Concealed Performance: the player knows that a Skill is being checked, and that’s it. The GM may ask questions about how the player intends to do it— to decide which Skill applies if there’s more than one candidate, and possibly to apply modifiers if the player comes up with an ingenious (or an obnoxious and stupid) approach. The GM rolls. She may give a qualitative indicator (e.g., “You feel you could have done better”) to the player, or she may not.
  • Covert: the player is unaware that the roll ever took place. (This is what those pre-printed sheets of randbetween(1,8) are for.) The PC’s in a haunted house but the player has no idea that he just failed a Detect Spirits check, or that he failed a check of any kind, or that anything was even checked.

Tension

As written so far, the d8 System still has the “Fair Cook Problem”. Someone with Cooking 2 is going to fail at cooking (roll 0 from 2dP) 14% of the time, or one dinner per week. This is unrealistic; someone who’s 86% reliable at a mundane, low-tension activity simply isn’t a professional-level cook. Of course, if he’s subject to the high-pressure environment of a show like Top Chef, that probability of failure becomes more reasonable….
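The arithmetic behind that 14% is worth making explicit; a one-line check in Python (the 3/8 zero weight is the dP's own):

```python
# A dP shows 0 with probability 3/8, so a Skill 2 cook rolls all zeros
# on 2dP (3/8)^2 of the time.
p_fail = (3 / 8) ** 2
print(f"{p_fail:.1%}")   # 14.1% -- about one botched dinner a week
```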

The d8 System resolves this issue with the Tension mechanic. Routine tasks occur at Low Tension. Social stakes and non-immediate consequences suggest Medium Tension. Death stakes imply High Tension; combat is always High Tension, as are Skill checks in hazardous environments. Singing in front of friends or for practice is Low Tension; doing it for a living is Medium Tension; singing for a deranged king who’ll kill the whole party if he doesn’t like what he hears, is High Tension.

Low and Medium Tension, for binary and performance trials, allow the player to “take points”, by which I mean “take 1” for each die (a dP’s mean value is just slightly above 1). At Low Tension, the player can “take 1” for all his dice if he wishes. Medium Tension allows him to take up to half, rounded down. So, a player with Skill 3 has the following options at each Tension level:

  • Low: 3dP, 2dP+1, dP+2, 3.
  • Medium: 3dP, 2dP+1.
  • High: 3dP only.
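The take-points menu is easy to enumerate mechanically. A minimal sketch (the function name is mine):

```python
def take_point_options(skill: int, tension: str):
    """Enumerate (dice, taken points) choices at each Tension level.
    Low: take 1 point in place of any number of dice; Medium: for up
    to half the dice (rounded down); High: roll everything."""
    cap = {"low": skill, "medium": skill // 2, "high": 0}[tension]
    return [(skill - taken, taken) for taken in range(cap + 1)]

# Skill 3 reproduces the bullets above:
print(take_point_options(3, "low"))     # [(3, 0), (2, 1), (1, 2), (0, 3)]
print(take_point_options(3, "medium"))  # [(3, 0), (2, 1)]
print(take_point_options(3, "high"))    # [(3, 0)]
```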

For strictly binary trials, the PC does best to take as many points as possible when in advantage, and to roll all dice (as he would have to do at High Tension) when out-of-advantage. Thus, when the Difficulty exceeds the PC’s Skill level, the Tension level becomes irrelevant.

If success is guaranteed, there’s no reason to do the roll. If someone has Cooking 2 in a Medium Tension setting, but he’s only trying to broil some ground beef (Difficulty 1) there is no need to roll for that— dP+1 against 1 will always succeed.

Most situations occur at Medium Tension; it jumps to High in combat or wherever a time-sensitive threat to life is possible. The regular stresses of unknown settings, meeting new people, and enduring the daily discomforts of long camping trips make the case for Medium. Low Tension is mostly used for familiar settings, downtime, and practice (skill improvement).

On performance rolls and concealed-difficulty rolls, the player may not know whether it’s better to take dice or points— it’s up to the GM whether to reveal what’s in the player’s interests (if it’s clear cut). For covert trials, the GM should typically make these decisions in the player’s interests— taking points when in-advantage and dice when out-of-advantage— unless circumstances strongly suggest otherwise.

Gate-Keeping, Compound Trials, and Stability

Tension handles the “Fair Cook Problem”. We get the variability we expect from high-tension situations (combat) but we don’t have an unrealistically high probability of competent people failing, just because the dice said so. A Fair (2) cook will be able to produce Fair (2) results 100% of the time at Low Tension, 62.5% of the time at Medium Tension (dP+1), and 58% of the time at High Tension (2dP).
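These figures can be verified by exact enumeration. A sketch, using my reading of the dP's face probabilities (chaining only moves mass around above 3, so these low targets are unaffected):

```python
from fractions import Fraction
from itertools import product

# Single-die probabilities; chaining redistributes mass above 3, so
# success probabilities against targets this low are exact.
PMF = {0: Fraction(3, 8), 1: Fraction(3, 8), 2: Fraction(1, 8), 3: Fraction(1, 8)}

def p_success(n_dice: int, bonus: int, difficulty: int) -> Fraction:
    """P(n_dice dP + bonus >= difficulty); exact for difficulty <= 3."""
    total = Fraction(0)
    for faces in product(PMF, repeat=n_dice):
        if sum(faces) + bonus >= difficulty:
            p = Fraction(1)
            for f in faces:
                p *= PMF[f]
            total += p
    return total

print(p_success(1, 1, 2))  # Medium Tension, dP+1 vs 2: 5/8 = 62.5%
print(p_success(2, 0, 2))  # High Tension, 2dP vs 2: 37/64, about 58%
```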

This doesn’t handle everything player characters might want to do. For many performances and tasks, the variance offered by Poisson dice is too high. Let’s use weightlifting as an example. We might decide that each level of Strength (or Weightlifting skill) counts for 150 pounds (68 kg) of deadlifting capacity. A character with Strength 1 can deadlift 150 pounds; with Strength 2, 300. The 2dP distribution suggests that a Strength 2 (300-pound) deadlifter has a 34% chance of a Strength 3 feat: lifting 450 pounds. I can tell you: that’s not true. It’s about zero. Raw muscular strength doesn’t have a lot of variability— people don’t jump 150 (let alone 300) pounds in their 1RM because the dice said so.

When it comes to pure lifting strength, I think GM fiat is appropriate— the PC can lift it, or can’t. Skill and day-to-day performance are factors, but raw physical capacity dominates the result. If GMs and PCs want to know, down to ten-pound increments, how much a character can deadlift, they’re probably going to need something finer-grained than five or six levels. I’m not a stickler for physical precision, though, so I’m fine with a module saying “Strength 3 required” rather than “400 pounds”.

Running speed also doesn’t have a whole lot of variability. A 4-hour marathoner (level 2) is not going to run a 2:20 (level 7) marathon, ever— but 2dP spits out 7 about once in 250 trials. This is a case where GMs can say, “Uh, no.” As in, “Uh, no; I’m not going to roll the dice to ‘see if’ your Agility 2 character wins the Boston Marathon.”

Whether this is a problem in role-playing games is debatable. Athletic events measure people’s raw physical capabilities, and there just isn’t a lot of variability, because these events are designed to measure what the athletes can do at their best, and therefore remove intrusions. Role-playing environments are typically chaotic and full of intrusions. This, I suppose, allows for a certain “fudge factor”; the higher variability of an Agility check makes it appropriate to running through a forest while being chased, but not competitive distance running, at which an Agility 2 runner will never defeat one with Agility 4.

What about cases where we want some performance variability, but not as much as ndP gives us? Shakespeare, one presumes, had Playwriting 8; a performance of 8 is also 94th-percentile from a playwright with Playwriting 4. Does that mean that 6% of his efforts are Shakespearean? Well, it’s hard to say. (Not all of Shakespeare’s efforts were Shakespearean, but that’s another topic. Titus Andronicus gets points for a South Park homage, but Othello it ain’t.) It’s quite subjective, but for GMs who find it hard to believe that someone with Playwriting 4 can kick out a Shakespeare-level (8+) script once in 16 efforts, Stability is a mechanic that, well, stabilizes performance trials. Stability N means that the roll will be done 2*N + 1 times, and the median selected.

Stability is a heavyweight mechanic, appropriate to long, skill-intensive efforts like composing music, writing a book, or playing a game of chess. You don’t want to use it for quick actions, such as in combat, and you typically wouldn’t use it for binary trials where large deviations may not matter more than small ones. For example, if a musician is putting together an album, one way to simulate this is to have a Composition check for each song on the album— another is just to use Stability 1.
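For readers who want to see Stability in action, here is a sketch. The d8-to-dP face mapping is my assumption, chosen to match the tail probabilities quoted in this document:

```python
import random

def roll_dP(rng: random.Random) -> int:
    """One Poisson die. Assumed d8 mapping {1-3: 0, 4-6: 1, 7: 2, 8: 3},
    chaining on the 8: each further consecutive 8 adds 1."""
    face = rng.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    total = 3
    while rng.randint(1, 8) == 8:   # chain
        total += 1
    return total

def performance(skill: int, rng: random.Random) -> int:
    return sum(roll_dP(rng) for _ in range(skill))

def stable_performance(skill: int, stability: int, rng: random.Random) -> int:
    """Stability N: roll 2N+1 times and keep the median."""
    rolls = sorted(performance(skill, rng) for _ in range(2 * stability + 1))
    return rolls[stability]   # the middle element

# Rough check: how often does Playwriting 4 turn in an 8+ script?
rng = random.Random(0)
n = 20_000
plain = sum(performance(4, rng) >= 8 for _ in range(n)) / n
stable = sum(stable_performance(4, 1, rng) >= 8 for _ in range(n)) / n
print(plain, stable)   # roughly 6% plain; much rarer with Stability 1
```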

For a case study of the mechanic’s simulative effectiveness, let’s consider chess, where we actually have some statistical insight into how likely a player is to beat someone of superior skill. Chess 4 corresponds, roughly, to an Elo rating of 2000— well into the top 1 percent, but not quite world class. Chess 8 corresponds to about 2800, which has only been achieved by a few people in history (Magnus Carlsen, the best active right now, has 2862). Elo ratings are based purely on results, and a 10-fold odds advantage is called 400 points— in other words, each Elo point represents about a 0.58% increase in the odds of winning (10^(1/400) ≈ 1.0058).

So, we can test the d8 System, without Stability, for how well it models this. A Chess 4 player should have a 1% chance (counting draws at half) of beating a Chess 8 player; but how often does 4dP actually win or draw against 8dP? At High Tension, about 16% of the time. That’s far too high for chess, a board game where there’s really no luck factor. We have to adjust our model. Well, first of all, we drop the Tension to Medium. No one’s going to die— although the superior player might be embarrassed by losing to someone 800 points below her. Then, we use Stability 1. Finally, we model it as an attacker/defender opposed action, which means that if there’s a tie in the performance score, it goes to the defender— whom we decide to be the more skilled player. Then, we can expect the Chess 4 player to win only 1.331% of the time (95% CI: 1.264% – 1.398%) against the one with Chess 8. That’s a 748-point Elo difference as opposed to 800. Is it perfect? No— among other things, it ignores that high-level chess games actually do often end in draws— but it’s close enough for role-playing.
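A curious GM can replicate this kind of experiment in a few lines. The sketch below makes specific modeling choices the text leaves open (both players roll all their dice rather than taking points, and ties go to the stronger player), so it illustrates the method rather than reproducing the 1.331% figure exactly; the dP mapping is likewise my assumption:

```python
import random

def roll_dP(rng: random.Random) -> int:
    # Assumed d8 mapping {1-3: 0, 4-6: 1, 7: 2, 8: 3}; each further 8 adds 1.
    face = rng.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    total = 3
    while rng.randint(1, 8) == 8:
        total += 1
    return total

def median_roll(skill: int, stability: int, rng: random.Random) -> int:
    rolls = sorted(sum(roll_dP(rng) for _ in range(skill))
                   for _ in range(2 * stability + 1))
    return rolls[stability]

def challenger_win_rate(challenger: int, champion: int, stability: int,
                        trials: int, rng: random.Random) -> float:
    # Opposed performance with Stability; ties go to the stronger player.
    wins = sum(median_roll(challenger, stability, rng)
               > median_roll(champion, stability, rng)
               for _ in range(trials))
    return wins / trials

rng = random.Random(42)
print(challenger_win_rate(4, 8, 1, 50_000, rng))   # low single digits
```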

Stability is one way to “gate keep” the highest levels of performance. Another way, for complex endeavors, is to use compound trials that pull on multiple Skills. A PC who wants to pen a literary classic might face a compound difficulty of [Writing/Fiction 4, Characterization 4, Knowledge/Literary 3]. A PC who wants to write a bestseller would face [Writing/Fiction 1, Marketing 4, Luck 5]. It’s up to GMs how to interpret mixed successes. For harder trials, GMs should require that all succeed; for easier ones, they can allow a mixed success to confer some positive result. (“Your book sells well, but the critics pan your poor writing, and your ex thinks the book is about her.”)

Modifiers

Situational factors— low light, being distracted, assistance from someone else, being physically attractive— make tasks easier or harder than they would otherwise be. Because the d8 System deliberately makes steps between consecutive Skill, Difficulty, and performance levels fairly large— they’re supposed to represent distinctions the characters would actually notice— modifiers shouldn’t be used lightly. Nuisance factors that would “deserve” small modifiers in finer-grained systems should probably, in d8, be ignored unless they, in bulk, comprise a substantial impediment.

There are two types of modifiers: point and level modifiers. In Skill substitution, we see level modifiers. If a PC has Archery 3, and the Crossbow Skill neighbors it with Easy (-1) relative Complexity, then the PC implicitly has Crossbow 2. He rolls 2dP instead of 3dP— not everything he knows about Archery is applicable, but some of it is.

Most situational modifiers, on the other hand, will be point modifiers applied to the result of the dice, after they are rolled. At twilight, the poor light might make it 1 point harder to hit a target: Difficulty 4, in effect, instead of Difficulty 3. However, we call this modified roll “3dP – 1”, rolled against 3; rather than 3dP against 3 + 1 = 4. Why? Because it would be confusing if a “+1” modifier made the character’s life harder and a “-1” modifier made it easier.

Which type of modifier should a GM use? Situational modifiers should almost always be point modifiers, because while they can make a task easier or harder to pull off, they don’t really affect the skill level of the character. Skill substitution, on the other hand, is the case where the negative level modifier is appropriate— the PC with High Dwarven 3 “knows three ways to do things” (per abstraction) in High Dwarven, but only “two of those things” transfer over to Low Dwarven.

Severe illness might justify a negative level modifier; regular situational factors don’t. As for positive level modifiers, I think those only make sense under magical or supernatural influence. It’s conceivable that a random schlub could get “+3L Fighting” in The Matrix, but most real-world tactical advantages don’t actually increase performance— they merely reduce difficulty.

If negative point modifiers reduce a character’s performance below 0, it’s treated as zero. Likewise, if positive point modifiers reduce the effective Difficulty of a purely binary roll to 0— the player is now rolling 3dP+2 against 2— then there is no need to perform the roll at all.

If level modifiers reduce a Skill below 1, the fractional levels ½ and ¼ are used before going to 0. If Chainsaw Juggling is Hard (-3) relative to Juggling but fungible, a character with Juggling 4 implicitly has Chainsaw Juggling 1; if Flaming Chainsaw Juggling is Medium (-2) relative to regular Chainsaw Juggling, then this character has “Flaming Chainsaw Juggling ¼”.

When level modifiers come from Skill substitutions, the step after ¼ is 0— the Skill can’t be faked (as if it were nonfungible). When the debuffs come from other sources (sleeplessness, ergotism, PC madly in love with a statue) they cease having additional negative effect at ¼, which is as low as a Skill or Aptitude can get.

To roll at sub-unity levels, use the following modified dice; the “chaining” is the same as for a dP.

  • ½dP : {0, 0, 0, 0, 0, 1, 1, 2} / on 2, chain.
  • ¼dP : {0, 0, 0, 0, 0, 0, 1, 1*} / on 1*, chain.

Thus, the probabilities of hitting various difficulty targets are:

  • Skill 1: {1: 5/8, 2: 2/8, 3: 1/8, 4: 1/64 … }
  • Skill ½: {1: 3/8, 2: 1/8, 3: 1/64, 4: 1/512 … }
  • Skill ¼: {1: 2/8, 2: 1/64, 3: 1/512, 4: 1/4,096 … }
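These tables follow a single pattern: read the base tail up to the die's top face, then charge one chain success (1 in 8) per additional point. A sketch in exact fractions; the uniform 1-in-8 chain is my reading of the rules, chosen to match the tables above:

```python
from fractions import Fraction

def make_tail(base_tail: dict):
    """Build P(roll >= t) for a chained die from its no-chain tail table.
    Past the top face, each extra point requires the chaining face again
    (1 face in 8 on all three dice -- my reading)."""
    top = max(base_tail)
    def tail(t: int) -> Fraction:
        if t <= 0:
            return Fraction(1)
        if t <= top:
            return base_tail[t]
        return Fraction(1, 8) ** (t - top + 1)
    return tail

dP = make_tail({1: Fraction(5, 8), 2: Fraction(2, 8), 3: Fraction(1, 8)})
half_dP = make_tail({1: Fraction(3, 8), 2: Fraction(1, 8)})   # ½dP
quart_dP = make_tail({1: Fraction(2, 8)})                     # ¼dP

# Reproduces the tables above, e.g.:
print(half_dP(3), quart_dP(4))   # 1/64 1/4096
```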

Skill Improvements

Skills improve in two ways. One is through practice, which typically occurs in the downtime between adventures— although heavy use of the Skill during the adventure should also count for a few practice points (PP). Practice happens “off camera”, for the most part, between gaming sessions when the characters are presumably attending to daily life and building skills rather than scouring dungeons.

A Practice Point (PP) represents the amount of in-world time it takes to reach level 1 of an Easy primary Skill. It might be 200 hours (4 weeks, full time); it might be 650— it depends on how fine- or coarse-grained the Skills are (and, also, how fast the GM wants the players to grow). Each level of Complexity doubles the cost: 2 PP for Average, 4 PP for Hard, 8 PP for Very Hard. Furthermore, this isn’t the cost to gain a level, but the cost of an attempt to learn that level, using the relevant Aptitude. In other words, if Painting is keyed on Creative Aptitude, then to reach Painting 4 is a Creative Aptitude check. Practice almost always occurs at Low Tension, so PCs typically have a 100% chance of success for each level up to the level of that Aptitude.

In the example above, let’s say a PC has Creative Aptitude 3, and Painting is Average in Complexity. Then it takes 2 PP to get Painting 1, 2 more for Painting 2, and 2 more for Painting 3, for a total of 6 PP. Dice never have to be rolled because of the Low Tension— the PC will always succeed up to level 3. To get Painting 4, however, the player spends 2 PP only to get an attempt of 3dP against 4 (37% chance). On average, it’s going to cost 5.4 PP to get Painting 4.
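The 37% and 5.4 PP figures can be reproduced exactly. A sketch, using per-die weights that are exact for totals up to 4 (my reading of the dP):

```python
from fractions import Fraction
from itertools import product

# Single-die weights in 64ths: exact for results 0-3; the leftover 1/64
# is "4 or more" (via chaining), which always reaches a target of 4.
LOW = {0: Fraction(24, 64), 1: Fraction(24, 64),
       2: Fraction(8, 64), 3: Fraction(7, 64)}

def p_reach(n_dice: int, target: int) -> Fraction:
    """P(n_dice dP >= target); exact for target <= 4."""
    miss = Fraction(0)
    for faces in product(LOW, repeat=n_dice):
        if sum(faces) < target:
            p = Fraction(1)
            for f in faces:
                p *= LOW[f]
            miss += p
    return 1 - miss

def expected_pp(base_cost: int, p_success: Fraction) -> Fraction:
    """Each attempt costs base_cost PP; expected total is base/p."""
    return base_cost / p_success

p = p_reach(3, 4)                 # Creative Aptitude 3 vs Difficulty 4
print(float(p))                   # ~0.374, the "37% chance" above
print(float(expected_pp(2, p)))   # ~5.35 PP, in line with the 5.4 quoted
```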

Two kinds of modifiers can apply to practice. A skilled teacher (who has attained the desired level) can justify a +1 point modifier, and an exceedingly capable (and probably very expensive) mentor can bring +2. The other is practice “in bulk”, which allows a PC who has a contiguous block of down time to spend a multiple of the base cost to gain a point modifier: 3x for +1, 5x for +2, 10x for +3, 20x for +4. In the case above, the PC could spend 2 PP for a 37% chance of getting Painting 4, or spend 6 PP for a guarantee of locking it in— even though his Creative Aptitude is only 3. To get Painting 5 in this way will cost 10 PP.

It’s up to GMs how stringent they want to be on the ability of PCs to actually practice in the conditions they find themselves in. I would argue that if the PCs are in Florida over the summer, they probably can’t level up their Skiing. On the other hand, more lenient GMs might allow practice to be more fluid, like a traditional XP system.

Aptitudes improve in the same way, but are costed as Extremely Hard (16 PP base) and, since there is no “Aptitude for Aptitudes”, the default roll is 2dP. Raising an Aptitude from ½ to 1, or 1 to 2, costs 16 PP. From 2 to 3, it either costs 48 PP (“bulk”) or it costs 16 PP for a 34% shot (2dP against 3); from 3 to 4, it either costs 80 PP or 16 PP for a 17% shot (2dP against 4).

When Skills (and especially Aptitudes) move beyond 4, the player should be able to convince the GM that his character actually is finding relevant practice during the downtime. A character isn’t going to improve Chess 6 to Chess 7 unless he’s actually going out and playing against upper-tier chess players.

Practice is never subject to the Three-Attempt Rule. The characters can spend as much of their off-camera time practicing as they want.

The other, more dramatic, way in which Skills can improve is through the Feat system. A Feat occurs for a successful trial of a Skill where:

  • the result was unexpected— for a binary trial, the PC was out-of-advantage; for a performance trial, the roll was at least 3 levels above the Skill level;
  • no level modifiers were applied to the PC at the time— that is, the character was not under the influence of some magical buff or debuff— and
  • most importantly, the success mattered from a plot perspective. A “critical” hit against an orc that was going to die anyway isn’t a Feat. To qualify, it has to be something that occurred under some tension, and that no one expected— the sort of thing that characters (and players) talk about for weeks.

When a Feat occurs, the character’s Skill goes up by 1 level for 24 hours (game time) or until the character has a chance to rest. At that point, a check of the relevant Aptitude (against a Difficulty of the new level) occurs. If successful, the Skill increment is permanent. If unsuccessful, the Skill reverts to its prior level, but the PC gets a bonus 3+dP practice points (PP) that apply either to the Skill or its Aptitude.
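The Feat resolution can be sketched as a function. I assume a straight Aptitude roll with no points taken, since the text doesn't pin down the Tension of that check, and the dP face mapping is likewise my assumption:

```python
import random

def roll_dP(rng: random.Random) -> int:
    # Assumed d8 mapping {1-3: 0, 4-6: 1, 7: 2, 8: 3}; each further 8 adds 1.
    face = rng.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    total = 3
    while rng.randint(1, 8) == 8:
        total += 1
    return total

def resolve_feat(skill: int, aptitude: int, rng: random.Random):
    """After the 24 hours are up: Aptitude check against the new level.
    Success locks the +1 in; failure reverts it and banks 3 + dP bonus PP."""
    new_level = skill + 1
    if sum(roll_dP(rng) for _ in range(aptitude)) >= new_level:
        return new_level, 0
    return skill, 3 + roll_dP(rng)

rng = random.Random(0)
print(resolve_feat(3, 3, rng))   # either (4, 0) or (3, 3 + dP)
```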

Multiplayer Actions

Opposed actions model conflict between two or more characters (PCs or NPCs, including monsters) in the game world. A character is swinging a sword; another one wants not to get hit. A valuable coin has fallen and two people grab for it. One person is lying; the other wants to know the truth. Live musicians compete for a cash prize. Usually, PCs compete against NPCs; sometimes PCs go at each other in friendly or not-so-friendly contests. Opposed actions, like single-character trials, come down to the dice.

One could, in a low-combat game, model fighting as a simple opposed action indexed on the skill, Combat. That would, of course, be unsatisfactory for an epic-fantasy campaign where combat is common and there is high demand for detail and player control. But combat system design is beyond the scope of what we’re going to do here— there is too much variety by style and genre.

Opposed actions nearly always occur at Medium or High Tension. Of course, they are subject to situational modifiers.

A simple opposed action is one where each contender rolls in the relevant Skill (typically, the same Skill) for performance: highest score wins. If it makes sense to break ties, roll again. Use the first roll as representative of performance— if both singers in a contest roll 5/great on the first roll, and the PC rolls 0 on the tie-breaker, he may not get first prize but he shouldn’t be booed. What the simple opposed action offers is symmetry: it doesn’t require that the GM differentiate attackers from defenders— performance scores are compared, and that’s it.

Passive defense is applicable when there is a defending party, who doesn’t really participate in the defense. Armor is typically modeled this way: a character having “Armor Class 4” might mean that to harm him has Difficulty 5. For the attacker to roll against passive defense is equivalent to a binary check.

Collaborative actions are additive. Most actions (e.g., climbing a wall, sneaking into a building without being caught) are single-person— but a few will allow collaboration: group spell casting, large engineering projects, efforts of team strength. Three PCs with Skill 2, 3, and 5 can roll 10dP against a Difficulty 9 collaborative task that any single one of them would be unlikely to pull off.

Serial actions are contests that continue until one character passes and the other fails. They start at a set Difficulty D; if both parties pass, roll again at Difficulty D+1; if both fail, roll again at Difficulty D-1 (not going below 1). Some amount of game time may pass between each trial— in a combat situation, this might be a turn (~5 seconds); in a more gradual environment (e.g. business competition) this might be a month— which means that external factors may intervene before the contest concludes. The starting value of the Difficulty will typically be halfway between the Skill levels of the parties, rounding down.
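The serial-contest loop is straightforward to simulate; a sketch, with the dP mapping assumed as elsewhere in this post:

```python
import random

def roll_dP(rng: random.Random) -> int:
    # Assumed mapping {1-3: 0, 4-6: 1, 7: 2, 8: 3}; each further 8 adds 1.
    face = rng.randint(1, 8)
    if face <= 3:
        return 0
    if face <= 6:
        return 1
    if face == 7:
        return 2
    total = 3
    while rng.randint(1, 8) == 8:
        total += 1
    return total

def serial_contest(skill_a: int, skill_b: int, rng, max_rounds: int = 1000):
    """Both roll against Difficulty D each round; both pass -> D+1,
    both fail -> D-1 (floor 1); a round where one passes and the
    other fails decides the contest."""
    d = (skill_a + skill_b) // 2   # start halfway between, rounded down
    for _ in range(max_rounds):
        a = sum(roll_dP(rng) for _ in range(skill_a)) >= d
        b = sum(roll_dP(rng) for _ in range(skill_b)) >= d
        if a != b:
            return "A" if a else "B"
        d = d + 1 if a else max(1, d - 1)
    return None   # no decision within the round limit (vanishingly rare)

rng = random.Random(7)
results = [serial_contest(4, 2, rng) for _ in range(2000)]
print(results.count("A"), results.count("B"))   # the stronger party usually wins
```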

Attacker/defender actions are the most complicated kind, and that’s because they index different Skills, which means that levels don’t necessarily compare. If Bob has Deception 2 and Mary has Detect Lies 3, then it should be harder for Bob to lie than for Mary to detect lies— and he should have an under-50% chance. On the other hand, if Mary has Social Aptitude 3 but no skill investment in lie detection itself, then Bob ought to have the advantage, because he’s bringing the more specialized Skill.

Ideally, any “offensive” Skill (one that can be “defended” against) has a complement at the same level of specialization and Complexity— if Deception is Average (-2) relative to Social Aptitude, so should be Detect Lies. Then, because Mary doesn’t have any investment in Detect Lies, she falls back on the implicit “Detect Lies 1” she has, per Social Aptitude 3.

By default, defenders win ties. If the GM feels that circumstances should have the attacker advantaged instead, this can be achieved with a +1 point modifier in the attacker’s favor.

Final Notes

Above is enough material from which an experienced GM can run a campaign using the d8 System. At this point, she’ll have to create her own health and combat system, because no such modules have been written, so the d8 System is certainly not ready for first-time GMs.

Here are a few technical tangents that won’t matter for most players or GMs.

Other Dice?

The “Poisson die” doesn’t have to be a d8. In fact, the d8 isn’t the most accurate contender. Its zero is 1.9% heavy (0.375 vs 1/e = 0.36787…) and so 8dP is going to produce a 0, although very rarely, about 15% more often than Poisson(8). Most GMs and players aren’t going to care about that discrepancy.

You can get a more accurate “Poisson die” on a larger die, like a d30:

  • {1–11: 0, 12–22: 1, 23–28: 2, 29: 3, 30: 4}, with
  • {1–24: 0, 25–29: 1, 30: 2} for chaining.

The drawback here is that d30s are big (and, likely, expensive). You can’t easily hold 4, 6, or 8 of them in your hand. Also, while the d8-based dP has a “heavy” 3+ (1/8 = 12.5%, as opposed to about 8%), the d30-based dP has a light one (2/30 = 6.6%). Since most players (and GMs) are not going to care about exact fidelity to a probability distribution, I consider the heavy 3 on 1dP a feature rather than a bug.
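The accuracy claims above reduce to a few constants. One note: compounded over 8 dice, the zero-inflation works out closer to 17% (the ~15% figure is the linear approximation, 8 × 1.9%). Also, the d30 base die, pleasantly, has a mean of exactly 1:

```python
import math

INV_E = math.exp(-1)        # Poisson(1): P(0) = 0.36788...

d8_zero = 3 / 8             # 0.375  -> about 1.9% heavy
d30_zero = 11 / 30          # 0.3667 -> about 0.3% light

print(f"d8 zero bias:  {d8_zero / INV_E - 1:+.1%}")    # +1.9%
print(f"d30 zero bias: {d30_zero / INV_E - 1:+.1%}")   # -0.3%

# Compounded over 8 dice, the d8's extra zeros come to (0.375 e)^8 - 1:
print(f"8dP extra zeros vs Poisson(8): {(d8_zero / INV_E) ** 8 - 1:.0%}")

# The d30 base die {1-11: 0, 12-22: 1, 23-28: 2, 29: 3, 30: 4} has mean 1:
mean_d30 = (0 * 11 + 1 * 11 + 2 * 6 + 3 * 1 + 4 * 1) / 30
print(mean_d30)   # 1.0
```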

Nothing, of course, stops players from synthesizing a d64 from two d8’s and therefore getting a more accurate model of a single dP… but I don’t think it’s worth it.

One can get a fantastically accurate Poisson(k/120) simulation by rolling k d120’s with the mapping {1–119: 0, 120: 1}. I recommend not doing this, though. The d8 System, I think, gets the statistical properties we want from the Poisson family of distributions, even if a single dP doesn’t model Poisson(1) with perfect accuracy.

To Chain or Not To Chain?

It’s up to the GM whether chaining applies to antagonists’ rolls. I would say that it should— my aversion to downward chaining for PCs’ rolls doesn’t prevent upward chaining for antagonistic NPC results. It’s not the potential for unbounded badness from a PC’s perspective that makes downward chaining a design blunder— after all, the game simulates a world in which the characters can die— but the physical act of making a player roll for it.

Similarly, for “situational rolls” (e.g., weather), the standard 2dP has a wide bottom (14%). If you want to distinguish 0/bad from 00/terrible and 000/atrocious… go for it.

Fractional Levels

Normal Skills shouldn’t have fractional levels. They require special treatment of dice (e.g., a “½dP” as described above) and, in my opinion, half-levels of skills (from a player and GM perspective) aren’t worth the hassle. If you don’t know how to get simple (Difficulty 1) tasks done right 63% of the time, you don’t know the thing.

The d8 System assumes four levels of Skills are available to entry-level players:

  • 1: apprentice,
  • 2: journeyman,
  • 3: master,
  • 4: expert (best in a small or mid-sized town).

with four more levels of room for improvement… 8 is best-in-the-world… and that six(-ish) levels of Aptitudes are available:

  • ¼: utterly inept,
  • ½: below average,
  • 1: average untrained,
  • 2: above average (average in class),
  • 3: gifted,
  • 4: exceptional,
  • 5: extreme, borderline broken (~1 in 100,000).

These are coarse-grained in order to correspond with levels we perceive in the real world, so that in a game situation where no rules apply, or the existing rules don’t make sense, the GM can fall back on her “common sense” about what a journeyman carpenter (Carpentry 2) or master of mendacity (Deception 3) can and cannot do. But also, it prevents the stats from telling players more than the characters would naturally know about themselves. It may seem especially coarse-grained that there are only two levels of below-averageness, as opposed to four going up, but I imagine many experienced GMs will understand the sense of this. When players want their characters to be below-average in something, there are usually two use cases:

  • they want to play a character who is extremely incompetent in one way (but, one hopes, competent in other ways) for the role-playing challenge; the opposite of “munchkins”, they want the game to be difficult for them.
  • they don’t consider the attribute (or Aptitude) relevant to their character archetype (or “class”) and are “dump statting” it to buy more points elsewhere.

The first of these is fine; the system supports severe incompetence (¼), although GMs should restrict it to players who know what they’re doing. And there’s nothing wrong with the second, either; but dump statting shouldn’t yield all that much, which it can if there are too many levels of intermediate-but-not-extreme below-averageness. For this reason, the system enables only two: the I-don’t-think-I’ll-need-this below average of ½ and the I’m-up-for-a-serious-challenge of ¼.

Still, players may find a need for finer granularity. Using the deadlift example, there are several intermediate levels between a 300-pound deadlifter (Strength 2) and a 450-pound deadlifter (Strength 3); if a player decides that his character should be able to lift precisely 375, the mechanics allow for a 2½ in the aptitude, and it’s easy enough to figure out what “2½dP” looks like (2dP plus ½dP, as described above). If players or GMs want to know exactly how strong and fast the characters are, down to tens of pounds and tenths of miles per hour, the d8 System doesn’t outright preclude these intermediate fractional levels— it’s just that I, personally, don’t think they’re necessary for 98% of role-playing needs. We don’t care, after all, whether the character can or cannot budge a 700- versus 800-pound door; we care whether the PC can budge that door that is standing in the way— and while it’s a bit “fudging” to make it a Strength check (arguably, suggesting that the weight of the door itself is being determined by the dice) it does keep the story moving.

… And that’s probably enough for one installment.

Life Update 11/21/20: Farisa’s Crossing, et al

I’m still trying to figure out the matter of my online presence (including, to be frank, whether I want to have one at all). For now, I’m still on Substack. I’ll be mirroring these posts on WordPress; as much as I’ve lost faith in the platform, I don’t see any harm in keeping the blog up.

On Farisa’s Crossing, I’ve stopped promising release dates. I can only give a release probability distribution— and that, only for the Bayesians, since the frequentists don’t believe probabilities can be applied to one-time future events— but, I have reasons to be optimistic, regarding current and future progress:

  • the novel is “bucket complete”, by which I mean that if I had a month to live, I could leave it and a pile of notes for an editor, and I wouldn’t feel like I had left the world an incomplete book. (I wouldn’t care about marketing or sales outcomes, for an obvious reason.) There are still things to improve, and I intend to do most of the remaining work myself, but it’s basically “ready enough”.
  • I’ve stopped fussing about word count. It used to be really important to me that the book not get “too big”. Traditional publishing ceased to be an option when I crossed the Rubicon of 120,000 words. As a first time novelist, you have zero leverage and 120K is all you get; although, most award-winning literary titles (in adult fiction) run 175–250K. For a while, my upper limit was 175,000… 200,000… 250,000… which shows how good I am at setting these “upper limits”. Farisa became a bigger story over time. Her love interest, I realized, ought to be more than a love interest. I gave more characters POV time, which meant more world to flesh out. I decided to give more back story to an important villain. Various proportionality and pacing concerns— systems of equations where the variables and coefficients are all subjective, but still require precise tuning— meant that fleshing out one set of details required me to do the same for another. I’ll still cut anything that doesn’t belong. If a scene has an obvious weakest paragraph, or a paragraph a clear weakest sentence, or a sentence has a needless word… it gets yanked. At some point, though, the risks of cutting outweigh the benefits.
  • I’m able to afford having stopped taking new clients in May. I’m down to maintenance of existing ones, at least for now. There’s little stopping me from hitting the next six months at 180 miles per hour. Unless something unexpected happens (and of course there’s that one thing that can happen to anyone) I can’t see anything preventing me from getting the book to a ready state.

There are a million “lessons learned” in the writing process, but I don’t believe in talking about those sorts of things until after you’ve completed the task.

I’ll give it a 75% chance that I’m ready to send my novel to a copyeditor by mid-May; 98% chance by mid-July. Concurrently, probably starting in early spring, I’ll need to get cover art, blurbs, and other marketing materials together. That can go off in a couple weeks, or it can take months. It depends on a number of factors.

I may release the book, without much marketing— because if I’m right about the book, it shouldn’t need it; if I’m wrong about it, perhaps obscurity is a good thing— in August. My next big project (everything being up in the air for obvious reasons) starts in the fall and, to be honest, while the quality of the book itself is paramount, I’m willing to compromise on short-term sales to increase my likelihood of succeeding in other projects. On the other hand, circumstances evolve, and I may size up the situation and decide that Farisa does need the traditional long-calendar marketing strategy, in which case we’re looking at a release date of late 2021 or early 2022.

Apex 2 @ 12:58

Donald Trump has been admitted to Walter Reed because of COVID–19 and, at the moment of my writing this, most if not all of this nation’s corporate and economic elite have retreated into their bunkers. Sorry, they hate that word. Their “compounds”.

Why? At 3:58 (Eastern time) an “Apex Two” alert went out.

Let me say that I know nothing that is not public about the president’s health. No one can predict the future; I can’t either. There is no major reason, in my view, to believe that the recent A2 represents anything dangerous to the public.

What is “Apex Two”? For historical reasons, I am on a list of technology-industry personalities estimated to be “HNWIs”, which is what pretentious rich people call themselves. “High Net Worth Individual”, which starts at $30 million. I am not a “HNWI”. My actual net worth isn’t even 1% of that figure. I assume there must be some error in the estimation process, for me to be on an alert list where I clearly don’t belong. I don’t even have a bunker.

These people are obsessed with “The Event”, which is a widespread civil breakdown caused by economic inequality, political instability, or some other calamity. Apex 2 instructs the billionaires to go into their bunkers but be ready to profit from opportunities— to, in recently-used words, “stand down and stand by”. Apex 1 means that local unrest is already happening, or that there is cause to believe “The Event” will unfold imminently. Apex 0 means that it’s already happening.

Severity runs from 0 (catastrophic) to 5 (minor). “Apex” means that there is an admitted plan by a wealthy individual or several to cause social instability if conditions are right for it. Most of these guys would rather see the country lurch to the right than the left— this is because right-wing violence and fascism represent no real threat to the wealthy, whereas even moderate leftism is seen as a risk to their pocketbooks, reputations, and standing. “Barrel” is more of a passive observation that something is about to go down. There’s a C and a D and possibly some other letters, but I don’t pay attention to those.

In other words, Barrel means an insider in the political or economic elite (or several) wants others to know, “I’m worried.” Apex means, “I can’t tell you who I am, but I’m about to cause something.”

So, what’s going on? Of course, this coronavirus is unpredictable and there is a chance that the president is incapacitated, temporarily or permanently, by it. I don’t think I’m revealing new information in saying that. It could happen to anyone.

Such an event would actually be good for the crypto-fascists (and plain-out fascists) in Corporate America, because the sudden and unexpected incapacity of someone who’s bad-at-fascism (and who they believe will lose the election) would give them an opportunity to mobilize behind someone who’s good at fascism. If, say, the president’s illness took the most extreme course, there would likely be right-wing efforts to “honor” him by initiating violence. It is also likely that some of these “HNWIs” have figured this out and see themselves as standing to benefit by causing such violence.

What does “Apex Two” mean? To be honest, not necessarily all that much. It means that at least one person of high wealth and power is alerting his rich buddies to go into hiding, because he’s planning to do something anti-democratic, if the opportunity presents itself. The opportunity may not present itself. I’ve received four “Barrel Two” alerts in the past five years and absolutely nothing has come of them.

Furthermore, while no one can predict the future— I can’t either, and please remember that— I have doubts as to whether they would succeed. The U.S. government is very much aware of the disloyalty (to use a conservative term for it) of several high-profile tech-industry “HNWIs” and I am sure they have contingency plans in place.

The only thing we can be sure of is that, in an uncertain time, the nation’s economic elites have scurried, like cockroaches, into their bunkers. That’s all anyone can know right now for sure.

Quitting WordPress – April 30, 2020

I’ve gotten several complaints about ads on my blog.

When I set this thing up in 2009, I didn’t know much about the web— I’m an AI programmer; web stuff I do when there’s a reason to do it— and I used WordPress’s free offering, and it worked. At the time, you published a blog post and there it was. No ads.

At some point, WordPress began running banner ads under my essays, without paying me, because I was using the free tier, so I guess the attitude was, fuck that guy. I never saw the ads on my own blog, when logged in, and now I understand why. If WordPress bloggers (like this dumb sap) knew how intrusive the ads were, they’d be less likely to create content.

The banner ads were ugly— and I wasn’t making any money off the damn things— but I was willing to tolerate them… laziness, inertia, not wanting to start over.

This afternoon, I looked at my blog, while not logged in, and saw this:

[Screenshot, March 25, 2020: a block ad injected between two paragraphs of the post]

Not just a banner ad, but a block ad, right between paragraphs. A fear-based fake-news ad, on top of that. Fucking garbage, in the middle of my writing.

I never allowed this. I am embarrassed that this piece of garbage ran between two paragraphs of my writing. I am fucking done with this shit.

What have we let happen to the Web? Fake news, interstitial ads, egregious memory consumption, and those obnoxious metered paywalls. Social media is an embarrassment. I am so sick of all this fucking garbage, the blue-check two-tier social platforms, the personality cults, the insipid drama, and the advertisements for garbage products no one wants and badly-written ad copy no one needs to read.

I am sick of “Free” meaning garbage. Yes, I’ll pay for news— but never in a million years if you punish me for reading more than my “4 free monthly articles”, you rancid stain. Make it free or charge for it; don’t be an asswipe and play games. Stop “giving away” a garbage product in the hopes of someone paying for something better.

This blog goes down at the end of April. I’m done with WordPress. I’m a programmer; time to roll my own.

–30–

A COVID–19 False Dilemma

Political leaders like Donald Trump and Congressional Republicans are trying to force the American people to choose one of two unacceptable alternatives:

  • Fast Kill: do nothing about the virus’s spread, causing millions of preventable deaths due to the catastrophe of large numbers of people— orders of magnitude beyond what our hospital system is designed to handle— becoming critically ill at the same time.
  • Hang the Poor: practice social distancing and flatten the curve (which we must do) but at the expense of crashing the economy, leading the poor to face joblessness, misery, and bankruptcy— In Time, it turns out, is not fiction— culminating in a Great Depression–level economic collapse.

Both scenarios lead to preventable loss of life. Both scenarios are intolerably destructive and will impoverish a generation. Both scenarios are completely unacceptable if something better can be done. Indeed, something better can. We must flatten the curve; we must practice social distancing. But, it is artificial that “the economy” should be threatened by our doing so.

Compared to a 1973 benchmark, employers take 37 cents out of every dollar of workers’ pay for themselves. Costs of living have gone up, wages have not kept pace, and working conditions have degraded. The result is a society where working people live on the margin, where two weeks without an income can produce, for most individuals, financial ruin. It didn’t have to be this way. This fragility is artificial. The rich created, for their own short-sighted benefit, a society in which the poor must serve the manorial lords on a daily basis or starve. It doesn’t have to be that way.

There’s a third option, one that Trump and Congressional Republicans would rather us not see. Yes, we flatten the curve; we practice social distancing and self-isolation and even follow a quarantine if circumstances require it. On the economic front, institute a wealth tax— a 37-percent immediate wealth tax to commemorate the 37% private tax levied against workers by their employers, and a 3-percent annual tax on wealth over $5 million going forward. Restore upper-tier income taxes to their New Deal levels. Offer a universal basic income (UBI) and put in place universal healthcare (“Medicare for All”). Remove restrictions on unemployment benefits. Mandate that employers protect the jobs of workers furloughed by this crisis. Offer rent and mortgage relief to those who need it. Eliminate student debt, and make appropriate public education free for all who are academically qualified. After the crisis, put funding into research and sustainable infrastructure. All of this can be done— for the most part, these aren’t new ideas.

The billionaires and corporate executives— and the Republican Party that represents them— don’t want Americans to see this third option. They’re afraid of “socialism”, not because it might not work, but because it almost certainly will. It took them fifty years— and an uneasy alliance with religious nutcases and racists— to roll back the New Deal and the Great Society, and they’re terrified of socialist ideas getting into implementation, because they know that when this happens, people find out they like socialism, and it takes immense political effort to roll this plutocrat-hostile progress back.

We don’t have to choose between “the economy” and millions of lives. This is a false dilemma being put forward by evil people who will only consider scenarios that leave the power relationships and hierarchies of corporate capitalism intact. Their failure to allow a workable third alternative constitutes murderous negligence.

Our economic elite is made up of people who would rather see millions die than the emergence of an economic system that challenged their titanic power. If we survive COVID–19, if we defeat the virus, we should go after them next.

Capitalism–19 Vs. Humanity–20

Societies around the world face a horrible decision, as a deadly coronavirus rages through the population. Do they continue with economic business-as-usual, and allow tens of millions of preventable deaths? Or, do they take drastic measures to slow the spread of disease (“flattening the curve”) that endanger our economy?

Let’s consider one extreme. What is likely to happen if our elected and business leaders do nothing? The number of people infected continues to double every 6 days. Our hospitals are swamped. Unheard-of numbers of people need respiratory support, all at once. Most do not get it— and they die. People needing transplants, even if they never get the virus, die waiting because the resources are unavailable. By midsummer, tens of millions of people are dead, and at least tens of millions more, though recovered, are permanently disabled. I call this scenario the Fast Kill.
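The arithmetic of unchecked doubling is worth spelling out. Here is a minimal sketch, assuming the six-day doubling time above; the 10,000-case starting point is a hypothetical round number, not a figure from any health agency:

```python
# Unchecked exponential growth at an assumed six-day doubling time.
# The starting count of 10,000 cases is a hypothetical round number.
doubling_days = 6
cases = 10_000
for day in range(0, 91, doubling_days):
    print(f"day {day:3d}: {cases:>13,} cases")
    cases *= 2
# By day 90 (fifteen doublings), 10,000 cases have become 327,680,000.
```

Fifteen doublings multiply the initial count by 2¹⁵ = 32,768, which is why “do nothing” scenarios move from thousands of cases to hundreds of millions within a single season.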

I don’t want the Fast Kill. Millions of needless deaths is a thing to be avoided. However, the perspective of our economic elite is quite different from mine or yours. The billionaires are on private islands, or in secret bunkers, and can wait this thing out. A Fast Kill, to them, has one clear advantage: the power relationships and hierarchies of corporate capitalism (with some loss of personnel) remain intact.

Will our economy shatter if we take measures to slow the spread of disease? Yes, because corporate capitalism is brittle by design. Since 1973, worker productivity has nearly doubled, while wages have stagnated. Out of every dollar a worker makes, executives take 37 cents for themselves. As workers compete against each other for the benefit of the richest 0.1 percent— as opposed to, say, overthrowing their masters— rents rise, wages fall, and working conditions degrade. We now have a world where most people— and quite a number of vital small businesses— cannot survive 2, 4, 6 weeks without an income. Many workers get no paid sick leave. As elected officials and public-health experts demand we take measures to control COVID–19’s spread, many people will, by virtue of their need for weekly income, be unable to comply.

We wouldn’t tolerate a 37% tax, imposed on the lower and middle classes, from our government. And yet, that is exactly what the private-sector bureaucrats called “executives” have levied against working men and women. As a result, millions of people are so broke that, even under a quarantine enforced by the National Guard, the need for an income will undermine such measures. Those who are forced to live on the daily “hustle”— odd jobs, panhandling, alleyway short cons, and black-market labor— are used to evading authorities, and they’re good at it.

Here’s some of what we need to do, to survive COVID–19 with civilization intact. Yes, of course we need to flatten the curve; we need to slow our economy and focus on urgent needs such as food, shelter, energy, and medicine. We need universal basic income protection— not a means-tested one-time payment, because a one-time check won’t do enough and we don’t have time to quibble over means tests— that people can rely on until the crisis is over. We need mandatory job protection for people sickened (and, in many cases, disabled) by COVID–19. We need rent relief for people who lose their jobs. We need to remove all restrictions on unemployment benefits, and to make these benefits tax-exempt as they were before Reagan. We need an unconditional moratorium on all medical bills— and, at the same time, government funding of hospitals to keep them afloat— during this unprecedented public health crisis.

All of this, yes, is “socialism”. Socialism is nothing more and nothing less than the contention that the principles of the Age of Reason (e.g., rational government over clerical rule or hereditary monarchy) ought to apply to the economy as well. It turns out that there are no capitalists in foxholes.

Our society is ruled by people, most of whom would rather see millions die than see such measures enacted. Why? Once so-called socialist measures are in place, they become pillars of a society and it takes decades to remove them. Surviving COVID–19 is going to require governments all around the world to impose socialistic measures more drastic than the New Deal and the Great Society combined.

There are no good alternatives. If elected leaders do nothing, we get a Fast Kill. Tens of millions of people die, and tens of millions more are disabled. If curve-flattening measures are imposed without socialistic protections, we destroy what’s left of the middle class, eviscerate the consumer economy, and risk such a high rate of noncompliance that infection may spread, needlessly killing millions, anyway.

Billionaires and corporate executives are scared, not of the virus, but of the changes our society will need to make to survive COVID–19. What if those social-welfare protections stick? Billionaires will become three-digit millionaires. Three-digit millionaires will become two-digit millionaires. Private jetters will have to fly first-class commercial flights. Corporate executives will be administrators rather than dukes and viscounts. Worker protections will be enforced again, interfering with the “right to manage”. In the long term, extensive investment in the sciences and health (to fight the next COVID–19) will raise employee leverage, at capital’s expense, across the board. The horror!

Those who run the global economy, to the extent that they have a say in what societies do, have a conflict of interest. They can try to preserve the hierarchies and power relationships that enrich them— at the cost of a holocaust or few. Or, they can accept social changes that, while bringing humanity forward, will emasculate corporate capitalism and hasten its replacement by a more humane system, such as social democracy en route to automated luxury communism.

Shall it be Capitalism–19, or Humanity–20, that survives? Working men and women await the answer.

Yes, Under Corporate Capitalism, 8 Million Working Americans Are Likely To Become Unemployably* Disabled–– Possibly, for Life. Check the Math; Check the Assumptions.

An assertion I have made recently has drawn controversy. I have said that, in the wake of COVID–19, we’ll likely see 8 million American workers become unemployably disabled for a long period of time–– years; possibly, for life. This is an extreme prediction, and I hope I’m wrong. I’ve made predictions that were wrong and embarrassing. I sincerely hope this is the most embarrassing prediction I’ll ever make. Given the extremity of it, let me explain the assumptions on which it rests.

Please, check my work. If I’m making an incorrect assumption, post a comment, and I will fix it.

I am not, in any capacity, an expert on virology, medicine, or epidemiology. These are complex, difficult sciences and we must defer to the experts. The numbers I will be using will be within the ranges of existing predictions regarding how bad this pandemic can get, and how much damage it can do.

Of course, we have to define terms. What does it mean for a person to be unemployably disabled? There is a spectrum of sickness, and one of disability. The vast majority of these 8 million people (plus or minus a factor of two) will not be bedridden, miserable, or sickly for the rest of their lives. Unemployably disabled means that someone is sick enough that (a) no one wants to hire her (whether because of her disability itself or her suboptimal career history) and (b) she struggles to retain jobs due to her inability to hide the chronic health problem. She need not be physically crippled, psychiatrically hospitalized, or too sick to contend with daily life. She might not “look” disabled at all, but she will have too few spoons to have even a chance of victory in corporate combat.

In the United States, where employers are above the law on account of having convinced the public to call them “job creators”, it does not take much disability at all to make someone unemployably disabled.

Assumptions

Like I said, I’m going to document all of my assumptions, so the public can check my work.

My first assumption is that COVID–19 will not be contained. This is the biggest one, and I hope I’m wrong. If the virus is contained, like SARS, then perhaps only a small number of people will be exposed to the virus. If only 500,000 people get it, then clearly there is no way for COVID–19 to render 8 million people unemployably disabled.

However, the virus is extremely contagious, with an R0 estimated at 2.28–– not as bad as measles, worse than flu, and probably worse (in contagion) than the monster flu of 1918. Does this mean that it can’t be contained? No. SARS had a similar R0 and was contained. However, neoliberal corporate capitalism, for reasons that will be discussed, is especially bad at containing outbreaks.

Old-style state authoritarianism has its failings, but people know what the rules are. A government quarantine can be enforced. An authoritarian government can just shoot at people who move until the R0 drops below 1. It’s a terrible solution, but it works.

Social democracy can also work, so long as a sufficient number of people have the good will to exercise their option to hunker down (that is, practice social distancing) and let the experts handle the crisis. I have chronic health issues but I am taking special measures right now (e.g., dietary changes, avoidance of damaging circumstances) to minimize risk of needing medical attention in the next six months. In part, my reasons for doing so are selfish; in part, I am trying to minimize my risk of being a burden to a soon-to-be-overtaxed hospital system. We are all on the same team.

What cannot contain an epidemic like COVID–19 is an economic system such as ours. Under neoliberal corporate capitalism, we have a libertarian government (providing immense economic freedom to those privileged enough not to have to work) but live in a matrix of authoritarian employers, who control our incomes and our reputations, and who can bend the government to their will by calling themselves “job creators”. In a world like this, no one knows who is in charge. Who does the American worker obey? Does he obey the man in Washington advising self-quarantine, or does he obey the boss who believes “coronavirus is just a cold” and has the power to turn off his income (and, by giving negative references, non-consensually insert himself into the worker’s reputation) if he shows up 15 minutes late? Chances are, he’s going to ignore the G-Man and obey his boss. The quarantine will not be effective. Even if it is enforced by the government, so many people are in such precarious economic straits that they will illegally circumvent it, if it comes to that.

We would have to scrap corporate capitalism entirely to have anything better than a 5 percent chance of containing COVID–19. Let’s be honest, a total overhaul of our economic system in the next two months is very unlikely. Chances are that, instead, the novel coronavirus will stick around in the American population (and, therefore, the world population) for good.

How bad is this? Not necessarily terrible. Over time, we’ll probably develop natural immunities to this thing, rendering it just another coronavirus. In the meantime, though, COVID is going to make a lot of people sick.

My second assumption is that about 100 million American workers will get COVID–19. Angela Merkel predicted that two-thirds of Germans will contract the virus, which is in line with epidemiologists’ expectations. That doesn’t mean they’ll all get sick. Most won’t. Case-fatality rates–– the WHO has given this disease a CFR of 3.4%–– often overestimate the lethality of the virus, because so many mild and asymptomatic cases go undetected. We may never know the real lethality rate of this disease, but in working-age Americans it will likely be under 1 percent. That’s the good news. This is a serious illness, but it’s not showing a likelihood of being a massacre like, say, the 1918 flu.

What about flattening the curve?

Health ministers and epidemiologists have been advising us to practice social distancing–– that is, avoid large gatherings–– to slow the virus’s exponential growth and “flatten the curve”. We absolutely must do that. A widespread emergency that overloads the hospital system will cause the lethality to spike, as it has in Italy.

By flattening the curve, we can achieve a great deal in preventing deaths, but we’re not necessarily going to reduce the number of cases. Flattening the curve is important because, when resources run thin, the matter of when people get sick has a major influence on survival. It doesn’t guarantee that they’ll never get sick.

How sick? Some people will carry the virus and suffer no symptoms. Some people (and not only elderly people) will get severely ill.

My third assumption is that, among that 100 million workers, the breakdown of cases (into asymptomatic, mild, severe, and critical cases) will be similar to what we’ve seen so far.

Unfortunately, there’s some guesswork regarding the currently infected population. We haven’t tested everyone; we don’t know how many cases of COVID–19 there are. Using percentages I believe to be in range of what experts expect, and scaling down a bit because we are speaking of the working-age population (a younger and healthier set) I’m going to predict: 50 million asymptomatic cases, 35 million mild infections, 13 million severe cases, and 2 million that are critical. These numbers could well be off by a factor of two, but not in a way that would meaningfully alter my fundamental conclusion–– that millions of people are about to develop long-term disabilities that, in American corporate capitalism, will render them unemployable.

It’s important to understand what is meant by a “mild” infection, when the medical community says that most (70–90%) COVID infections are mild. The word “mild” is relative. A “severe” cold (38 °C fever, inflammation and pain, unable to work) is “mild” by the standards of flu. Similarly, “mild” SARS or COVID is comparable to “severe” influenza (unless we’re talking about the 1918 monster flu, which is in its own category). Specifically, in the context of COVID, “mild” means that a patient is expected to survive without hospitalization–– there is no evidence of immediate danger.

In a “mild” case, life-threatening secondary infections may occur later on. That’s a serious issue, but not one that must be treated now. Some of these “mild” cases will come with pneumonia. Some will come with 39–40 °C (unpleasant but not critical) fever. Some will produce post-viral chronic fatigue comparable to that following mononucleosis or the bacterial infection responsible for Lyme disease. Quite a few people with “mild” cases will experience transient (but not life-threatening) respiratory distress serious enough to induce panic disorder or PTSD. These cases won’t require hospitalization–– and hospitalization will likely be unavailable–– but they will still be, for most young people, the worst health problems of their lives so far.

If that barrel of fun is “mild” COVID, what’s severe? Severe cases require hospitalization for days, and possibly weeks. Artificial respiration may be involved. Critical cases include those where vital organs are involved–– kidney failure has been reported. Yeah, this thing’s nasty.

Any health problem can traumatize a person, but respiratory ailments have quite a track record. The body is not meant to go without oxygen, and even slight deprivations freak the brain out. We’ve seen this with SARS and the 1918 flu. We’re likely to see it with COVID–19. Even in the cases being called “mild”, because there is no threat to life that requires emergency hospitalization, truly “full recovery” is not a guarantee. People are going to get panic attacks from this, and once a person has had a few of those, a lifelong struggle with panic disorder (and agoraphobia, and depression due to adversity in employment) becomes likely.

My fourth assumption is that COVID–19 will have a long-term disability profile, controlling for severity, comparable to SARS.

Nearly half of SARS survivors, ten years later, were unable to return to work.

Does this mean that 40–50 percent of COVID–19 survivors will be unemployably disabled? It’s hard to say. SARS is not COVID–19. Let’s size up some of the differences.

For one, SARS disproportionately affected skilled healthcare workers, for whom there’s high demand in any economic situation. We would see a higher rate of unemployable disability if this hit people whose services aren’t really needed–– say, private-sector software engineers or project managers. Of course, COVID–19 will hit everyone, in-demand and not.

Second, SARS did not have many victims in the United States–– where, although it is illegal to discriminate against disabled workers, the laws are scantly enforced. It mostly afflicted countries where workers have better protections against their employers. If, say, 40 percent of survivors were unemployably disabled in Canada, we’d likely see 75 percent unemployably disabled in the United States, not because the disease was more severe but simply because employers in the US get away with more.

That being said, all the evidence so far suggests that COVID–19 is not as severe as SARS. Therefore, I don’t think we’re going to see the same rate of unemployable disability (40–50 percent) among COVID–19 survivors, if only because there are so many more mild cases.

Here are my predictions. Five percent (1.75 million) of those with mild infections will be unemployably disabled–– that is, at some point, subjected to a career disruption through no fault of their own from which they will be unable to recover. Among the severe cases, I’m predicting 40 percent (5.2 million); among those with critical cases, 65 percent (1.3 million). These numbers might each be off by a factor of 2, but they’re not unreasonable. They are middling estimates.
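The headline 8 million figure follows directly from these assumed case counts and disability rates. A quick check of the arithmetic, where every input is one of the essay’s stated assumptions (each uncertain by a factor of two):

```python
# Check of the disability arithmetic. All inputs are assumptions stated
# in the text, each of which may be off by a factor of two.
cases = {"mild": 35e6, "severe": 13e6, "critical": 2e6}
disability_rate = {"mild": 0.05, "severe": 0.40, "critical": 0.65}

disabled = sum(cases[k] * disability_rate[k] for k in cases)
print(f"{disabled / 1e6:.2f} million")  # prints "8.25 million"
```

The three terms are 1.75 million, 5.2 million, and 1.3 million, summing to roughly 8 million; scaling every rate up or down by the stated factor of two bounds the total between about 4 and 16 million.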

There’s already a mountain of evidence supporting high proportions of those suffering severe and critical illness becoming, through no fault of their own, unemployable. What about the mild cases? Isn’t it a bit dire to predict that 5 percent of people with “only” mild infections will become unemployably disabled? No, it’s not. If anything, the real number’s likely to be higher.

Most of these cases will not be attributed to COVID–19. Plenty of the people won’t know they ever had it. They’ll simply experience “a bad month” in which they will be unable to meet the performance requirements of their jobs, suffer managerial adversity and workplace bullying, and suffer career setbacks from which they’ll never recover.

Kimberly Han is a (hypothetical) 33-year-old software engineer at a half-trillion-dollar technology company, LetterSalad (formerly, Vigintyllion). On April 3, she develops a mild case of COVID–19. She’s able to work from home, because the US is on lockdown. Her fever never breaks 39 °C and she never feels the need to go to the hospital. She’s never diagnosed with COVID. She never thinks she even had it. Since she works from home, she’s not even aware of racist COVID-related jokes made about her by the managerial in-crowd. The storm passes. Everything’s fine.

In September, Ms. Han finds herself tired. Post-viral syndrome. Other than being tired, she’s fine, but she develops a cough. She misses a “sprint” deadline. She needs to take naps in the afternoon, and misses an unannounced but important meeting. Management perceives her as a “slacker” or as “sickly” or as “low-energy”. The product manager and her “people manager” tell her to stop “SARSing up the schedule”, which is totally not racist because the direct manager is a white, Ivy-educated “Boston Brahmin” and the product manager is an actual Brahmin, and it’s physically impossible for racists of two different races to work together to be racist to someone.

The workplace bullying culminates in her developing post-traumatic stress disorder. She begins to have daily panic attacks. She powers through the episodes, not missing a day of work to the attacks, but her manager doesn’t like “the optics” and begins paperwork to terminate her “for performance”. Kimberly Han, through no fault of her own, loses her job. In time, the post-viral fatigue lets up but the post-traumatic stress disorder does not. COVID–19 has left her body and she remains unaware that she ever had it, but she’s unemployably disabled.

What’s above will happen to people. Even if we do everything right, even if we flatten the curve and prevent our hospitals from becoming dangerously overloaded, it will happen to American workers, not necessarily in that precise way, but nonetheless surely. Some will have reduced lung capacity. Some will develop anxiety and depression. Some will develop panic attacks or PTSD. Some will never be diagnosed but will exhibit unexplained personal changes and not even know, when they are fired and unable to ever work again, that it was because of illness that they lost their careers (and that they were, therefore, fired illegally).

Could I be wrong on that 8 million figure? Of course. More accurately, it is: 8 million, plus or minus a factor of 2, conditional on an assumption of non-containment. I hope I’m wrong. I hope the virus is contained, or that it proves seasonal and dies out in the spring, but there’s no evidence that we can count on either one.

It is very likely that millions of American workers are about to become unemployably disabled. Crippled? No. Not even necessarily unhealthy. Careers are fragile things; it doesn’t take much disturbance to make someone unable to get and keep jobs in a competitive labor market that has been rigged against workers for the past forty years.

“Couldn’t this be a good thing?”

No.

I understand the argument. This pandemic may create a short-term labor shortage, and there are people who believe the clearing-out could lead to an improvement of opportunities for workers. I call bullshit.

I don’t know enough about virology, medicine, or epidemiology to do anything more than piece together existing research, but I do know enough about economics, politics, and organizational dynamics to say this: while the people who own our society are evil, they are not stupid. The upper class and the corporate executives will profit, and we will suffer.

There are some people (sick, broken people) who believe that this “Boomer Remover” virus will create opportunities in the workplace or that it will “clear away” people who are a burden on society. Neither’s true. First, while this will kill a lot of sick old people, it will at the same time make a lot of currently healthy people (young and old) very sick–– in some cases, for a long time. The disability burden on society is not going to be ameliorated by COVID–19; it will be increased.

So, let’s talk about why a potential labor shortage isn’t actually to the worker’s benefit. We are not in the time of the Black Plague. In the 14th century, the nobles needed the peasants. American workers can easily be replaced by machines and by literal slaves in other countries, and they will be. I remember, in 2005, being told that Millennials would face a world of opportunity by now, as Boomers retired and vacated the workforce. It didn’t happen. Those cushy $500,000-per-year BoomerJobs? Those were never filled. They simply ceased to exist. We live in a society where recessions are permanent (for the workers) and recoveries are jobless. When things go bad, workers are first to suffer; when things are good, the owners take the bounty for themselves. COVID–19 will be no different. The rich will see a drop in their stock valuations; the poor will be eviscerated. This dynamic will not change until we destroy corporate capitalism.

What happens to the eight million people who become unemployable because of post-viral disability? There’s no safety net in this country, so these people will have record-low leverage, and so while they won’t find decent jobs (because no one will hire them for one) the owners of our society will find ways to extract work from them. A number will fall into precarious “gig economy” piece-work, grinding out enough of an income to survive, as their health gradually unravels (even as COVID–19 becomes a distant, unpleasant memory). The least fortunate will turn to various unsavory ventures, because illicit labor doesn’t require a spotless résumé. Perhaps the most talented of the newly-disabled will do what I’ve had to do: swing from one six-month rent-a-job to another, until the boss figures out they have a disability and either fires or gimp-tracks them. That these people will be unemployable doesn’t mean that society won’t be able to get work out of them–– it means that they’ll be unable to get anything out of society.

One might think, though, that the eventual exclusion of 8 million people from traditional, “respectable” labor (office jobs) could bring a benefit to the other 152 million who do not develop lifelong disabilities. Less competition, right? That’s exactly what our pig-fucker bastard owners want us to think. They want us to think of our fellow citizens–– fellow proletarians–– as “competition”. They want us divided against each other, because it keeps them in charge.

That Star

Revisit the title of this essay. I predicted that millions of people (8 million, plus or minus) will become unemployably* disabled, accent on the *.

In a corporate dystopia, where workers compete against each other for the benefit of their owners, it is inevitable that people with otherwise mild disabilities will become unemployable. That is, they will be unable to convince the obscenely well-paid “professionals” who profit by the buying and selling of others’ labor to give them gainful, stable employment. There is no reason it has to be this way.

Should a person who suffers post-viral fatigue be subjected to workplace bullying and performance evaluation? I would say no. Should a person, recovering from a severe respiratory illness, be non-consensually ejected from her career because her panic disorder or depression caused a headache for her boss? No.

Here’s the reveal, which should not be much of one.

Yes, COVID–19 is going to fuck a lot of people up. It’s killing people and will continue to do so. It’s horrible. I wish this were not happening; I wish what is about to happen were not about to happen. This said, it need not be the case that COVID–19 renders 8 million people, or even one person, unemployable. COVID–19 exists in nature; it is part of the real physical world and we have to contend with it. “Employability” does not exist in nature. It is a social construct, and a stupid one at that.

Corporate capitalism is a fragile, hostile economic system that will throw millions of people under the bus in the next year for no reason but their “offense” of getting sick. It will not know whether they got sick from COVID–19 or a secondary infection or post-viral fatigue or the psychiatric sequelae of respiratory illnesses. It will not care. It will fire them “for performance” and the wheels of the bus will roll along.

We’ll soon see about 8 million people rendered permanently unable to, on the harsh terms of corporate capitalism, get an income. For what? Is the needless suffering (and, likely, the continuing worsening of their health) of 8 million people, who did nothing wrong, a worthy price for the upkeep of a decaying socioeconomic system that all intelligent people–– even though we disagree on solutions–– despise? I think not.

COVID–19 is horrible. The earthly existences of thousands are, as I write this, in present danger. That number is likely to grow. We need not let it be more perilous than nature has made it.

If we keep corporate capitalism around, we will see 8 million people–– some talented, some extraordinarily competent; but nonetheless unable to survive in a system where each worker must compete against a hypothetical replacement who might be as skilled but without illness–– fall out of the primary economy for good. There’s no point in that. It doesn’t have to happen that way. We can tear corporate capitalism down. We can overthrow our corporate masters (through nonviolent means if possible, through other means if our adversaries make it so). We can eradicate an economic system in which we compete against each other for the benefit of a tiny, self-serving minority who wish to own us. COVID–19 is proving to us that we, citizens of the world, are all on one team. We all want this thing not to destroy us and everyone we care about. It’s time to build an economic system reflective of that.

Wash your hands for 20 seconds. Avoid public gatherings. Try not to touch your face. Furthermore, I consider that corporate capitalism delenda est.

Welcome To My World. I’m Sorry That You’re Here.

I had a mild bout of flu in February 2008. I’d had worse flus, and I have had worse since then. I was a 24-year-old with no health issues; I recovered quickly.

What made this infection notable was that, a month later, I experienced intense pain in my throat that radiated through my chest and face. I could barely see. I tried to drink water and could not swallow. For a minute or two, I couldn’t breathe. Laryngospasm–– it feels like drowning in air. Dizziness, nausea, and vomiting followed. The “mystery illness” caused a panic attack. Not just one, either; they kept coming for months.

The physical problem turned out to be a secondary bacterial infection. It’s rare, but sometimes happens after influenza.

Unfortunately, the panic attacks never went away. They often don’t. Severe respiratory illnesses often cause lifelong disability–– PTSD, reduced lung capacity, depression, anxiety and panic disorders. Once the body and brain “learn” how to panic, this vulnerability becomes a new facet of daily life. So terrible is the experience of a panic attack that a person will do nearly anything to end one. Without a doubt, they’re one of the worst things a person can experience. Moreover, the fear of panic attacks can, itself, produce one. Intrusive thoughts and superstitions become a part of daily life. Unchecked, this can lead to dysfunction and agoraphobia.

I hit bottom in 2009. I was agoraphobic. I had to spend a year re-learning how to do daily activities, re-learning that it was safe to ride a bike, sit on a crowded subway, ride in a car. I built myself back from 1 HP. It wasn’t easy.

At this point, I’m 98-percent recovered from panic disorder. I used to have attacks on a daily basis. Now, I might have a “go-homer”–– one bad enough that I have to leave work–– once in a year or so. I’m probably in the 85th percentile for health at my age (36). Aside from being minus a gallbladder, I’m in excellent physical health. I can deadlift 340 pounds. At this point, I can do all the activities of daily life. I’ve had panic attacks while driving. I don’t recommend that experience, but it’s not unsafe. If I have one while scuba diving, I have a plan for that (signal diving buddy, ascend slowly).

Open-plan offices are a struggle for me. Actual danger doesn’t trigger panic attacks. I’m fine riding a bike in traffic. I’ve swum with sharks (no cage) at 78 feet–– which is not as dangerous as it sounds. Open-plan offices, though, are needless cruelty. The easiest way to have a panic attack is to sit for nine hours in a place where having one (a minor irritation when it happens at home) will be a professional death sentence–– and, trust me on this, it is. If the bosses find out you have (scary music) “mental illness”, you will either be fired or given the worst projects–– gimp-tracking–– until you leave.

So-called “mental illness”, after a serious respiratory infection, is normal. The body is not meant to go breathless. Nearly half of SARS survivors, ten years after recovery, were still too disabled to return to work. These were healthcare workers (in high demand) outside of the United States. For American wage workers, the rate’s going to be worse.

I’ll give myself as an example. On May 10, 2019, I successfully interviewed for a job at MITRE as a simulation and modeling engineer. On May 13, they made an offer, which I accepted. My intended start date was Monday, June 3. Robert Wittman, who was to be my manager, somehow learned of my diagnosis (likely, illegally) and, on the (false) belief that it would prevent me from getting a security clearance, rescinded the offer. This happened to me 11 years after the original infection.

So, even if you survive severe COVID and are well enough to work, you might not find anyone willing to hire you.

Here’s my prediction, and I hope I’m wrong, but I’m probably not. If anything, these numbers are conservative.

First, I think that nearly everyone in the US will be exposed to COVID–19. The Republican Party’s forty-year campaign to destroy our government has been successful, and employers are more interested in the appearance of doing the right thing than in actually doing the right thing. The American workforce is 160 million people. I predict 100 million will be infected.

Half of that 100 million, I predict, will be asymptomatic. They’ll get the disease but show little pathology. Of the other half, I predict 35 million mild cases, 13 million severe cases, and 2 million critical cases, leading to 125,000 deaths. These numbers are far more favorable than the pattern the disease has shown, and that’s because I’m talking about the American workforce, not the entire population. Total deaths in the US could reach seven figures; working-age deaths, probably, will not.

“Mild” is a relative term, and when we’re talking about diseases like SARS or COVID, “mild” isn’t all that mild. It means the case probably doesn’t require hospitalization. Some who have mild cases will develop secondary infections. Many will lose their jobs and health insurance, producing psychiatric sequelae. These people won’t be in immediate danger of losing their lives, but many will be disabled, and some for years. I’m going to say that 5 percent of people (1.75 million) in this set will be long-term disabled–– they will lose their jobs due to illness and be unable to find work.

Of the 13 million severe cases, I’m going to use SARS as a point of reference and predict a 40-percent disability rate–– 5.2 million. This leaves 2 million at the worst level of illness–– critical, meaning organ failure or intubation are involved, and I’m going to predict that 65 percent of them (1.3 million) are unable to go to work. This gives us a total of 8.25 million.
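The arithmetic behind the 8.25 million figure can be checked directly. A minimal sketch, using only the case split and per-severity disability rates assumed in the text above (these are the essay’s own estimates, not epidemiological data):

```python
# Back-of-envelope check of the essay's disability estimate.
# All inputs are the essay's assumptions, not measured data.
infected = 100e6
cases = {"asymptomatic": 50e6, "mild": 35e6, "severe": 13e6, "critical": 2e6}
assert sum(cases.values()) == infected  # the split accounts for everyone infected

# Assumed long-term disability rate for each severity level.
disability_rate = {"mild": 0.05, "severe": 0.40, "critical": 0.65}
disabled = sum(cases[level] * rate for level, rate in disability_rate.items())

print(f"{disabled / 1e6:.2f} million long-term disabled")  # 8.25 million
```

The asymptomatic half contributes nothing to the total; nearly two-thirds of the estimate comes from the severe tier alone, so the 40-percent SARS-derived rate is the assumption doing most of the work.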

If my (conservative) predictions are right, we in the 18–65 sector are going to see “only” five years’ worth of traffic deaths from COVID–19. A big number, and worth taking seriously, but not apocalyptic. Life will, after a few miserable months, return to normal.

Millions of workers–– I predicted 8 million, but it could be half that or double that–– will be, in the wake of non-fatal COVID, unable to return to their jobs, or to get other ones. They’ll try to work–– in this country, they have no other choice–– but they will be unable to meet the performance demands of their jobs, and summarily fired. They will have six-month job gaps in 2020 and no one will want to hire them. Their careers will be disrupted and unfixable. CEOs will insist that they are not discriminating against people who survived COVID, with all the credibility I have in insisting that I have a 16-inch IQ and 200 penises. Legislators might pass laws preventing discrimination against COVID–19 sufferers, or against people with job gaps during 2020, but we all know that employers don’t need to follow laws when they can call themselves “jawb creators” and get a free pass.

Our society runs on an “if ya doesn’t work, ya doesn’t eat” model, and millions of people are likely to become unemployably disabled. Some will be unable to work at all. Some will, like me, return mostly to health, and be able to work, but struggle to get hired due to lingering stigma. COVID–19 will pass. Our bosses and owners will tell us that everything’s back to normal (it won’t be) and that we just need to get back to work. But millions of people are going to be unable to do so, and the system will discard them forever.

I should mention a personal bias: I’m a democratic socialist. Often, I read people on the right claiming that “communism killed millions”. It isn’t true. Death attribution is a complex science and you can’t just count every death that’s not by old age as being caused by the economic system in place. If you compare the death tolls of so-called communist regimes (some of which were terrible) to what they would likely have been under similarly repressive regimes (of which there are numerous examples) aligned with imperialist capitalism, the excess death rate of communism is… zero or negative. That’s not to say that communism is flawless or faultless–– only that it does not produce excess deaths over what would have otherwise occurred.

At issue is that we’ve been brainwashed, in the United States, to believe that all people who died of causes excluding old age in communist countries were “killed by communism”, every single one. Meanwhile, when capitalism kills people, it blames those who were killed. “Personal responsibility.” If that Pakistani kid’d had the good sense not to go outside on a sunny day, he wouldn’t have been freedom’d by a drone.

Communism’s public liability is that it never forgets–– and, given the severe failings of societies that called themselves communist, it should not forget. Communism has too much memory and too much history and too much responsibility. Capitalism has no memory and no history and no responsibility.

If we go “back to normal”, as our owners and managers will insist, and neoliberal corporate capitalism remains in force, eight million people are going to find themselves falling to the bottom. Months or years from now, they’ll die needless deaths. We already know what the capitalists will say. Trump already said it: “I don’t take responsibility at all.”

Not only in the next three months, but in the years following this catastrophe–– as people try to return to their careers and find their jobs gone–– corporate capitalism is going to fail. But is it going to fall? That’s up to us. If we do our jobs, yes. We cannot let our economic system and those who own and employ us, when they try to avoid taking responsibility for their role in this calamity, succeed.

Techxit (Part 2 of 2)

If you haven’t read Part 1, please do so. The story I’m in the process of telling is not one to enter in the middle. It is too strange for that–– it becomes nigh-unbelievable without the scrupulous accounting that I have painstakingly provided.

If you’re caught up, welcome to the fourth circle. Five more to go.

In case anyone forgot, Nazis are bad. That hasn’t changed in the past 48 hours.

Chapter 13: The Misappropriation of the Nerd Archetype

During its fifty-year reign, Silicon Valley has created one meaningful invention: the disposable company. That is its true product.

In order to understand how we got here, we need to look into one of the more irritating, inexact archetypes of the modern era–– the nerd, stereotypically associated with hyper-intelligence and middle-class authenticity. A nerd is endearingly harmless, straightforward and socially uncomplicated, and vaguely asexual. Nerds are authentic because they only have one mode of interaction–– they lack the social skill of keeping separate multiple versions of themselves. You might find the nerd in your life infuriating, but you can trust her.

In the 2000s and 2010s, this evolved into frank disability appropriation. Software executives–– bullies who swept into the tech industry to exploit nerds–– will often use “aspie chic”, despite being neurotypical, to excuse damage caused by their lack of empathy for other humans. This blurs an important distinction–– between a neuro-socially disabled person’s reduced capacity to appropriately express empathy, and the psychopath’s utter absence of it.

Software executives, for the most part, are people who wanted to be somewhere else. The top third of business school graduates go into hedge funds and develop trading strategies. The middle third go into management consulting or do “soft” work in private equity. The ones sent West to boss nerds around are the leftovers. They don’t like being there, and they view the people working for them as unlikable misfits, but over time they grow to view nerds as a puzzle–– how can this type of person’s earnestness, ego, and social inadequacy be used against him?

One failing of nerds is the desire to avoid “politics” and focus only “on the work”. To say, “I just want to code.” This results in programmers building systems without asking how they’ll be used; it gives us the weapons of mass unemployment, and it gives us the “performance” surveillance inflicted on honest workers.

Nerds, as I’ve noticed, don’t have a lot of leverage in today’s workplace, because they tend to fall behind the curve when it comes to performative emotion. They tend to fail at the effusive over-emotion that American culture expects. Neurotypical people understand that a person’s real job in a corporate workplace is to mirror management’s anxieties, without actually being affected–– to be Xanax in human form. Nerds, to their detriment, are straightforward and legible. They either shut out exterior anxieties (which management reads as disengagement) and focus on the work, or they let the nervousness get inside them, taking a hit to performance. They lack the essential two-facedness that workplace survival requires.

The neurological social ineptitude we observe in people with autism, and in the hyper-intelligent, is not what we find in software executives. Software executives know what the rules and expectations are, and they break them not out of unaware earnestness, but as a means of belligerence. The breaches of decorum, the microaggressions, and the brazen flashes of non-empathy all use the archetype of the nerd as air cover, but these people are something else.

What characterizes the Silicon Valley software executive is a deep-seated contempt for human “softness”–– for empathy and for what makes us human. His dream company employs zero people–– no emotional cooties–– and makes a trillion dollars per hour.

I’m not against technology itself, of course. At a nuclear or higher technology level, post-scarcity automated luxury communism is the only economic system that stands a chance, and we should race to it. Automation and globalization aren’t evil–– we have to do them right, to distribute the wealth decently. We cannot trust the current financio-technological elite to do it right. If we leave the job to them, we’ll watch as they build increasingly profligate toys, migrate to off-planet bases, strip mine the Earth, and leave the bulk of us to die.

Chapter 14: The Disposable Company

A corporation, legally, has all the rights of an embodied person (corporis). It has none of the weaknesses, however, that come with a human body. It is designed to live forever. It cannot be put in prison. It commands such an obscene share of society’s resources that it can become “too big to fail” and stake an economy’s health on its persistence. It is increasingly unaccountable to the nations in which it operates.

That’s not an artificial person. That’s an artificial god. Gods only die in one way: people stop believing. That’s what killed Ereshkigal, Zeus, Thor, and Enron. Financial markets tell us, in real time, how strongly society believes in each god–– and how willing a society is to overlook the failings of the corrupt priests who take for themselves what is sacrificed to these gods.

In the corporate gods, I’m a nonbeliever. For quite some time, though, I bought in to the venture-funded technology industry (Silicon Valley). I let myself get duped. Silicon Valley is a god designed for nonbelievers.

There are thousands of venture capitalists, but only a few of them matter, and they mostly live in a small geographical area. The ones in the in-crowd, who can arrange publicity and introduce clients, decide as a group what gets funded, what gets bought at large companies, and what gets shut down. Silicon Valley is a factory for lightweight companies that can be inflated if circumstances demand it, but that can also be scrapped or mined if necessary. If the workers form a union, or if a founder goes to jail for domestic violence, the syndicate of investors will decline to participate in the next funding round, and redirect its resources and clients to another option in that space.

By design, these venture-funded companies cannot survive without a new infusion of cash every 18–24 months, because it is not only a one-time investment these companies require. The bosses on Sand Hill Road give them clients and publicity, and hold sway on whether, should the company fail (as most do) to become an independent concern, the founders get a favorable job outcome in the acquisition.

Founders present themselves as entrepreneurs running independent companies, but they function as a middle management layer between the true executives (investors) and the workers. They have no choice but to accept the venture capitalist’s high-risk, aggressive growth plan. If the founders fail to keep the VCs happy, they won’t just lose their companies; they’ll lose their personal reputations.

Startups are risky, of course, but if you listen to people like Paul Graham, you shouldn’t fear this risk because even failure will advance your career. No, you won’t become an IPO billionaire this time around. You’ll have to take a time-out as a VP at a FaceGoog, and you’ll have to show up at a place once or twice a week, but you’ll be able to recover on your own terms. That’s not how it works at all.

Founders sometimes get “soft landings”–– most “acquired” companies are six months away from failure when bought–– but not because employers value their experience. When the VCs decide that it’s time for one of their companies to die–– they have no interest in funding it further, or sending it more clients, or pulling strings to arrange publicity–– they understand that founders typically don’t agree with the decision, and have some leverage in the shutdown process. Legally, founders are within their rights to fight, but that would delay the inevitable result, and damage reputations in the process. So, instead, the VCs make sure the founders will land in desirable jobs (e.g., VP-level sinecures at the acquiring company, “entrepreneur in residence” roles) and have acceptable financial outcomes. A startup acquisition is usually a hush fee paid to those who will have strong (if unfavorable to their ex-bosses, the VCs) opinions on why the company didn’t work out.

What about employees? What about the engineers who build the product? Oh, they get shanked.

Chapter 15: What Startup Failure Actually Looks Like–– Or: Why Your CTO Drinks

Here’s the picture most people have of business failure. The boss comes in, calls a meeting, and says that the company is defunct and that everyone’s next paycheck is the last. It’s awful, but it isn’t personal. The laid-off worker’s reputation stays intact, and she gets a comparable or better job, because of the experience and contacts she’s acquired. This is what venture capitalists, when they spew claptrap about “embracing risk” and “failing fast”, want people to imagine. Consequently, by this narrative, a startup that is not defunct is nowhere near failure.

In truth, a venture-funded startup’s failure is an ugly, drawn-out process that unfolds over years, often invisible to regular employees, that sinks the careers of innocents by the tens or hundreds before anyone figures out what’s going on.

When a venture-funded company starts to fail, it’s still able to raise money, but it has to get capital from less-connected investors and the terms get worse over time. This is why technology companies are cagey about the details of the “equity” (in truth, illiquid call options on penny stocks) they offer to compensate for low salaries. Deal terms can be mind-bogglingly complex–– I won’t get into that here–– and it’s not uncommon for a startup to be acquired for $250 million while its regular workers, after liquidation preferences and several rounds of financial shenanigans, get nothing.

The failure process of a venture-funded firm occurs in stages. Founders cede control, or initiate “pivots”–– complete changes in what the company exists to do–– and the result is a culture of constant reorganization. Upward mobility is rare because, as the company is forced to accept worse terms to raise capital, executive positions are sold off to friends of investors. Founders of foundering companies insist, while this is going on, that everything’s going exceedingly well, and they blame subordinates for shortfalls.

This results in jobs getting lost, and I’m not talking about layoffs. I’m talking about humiliating terminations “for cause” of innocents. The startup environment is a downhill highway, full of buses barreling down at a hundred miles per hour, and you don’t have to do anything wrong to be thrown under one.

The sociology of a churn-failing startup is fascinating, but for now, just trust me: this is how it always goes. Venture-funded founders do not admit they made mistakes. They blame the people under them. Technology is first to suffer blame, because it’s a soft target. Nerds don’t fight back, and nontechnical investors and bosses and clients don’t understand what they do.

To a young programmer, being a Chief Technology Officer (CTO) seems like a dream job, but it’s actually a high-stress position with a lot of turnover. When the company fails to execute, nerds get the blame.

An old-style company, when it had to lay people off, had the decency to admit that circumstances required it to terminate good workers. Often, firms would work with the press, at their own expense, to ensure that the reputations of departing workers were unharmed. That’s not how Silicon Valley does these things, because it’s run at the highest levels by empathy-deficient psychopaths–– who’ve taken the nerd label to give themselves plausible deniability.

When a tech founder founders, he presents himself as a visionary impervious to mistakes. Alas, his subordinates failed to implement his brilliant ideas. He didn’t fuck up; they sabotaged him!

This industry’s full of 25-year-old companies that claim they’ve never had to lay anyone off. Their history is one of monotonic progress that will never end. Dig deeper, and what you find is that these companies, during bad economic times, laid people off just as non-tech firms do. The difference is: tech firms disguise layoffs as firings for cause or for performance reasons–– protecting management’s reputation, at the expense of now jobless employees.

Technology founders present a mythology in which they either win big (get rich, buy boats) or die as a group. That’s not how it works, though. Startup failure takes 5–10 years to run its course as it usually involves a slow deflation in the founders’ standing in the investor community–– and these people will macerate workers by the hundreds before they go down. The “prima donna” programmers screwed it up. “Technical difficulties.”

Founders survive startup failure if they do right by their investors. If they shut the company down, when and how their bosses ask them to do so, their reputations stay intact and they can be founders again. Workers? Fired, no severance, often for phony performance reasons to disguise a layoff.

The disposable company’s political appeal is right-wing: no matter how badly the workers are treated, its workforce is unlikely to unionize.

Most of Silicon Valley culture and mythology can be understood as anti-union prophylaxis. Programmers are led to believe they’re getting some “revenge of the nerds” against the girls who rejected them in high school by working on Jira tickets and making low six figures. Workers are pitted against each other–– tech versus non-tech, designers versus engineers, employees versus “red badges” (contractors), old hands versus entry-level–– in order to keep false consciousness strong.

Workers in a venture-funded company know that if they unionize, the VCs will simply nonexist it.

Chapter 16: Post-Truth

Corporate capitalism is a post-truth world.

I don’t love Jeb Bush, but his candidacy in 2016 was not ended on substance but because Donald Trump labelled him, “Low Energy Jeb.” What does that mean? It’s hard to say. It doesn’t matter. There need be no factual truth to it. It stuck.

Donald Trump pulled a corporate move in presidential politics and it worked.

In Chapter 7, I mentioned the Carly Simon Problem. Someone misinterpreted an old blog post and stabbed me in the back–– a reliable job reference turned negative. This raises an interesting question: why are negative references so damaging as to render otherwise excellent job candidates unhireable? It has nothing to do with the hiring manager believing the content of the negative reference “is true”. It’s probably not. In the corporate game of promotions, demotions, performance appraisal, and terminations, there is no truth–– there is only power. People get what they get.

Someone who gets a bad reference is unemployable because he “got got”. Was he bad at his job? It doesn’t matter. Donald Trump illustrated this viewpoint when he slagged a literal war hero, John McCain: “I like people who don’t get captured.”

Reputation, in the world of corporate false consciousness, is an entity unto itself. It need not respect what is true. Donald Trump’s success in life proves this. He has shown no talent in running businesses. He has shown no significant intelligence or creative ability. He has been a degenerate reputationeer for forty years and it has worked. All he needed to kill off most of his political opposition, in his rise to the presidency, was a knack for nasty nicknames.

My personal view is that one should never invest oneself, or trust, in reputation. It’s too easily destroyed. It is a volatile asset, increasingly controlled by the world’s worst people. But it is not only distant, very rich malefactors one must fear. If I were a bad human being, I could render unemployable any young professional I wanted–– with less than $1,000 and in under 48 hours. I won’t get into the strategy, for obvious reasons, but it doesn’t take brilliance. Such a person would be sidelined at his job and, over time, terminated. Prospective employers would Google him, see a slew of rumors and whispers, and pass. Truth doesn’t matter, in the corporate world. No one wants to hire someone with a bad reputation.

Chapter 17: Reputation Management–– Keeping the Gods Alive

Why is reputation so important in the modern economy?

Largely, it’s because the most highly compensated people in our workforce do absolutely nothing. There are workers and watchers–– most white-collar people are watchers who participate (sometimes, indirectly) in the buying and selling of others. Those who do measurable work can be tracked, surveilled, and bargained against. The winning strategy is to get an advanced degree, keep one’s contributions abstract, destroy anyone who has the gall to point out the needlessness of one’s activity, and focus full time on the protection and expansion of one’s own reputation.

The most highly compensated people justify their consumptive existences by saying they “allocate resources”. That is, they do nothing but “solve”, in political and suboptimal ways, problems that could be solved organically by a less oppressive system.

Napoleon may or may not have called England “a nation of shopkeepers”. Thanks to corporate capitalism, we’ve become one of reputation managers. A worker is promoted, demoted, passed over, or fired based on his contribution to his boss’s reputation. The middle manager is likewise rewarded or punished for his perceived effect on the reputations of the executives above him. A CEO’s job is to bolster the reputation of the company he supposedly also runs. Innovation is nonexistent; the work itself hardly matters at all. It’s all about reputation, but why?

Above, I mentioned that the modern business corporation isn’t an artificial person but an artificial god. What can kill a god? Disbelief. Whether a true God exists in the abstract is another discussion, but ethnic gods are beasts that exist in society because they have reputations for existing. Corporations are the same.

The business world runs on reputation–– a product of cognitive laziness, social inertia, and a low degree of respect for accuracy in information. Every white-collar worker’s job is the management of some reputation. This is why a rumor about someone, no matter how absurd or demonstrably false, renders him unemployable. The rumor’s existence shows poor performance in the management of the target’s own reputation. How can he be trusted to defend and expand a boss’s image, if he can’t even control his own?

Nothing is true or false, in the scatological agora of corporate life. There is no good or bad content; all is just content. Things are loud or quiet, amplified or ignored. Rank begets rank. The longest eigenvector wins the right to poke you in the eye, or somewhere else if it so desires.

Chapter 18: Is the U.S. at Risk of State-Level Fascism?

I have no love for Richard Nixon, Ronald Reagan, or George W. Bush. This said, their professional ethics, while in office, were world-class compared to those of the typical corporate executive. Richard Nixon resigned over offenses that, in the private sector, would be everyday office politics. Government, we hold to a higher standard.

The Age of Reason, as begun in the European 1700s, led to the institution of rules-based, rational governments operating on laws rather than clerical fiat or the whims of charismatic individuals. The rich have largely accepted rational government as beneficial, the alternative being unpredictable; but in the 1800s when this led to discussion of rational economy–– also known as socialism–– they did what they could to slam the brakes on progress. National governments could be democratic, constitutional, and legalistic… this would make their operation slow-paced and “boring”, which would be good for business… so long as no one interfered with the boss’s “right to manage” on the factory floor.

Rational government and pre-rational economic principles coexisted for some time, but modern technology has made this untenable. One or the other must go. Which?

The Age of Reason has always had its skeptics. A pervert and not-even-middling French intellectual, Donatien Alphonse François–– also known as the Marquis de Sade–– managed to gain relevance by his inflammatory anti-rationalism. He believed that, given the human thirst for power and delight in the suffering of others, rational government could not exist. (He was wrong.) Donald Trump, our first truly corporate president, has doubled down on anti-rationalism. In a perverse irony, his supporters find him to be “a straight shooter” even though half of what he says is untrue. He uses mendacity as a power move (a business-world trick he has, over decades, perfected) and to some people, by doing so he communicates the only truth that matters–– that he’s in charge.

Fascists do not believe in truth. They only believe in power. Power decides what is true; power makes the rules. Donald Trump was impeached (unsuccessfully) for abuse of power. To a fascist, the term “abuse of power” makes no sense. In their view, we who are out of power are “losers and haters” using the term abuse toward power because we do not have it. To a fascist, no rules should exist over power.

Donald Trump is racist, misogynistic, self-indulgent, mendacious, volatile, deleterious, incompetent, and stupid. Is he a fascist?

Chapter 19: What Is Fascism?

I described earlier that no one is truly a nihilist, because meaning voids get filled. A person can be unprincipled, but that is different. Systems can be nihilistic or even destructive of value (cf., corporate capitalism) but, in an individual, nihilism is untenable.

Political nihilism, when observed, has a flavor of might making right. This goes back to antiquity. Trial by combat, on its own terms, solves two problems at once: the party that wins goes free; guilt passes to the deceased. Everyone wins because no one is alive to lose. It’s a great system if you don’t believe in truth.

Fascism, of course, isn’t just moral nihilism; there is more to it than a belief that might makes right. While fascism is fundamentally empty, it presents itself not as nihilistic but as a rebellion against nihilism and relativism. Fascism promises answers. It is decadent and empty, but it blames society’s decay and emptiness on vulnerable minorities or external enemies, and by doing this, it fills the failing society’s purpose void with hatred.

Corporate capitalism has little apparent interest in state-level fascism. It is amoral and nihilistic, but it is too lethargic to overthrow democratic societies when there is no profit in doing so. Much of what drives fascism to emerge in its wake, as it did in 1920s Italy and 1930s Germany–– and as it could have done in 1930s America–– is that people would rather live with a bad purpose than live, as they would under corporate capitalism, with no purpose.

Fascism doesn’t simply assert that might makes right. It celebrates the notion. Like philosophical sadism, it confronts something ugly in human nature (the problem of evil) that stymies well-intended philosophers and theologians and, instead of treating the malady as a flaw to be worked around, embraces it and declares it good.

Every time we encounter another person, we decide whether to cooperate or compete. Societies generate rules to determine whether we favor one or the other. A nation might use a market (competition) to price commodities but institute a basic income or welfare state (cooperation). Representative democracy holds that we cooperate as citizens, but that those who wish to gain and hold power must compete for it. Competition, then, is introduced to make power accountable to the governed.

Fascism is the dual opposite of that. The people are divided against each other, constantly measured and compared, and locked in endless battles for artificially scarce resources. Power, at the same time, unifies. State, cultural, religious, economic, and corporate power congeal into an inflexible fasces.

A fascist society introduces competition to make people accountable to the ruling elite. There will be competition in high places, but it must only be seen from above. Fascism’s ruling class must present a unified front at all times. There will only be one political party, one leader, and one vision for society. All else is the enemy–– the other.

In the corporate world, people with bad bosses think they can improve their situation by appealing to HR or a higher-level manager. I have never seen anyone make that work. Usually, they get themselves fired. To a fascist, the attempt to divide power against itself is unforgivable.

Chapter 20: Why Corporates Might Favor State-Level Fascism

It’s said that if you scratch a capitalist, a fascist bleeds.

Corporates, outwardly, like to play both sides. They take on liberal identity politics and conservative economics, while striving for an image of centrist pragmatism. They will almost always, however, favor a rightward lurch over even modest leftist progress. Why? They view fascism as an in-one-country problem. They will move their families if safety requires, reallocate capital to take advantage, and wait out the storm. Genuine social progress is more of a threat to their capital and social status, and–– worse yet–– likely to have longevity. What is “socialism” before it is implemented becomes, once it is there, something people like–– and impossible to roll back.

The United States has always associated leftist politics with radicalism, but in our recent history, we’ve faced orders of magnitude more danger from the right. The Weather Underground, at its worst, was a nonentity compared to the horror of the Ku Klux Klan. We live under active threat–– school shootings, theater shootings, church shootings, synagogue shootings–– from a belligerent, far-right counterrevolution the corporates manufactured to divide the proletariat against itself, for the benefit of the ruling class, and to distract people from the widespread, but notionally centrist, looting of society by the executive class.

Why do corporates present themselves as centrists? Frame dragging. They want to nudge the Overton Window to the right, but they do so by holding on to the zero point. Despite Machiavelli, they’d rather be loved than feared. Machiavelli’s advice in The Prince pertained to an individual seeking to block short- and medium-term challenges to his power, but an owning class that wants to hold power forever will prefer, in peacetime, to make itself loved. That is the purpose false consciousness serves. In event of active conflict, however, they will resort to fear.

The way I’ve discussed fascism may sound bloodless. With my focus on the unification of the ruling class–– and workers competing to serve the masters–– it might seem that I’m downplaying fascism’s end-stage horrors: racism, misogyny, religious bigotry, belligerent nationalism, and genocide. Not so. Those emerge as a matter of course. A fascist leader’s goal is not to rack up a body count per se, but to hold power at all costs. This said, the people governed will not tolerate ceaseless competition without a narrative of expansion. If the poor figure out that they’re being pitted against each other for the benefit of the rich, they’ll revolt. Instead, says the fascist, they’re being prepared for a grand conflict, an upcoming war–– in which they are predestined to win, because of national, racial, religious, or cultural supremacy–– in which the society will prosper and expand (Lebensraum) through the vanquishment of undeserving, “inferior” people.

Narratives in the startup’s corporate playbook are not especially different. The “lean” (understaffed and underfunded, with workers artificially divided against each other) startup is destined to drink the milkshake of its “bloated” competitor because “We have a better culture”, because “All they do over there is play politics”, because “No one over there works Sundays.” Sure, many of the workers–– the weak, the unworthy–– will burn out or get fired; but in the end, the startup will demolish its competitors because of its superior “culture”.

Chapter 21: Masculine Crisis

Economic systems like ours produce epidemics of masculine failure. High-status, rich males never need to grow up (that is, become men); low-status men are denied the opportunity. Men and women lose.

I recognize that I need to tread with care here. I make no absolute claims about men or women. I categorically reject any line of thinking that declares one sex superior to the other, or that encourages the stigmatization, exclusion, or punishment of those who do not conform to sex or gender roles.

It’s an inoffensive, common leftist position that gender is entirely socially constructed, but I don’t think that’s true. Much of it is, of course. That Brian is a boy’s name, and Emily is a girl’s name, that’s socially constructed. That pink is a feminine color, or that truck driving is a masculine job, that’s socially constructed. That said, there are patterns that recur in societies to such an extent as to suggest sex-linked differences in the aggregate–– in probability distributions, even if not relevant at the individual level.

I agree that gender roles do not suit everyone. This said, if one looks at the cultural mainstream, one finds deep-seated attitudes that, right or wrong, will not be abandoned by 90 percent of the population at once. Heterosexual men, in general, want to see themselves as masculine, and are attracted to women they perceive as feminine. Heterosexual women, in general, want to see themselves as feminine and are attracted to men they perceive as masculine. I’m making no statement on what should be–– only one on what is.

Corporate capitalism has a problem. It requires men to live on their reputations. That is not masculine. Subordinate men, in a courtier society, are seen as obsequious and supernumerary. Men do not want to see themselves that way; women are not attracted to such men. For this reason alone, corporate capitalism is unstable.

To be clear, I don’t think women should have to live that way either. I focus on the masculine side of this crisis because that, in my view, is what drives the lurch toward fascism. Men who support demagogues like Trump do so out of rage at their emasculation by corporate capitalism. Women who support demagogues like Trump do so out of rage at the destruction of men in their lives.

Masculinity is, and will always be, the weak point of hierarchical courtier societies. Traditionally masculine endeavors (hunting, exploring, defending) do not pay. Corporate capitalism must sell the narrative that making money is a sex act. A real man provides, even at the expense of his own comfort. If this means he peddles drugs to children, or builds bombs, that’s what he does. If this means he supports a fascist regime, that’s what he does. Freedom is just another disposable comfort of lower rank than his obligation to “be a man” and provide.

The problem, of course, is that court life is emasculating. Men who earn their coin by subordinating to other men are useless. Women are the reproductive bottleneck, not us. In our role in the reproductive equation, we’re replaceable.

Corporate capitalism tells men they must provide, but only leaves them one way to do it, which is to be emasculated by other men. Eventually, men figure out that they’ve been duped. They get angry. Equally angry are the women who, in a decaying society where male adulthood is increasingly rare, cannot find husbands.

Is it emasculating to be an organizational subordinate? Five thousand years of human history has produced exactly one counterexample, one context in which a man can be subordinate and fully masculine–– the archetype of the soldier.

We see yet another reason why fascists love war.

I could write thousands of words on toxic masculinity, but I’d rather not. It’s disgusting and depressing. Our economic system induces toxic masculinity. The degradation of the feminine distracts men from the game being played against them and women both. At the same time, toxic masculinity is what keeps the corporate system going. Inertia does not suffice to explain it. The corporate system is always busy. It propagates false consciousness, enforces a social hierarchy, resists challenges, and bolsters the image of a hereditary elite disguised as meritocratic ubermenschen. That’s not a conspiracy–– it’s all done in the open, and legal even if its methods aren’t–– but it is a lot of work. Who keeps it going? What motivates the plutocrats and corporate executives who (unlike us) could easily retire from the world’s shittiest mini-game, but keep playing?

For the most part, the system’s raison d’être is to procure sexual access for rapacious, disgusting men. Harvey Weinstein, Roger Ailes, and Donald Trump wouldn’t have a lot to offer women if they had to compete on an even footing with socioeconomically inferior but otherwise superior men like me (and like 99% of my male readership). Corporate capitalism is a way for these odious men, using paperwork and poverty, to disempower their competition.

The reason we do not have universal health insurance in the United States is that, in 1947, a bunch of racist Southern Senators fought a movement that would have resulted in desegregated hospitals. Millions of people have died of lousy or nonexistent health insurance because a bunch of now-dead, inadequate white men feared losing “their” women to… not the British Broadcasting Corporation.

Chapter 22: Is Donald Trump Fascist?

This might be the only section where I don’t know the answer. Is Donald John Trump a fascist? I don’t know.

He is heinous and bizarre. We could be debating fifty years from now whether he is a fascist or opportunist. His mental health is questionable, but I’m not qualified to opine on that topic. He seems to have no coherent ideology. There are fascists around him–– that’s clear. There are also opportunists around him. There may be one or two noble souls putting their careers at risk in a sacrificial effort to limit Trump’s damage. As for whether the man holds fascism in his heart, we’ll tackle that some other day.

Objectively, we can say that Donald Trump has normalized behaviors and practices that threaten democracy, making the job easier for any fascist who follows him. What about capability, though? Is he capable of making himself a fascist threat to this country? Three years ago, I would have said, “No, absolutely not.” On that, I must admit my confidence has waned.

Having studied fascism, I would have said in 2017 that Donald Trump would be unable to pull off the requisite image. Adolf Hitler was a wealthy, self-indulgent, flatulent buffoon who had a number of trysts, and Mussolini’s sexual perversions are now legendary, but the public images these men presented were more in line with stoic, traditional masculinity than the flagrantly toxic variety of Berlusconi, Bloomberg, and Trump. It was all a lie, but the Führer presented himself as a simple-living celibate bachelor, “married to Germany”. He himself said that a politician should never let himself be photographed in a bathing suit.

Donald Trump, by contrast, lived like a clown for his entire adult life. I did not think, in 2017, that such a man could sustain enthusiasm of any kind, fascist or otherwise, for more than a couple years. I expected his movement to die out as he became part of the establishment he railed against.

So far, time has proven me wrong. Toxic masculinity hasn’t been a liability for Trump. He has doubled down on it, to no cost to himself. Fascism has proven itself protean.

This acknowledged, I will not say that Donald Trump poses no fascist threat to our society. He clearly does. But I continue in my belief that he hasn’t taken the most efficient or obvious route to fascism. In 2016, he nearly lost. His approval rating is lousy. If he wins in 2020, it will have more to do with Democratic incompetence than with any appealing personal traits of his.

All of this said, and recognizing that a fascist can play either to traditional or to Trump’s overtly toxic masculinity, the greatest fascist threat in my view comes not from Trump, but from Silicon Valley. We could see, in 2024, a young technology founder running on an image of centrist competence, with a sterling reputation (because anyone who would say bad things about him has been silenced), who will present himself as “post-political” and an antidote to “these polarized times”. I would imagine that he would avoid the public self-indulgence of Donald Trump, while nonetheless bolstering his personal reputation (at the expense of others) using the same dirty tricks he learned in the corporate world.

Whatever Trump’s fate, what Trump represents will not go away. The corporate class has taken notes, and continues to take notes, on what works and what doesn’t. The owners of everything are watching his deleterious presidency and learning what can be gotten away with. So long as corporate capitalism remains our economic system, we shall always be one bad roll of the dice away from nation-level fascism.

Fascists fight dirty. I know, because I’ve seen how they fight.

For the purpose of this essay, I’m going to call militant fascists, Nazis, differentiating them from the abstract notion of a person who might support fascism but not participate in enforcement. The far-right militants I’m about to discuss are not members of the German NSDAP, because it no longer exists. They may or may not be in that nonexistent racial category called “Aryan”, although most of them are white-male supremacists. The people on the intellectual fringe who spout odious politics on the internet, we’ll stick to calling fascists. The enforcers and dirtbags who–– let’s say–– send death threats to leftists and feminists, or who cause people to lose job opportunities they were qualified for, those are the modern-day Nazis. We will have to fight them.

Chapter 23: Panic Disorder (Trigger Warning–– Mental Illness)

If state-level fascism comes to the United States, I will be one of the first to die.

This issue, for me, is not about so-called virtue signaling. Whether I’m a virtuous person, that’s for another discussion. To be in this fight, for me, isn’t a choice.

I can’t become “not political”. In a more liberal time, I wrote political content under my real name. At this point, there is no harm in my continuing to do so. I am an outed leftist. My existence is political. I’ve been doxxed over and over. I assume I have no privacy. I don’t feel like I have anything to hide.

Far-right operatives got me banned from Hacker News and Quora on defamatory pretenses. Far-right operatives have sent me death threats. Far-right operatives have caused me to lose job opportunities even after successful interviews, leading to offers. The Nazis know who I am; they will not forget me.

Of course, I chose to speak politically in the open. There is no such thing as an “ethnic leftist”. To share my views is something I decided to do, not something I was born into. Were that the only factor pinning me inflexibly to one side, in any future conflict with fascism, I could not say “The fight chose me.” I would have to fess up to having entered it.

So, here’s the other part of the story.

I have a chronic neurological disability, manageable but not curable.

March 3, 2008, was an unseasonably warm, sunny day in New York. I was recovering well from an ordinary bout of influenza. Around 2:30 that afternoon, a stabbing sensation erupted in my throat, spreading throughout my body. Laryngospasm. Couldn’t breathe. Tried to drink water. Couldn’t swallow. I was sure that I was going to die, in front of my co-workers, right there on the floor. A woman, able to see my distress, called emergency services.

Diagnosis: panic attack. There was a physical cause to my illness; more on that later.

The second attack, on March 10, was the worst I ever experienced. I had written the March 3 attack off as a one-off, but now I realized I would keep having these things. It came in waves, for 23 hours, until fatigue took over after midnight the next day. During that one, I considered admitting myself to a psychiatric hospital.

I had more, tens or hundreds, over March and April. Often, I could not eat a meal because I could not swallow. After some time, I found a competent doctor, an ENT in Chinatown who found a bacterial plaque, left over from the flu–– an easy problem to treat.

Problem is, once the body and brain are “trained” into the panic process, it becomes a thing that can happen, without warning, at any time. Panic attacks, for the most part, aren’t “about” anything–– nothing in daily life merits such an extreme bodily reaction. These attacks don’t often have clear triggers and, at this point in my life, I don’t think panic is the right word for it. I don’t actually panic. I’ve cycled through the five hundred or so symptoms this horrible disease can throw–– chest pain, shortness of breath, auditory hallucinations, derealization, tachycardia, tremors, tingling, intrusive thoughts, sudden depression, vomiting, akathisia–– and, having survived all of this nonsense, I’m no longer scared of these attacks. It took me years to get to this point, mind you, but they’re more like severe headaches than anything that would cause me to “panic”.

Truth is, if I have a panic attack in public, I handle the episode better than anyone else. I’ve been through it, hundreds of times. I know that these things end.

I won’t mince words, though. A true panic attack is extremely unpleasant. Even now, I’d probably pay $500 not to have one. I would wager that a quarter of the population has had the movie version of a panic attack–– racing heart, hot flashes, mild visual disturbances, nausea and vomiting. I consider that a mere anxiety attack and would put it at 2.25 on the panic scale, as I’ve come to know it. At 4, the level I have about once per year, we’re talking about symptoms that would put a civilian in the emergency room–– if he could form the words to get himself there. At 6, every system in your body’s screaming, and you’re begging God for your own quietus… and you’ll be sore for a week afterward in muscles you didn’t know you had. As for 8… well, an 8 compares unfavorably to a bad salvia extract trip. It’s worse because, at the end of it, you know that it came from you, not some stupid chemical you ingested.

I haven’t had worse than 5 or so since 2010. In my experience, this sort of thing gets worse, and then it gets better.

Chapter 24: One Hit Point

I mentioned my mistake, in summer 2008, of leaving finance.

It became clear that I was not suited to work at a trading desk. By necessity, prop trading is done in a noisy, open-plan environment. I despise the software industry’s use of open-plan offices–– for programmers, they are unnecessary and qualify as hostile architecture–– but there are a couple jobs that necessitate them. I’ll defend trading firms for using open spaces, because seconds matter in that game, and a traditional office layout would be untenable.

What irony that I left finance because of open-plan offices, just before the plague of Agile de-professionalization, one-sided transparency, and (of course) open-plan fetishism hit the software industry.

I never had an attack as bad as the second one, on March 10, 2008. The dreaded Big One that would render me permanently insane, never came because it does not exist. That said, attacks continued to come.

A severe, punishing experience leads your mind to look for patterns, even if none exist. This produces phobias. If you have an attack on a crowded subway, you might mistakenly attribute it to the environment. At bottom (autumn 2009) I was a shut-in. Home was safe. Work was mostly safe. I could go back home (Pennsylvania) with preparation, but I’d sometimes have a nasty attack on the train. I didn’t date, because any time my heart sped up, the fear of an attack (anticipatory anxiety) would hit me. “Safe spaces”, as they do, got smaller and smaller, because no such thing exists. As Confucius said: Wherever you go, there you are.

No one ever intends to become a shut-in, to become agoraphobic. It happens one day at a time. To have panic attacks on a regular basis produces lethargy, apathy, and aversion. Dysfunctional cognition and self-reinforcing superstition accrete over time. Eventually, the entire world feels unsafe for no good reason.

It was not easy, but I built myself back from scratch–– recovery from 1 hit point. Limit break after limit break. I re-established the confidence to do ordinary things. I started dating again and got married. There was a first-again airplane ride, a first-again ride on a bike, a first-again drive, a first-again long hiking trip. I rebuilt myself from zero and, in the end, built a better self than what had been there before.

Petty phobias disappear when you beat the monsters, as I have. Public speaking is said to be the number-one fear of most people. (Death ranks second.) It’s not an issue for me. I took up scuba diving in 2015. In 2018, I swam with sharks (no cage) under 78 feet of sea water, off the coast of Honduras. That’s not as dangerous as it sounds, but it seems to impress people, so we’ll use it.

I must speak on the issue of safety. I can drive during an attack–– it’s unpleasant, but it’s not unsafe. However, there are things, given my diagnosis, that I will never do. In open water, I can safely ascend from 130 feet (4 minutes) in event of unexpected neuro-adrenal fuckery. Cave and wreck diving, those are out for good.

Chapter 25: Fearlessness (?)

The petty fears that restrain most people do not faze me.

It’s said that death and public speaking are the human creature’s two biggest fears. Death, I haven’t done yet, despite some half-hearted attempts by others. I can’t speculate on how much fear I’ll experience when I get to that point. In the abstract, I have no dread of it. If there is a hereafter, I look forward to meeting it; if there is not, I will not exist to be disappointed.

Public speaking, that one’s easier. I like it. I’m good at it.

Funny thing is, stress itself doesn’t cause the dysphoria that turns into panic. I can handle swimming with sharks, biking in heavy traffic, and the physical sequelae of an extreme workout. When there is purpose to the stress, I handle it well–– better than most people. It’s gnawing, pointless stress that angers me.

Inoculation to extreme, underworld-level fear has left me immune to the petty fears that rule most people. That is an asset in life. In corporate undeath, it is not. One achieves social success in the corporate world by mirroring management’s anxieties without becoming affected–– because if one becomes as dysfunctional as they are, one will be unable to perform. I am not good at this. I can, as corporate managers might desire, induce fear in myself based on minor discomfort and meaningless shit–– I am diagnosed as having a brain far too good at that–– but I have learned that it is unhealthy.

I’ve faced my own death, thousands of times in a body-brain mock execution, and quite a few life-threatening situations I haven’t talked about. Given this, I can’t force myself to care about “Sprint 31”. If a director’s worst fear is explaining to his VP that his software, version 7.0, doesn’t support the blink tag… he and I are not going to relate well.

I’m terminally one-faced. Mirroring another person’s anxieties without being affected by them, which is the most important office survival skill, is one I lack.

I don’t handle the open plan office well.

Stress? Under 80 feet of water, surrounded by sharks, with a compressed-air canister at my back, I’m fine. Diving is pretty safe if you follow the rules and keep your wits about you. Worst-case scenario, I’m 160 seconds from the surface. Giving a presentation in front of hundreds, on three hours of sleep… no issue. If my nervous system bitches out, I’ll play it off as a headache.

The mandatory 9-hour economy-class flight from nowhere to nowhere, five times per week, is not physically stressful. Its main demand is that I sit in a chair, to be seen by other people, and hold in any farts. Hardly Herculean, that. The problem is not the level of stress–– it is that the stress is so pointless.

Chapter 26: The Open-Plan Virus

I won’t opine on Jordan Peterson’s lobster theories, but it is true that we as humans are attuned to social status. Public speaking is stressful, but it’s a positive stress–– the stress of giving a compelling presentation, of having something to say that merits the high-status position of being the speaker. There is a job to be done; there is a point to the stress.

Office culture is not illegible. To be visible from behind is a sign of low status. Though tech companies boast of their “egalitarian” office architectures, the truth is that you can figure out exactly who matters and who doesn’t by counting lines of sight. Yes, the managers work in the open space, but they all have walls and windows at their back. This is how the company shows they are trusted, supported, and (no pun intended) backed up. The people whose monitors are visible to the largest number of people are the most disliked, least trusted, and first to be fired.

Additionally, the claim that open-plan offices are egalitarian is infantilizing, because executives can come and go as they please, while workers cannot.

Open-plan offices are not productive. People get less done, perform worse on tasks requiring concentration, and get sick more often. Technology executives cite “collaboration” as a reason for using these horrible offices. That’s bullshit. The topic has been studied to death. People do not become more collaborative when they are enervated by constant unwanted visibility and contact. In truth, these offices breed low-grade hostilities due to noises, odors, and invasions of personal space.

What’s the real reason for technology executives to prefer open-plan offices? Never assume malevolence where ignorance suffices; I think 70 percent of it is that these offices are cheap, in all senses of the word. Another 10 percent is showmanship. The open-plan fetish began in the startup world as a means for founders to showcase how many busy nerds they have working under them. In this light, open-plan programmers are valued as carbon-based office furniture more than for the code they produce. (Having seen the quality of code these startups produce, I… nah, let’s skip this one.) A further 10 percent is classic managerial malignancy: control and surveillance. Finally, the last 10 percent of the motivation is the diametrical opposite of “fostering collaboration”. If personal space becomes another artificially scarce resource for the proles to fight over, they will grow to loathe each other’s company, and this drives to zero the probability of their cohesion around collective interests.

Open-plan offices, for programmers, exist to humiliate them–– to remind them that they’re unimportant and untrusted.

In that environment, my skin crawls, because I feel like I don’t belong, because in fact I know I don’t belong. In an open-plan, Agile Scrotum software shop, where it’s normal for people to interview for their own jobs every day as if they were interns or on PIPs, I feel like an adult sitting at the kids’ table.

Chapter 27: Trump’s America

A lot happened in 2016.

Far-right attacks on my career became common. I had to start hiding my tracks. I skipped a couple tech conferences because I couldn’t safely go to the cities where they were held. I was assaulted twice.

By this point, I was planning a “techxit” from the private-sector software industry. I had a strategy that probably would have worked but, due to post-2016 dysfunction in the public sphere, did not. During this time, I joined one of the so-called “artificial intelligence” startups, a venture-funded outfit in Reston, Virginia, as a software engineering manager.

I’ve wrestled, over the past month, with the question of whether to name this company. Its founders are absolute fecal garbage. If I could name them without collateral damage, I would. If a time comes when that is the case, perhaps I will. The operation was one of chaos induced from the top by a culture of childish management and dishonesty to investors and employees alike. Why have I chosen not to name this company? Past experience.

The problem, when you slag a company, is that the people responsible for its terrible culture get off scot-free. Barring a criminal conviction or an eight-figure lawsuit, the scumbags will always be supported and protected by other scumbags. The ones at risk of being hurt are regular workers–– fellow proletarians–– who did nothing wrong, but now have a tarnished name on their résumés.

Soon enough, I will expose by name an unethical organization (not an ex-employer) because it will be in the public interest for me to do so. In the case of this so-called “artificial intelligence” company, I see no public-interest reason to name it. I have already made its investors aware of the founders’ unethical behaviors. I have done my job.

I ran a team of 17 people, and I must say this. The people working for me were professional, capable, intelligent, and all-around great to work with. It was a pleasure to have such a high calibre of people under me, and I would hire any one of them again. Not one of them is at fault, in any way, for the ethical faults of the company where they and I worked.

Like most software companies in the late 2010s, this outfit used an open-plan office and discouraged working from home. The environment was tolerable, for a while, because as a manager I had the right to use one of the unallocated side offices (“breakout” spaces). As a supervisor, I sometimes used it for one-on-one conversations. As a person who diligently tried to excel at his work–– and did, in fact, excel at it–– I sometimes used the side office to get my job done. As a person with a neurological disability, I sometimes used it to ride out an attack.

Chapter 28: Open-Planic!

Panic attacks, as I’ve said, aren’t “about” anything, although patterns exist. Phobias develop because anxiety about panic attacks can, itself, induce panic attacks. Trying not to have a panic attack can, in fact, cause one.

Open-plan offices and micromanagement (Agile) exist on the theory that petty inducements of anxiety nudge the lethargic and unmotivated into marginal employability. That may be. I’m not an expert on the lethargic and unmotivated. For self-motivated people like me, though, those petty insults reduce performance. We don’t need to be watched; we’re at our best when left alone.

Such offices create anxiety even in the neurotypical; for someone with panic disorder, they feed the anticipatory anxiety, the anxiety about anxiety. Have a panic attack in your living room, and you’ll probably be fine in an hour. Have a panic attack in an open-plan office, and you’ll be working somewhere else in six months. Perhaps they’ll find a way to fire you. More likely, you’ll be demoted and gimp-tracked. I’ve had it happen to me; I’ve seen it happen to other people. Bosses hear “panic attack” and think not “manageable neurological problem” but “personal weakness”.

I dislike the term “mental illness”. I think it gets a key thing wrong. It is not the mind (or, if you will, soul) that is sick, in a person with depression, bipolar disorder, generalized anxiety, panic disorder, schizophrenia, or any other of these terrible diseases. We’ve moved beyond the four humors, and it’s time to move beyond sick souls.

These diseases are physical, but have mental symptoms. Thing is, we know today that most physical diseases have some mental symptoms. The lack of a clear causative mechanism does not merit a leap into superstition and stigma. The truth is that disorders like mine deserve the status of “boring” health problems like atrial fibrillation or cluster headaches. They’re unpleasant and can be dangerous, but they deserve neither the romance nor the stigma assigned to them.

One of the reasons panic disorder gets easier to deal with, over the years, is that one learns that the attacks are physically harmless. I’ve had them while driving–– hellish, but not unsafe. Problem is, in the corporate workplace, a panic attack is not harmless. It can become cause for the bosses to presume personal weakness or reduced leverage, leading to termination or reduced opportunities–– gimp-tracking.

The rule of the open-plan office is simple: don’t panic. Don’t panic. DO NOT PANIC. Don’t panic. Panic? Don’t panic. Don’t panic don’t panic ¿panic? don’t panic they’ll see you they’ll judge you. DO NOT PANIC WHAT IS WRONG WITH YOU. Don’t. Don’t. Don’t don’t panic. DON’T PANIC YOU CRAZY MOTHERFUCKER. Breathe. Not so fast. Not so slow. Breathe. If you forget to breathe you die. If you breathe too fast you panic you lose your job. Don’t. Don’t panic. Stop staring it’s creepy don’t panic. Stop it stop the panic this cannot happen here it is getting the better of you. Don’t panic don’t panic stop your panic they all see you. They all see you. You haven’t written a line of code for ¡¡¡13!!! minutes you panicky broken motherfucker you soon-to-be-jobless motherfucker they see you they see you as a they see you they ¡see! you. Panic, don’t. Don’t. PANIC. You cannot grasp the true form of… et cetera et cetera.

One might ask: is there medication for this sort of thing? There is. High doses of SSRIs reduce the frequency and intensity of the attacks, although the side effects are unpleasant. Benzodiazepines are a good short-term treatment, but they’re not a panacea. For one thing, it takes time for them to have an effect–– you take the drug to put an upper bound on the attack’s duration, and to smooth your recovery, but there is no way to abort an in-progress attack. You still have to get through the next five or ten minutes.

Furthermore, regular use of benzodiazepines (say, for prophylaxis rather than treatment) carries a high risk of tolerance and dependency. These drugs are a lifesaver when needed, but addiction is hellish and I do not use them except when necessary. My first-line prophylaxis, if I begin to feel raw at 2:00 in the afternoon, is to take a side office. Perhaps the attack will never come. If it comes, I’ll ride it out and get back to work as soon as I can.

At this particular company, the fake-news AI company in Reston, that’s what I did: used a side office. Most of my team was remote, so it didn’t matter where I worked. Other people began to use side offices, too. I did not intervene; who was I to say they didn’t have a legitimate reason for using them? (I hate open-plan offices and think everyone has a legitimate reason to break away.) Anyway, executives took notice of people using the side offices to get work done, and HR got involved. I was labelled the one who “started the trend”, which I suppose technically I was.

The CTO and my immediate manager pulled me into a meeting, late in the afternoon on January 25, 2018. I was admonished about the use of side offices, even though I (and possibly others who used them) had legitimate reasons to do so. I was told that the CEO had “concerns” about the frequency of my doctor visits–– not for panic disorder, but for a physical problem that would later turn out to be gallbladder disease necessitating emergency surgery. Changes, therefore, would be made to my job duties.

Some of the changes I agreed with. I had a large team doing complex work and was excited about the prospect of running a smaller team and becoming an “individual contributor” (non-manager). I counter-offered with a proposal that was mostly identical, except that my non-managerial contributions would be in data science–– a natural fit for an AI specialist at a company that claims to do AI. The CTO’s refusal of this offer, and his explanation, made it clear that he was aware of my disability and presumed lesser leverage on the job market–– the ol’ gimp track.

Recognizing the obvious demotion, I confirmed in writing that I suffer from a disability, but would continue to give advance notice of doctor appointments (which, as I planned, might be interviews).

I was fired the next morning. Illegal? Yes. Expected? Not so quickly, no. Usually, when a company wants to fire someone for an illegal reason, it offers severance in exchange for an agreement not to sue or disparage the company. This firm, instead, decided to place a bet on extortion.

Hold on to your hats. Keep your arms, legs, and tentacles inside the car at all times.

Chapter 29: Octopus Royalty

Not long after this, I spoke to an attorney. She described my case, barring perjury, as a slam dunk. The problem, of course, is that “barring perjury, 100 percent” does not mean a sure thing.

Suing an employer is not like suing a tire manufacturer. You’re going up against an organization that can–– and, knowing the founders of this company, I am sure they would–– threaten people with their jobs into lying about your performance and professional ethics. Unless you think a seven-figure award is possible–– and for a highly skilled 35-year-old whose disability is mild and intermittent, that ain’t likely–– you are often better off to make like Frozen and let it go, especially when the adversary is a startup that has the option of just not paying. If you win a judgment against a FaceGoog, you’ll probably collect. Against a money-losing startup? Remember what I said about a disposable company. It’s hard to collect on a judgment, after suing a hole in the ground.

Nonetheless, the company perceived it had a lot to fear from me, so they made threats–– the usual negative publicity, frivolous litigation, nothing I took too seriously. What I did take seriously was when my ex-manager said things that were not true about my departure to former colleagues. I informed him that I would not tolerate illegal, defamatory statements.

Threats continued. I dug up what I could about the founders and executives; they dug up a few things (minor shit) about me. Most of what I found doesn’t matter and is not well-enough sourced for me to get into it, even without naming them. I will only say this. One of the people involved in their extortion effort, I was able to link to a racist, far-right organization that advocates violence.

Great. Nazis in my life.

Chapter 30: Techxit Achieved (?)

That was how 2018 began. After that, I did some consulting, some weightlifting, and some work on Farisa. The next part of this story occurs mainly in May 2019.

Before 2016, this would have been front-page news in the Washington Post, the kind of scandal that would have led to public resignations of top management. Now, we’re so used to public dysfunction that I don’t know if it even registers. But it’s my story, so I’m going to tell it.

Given my views of the technology industry, it shouldn’t be surprising that I tried to get out of it. In April 2019, I applied for a job at the MITRE Corporation, a federally funded research and development center (FFRDC) in McLean, Virginia.

For reasons that will become clear, I will stick to what is factual.

My application led to a phone interview on April 26, one that was mutually successful. I left the conversation excited about opportunities MITRE had to offer–– about returning to R&D. MITRE invited me to an in-person interview on May 10. In about three days, I put together a presentation on set theory and why it matters to computer science. I felt like I did very well, and MITRE seemed to agree. On May 13, I received a job offer from MITRE, to join as a senior simulation and modeling engineer on June 3.

I accepted the offer that day, and put in my two-week notice at my then-employer on May 17. So far, standard job-change story.

I was thrilled to be out of the corporate world. Agile and the short-term nonsense, no more. Instead of working on sprint work for which I’m more than a decade overqualified… I’d get to work in R&D again.

Techxit successful? Or would this be like Michael Corleone’s “just when I thought I was out” moment?

Chapter 31: Nazis in McLean

“Out” leftists, feminists, and antiracists deal with people on the far right who follow our careers and try to interfere. In the startup world, it’s 50–50 whether you’ll still have a job, after that happens. It’s not that employers are dumb enough to consider right-wing keyboard warriors a reliable source–– they just don’t want to make a stand.

I do not consider radical the assertion that MITRE, an FFRDC that relies on the trust of the government–– the trust of the American people–– should be held to a higher standard than a fly-by-night tech startup.

“Mr. W” would have been my manager at MITRE. Between May 14 and May 30, a far-right operative–– likely the one discussed in Chapter 29, because no one else fits the pattern of motive and opportunity–– leaked to Mr. W that I suffer from, and have been treated for, panic disorder.

Mr. W emailed me on May 30 to arrange a time when we could discuss what he falsely represented as a benign administrative detail. “Nothing to worry about,” he said.

We spoke at 9:00 am on May 31. He informed me that he had become aware of my disability status. He said, “I don’t see you ever getting a security clearance” with a diagnosis of panic disorder. (More on that, below.) Moreover, since I did not disclose my diagnosis–– I was never legally required to do so–– he rescinded the offer.

I will not argue against the federal government itself having the right to apply increased scrutiny, on the matter of security clearances, to people with psychiatric diagnoses. When lives are possibly at risk, the rules are different.

MITRE is not the federal government. Not all the work it does requires a security clearance. It is legal for a government contractor to terminate someone who applies for a security clearance and fails to get one. It is legal for a contractor to make an offer contingent on a clearance (a conditional job offer, or CJO). It is not legal for a hiring manager to discriminate against people with disabilities on the suspicion that they might take longer to clear.

I don’t know Mr. W’s politics. Is he a far-right operative? A Nazi? That’s between him and God; here, it is essential that I stick to the facts. Sometime between May 14 and May 30, he spoke to a Nazi and made a decision based on information furnished by said Nazi. Whether he is guilty of mere irresponsibility, or bears blame for something deeper and more shameful, I do not intend to say.

During the conversation of May 31, Mr. W mentioned that he could only get away with rescinding the offer because of “our current political time.” Trump.

As it happens, I had a full SF-86 looked over by one of the nation’s top security clearance attorneys. I have had no recreational drug use (including alcohol) since March 2008. I have no criminal record, no financial mishaps. Foreign contacts, the attorney said, might be an issue. My health problems, he told me, would be “absolutely no issue” for the level of clearance under discussion, and “would not significantly delay” the process.

It’s possible that the illegal rescission of the offer was motivated not by the stated reason, but by a far-right infiltration of one of the nation’s most important government contractors.

Either way, it was insanely fucking illegal.

I would, of course, put the probability of Donald J. Trump’s personal involvement at zero. I don’t think he makes time on his calendar to call up MITRE and screw over leftists with mild disabilities. I doubt he even knows, or cares, that MITRE exists. But he normalized the might-makes-right moral filth of corporate America, and brought it into the public sector, and by doing so, created a problem for me.

In May 2019, a literal outed fascist emerged from the woodwork to attack my career by sending true (I do suffer from panic disorder) but irrelevant information to Mr. W. This led to MITRE allowing the illegal rescission of a job offer made to a non-radical, non-violent leftist with a job-irrelevant disability.

The Nazis won.

Epilogue 1

Government, until 2016, was supposed to be immune to the bush-league chicanery we encounter in the startup industry. Illegal terminations and illegal rescissions of job offers are not supposed to happen there. Today, all bets are off. Be afraid.

I’ve studied fascism. I know what it means when a government tells people of a certain kind they now live under a five o’clock curfew. I know what it means when people like me experience rescinded job offers. Civilization’s enemy, fascism, starts with minor stuff–– boycotts, union-busting, blacklisting–– before it builds up a seven- or eight-figure body count.

It’s difficult to predict which ethnicities and minorities will be targeted, in what order. We know that fascism takes the accessible first. It hits unionists and leftists and feminists–– people who speak out. It attacks people with disabilities–– whom it perceives as weak.

It’s tempting for people outside those targeted groups, the non-aligned majority, to take comfort in the notion that they need not outrun the bear–– they only need outrun the other guy. For me, the fight’s not optional. I am that other guy.

The metaphor of a bear is not adequate, however. A bear, once sated, will cease to feed. Rather, what advances is a rising tide of ethical failure–– a saturated, soaking mud of moral filth that, if not opposed, will drag civilization to oblivion.

I have about an hour of video, audio, and picture media pertaining to the matters discussed in Chapters 27–31. There’s plenty of detail that, for the sake of brevity, I haven’t shared, but that makes my case even stronger.

MITRE’s illegal rescission of my job offer is exactly the sort of thing that happens before a far-right flashover. In the battle against the far right–– against fascists and Nazis, against infiltrators of trusted institutions–– we are at eleven fifty-nine and a sweeping second hand.

Epilogue 2

It wasn’t easy to tell that story. Thank you for taking the time to hear it.

The word count nears novella territory. Unfortunately, I could not have told that story in two thousand words. I doubt I could have told it in six thousand words. Any briefer, and it would have read as a paranoid rant. Extreme claims require justification and analysis. Consider post-2016 politics from a pre-2015 vantage point: some things are hard to believe unless every detail is backed up.

It is not with pleasure that I write on an existential threat to this great nation, and to civilization and its future. It is not with pleasure that I write on the probable infiltration by far-right militants of an organization that relies on the trust of the federal government.

We have a world to win; we also have one to lose. We do not have to live in a world where experiences like mine, relayed above in twenty thousand words of horror, are the norm.

For the love of God, fight.