Computing From the Middle Out, Part 1: Why Turing Machines Matter

While you’re here: my novel, Farisa’s Crossing, will come out on April 26, 2019.

Computers have an undeserved reputation for being unpredictable, complicated beasts. I’m going to argue that, to the contrary, they’re quite simple at their core. In order to establish this, I’ll work through some models of computation, as well as some programming models that correspond well to real-world computation (with indications of where they don’t).

There’s a lot of complexity in real-world computing. Some of it’s desirable and some of it’s not. For example, today’s cell phones, laptops, and servers use electronic circuitry far more complex than, say, a Turing machine. That isn’t a problem because the payoff is immense and the cost to the user is minimal. If the complicated adder or multiplier is a thousand times faster, most people are happy to have it this way. So, even though real-world integrated circuits are complicated in ways we won’t even begin to discuss here, it’s not a problem. Doing simple things, better, is a worthy expense of complexity.

On the other hand, bloated buggy software ruins lives– this problem is largely preventable, but unlikely to improve because of conditions in the software industry (e.g., a culture that encourages piss-poor management) that are beyond the scope of the analysis here. If ever there were a machine for producing unusable crapware, it would be the American corporation. But again, that’s a topic for another time.

I’d prefer to motivate the claim that computers can be simple. They can be.

What Is Computation?

Computability theory is quite deep, but there’s a relatively simple, rule-based definition of what it means for a (partial) function to be mathematically computable. Our domain here is functions N^n → N; that is, from lists of natural numbers to natural numbers.

  • The n-ary zero functions z1(x) = 0, z2(x, y) = 0 , … , are computable for all n.
  • The successor function s(x) = x + 1 is computable.
  • For any n and any k with 1 ≤ k ≤ n, the projection function pn,k(x1, … , xn) = xk is computable.
    • p1,1(x) = x, the identity function, and p2,1(x, y) = x, p2,2(x, y) = y are the most used examples.
  • Composition: compositions of computable functions are computable.
    • For example h(x, y) = f(g1(x, y), g2(x, y), g3(x, y), g4(x, y)) is computable if f and all the gi are.
    • This means that a computable function can use as many computable functions as it wants as subroutines.
  • Primitive Recursion: if g and h are computable, then so is f, defined like so:
    • f(0, x1, … , xn) = g(x1, … , xn), and
    • f(n + 1, x1, … , xn) = h(n, f(n, x1, … , xn), x1, … , xn);
    • this is the recursive analogue of a for-loop; the number of calls is bounded.
  • Search (a.k.a. General Recursion): if f is computable, then so is mf, defined as:
    • mf(x1, … , xn) = k, where k is the least integer such that f(k, x1, … , xn) = 0.
    • We say mf(x1, … , xn) ↑ (pronounced “diverges”) if there is no such k. The function is not defined at that point.
    • this is analogous to a while loop. If the function diverges, an implementation would not terminate– unless the programmer could predict divergence in advance, but this is not always possible.

Functions that don’t use search are called primitive recursive. Those are total– they have values for all inputs, and more importantly, these values can be computed in a finite number of steps. If one uses general recursion, though, all bets are off. The function may not be defined for some inputs.
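To make the rules above concrete, here’s a quick sketch in Python (the language and the helper names are my choice, nothing canonical): each bullet becomes a small combinator, with primitive recursion as a bounded for-loop and search as a possibly non-terminating while-loop.

# A sketch of the building blocks of computable functions described above.
# Everything maps tuples of natural numbers to a natural number.

def zero(*args):
    """The n-ary zero function: z(x1, ..., xn) = 0."""
    return 0

def succ(x):
    """The successor function s(x) = x + 1."""
    return x + 1

def proj(k):
    """The projection p_{n,k}: return the k-th argument (1-indexed)."""
    return lambda *args: args[k - 1]

def compose(f, *gs):
    """h(xs) = f(g1(xs), ..., gm(xs)): subroutine calls come for free."""
    return lambda *xs: f(*(g(*xs) for g in gs))

def prim_rec(g, h):
    """f(0, xs) = g(xs); f(n + 1, xs) = h(n, f(n, xs), xs)."""
    def f(n, *xs):
        acc = g(*xs)
        for i in range(n):       # bounded: the for-loop analogue
            acc = h(i, acc, *xs)
        return acc
    return f

def search(f):
    """m_f(xs) = least k with f(k, xs) = 0; loops forever if no such k exists."""
    def mf(*xs):
        k = 0
        while f(k, *xs) != 0:    # unbounded: the while-loop analogue; may diverge
            k += 1
        return k
    return mf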

For example, addition is primitive recursive. It’s defined like so:

add(0, x) = x

add(n + 1, x) = s(add(n, x))

In the language above, g(x) = x and h(n, a, x) = s(a).

Multiplication is a primitive recursion using addition rather than the successor function. One can also show that limited subtraction, sub(x, y) = max(x – y, 0), is primitive recursive.
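Using the hypothetical combinators sketched above, addition, multiplication, and limited subtraction fall out of the primitive-recursion schema exactly as written (the recursion variable comes first, so sub here computes max(x – n, 0)):

# add(0, x) = x;       add(n + 1, x) = s(add(n, x))
add = prim_rec(lambda x: x,               # g(x) = x
               lambda n, a, x: succ(a))   # h(n, a, x) = s(a)

# mul(0, x) = 0;       mul(n + 1, x) = add(x, mul(n, x))
mul = prim_rec(lambda x: 0,
               lambda n, a, x: add(x, a))

# sub(0, x) = x;       sub(n + 1, x) = pred(sub(n, x)), i.e. limited subtraction
pred = lambda x: max(x - 1, 0)            # itself primitive recursive; max is a shortcut here
sub = prim_rec(lambda x: x,
               lambda n, a, x: pred(a))

assert add(3, 4) == 7 and mul(3, 4) == 12 and sub(2, 5) == 3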

Furthermore, any bounded search problem is primitive recursive. If you have an upper bound on how far you’re willing to search, you can use a primitive recursive function.

Sometimes, it’s a judgment call how one wants to implement a particular function.

For example, the division function can be represented as:

div(n, d) is the first q such that q * d ≤ n < (q + 1) * d.

This is an unbounded search for such a q, and when d = 0, it diverges. However, in this case we know exactly when the function’s badly behaved and can rectify it:

idiv(n, d) is 1 + div(n, d) if d > 0, and 0 if d = 0.

It returns a positive integer on success– a successful return of 0 becomes a 1– and a 0 on failure. The enclosing routine can decide how to handle the error case.
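Continuing the same hypothetical Python sketch, here’s the unbounded search described above, plus the idiv wrapper that turns the one bad input (d = 0) into an error code rather than a search that never ends:

def div(n, d):
    """First q such that q * d <= n < (q + 1) * d; diverges (loops forever) when d == 0."""
    q = 0
    while not (q * d <= n < (q + 1) * d):
        q += 1
    return q

def idiv(n, d):
    """Guarded division: 1 + div(n, d) on success, 0 on failure (d == 0)."""
    return 1 + div(n, d) if d > 0 else 0

assert div(17, 5) == 3
assert idiv(17, 5) == 4    # success: the caller subtracts 1 to recover the quotient
assert idiv(17, 0) == 0    # failure is flagged instead of diverging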

Divisibility checks (nothing but 0 is divisible by 0) and primality are primitive recursive and therefore total and computable in finite time. Most importantly, prime factorization is primitive recursive. This is something we’ll come back to.

Turing Machines

Most people have heard of Turing machines, but unless they have taken a course in graduate-level logic or the theory of computation, they’ve probably never worked with one– and may not know what it is.

They have the reputation of being complicated beasts. They’re brain-dead simple, actually. Doing anything with them, that’s the part that can be painful. The ones that we inspect and analyze as computers tend to have massive state spaces– which may or may not be a problem– while the most aggressively minimalistic ones– I won’t prove it, but there are machines with under 20 states and two symbols that can compute any function– tend to be inscrutable in practice.

Formally, an (n, s) Turing machine is a device that:

  • recognizes a pre-programmed alphabet of n ≥ 2 symbols. That set could be {0, 1}, or {A, B, C}, or the 100,000 most common English language words. One of these symbols is blank.
  • is in one of s distinct internal states, including one called Start and one called Halt. This set must be finite and is pre-programmed into the machine.
  • has n * (s – 1) pre-programmed rules, written as (sold, ain, snew, aout, ±1), one for each (sold, ain) pair except for those where sold = Halt.
  • reads and writes to a tape– each cell holding exactly one symbol– that never runs out in either direction.

And here is how it works:

  • Input: a finite number of cells may be set to any non-blank values. (The rest of the tape is all blank, in both directions.)
  • Initialization: the machine is put in state Start.
  • Runtime: Over and over, the machine does the same thing:
    • read the symbol (ain) at the cell where the machine is, and consult its internal state (sold);
    • fetch the matching rule (sold, ain, snew, aout, ±1);
    • write aout to the tape, and transition to state snew;
    • move right if the matching rule’s last column had a +1; left, if -1;
    • repeat this cycle unless snew is Halt, in which case the machine terminates. Whatever is on the tape is the program’s output.
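To make the read-fetch-write-transition-move cycle concrete, here’s a minimal simulator sketch in Python. None of it comes from the post itself: the dictionary-based tape, the rule format, and the step limit (standing in for a machine that runs forever) are my own choices.

from collections import defaultdict

def run_tm(rules, tape, blank='0', start='Start', halt='Halt', max_steps=10_000):
    """rules: dict mapping (state, symbol) -> (new_state, new_symbol, move),
    with move being +1 (right) or -1 (left).  tape: dict of position -> symbol."""
    cells = defaultdict(lambda: blank, tape)    # never runs out in either direction
    state, pos, steps = start, 0, 0
    while state != halt:
        if steps >= max_steps:                  # stand-in for a machine that never halts
            raise RuntimeError("no Halt state reached")
        new_state, new_symbol, move = rules[(state, cells[pos])]  # read and fetch
        cells[pos] = new_symbol                 # write
        state, pos = new_state, pos + move      # transition and move
        steps += 1
    return dict(cells)                          # whatever is on the tape is the output

# Toy ruleset: scan right over a block of 1s and append one more (unary successor).
rules = {
    ('Start', '1'): ('Start', '1', +1),
    ('Start', '0'): ('Halt',  '1', +1),
}
print(run_tm(rules, {0: '1', 1: '1', 2: '1'}))  # three 1s in, four 1s out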

What happens if the Turing machine never goes into the Halt state? It runs forever. This is generally considered undesirable. The computation doesn’t complete.

This is probably the biggest disconnect between Turing machines and the computers we actually use. Turing machines are supposed to halt. If one doesn’t, that’s considered pathological; its work isn’t done and as far as we’re concerned, it hasn’t computed anything. Meanwhile, the cell phones and laptops we use on a daily basis run in an infinite loop and that’s what we expect them to do. We expect them to be available (and I’ll formalize that much later, but not in this installment) but they never halt.

A Turing machine is all-or-nothing. Its job is to compute one function and then indicate that it’s done by going into the Halt state. By contrast, a real-world computer, at the minimum, has to respond to real-world inputs like the user’s keystrokes, its own temperature sensors (so it doesn’t run too hot), and power supply disruptions. Later on, I’ll show how to close this gap.

What’s neat about Turing machines is that, in principle, one could have been built in the late 19th century. (My work on Farisa has had me on a steampunk kick.) We were close: we had programmable looms, player pianos, and electricity. We had record players and magnetic storage. Today, a Turing machine good enough to emulate a 1980s video game console could be built with about $100 of commodity electronics. Rather than get into the details– it’s not my expertise– I’ll point the reader to Ben Eater’s excellent series of videos on the 8-bit computer he built on a breadboard. As he’s building an actual circuit, his model gives a much better representation of what computers actually do, in the physical world, than do Turing machines.

Anyway, an automaton is only as good as its ruleset. Most rulesets will have the machine pinging about at random– sound and fury, signifying nothing. A few, though, do useful things. A Turing machine can add two numbers supplied on the tape, whether they’re specified in binary or decimal. These machines can multiply, or check regular expressions, or… well, literally anything computable. In fact, that’s one definition of what it means for something to be computable– such definitions are legion, and they’re all equivalent.

It’s counterintuitive to most people, but the slowest computers from the 1960s can do anything a modern machine can– they would merely take longer. In terms of what computers can do, nothing has changed. If we allow computers to generate probabilistic bits, then even quantum computing does not add capabilities– quantum computers are merely faster.

From a practical perspective, computers and programming languages are not remotely equivalent. In theory, they are.

Now, Turing machines would be nearly useless as a real-world concept, say, if they required 2^210,000 states in order to do useful computation. It would be annoying if there were computations that couldn’t be done with fewer states, because we have no way to store that much information. In fact, one can find fairly small n and s, and specific rulesets, that can emulate any Turing machine (any size, any ruleset) on any input at all. These are called universal Turing machines. I’m not going to go through the details of building one and proving it universal, but I’ll walk through the basic concepts, along two different paths.

We are not concerned with how efficiently the machines run– as long as they terminate, except on problems where no machine terminates. Real world computers are sufficiently different from Turing machines that the (heavy) performance implications here are irrelevant.

  • First, a Turing machine’s read-fetch-write-transition-move cycle is mechanical. We can implement it over all (n, s) Turing machines with a machine using s·f(s) states, where f is a slow-growing function. We include the ruleset we want as an input– a lookup table– and our machine implements the read-fetch-write-transition-move cycle against that table instead.
  • Operating on k-grams of symbols allows us to use an n-symbol Turing machine to emulate an n^k-symbol machine. We can in practice do any of this work with a 2-symbol machine.
  • An (n, s) Turing machine can emulate a Turing machine with a larger state space (say, s^2 states) by writing state information to the tape. The details of this are ugly, and the machine may take much longer, but it will emulate the more powerful machine– by which, I mean that it will come to the same conclusions and that it will halt if the emulated machine does.

This approach isn’t the most attractive, and it has a lot of technical details that I’m handwaving away, but using those techniques, we can emulate, say, all the (n^2, s^2) Turing machines using an (n·f(n, s), s·g(n, s)) machine, where f and g are asymptotically sub-linear (I believe, logarithmic) in their inputs. The result is that, for sufficiently large n and s, machines can be built that emulate all machines at some larger size– and, of course, a machine at that size can emulate an even larger one. The cost in efficiency may be extreme– one could be emulating the emulation of another emulator emulating another emulator… ad nauseam– but we don’t care about speed.

If that approach is unappealing, here’s a different one. It uses the symbols {0, 1, Z, R, E, +, <, _, ~, [, ], and ?}– in two colors: black and red; 1, Z, E, and R will never be red. This gives us 20 symbols. The blank symbol is the black 0.

Here’s a series of steps that, if one goes into enough detail (I’ll confess that I haven’t, and the machines involved are likely wholly impractical) can be used to construct a universal Turing machine.

Step 1: establish that copying and equality checking on strings of arbitrary length can be done by a specific, small Turing machine.

Step 2: use a symbol Z and put it between two regions of tape at (without loss of generality) tape position 0. Use it nowhere else. Use a symbol R to separate the right side of the tape into registers. These will hold numbers, e.g. R 1 0 1 R 1 0 0 0 1 R 0 R means that 5, 17, and 0 are in the registers. Resizing the registers is tedious (everything to the right must be resized, too) but it’s relatively straightforward for a Turing machine to do. There will be an E at the rightward edge of the data.

Step 3: The right side of the Z stores a stack of nonnegative integers: 1s and 0s (representing binary numbers) separated by register symbol R. The left side stores code, which consists of the symbols {0, +, <, _, ~, [, ], ?}. Only code symbols can be red.

  • A possible tape state is: E0+++++0+0+?0+++Z 101 R 1 R 0 R 1 E. (Spaces added for convenience.) The left region is code in a language (to be defined); the red zero indicates where in execution the program is; on the stack we have [5, 1, 0, 1] with TOS being the righthand 1.

Step 4: A Turing machine with a finite number of states can be an interpreter for StackMan, which is the following programming language:

  • At initialization, the stack is empty. The stack will only ever consist of nonnegative integers. We’ll write stack left-to-right with the top-of-stack (TOS) at the right.
  • 0 (“zero”) is an instruction (not a value!) that puts a 0 on top of the stack, e.g. ... X -> ... X 0.
  • + (“plus”) increments TOS, e.g. ... X 5 -> ... X 6.
  • _ (“drop”) pops TOS, e.g. ... X Y -> ... X.
  • ~ (“dupe”) duplicates TOS, e.g. ... X -> ... X X.
  • < (“rotate”) pops TOS, calls it n, and then rotates the top n remaining elements left. This may be the most tedious to implement. Examples:
    • ... X Y 2 -> ... Y X
    • ... X Y Z 3 -> ... Y Z X
    • ... X Y Z W 4 -> ... Y Z W X
  • ? (“test”) checks TOS: if it is nonzero, it decrements TOS and then pushes a 1 on the stack; otherwise, it pushes a zero, e.g.:
    • ... 6 -> ... 5 1.
    • ... 0 -> ... 0 0.
  • This is a concatenative language, so instructions are executed in sequence one after the other. For example, +++ adds 3 to TOS, 0+++0+++ pushes two threes on it, _0 drops TOS and replaces it with a zero (constant function), and ?_?_?_ subtracts 3 from TOS (leaving a 0 if TOS < 3).
  • Code inside [] brackets is executed repeatedly while TOS is nonzero and skipped over once TOS is zero or if the stack is empty.
    • For example, 0+[] will loop forever because TOS is always 1.
    • The code [?_0++<+0++<]_ has behavior ... x y -> ... x + y. It’s an adder. For example, if the stack’s state is ... 6 2, it does the following:
      • The code in the brackets is executed. ? tests the 2, so we have 6 1 1, and we immediately drop the 1. The 0++< (“fish”) is a swap, so we have 1 6, and the + gives us 1 7. We do another 0++< and are back at 7 1.
      • The next cycle, we end up at 8 0; after that, TOS is zero so we exit our loop. With a _, we are left with ... 8.
  • Any instruction demanding more elements than are on the stack does nothing.

The interpreter for this language can be built on a Turing machine using a finite number of states. To keep track of the code pointer (i.e., one’s place in the stored program) while operating on the stack, color a symbol red. Make sure to color it black when you have moved on.
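Here’s a sketch, in Python rather than on a tape, of how I read the StackMan rules above. It’s a direct interpreter, not the Turing-machine implementation (which would track the code pointer with the red symbol), and the handling of under-full stacks is my own reading of “does nothing.” The adder fragment from the example doubles as a test.

def run_stackman(code, stack=None):
    """Interpret a StackMan program over a stack of nonnegative integers (TOS at the right)."""
    stack = list(stack or [])
    i = 0
    while i < len(code):
        op = code[i]
        if op == '0':                       # push a zero
            stack.append(0)
        elif op == '+' and stack:           # increment TOS
            stack[-1] += 1
        elif op == '_' and stack:           # drop TOS
            stack.pop()
        elif op == '~' and stack:           # dupe TOS
            stack.append(stack[-1])
        elif op == '<' and stack:           # rotate: pop n, bring the n-th-from-top to the top
            n = stack.pop()
            if 0 < n <= len(stack):
                stack.append(stack.pop(-n))
            else:
                stack.append(n)             # not enough elements: treat as a no-op
        elif op == '?' and stack:           # test: nonzero -> decrement and push 1; zero -> push 0
            if stack[-1] > 0:
                stack[-1] -= 1
                stack.append(1)
            else:
                stack.append(0)
        elif op == '[':                     # loop while TOS is nonzero
            if not stack or stack[-1] == 0:
                depth = 1                   # skip ahead to the matching ]
                while depth:
                    i += 1
                    depth += {'[': 1, ']': -1}.get(code[i], 0)
        elif op == ']':                     # jump back to the matching [ and re-test
            depth = 1
            while depth:
                i -= 1
                depth += {']': 1, '[': -1}.get(code[i], 0)
            i -= 1
        i += 1
    return stack

print(run_stackman('[?_0++<+0++<]_', [6, 2]))   # the adder from the example: prints [8]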

Step 5: show that any primitive recursive function N^n → N can be computed as a fragment of StackMan, taking the arguments from the stack; e.g.,

  • f(x, y, z) = x + y * z could be implemented as a fragment with behavior ... x y z -> ... (x + y * z).

This isn’t hard. The zero functions and successor come for free (0, +) and the projection functions (data movement) can be built using _, ~, and <. Composition is merely concatenation– we get that for free by nature of the language. We can get primitive recursion from ? and principled use of [] blocks, and general recursion from arbitrary [] blocks.

Thus, a StackMan interpreter is a Turing machine that can compute any primitive recursive function.

Next, show that any computable function N^n → N can be computed as a fragment of StackMan that will terminate if the function is defined. (It may loop indefinitely where it is not.)

Step 6: since prime factorization is primitive recursive, we can go from lists of nonnegative integers to a single nonnegative integer, using multiplication (one way) and prime factorization the other way: e.g. (1, 2, 0, 1) ↔ 2^1 * 3^2 * 5^0 * 7^1 = 126. This means that we can coalesce an entire list of numbers (a register bank, a stack, or a tape) into a single natural number.
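A sketch of that encoding in Python (the trial-division prime generator and the helper names are mine, and the decoder has to be told how many numbers to recover, since trailing zero exponents are invisible):

def primes():
    """Yield 2, 3, 5, 7, ... by trial division; slow, but fine for a sketch."""
    n = 2
    while True:
        if all(n % p for p in range(2, int(n ** 0.5) + 1)):
            yield n
        n += 1

def encode(nums):
    """Pack a list of nonnegative integers into one integer as prime exponents."""
    result, gen = 1, primes()
    for e in nums:
        result *= next(gen) ** e
    return result

def decode(n, length):
    """Recover `length` exponents from n by repeated division (prime factorization)."""
    out, gen = [], primes()
    for _ in range(length):
        p, e = next(gen), 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out

assert encode([1, 2, 0, 1]) == 126           # 2^1 * 3^2 * 5^0 * 7^1
assert decode(126, 4) == [1, 2, 0, 1]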

Step 7: show that all (ruleset, state, tape) configurations can be encoded as a single integer. Then show that the Turing step (read-fetch-transition-write-move) and the halting check are both primitive recursive. These capabilities can be encoded as StackMan routines. (They’ll be obnoxiously inefficient but, again, we don’t care about speed here.)

Step 8: then, a Turing machine can be built with a finite number of states that:

  • takes a Turing machine ruleset, tape, and state configuration and translates it into a StackMan program that repeatedly checks whether the machine has halted and, if not, computes the next step. The read-fetch-transition-write-move cycle will be performed in bounded time. The only source of unbounded looping is that the emulated machine may not halt.
  • and, therefore, can write and run a StackMan program that will halt if and only if the emulated configuration also halts.

Neither of these approaches leads to a practical universal Turing machine. We don’t actually want to be doing number theory one increment (+, in StackMan) at a time. Though StackMan can perform sufficient number theory to emulate any machine or run any program– it is, after all, Turing complete– it is unlikely that the requisite programs would complete in a human life. But, in principle, this shows one way to construct a Turing machine that is provably universal.

Human Computation

This installment is part of what was a larger work. I’ve decided to put it out in pieces. I titled it, “Why Turing Machines Matter”, but I had to start with a bunch of stuff that most people would think doesn’t matter– a stack-based esoteric language, some number theory review, et cetera. I haven’t yet motivated that this concept actually does matter. So, let me get on that, just briefly.

Mathematicians and logicians like Turing machines because they’re one of the simplest representations of all computers, and the state space and alphabet size don’t need to be unusually large to get a machine that can compute anything– although it might be slow. Alan Turing’s establishment of the first universal Turing machine led to John von Neumann’s architecture for the first actual computers.

Is it reasonable to assume that Turing machines can perform all computations? Well, that’s one way that computability is defined, but it’s a bit cheap to fall back on a definition. It’s more accurate to look at the shortcomings of Turing machines and decide whether it’s reasonable to believe a computer can be built that overcomes them.

For example, some electronic devices are analog, and Turing machines don’t allow real-numbered inputs. Everything they do is in a finite world. But, in practice, machines can only differentiate a finite number of different states. There’s no such thing as a zero error bar. Not only that, but quantum mechanics suggests that this will always be the case. For example, there are an infinite number of colors in theory, but humans can only differentiate a few million under best-case circumstances, and we can only reliably name about a hundred. It’s the same for machines: measurements have error. Of course, an infinite state space isn’t allowable either: that would be analogous to infinite RAM.

So, those shortcomings of Turing machines apply to all computers that we know– including (in a different way) the quantum computers humans know how to build.

Turing machines, as theoretical objects, can’t do I/O. The input exists all at once on the tape, and output is produced– and until that output occurs, no computation has been completed. One alteration to account for this is to allow the Turing Machine an input register that other agents (e.g., keyboards, temperature sensors, the camera) can write to. When the computer is in a Ready state, it scans for input and reacts appropriately. If the machine reaches Ready within a finite time interval, that is analogous to successfully halting– the software itself may be broken, but the machine is doing its job.
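As a hedged sketch of that alteration (the queue standing in for the input register, and the handler names, are illustrative, not from the post): the machine as a whole never halts, but every reaction to an event is itself a halting computation, after which the machine is back at Ready.

import queue

def reactive_machine(handle_event, input_register, state=None):
    """Loop forever; 'available' means each pass returns to Ready in finite time."""
    while True:
        event = input_register.get()          # Ready: wait for a keystroke, sensor reading, etc.
        state = handle_event(state, event)    # each handler is a halting computation

# Example wiring (the handler just records events):
inputs = queue.Queue()
inputs.put("key:A")
# reactive_machine(lambda st, ev: (st or []) + [ev], inputs)  # runs forever, one event at a time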

In truth, modern computers are more accurately modeled as systems of interacting Turing-like machines than single machines– especially with all the multitasking they have to do to support users’ demands.

There is one thing Turing machines don’t do that we take for granted, although it’s a bit of a philosophical mess: random number generation. Turing machines don’t model it: everything they do is deterministic, and “random” is not a computable function (or a function at all). Real computers most often use pseudorandom number generators (PRNGs)– which are deterministic (but, ideally, patternless) sources of “random” numbers– and Turing machines can implement any of those. Truly random? Well, we don’t fully know what that is. We can get “random enough” with a PRNG or from some input that we expect to be uncorrelated to anything we care about (e.g. atmospheric noise, radioactive decay).

Turing machines give a poor model of performance as described here. To access data at cell 5,305, from cell 0, the machine has to go through every cell in between. That’s O(N) memory access, which is terrible. Luckily, real computers have O(1) memory access, right? That’s why it’s called random access memory, eh? Well, not quite. Caching is too much of a beast for me to take on here, but I would argue this far: a Turing machine with a 3-dimensional tape– I haven’t gotten into this, but a Turing machine can have any dimensionality and be computationally equivalent– is a more faithful model for performance. Why? Well, our best case for random access is O(N^(1/3)). We can call random access into a finite machine O(1), but that’s moving the goalposts. Asymptotic behavior is only about the infinite, and the real world is constrained by the speed of light. If we have a robot moving around a 3-dimensional cubic lattice where each cell is 100 microns on a side (no diagonal movement) and we want each round trip to complete in one nanosecond (30 cm) then we are limited to 125 trillion cells. Going up to 1 quadrillion would double our latency. Of course, we’re ignoring the absurdity of a robot zipping around at relativistic speeds.

Happily, most computers don’t have the moving part of a robotic tape head (although a traditional hard drive may be analogous). Rather than the computation going to the data (as in the model of a classical Turing machine), they instead bring the data to the chip. Electrical signals travel faster than a mechanical robot, as on a literal Turing machine, could (without catastrophic heat dissipation). So, in this way, modern computers and Turing machines are quite different.

If anything, I’d make a different claim altogether. Turing machines aren’t a perfect model of what computers do– although they’re good enough to explain what computers can (and can’t) do. They are, perhaps surprisingly, a great representation of what we do when we compute.

Before “a computer” was a machine, it was a person whose job was to perform rote operations– addition, subtraction, multiplication, division, elementary functions, and moving data around– which is, as it were, all today’s computers really do as well. And how does a human compute, say, 157,393 * 648,203? Most of us would have to reach for paper– a two-dimensional Turing tape– and start going through rote operations. To transliterate schoolbook multiplication to be done by a Turing machine is tedious but not hard– there are a couple thousand states.

The plodding Turing machine isn’t “about” computers. It’s about us, moving around a sheet of paper with a pencil and eraser, as we do– at least, when we know we’re computing. Most of what we do, we don’t think of as computation at all. We’re not even aware of computation happening.

It’s an open question whether there’s a non-computational element to human experience. I tend to be unusual– by the standards of, say, Silicon Valley, I’m downright mystical– and I think that there is. I can’t prove it, though. No one can.

The difference between intuition and computation is that the latter happens by rote, from a precisely-understood, finitely-describable state, following a series of rules that require no judgment. Intuition can’t be checked; computation can.

Most mathematicians use informal proofs– verbal arguments that convince intelligent, skeptical people that a conclusion is valid. This is a social rather than algorithmic process, and it is not devoid of error. Informal proofs can be unrolled into formal proofs from ZFC, it is generally believed, but it would typically be impractical to check. An informal proof is an argument (using other informal proofs) that a formal proof exists, and although the informal proof is imperfect– of course, 100-percent perfection in computation is not physically possible, either– it usually gives more insight into the mathematical structure than a formal one would.

Do humans have non-computational capabilities or elements to our existence? I believe so. But, in terms of what we can communicate to each other with proof– that is, checkable computation– we are limited to finite strings of symbols from finite alphabets, an agreed-upon initial state, and a finite set of rules. At least in this life, that’s the best we can prove.

Next Up

In the next installment, I’m going to show how to build a Turing machine that’s practical.

Aggressively minimal universal Turing machines– with, say, only 10 states and 5 symbols– tend to be next-to-impossible to understand. I’m going to work with a large-ish state space and alphabet: 512 symbols and 2^48 possible states (even though we’ll only use about a million). Those numbers sound beastly, and to implement the Turing machine as a lookup table would require 1,884,160 terabytes. At such a size, storing the entire ruleset is cost-prohibitive. Most rulesets for those parameters are patternless and unmanageable, but a ruleset that we’d actually want to use is likely to be highly patterned– allowing rules to be computed on the fly. In fact, that’s what we’ll have to do.
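For what it’s worth, the figure above checks out under one reasonable set of assumptions (mine, not spelled out here): one table entry per (state, symbol) pair, each entry storing a full rule– a 48-bit old state, a 9-bit read symbol, a 48-bit new state, a 9-bit written symbol, and a direction bit.

states, symbols = 2 ** 48, 512               # 512 = 2^9 symbols
entries = states * symbols                   # one rule per (state, symbol) pair
bits_per_rule = 48 + 9 + 48 + 9 + 1          # (s_old, a_in, s_new, a_out, ±1) = 115 bits
total_bytes = entries * bits_per_rule // 8
print(total_bytes / 2 ** 40)                 # 1884160.0 (binary) terabytes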

In the second installment, we’ll build a Turing machine about as capable as a 1980s video game console (e.g. Atari, Nintendo) that’ll be much easier to program against. That’s up next.


Don’t Be Like Ajay

There’s a lot of bad career advice out there, but the worst of it comes from people who’ve been successful at private-sector social climbing. Blind to their own privilege, and invested in the perverse mythology of corporate meritocracy, they are least equipped to perceive the truth– not to mention their lack of incentive to share it, on the off chance of discovering it. At the same time, these people can say anything and get it into print, so desperate are the rest of us, the proles, to hear the inside corporate secrets they purport to have.

There are no secrets. The corporate system is corrupt; it is not a conspiracy. It is exactly what it looks like: the powerful abuse the powerless, the rich get richer, and people who speak the truth about it are punished.

This pestilent article, “What College Grads Could Learn From My Former Intern”, comes from Zillow CEO Spencer Rascoff. Now, I have no personal knowledge of the author, and I know even less about the “Ajay”– that may or may not be his real name; it doesn’t matter– so I’m going to stick to the merits of the article itself.

This I will say: venture-funded startup CEOs are the worst when it comes to self-deception and the profligate evangelization of nonsense.

Venture capital, at least in the technology industry, has become a mechanism for the replication of privilege. Well-connected families create the appearance of their progeny having built businesses from scratch when, in fact, they had all sorts of hidden advantages: tighter sales connections, fawning press coverage, and most importantly, the privilege not to worry about personal financial nonsense. (If their businesses tanked, they’d fail up into cushy executive jobs, often as venture capitalists.) It’s money laundering, plain and simple, and it’s not even well hidden since it’s technically not illegal.

The corporate system is a resource extraction culture, not unlike the ones in culturally impoverished, oil-rich societies that never needed to grow or innovate, because they could pump wealth out of the ground. In this case, though, the depleting resource is the good faith of the American middle class– an earnest belief in hard work, an affinity for technology, an acceptance of authority. The purpose of the ruse is to make it look like “this time it’s different” and that today’s elite, unlike the warlords and viscounts of the past, actually earned it.

 

Ajay, the protagonist of this second-rate Horatio Alger story, was a hard worker, eager to please, by the author’s description (emphasis mine):

Ajay did [difficult, unpleasant work] eagerly and with a smile; he worked incredibly hard and because of that, built a reputation for himself as someone who would pitch in to help with anything you asked and give it his best effort. People liked that.

I almost retched when I came upon “and with a smile”. Gross.

My thoughts, for the rising generation? Yes, work hard when it’s worth it to work hard. In fact, I would not try to give advice to the young about “work-life balance” or tell them that they should backpack around Australia for two years. It’s hard enough to achieve something significant during peace time; it’s much harder in 2018, when the rich have made it so much harder for anyone to get a chance. One cannot produce significant work in any field and also have the Instagram party life.

This said, there is difficult, unpleasant work worth doing; there are other tasks that are waste. If one has to do the job with a goddamn smile to get credit for it, then it’s almost certainly in the latter category.

 

Bosses might like, on a personal level, those who do unpleasant work with a smile. That doesn’t mean that it leads to career success. It’s never good to be disliked by a manager, but bosses don’t get to promote everyone they like. If one is well-liked only because one has made it the path of least resistance to hand one unpleasant, career-incoherent work, then one is in a state sustained only by suffering, and one that can almost never be turned into career advancement.

I’d also like to point out the author’s corporate weasel terminology. He says, “People liked that.” He liked it. There’s nothing sinister or surprising about a boss liking someone who’s preternaturally “easy to manage”. What’s galling is that, like most corporate bosses, he felt entitled to superpose his opinion over the entire company. It’s like when managers fire people but want to avoid taking responsibility, so they say “the team decided”.

I would guess that many people disliked Ajay. They saw what he was doing, and they cringed.

Of course, if Ajay succeeded, then their opinions didn’t matter; those people didn’t win. Still, it’s generally not useful to be disliked by one’s colleagues, and no one likes ass-kissers.

Ajay was also a serial networker, even all the way up to me, the CEO.

It’s funny how blind CEOs are to the politics that exist all around them. Since they get everything they want, there’s “no politics” in the organization. I suppose that’s true. The ultimate solution for someone who wishes to abolish politics is despotism– the degenerate but nominally apolitical arrangement. Most of us don’t want that, of course.

At any rate, if Ajay’s colleagues and managers tolerated “a serial networker”, it’s because they never saw him as a threat until he was fully ensconced in the managerial sun. Perhaps they were wrong and got blindsided. Like I said, I don’t know these people.

 

In general, though, the idea that a 22-year-old can try to rub elbows with a CEO, in a competitive environment like a startup or investment bank, and not get shanked by someone at or above his own level, is laughable. The people with the training to pull this off are those with inherited wealth and social resources, who have the least need for “internal networking” because of the extensive external networks their Daddies gave them.

When Ajay left to finish school and go on to various startups, he continued to build upon his brand and kept in touch—essentially marketing himself through his networks.

Emphases mine. There’s nothing incorrect about “essentially”; I just wanted to highlight an unnecessary adverb that really, totally, very badly, irritatingly weakened the prose.

I want to focus more on “build upon his brand”. (The author could have taken out “upon” and nothing would have been lost, but there’s actual incorrectness here, so I shan’t dwell on it.) See, what got me to write this response is not that the author’s giving misguided career advice. To be honest, I couldn’t give better advice that Forbes readers (if my estimation of its demographic is correct) would want to hear. I’d offer the truth– the game is rigged and most people will lose no matter what they do– and that’s not a charismatic message. No, I’m writing this response because the notion of “personal brand” is, to me, sickening.

I am not a brand. There are not five hundred of me stacked on a shelf in a grocery store, all in neat order like the rectangular boxes they put toothpaste tubes in. You, dear reader, are not a brand either. If you don’t cringe when you hear the words “personal brand”, then wake up.

People who use the term “personal brand” without dripping contempt are a special breed of douchebag. What’s amusing is that, while they identify “personal brand” with their desperate claims of uniqueness, these people are pretty much all the same.

It is bad advice. The truth is that people who focus on “building their brand” are assumed by their colleagues not to be doing the work, and they’re the first ones to get shanked when things get difficult. Perhaps Ajay succeeded. Perhaps he’s in a corporate jet, still smiling. Or perhaps he used his bonus on plastic surgery to fix that frozen-face smile after getting kicked out of a funeral for the goddamn last time.

You want to be remembered, whether you’re joining a company of five or 500, because remembered people get opportunities; anonymous ones don’t.

Remembered people get denied opportunities.

I’ve been involved with the antifascist cause since 2011. I’ve been turned down for jobs because of a somewhat public (and, in cases, adversarially publicized) track record of having the backbone to stand up for what’s right.

When it comes to social media, employment references, and personal uniqueness, we live in a 500-mile world. As in, follow any driver for 500 miles, and you’ll find a reason to write him up. It used to be difficult (literally, and in metaphor) and time-consuming to follow one person so far; technology and surveillance have made it easier.

I’ve been a hiring manager. I was always sympathetic to people with controversial online histories, for obvious reasons, but it’s the most common reason for denying a job to someone good enough to make it to the final round. No, these people aren’t alt-right psychopaths or proud, public drug users. Usually, they’re normal people who just happen to hold opinions. It’s assumed that they’ll get bored, or that they’ll react badly to mistakes made by authority. I did, on one occasion, cringe when a startup executive commented on a black woman’s natural hair being “political”.

The people who rise in the corporate system are boring. The best odds, in the corporate game, come from becoming the most bland, inoffensive, socially useless person one can. The problem with this truth– the reason it lacks business-magazine charisma– is that its odds are still poor. There are a lot of perfunctory losers out there, and they don’t all get executive jobs. Most of them get the same shitty treatment and outcomes as everyone else.

Not being boring, though, means that someone only has to follow you for 25 miles to find a reason to screw you over, damage your reputation, or deny you a job.

The optimal strategy is to be boring, to ingratiate oneself to powerful people over time, and to become intertwined enough with an organization’s powerful people that one is perceived to have undocumented leverage, and therefore gets what one wants out of the organization. Does this strategy work for everyone, all the time? No. The odds are depressing– most social climbers fail. But the odds are even worse for all the other strategies.

 

“How do you effectively brand yourself without being a peacock or a sycophant?” There are two ways: intentionally constructing it and being patient.

There are several ways to brand yourself. The classic approach is to apply pressure with iron, heated in a fire. At high enough temperatures, permanent scars can be achieved in two or three seconds. Electric arcs are sometimes used for this process. An alternative to thermal burns is “cold branding”, often using liquid nitrogen. There seems to be no risk-free option, since branding literally is skin damage.

 

The same should be true for you: “Work with Sophia—she has a great attitude, big ideas, and is really hard-working.”

This guy must be getting paid per word. The Hemingway editor yells at me; I use adverbs. They’re not always unnecessary and replacing one with a clunky adverb-free adverbial phrase isn’t my way. Still, not only is the “really” unnecessary, but the author could have said “works hard”.

Whatever you decide to pursue as your personal brand, make sure it has a strong purpose behind it. If you do that, the rest is just packaging.

“Just packaging.” A product’s brand is literally that: packaging. Brand is the use of identical-looking boxes to convince buyers that a minimum standard of quality has been met. A Hershey Bar isn’t going to blow me away, but it’s perfectly adequate. I know that when I buy one, I’m unlikely to find a severed housefly wing in it.

If you want “perfectly adequate” on your tombstone, then consider being like Ajay– a brand. That said, you might want to pull that smile down. Do your job and do it well, of course, but if you smile so much, you’ll make everyone hate you. No one wants to compete for attention with an ass-kisser.

The Truth

As I said, I found the article harmless till I got to the “personal brand” bit.

There’s a lot of bad career advice out there from successful people (most of whom lucked into, or were born into, what they have). There’s also a lot of bad career advice from unsuccessful people who’ve found success selling the “inside secrets” of a corporate game they never actually won– now that is personal brand. The well-meaning self-deception will never go away, nor will the intentionally deceptive sleaze. There are many gamblers who “have a system” for beating roulette wheels and slot machines. Many books have been written on their systems. They do not work. The house wins in the long term. That’s why it’s the house.

The house is smart enough to keep people coming in. So it offers intermittent small wins, and a few big ones that generate publicity. It’s very hard for lottery winners to keep their windfalls private; lotteries discourage it. In these corrupt career lotteries, though, the system doesn’t have to make it hard for game winners to stay private. They shout in open air; they never shut up.

Is “be like Ajay” good advice? I don’t know, because I don’t know who Ajay is. Perhaps he was a ruthless political operator, fully aware of the resentments his supplicating smiles generated, and he used them for some sort of eleven-dimensional manifold socio-economic judo so brilliant it’s beyond my comprehension. Perhaps Ajay’s reading this blog post on Trump’s golden toilet, laughing at me. For the average schmuck, though, it’s not good advice. Of course, don’t be incompetent. Don’t be too grumpy. Be the “go to” guy or girl for work you genuinely enjoy and are good at. But, as a favor to yourself, don’t become a dumpster for career-incoherent work. Also, don’t smile all the time; it’s creepy.

I would love to advise authenticity, but that is also not a good approach for someone who needs to squeeze money out of the corporate system– and most people have no other choice.

 

There’s no path I can sell for the individual. The situation, in truth, is quite dire. In Boomer times, the corporate system seduced people with greed: $500 executive lunches, business-class travel all over the world, and seven-figure bonuses just for showing up. Today, it runs on fear. Fear’s cheap. Most Ajays won’t succeed; I can say that with confidence. I can also say that most anti-Ajays won’t succeed. Most people won’t succeed. The corporate game is rigged and anyone who says otherwise is trying to sell something toxic. I have no elixir of socioeconomic invulnerability; I’ll admit that. There’s a massive market for false hope. I will not sell into it. I am better than that.

For the world– if, sadly, not always the individual– it would be better if we woke up, tore down the corporate system brick-by-brick like the Bastille, and replaced it with a fairer, more sensible, pro-intellectual style of society worth caring about. If enough of us had the courage to live in truth, consequences be damned, the whole corporate edifice would crumble and we’d all be better off for it.

It’s not easy to live in truth. It’s downright hard to change a world whose most powerful people loathe any change at all. A first step, though, might be for us, unhindered by mercy, to mock anyone and everyone who says “personal brand” without vehement contempt for the concept. If we work together, we can make such people shut up. That would be a start.

Why I’m not using a traditional publisher to launch Farisa’s Crossing.

As I write this sentence, it’s June 30, 2018– 300 days before I launch Farisa’s Crossing, on April 26, 2019.

A few months ago, I decided to self publish the book. I realized that I wasn’t even going to try traditional publishing. I have no doubts about my ability to get in. The process is harrowing and random, and even the best writers can expect to be shot down more than anyone likes to think about, but that wasn’t the problem I realized I had with it. In the end, it came down to time. It’s finite. I’m 35; I’ll be almost 36 in April 2019. Anyone who plans to explore all options before doing anything will end up achieving nothing. I had to knock some things off the calendar. I’m not going to skimp on the writing itself, nor research, nor editing. What can I cut that doesn’t affect the quality of the book? Writing a bunch of silly query letters landed high on that list.

Self publishing isn’t for every author or every book; nor is traditional publishing. Each has its advantages and drawbacks. There are books where I would eagerly use a traditional publisher, in spite of the drawbacks.

I thought it would be worthwhile to go through my reasoning here. Below is why I decided not to use traditional publishing for Farisa’s Crossing.

1. I don’t need it– Farisa is fiction.

A friend of mine writes biographies. Of all the genres, I think biography is the one best served by traditional publishing. Generalist copy editors aren’t equipped to copy edit biographies, which require extensive fact checking and removal of bias. Traditional publishing, in this genre, is invaluable.

Opinionated nonfiction, I would argue, is best served by traditional publishing– at least at book length and in print. Author credibility is huge, and can be manufactured if it isn’t there. Here, a self publisher is a guy with opinions; backed by a traditional publisher that’ll line up national TV spots, he’s a world-renowned expert. (Actual expertise optional.) Topical nonfiction– say, a book about a current election– has a short half-life; it will sell quickly or never. New York publishers have the resources to publicize it quickly; self publishers, in general, do not.

 

Memoir, if it’s at risk of being controversial, needs a traditional publisher. The author puts her personal reputation on the line. She needs a full-time publicist to fend off attacks.

 

Finally, we have business books. Those aren’t written to sell copies. It doesn’t hurt if they do, but few books make large sums of money, especially by business executives’ standards. Rather, these books are written to advance their authors’ careers. Middle-aged managers can reinvent themselves as “successful executives” and get better jobs– or, if they’re tired of being employees, lucrative speaking opportunities. Prestige, in that game, is everything. Substance, as anyone who’s read a business book or few can attest, is not.

From the above, it should be obvious that I do not think traditional publishing is a dinosaur on the brink of its own extinction. Will its retreat from fiction continue? Yes. Is it dead? No. In fact, it’s exactly where it wants to be. It has decided that new author discovery, at least in fiction, costs too much. In the 1970s, fiction editors read manuscripts (“slush”). In the 1990s, they pushed that job to literary agents. In 2018, unpaid 19-year-old interns do it. A reader is a reader, so I don’t mean to disparage these interns as people; but I would always bet on a larger crowd when it comes to discovery. A hundred strangers versus one Ivy Leaguer? I’m betting on the hundred doing a better job. So long as self publishers can get their work read in the first place, the gatekeepers will be unnecessary.

Nonfiction demands external credibility, because it makes truth claims. I’m more inclined to trust an opinion essay from an expert writing acceptable prose than a stranger who writes beautifully.

As for fiction, the traditional publisher is far more optional. Farisa’s Crossing will be no better and no worse than the 200,000-or-so words I write because it will literally be the 200,000-or-so words I write.

Authors don’t need external credibility to write successful fiction. A good novelist disappears. The reader should get so involved in the story that she forgets that she’s reading one in the first place. The ability to induce this feeling is rare, quite difficult to teach, and does not come from advanced degrees, an author platform, or a reputation built by a Manhattan publicist. It comes from good writing.

2. Thinking about agents led to bad artistic decisions.

Self publishing is hard. Traditional publishing, if the stars align, is easy– seductively easy. Every single one of us humans is prone to the “Prince Charming” mentality, at least a little bit. We’d like the basics to be taken care of.

The traditional publishing fantasy goes like so: you get the first and best agent you query, he snaps together a lead-title deal, your book is reviewed by the New York Times, then the New Yorker offers to publish a chapter (and your publishing house doesn’t object) and it goes viral like that “Cat Person” story, so you sell 2 million copies and you’re set for life. You can literally think (and type) your way to the life you want– if you get the words right. That’s the promise; that’s the dream.

Of course, you can also win the lottery– if you get the numbers right.

The time cost of querying, one can put limits on. I’m 35 and I’m starting a series that I expect to take at least 10 years to finish. My health is better than it has been for a long time (ten years ago, I didn’t expect to be here today) but my life hasn’t been a no-damage speed run. If I thought the expense of 6 more months were worth it, I might put querying on the schedule. No harm in that.

 

We are all humans, though. When we see something that looks easy– a path of least resistance that seems to go where we are trying to get– we’re built to focus on it.

This becomes a problem if you start to think about agents rather than readers. This ruins a book. One of the major reasons for literary fiction’s decline, if not the main one, is that many of these stories are written to score agents. And not all agents are created equal. In any genre, there’ll be no more than a dozen “power agents” who can snap together serious deals with large print runs, demand aggressive marketing from major publishing houses, and sell screenplays. There’s a lot of terrible fiction written to appeal to the tastes of a small number of people.

An experiment has been performed several times in which an award-winning novel is queried to literary agents and shut out entirely. It’s not that agents are stupid or don’t understand good literature. (I think their tastes are as valid as anyone else’s.) To some degree, it’s just the sheer randomness of the process that produces this outcome. Being read at 9:00 am will produce different results from being read at 3:30 pm– or, worst of all, right before lunch. No one can control that.

Furthermore, great novels take risks. (So do many terrible novels.) Agents pick up heuristics that one must heed in order to get published. An exhaustive list of “agent rules” is not the purpose of this essay, but I’ll give a couple examples.

One of those agent rules is not to use exclamation points, ever. (Some agents allow 1 per 50,000 words.) Are they overused by mediocre writers? Yes. Can they be obnoxious? Of course! Used skillfully and in character, they’re quite useful. In dialogue, they differentiate hot anger from cold anger– there’s a difference between “Get out!” and “Get out.” Likewise, an author using deep POV in the voice of a seven-year-old girl might use exclamation points for weather (“It was hot!”) while a septuagenarian probably wouldn’t.

Another agent rule is never to use back story in the first chapter. Now, like all of these agent-level prejudices, this principle is not without merit. First-chapter time jumps are very difficult to get right. They tend either to bore or confuse readers. If back story is relevant in a first chapter, it should be limited to a sentence or two here or there, and it should be told rather than shown. (Showing costs words; words equal time; always but especially in the first chapter, milliseconds matter.) Why do I hate this as a hard rule? The first chapter, in well-told linear narrative, is always back story… to the rest of the book. In truth, there are times when it’s artistically valid to open at 120 miles per hour, and times when it’s not.

You write differently to get an agent than to write a good novel. If querying is on your mind, you’ll find yourself writing for the 19-year-old unpaid intern who’s been throat-deep in slush since 9:56 am and who’ll decide in eight seconds whether to read beyond the first paragraph. You’ll put that explosion that belongs on Page 32 on Page 1. You’ll find yourself writing for people trying to mirror their bosses’ opinions rather than readers who want to get lost in a story. You’ll write a hook-laden confusing opening, flash and no substance, at the expense of the rest of the book.

Writing for agents is easier than writing for readers– the former is paint-by-numbers, and the latter takes genuine artistic commitment– but pollutes the work. Writing for both is impossible. Sometimes an author will hit both targets– a novel written for readers will land a power agent– but it’s so rare, it’s not worth obsessing over.

I had an agent-friendly opening, for more than one drafting cycle, that I knew was wrong. I found it subtly corrupting other, later, chapters. Readers found it intriguing but pretentious and confusing– which it was. They were right. So, eventually, I decided, “Fuck that agent game; I’m going to write for readers.”

3. Farisa is long.

Speaking of agent prejudices….

What is the right word count for a novel?

The answer is similar to, What is the correct weight for an airplane? The answer: as light as possible to do the job.

In truth, the answer is less satisfactory for stories than airplanes, because an airplane’s duties are, at least, well defined. The metaphor works this far, though: airplane weights range all over the place, because of their different purposes.

Novels range from about 25,000 words (which would, today, be classified as a novella) to well over 500,000. It’s story-specific what number is right; a book can be overweight at 100,000 words or underweight at 200,000. An average traditionally published novel might weigh in at 85,000 words. The sweet spot for contemporary literary fiction seems to be 125,000 – 250,000, which is longer than average.

My guess is that Farisa‘s final word count– in revision, word counts go up, then down– will land in the 175,000 – 225,000 range.

How much do readers care about word count? They don’t. They care about pacing. They care about price– which can make a big book hard to sell on paper. Editors care, but will make exceptions for good books. Agents? You will not get one over 150,000 words. They’ll sometimes represent a long (or short) book as a favor to an existing client, but not a first-time novelist. Acceptable word counts, as determined by literary agents, tend to fall into a tight range: a genre-specific target, plus or minus 10,000 – 15,000 words. For example, first-time literary novels are expected to be between 80,000 and 100,000 words; epic fantasy should be 90,000 – 120,000.

It’s hard to land an agent with a big book because it has to be sold to one’s boss several times. The intern has to sell the book to his boss (the agent). The agent has to sell it to an editor at a publishing house. The editor has to sell it to executives who control marketing budgets. Only established, big-name authors can get through at 200,000, even if that’s the right length for the story.

An option, with a big book, is to split it. Both publisher and author stand to make more money this way. Sometimes this is the right artistic decision. For Farisa’s Crossing, it’s not, although an explanation of why would spoil the plot.

4. Farisa is a genre-crosser: literary fantasy.

What on earth is literary fiction? What is genre? Can a book be both? This is a fun topic. I could write thousands of words on that alone, but I’ll spare the reader.

Conventional wisdom, in some literary circles, is that there’s “real literature” and then there’s “genre fiction”. Literary novels transcend; genre novels merely entertain. This is, I shan’t hesitate to say, complete bollocks.

All literature has genre. What is usually called “literary fiction” is, in fact, another genre. I call it metrorealism. Actually, literary (as often defined) and mainstream fiction are two sub-branches of metrorealism that otherwise have little to do with each other. Metrorealism takes place in the real world and focuses on ordinary characters. If kings and queens, heroes and villains, or geniuses and fools are featured, it is usually ironic in a way that humanizes the subject and puts it on the reader’s level. Character-driven metrorealism with high-quality prose tends to be received (and marketed) as literary, while plot-driven metrorealism with adequate prose tends to be presented as mainstream fiction.

There’s a lot to be said for metrorealism. It’s a fine genre– especially the literary subtype. I read a lot of it. I’ve written a few short stories in that genre (that I’ll probably try to get published around April, when I launch Farisa). I have nothing against it. It’s not what Farisa’s Crossing is, is all. The Antipodes is an epic fantasy series– with literary style and aspirations.

 

The meaningful distinction, to me, has nought to do with genre. A novel is not “genre” or “not genre” because all work has genre. (Technically speaking, “novel” is a genre and “fantasy novel” is a subgenre.) Rather, the distinction is between literary and commercial fiction. So, just as commercial metrorealism (mainstream fiction) exists, so can literary fantasy.

I don’t intend to say that commercial fiction is inferior. This is a distinction of purpose, not value. Most commercial writing is perfectly adequate, and I don’t believe the reading public wants substandard dreck. People buy books for all sorts of reasons, and shoddy writing is not a deal-breaker when it comes to commercial (or critical) success, but I don’t think the first wave of readers for 50 Shades bought the books because they were badly written. (The hate readers came after its commercial success.) Would the book have sold better if it were polished to a literary standard? Perhaps it would have sold 100,000 more copies. Compared to the 125+ million it actually sold, that’s a rounding error.

There doesn’t seem to be much evidence that literary novels sell worse than commercial ones, if one compares like against like. There’s an apex fallacy by which literary writers look at the outcomes for commercial bestsellers, rather than hangers-on, and think they’re all rolling in money. I’d actually bet that improving the writing, characterization, and relevance of a commercial novel, up to a literary standard, will only improve sales. The problem? It takes 10 times as much work, and I highly doubt that it increases sales by a factor of 10.


Literary writing is intensive of writing time, calendar time, and life experience. The characters form over years in the writer’s mind. Sentences are revised several times before going into print. Every decision is questioned over and over again. The second draft is nearly a complete rewrite, now that the author understands the characters more fully. A seasoned commercial writer is about 50 percent done after writing “The End” on the first draft; the literary writer is lucky if she’s 10 percent done.

Like I said, the difference is not in value or quality so much as purpose and process. The commercial writer, once the prose is good enough that an editor can take the book from there, stops working on that story and begins the next one. The literary author line edits her own work and often has tens of thousands of words of unused back story for each of the main characters.

Commercial authors aren’t necessarily bad writers (some are, but that’s true of literary authors as well) and sometimes they’re the best storytellers. They iterate. They publish more often and get quicker feedback, so they can get more experience with a wider array of story formats. They usually have a stronger sense of the average person’s psychology– and let’s be honest, every one of us is average in almost all ways; the exceptional are usually extraordinary in only a few ways– than the literary writers (who tend, in turn, to have a stronger grasp of deep characterization, language, and atypical psychology).

Farisa’s Crossing is literary fantasy. Agents tend not to like literary fantasy (or literary science fiction). Why is that? Any answer would be speculative (pun intended) insofar as I’m not one. The polite guess is that they must believe they’re hard to market– and they might be right about that. The impolite guess isn’t relevant here.

5. I’m writing a series.

Traditional publishing carries risks. One does not sell “a book”; one sells rights to a book. This is important. Most traditionally published authors rely on their agents to navigate their contracts. They do not use lawyers (they often cannot afford lawyers) and are discouraged by their agents from doing so. Lawyers kill deals, they say. (It may be true, but that says more about the deals than the attorneys.) If lawyers killed so many deals, why would publishing houses employ them?

Bad things sometimes happen in publishing. Authors get dumped. Editors change houses or quit entirely. Agents burn out and leave the industry. Someone in a distant corner of the world might say the wrong thing and burn a bridge three degrees separated from the author– zeroing the marketing budget and turning that enviable advance into a festering zombie albatross. An author might leave his publishing house after learning that he’s been under-published for years because the house hired an executive who really, really hates Ohio– and the author is from Ohio. Getting rights back, when leaving (or fired by) a publisher, can be a nightmare.

The value of book rights is book-dependent, of course. If you’re writing a book about the 2018 election, the rights are unlikely to be valuable in 2038 unless the title achieves lasting cultural relevance now. If the publisher fumbles, it’s a lost opportunity, but the loss of rights is irrelevant.

For a series, giving up the wrong rights can be deadly. Many authors cannot publish using their world or characters without permission of the publishing house. Even without that, though, taking a series to a new publisher is difficult. No publisher wants to buy Books 3–7 of a series when a rival house owns the first two books, and won’t give them up.

Books used to go out of print when the publisher stopped printing and selling copies, and rights reverted to the author. If the book was ahead of its time, or would have fared better as a $4 e-book than as a $20 block of paper in the bookstore (the author makes about the same money on each), it could then be republished. With e-books and print-on-demand, a book need never formally go out of print, so those reversions are harder to come by.

No one wants to think about their book selling poorly, or their series being dumped by a publisher, but these things can happen and not always to bad books. Good series can be trashed for all sorts of reasons. A self-publisher can try again. In traditional publishing, retries are rare– and if the book fares poorly, it’s always taken to be the author’s fault.

6. Trade publishing takes too long.

Good things take time, and books are no exception.

I could write a 100,000-word rough draft in an 80-hour week. It wouldn’t be worth reading. I’d need to spend significant time on revision. Lining up editors and cover art shouldn’t be rushed, either, and the people doing this work need time, of course. Traditional publishing requires additional lead time, due to the emphasis placed by bookstores on each title’s performance in its first eight weeks; if it doesn’t sell well in the short term, it might not have a long term.

Much of the delay in trade publishing is legitimate. Some of it is not– there is some status-based waiting, too. A literary agent’s turnaround time can exceed 6 months. At my age, I’m not in a position where I can treat it as nothing to spend a year waiting for a “power agent” to grace me with… the right to offer him a job. I’d rather spend the time writing.

7. Control.

Title and cover art are artistic and commercial decisions; pricing is mostly commercial. Guesswork and intuition come into play.

Traditional publishing houses have expertise, and the short-term winning bet, I think, is to hand those duties over. The problem is that, since the author signs over so many rights, he loses control completely. I’ve known several authors whose books were ruined by bad titles and cover art.

Of course, if a book flops due to bad marketing or a terrible cover, the author’s in no position to ask for it to be released again with better efforts. The publisher will consider itself generous if it lets him write another book for it.

Self publishers, at least, can iterate and learn. This, I think, is one of the major reasons why self publishing will become the usual way in for fiction. Trade publishers will continue to work with nonfiction, public domain work, and the top hundred or so bestselling fiction authors. For novelists, traditional publishing will be a victory lap rather than a career: something for those who need to negotiate foreign-language rights and screenplays before the book even comes out.

By 2030, the vast majority of important novelists– including, to the establishment’s surprise, the best literary authors– will not use traditional publishing. Why? Sheer numbers. Talent seems uncorrelated with hereditary social class. For every would-be writer whose parents get him representation by a power agent as a 21st-birthday present, there are 1,000 writers who don’t.


8. I want to learn about the business.

By American standards, my politics are left-wing, so it might surprise some people that I’m saying this: I’m not ideologically against capitalism. Business is natural and necessary. I don’t view commerce as inherently dirty, and I think that academics’ outmoded, knee-jerk, leftist pearl-clutching about the material world (in fact, often a social-class humble-brag that reinforces power structures) hurts everyone. The result of the left’s dislike for all things business is that the best people shrink from it– and dirty people disproportionately go into (and end up dominating) the game. It doesn’t have to be that way.

The publishing business isn’t a massive money-maker but, for better or worse, it influences culture.

Our culture is in peril. The danger is not immigration (which refreshes it) or gender equality (on the contrary, gender justice is the strongest indicator of cultural health I know) or scientific advances (again, beneficial, at least when used well). Rather, the threat to our culture is atrocious leadership, both from the perceived right (corporate executives) and left (connected coastal tastemakers). Border walls won’t solve this problem; we did it to ourselves, and the enemy is our own elite.

Right now, too many good people sit on the sidelines. Too many people on the left would rather make a performance art out of being offended than get out there and start doing. We can’t let this happen. Good people need to enter tough, competitive worlds like business and politics– and stand up for intellect, morality, and culture.

9. I wanted to learn editing.

Editing is hard. It can be a slog.

Here’s a dirty secret about writing: quite a few people who are good at it, whether we’re talking about bestselling commercial authors or acclaimed literary voices, don’t especially enjoy it. This is something they rarely admit (and I’m not about to out anyone) and I’m not entirely sure why. I guess they have to keep up the “dream job” image, but for many of them, it has become merely a job. They’re good enough to stay relevant and get paid, but the passion’s gone.

I don’t think they should be ashamed of this. Writing’s hard. It’s not for everyone. It’s not for the vast majority of people. The world needs more readers, more than it needs more writers.

There are probably 50 million people in the United States who want to “be a writer” and will publish their novel “someday”. Not a small number of them have 300-page manuscripts. Some will self publish unready work. Others will query agents and find themselves quoted on Twitter with the annotation, #queryfail. Very few of them will actually write a solid book. Divergent creativity (branching) isn’t all that rare. It’s the fun part. Kids have it. Convergent creativity (pruning) requires taste and skill. It’s painful and detail-oriented. In corporate management, there’s a separation between the “creative work” (which is not all that creative) and the detailed “grunt work”, but that mentality carries over badly to the arts. It’s all about the details. Few people have the grit necessary to write a complete, publishable novel– much less a significant literary work.

I’d guess that 40–60 percent of successful writers still enjoy writing– and, again, I’m not denigrating those who don’t. It’s not a sin that they enjoy Manhattan cocktail parties more than 6:00am writing sessions; it means they’re normal. (I’m not normal.) I’d guess that less than 10 percent enjoy editing.

I didn’t think I would at first, but as my skills improved, I found myself enjoying editing as well. It’s a different pleasure from 120-mile-an-hour rough-draft writing, but it’s a lot of fun in its own right. I studied characterization; scene construction; nuances of grammar; line editing; story structure; and rhetorical devices and when (and when not) to use them. There’s something liberating about going deep into detail, without fear. Not many people do that after college (if even then).

When I finished my first draft of Farisa, it weighed in at 134,159 words. (I remember the number because it’s one transposition away from the approximation of π, 3.14159.) The number intimidated me, and over the next month I discovered plot holes, missed opportunities, dangling story threads, and far too much telling. The more I learned about craftsmanship, the more I spotted and improved. For every 500-word info dump I could cut (kill, kill, kill those things), I found that a 2,500-word scene was needed to strengthen a connection between events that, in my first writing, I had assumed but never stated or shown. Some of the edges I drew to tighten the story became nodes (scenes, even characters) in their own right. If my sum total, after a bit of line editing to take the word count down, comes in under 200,000, I’ll be happy.

Revising a 130,000-plus word manuscript is a big task. I was apprehensive. “Shit, I’ve got to edit this thing. Maybe twice, even.” (Hahahaha.) I found out, though, that I like it. I’m not a perfectionist– I went through that phase of life, and it’s crippling– but there is a ludic element, a game almost, of seeing how tight I can make a sentence or how good I can make a story.

The inclination to edit well and enjoy it, I think, is rare. Age and life history have a lot to do with it. If I succeed with Farisa (or a later work) I’ll be glad that it happened late. Many writers are ruined by early success; they write a great book at 25, but are useless by 30, because the Manhattan cocktail party scene takes them in and they stop having original ideas. It could have happened to me, and probably would have, had things gone a different way. I’m different, but I’m not morally superior.


At 35, half of my biblical three-score-and-ten, I find that as I get older, I get simpler in most ways. If somehow I beat all odds and sold a million copies of my first book, I wouldn’t hang around the Manhattan book buzz people. I’d move to the mountains and focus entirely on the second book (and the third, and so on).


10. I’m realistic.

Outsiders to traditional publishing think that it comes with six-figure advances, national radio and TV spots, reviews in the New York Times, and full-time publicists pushing each other out of the way to line up one’s speaking calendar.

Those deals are rare, but they also have very little to do with literary merit. It may be true that “good writing gets found”, but what makes or breaks a career in traditional publishing is how well a book performs in its first eight weeks, and that has everything to do with how the book gets treated by its publisher, which in turn is driven almost entirely by agent clout. What favors can (and will) he call in? Will someone’s kid not get into a preschool if the New York Times declines to review an author’s book? Book buzz is like sausage and laws; some things, it is best not to see them made.

The sausage-making component requires more than just “an agent”. Querying still works (given enough time) if one’s goal is just to “get in”. The agents who have the power and connections to drive the sort of treatment that makes traditional publishing worthwhile are extremely rare. One needs not only to sign such an agent, but to rank among his favored clients. That outcome is inaccessible without pre-existing social class or extraordinary luck.

Most authors of reasonable talent can get into traditional publishing, even in 2018, even without inherited social connections, if they give it enough time. Their outcomes, though, are uninspiring: mediocre deals with no publicity, that they’re pressed to take because their agents will fire them if they back out, but that lead to lackluster launches that harm their careers in the long run. Querying, of course, isn’t free. It no longer costs postage, but time is the most valuable resource we have, and querying takes too much of it compared to what it can actually do.


I don’t think it’s worthwhile to be bitter about the changes in traditional publishing. Industries evolve. So long as the self-publishing infrastructure continues to grow, literature will improve with time. The few dozen power agents in Manhattan (even if augmented by the thousands who wish to join them) were always a tiny fraction of the reading population, but their proportion is even smaller if one steps up to a global perspective. As for bitterness, which there’s a lot of in publishing, the problem (as I’ve learned, by being embittered in a different career) is that it leads, paradoxically, to magical thinking. Bitter people want to be not-bitter; they want someone (like a literary agent) to come along and solve their problems. This is why they’re so easy to swindle. Bitter people fall for sweet talk– the narrative wherein someone riding higher stops for someone special, just because– and that’s a dangerous weak spot to have in business. There are cases in which to use traditional publishers, and others in which they’re unnecessary. Realism, not bitterness, is what an author needs.


11. Experimentation / flexibility.

No one knows what sells books. It constantly changes. There’s a lot of guesswork and iteration. Traditional publishers get a bad rap for how often they get it wrong, but most self publishers aren’t any better.

Marketing is especially hard for books, because the book’s main advantage over other media is its reputation for (and, because books are less expensive, true advantage in) authenticity. The production values of a film or television show come at a price: executives who control budgets, focus groups, the need to manage an average attention span. People understand this. Popular visual media tend to establish value using social proof: special effects, wide releases, and famous actors. Novels establish value through the quality of writing, characterization, plotting and world-building. The proof-of-value isn’t $30 million but 3 years of a talented writer’s time. The issue is that a reader must spend considerable time with the writing to see these production-like values; they don’t come through in a two-minute trailer. Even for the writer to get a shot, readers must know that the book exists in the first place. Marketing matters.

No one expects authenticity from a summer blockbuster– it may be there, but it’s not mandatory– but we absolutely expect it from literary novels (and, to a lesser extent, high-grade commercial works). Authenticity and marketing/publicity go against each other. If readers knew how much Manhattan favor trading and sausage making went into “book buzz”, they’d trust it even less. For light summer entertainment, the inauthenticity of marketing is not so self-destructive. Getting people to come to the theaters is, in comparison, straightforward. For books? Most publicity efforts go nowhere, because the nature of public relations is its irreducible inauthenticity.

A publicity strategy that drives sales today might fall flat in 2019. What a publishing house thinks, for good reason, is genius, might pull a zero and take a good book down with it.

In traditional publishing, recovery is next to impossible; the approach is one-shot. One way to recover would be to reduce price, give copies away, and publish chapters either for free or in magazines, but traditional publishers rarely do this. Once a book is deemed a flop (or worse, a mediocre performer, which makes the book expensive for the publisher to give away, even though that might be the best move for the next one) the publisher loses interest in its fate, although the author doesn’t.

A self publisher, when a publicity effort fails, can try another approach. There’s more experimentation available.

12. Not to be an employee.

I’ve said before that more people want to “be a writer” than actually want to write (much less write well) and one of the reasons for this is that people, eventually, want to escape the oppressive stupidity of office life. They think they’ll be their own boss. I’ll admit that this is a contributing motivation for me, as well.

I’m good at many things. I believe writing is one of them. I’m also bad at many things. Because I have an architect’s knack for how things could be or ought to be, my mind under-attunes itself to the parochial details of the broken way things really are at any specific point in space and time. As a result, arbitrary authority– like bad legacy software, just another form of sloppy writing– isn’t something I handle skillfully. I’m not good at tolerating bad decisions or managing the childlike needs of people in power. If I could change such traits, perhaps I would. On one hand, I would be a less virtuous person, and my life’s total value to the world would decrease. On the other, being less than perfect at the less-than-virtuous skill of navigating less-than-excellence has cost me jobs and a lot of money.

I was, at one time, in the top 1 percent or so of software engineers. Perhaps I still am, although I’m not as current. These days, I prefer management and data science roles. I had a period in which I hated writing code; I could do it, but it was a struggle, because every keystroke felt like an injection of nonsense into the world. Programming did become fun again, but it took considerable time.

Sometimes it is right and prudent to follow orders (operational subordination) but organizations often demand personal subordination. If you have a backbone, and do anything in that context– anything at all– you will grow to hate it. Writing, programming, speaking… if you do the job in a context of personal subordination, you will ruin it for yourself. You may find excuses not to do it. You might complete the work, but poorly. Perhaps you’ll power through and do it well enough, but nothing you produce will be authentic. For factory-floor corporate work, this isn’t such a tragedy to the product; mediocrity and inauthenticity are not merely survivable but expected and commonplace. For literary fiction, it’s fatal.


I know plenty of people who’ve used traditional publishing: successes and failures; people who defend it and others who loathe it. I know people who’ve been dumped by their agents and fallen to pieces; I know people who’ve been failed by traditional publishing and still defend it; I know people who’ve succeeded but would self publish if they were to do it again; I know bestselling authors with exceptional agents who love what traditional publishing does for them and have no regrets. There seem, at first, to be few similarities between the outlier successes and the horror stories, but there is, in fact, one theme that connects them all.


That theme is: traditionally published authors are employees.

For example, often they give their publishers the right of first refusal, which means they can’t shop work around unless their “home” has already rejected it. Most authors cannot publish anything set in a world they’ve already used– even short stories and bonus chapters– without the publisher’s permission. Of course, publishing has elements of a feudal reputation economy, and an author dumped by a publisher or editor will likely find it harder to acquire another one than he did for his debut. And, as bad as it is for an author to lose a publisher, to be dumped by an agent is almost always fatal.

For example, authors who demand that their publishers do their job– market their books– are deemed “difficult”. Those who turn down career-damaging deals with onerous contractual terms get pressure from their agents to acquiesce and, eventually, will be tossed back in the slush pile if their agents get sick of waiting for ’dat commission. It’s shockingly easy for a writer to end up worse off than pre-debut, which leaves them powerless.

Agents don’t fear being dumped by authors, because there are thousands more submitting queries every day. Authors know that if they get dumped, their careers in traditional publishing are over.


These aren’t theoretical concerns. I know of talented writers being dumped (and blacklisted) by their agents for turning down crappy deals. I’ve heard of publishers reneging on promised marketing when the author complained about an ill-chosen title. The old system, under which authors knew that their publishers truly backed them, and that after getting published once, they’d continue to get book deals and competent marketing, is gone.

Of course, people who leave traditional publishing can still self publish, but if that were their plan, they ought not to have wasted time and rights on a different game. It would have been better for them to spend those years self publishing.

It is not always bad to be an employee. I want to make that clear. Nor is there anything sinister about employment. I’d like to run my own show at some point, but that’s not everyone’s way; done morally right, employment is a risk transfer. It only becomes immoral when the trade is misrepresented (i.e., the risk reduction is not commensurate with what the employee gives up). I’ll leave it to others to decide, for themselves, whether traditional publishing offers more than it takes away. On an individual level, it depends more on the book and the deal than anything else.

As for being an employee, there are tiers of it. There are seven-figure executives who write their own performance reviews, fly in corporate jets, and have limitless resources for any projects they might imagine, and they are employees; there are also miserable, underpaid, precarious employees. Some people enjoy organizational mechanics, either as a spectator sport or for live-action play; others consider it nonsense and a distraction. Some people excel at the game; others are either bad or, at best, inauthentic when they play. There are as many approaches that can be taken as stories that can be written.

What story do I exist to write? I don’t have a fully-formed answer but, on my own question, I’m further along than anyone else. Clearly no one else knows; I’ve lived half a life to learn that much. My job becomes to figure out the rest.

Why 95 Percent of Software Engineers Lose Nothing By Unionizing

Should software engineers unionize?

I can’t give a simple answer to this. There are advantages and disadvantages to enrolling in a collective bargaining arrangement. If the disadvantages didn’t exist, or weren’t considerable in some situations, everyone would unionize. So, we need to take both sides seriously.

The upshots of collective bargaining are: better compensation on average, better job security, better working conditions, and more protection against managerial adversity. There are a lot of improvements to employment that can only be made with collective negotiation. An individual employee who requested guaranteed severance, the right to appeal performance reviews, transparency in reference-checking and internal transfer, and waiving of onerous (and effectively nonconsensual) but common terms in contracts– e.g., mandatory arbitration provisions, non-competition and non-solicitation agreements, anti-moonlighting provisions– would be laughed out of the building. No individual can negotiate against these terms– it is, for example, embarrassing for an individual to discuss what rights she has if a manager gives a negative performance review– but unions can.

So what are the downsides of unionization? Possible losses of autonomy. Often, an increase in bureaucracy (but most often a tolerable one). Union dues, though usually those are minimal in comparison to the wage gains the unions achieve. Possible declines in upper-tier salaries as compensation moves toward the middle– however, not all unions regulate compensation; for example, unions for athletes, actors, and screenwriters do not seem to have this problem.

There are a small number of individuals in software who would not benefit from unions, and there are a few firms (mostly small, or outside of the for-profit sector) that do not need them.

To wit, if you’re a high-frequency trader making $1 million per year, you probably do not need a union– free agency is working well for you– and you may not want one.

And, if you work in a federally-funded research lab that pays for your graduate education, and that allows you to publish papers, attend conferences, and perform original research on working time, then you probably don’t need a union.

If you’re a Principal Engineer at a “Big N” technology company, making $500,000 per year, who picks and chooses his projects– you’ve never even heard of Jira– and wakes up every morning excited to implement the ideas he dreamt about overnight… you may not need a union.

If your boss is personally invested in your career, so much so that the only thing that could prevent you from making senior management within 5 years would be to commit some grievous crime… then you might not want to unionize.

If you’re anyone else– if you’re part of that other 95+ percent, probably 99+ percent; the IT peons– then, chances are, you lose nothing by unionizing.

For example: if you have to justify weeks or days of your working time; if you work on Jira tickets rather than choosing and defining your own projects; if you know for sure that you’re never going to be promoted; if your work is business-driven and you have little or no working time to spend on your own technical interests… then you are hopelessly nuts if you are not in favor of unionization.

Here’s why I say that. If you’re the typical, low-status, open-plan programmer, forced to interview for his own job every morning in “Daily Scrum”, then all the bad things that unions can bring have already happened at your job. Whatever negatives unions might bring– bureaucracy, reduced autonomy, lower status of the profession– have already occurred and are therefore moot.

Is there a risk that a union will introduce bureaucracy and reduce worker autonomy? Yes; sometimes that happens. But, engineers under Jira, Scrum, and Agile (technological surveillance) already have so little autonomy that there’s nothing to lose.

Might a union create an adversarial climate between management and the work force? Sure. But, most software engineers are low-status workers whose jobs their bosses would gladly ship overseas, and who live under the surveillance described above. They’ll be fired as soon as their performance dips, or a cheaper worker comes on the market, or they piss the wrong person off. The adversarial climate exists. Again, nothing to lose.

Do unions tend to pull compensation toward the middle (or, more accurately, the upper middle)? Of course, they do. Software engineers making $500,000 per year might not see a use for unions. That said, any engineer who works on “user stories” is highly unlikely to be anywhere close to that number, and within her current company, never will be. The same applies: nothing to lose.

What do unions do? For good and bad, they commoditize work. The technician, artisan, or engineer, once a union comes in, is no longer fully a creative, unique, lover-of-the-trade (amateur, in the original sense) valued for his intangible, cultural, and long-term (looking back and forward) importance to the organization. Nope, he’s a worker, selling time or labor for money. If both you and your employer believe your work is not a commodity– this attitude still exists in some corners of academia, and in some government agencies– then you might not want to involve a union, since unions are designed to negotiate commodity work.

Let’s be honest, though. If you’re the typical software engineer, then your work has already been commoditized. Your bosses are comparing your salaries to those in countries where drinking water is a luxury. Commoditizing your work is, quite often, your employer’s job. Middle managers are there to reduce risk, and that includes diminishing reliance on singular, high-value individuals. Running a company, if possible, on “commodity” (average) talent isn’t good for us highly-capable people; but it is, when possible, good middle management.

Chances are, you don’t get to pick and choose your projects because “product managers” have better ideas than you (so says the company) about how you should spend your time. You’re told that “story points” and “velocity” aren’t used as performance measures, but when times get tough, they very much are. Open your eyes; when middle managers say that Agile is there to “spot impediments”, what they mean is that it makes it easier and quicker for them to fire people.

A union will also commoditize your work– this lies behind all the objections to them– but it will try to do so in a fair way. Most employers– in private-sector technology, the vast majority of them– will commoditize your work just as readily, but in an unfair way. Which one wins? I think it’s obvious.

If you’ve been indoctrinated, you might think that unions are only valuable for the stragglers and the unambitious, and that the services they offer to workers are useless to average performers, never mind high ones. False. “I’ve never been fired,” you say. “I could get another job next week,” you say. “The working world is just,” you say.

Most people hope never to face managerial adversity. I have, so I know how it works. When it develops, things start happening fast. The worker is usually unprepared. In fact, he’s at a disadvantage. The manager has the right to use “working time” to wage the political fight– because “managing people out” is literally part of his job– while the worker has to sustain a 40-hour effort in addition to playing the political side-game of fighting the adversity or PIP. It’s the sort of ugly, brutal fight that managers understand from experience (although even most managers dislike the process) and in which, because they choose the time and place of each confrontation, they have every possible advantage. The worker thinks it’s a “catch up” meeting because that’s what the calendar says. A stranger from HR is there: it’s an ambush. Two witnesses against one, and because corporate fascism-lite is under-regulated in our country, the employee does not have the right to an attorney, nor to remain silent.

What might be able to counterbalance such disadvantages? Oh, right. A union.

What, though, if you’re happy with your compensation and don’t consider yourself a low performer? Do you still need a union?

Saying “I don’t need a union because I’m a high performer” is like saying “I don’t need to know about self-defense, because I’m so good-looking no one would ever attack me.” Real talk: that meth-addicted, drunk scumbag does not care one whit for your pretty face, buddy. Run if you at all can; avoid the fight if he’ll listen to reason; but, defend yourself if you must.

Have you, dear reader, been in a street fight? I don’t mean a boxing match, a prize fight where there are still rules, or a childhood or middle-school fight that ends once one person has won. I’m talking about a real adult fistfight– known, for the attacker, as an assault; for the defender, as a self-defense situation– where multiple assailants, deadly weapons, and continued (and possibly lethal) violence after defeat are serious possibilities. I, personally, have not.

Most people haven’t. I’ve studied combat enough to know that most people (including, quite possibly, me) have no idea what the fuck to do when such a situation emerges. Many victims freeze. Given that an average street fight is over in about ten seconds– after that point, it’s more of a one-sided beatdown of the loser– that’s deadly. But it’s something that untrained humans are not well-equipped to handle.

Even people with excellent self-defense training avoid street fights– there are too many bad things that can happen, and nothing good. Sometimes, they lose. Why? Because their training, mostly oriented around friendly sparring, has them primed to stop short of hurting the assailant. That’s noble, but against someone who will bite and eye-gouge and resort to murder, this is a disadvantage.

What sorts of people are experienced with street fights (not sparring)? Criminals, reprobates, psychopaths…. Thugs. They’ve been in a few. Pain that would stall or incapacitate the uninitiated (that is, most of us) doesn’t faze them; they may be on drugs. They’ll do anything to win. They’ve stomped on necks and heads; they’ve pulled knives and guns; they’ve possibly committed sexual assaults against their victims. They know and choose the venue. They select the target and the time. They may have friends waiting to get in on the action. They may have weapons. They know almost everything about the situation they’re about to enter and, most of the time, their target knows nothing.

The odds for an untrained defender, in an unanticipated self-defense situation, are extremely poor.

It’s the same in the corporate world, when it comes to managerial adversity. Most workers think they’re decent performers– and, quite often, they are– and when they’re hit out of the blue with a PIP, they don’t know what’s going on. Was it a performance problem? Often, no. Perhaps the manager found a 2013 blog post and disliked the employee’s political views or religion. Perhaps, as is usual in private-sector technology, the company dishonestly represented a layoff as a rash of performance-based firings. Perhaps the employee is working in good faith, but performing poorly for reasons that aren’t her fault: poor project/person fit, or life events like health issues, sick parents, or divorce. Perhaps some stranger three levels up made the call, to free up a spot for his nephew, and the hapless middle manager got stuck doing the paperwork.

The corporate world is a might-makes-right system with no sense of ethics. As those on top see it, there is no line between power and the abuse of power; what we plebeians call “abuse”, they call “power”; what use would power have, they ask, if rules were put on it?

People suffer all sorts of career punishments– PIPs, firings, bad references, damaged reputations– for reasons that aren’t their fault. The idea that only bad workers end up in this situation is analogous to the idea that the only people who can be assaulted on the streets are those who asked for it.

As in a street fight, the odds are overwhelmingly bad for an employee under managerial adversity. The other side has more information, more power, and more experience. Management and HR have done this before. The worker? It’s likely her first or second time.

In a non-union, private-sector organization like the typical technology company, to be an employee is to walk down the streets, alone, at 2:30 in the morning.

For everything one can learn in a self-defense class– proper fighting techniques improve one’s chances from impossible to merely undesirable– the best defense is to avoid dangerous places altogether. In the corporate world, that’s not possible. This is a country where at-will employment is the law of the land, so every time and every place is dangerous. Every street should be considered a slum; it’s always 2:30 in the morning.

If one must go into a dangerous place, what’s the best means of defense? The same rules that apply in bear country: don’t go alone. Wild animals rarely attack humans in groups, and criminals tend to be similar. But the corporate system is designed to isolate those it wishes to target. In the meetings that unfold under managerial adversity, the boss can bring in whoever he wants– HR, higher-level bosses, “Scrum Masters” and miscellaneous enforcers, even his 9-year-old son to laugh at the poor worker– while the target can bring in… only himself.

I do not intend to peddle illusions. Unions aren’t perfect. They aren’t good in all situations. However, most of private-sector technology needs them. Why? Because they allow the worker to exercise his right not to go alone. The HR tactics (e.g., stack ranking, performance surveillance, constructive dismissal) that are so common in technology companies as to have become accepted practice would simply not survive under a decent union.

The average non-managerial white-collar worker has never been in the street fight of managerial adversity. Unions have. They know exactly what to do– and what not to do– when a situation turns nasty. Fights, albeit for the side of good, are much of what they do.

Again, if you’re in that elite cadre of software programmers who get to work on whatever they want, who find $400/hour consulting work just by asking for it in a tweet, and whose bosses see them as future leaders of the company… then you’re probably not reading my blog for career advice. On the other hand, if you’re in that other 95-plus (to be honest, it’s probably 99-plus) percent, you should unionize. All the bureaucracy and commoditization that you fear might come from a union is already around you; you can’t make it go away, so the best thing to do is to make it fair.

Incel: the Strange Identity That Became a Weapon Against Feminism

The incels are coming. Hide the socks.

The word incel means different things to different people, which makes for dangerous discussions. On the surface, all it takes to qualify as an incel is to be involuntarily celibate, a fairly common turn of fate that most people experience at least once. Yet a community of homegrown extremists and terrorists has taken up the label incel to describe something darker: a defeatist mentality asserting that women (and especially feminists) have doomed a large percentage of men to implacable misery.

If by “incel” one means a misogynist or extremist, then nothing is acceptable but an utter desire to end that culture. Of course, to attack incels as people risks association with one of the oldest pillars of patriarchy: virgin shaming. This is why I don’t like the term incel: the extremists began using it to gain sympathy, but also to recruit, because although pathological misogynists are uncommon, people suffering annoying dry spells (and at ages 15–25, when people are most susceptible to propaganda, they are mostly men) are not.

Make no mistake about the incel identity, though: whatever the word meant once, it has lately been used as a self-identification by a culture and ideology so frightening, retrogressive, misogynistic, and downright insane that it takes a strong stomach to look at it square-on.

In an age of proliferating identities, where personality traits become labels, and we have terms like demisexual, otherkin, wagecuck and NEET flying about, the identity of incel is perhaps the strangest, because it fixates on what is, in fact, for almost everyone a transient frustration. Sexually speaking, there’s an order of magnitude more demand for young (18–23) women than for men of that age, and so this period is unpleasant for most men. So much so that societies have had to invent ways of dealing with it: prostitution is an old one, and martial culture (giving young men a source of worth) is another. College is yet another technique that tries to handle it, by culturally and geographically isolating 18–22 year olds so young men have a chance. Mostly, though, this problem is managed privately using wealth transfer, especially around social and cultural capital, where there’s enough ambiguity to make it socially acceptable. Young men from privilege get set up, by their parents and inherited networks, with precocious career advancement to give them esteem, build their confidence, and maximize their “eligibility” when they hit the golden score of male sexual attractiveness (25 to 44) and look for marital partners. The rest of the young men can go die, as far as conservative patriarchal societies like our late-stage corporate capitalism are concerned.

That’s what’s so weird about incel rage. These men are blaming women for something that patriarchy did to them. Women didn’t create the Hollywood narrative under which only young sex counts (quite opposite from the truth) and a man is a loser if a virgin at 25. Women didn’t crash the job market. Women didn’t drive up college tuitions. Patriarchy– and about 90 percent of the people running it are men– did that.

Inherent in the incel worldview is the notion that this transient state– an unfavorable sexual power balance, since women reach high levels of sexual attractiveness so much earlier than men– will last forever. Average- and even above-average-looking incels declare themselves “ugly” based on facial bone-structure traits that haven’t been fetishized this heavily since the racist pseudoscience of the late 19th century. Male grievance culture isn’t new; it’s been around forever. What is new is the degree of despair and violence. It wouldn’t have been able to hit a critical mass until recently. Male grievance culture– from mainstream sexism in the 1950s and ’60s, to the rakish porn-star chauvinism of the 1970s and ’80s, to the pickup artistry of the 1990s and ’00s, to the raving misogyny of incels today– has become increasingly cult-like with each iteration. What gives a true cult its ultimate hard-on? Apocalypse. What did it take to bring the incel phenomenon about? Socioeconomic collapse.

The economic changes of 2008 were managed well enough to protect the wealthier and older people by keeping asset prices up. Socioeconomically, however, they were cataclysmic, and most of society underappreciates the damage that has been done. We’ll be reeling from this fifty years from now. The shithole we let our society become– that’ll kill people in the future even if we fix everything now. For example, people will die in 2060 because of anti-medical prejudices and bad habits developed, right now, in this era of unaffordable, lousy care, atrocious coverage, and adversarial behavior by employers and insurers who’ll break a social contract (and often a legal one) as soon as there’s a dollar in it. The world changed; an apocalypse actually happened.

What does this have to do with incels? Well, they are a post-apocalyptic creature. What makes them unnerving and sometimes disgusting is their complete lack of insight into the nature of the apocalypse inflicted upon them. They blame women for a social calamity– one that has left them hurting and miserable– that was, in fact, caused by corporate capitalism.

Incels believe that priapic creatures like “Chads” (the male entitlement figure of yesteryear) and “Tyrones” (an offensive African-American stereotype) scour the wasteland of modern human sexuality and fight over the last remaining “pure” women like junkyard animals. Names that Chicagoans and Twin Cities residents used to describe less-sophisticated Midwesterners who gave the region a bad name– Trixie, Chad, Becky and Cam– have mutated into supposed creatures one would expect to fight in the 2300 AD world of Chrono Trigger. Anyway, in this post-apocalyptic, over-fucked and semen-drenched world, everyone’s having lots of sex– frequent, amazing sex, because that totally happens at 18, and also inexplicably stops around 23– except them.


I’ve spent months studying human sexuality, in part as background research for Farisa’s Crossing, since I’m having to build characters with sexualities different from my own. You’d think there’d be data to support this sexual apocalypse, if one were going on. Nope. For example, infidelity and marital failure are becoming less common, as is participation in high school and college casual sex. The culture’s healing, not falling apart. What’s driving it? It turns out that feminism is a good thing, especially for so-called “beta” males who lack the glib charm, aggressive presence, irresponsible risk-seeking and financial resources to succeed at high-frequency casual promiscuity.

It’s patriarchy that drives women into the arms of boorish alpha males– the sorts who climb corporate hierarchies– not feminism. When women don’t have to marry early out of economic necessity, and when they choose their husbands instead of having those choices made by their fathers or economic forces, so-called “beta” males win (contrary to “Chad” phobia) more often than the aggressive, boorish men our society deems “alpha”.

Incels and MRAs halfway acknowledge female maturation, but because they’re so obsessed with casual sex, they’ve built up another toxic narrative to explain it.

The worst men seem to win at casual sex. No one disputes this. Even if decent men are having casual sex– and one must be careful about terminology here: does it count as casual if it becomes a legitimate relationship? what about if it happens between two close friends?– it is most often the case that the indecent are loud about it. Perhaps normal people are doing all kinds of stuff only they know about, but the loudest cultural narrative one sees in casual sex is that of macho, entitled men taking advantage of women with low self-esteem (often, victims of abuse) with copious alcohol in the mix. This is unhealthy; it’s hideous. Is it the sexual mainstream? No.

There is no sexual apocalypse. Terminological debates aside, casual sex and in particular stranger sex (to which incel fantasies about hyper-aggressive demonic men absconding with superficial women might apply) seem to be going down. This variety of sexuality, perhaps deserving of its vilification for its superficiality and tendency to spread disease, isn’t common at all. Most women have no casual encounters; or they have one, don’t like it, and never do it again; or they only have them when led to believe (often by unscrupulous men such as “pickup artists”) that romantic relationships are forming, which is not their fault. Few women knowingly have casual sex, and those who do rarely enjoy it: only about 10 percent of women orgasm during a one-night stand. In this light, the incel mythology about women pining for selfish “Chads” is a bit absurd to anyone who understands sex. Monogamous relationships, which women overwhelmingly prefer (with a few exceptions), are very much in, and feminism is no threat to them.

There’s a difference, of course, between a dry spell and an apocalypse: between weather and climate collapse. Like most 20-year-old men, I was socially and romantically unsuccessful compared to what I wanted to be at that age, but I knew it would get better. At that age, women have all the options and men have maximal competition; it improves. Incels, on the other hand, have tied themselves to the mast with an extreme notion that there’s no hope. Most of these guys aren’t unattractive or seriously disabled (except, perhaps, for often-treatable mental illnesses) and they come overwhelmingly from the middle class of the English-speaking world. They live in diverse countries where they could easily meet women from all sorts of cultural backgrounds. It is not hopeless, at least sexually speaking, for them at all; if they evicted the misogynistic, cultish garbage from their heads, they’d be fine.

They just need something better to do with themselves. See, the way we handled dry spells, back before the 0.1 percent trashed the economy, was to focus on our careers. Those existed back then. There was a time– in 2018, it’s hard to imagine this– when applying for jobs actually worked. Transiently sexless men had something to do other than stew about experiences they weren’t having.

We are not in a sexual apocalypse caused by feminism. We are in a socioeconomic apocalypse caused by corporate capitalism– also sometimes known as “the patriarchy”, though I dislike this term because it demonizes fatherhood and gives too much credit to an oppressive system. We must know who our true enemies really are. Incels have allowed themselves to become useful idiots who blame the malfeasance of corporate patriarchy on women.

Incels aren’t miserable because of women. They’re not miserable because they aren’t getting sex; it has always been hard for young men to get sex, and sexlessness doesn’t by itself lead to such rage. To wit, most 60-year-old widows become involuntarily celibate, but don’t fall into rage. These men are miserable because society has subjected them to a long con. They’ve been swindled. Society has stuffed their minds full of rotten ideas that are leading them down a bad road.

When a corporate-capitalist society such as Mussolini’s Italy or Corporate America perceives peace, it does not go out of its way to differentiate gender roles: for example, in overt governmental fascism, men and women are both told they must support the state; in our covert employer-nucleated fascism, the directive is to support a manager’s career and hope to be invited to ride his coattails. When such a society perceives war, though, gender roles emerge: the woman becomes a soldier factory, favored for her ability to produce children of the master race; men become sacrificial and are told to accept posthumous glory, for not all will survive what the society decides it must do.

Our time is unique, in that peace and war have become one, like the gas and liquid phases of a supercritical fluid. Most Americans do not sacrifice, as we would in war– meat and sugar are not rationed, gas prices of $3 per gallon are cause for complaint– and the wealthiest quarter of us can live in peaceful prosperity. At the same time, war surrounds us: two campaigns we started (with unclear intention) last decade still rage in the Middle East; our appetite for drugs has financed violence and upheaval from Juarez to Medellin; and social media drama can render an individual unemployable (and blacklisting is, I would argue, an act of war). In what state are we? A peace with pockets of war, or a war that looks like peace? If war, who is fighting whom?

Patriarchal and fascistic societies ramp up toxic masculinity in preparation for war, especially when they intend to be the aggressor, and wind it down (into smoldering chauvinism, as in the 1950s) during peace. So what’s our state today? We live in a mostly-peaceful but tenuous time of asymmetric economic war. Overt acts of aggression (health insurance denials; negative employment references and blacklisting; social-media harassment campaigns; poisoning of public water resources) are fairly uncommon, but terrifying, rapid in their onset, and hard to prevent or control. We live in a time where 140 characters from a powerful person can send fifty unconnected strangers to harass any target in the world. We live in a time when workers get fired, quite literally by computers whose sole purpose is performance surveillance; the manager’s only function is to read the monthly print-out and deliver the bad news.

It’s important to understand that, while this war is different from any other, it is a real war. The 0.1 percent has not been waging “class war”, as if that were some inferior category of war, against the rest of us. It is, and has been for a long time, an actual war. People have died because of it.

Incels get the nature of this calamity wrong. They’re too young to know about health insurance, and they haven’t gotten into the corporate world yet, which is why they think it’s only women who are capable of maltreating people. It’s impossible to sympathize with the militant incels, because they lash out at innocents, but they are perceptive of the fact that an apocalypse is underway. Their mistake is that they mischaracterize it. Millennials really have been fucked over by previous generations.

Had the upper class not stabbed us in the back, we know what society would look like, because this is what it was like forty years ago: if you had a car and a college education, you could talk your way onto a job anywhere in the country. You’d call an executive on Thursday, have an hour-long lunch with him on Friday, and start on Monday. If you were 27 or older, you’d get a management-level job. If you were 32 or older, you’d get an executive job. If the job required an advanced degree, the company would send you back to school. If it was 1:30 in the afternoon and you were still working, you were a go-getter who’d get every promotion. This is the country we used to have, and our elite took it from us, and we should be willing to fight them– to die, and even to kill, if necessary– if we stand a chance of getting it back.

What killed our society, starting in the 1970s? The right wing wants people to believe that social advances (feminism, gender liberalism) had something to do with the economic degradation that began around the same time. That could not be farther from the truth. In fact, the situation for women and racial minorities has been declining of late, specifically because of worsening economic inequality and job prospects. So what did go to hell in the late 1970s? Again, the culprit is toxic masculinity.

The elite of the 1940s–70s saw themselves as a national elite and took pride in making the country better: building libraries and museums, supporting progressive causes, and making education more available. To the extent that this can be gendered (and at the time, it was) this was a productive masculinity that brought society (and, over time, women) forward. Toxic masculinity never took a break, but in economics, it was on defense for a solid forty years, only to rage back into focus in the 1980s. Why?

One might be tempted to pin our society’s self-created decline on “the Reagan Era”, but I don’t think one conservative politician can be blamed for everything that happened. Rather, as we became increasingly connected, our national elite re-polarized. This ties in to our hatred for Baby Boomers. Most Baby Boomers aren’t the privileged assholes we love to hate on– the traditional Boomer narrative ignores black Boomers, gay Boomers, dead-in-Vietnam Boomers, and Boomers who fought for the rights of minorities or engaged in the (alas, losing) battle against corporate supremacy. But the Boomer 1% deserves its horrible reputation. These were the guys who compared themselves to oil sheikhs, third-world despots, narcotraficantes, and (after 1990) post-Soviet kleptocrats and decided that the American CEO– making $400,000 per year, and having to follow his country’s laws– was the short man in the group.

The lesson from the Boomer 1% is to forget Milton’s comparison of reigning in hell versus serving in heaven. From a material perspective, reigning in hell beats even reigning in heaven. The 1980s is the decade when our elite began intentionally de-civilizing us in order to join the slurry of kleptocratic garbage that is the global elite.

It is hard to imagine reversing the above. The national elite, as it once was, is dead. After selling us out, it was subsumed into the malevolent global one. Toxic masculinity runs the world again– to everyone’s detriment. It’s the force that drives a man with $1 billion to want $10 billion, or a man with a beautiful wife to cheat because he has decided that the world owes him 10 (and then 100) beautiful women. It is not enough for him to drink and enjoy his milkshake. He must drink all the milkshakes, even if he throws up afterward.

Incels are not the men running the world, of course. They’re not drinking any milkshakes. In fact, they’re triple-threat losers. They’re sexual losers because of their social alienation and self-sabotaging tendencies, perhaps inherited from our puritanical culture’s views of sex as dirty (amplified by an envy of the mature and less inhibited). They’re social losers because toxic masculinity says in no uncertain terms that low-status, unsuccessful men are worth less than garbage and ought to be viewed with suspicion. They’re economic losers because the high-autonomy middle-class jobs (which would be fantastic plum positions by today’s standards) have been replaced by technologically surveilled and menial subordinate work. They exhibit toxic masculinity in their odious attitudes toward women, but they’ve also been crushed by it.

The logical fallacies of the male grievance culture are too numerous to list– each one could get an essay of its own– but the most prominent (no pun intended) is the apex fallacy. An apex fallacy exists when one compares the most successful or fortunate of another group against the average-case performance or outcomes of one’s own. Reactionaries and nostalgists often indulge in apex fallacies, comparing their lot as average people today to those of kings, knights and ladies– not peasants who die at 33 of typhoid. Likewise, incels believe that women drown in male attention because they’re hyperfocused on the white, blonde, young “Stacies” that so many other men are chasing. Apex fallacies exist, likely, because it is advantageous to observe the most successful individuals. When the pinnacle of a society is corrupt, calamity is likely to follow– bad examples are being set– and we should be scared for that reason. Incels look at the top of society and see people devoid of virtue– the unaccountable, unscrupulous, self-indulgent “Chads”, almost always from well-connected families– winning. Their most noted reaction, “Why can’t that be me?”, is hardly sympathetic, but their problem is. In terms of male role models, our society is in dissolution.

Corporate capitalism, and other forms of dysfunctional patriarchy, cannot keep themselves afloat without using various narratives to manipulate people’s desires and thereby allay the resentment that would otherwise accrue to the corrupt top. For example, it is patriarchy (not feminism) that tells men they are worthless if they cannot support a family on one income. To be a “basement dweller”, under patriarchy, is to be less than human. The system tells men to derive their sense of worth from capability, especially as expressed in competitive endeavors– even if those contests are dehumanizing or stupid. In high school and college, one of the most fetishized (but also most detrimental to personal growth) competence metrics is the ability to procure sex when one wants it. (And, according to this narrative, men always want sex, or else there is something wrong with them.) When incels struggle with a normal, benign thing– that it is difficult for men under 25 to find sexual partners– they begin to see themselves as useless and incompetent, doomed to fail in all other areas of life. They shut down; they lose contact with their friends, their grades drop, and they become addicted to video games and internet trolling– living out their power fantasies behind a keyboard.

What does patriarchy think of this massive waste of male talent? Patriarchy couldn’t be happier. See, virgin shaming is what keeps men going into work: to procure those pictures of dead people, which can be traded for social experiences like overpriced meals and recreational neurotoxins, which may on occasion lead to sexual access.

There’s a response I can imagine coming from incels and MRAs, which is that women, as much as men, can participate in virgin shaming, gold digging, and various other behaviors that keep toxic masculinity in place. Of course, that’s true. See, feminism doesn’t require a conviction that women are innately morally superior to men. I am a feminist and hold no such belief. I think the distributions of moral character are most likely equivalent across genders. And just as there are good men aligned with feminist causes, there are plenty of women who lend their support to patriarchy, who enforce its doctrines, and even who prefer to live within it. Women actually exist who uphold toxic values by making themselves available to the sorts of malignant, aggressive men running our civilization into the ground. It is not the acknowledgment of such women’s existence that makes MRAs and incels dangerous; it is their inaccurate belief that immature, damaged women are somehow representative of their gender (they are not). The truth is that, in a world with billions of people in it, you’re bound to find everything.

What is feminism? I think it has two components. One is the belief that women ought to have equal political and economic rights to men. That, in itself, doesn’t need to be called feminism. If this were all there were to the feminist cause, I’d have no issue with people who say, “I’m not a feminist; I’m an equalist”. The second component pertains not to biological femaleness but to femininity. This gets tricky, because it’s not clear that any of the differences between “masculine” and “feminine” nature exist in any innate way. Any discussion of masculinity and femininity must be relative to a cultural frame. There’s a lot of virtue– compassion, judgment, quiet competence, collaboration over competition, sexual restraint– that lives in what our culture construes as feminine. What makes toxic masculinity so virulent is that it’s built to destroy the feminine. It does not necessarily hate females; it hates femininity in women, but especially in men. What we’re learning, as our late-stage corporate capitalism destroys the planet ecologically, culturally, and socially as well as economically, is that in order to survive for another century, we’re going to have to become more feminine. It is not about women as superior to men (I do not think they are) but the need for us, as humans, to evolve in a more feminine direction and, while retaining masculinity’s virtues, purge it of its aggressive and toxic elements.

Feminism also has tons of historical support. Making things better for women also makes the world better for men. Gender is not a zero-sum game.

Self-indulgence is often marked by misogynists (most likely, a case of projection) as a female vice, but it’s actually the core of toxic masculinity. This is not to say that female self-indulgence and toxic femininity don’t exist– every woman who demands an expensive carbon crystal before she’ll marry is engaging in an instance of toxic femininity (manufactured by toxic men in the diamond industry)– but it seems to be toxic masculinity that is most capable of metastasis. Toxic masculinity says: one must grow up and acquire, acquire, acquire; one must do it fast; and one who acquires less than other men is inferior and not really a man at all. Accrued wealth and paid work– not the influence of family contacts, though that accounts for almost all of what actually happens in the career game– become the sole, numerical metric of male value. One cannot criticize the might-makes-right corporate system, either, unless one wants to risk being called “whiny”, “weak”, “a snowflake”, or (who can forget this classic?) “a fag”.

Corporate capitalism and toxic masculinity are cruel, and there’s no moral justification for shoehorning 50 percent of the population into this system (and forcing the other 50 percent to clean up). But is this brand of masculinity a con? I don’t think it always was. In the 1950s, there was real work to be done, and people could make a living by doing it. Competence and merit actually mattered: there were more small businesses, it was easier for a skilled person to escape a reputation problem and reinvent himself, and there was high federal investment in R&D, resulting in 4–6 percent annual economic growth. For all the flaws of that era– I can’t think of anyone sane who’d want to restore 1950s gender or race relations– it was a time when work worked.

Keynes predicted that, by now, we’d be working about 15 hours per week. That turned out to be right. So where’s our leisure society? Nowhere, because of the Graeberian imperative to hold position. People now spend 10 hours to work a 2-hour day, the rest of the time full of useless anxiety in open-plan offices that exist largely to humiliate them. If the bosses figure out how little work is necessary, they’ll cut jobs and workers will lose, so it must be hidden. The work being done almost never matters; it is mostly a commodity, and little respect accrues to people who do actual work. Instead, we’re a nation of professional reputation managers. If you’re not disgusted by the notion, you’re not human. Of course, this means that the winners of the new economy are those people (mostly, physically imposing men, because even though such violent confrontations have been rare for thousands of years– it’s not how we like to do business– it is just easier to ask for favors when one could physically end the other’s life) who can force others to manicure their own personal reputations. Neofeudalism sets in: those who have permanent staffs of reputation managers (of course, the firms that employ them fully believe real work is being done, and occasionally it is) become lords, and those who support their campaigns for relevance in a blandly decadent, pointless economic system become the vassals.

One can see this most prominently in that people do things that are more work-like for their hobbies– gardening, hunting, hiking, learning new fields, writing– than the stupid, sedentary, humiliating subordinate bullshit they endure under the proto-fascist corporate regime of status reports about status reports they call “work”. Men (and women) used to go to work and do things, but now they go to work and subordinate to other, almost always completely useless, men.

Isn’t this ancient, though? Hasn’t work always been about subordination? Well, yes and no. This topic requires more words than I can give it, but complex endeavors always require operational subordination. That is, some people have to take direction from others, and apprentices need more direction than seasoned masters. There’s nothing wrong with operational subordination; we do it every day, to our benefit, when we stop at a red traffic signal. It is better to follow a sound order and wait two minutes than to disobey it and possibly die in a preventable traffic accident. Operational subordination isn’t humiliating; it’s just something we need to do. In today’s corporate climate, though, the demand has gone beyond lawful operational subordination into personal subordination. It is not enough for the worker to take direction; he must fully accept the total superiority of the manager. It is not enough to do the job well; he must pretend to like it, he must ask for more grunt work when he is underutilized, and he can never for a second allow anyone to hold the suspicion that he might be smarter than the mediocre apparatchik doling out the tasks.

Here is where I offend some leftists: it may be entirely due to socialization, but men and women are different. Women are, to put it bluntly, better actors. They learn how to be pleasant to people they dislike, to mirror emotions without feeling them, and to engage in the ceremony of personal subordination while, in fact, avoiding major compromise. They’re socialized to put a crumple zone between them and abuse that is coming from uphill. Perhaps that’s why, even though corporate culture is terrible for women, it’s devastating to men. Women can play a humiliating, stupid game– powdering the bottoms and attending the whims of adult babies called “executives”– without total personal collapse, whereas men seem unable to do so. I don’t think the explanation is that men are weaker; I think we are not socialized as well to be actors– to be able to play a humiliating, subordinate role for 8 hours per day without internalizing it– and that we are also pushed to identify with paid work (a problem, in an economy where humiliation is the only thing left most people will pay for) more than women are.

If you tell men that the highest expression of masculinity is to go into a workplace and subordinate to other men– not the temporary operational subordination of the apprentice, but a permanent personal subordination to better-placed, my-daddy-made-a-call mediocrity– you’re going to have a masculine crisis on your hands. And we do. While I won’t get into detail about Jordan Peterson, his appeal seems to derive from his willingness to address the masculine crisis head-on, without fear. (This is not to say that he knows how to solve it.) But here’s the truth: our masculine crisis will not be solved until we eradicate artificial scarcities (which exist to manipulate men into working hard, on the promise that those proxies for female sexual attention– job titles, higher salaries– actually mean something) and corporate capitalism itself. To kill corporate capitalism, we’ll need to institute a more compassionate society– one that takes care of people, sending them to school if they wish, paying favors forward without expecting immediate return– and that would be, traditionally, more feminine. So we have the odd-sounding-but-true conclusion that the solution to our masculine crisis is (in part) feminism.

What was done to these incels was not done by women. It was done to them by patriarchy: a system that has inculcated the notion of women as sexual objects and rewards for participating in an economic system that professes to be a meritocracy but that, on closer inspection, is no further along an evolutionary journey than might-makes-right barbarism. They are not merely entitled men, the enemy. They have been infected by terrible ideas and they are suffering intensely. And while their expressions of rage, both on and off the internet, are often unacceptable, we must raise our focus away from this particular element, and smash the woman-hating, racist, elitist, proto-fascist corporate system that created them in the first place.

The Green Pill: The Case for Doing the Right Thing; Why Feminism Is Good for Men, Too; and What to Learn from “Incels”

Fifteen years ago, I got taken in by the male grievance cult and swallowed its nonsense whole: pickup artists, Chads, dual mating strategies… all that garbage, though we had different names for the stuff. It was the same ugly culture, though it seems to have gotten worse. I was what would today be called an “incel”: unsuccessful with women, and seething with rage. I’m ashamed of my participation in that world, and in the then-fledgling art of internet trolling, seeing what all the nonsense has led to.

Today, by contrast, I’m happily married to a feminist woman, and I’m writing Farisa’s Crossing, a novel with a female protagonist. What changed? Well, the time in between has been quite interesting, and I think there’s something one could learn from my own zero’s journey into (and, later, out of) the “red pill” world of male grievance culture. Yet, every time I sit down to write “that” essay… I just fucking can’t. I don’t like reliving it. Today’s incel phenomenon hits too close to home. I read delusional, angry screeds on the “braincels” subreddit or various other incel forums, and I remember a time when I could have believed (or even said) such things. It gives me a headache.

Forgive me if this is raw. I’m not a saint. I don’t judge the male grievance community– increasingly like a cult in its commitment to a set of incorrect, self-defeating, and misogynistic beliefs with no basis in reality– from a place of superiority. I was there once. I got taken in, and I got out. I know how it operates, and I know why it appeals to some young men.

What is it that drives young men, while they endure that oppressively quotidian and not especially harmful problem of early-adulthood sexual infrequency, into such rage and despair? Well, I think everyone should watch this video about charismatic anger. Rage spreads. Fear sells a story. Angry memes stick in the mind, regardless of truth. Add to this some confirmation bias, apex fallacies, and ready-made excuses for one’s own sexual infrequency– “it’s not me; it’s all women”– and you get a self-defeating complex that takes years to evict from one’s head. As with a cult’s illogic, smart and otherwise rational people don’t seem to be immune to this.

Most of the guys who get taken in by male grievance culture are like me around age 20: decent men in a vulnerable, difficult time where the rules are unclear, it’s hard to know what’s going on, and everyone else seems to be doing better. That said, the luminaries of this culture seem to be an assortment of loathsome creatures, such as white nationalists (who argue that multiculturalism and miscegenation have caused the incel’s problems), domestic abusers, pedophiles (who wish to normalize their perversion by demonizing adult female sexuality), and the sex addicts who call themselves “pickup artists”.

The male grievance cult, in other words, draws its strength from the worst of the male gender.

Blue, Red, Black and Green Pills

Mainstream American culture doesn’t indulge in the overt misogyny of incels or pickup artists; its misogyny is one of politically correct hypocrisy. We claim to be liberal and vote conservative. We support a might-makes-right economic system, corporate capitalism, in which economically successful men (until 2017) were able to maltreat women with impunity; it was one of the perks of being an executive. This corporatized, paper-thin, dishonest culture I call chauvalry: a combination of chivalry and chauvinism. It’s what the male grievance culture calls the blue pill.

The blue pill’s not feminist. It’s the worldview of the Hollywood movie where being “a nice guy” and working hard for his boss is enough that a man “ought to” get sex any time he wants it. In romantic comedies, it shows us male behaviors that would actually put someone in jail: ticketless airport runs, stalker-level displays of singular attention at the Act-2/Act-3 transition, punching guys in the face who look at one’s girlfriend the wrong way. It tells men that if they do the right thing, two hours of cat saving ought to be enough to attract women– even if a man is still in high school. That’s not how it works. An exercise montage stands in for the hundreds of hours it takes to fix or improve an ill-cared-for body. Adults know this, but adolescents might not fully get it.

The blue pill, “nice guy” worldview is casually misogynistic. It indulges in just world fallacies that suit our corporate masters. Do what you’re told for fifty weeks, it says, and your beautiful wife will give you hot sex on your two-week vacation. Be the hard-working “all-American” guy, and you’ll get laid, no problem. It presents sex as the ultimate validation of male virtue, and women as a sort of “insert compliments and free dinners, receive blowjobs and nookie” vending machine. You don’t have to go to the gym and become an attractive person, or read books and become an interesting person; just show up at your job, and a pretty girl will come by and touch your dick, we promise.

Of course, sex is not (nor should it be) the measure of male virtue. Men are not owed sex for being productive members of society; they are not owed sex at all from anyone who does not want to have sex with them.

Usually by the first or second year of college, men realize that the blue-pill story is fraudulent. They see useless men getting ample sexual activity in high school and they’re told that it’s different in college. Whether they go to a state school or to Harvard, it isn’t. Actually, I don’t think that useless men are getting more sex on average than anyone else; they’re just the ones who make a trophy out of it. The decent people are having sex, too; they’re just not talking about it.

In The Matrix, the protagonist is offered a choice: take the blue pill and persist in self-deception, or take the red one and engage reality, starting on a hero’s journey. The male grievance community co-opted this metaphor, and started using the term “red pill” to describe their alternative, less corporate but more vicious, misogyny. Of course, what they call “red pill” isn’t any more reflective of reality than the blue pill worldview they reject (and that all thinking adults know to be a facade). But they took the term first, and we’re stuck with it.

The red pill view of women and relationships is much more dismal. It views all of us (male and female) as selfish, hypersexual, narcissistic and obsessed with physical appearances. Coming from a mix of failed providers who fared poorly in divorces– most divorces aren’t “won” by the woman, but impair both parties’ finances– and sex-addicted pickup artists, the red-pill view that dominates male grievance culture is intensely negative. For example, the typical red-pill view of women is that they all secretly long for domineering men (“Chad”, in incel lore) who will degrade them.

In the early 2000s, the process was called “speed seduction” or “Game”; now it’s known as pickup artistry. The truth about pickup artists is that they’re often insecure, disease-ridden, broken men. Their lives aren’t enviable. Their high-frequency promiscuity is mostly made possible by lowering of standards. A bona fide sex addict doesn’t care if she’s a “9/10” marriage-worthy chemical engineer or a “3/10” disease-ridden drunk party girl, and relationally-impaired men often can’t sustain the effort necessary to attract the former. However, their unhealthy lifestyles have given them hypertrophic social ability. They know what a certain subclass of women, selected for rapid sexual availability, want.

What makes pickup artistry so dangerous an art for a young man to learn is that (as with cults) the first courses and modules will focus on what 97% of people (i.e., the ones who know it) would call common sense– basic social skills and grooming: don’t talk about sex on the first date, wear dark colors to seem more masculine, do between 25 and 40 percent of the talking. All of this advice actually works, contradicting that two-word blue pill myth, Be Yourself. It’s the later material in the pickup world that’s more disturbing. Ultimately, pickup artists’ views of women are not based on the best sample, but on the small percentage of women on whom cheap tricks work. Since pickup artists are rarely able to achieve long-term, mutually enriching relationships, they deny their possibility. Run Game forever, they say. Never let your guard down, they say. Don’t be vulnerable, they say. All girls are basically the same, they say. It’s best not to listen to that shit. None of it’s true.

Pickup artistry doesn’t work, not as advertised. The high-pressure sales tactics that lead to quick lays will undermine genuine relationships. “Dread game” is abuse. Finally, having sex with a lot of different women never cures insecurity. Sex is amazing when it exists on its own, in the context of a loving relationship, but sex rarely solves problems. It does have a fascinating history of creating them, though.

Though the blue-pill lie ignores corruption, the red-pill lie exaggerates it, and advises one to manipulate it for personal benefit. This doesn’t work as most people hope, because few people get away with aggressive non-virtue for very long. Con artistry is a great way to get a one-night stand, and a shitty way to find relationships. In the long run, most people don’t find it fulfilling; men who indulge in pornified casual sex lose interest in “7/10” women– in the same way that porn addicts tire of vanilla scenes and gravitate toward the extreme– and obsess over the “9/10” they can never have.

It shouldn’t be surprising that men selling the secrets of how to con women into reluctant sex will also swindle men buying their services. They overpromise. Bed Models In 21 Days, only $26.99. Those who dip into these corrupt games find that they’re not winning enough; the outright losers become enraged and disgusted. Despair sets in, and that’s what incels call the black pill.

Blue pillers view the world as just and indulge in hypocrisy. Red pillers view the world as corrupt and seek personal benefit, making the world a little worse with each move. Black pillers see the world as hopeless; it must be destroyed. That’s what produces the Elliot Rodgers and Alek Minassians.

When I was a redpilled guy trying to up my dating game, I learned about “negging”– an exaggerated refutation of the blue-pill notion that disingenuous compliments lead anywhere– and how to deal with those pesky interlopers called “AMOGs”. I learned how to play hot-and-cold games. I don’t like that I indulged in this, but the blackpill incel discussions of today’s world are worse: one finds martyr worship for murderers like Elliot Rodger, fantasies involving Westworld-esque sexbots, and female sexual slavery from the “pro” side of a debate that should not exist. These blackpilled men are, of course, deranged and need psychological help. I doubt many of them, in a country without universal healthcare, will get it.

Red-pill pickup artistry, at least, had better exit options than black-pill incel rage. In the mid-2000s, I used cheap tricks to get dates and make-out sessions, and to improve my confidence. It may have improved my life, though I doubt it. Eventually, with enough dating and relationship experience, I learned (spoiler alert) that women are people. Even when women I dated rejected me, they didn’t seem like terrible people; they were only quicker to perceive that it wouldn’t work out. Successes kept me going: the odds are always low, but the payoff is high.

With black-pill misery, though, there seems to be no exit. If I believed I lived in the world that these guys think is the real world, I’d be just as enraged. See, they believe that all women secretly want to sleep with their high school bullies, mythologized as “Chad”. In their view, romantic relationships are impossible, because they’ve used stunted male sexuality for their model of the adult female. Women who show genuine sexual interest in non-Chads, they believe, are settling for a “betabux” provider and, like a cat in heat, will do anything to get alpha sperm when they’re ready to have children. The red pill turned men into pickup artists; the black pill is turning them into suicides or, worse yet, murderers.

Most cults have an eschatological narrative, because cultish behavior is unsustainable and therefore the world’s imminent end must be, at least, wished-for. The recent turns in male grievance culture show us belief in a post-apocalyptic landscape. It says that women like their mothers and grandmothers no longer exist. It says that today, every “Stacy” has been fucked by 100 Chads before age 15. Ask a blackpiller about adult sexuality, and you get a picture of junkyard animals fighting over scraps of meat, ten weeks after the end of the world.

Fuck the blue pill. Fuck the red pill. Especially fuck the black pill.

There is corruption in the world; it has always been there. But, it is rarely so hopeless as to merit nihilism and destruction (black pill) and one need not get involved and add to the corruption (red pill). Are there women so damaged that they’ll sleep with men who deploy cheap tricks? Of course there are. That doesn’t mean that one has to take part.

I advocate what I’d call, in response, the green pill: to acknowledge reality as it is, with no self-deception, but then to do the right thing, rather than the easy thing that everyone else seems to be doing, anyway. If the world’s dirty, be clean. Plant a fucking tree. This is the approach of the ancient cynic or stoic, whose wisdom has not decayed with age. Don’t support women’s rights because you think it’ll get you laid (it might, or it might not); support women’s rights because it’s the moral thing to do.

In our might-makes-right, corporatized society, the green pill isn’t fashionable. We see bad guys winning, all over the place. Our president bragged about (and, almost certainly, has actually committed) sexual assault, and still got elected. Am I really going to make a case for virtue– as an end in itself, with no expectation of reward– when corporate capitalism reigns and so many bad people seem to be winning? Yes.

No one tells young men this, but being male sucks from ages 18 to 23 for most people. Men at that age have a higher sex drive than women their age; moreover, the women in that cohort are attractive to the full age range of men, whereas the men can barely attract their own counterparts. Women have all the options; men have lots of competition. It gets better for men as the years pass, but the male initiation period has never been a positive experience. I went through it as well. Most men do.

Here’s what I’ve learned. The sex I didn’t have at 17, 18, 19, 20, 21… never mattered. In the long run, whether one loses virginity at 16 versus 26 is unimportant. That said, what I do remember, and not in a good way, is how often I acted like an asshole. I’d be lying if I said I was fully over that.

The results of actions fade in importance over time; the actions themselves stick around in one’s mind, and a person has to live with them forever.

Perhaps that’s the strongest case I can make for virtue on its own. I don’t know that I need a stronger one.

 

The blue pill is the chauvalry of disingenuous “nice guys” in a patriarchal corporate system where sex is a reward for good male behavior, rather than an immensely pleasurable expression of deep love. Blue pillers are just-worlders who go to work and believe their companies are “meritocracies”. Red pillers are men who’ve internalized how capitalism’s alpha males actually behave– they’ll lie, cheat, and steal to glamorize themselves– and apply it to sexuality. The black pill prescribes aggressive nihilism– humanity is hopeless; best to kill everyone. None of these are virtuous attitudes, and they’re all expressions of our right-wing corporate society. I advocate something else entirely: let’s be honest with ourselves about the world’s corruption– and fix it.

Why the Black Pill’s So Relevant

Red- and black-pill thinking, two branches of the male grievance culture, have always existed, but the red pill was always much more prevalent. People would rather believe they can manipulate their way to a better life than convince themselves that they’re terminally fucked (or the lack thereof). When I was taken in by the male grievance cult, wannabe pickup artists were common and no one identified as a permanent incel, or discussed murdering innocent people (as Rodger, Cruz, and Minassian have) to prove one’s point. So what changed? I’m going to borrow a dumb 1990s catchphrase: “It’s the Economy, Stupid.”

Back in my day, when we walked uphill (both ways) nine miles through the snow (now get off my lawn), we had something that doesn’t exist anymore: a functioning labor market. As I said, it has always been difficult to be an 18- to 23-year-old man– the supply/demand imbalances leave a lot of young men single. But, there were other things to do than stew about not getting laid. Let me explain how the whole “work” thing used to work.

In the 1970s, a college degree and a car were all it took to talk your way on to a job– if you were 27 or older, a management job; if you were 32 or older, an executive job– anywhere in the country. So, what did you do, if you were a decent 22-year-old who couldn’t get laid? You worked, because these things called “careers” existed back then, and because if you actually showed up on time at your job and were still working at 1:30 in the afternoon, you’d be flying business-class by age 35 and running a company by your mid-40s. If you were what is now called an incel– back then, we just called it “not getting laid”– you could invest time and energy at work and distract yourself from the sex you weren’t having. That economic world doesn’t exist anymore; the 1% took it away from us. Thanks to offshoring and automation, the only thing left to do in the corporate world, except for those born into elite connections, is subordinate make-work with no career value, creative fulfillment, or redeeming social value.

It shouldn’t be surprising that our society would be throat-deep in a dangerous masculine crisis. Work sucks for women as much as it does for men– actually, it sucks even more for them– but men have been told for hundreds of years to identify themselves with their paid work, and that success in business is the ultimate expression of masculinity. With corporate consolidation and an imploded job market– one that hasn’t recovered from any of the recessions we’ve had since 1973, although the stock market has and property prices are sky-high– this setup has produced an untenable situation where men are told that masculinity is to be found by… subordinating to other men. Of course the system would collapse. Ex falso quodlibet.

When I was an unsuccessful 22-year-old man, in 2005, I said a lot of shit that I now regret, but I held out hope (and was right) that my lack of social and sexual success was transient. The difference, in the post-2008 world, where housing is unaffordable in places where there are jobs, and where third-world corruption has become the norm in the “lean” private sector company, is that “incel” now stands as a permanent identity. Not knowing it, young people have conflated their permanent (unless we overthrow corporate capitalism) economic misery with their transient socio-sexual difficulties and become hopeless black-pillers. I don’t blame this on the women who are exercising their right to turn down men; I blame this on the 1% for stealing everything– for wrecking our economy and culture, and for perpetuating the simmering (blue pill, mostly) misogyny that makes these rages possible. The Elliot Rodgers of the world are like Japan’s hikikomori, but with the misogyny of an emerging fascist movement, and (scariest of all) the guns of America.

So what are we going to do about it? We must overthrow corporate capitalism– a might-makes-right system of cancerous masculinity– before its corruption spreads further and the masculine crisis becomes an all-out war. We need to overthrow the red-pill corporate executives– the ones who perpetuate corruption for personal benefit– and the blue-pill establishment enablers, before this black-pill psychosis can fester and the shit really hits the fan.

In the Meantime

Angry young men such as today’s incels do not tend to believe facts put in front of them. If they listen to reason, they’ll still find their way to extreme interpretations. It’s very hard to change a mind in the moment. That doesn’t mean one shouldn’t blast bad ideas and plant good ones, even knowing they’ll be rejected at the time; people will eventually come around.

I’ve argued for feminism with incels. What these men don’t realize yet (and won’t be convinced any time soon, but there’s hope) is that feminism is actually good for the decent man, the “beta male” who’d rather play with his own kids than do 3 extra hours of work for a company that poisons someone else’s kids in Brazil. When I explain why feminism’s good for average beta males like me, the reaction tends to be either “well, obviously” (from the leftist progressives) or shock (from the zero-sum-thinking incels). Either way, hear me out.

The incels have mythologized their high-school bullies, with a bit of male porn actor thrown in, as “Chad Thundercock”, the priapic Norse god of white male douchebaggery. The rest of us just call this loathsome creature “frat boy” or “bro”. The thing is, Chad is the guy who wins out under misogynistic structures like corporate capitalism. In societies where women’s fathers and economic forces parcel women out as a sexual commodity, Chads rule. Meanwhile, the more feminist a society is, the better results will come to the patient, caring, and less-macho men that women are more likely to choose– because they’re better fathers and far superior lovers than the Chads.

Incels tend to come, I’d guess, from the earnest lower-middle-class– the ones who once believed in all-American mythologies about corporate meritocracy, and who bought into the blue pill worldview– so they have a sense that the men women choose when directed by economic forces (or, better yet, economic necessity) are somehow superior to the brutish or monstrous men– there may be racial attitudes here– that women will choose if left to their own devices. There’s no evidence that bears that out, though.

Some women, of course, choose skid-row rotten men, just as vice versa. To the extent that there is a “Chad pattern” in some women, I think it comes from our puritanical attitudes about sex (paradoxically?) more than anything else. Tell young girls that sex is a disgusting thing, and they’ll do it with disgusting men: the abusive frat boys of the world.

There is one painful but beneficial result of feminism for men: we get rejected more. Women have more options in life. It may be counterintuitive, but all this rejection is a good thing. In the long term, relationships are symmetric: a marriage that’s good for one person is good for the other. Women are choosier not because they’re mean, but because they’re quicker to perceive mutual non-matches, while men are prone to “we can make it work” quixotry. Female choosiness, in the end, saves us time and emotional energy. A polite, respectful turn-down is a favor.

Why does rejection hurt so much, then? My guess is that it has to do with our evolutionary environment: small tribes of about 100 people. In such a small world, being rejected means being humiliated in front of one’s whole social world. Rejection and breakups feel like major, life-ending events because, twenty thousand years ago, they were pretty close to it.

In a world with 4 billion sexually active adults, though, rejection is harmless. You can get rejected 300 times and nothing bad happens. Our society is capable of discovering exceptional matches that would never have been found decades ago– I grew up in Appalachia, my wife grew up in the Philippines, and we met in New York– but the price of this is an ultra-high rejection rate.

Under the old patriarchal systems, women were pushed into marriage because they needed economic security, and as a way for their families to improve their social standing or achieve political goals. Love was optional, and long-term marital love seems to have been the exception rather than the rule. True, middle-class (and above) men were assured wives and the loss of their virginity, but “dead bedrooms” were pretty common after the baby-making stage ended.

Although lasting romantic love was the hoped-for marital ideal, it didn’t happen often in the old world that incels seem to want to return to. Old-style family sitcoms where the goofy dad always wants (and almost never gets) sex once seemed that’s-life funny and now, from a 2018 perspective, seem pathetic. Perhaps this is a cultural change more than one in reality, but when I look at the Everybody Loves Raymond marriage and its lost sexual chemistry, it’s not what I think people want, or will settle for. Feminism is forcing men to up their play if they want to get married, and almost everyone seems to be winning.

In sum on this matter, pairings that can generate long-term romantic love– the kind where a couple still want to jump each other’s bones after 10 years of marriage– are rare, and female choice isn’t 100% accurate in finding them, but it seems to be doing a better job than corporate patriarchy (“take it from your father: this boy’ll be a good provider.”)

It seems counterintuitive that heterosexual female choosiness– a source of extreme frustration in the short term– would lead to benefits for heterosexual men. If we accept, though, that relational health is, in the long term, mostly symmetric, it shouldn’t surprise us that much.

This would require another essay to explore, but I call it the Control Paradox. Relinquishing control can lead to better results, especially when that control is illusory. What’s magical about mature female sexuality, from a male perspective, is that it can’t be controlled. Nothing’s better than a woman going after what she wants.

Our economic system is brittle– corporate capitalism is breaking down already, and will probably shatter in the next fifty years– in the face of control paradoxes. See, we work in firms built on zero-sum thinking. The Graeberian bullshit jobs exist largely because corporate executives believe their subordinates’ happiness equals their misery. The more (stereotypically, at least) feminine open allocation approach to corporate governance results in more innovation than toxically masculine zero-sum command-and-control regimes… but the working world hasn’t learned that, and won’t until corporate capitalism has been overthrown.

It’s extremely counterintuitive that a system where men get rejected far more than ever before would be, in truth, better than all those other mating regimes we’ve discarded. The teens and early 20s are absolutely brutal. But these things are becoming rarer: divorce, loveless marriages, infidelity, and dead bedrooms. It’s sometimes amazing to me that, after all the rejection and false starts and misbehavior (on male and female sides) that I endured, I ended up getting what I wanted all along: marriage to the best woman, at least for me, I’ve ever met.

In Sum

When the blue-pill, politically correct lies fall away, it’s a vulnerable moment for any man– much like what we are all going through as corporate capitalism’s total failure reaches a point we can’t ignore. Red-pill contempt and black-pill despair can set in, but I’d like to make the case for something else: to acknowledge the world’s corruption, without self-deception or undue negative emotion (for emotions themselves are useful data, but not recreation). I’d like to argue for the green pill– to do, in the face of corruption, the one thing that’s truly rebellious: to be better than that.