# Computing From the Middle Out, Part 1: Why Turing Machines Matter

While you’re here: my novel, Farisa’s Crossing, will come out on April 26, 2019.

Computers have an undeserved reputation for being unpredictable, complicated beasts. I’m going to argue that, to the contrary, they’re quite simple at their core. In order to establish this, I’ll work through some models of computation, as well as some programming models that correspond well to real-world computation (with indications of where they don’t).

There’s a lot of complexity in real-world computing. Some of it’s desirable and some of it’s not. For example, today’s cell phones, laptops, and servers use electronic circuitry far more complex than, say, a Turing machine. That isn’t a problem, because the payoff is immense and the cost to the user is minimal. If the complicated adder or multiplier is a thousand times faster, most people are happy to have it that way. So, even though real-world integrated circuits are complicated in ways we won’t even begin to discuss here, it’s not a problem. Doing simple things, better, is a worthy expense of complexity.

On the other hand, bloated buggy software ruins lives– this problem is largely preventable, but unlikely to improve because of conditions in the software industry (e.g., a culture that encourages piss-poor management) that are beyond the scope of the analysis here. If ever there were a machine for producing unusable crapware, it would be the American corporation. But again, that’s a topic for another time.

I’d prefer to motivate the claim that computers can be simple. They can be.

What Is Computation?

Computability theory is quite deep, but there’s a relatively simple, rule-based definition of what it means for a (partial) function to be mathematically computable. Our domain here is functions N^n → N; that is, from lists of natural numbers to natural numbers.

• The n-ary zero functions z1(x) = 0, z2(x, y) = 0, … , are computable for all n.
• The successor function s(x) = x + 1 is computable.
• For any k ≤ n, the projection function pn,k(x1, … , xn) = xk is computable.
• p1,1(x) = x, the identity function, and p2,1(x, y) = x, p2,2(x, y) = y are the most used examples.
• Composition: compositions of computable functions are computable.
• For example, h(x, y) = f(g1(x, y), g2(x, y), g3(x, y), g4(x, y)) is computable if f and all the gi are.
• This means that a computable function can use as many computable functions as it wants as subroutines.
• Primitive Recursion: if g and h are computable, then so is f, defined like so:
• f(0, x1, … , xn) = g(x1, … , xn), and
• f(n + 1, x1, … , xn) = h(n, f(n, x1, … , xn), x1, … , xn);
• this is the recursive analogue of a `for` loop; the number of calls is bounded.
• Search (a.k.a. General Recursion): if f is computable, then so is mf, defined as:
• mf(x1, … , xn) = k where k is the least integer such that f(k, x1, … , xn) = 0.
• We say mf(x1, … , xn) ↑ (pronounced “diverges”) if there is no such k. The function is not defined at that point.
• this is analogous to a `while` loop. If the function diverges, an implementation would not terminate– unless the programmer could predict the divergence in advance, but this is not always possible.

Functions that don’t use search are called primitive recursive. Those are total– they have values for all inputs, and more importantly, these values can be computed in a finite number of steps. If one uses general recursion, though, all bets are off. The function may not be defined for some inputs.
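
To make these rules concrete, here’s a sketch (my own, not from any standard library) of the last two rules as higher-order Python functions– primitive recursion as a bounded loop, search as an unbounded one:

```python
def prim_rec(g, h):
    """Primitive recursion:
    f(0, *xs) = g(*xs); f(n + 1, *xs) = h(n, f(n, *xs), *xs)."""
    def f(n, *xs):
        acc = g(*xs)
        for i in range(n):        # bounded: exactly n iterations, so f is total
            acc = h(i, acc, *xs)
        return acc
    return f

def search(f):
    """General recursion (the mu operator): least k with f(k, *xs) == 0.
    Loops forever– diverges– if no such k exists."""
    def mf(*xs):
        k = 0
        while f(k, *xs) != 0:     # unbounded: may never terminate
            k += 1
        return k
    return mf
```

For instance, `prim_rec(lambda x: x, lambda n, acc, x: acc + 1)` is exactly the addition function defined below.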

For example, addition is primitive recursive. It’s defined like so:

add(0, x) = x

add(n + 1, x) = s(add(n, x))

In the language above, g(x) = x and h(n, a, x) = s(a).

Multiplication is a primitive recursion using addition rather than the successor function. One can also show that limited subtraction, sub(x, y) = max(x – y, 0), is primitive recursive.
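
A direct, if hypothetical, transliteration of these definitions into Python (the function names are mine):

```python
def s(x):
    """The successor function."""
    return x + 1

def add(n, x):
    """add(0, x) = x; add(n + 1, x) = s(add(n, x))."""
    return x if n == 0 else s(add(n - 1, x))

def mul(n, x):
    """mul(0, x) = 0; mul(n + 1, x) = add(x, mul(n, x))."""
    return 0 if n == 0 else add(x, mul(n - 1, x))

def sub(x, y):
    """Limited subtraction. (The honest construction iterates a
    predecessor function y times; max is a shortcut here.)"""
    return max(x - y, 0)
```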

Furthermore, any bounded search problem is primitive recursive. If you have an upper bound on how far you’re willing to search, you can use a primitive recursive function.

Sometimes, it’s a judgment call how one wants to implement it.

For example, the division function can be represented as:

div(n, d) is the first q such that q * d ≤ n < (q + 1) * d.

Perform an unbounded search for such a q and, when d = 0, this diverges. However, in this case we know when the function’s badly behaved and can rectify it:

idiv(n, d) is 1 + div(n, d) if d > 0, and 0 if d = 0.

It returns a positive integer on success– a successful return of 0 becomes a 1– and a 0 on failure. The enclosing routine can decide how to handle the error case.
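
In Python, the guarded version might look like this (a sketch; `div` really does loop forever when d = 0, so `idiv` checks first):

```python
def div(n, d):
    """Unbounded search for the quotient: the least q with
    q * d <= n < (q + 1) * d. Diverges (never returns) if d == 0."""
    q = 0
    while not (q * d <= n < (q + 1) * d):
        q += 1
    return q

def idiv(n, d):
    """Total wrapper: 1 + div(n, d) on success, 0 on failure."""
    return 1 + div(n, d) if d > 0 else 0
```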

Divisibility checks (nothing but 0 is divisible by 0) and primality are primitive recursive, and therefore total: computable within a finite number of steps. Most importantly, prime factorization is primitive recursive. This is something we’ll come back to.
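
Because every search here is bounded by the inputs, these checks can be written with `for` loops only– no `while`. A hypothetical Python sketch:

```python
def divides(d, n):
    """Bounded search for a witness q <= n with q * d == n.
    Note that divides(0, 0) is True and divides(0, n) is False for n > 0."""
    return any(q * d == n for q in range(n + 1))

def is_prime(n):
    """Total: every loop here runs at most n steps."""
    return n >= 2 and not any(divides(d, n) for d in range(2, n))
```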

Turing Machines

Most people have heard of Turing machines, but unless they have taken a course in graduate-level logic or the theory of computation, they’ve probably never worked with one– and may not know what it is.

They have the reputation of being complicated beasts. They’re brain-dead simple, actually. Doing anything with them, that’s the part that can be painful. The ones that we inspect and analyze as computers tend to have massive state spaces– which may or may not be a problem– while the most aggressively minimalistic ones– I won’t prove it, but there are machines with under 20 states and two symbols that can compute any function– tend to be inscrutable in practice.

Formally, an (n, s) Turing machine is a device that:

• recognizes a pre-programmed alphabet of n ≥ 2 symbols. That set could be {`0`, `1`}, or {`A`, `B`, `C`}, or the 100,000 most common English language words. One of these symbols is blank.
• is in one of s distinct internal states, including one called `Start` and one called `Halt`. This set must be finite and is pre-programmed into the machine.
• has n * (s – 1) pre-programmed rules, written as (sold, ain, snew, aout, ±1), one for each (sold, ain) pair except for those where sold = `Halt`.
• reads and writes to a tape– each cell holding exactly one symbol– that never runs out in either direction.

And here is how it works:

• Input: a finite number of cells may be set to any non-blank values. (The rest of the tape is all blank, in both directions.)
• Initialization: the machine is put in state `Start`.
• Runtime: Over and over, the machine does the same thing:
• read the symbol (ain) at the cell where the machine is, and consult its internal state (sold);
• fetch the matching rule (sold, ain, snew, aout, ±1);
• write aout to the tape, and transition to state snew;
• move right if the matching rule’s last column had a +1; left, if -1;
• repeat this cycle unless snew is `Halt`, in which case the machine terminates. Whatever is on the tape is the program’s output.
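
The cycle above is mechanical enough to fit in a few lines. Here’s a hypothetical Python sketch of it (the `max_steps` cutoff is for demonstration only; a true Turing machine has none):

```python
from collections import defaultdict

def run(rules, tape, blank="0", start="Start", halt="Halt", max_steps=None):
    """Repeat the read-fetch-write-transition-move cycle until Halt.
    rules maps (state, symbol) -> (new_state, out_symbol, move),
    with move in {+1, -1}; tape maps cell positions to symbols."""
    cells = defaultdict(lambda: blank, tape)
    state, pos, steps = start, 0, 0
    while state != halt:
        # read the current symbol, fetch the rule, then write,
        # transition, and move, all in one tuple assignment
        state, cells[pos], move = rules[(state, cells[pos])]
        pos += move
        steps += 1
        if max_steps is not None and steps > max_steps:
            raise RuntimeError("cutoff reached; the machine may never halt")
    return dict(cells)
```

As a usage example, a two-rule machine that appends a `1` to a block of `1`s: `run({("Start", "1"): ("Start", "1", +1), ("Start", "0"): ("Halt", "1", +1)}, {0: "1", 1: "1"})`.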

What happens if the Turing machine never goes into the `Halt` state? It runs forever. This is generally considered undesirable. The computation doesn’t complete.

This is probably the biggest disconnect between Turing machines and the computers we actually use. Turing machines are supposed to halt. If one doesn’t, that’s considered pathological; its work isn’t done and as far as we’re concerned, it hasn’t computed anything. Meanwhile, the cell phones and laptops we use on a daily basis run in an infinite loop and that’s what we expect them to do. We expect them to be available (and I’ll formalize that much later, but not in this installment) but they never halt.

A Turing machine is all-or-nothing. Its job is to compute one function and then indicate that it’s done by going into the `Halt` state. By contrast, a real-world computer, at a minimum, has to respond to real-world inputs like the user’s keystrokes, its own temperature sensors (so it doesn’t run too hot), and power supply disruptions. Later on, I’ll show how to close this gap.

What’s neat about Turing machines is that, in principle, one could have been built in the late 19th century. (My work on Farisa has had me on a steampunk kick.) We were close: we had programmable looms, player pianos, and electricity. We had record players and magnetic storage. Today, a Turing machine good enough to emulate a 1980s video game console could be built with about \$100 of commodity electronics. Rather than get into the details– it’s not my expertise– I’ll point the reader to Ben Eater’s excellent series of videos on the 8-bit computer he built on a breadboard. As he’s building an actual circuit, his model gives a much better representation of what computers actually do, in the physical world, than do Turing machines.

Anyway, an automaton is only as good as its ruleset. Most rulesets will have the machine pinging about at random– sound and fury, signifying nothing. A few, though, do useful things. A Turing machine can add two numbers supplied on the tape, whether they’re specified in binary or decimal. These machines can multiply, or check regular expressions, or… well, literally anything computable. In fact, that’s one definition of what it means for something to be computable– such definitions are legion, and they’re all equivalent.

It’s counterintuitive to most people, but the slowest computers from the 1960s can do anything a modern machine can– they would merely take longer. In terms of what computers can do, nothing has changed. If we allow computers to generate random bits, even quantum computing does not add capabilities– quantum computers are merely faster.

From a practical perspective, computers and programming languages are not remotely equivalent. In theory, they are.

Now, Turing machines would be nearly useless as a real-world concept if, say, they required 2^(2^10,000) states in order to do useful computation. It would be annoying if there were computations that couldn’t be done with fewer states, because we have no way to store that much information. In fact, one can find fairly small n and s, and specific rulesets, that can emulate any Turing machine (any size, any ruleset) on any input at all. These are called universal Turing machines. I’m not going to go through the details of building one and proving it universal, but I’ll walk through the basic concepts, along two different paths.

We are not concerned with how efficiently the machines run– as long as they terminate, except on problems where no machine terminates. Real world computers are sufficiently different from Turing machines that the (heavy) performance implications here are irrelevant.

• First, a Turing machine’s read-fetch-write-transition-move cycle is mechanical. We can implement it over all (n, s) Turing machines with a machine using s·f(s) states, where f is a slow-growing function. We include the ruleset we want as an input– a lookup table– and our machine implements the read-fetch-write-transition-move cycle against that table instead.
• Operating on k-grams of symbols allows us to use an n-symbol Turing machine to emulate an n^k-symbol machine. We can in practice do any of this work with a 2-symbol machine.
• An (n, s) Turing machine can emulate a Turing machine with a larger state space (say, s^2 states) by writing state information to the tape. The details of this are ugly, and the machine may take much longer, but it will emulate the more powerful machine– by which, I mean that it will come to the same conclusions and that it will halt if the emulated machine does.

This approach isn’t the most attractive, and it has a lot of technical details that I’m handwaving away, but using those techniques, we can emulate, say, all the (n^2, s^2) Turing machines using an (n·f(n, s), s·g(n, s)) machine, where f and g are asymptotically sub-linear (I believe, logarithmic) in their inputs. The result is that, for sufficiently large n and s, machines can be built that emulate all machines at some larger size– and, of course, a machine at that size can emulate an even larger one. The cost in efficiency may be extreme– one could be emulating the emulation of another emulator emulating another emulator… ad nauseam– but we don’t care about speed.

If that approach is unappealing, here’s a different one. It uses the symbols: {`0`, `1`, `Z`, `R`, `E`, `+`, `<`, `_`, `~`, `[`, `]`, and `?`}– in two colors: black and red; `1`, `Z`, `E`, and `R` will never be red. This gives us 20 symbols. The blank symbol is the black `0`.

Here’s a series of steps that, if one goes into enough detail (I’ll confess that I haven’t, and the machines involved are likely wholly impractical) can be used to construct a universal Turing machine.

Step 1: establish that copying and equality checking on strings of arbitrary length can be done by a specific, small Turing machine.

Step 2: use a symbol `Z` and put it between two regions of tape at (without loss of generality) tape position 0. Use it nowhere else. Use a symbol `R` to separate the right side of the tape into registers. These will hold numbers, e.g. `R 1 0 1 R 1 0 0 0 1 R 0 R` means that 5, 17, and 0 are in the registers. Resizing the registers is tedious (everything to the right must be resized, too) but it’s relatively straightforward for a Turing machine to do. There will be an `E` at the rightward edge of the data.

Step 3: The right side of the `Z` stores a stack of nonnegative integers: `1`s and `0`s (representing binary numbers) separated by register symbol `R`. The left side stores code, which consists of the symbols {`0`, `+`, `<`, `_`, `~`, `[`, `]`, `?`}. Only code symbols can be red.

• A possible tape state is: `E0+++++0+0+?0+++Z 101 R 1 R 0 R 1 E`. (Spaces added for convenience.) The left region is code in a language (to be defined); the red zero indicates where in execution the program is; on the stack we have [5, 1, 0, 1] with TOS being the righthand 1.

Step 4: A Turing machine with a finite number of states can be an interpreter for StackMan, which is the following programming language:

• At initialization, the stack is empty. The stack will only ever consist of nonnegative integers. We’ll write stack left-to-right with the top-of-stack (TOS) at the right.
• `0` (“zero”) is an instruction (not a value!) that puts a 0 on top of the stack, e.g. `... X -> ... X 0`.
• `+` (“plus”) increments TOS, e.g. `... X 5 -> ... X 6`.
• `_` (“drop”) pops TOS, e.g. `... X Y -> ... X`.
• `~` (“dupe”) duplicates TOS, e.g. `... X -> ... X X`.
• `<` (“rotate”) pops TOS, calls it n, and then rotates the top n elements left. This may be the most tedious to implement. Examples:
• `... X Y 2 -> ... Y X`
• `... X Y Z 3 -> ... Y Z X`
• `... X Y Z W 4 -> ... Y Z W X`
• `?` (“test”): if TOS is nonzero, decrements it and pushes a `1` on the stack; otherwise, it pushes a zero, e.g.:
• `... 6 -> ... 5 1`.
• `... 0 -> ... 0 0`.
• This is a concatenative language, so instructions are executed in sequence one after the other. For example, `+++` adds 3 to TOS, `0+++0+++` pushes two threes on it, `_0` drops TOS and replaces it with a zero (constant function), and `?_?_?_` subtracts 3 from TOS (leaving a 0 if TOS < 3).
• Code inside `[` `]` brackets is executed repeatedly while TOS is nonzero and skipped over once TOS is zero or if the stack is empty.
• For example, `0+[]` will loop forever because TOS is always 1.
• The code `[?_0++<+0++<]_` has behavior `... x y -> ... x + y`. It’s an adder. For example, if the stack’s state is `... 6 2`, it does the following:
• The code in the brackets is executed. `?` tests the `2`, so we have `6 1 1`, and we immediately drop the `1`. The `0++<` (“fish”) is a swap, so we have `1 6`, and the `+` gives us `1 7`. We do another `0++<` and are back at `7 1`.
• The next cycle, we end up at `8 0`; after that, TOS is zero so we exit our loop. With a `_`, we are left with `... 8`.
• Any instruction demanding more elements than are on the stack does nothing.
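
An interpreter for exactly these semantics is short in a high-level language. Here’s a hypothetical Python sketch (it interprets the code directly, ignoring the tape-level encoding a Turing machine would need):

```python
def run_stackman(code, stack=None):
    """Interpret a StackMan program; underflow makes an instruction a no-op."""
    st = list(stack or [])
    jump, opens = {}, []              # precompute matching brackets
    for i, c in enumerate(code):
        if c == "[":
            opens.append(i)
        elif c == "]":
            j = opens.pop()
            jump[i], jump[j] = j, i
    pc = 0
    while pc < len(code):
        c = code[pc]
        if c == "0":
            st.append(0)              # zero: push a 0
        elif c == "+" and st:
            st[-1] += 1               # plus: increment TOS
        elif c == "_" and st:
            st.pop()                  # drop
        elif c == "~" and st:
            st.append(st[-1])         # dupe
        elif c == "<" and st:
            n = st.pop()              # rotate the top n elements left
            if 1 < n <= len(st):
                st[-n:] = st[-n + 1:] + [st[-n]]
        elif c == "?" and st:
            if st[-1] > 0:            # test: decrement and push a 1...
                st[-1] -= 1
                st.append(1)
            else:
                st.append(0)          # ...or push a 0 if TOS was 0
        elif c == "[" and (not st or st[-1] == 0):
            pc = jump[pc]             # skip the loop body
        elif c == "]" and st and st[-1] != 0:
            pc = jump[pc]             # loop again
        pc += 1
    return st
```

Running the adder from above, `run_stackman("[?_0++<+0++<]_", [6, 2])`, leaves the stack at `[8]`.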

The interpreter for this language can be built on a Turing machine using a finite number of states. To keep track of the code pointer (i.e., one’s place in the stored program) while operating on the stack, color a symbol red. Make sure to color it black when you have moved on.

Step 5: show that any primitive recursive function N^n → N can be computed as a fragment of StackMan, taking the arguments from the stack; e.g.,

• f(x, y, z) = x + y * z could be implemented as a fragment with behavior `... x y z -> ... (x + y * z)`.

This isn’t hard. The zero functions and successor come for free (`0`, `+`) and the projection functions (data movement) can be built using `_`, `~`, and `<`. Composition is merely concatenation– we get that for free by nature of the language. We can get primitive recursion from `?` and principled use of `[]` blocks, and general recursion from arbitrary `[]` blocks.

Thus, a StackMan interpreter is a Turing machine that can compute any primitive recursive function.

Next, show that any computable function N^n → N can be computed as a fragment of StackMan that will terminate if the function is defined. (It may loop indefinitely where it is not.)

Step 6: since prime factorization is primitive recursive, we can go from lists of nonnegative integers to a single nonnegative integer, using multiplication (one way) and prime factorization (the other way): e.g. (1, 2, 0, 1) ↔ 2^1 * 3^2 * 5^0 * 7^1 = 126. This means that we can coalesce a whole list of numbers into a single number.
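
The encoding is the classic Gödel-style prime-power trick. A hypothetical Python sketch:

```python
def primes(n):
    """The first n primes, by trial division (slow, but total)."""
    out, k = [], 2
    while len(out) < n:
        if all(k % p for p in out):   # no smaller prime divides k
            out.append(k)
        k += 1
    return out

def encode(xs):
    """(1, 2, 0, 1) -> 2**1 * 3**2 * 5**0 * 7**1 = 126."""
    result = 1
    for p, x in zip(primes(len(xs)), xs):
        result *= p ** x
    return result

def decode(n, length):
    """Recover the exponents of the first `length` primes."""
    out = []
    for p in primes(length):
        e = 0
        while n % p == 0:
            n //= p
            e += 1
        out.append(e)
    return out
```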

Step 7: show that any (ruleset, state, tape) configuration can be encoded as a single integer. Then show that the Turing step (read-fetch-write-transition-move) and the halting check are both primitive recursive. These capabilities can be encoded as StackMan routines. (They’ll be obnoxiously inefficient but, again, we don’t care about speed here.)

Step 8: then, a Turing machine can be built with a finite number of states that:

• takes a Turing machine ruleset, tape, and state configuration and translates it into a StackMan program that repeatedly checks whether the machine has halted and, if not, computes the next step. The read-fetch-write-transition-move cycle will be performed in bounded time. The only source of unbounded looping is that the emulated machine may not halt.
• and, therefore, can write and run a StackMan program that will halt if and only if the emulated configuration also halts.

Neither of these approaches leads to a practical universal Turing machine. We don’t actually want to be doing number theory one increment (`+`, in StackMan) at a time. Though StackMan can perform sufficient number theory to emulate any machine or run any program– it is, after all, Turing complete– it is unlikely that the requisite programs would complete in a human life. But, in principle, this shows one way to construct a Turing machine that is provably universal.

Human Computation

This installment is part of what was a larger work. I’ve decided to put it out in pieces. I titled it, “Why Turing Machines Matter”, but I had to start with a bunch of stuff that most people would think doesn’t matter– a stack-based esoteric language, some number theory review, et cetera. I haven’t yet motivated that this concept actually does matter. So, let me get on that, just briefly.

Mathematicians and logicians like Turing machines because they’re one of the simplest representations of all computers, and the state space and alphabet size don’t need to be unusually large to get a machine that can compute anything– although it might be slow. Alan Turing’s construction of the first universal Turing machine led to John von Neumann’s architecture for the first actual computers.

Is it reasonable to assume that Turing machines perform all computations? Well, that’s one way that computability is defined, but it’s a bit cheap to fall back on a definition. It’s more accurate to look at the shortcomings of Turing machines and decide whether it’s reasonable to believe a computer can be built that overcomes them.

For example, some electronic devices are analog, and Turing machines don’t allow real-numbered inputs. Everything they do is in a finite world. But, in practice, machines can only differentiate a finite number of different states. There’s no such thing as a zero error bar. Not only that, but quantum mechanics suggests that this will always be the case. For example, there are an infinite number of colors in theory, but humans can only differentiate a few million under best-case circumstances, and we can only reliably name about a hundred. It’s the same for machines: measurements have error. Of course, an infinite state space isn’t allowable either: that would be analogous to infinite RAM.

So, those shortcomings of Turing machines apply to all computers that we know– including (in a different way) the quantum computers humans know how to build.

Turing machines, as theoretical objects, can’t do I/O. The input exists all at once on the tape, and output is produced– and until that output occurs, no computation has been completed. One alteration to account for this is to allow the Turing Machine an input register that other agents (e.g., keyboards, temperature sensors, the camera) can write to. When the computer is in a `Ready` state, it scans for input and reacts appropriately. If the machine reaches `Ready` within a finite time interval, that is analogous to successfully halting– the software itself may be broken, but the machine is doing its job.

In truth, modern computers are more accurately modeled as systems of interacting Turing-like machines than single machines– especially with all the multitasking they have to do to support users’ demands.

There is one thing Turing machines don’t do that we take for granted, although it’s a bit of a philosophical mess: random number generation. Turing machines don’t model it: everything they do is deterministic, and “random” is not a computable function (or a function at all). Real computers most often use pseudorandom number generators (PRNGs)– which are predictably (but ideally without pattern) “random”– and Turing machines can implement any of those. Truly random? Well, we don’t fully know what that is. We can get “random enough” with a PRNG or from some input that we expect to be uncorrelated to anything we care about (e.g. atmospheric noise, radioactive decay).
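
For illustration, here’s one of the oldest PRNG designs, a linear congruential generator (the constants are a commonly used pairing from Numerical Recipes). It is entirely deterministic– seed it the same way twice and it produces the same stream– which is exactly why a Turing machine can implement it:

```python
def lcg(seed, a=1664525, c=1013904223, m=2**32):
    """Linear congruential generator: x -> (a * x + c) mod m.
    Deterministic, hence computable; 'random' only to an observer
    who doesn't know the seed."""
    x = seed
    while True:
        x = (a * x + c) % m
        yield x
```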

Turing machines give a poor model of performance as described here. To access data at cell 5,305, from cell 0, the machine has to go through every cell in between. That’s O(N) memory access, which is terrible. Luckily, real computers have O(1) memory access, right? That’s why it’s called random access memory, eh? Well, not quite. Caching is too much of a beast for me to take on here, but I would argue this far: a Turing machine with a 3-dimensional tape– I haven’t gotten into this, but a Turing machine can have any dimensionality and be computationally equivalent– is a more faithful model for performance. Why? Well, our best case for random access is O(N^(1/3)). We can call random access into a finite machine O(1), but that’s moving the goalposts. Asymptotic behavior is only about the infinite, and the real world is constrained by the speed of light. If we have a robot moving around a 3-dimensional cubic lattice where each cell is 100 microns on a side (no diagonal movement) and we want each round trip to complete in one nanosecond (30 cm) then we are limited to 125 trillion cells. Going up to 1 quadrillion would double our latency. Of course, we’re ignoring the absurdity of a robot zipping around at relativistic speeds.

Happily, most computers don’t have the moving part of a robotic tape head (although a traditional hard drive may be analogous). Rather than sending the computation to the data (in the model of a classical Turing machine) they, instead, bring the data to the chip. Electrical signals travel faster than a mechanical robot, as on a literal Turing machine, ever could (without catastrophic heat dissipation). So, in this way, modern computers and Turing machines are quite different.

If anything, I’d make a different claim altogether. Turing machines aren’t a perfect model of what computers do– although they’re good enough to explain what computers can (and can’t) do. They are, perhaps surprisingly, a great representation of what we do when we compute.

Before “a computer” was a machine, it was a person whose job was to perform rote operations– addition, subtraction, multiplication, division, elementary functions, and moving data around– which is, as it were, all today’s computers really do as well. And how does a human compute, say, 157,393 * 648,203? Most of us would have to reach for paper– a two-dimensional Turing tape– and start going through rote operations. To transliterate schoolbook multiplication to be done by a Turing machine is tedious but not hard– there are a couple thousand states.

The plodding Turing machine isn’t “about” computers. It’s about us, moving around a sheet of paper with a pencil and eraser, as we do– at least, when we know we’re computing. Most of what we do, we don’t think of as computation at all. We’re not even aware of computation happening.

It’s an open question whether there’s a non-computational element to human experience. I tend to be unusual– by the standards of, say, Silicon Valley, I’m downright mystical– and I think that there is. I can’t prove it, though. No one can.

The difference between intuition and computation is that the latter happens by rote, from a precisely-understood, finitely-describable state, following a series of rules that require no judgment. Intuition can’t be checked; computation can.

Most mathematicians use informal proofs– verbal arguments that convince intelligent, skeptical people that a conclusion is valid. This is a social rather than algorithmic process, and it is not devoid of error. Informal proofs can be unrolled into formal proofs from ZFC, it is generally believed, but it would typically be impractical to check. An informal proof is an argument (using other informal proofs) that a formal proof exists, and although the informal proof is imperfect– of course, 100-percent perfection in computation is not physically possible, either– it usually gives more insight into the mathematical structure than a formal one would.

Do humans have non-computational capabilities or elements to our existence? I believe so. But, in terms of what we can communicate to each other with proof– that is, checkable computation– we are limited to finite strings of finite symbols, an agreed-upon initial state, and a finite set of rules. At least in this life, that’s the best we can prove.

Next Up

In the next installment, I’m going to show how to build a Turing machine that’s practical.

Aggressively minimal universal Turing machines– with, say, only 10 states and 5 symbols– tend to be next-to-impossible to understand. I’m going to work with a large-ish state space and alphabet: 512 symbols and 2^48 possible states (even though we’ll only use about a million). Those numbers sound beastly, and to implement the Turing machine as a lookup table would require 1,884,160 terabytes. At such a size, storing the entire ruleset is cost-prohibitive. Most rulesets for those parameters are patternless and unmanageable, but a ruleset that we’d actually want to use is likely to be highly patterned– allowing rules to be computed on the fly. In fact, that’s what we’ll have to do.
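
As a toy illustration of rules computed on the fly (this example is mine, not the machine built in the next installment): a ruleset over an astronomically large state space can still be a tiny function, provided it’s patterned.

```python
HALT = None  # sentinel for the Halt state

def rule(state, symbol):
    """Transition for a machine that writes `state` many 1s and halts.
    States are integers below 2**48; a lookup table over them could
    never be stored, but this function computes any entry on demand."""
    if state == 0:
        return (HALT, symbol, +1)    # done: leave the cell alone
    return (state - 1, "1", +1)      # write a 1, count down, move right
```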

In the second installment, we’ll build a Turing machine about as capable as a 1980s video game console (e.g. Atari, Nintendo) that’ll be much easier to program against. That’s up next.

# Don’t Be Like Ajay

There’s a lot of bad career advice out there, but the worst of it comes from people who’ve been successful at private-sector social climbing. Blind to their own privilege, and invested in the perverse mythology of corporate meritocracy, they are least equipped to perceive the truth– not to mention their lack of incentive to share it, on the off chance of discovering it. At the same time, these people can say anything and get it into print, so desperate are the rest of us, the proles, to hear the inside corporate secrets they purport to have.

There are no secrets. The corporate system is corrupt; it is not a conspiracy. It is exactly what it looks like; the powerful abuse the powerless, the rich get richer, and people who speak the truth about it are punished.

This pestilent article, “What College Grads Could Learn From My Former Intern“, comes from Zillow CEO, Spencer Rascoff. Now, I have no personal knowledge of the author, and I know even less about the “Ajay”– that may or may not be his real name; it doesn’t matter– so I’m going to stick to the merits of the article itself.

This I will say: venture-funded startup CEOs are the worst when it comes to self-deception and the profligate evangelization of nonsense.

Venture capital, at least in the technology industry, has become a mechanism for the replication of privilege. Well-connected families create the appearance of their progeny having built businesses from scratch when, in fact, they had all sorts of hidden advantages: tighter sales connections, fawning press coverage, and most importantly, the privilege not to worry about personal financial nonsense. (If their businesses tanked, they’d fail up into cushy executive jobs, often as venture capitalists.) It’s money laundering, plain and simple, and it’s not even well hidden since it’s technically not illegal.

The corporate system is a resource extraction culture, not unlike the ones in culturally impoverished, oil-rich societies that never needed to grow or innovate, because they could pump wealth out of the ground. In this case, though, the depleting resource is the good faith of the American middle class– an earnest belief in hard work, an affinity for technology, an acceptance of authority. The purpose of the ruse is to make it look like “this time it’s different” and that today’s elite, unlike the warlords and viscounts of the past, actually earned it.

Ajay, the protagonist of this second-rate Horatio Alger story, was a hard worker, eager to please, by the author’s description (emphasis mine):

Ajay did [difficult, unpleasant work] eagerly and with a smile; he worked incredibly hard and because of that, built a reputation for himself as someone who would pitch in to help with anything you asked and give it his best effort. People liked that.

I almost retched when I came upon “and with a smile”. Gross.

My thoughts, for the rising generation? Yes, work hard when it’s worth it to work hard. In fact, I would not try to give advice to the young about “work-life balance” or tell them that they should backpack around Australia for two years. It’s hard enough to achieve something significant during peace time; it’s much harder in 2018, when the rich have made it so much harder for anyone to get a chance. One cannot produce significant work in any field and also have the Instagram party life.

This said, there is difficult, unpleasant work worth doing; there are other tasks that are waste. If one has to do the job with a goddamn smile to get credit for it, then it’s almost certainly in the latter category.

Bosses might like, on a personal level, those who do unpleasant work with a smile. That doesn’t mean it leads to career success. It’s never good to be disliked by a manager, but bosses don’t get to promote everyone they like. If one is well-liked only because one has made it the path of least resistance to hand one unpleasant, career-incoherent work, then one is in a state sustained only by suffering– one that can almost never be turned into career advancement.

I’d also like to point out the author’s corporate weasel terminology. He says, “People liked that.” He liked it. There’s nothing sinister or surprising about a boss liking someone who’s preternaturally “easy to manage”. What’s galling is that, like most corporate bosses, he felt entitled to superpose his opinion over the entire company. It’s like when managers fire people but want to avoid taking responsibility, so they say “the team decided”.

I would guess that many people disliked Ajay. They saw what he was doing, and they cringed.

Of course, if Ajay succeeded, then their opinions didn’t matter; those people didn’t win. Still, it’s generally not useful to be disliked by one’s colleagues, and no one likes ass-kissers.

Ajay was also a serial networker, even all the way up to me, the CEO.

It’s funny how blind CEOs are to the politics that exist all around them. Since they get everything they want, there’s “no politics” in the organization. I suppose that’s true. The ultimate solution for someone who wishes to abolish politics is despotism– the degenerate but nominally apolitical arrangement. Most of us don’t want that, of course.

At any rate, if Ajay’s colleagues and managers tolerated “a serial networker”, it’s because they never saw him as a threat until he was fully ensconced in the managerial sun. Perhaps they were wrong and got blindsided. Like I said, I don’t know these people.

In general, though, the idea that a 22-year-old can try to rub elbows with a CEO, in a competitive environment like a startup or investment bank, and not get shanked by someone at or above his own level, is laughable. The people with the training to pull this off are those with inherited wealth and social resources, who have the least need for “internal networking” because of the extensive external networks their Daddies gave them.

When Ajay left to finish school and go on to various startups, he continued to build upon his brand and kept in touch—essentially marketing himself through his networks.

Emphases mine. There’s nothing incorrect about “essentially”; I just wanted to highlight an unnecessary adverb that really, totally, very badly, irritatingly weakened the prose.

I want to focus more on “build upon his brand”. (The author could have taken out “upon” and nothing would have been lost, but there’s actual incorrectness here, so I shan’t dwell on it.) See, what got me to write this response is not that the author’s giving misguided career advice. To be honest, I couldn’t give better advice that Forbes readers (if my estimation of its demographic is correct) would want to hear. I’d offer the truth– the game is rigged and most people will lose no matter what they do– and that’s not a charismatic message. No, I’m writing this response because the notion of “personal brand” is, to me, sickening.

I am not a brand. There are not five hundred of me stacked on a shelf in a grocery store, all in neat order like the rectangular boxes they put toothpaste tubes in. You, dear reader, are not a brand either. If you don’t cringe when you hear the words “personal brand”, then wake up.

People who use the term “personal brand” without dripping contempt are a special breed of douchebag. What’s amusing is that, while they identify “personal brand” with their desperate claims of uniqueness, these people are pretty much all the same.

It is bad advice. The truth is that people who focus on “building their brand” are assumed by their colleagues not to be doing the work, and they’re the first ones to get shanked when things get difficult. Perhaps Ajay succeeded. Perhaps he’s in a corporate jet, still smiling. Or perhaps he used his bonus on plastic surgery to fix that frozen-face smile after getting kicked out of a funeral for the goddamn last time.

You want to be remembered, whether you’re joining a company of five or 500, because remembered people get opportunities; anonymous ones don’t.

Remembered people get denied opportunities.

I’ve been involved with the antifascist cause since 2011. I’ve been turned down for jobs because of a somewhat public (and, in cases, adversarially publicized) track record of having the backbone to stand up for what’s right.

When it comes to social media, employment references, and personal uniqueness, we live in a 500-mile world. As in, follow any driver for 500 miles, and you’ll find a reason to write him up. It used to be difficult (literally, and in metaphor) and time-consuming to follow one person so far; technology and surveillance have made it easier.

I’ve been a hiring manager. I was always sympathetic to people with controversial online histories, for obvious reasons, but it’s the most common reason for denying a job to someone good enough to make it to the final round. No, these people aren’t alt-right psychopaths or proud, public drug users. Usually, they’re normal people who just happen to hold opinions. It’s assumed that they’ll get bored, or that they’ll react badly to mistakes made by those in authority. I did, on one occasion, cringe when a startup executive commented on a black woman’s natural hair being “political”.

The people who rise in the corporate system are boring. The best odds, in the corporate game, come from becoming the most bland, inoffensive, socially useless person one can. The problem with this truth– the reason it lacks business-magazine charisma– is that its odds are still poor. There are a lot of perfunctory losers out there, and they don’t all get executive jobs. Most of them get the same shitty treatment and outcomes as everyone else.

Not being boring, though, means that someone only has to follow you for 25 miles to find a reason to screw you over, damage your reputation, or deny you a job.

The optimal strategy is to be boring, to ingratiate oneself to powerful people over time, and to become intertwined enough with an organization’s powerful people that one is perceived to have undocumented leverage, and therefore gets what one wants out of the organization. Does this strategy work for everyone, all the time? No. The odds are depressing– most social climbers fail. But the odds are even worse for all the other strategies.

“How do you effectively brand yourself without being a peacock or a sycophant?” There are two ways: intentionally constructing it and being patient.

There are several ways to brand yourself. The classic approach is apply pressure with iron, heated in a fire. At high enough temperatures, permanent scars can be achieved in two or three seconds. Electric arcs are sometimes used for this process. An alternative to thermal burns is “cold branding”, often using liquid nitrogen. There seems to be no risk-free option, since branding literally is skin damage.

The same should be true for you: “Work with Sophia—she has a great attitude, big ideas, and is really hard-working.”

This guy must be getting paid per word. The Hemingway editor yells at me; I use adverbs. They’re not always unnecessary and replacing one with a clunky adverb-free adverbial phrase isn’t my way. Still, not only is the “really” unnecessary, but the author could have said “works hard”.

Whatever you decide to pursue as your personal brand, make sure it has a strong purpose behind it. If you do that, the rest is just packaging.

“Just packaging.” A product’s brand is literally that: packaging. Brand is the use of identical-looking boxes to convince buyers that a minimum standard of quality has been met. A Hershey Bar isn’t going to blow me away, but it’s perfectly adequate. I know that when I buy one, I’m unlikely to find a severed housefly wing in it.

If you want “perfectly adequate” on your tombstone, then consider being like Ajay– a brand. That said, you might want to pull that smile down. Do your job and do it well, of course, but if you smile so much, you’ll make everyone hate you. No one wants to compete for attention with an ass-kisser.

The Truth

As I said, I found the article harmless till I got to the “personal brand” bit.

There’s a lot of bad career advice out there from successful people (most of whom lucked into, or were born into, what they have). There’s also a lot of bad career advice from unsuccessful people who’ve found success selling the “inside secrets” of a corporate game they never actually won– now that is personal brand. The well-meaning self-deception will never go away, nor will the intentionally deceptive sleaze. There are many gamblers who “have a system” for beating roulette wheels and slot machines. Many books have been written on their systems. They do not work. The house wins in the long term. That’s why it’s the house.

The house is smart enough to keep people coming in. So it offers intermittent small wins, and a few big ones that generate publicity. It’s very hard for lottery winners to keep their windfalls private; lotteries discourage it. In these corrupt career lotteries, though, the system doesn’t have to make it hard for game winners to stay private. They shout in open air; they never shut up.

Is “be like Ajay” good advice? I don’t know, because I don’t know who Ajay is. Perhaps he was a ruthless political operator, fully aware of the resentments his supplicating smiles generated, and he used them for some sort of eleven-dimensional manifold socio-economic judo so brilliant it’s beyond my comprehension. Perhaps Ajay’s reading this blog post on Trump’s golden toilet, laughing at me. For the average schmuck, though, it’s not good advice. Of course, don’t be incompetent. Don’t be too grumpy. Be the “go to” guy or girl for work you genuinely enjoy and are good at. But, as a favor to yourself, don’t become a dumpster for career-incoherent work. Also, don’t smile all the time; it’s creepy.

I would love to advise authenticity, but that is also not a good approach for someone who needs to squeeze money out of the corporate system– and most people have no other choice.

There’s no path I can sell for the individual. The situation, in truth, is quite dire. In Boomer times, the corporate system seduced people with greed: $500 executive lunches, business-class travel all over the world, and seven-figure bonuses just for showing up. Today, it runs on fear. Fear’s cheap. Most Ajays won’t succeed; I can say that with confidence. I can also say that most anti-Ajays won’t succeed. Most people won’t succeed. The corporate game is rigged and anyone who says otherwise is trying to sell something toxic. I have no elixir of socioeconomic invulnerability; I’ll admit that. There’s a massive market for false hope. I will not sell into it. I am better than that.

For the world– if, sadly, not always the individual– it would be better if we woke up, tore down the corporate system brick-by-brick like the Bastille, and replaced it with a fairer, more sensible, pro-intellectual style of society worth caring about. If enough of us had the courage to live in truth, consequences be damned, the whole corporate edifice would crumble and we’d all be better off for it.

It’s not easy to live in truth. It’s downright hard to change a world whose most powerful people loathe any change at all. A first step, though, might be for us, unhindered by mercy, to mock anyone and everyone who says “personal brand” without vehement contempt for the concept. If we work together, we can make such people shut up. That would be a start.

# Why I’m not using a traditional publisher to launch Farisa’s Crossing.

As I write this sentence, it’s June 30, 2018– 300 days before I launch Farisa’s Crossing, on April 26, 2019.

A few months ago, I decided to self publish the book. I realized that I wasn’t even going to try traditional publishing. I have no doubts about my ability to get in. The process is harrowing and random, and even the best writers can expect to be shot down more than anyone likes to think about, but that wasn’t the problem I realized I had with it. In the end, it came down to time. It’s finite. I’m 35; I’ll be almost 36 in April 2019. Anyone who plans to explore all options before doing anything will end up achieving nothing. I had to knock some things off the calendar. I’m not going to skimp on the writing itself, nor research, nor editing. What can I cut that doesn’t affect the quality of the book? Writing a bunch of silly query letters landed high on that list.

Self publishing isn’t for every author or every book; nor is traditional publishing. Each has its advantages and drawbacks. There are books where I would eagerly use a traditional publisher, in spite of the drawbacks.

I thought it would be worthwhile to go through my reasoning here. Below is why I decided not to use traditional publishing for Farisa’s Crossing.

1. I don’t need it– Farisa is fiction.

A friend of mine writes biographies. Of all the genres, I think biography is the one best served by traditional publishing. Generalist copy editors aren’t equipped to copy edit biographies, which require extensive fact checking and removal of bias. Traditional publishing, in this genre, is invaluable.

Opinionated nonfiction, I would argue, is best served by traditional publishing– at least at book length and in print. Author credibility is huge, and can be manufactured if it isn’t there. Here, a self publisher is a guy with opinions; backed by a traditional publisher that’ll line up national TV spots, he’s a world-renowned expert. (Actual expertise optional.) Topical nonfiction– say, a book about a current election– has a short half-life; it will sell quickly or never. New York publishers have the resources to publicize it quickly; self publishers, in general, do not.

Memoir, if it’s at risk of being controversial, needs a traditional publisher. The author puts her personal reputation on the line. She needs a full-time publicist to fend off attacks.

Finally, we have business books. Those aren’t written to sell copies. It doesn’t hurt if they do, but few books make large sums of money, especially by business executives’ standards. Rather, these books are written to advance their authors’ careers. Middle-aged managers can reinvent themselves as “successful executives” and get better jobs– or, if they’re tired of being employees, lucrative speaking opportunities. Prestige, in that game, is everything. Substance, as anyone who’s read a business book or few knows, is not.

From the above, it should be obvious that I do not think traditional publishing is a dinosaur on the brink of its own extinction. Will its retreat from fiction continue? Yes. Is it dead? No. In fact, it’s exactly where it wants to be. It has decided that new author discovery, at least in fiction, costs too much. In the 1970s, fiction editors read manuscripts (“slush”). In the 1990s, they pushed that job to literary agents. In 2018, unpaid 19-year-old interns do it. A reader is a reader, so I don’t mean to disparage these interns as people; but I would always bet on a larger crowd when it comes to discovery. A hundred strangers versus one Ivy Leaguer? I’m betting on the hundred doing a better job. So long as self publishers can get their work read in the first place, the gatekeepers will be unnecessary.

Nonfiction demands external credibility, because it makes truth claims. I’m more inclined to trust an opinion essay from an expert writing acceptable prose than a stranger who writes beautifully.

As for fiction, the traditional publisher is far more optional. Farisa’s Crossing will be no better and no worse than the 200,000-or-so words I write because it will literally be the 200,000-or-so words I write.

Authors don’t need external credibility to write successful fiction. A good novelist disappears. The reader should get so involved in the story that she forgets that she’s reading one in the first place. The ability to induce this feeling is rare, quite difficult to teach, and does not come from advanced degrees, an author platform, or a reputation built by a Manhattan publicist. It comes from good writing.

2. Thinking about agents led to bad artistic decisions.

Self publishing is hard. Traditional publishing, if the stars align, is easy– seductively easy. Every single one of us humans is prone to the “Prince Charming” mentality, at least a little bit. We’d like the basics to be taken care of.

The traditional publishing fantasy goes like so: you get the first and best agent you query, he snaps together a lead-title deal, your book is reviewed by the New York Times, then the New Yorker offers to publish a chapter (and your publishing house doesn’t object) and it goes viral like that “Cat Person” story, so you sell 2 million copies and you’re set for life. You can literally think (and type) your way to the life you want– if you get the words right. That’s the promise; that’s the dream.

Of course, you can also win the lottery– if you get the numbers right.

The time cost of querying, one can put limits on. I’m 35 and I’m starting a series that I expect to take at least 10 years to finish. My health is better than it has been for a long time (ten years ago, I didn’t expect to be here today) but my life hasn’t been a no-damage speed run. If I thought the expense of 6 more months were worth it, I might put querying on the schedule. No harm in that.

We are all humans, though. When we see something that looks easy– a path of least resistance that seems to go where we are trying to get– we’re built to focus on it.

This becomes a problem if you start to think about agents rather than readers. This ruins a book. One of the major reasons for literary fiction’s decline, if not the main one, is that many of these stories are written to score agents. And not all agents are created equal. In any genre, there’ll be no more than a dozen “power agents” who can snap together serious deals with large print runs, demand aggressive marketing from major publishing houses, and sell screenplays. There’s a lot of terrible fiction written to appeal to the tastes of a small number of people.

An experiment has been performed several times in which an award-winning novel is queried to literary agents and shut out entirely. It’s not that agents are stupid or don’t understand good literature. (I think their tastes are as valid as anyone else’s.) To some degree, it’s just the sheer randomness of the process that produces this outcome. Being read at 9:00 am will produce different results from being read at 3:30 pm– or, worst of all, right before lunch. No one can control that.

Furthermore, great novels take risks. (So do many terrible novels.) Agents pick up heuristics that one must heed in order to get published. An exhaustive list of “agent rules” is not the purpose of this essay, but I’ll give a couple examples.

One of those agent rules is not to use exclamation points, ever. (Some agents allow 1 per 50,000 words.) Are they overused by mediocre writers? Yes. Can they be obnoxious? Of course! Used skillfully and in character, they’re quite useful. In dialogue, they differentiate hot anger from cold anger– there’s a difference between “Get out!” and “Get out.” Likewise, an author using deep POV in the voice of a seven-year-old girl might use exclamation points for weather (“It was hot!”) while a septuagenarian probably wouldn’t.

Another agent rule is never to use back story in the first chapter. Now, like all of these agent-level prejudices, this principle is not without merit. First-chapter time jumps are very difficult to get right. They tend either to bore or confuse readers. If back story is relevant in a first chapter, it should be limited to a sentence or two here or there, and it should be told rather than shown. (Showing costs words; words equal time; always but especially in the first chapter, milliseconds matter.) Why do I hate this as a hard rule? The first chapter, in well-told linear narrative, is always back story… to the rest of the book. In truth, there are times when it’s artistically valid to open at 120 miles per hour, and times when it’s not.

You write differently to land an agent than you do to write a good novel. If querying is on your mind, you’ll find yourself writing for the 19-year-old unpaid intern who’s been throat-deep in slush since 9:56 am and who’ll decide in eight seconds whether to read beyond the first paragraph. You’ll put that explosion that belongs on Page 32 on Page 1. You’ll find yourself writing for people trying to mirror their bosses’ opinions rather than readers who want to get lost in a story. You’ll write a hook-laden confusing opening, flash and no substance, at the expense of the rest of the book.

Writing for agents is easier than writing for readers– the former is paint-by-numbers, and the latter takes genuine artistic commitment– but pollutes the work. Writing for both is impossible. Sometimes an author will hit both targets– a novel written for readers will land a power agent– but it’s so rare, it’s not worth obsessing over.

I had an agent-friendly opening, for more than one drafting cycle, that I knew was wrong. I found it subtly corrupting other, later, chapters. Readers found it intriguing but pretentious and confusing– which it was. They were right. So, eventually, I decided, “Fuck that agent game; I’m going to write for readers.”

3. Farisa is long.

Speaking of agent prejudices….

What is the right word count for a novel?

The question is similar to, What is the correct weight for an airplane? The answer: as light as possible to do the job.

In truth, the answer is less satisfactory for stories than airplanes, because an airplane’s duties are, at least, well defined. The metaphor works this far, though: airplane weights range all over the place, because of their different purposes.

Novels range from about 25,000 words (which would, today, be classified as a novella) to well over 500,000. It’s story-specific what number is right; a book can be overweight at 100,000 words or underweight at 200,000. An average traditionally published novel might weigh in at 85,000 words. The sweet spot for contemporary literary fiction seems to be 125,000 – 250,000, which is longer than average.

My guess is that Farisa’s final word count– in revision, word counts go up, then down– will land in the 175,000 – 225,000 range.

How much do readers care about word count? They don’t. They care about pacing. They care about price– which can make a big book hard to sell on paper. Editors care, but will make exceptions for good books. Agents? You will not get one over 150,000 words. They’ll sometimes represent a long (or short) book as a favor to an existing client, but not a first-time novelist. Acceptable word counts, as determined by literary agents, tend to fall into a tight range: a genre-specific target, plus or minus 10,000 – 15,000 words. For example, first-time literary novels are expected to be between 80,000 and 100,000 words; epic fantasy should be 90,000 – 120,000.

It’s hard to land an agent with a big book because it has to be sold up the chain several times. The intern has to sell the book to his boss (the agent). The agent has to sell it to an editor at a publishing house. The editor has to sell it to executives who control marketing budgets. Only established, big-name authors can get through at 200,000, even if that’s the right length for the story.

An option, with a big book, is to split it. Both publisher and author stand to make more money this way. Sometimes this is the right artistic decision. For Farisa’s Crossing, it’s not, although an explanation of why would spoil the plot.

4. Farisa is a genre-crosser: literary fantasy.

What on earth is literary fiction? What is genre? Can a book be both? This is a fun topic. I could write thousands of words on that alone, but I’ll spare the reader.

Conventional wisdom, in some literary circles, is that there’s “real literature” and then there’s “genre fiction”. Literary novels transcend; genre novels merely entertain. This is, I shan’t hesitate to say, complete bollocks.

All literature has genre. What is usually called “literary fiction” is, in fact, another genre. I call it metrorealism. Actually, literary (as often defined) and mainstream fiction are two sub-branches of metrorealism that otherwise have little to do with each other. Metrorealism takes place in the real world and focuses on ordinary characters. If kings and queens, heroes and villains, or geniuses and fools are featured, it is usually ironic in a way that humanizes the subject and places him on a level with the reader. Character-driven metrorealism with high-quality prose tends to be received (and marketed) as literary, while plot-driven metrorealism with adequate prose tends to be presented as mainstream fiction.

There’s a lot to be said for metrorealism. It’s a fine genre– especially the literary subtype. I read a lot of it. I’ve written a few short stories in that genre (that I’ll probably try to get published around April, when I launch Farisa). I have nothing against it. It’s not what Farisa’s Crossing is, is all. The Antipodes is an epic fantasy series– with literary style and aspirations.

The meaningful distinction, to me, has nought to do with genre. A novel is not “genre” or “not genre” because all work has genre. (Technically speaking, “novel” is a genre and “fantasy novel” is a subgenre.) Rather, the distinction is between literary and commercial fiction. So, just as commercial metrorealism (mainstream fiction) exists, so can literary fantasy.

I don’t intend to say that commercial fiction is inferior. This is a distinction of purpose, not value. Most commercial writing is perfectly adequate, and I don’t believe the reading public wants substandard dreck. People buy books for all sorts of reasons, and shoddy writing is not a deal-breaker when it comes to commercial (or critical) success, but I don’t think the first wave of readers for 50 Shades bought the books because they were badly written. (The hate readers came after its commercial success.) Would the book have sold better if it were polished to a literary standard? Perhaps it would have sold 100,000 more copies. Compared to the 125+ million it actually sold, that’s a rounding error.

There doesn’t seem to be much evidence that literary novels sell worse than commercial ones, if one compares like against like. There’s an apex fallacy by which literary writers look at the outcomes for commercial bestsellers, rather than hangers-on, and think they’re all rolling in money. I’d actually bet that improving the writing, characterization, and relevance of a commercial novel, up to a literary standard, will only improve sales. The problem? It takes 10 times as much work, and I highly doubt that it increases sales by a factor of 10.

Literary writing is intensive in writing time, calendar time, and life experience. The characters form over years in the writer’s mind. Sentences are revised several times before going into print. Every decision is questioned over and over again. The second draft is nearly a complete rewrite, now that the author understands the characters more fully. A seasoned commercial writer is about 50 percent done after writing “The End” on the first draft; the literary writer is lucky if she’s 10 percent done.

Like I said, the difference is not in value or quality so much as purpose and process. The commercial writer, once the prose is adequate enough that an editor can take the book from there, stops working on that story and begins the next one. The literary author line edits her own work and often has tens of thousands of words of unused back story for each of the main characters.

Commercial authors aren’t necessarily bad writers (some are, but that’s true of literary authors as well) and sometimes they’re the best storytellers. They iterate. They publish more often and get quicker feedback, so they can get more experience with a wider array of story formats. They usually have a stronger sense of the average person’s psychology– and let’s be honest, every one of us is average in almost all ways; the exceptional are usually extraordinary in only a few ways– than the literary writers (who tend, in turn, to have a stronger grasp of deep characterization, language, and atypical psychology).

Farisa’s Crossing is literary fantasy. Agents tend not to like literary fantasy (or literary science fiction). Why is that? Any answer would be speculative (pun intended) insofar as I’m not one. The polite guess is that they must believe they’re hard to market– and they might be right about that. The impolite guess isn’t relevant here.

5. I’m writing a series.

Traditional publishing carries risks. One does not sell “a book”; one sells rights to a book. This is important. Most traditionally-published authors rely on their agents to navigate their contracts. They do not use lawyers (they often cannot afford lawyers) and are discouraged by their agents from doing so. Lawyers kill deals, they say. (It may be true, but that says more about the deals than the attorneys.) If lawyers killed so many deals, then why do publishing houses employ them?

Bad things sometimes happen in publishing. Authors get dumped. Editors change houses or quit entirely. Agents burn out and leave the industry. Someone in a distant corner of the world might say the wrong thing and burn a bridge three degrees separated from the author– zeroing the marketing budget and turning that enviable advance into a festering zombie albatross. An author might leave his publishing house after learning that he’s been under-published for years because the house hired an executive who really, really hates Ohio– and the author is from Ohio. Getting rights back, when leaving (or fired by) a publisher, can be a nightmare.

The value of book rights is book-dependent, of course. If you’re writing a book about the 2018 election, the rights are unlikely to be valuable in 2038 unless the title achieves lasting cultural relevance now. If the publisher fumbles, it’s a lost opportunity, but the loss of rights is irrelevant.

For a series, giving up the wrong rights can be deadly. Many authors cannot publish using their world or characters without permission of the publishing house. Even without that, though, taking a series to a new publisher is difficult. No publisher wants to buy Books 3–7 of a series when a rival house owns the first two books, and won’t give them up.

Books used to go out of print if the publisher stopped printing and selling copies. Rights reverted to the author. If the book was ahead of its time, or would have fared better as a $4 e-book than as a $20 block of paper in the bookstore (the author makes about the same money on each) it can be republished.

No one wants to think about their book selling poorly, or their series being dumped by a publisher, but these things can happen and not always to bad books. Good series can be trashed for all sorts of reasons. A self-publisher can try again. In traditional publishing, retries are rare– and if the book fares poorly, it’s always taken to be the author’s fault.

6. Trade publishing takes too long.

Good things take time, and books are no exception.

I could write a 100,000-word rough draft in an 80-hour week. It wouldn’t be worth reading. I’d need to spend significant time on revision. Lining up editors and cover art shouldn’t be rushed, either, and the people doing this work need time, of course. Traditional publishing requires additional lead time, due to the emphasis placed by bookstores on each title’s performance in its first eight weeks; if it doesn’t sell well in the short term, it might not have a long term.

Much of the delay in trade publishing is legitimate. Some of it is not– there is some status waiting, too. A literary agent’s turnaround time can exceed 6 months. At my age, I’m not in the position where I can treat it as nothing to spend a year waiting for a “power agent” to grace me with… the right to offer him a job. I’d rather spend the time writing.

7. Control.

Title and cover art are artistic and commercial decisions; pricing is mostly commercial. Guesswork and intuition come into play.

Traditional publishing houses have expertise, and the short-term winning bet, I think, is to hand those duties over. The problem is that, since the author signs over so many rights, he loses control completely. I’ve known several authors whose books were ruined by bad titles and cover art.

Of course, if a book flops due to bad marketing or a terrible cover, the author’s in no position to ask for it to be released again with better efforts. The publisher will consider itself generous if it lets him write another book for it.

Self publishers, at least, can iterate and learn. This, I think, is one of the major reasons why self publishing will become the usual way in for fiction. Trade publishers will continue to work with nonfiction, public domain work, and the top hundred or so bestselling fiction authors. For those novelists– the ones who need to negotiate foreign-language rights and screenplays before the book even comes out– it’ll be a victory lap rather than a career.

By 2030, the vast majority of important novelists– including, to the establishment’s surprise, the best literary authors– will not use traditional publishing. Why? Sheer numbers. Talent seems uncorrelated with hereditary social class. For every would-be writer whose parents get him representation by a power agent as a 21st-birthday present, there are 1,000 writers with no such connections.

8. I want to learn about the business.

By American standards, my politics are left-wing, so it might surprise some people that I’m saying this: I’m not ideologically against capitalism. Business is natural and necessary. I don’t view commerce as inherently dirty, and I think that academics’ outmoded, knee-jerk, leftist pearl-clutching about the material world (in fact, often a social-class humble-brag that reinforces power structures) hurts everyone. The left’s dislike for all things business means that the best people shrink from it– and dirty people disproportionately go into (and end up dominating) the game. It doesn’t have to be that way.

The publishing business isn’t a massive money-maker but, for better or worse, it influences culture.

Our culture is in peril. The danger is not immigration (which refreshes it) or gender equality (on the contrary, gender justice is the strongest indicator of cultural health I know) or scientific advances (again, beneficial, at least when used well). Rather, the threat to our culture is atrocious leadership, both from the perceived right (corporate executives) and left (connected coastal tastemakers). Border walls won’t solve this problem; we did it to ourselves, and the enemy is our own elite.

Right now, too many good people sit on the sidelines. Too many people on the left would rather make a performance art out of being offended than get out there and start doing. We can’t let this happen. Good people need to enter tough, competitive worlds like business and politics– and stand up for intellect, morality, and culture.

9. I wanted to learn editing.

Editing is hard. It can be a slog.

Here’s a dirty secret about writing: quite a few people who are good at it, whether we’re talking about bestselling commercial authors or acclaimed literary voices, don’t especially enjoy it. This is something they rarely admit (and I’m not about to out anyone) and I’m not entirely sure why. I guess they have to keep up the “dream job” image, but for many of them, it has become merely a job. They’re good enough to stay relevant and get paid, but the passion’s gone.

I don’t think they should be ashamed of this. Writing’s hard. It’s not for everyone. It’s not for the vast majority of people. The world needs more readers, more than it needs more writers.

There are probably 50 million people in the United States who want to “be a writer” and will publish their novel “someday”. Not a small number of them have 300-page manuscripts. Some will self publish unready work. Others will query agents and find themselves quoted on Twitter with the annotation, #queryfail. Very few of them will actually write a solid book. Divergent creativity (branching) isn’t all that rare. It’s the fun part. Kids have it. Convergent creativity (pruning) requires taste and skill. It’s painful and detail-oriented. In corporate management, there’s a separation between the “creative work” (which is not all that creative) and the detailed “grunt work”, but that mentality carries over badly to the arts. It’s all about the details. Few people have the grit necessary to write a complete, publishable novel– much less a significant literary work.

I’d guess that 40–60 percent of successful writers still enjoy writing– and, again, I’m not denigrating those who don’t. It’s not a sin that they enjoy Manhattan cocktail parties more than 6:00am writing sessions; it means they’re normal. (I’m not normal.) I’d guess that less than 10 percent enjoy editing.

I didn’t think I would at first, but as my skills improved, I found myself enjoying editing as well. It’s a different pleasure from 120-mile-an-hour rough-draft writing, but it’s a lot of fun in its own right. I studied characterization; scene construction; nuances of grammar; line editing; story structure; and rhetorical devices and when (and when not) to use them. There’s something liberating about going deep into detail, without fear. Not many people do that after college (if even then).

When I finished my first draft of Farisa, it weighed in at 134,159 words. (I remember the number because it’s one transposition away from the approximation of π, 3.14159.) The number intimidated me, and over the next month I discovered plot holes, missed opportunities, dangling story threads, and far too much telling. The more I learned about craftsmanship, the more I spotted and improved. For every 500-word info dump I could cut (kill, kill, kill those things), I found that a 2,500-word scene was needed to strengthen a connection between events that, in my first writing, I had assumed but never stated or shown. Some of the edges I drew to tighten the story became nodes (scenes, even characters) in their own right. If my sum total, after a bit of line editing to take the word count down, comes in under 200,000, I’ll be happy.

Revising a 130,000-plus word manuscript is a big task. I was apprehensive. “Shit, I’ve got to edit this thing. Maybe twice, even.” (Hahahaha.) I found out, though, that I like it. I’m not a perfectionist– I went through that phase of life, and it’s crippling– but there is a ludic element, a game almost, of seeing how tight I can make a sentence or how good I can make a story.

The inclination to edit well and enjoy it, I think, is rare. Age and life history have a lot to do with it. If I succeed with Farisa (or a later work) I’ll be glad that it happened late. Many writers are ruined by early success; they write a great book at 25, but are useless by 30, because the Manhattan cocktail party scene takes them in and they stop having original ideas. It could have happened to me, and probably would have, had things gone a different way. I’m different, but I’m not morally superior.

At 35, half of my biblical three-score-and-ten, I find that as I get older, I get simpler in most ways. If somehow I beat all odds and sold a million copies of my first book, I wouldn’t hang around the Manhattan book buzz people. I’d move to the mountains and focus entirely on the second book (and the third, et al).

10. I’m realistic.

Outsiders to traditional publishing think that it comes with six-figure advances, national radio and TV spots, reviews in the New York Times, and full-time publicists pushing each other out of the way to line up one’s speaking calendar.

Those deals are rare, but they also have very little to do with literary merit. It may be true that “good writing gets found”, but what makes or breaks a career in traditional publishing is how well a book performs in its first eight weeks, and that has everything to do with how the book gets treated by its publisher, which in turn is driven almost entirely by agent clout. What favors can (and will) he call in? Will someone’s kid not get in to a preschool if the New York Times declines to review an author’s book? Book buzz is like sausage and laws; some things, it is best not to see them made.

The sausage-making component doesn’t require only “an agent”. Querying still works (given enough time) if one’s goal is just to “get in”. The agents who have the power and connections to drive the sort of treatment that makes traditional publishing worthwhile are extremely rare. It’s not enough to sign such an agent; one must also rank among his favored clients. That outcome is inaccessible without pre-existing social class or extraordinary luck.

Most authors of reasonable talent can get into traditional publishing, even in 2018, even without inherited social connections, if they give it enough time. Their outcomes, though, are uninspiring: mediocre deals with no publicity, which they’re pressed to take because their agents will fire them if they back out, and which lead to lackluster launches that harm their careers in the long run. Querying, of course, isn’t free. It no longer costs postage, but time is the most valuable resource we have, and querying takes too much of it compared to what it can actually do.

I don’t think it’s worthwhile to be bitter about the changes in traditional publishing. Industries evolve. So long as the self-publishing infrastructure continues to grow, literature will improve with time. The few dozen power agents in Manhattan (even if augmented by the thousands who wish to join them) were always a tiny fraction of the reading population, but their proportion is even smaller if one steps up to a global perspective. As for bitterness, of which there’s a lot in publishing, the problem (as I’ve learned, by being embittered in a different career) is that it leads, paradoxically, to magical thinking. Bitter people want to be not-bitter; they want someone (like a literary agent) to come along and solve their problems. This is why they’re so easy to swindle. Bitter people fall for sweet talk– the narrative wherein someone riding higher stops for someone special, just because– and that’s a dangerous weak spot to have in business. There are cases in which to use traditional publishers, and others in which they’re unnecessary. Realism, not bitterness, is what an author needs.

11. Experimentation / flexibility.

No one knows what sells books. It constantly changes. There’s a lot of guesswork and iteration. Traditional publishers get a bad rap for how often they get it wrong, but most self publishers aren’t any better.

Marketing is especially hard for books, because the book’s main advantage over other media is its reputation for (and, because books are less expensive, true advantage in) authenticity. The production values of a film or television show come at a price: executives who control budgets, focus groups, the need to manage an average attention span. People understand this. Popular visual media tend to establish value using social proof: special effects, wide releases, and famous actors. Novels establish value through the quality of writing, characterization, plotting and world-building. The proof-of-value isn’t \$30 million but 3 years of a talented writer’s time. The issue is that a reader must spend considerable time with the writing to see these production-like values; they don’t come through in a two-minute trailer. Even for the writer to get a shot, readers must know that the book exists in the first place. Marketing matters.

No one expects authenticity from a summer blockbuster– it may be there, but it’s not mandatory– but we absolutely expect it from literary novels (and, to a lesser extent, high-grade commercial works). Authenticity and marketing/publicity work against each other. If readers knew how much Manhattan favor trading and sausage making went into “book buzz”, they’d trust it even less. For light summer entertainment, the inauthenticity of marketing is not so self-destructive. Getting people to come to the theaters is, in comparison, straightforward. For books? Most publicity efforts go nowhere, because the nature of public relations is its irreducible inauthenticity.

A publicity strategy that drives sales today might fall flat in 2019. What a publishing house thinks, for good reason, is genius, might pull a zero and take a good book down with it.

In traditional publishing, recovery is next to impossible; the approach is one-shot. One way to recover would be to reduce the price, give copies away, and publish chapters either for free or in magazines, but traditional publishers rarely do any of this. Once a book is deemed a flop (or worse, a mediocre performer– a book too expensive for the publisher to give away, even when giving it away might be the best move for the next one) the publisher loses interest in its fate, although the author doesn’t.

A self publisher, when a publicity effort fails, can try another approach. There’s more experimentation available.

12. Not to be an employee.

I’ve said before that more people want to “be a writer” than actually want to write (much less write well) and one of the reasons for this is that people, eventually, want to escape the oppressive stupidity of office life. They think they’ll be their own boss. I’ll admit that this is a contributing motivation for me, as well.

I’m good at many things. I believe writing is one of them. I’m also bad at many things. Because I have an architect’s knack for how things could be or ought to be, my mind under-attunes itself to parochial details of the broken way things really are at any specific point in space and time. As a result, arbitrary authority– like bad legacy software, just another form of sloppy writing– isn’t something I handle skillfully. I’m not good at tolerating bad decisions or managing the childlike needs of people in power. If I could change such traits, perhaps I would. On one hand, I would be a less virtuous person and my life’s total value to the world would decrease. On the other, it has cost me jobs and a lot of money to be less-than-perfect at the less-than-virtuous skill of navigating less-than-excellence.

I was, at one time, in the top 1 percent or so of software engineers. Perhaps I still am, although I’m not as current. These days, I prefer management and data science roles. I had a period in which I hated writing code; I could do it, but it was a struggle, because every keystroke felt like an injection of nonsense into the world. Programming did become fun again, but it took considerable time.

Sometimes it is right and prudent to follow orders (operational subordination) but organizations often demand personal subordination. If you have a backbone, and do anything in that context– anything at all– you will grow to hate it. Writing, programming, speaking… if you do the job in a context of personal subordination, you will ruin it for yourself. You may find excuses not to do it. You might complete the work, but poorly. Perhaps you’ll power through and do it well enough, but nothing you produce will be authentic. For factory-floor corporate work, this isn’t such a tragedy to the product; mediocrity and inauthenticity are not merely survivable but expected and commonplace. For literary fiction, it’s fatal.

I know plenty of people who’ve used traditional publishing: successes and failures; people who defend it and others who loathe it. I know people who’ve been dumped by their agents and fallen to pieces; I know people who’ve been failed by traditional publishing and still defend it; I know people who’ve succeeded but would self publish if they were to do it again; I know bestselling authors with exceptional agents who love what traditional publishing does for them and have no regrets. There seem, at first, to be few similarities between the outlier successes and the horror stories, but there is, in fact, one theme that connects them all.

That theme is: traditionally published authors are employees.

For example, they often give their publishers the right of first refusal, which means they can’t shop work around unless their “home” has already rejected it. Most authors cannot publish, even short stories and bonus chapters, set in a world they’ve already used, without the publisher’s permission. Of course, publishing has elements of a feudal reputation economy, and an author dumped by a publisher or editor will likely find it harder to acquire another one than he did for his debut. And, as bad as it is for an author to lose a publisher, to be dumped by an agent is almost always fatal.

For example, authors who demand that their publishers do their job– market their books– are deemed “difficult”. Those who turn down career-damaging deals with onerous contractual terms get pressure from their agents to acquiesce and, eventually, will be tossed back in the slush pile if their agents get sick of waiting for ‘dat commission. It’s shockingly easy for a writer to end up worse off than pre-debut, which leaves them out of power.

Agents don’t fear being dumped by authors, because there are thousands more submitting queries every day. Authors know that if they get dumped, their careers in traditional publishing are over.

These aren’t theoretical concerns. I know of talented writers being dumped (and blacklisted) by their agents for turning down crappy deals. I’ve heard of publishers reneging on promised marketing when the author complained about an ill-chosen title. The old system, under which authors knew that their publishers truly backed them, and that after getting published once, they’d continue to get book deals and competent marketing, is gone.

Of course, people who leave traditional publishing can still self publish, but if that were their plan, they ought not to have wasted time and rights on a different game. It would have been better for them to spend those years self publishing.

It is not always bad to be an employee. I want to make that clear. Nor is there anything sinister about employment. I’d like to run my own show someday, but that’s not everyone’s way; done right, employment is a moral risk transfer. It becomes immoral only when the trade is misrepresented (i.e., when the risk reduction is not commensurate with what the employee gives up). I’ll leave it to others to decide, for themselves, whether traditional publishing offers more than it takes away. On an individual level, it depends more on the book and the deal than anything else.

As for being an employee, there are tiers of it. There are seven-figure executives who write their own performance reviews, fly in corporate jets, and have limitless resources for any projects they might imagine, and they are employees; there are also miserable, underpaid, precarious employees. Some people enjoy organizational mechanics, either as a spectator sport or for live-action play; others consider it nonsense and a distraction. Some people excel at the game; others are either bad or, at best, inauthentic when they play. There are as many approaches that can be taken as stories that can be written.

What story do I exist to write? I don’t have a fully formed answer but, on my own question, I’m further along than anyone else. Clearly no one else knows; I’ve lived half a life to learn that much. My job now is to figure out the rest.