Phishing/Hacking Attempt

In April, I got an email about a CTO-level position. It was a personalized message. The person writing it knew who I was and my capabilities. Naturally, I checked it out. It never hurts to talk to people. As is typical, a résumé/CV was not enough. I had to use that company’s web-portal. Okay, why not. I have time, says the dog.

I didn’t hear back. I should have suspected something given the lack of response. Now, everyone gets rejected, even people like me. (Especially people like me.) There’s nothing odd about getting turned down. That said, above the VP level, you get a personal response and a truthful explanation of why you didn’t get the job. Usually, the reason is impersonal (it could be, “the other candidate has 20 years more experience”) and you move on. If you don’t hear anything, at my level, it’s fishy. Or, should I say, phishy?

It was a fake job portal. The company that this attacker purported to be was not looking for a CTO. To be clear, they had no involvement in this and were professional in every way.

A few weeks later, someone tried to access my account on multiple cloud services using the password I used (I create a new one for every job site) and hundreds of variations thereof. I got calls about this. (No one got into anything.) These attempts came from a reputable technology company in the San Francisco Bay Area. I know exactly who they are and what they were after. They’re probably pissed off that they weren’t able to get into it.

At this time, that is all I intend to say.

Crossing the Equator 6: Villains in Fantasy Versus Real Life

I open Farisa’s Courage with the heroine running for her life. Her memory is breaking down (a consequence of her magic, when used too far) and she’s confused, desperate, exhausted. In an unknown city, feet and legs caked in miles of trail mud, she bangs on a stranger’s door. She’s forgotten several years of her life. By the time she reaches (transient) safety, she doesn’t know where (or even who) she is. (She recovers, of course.) Meanwhile, the antagonist doesn’t get much stage time in the early chapters. That’s intentional. It’s also unusual, per fantasy genre conventions.

Many fantasy novels open with the Big Bad Antagonist doing something terrible. He destroys a village, or he tortures a child. Often, no reason is given; of course the bad guy would do something bad. The dragon just likes gold, though she never spends it. The sixteen-eyed beholder has to slurk out of its dungeon, eat a peasant child, and then slurk back because, if the heroes sought and killed it for treasure or “experience points”, then they’d be the villains.

In Farisa’s Courage, the first book of the Antipodes series, the main antagonist is a corporation, the Global Company. They’re bad, but like business organizations in the real world, they’re reactive and effete. They do more damage (early in the story arc, that is) through incompetence than by intention. I open Courage with asymmetry. The heroine is in danger, but the antagonist is comfortable (and unaware that it is anyone’s antagonist). That’s how good and evil work in the real world.

Fifty years before, the “Globbies” were a corporate police firm. That exists in the real world; they’re the infamous Pinkertons, who are still around. The Globbies also had a flair for witch hunting (which also still exists, even if witches don’t). When Farisa’s story opens, they control 70 percent of the known world’s economy. (It’s a steampunk dystopia where the Pinkertons won, and evolved into something worse. Something similar almost happened here.) They don’t take much of an interest in Farisa. They know that she exists, and that she’s a mage, but they also know that magic is unreliable and dangerous. They have been through forty years’ worth of failed attempts to harness it. So, they don’t think much of her. They’re only interested in her because she’s been accused of a crime that she didn’t commit (and, in fact, they know that she’s innocent of it).

Farisa doesn’t see world-fucking evil from them in the first 200 pages. The reader sees the Company’s low-level, self-protective evil, but nothing threatening the end of the world. That’s intentional. You didn’t see world-fucking evil from the Nazis until Kristallnacht, either. They’d been around for almost twenty years by then.

Epic fantasy is often Manichaeist. Good and evil exist as diametrical opposites. In the first or second chapter, the reader often sees the Big Bad doing some horrible thing. It has to be shown early who the Big Bad is. In my experience, though, evil doesn’t reveal itself until it needs to do so. There’s a potent asymmetry between good and evil. Good must act, and evil can wait. Good is desperate to survive, like a candle in a hurricane. It will rescue a child from a house fire (and fire, though dangerous, is not even evil). Evil can use slow corruption, hiding and waiting. It’s usually done in by its own complacency and arrogance, but that takes time.

Epic fantasy often wants symmetry. It wants evil that is as desperate to do harm (and to kill the heroes) as the heroes are desperate to survive. It wants evil that can’t plop down on its haunches and wait. It must burn that village! It must abduct that princess! In my experience, that’s a rare kind of evil, and evil itself is not all that rare.

This might explain why our culture is fascinated by serial killers. They’re very rare, but they show us a refreshingly different kind of evil from what lurks in corporate boardrooms. The serial killer is intense and desperate. Why does Vic the Biter eat the faces of human children? Because he’s an insane fucker, that’s why. His desperation mirrors that of the good. He’s fighting for survival, because his mind is broken, and eating children’s faces is the only thing that gives him respite from his own demons. His vampire-like hunger drives him to make mistakes that render him easy enough to capture that the story can be told in a two-hour movie or a 90,000-word novel. If there must be evil in our world, that’s the kind we want: a kind that is as desperate for its own survival as is good.

The desperate, belligerent kind of evil exists, but it’s not the kind that’s running the world. The Davos elites view the rest of us with phlegmatic contempt– they prefer not to think of us at all– but not burning hatred. Yes, the Davos Man would rape a child if there were a billion dollars in it; but, outside of that laughable, contrived scenario, he’d rather go back to his hotel room and sleep off his drunk.

Not to take the metaphor too far, but this mirrors the asymmetry between capital and labor; the former can wait, the latter must eat. Now, I don’t wish to say that capital is evil or that labor is good. Neither’s true. The parallels between their struggles, though, I find worth noting.

Labor and capital both perceive themselves as at war with entropy, but one conflict has higher stakes. Labor must consume two thousand kilocalories per capita per day of chemical energy or burn itself to death. Capital issues weak complaints about meager stock market returns, and the declining quality of private boarding schools, and too many brown people at the country clubs. Labor is one stroke of bad luck away from dying on the streets. Capital is slightly perturbed by the notion that things aren’t as good as they used to be, or could be, or are for someone else. It’s the same with good and evil. Good lives in constant warfare with selfishness, stupidity, disengagement, petty and grand malevolence, and myriad other entropic forces. Evil? Well, it rarely recognizes itself as evil, to start. When it’s losing, it’s a chaotic force. When it’s winning, it thinks as little as possible. It too has its slight unsettlements, but rarely feels a need to fight against the world for its own survival.

It’s unfashionable, in the postmodern world, to believe that good and evil exist. Some view them as relics, like ethnic gods, that simpletons cling to. Believers, the thinking goes, aren’t enlightened enough to see the complexities of human power struggles from all angles. I don’t know whether gods exist, but good and evil do. An issue is that they’re viewed as compact entities or forces rather than patterns of behavior. As “alignments”, they don’t exist. There’s no unifying banner of “Good”, nor one of “Evil”. Yet, we experience good and evil in daily life, from the small to the large. Is a convicted murderer an evil person? Not necessarily. Prima facie, there’s a lot of context that we don’t know. He could be mentally ill. He could be innocent, or have killed in self defense. We can agree, though, that murder is usually an evil act.

Good people value what is good, though we’re slow to find perfect agreement. There are good people with bad ideas. There are good people who’ve been infected with evil ideas. Most of our so-called “founding fathers” were racist, and racism is without a doubt one of the most evil ideas that humans have ever concocted. That aside, some of those men were arguably good, even heroic.

Evil does not, in general, value evil. German and Japanese authoritarians fought together, but each regarded the other as racially inferior. (Stalin was pretty vicious as well, but fought on our side.) Corporate executives and child molesters despise each other; you don’t see them seated together at Evil Conventions, because those don’t exist. Good values good, but evil doesn’t value evil. Evil values and seeks strength, and a position of strength is one from which one can wait.

I have actually battled evil, and suffered for it. I wrote hundreds of thousands (if not millions) of words on how to survive corporate fascism. I have exposed union-busting, labor law violations, and shady practices of all kinds in Silicon Valley. It has been an edifying (if expensive) ride. I’ve probably mentioned that Evil has won some of its battles. It may yet win the war.

In this light, we have to understand it. We have to know how it works, what it values and what it doesn’t, and why it wins. It wins because it can. It wins because often it wins if nothing happens.

Does the hungry evil of the vampire or serial killer exist? It does, but it’s rare. The more prosaic boardroom variety of evil is far more common. Often, the most dangerous thing about it is its most boring advantage. If it wants to do so, it can sit in its castle, and wait, and hope that we fuck up before it dies of its own ennui.

Swamp Baseball

My warning meant nothing | You’re dancing in quicksand…

— Tool, “Swamp Song”, 1993

Swamp Baseball is like regular baseball, but with a few changes:

  • You play in a muddy bog. Outfielders can fall into quicksand. The “run” to each base can take 30 seconds. Swimming is allowed, but bare-eyed (no goggles!) only.
  • The ball is covered in mud and will spin and fly unpredictably. Every pitch has its own character. Instead of bats, you break off tree branches and use those.
  • Each inning, you have to remove leeches. Whichever team has fewer leeches gets an additional run. Lampreys count as four leeches each. (This does make the game notably higher-scoring than regular baseball.)
  • Dangerous mosquitos are shipped in, if not already present; therefore, you will probably die of malaria (and, thus, be kicked out of the game with nothing to show for it) before you are 40.

Who wants to play Swamp Baseball? I’m guessing that the answer is “No one”. Nor would most people want to watch it as anything more than a novelty. We like to see humans play the sport in a more appropriate habitat. There’s nothing wrong with swamps. They’re good for the world. They just aren’t where we do our best running– or pitching or fielding or spectating. If you want to see baseball played in top form, you’ll go to a ballpark rather than a malarial bog. It may be, in the abstract, more of an accomplishment to score a home run in Swamp Baseball, but who cares?

In the career sense, I’ve played a lot of Swamp Baseball. I’ve become an expert on the topic. I used to have the leading blog on the ins and outs of Swamp Baseball: how it’s played, why it exists, and how not to lose too much. I’ve fought actual fascists in corporate environments and had my share of runs and outs, wins and losses.

Here’s the problem: no one cares about Swamp Baseball. Why should they? It’s a depressing, muddy sport where even the winners get their blood sucked out by leeches and lampreys. It doesn’t inspire. No one sees the guy who slides into home plate for a run, only to get his face ripped off by an alligator, and says, “I want to be like him when I grow up!”

Technology can be a creative force, and programming can be an intellectually thrilling activity. Getting a complicated machine learning system to compile, run, and produce the right answers might be more exciting than the crack of a bat (says a guy who has no hope of being any kind of professional athlete). Like writing and mathematics, it’s one of the Great Games. Victories are hard-won but often useful and sometimes even profitable.

Yet, most programmers are going to be playing their sport in the swamps. There won’t be literal mud pits, but legacy code that management refuses to budget the time to fix. There won’t be literal lampreys and leeches but there will be middle managers and project managers trying to get the team to do more with less– bloodsuckers of a different kind. Just as all swamps are different, all corporate obstructions are unique.

Here’s the problem. Swamp Baseball can be fun in a perverse way, but it would fail as a watchable sport because one’s success has more to do with the terrain than the players or teams. Runner falls into a mud pit? Whoops, too bad! Fielder faints due to blood loss, thanks to leeches? Looks like the other team’s getting a run. Real baseball has boring terrain and lets the players write the story. Swamp Baseball has interesting terrain but no sport or art. If the sport existed, it would just be artificially hobbled people failing at everything because they’re in the wrong habitat.

Corporate life is, likewise, all about the swamps. The success or failure of a person’s career has nothing to do with batting or running or fielding, but whether that person trips over an alligator or not on the way to first base… or whether the shortstop collapses because the lampreys and leeches have exsanguinated him in time. Sometimes the terrain wrecks you, and sometimes it wrecks everyone else and leaves you the winner, but… in the end, who cares?

Swamp Baseball wouldn’t get zero viewers, of course. Some people enjoy comic relief, which in this case is a euphemism for schadenfreude. It wouldn’t be respectable to watch it, nor to play, but some people would watch and for enough money, some would play. Corporate life is the same. Its myriad dysfunctions and self-contradictions make for lots of entertainment, often at another’s literal and severe expense, but it’s fundamentally lowbrow.

That’s why I don’t like to write about corporate software engineering (or “the tech industry”) anymore. And if I stay in technology (which I intend to do) then I want to play the real game.

Crossing the Equator 5: Natural Writers

There are a large number of intelligent, well-intended people who “might write a novel” someday. Spoiler alert: they won’t.

I’m not trashing them. The world needs more readers, much more than it needs more writers. The reason why most of those people will never “write their novel” is that they’re not weird enough. They’re doctors who read beach books. They’re professors who read a sci-fi book every now and then. There’s nothing wrong with that. I wish they had more time to read and bought more books, because writers don’t make enough money, but I’m not here to criticize them. A world in which everyone had the inclination to be a writer wouldn’t work. The job isn’t for everyone. At 33 years old, I’ve got a good sense of what I can and cannot do. It’s not a stretch to say that I’ll never put my hands inside a man’s chest and manipulate an intact heart, an innocent human life on the line. I’m very glad people exist who can do that job. I’m happy to have them make more money than I do. It just won’t be me who does it, unless something bizarre happens, like an apocalypse that demands I fill the role.

Natural writers

I’m going to talk about natural writers. It’s a term that I invented, although it’s a statistical certainty that it’s been coined somewhere else with a different definition.

First of all, it’s not really about natural talent in any form that would pop up in school. The general intelligence (“IQ”) necessary is significant but not astronomical. If I had to guess, I’d put it around the 95th percentile: IQ 125 to 130. More can help or hurt. Higher intelligence can make the research and self-editing aspects of writing go by faster. On the other hand, many highly intelligent people have no aesthetic sense or, worse yet, have the “four-wheel drive” problem of getting stuck in more inaccessible places. The ability to write great literature may be rare, and the inclination certainly is, but that’s not because of an IQ-related barrier.

One in twenty-five people might have the raw intelligence necessary to be a good writer, but I’d guess that one person in about a thousand has the necessary skills. The skills are hard to learn.

First, you have to read and write millions of words. Also, you have to read the whole quality spectrum, from excellent canonical work down to Internet forum comments. If you want to be able to write dialogue, you need to have an ear for how people talk at differing levels of education and in various emotional states. None of us speak with perfect grammar, and some are worse than others.

Further, to be able to write something interesting, you have to read and learn all kinds of random stuff– literature, history, philosophy, science– that most people (including, to my surprise upon becoming an adult, most expensively-educated people) stop caring about once they stop getting graded on it. You don’t need to know more words than the average reader– you can do a hell of a lot with the 10,000-or-so most common words– but you have to know them more deeply than most people do. You have to know how to communicate complex ideas efficiently, but also when not to transmit complexity at all. If your novel has someone eating a sausage, you have to know when and when not to write about how it was made.

Rarest yet is the inclination to be a writer. On this topic, there’s a joke that programmers pass around:

HER DIARY:

Tonight, I thought my husband was acting weird. […] I thought he was upset at the fact that I was a bit late, but he made no comment on it. Conversation wasn’t flowing, so I suggested that we go somewhere quiet so we could talk. He agreed, but he didn’t say much. I asked him what was wrong; He said, ‘Nothing’. […] He just sat there quietly, and watched TV. […] I still felt that he was distracted, and his thoughts were somewhere else. He fell asleep – I cried. I don’t know what to do. I’m almost sure that his thoughts are with someone else. My life is a disaster.

HIS DIARY:

My code is broken, can’t figure out why.

To make it clear, I don’t advocate alienating one’s bed partner. That’s bad. Writing is a marathon, not a sprint, and it’s no excuse for being an asshole. That said, the reality of being a creative person is that we’re not accessible on demand. This is why we hate open-plan offices, which emphasize availability at the expense of productivity. When we’re mentally or emotionally all in, it’s hard to turn our minds off. I’ve watched every episode of Silicon Valley this season, but at about 40 percent comprehension because so much of my mind is on Farisa’s Courage.

Every writer will see where I’m going. One could replace “My code is broken” with “Chapter 6 needs to be rewritten” or “I’m not sure I have Erika’s motivation worked out” or (never listen to this voice) “a ‘real writer’ would slash this to pieces”. Among natural writers, we’ve all been there. We’ve all been out to dinner and thinking about how we’re going to resolve a plot issue or flesh out a character.

That said, a percentage of commercially successful writers are not natural writers. There are some who clearly are– Stephen King comes to mind. He’s a writer. A few of them don’t enjoy writing. I’d never name people, but it’s been confessed to me. They make a lot of money doing it, and they’ll keep writing as long as that’s the case, but they’re not the sorts of oddballs who’d be tweaking manuscripts at 9:30 on a Saturday night. They write because it pays, and because (at least in commercial terms) they’re good at it. Some hate it, some tolerate it, and some enjoy it– but not to the extent of a compulsive natural writer. You don’t have to be a natural writer to find commercial success as a writer, but I don’t know why you’d try. The odds and effort, even with talent and inclination, are worse than in business. Natural writers, on the other hand, loathe the corporate world– truth-seekers don’t like jobs that require defending lies– and often find themselves without other options.

There’s a Boomer misconception that to be good at something, you have to love it in the way that natural writers love writing. It’s not true. We all know that there are people who love to write, but will never be good at it no matter what they do. By the same token, there are those who can write engaging stories but don’t enjoy doing it. They might love reading, and discussing their books, and having written, but the slightly-masochistic act of forcing their brains to come out with thousands of words of coherent prose per day is something they put up with, because it pays– in some cases, very well. It shouldn’t surprise anyone that, in a world where so many people go to jobs they hate for less than $50,000 per year, there are others who’ll do jobs they don’t enjoy for more.

The truth is that there are a lot of people who want to “be a writer”– and who have an unrealistic sense of what that means– but very few who actually want to write.

Legacy, Talent, and 45 degrees

I’m focusing on the natural writer, but the more general concept here is of the natural artist. Writing has a special place, and I’ll explore it, but the arts in general attract provocateurs. Why is that? What’s the connection between creating aesthetically pleasing objects and wanting to troll people? It isn’t at all obvious that one should exist. It does, though. I’ve met some brilliant writers and artists, and they’re almost all weird.

I have a theory that most of human politics and economic struggles can be expressed in terms of Legacy versus Talent.

In the abstract, most economic commodities aren’t very different. If you have $15 in your bank account but a Manhattan penthouse, you’re a millionaire. If you have a low income and net worth but your family can set you up with a high-paying corporate job, you’re a rich person. We needn’t care, then, about the differences between silver and oil and paper cash and electronic wealth. There are two abstract commodities that really matter: Legacy and Talent. The exchange rate between the two is a fundamental indicator of a society’s state. When Legacy trades high against Talent, social mobility is low and aristocracy sets in. When Talent trades high, skilled people can do very well on their abilities alone.

Legacy includes wealth, social relationships including the formal ones called “jobs”, credentials, and interpersonal connections. It’s the stuff that some people got and some people didn’t– for reasons that feel random and unfair, and are mostly related to the mischievous conduct of previous generations. Some people got lucky, and some got fucked over. Chances are, most of my readers are not in the “fucked over” category, although it often feels that way, subjectively. I’ll get to that.

Talent here includes natural abilities, skills, and the inclination to work hard. Again, some people were born with a lot of it and some people got screwed. It’s hard to make the case that possessing Talent conveys any sort of moral virtue. I’d love to be able to make that case, because there’d be personal benefit in it, but there are plenty of capable people who are also terrible.

As social forces, Legacy and Talent are always at odds. One is the past trying to preserve its longevity, and the other is the future pounding against the walls of an egg.

Here’s where it gets political, and perhaps controversial. People can, approximately speaking, be ranked for where they stand in each. Sally might not conceive of herself as “94th-percentile Talent” and “33rd-percentile Legacy”, but she knows that she’s smarter than her workplace assumes her to be. If she’s young, she may see upward mobility in the school system. If she’s old, she’ll probably get bitter because she feels like she’s surrounded by relative idiots.

In reality, Legacy and Talent ranks don’t exist in an exact form, but most people have some sense of where they stand. Plot Talent on the vertical (Y) axis and Legacy on the horizontal (X) axis, and then draw the Y = X line, at a 45-degree angle to the axes.

You can often predict peoples’ political biases according to where they are in relation to that line. “Left-siders” have more Talent than Legacy. They want transformation. They’ll challenge existing systems. They’re not happy with the role that society has given them. “Right-siders” have more Legacy than Talent. They support the status quo. That does not mean that they’ll be economic conservatives. In fact, in an authoritarian leftist society, they’d be loyal communists.

Do people know where they stand? What about the Dunning-Kruger Effect? To be honest, I think that people are close enough to knowing for the errors to cancel out. Yes, plenty of people think that they’re smarter than they actually are, and that might create a left-sider bias. On the other hand, there are plenty of people who think their houses, diplomas, and social connections– all forms of Legacy– are worth more than they really are. Individuals may get their positions wrong all the time but, on balance, I think the errors cancel out.

On that 45-degree line, you have self-described “moderates” who are suspicious of left-siders and right-siders both. They complain about the (right-sider) coal miner wearing a “MAGA” hat who doesn’t really deserve that six-figure job he has because his grandfather was in the union. They also complain about “entitled” left-sider Millennials who don’t enjoy being slotted into subordinate roles of the corporate hierarchy.

Natural artists are constitutional left-siders. They’ll reject any role, high or low, that society tries to put them in. Even if it puts them at the pinnacle, they often hate the idea that there is a top, not to mention the moral compromises that come with being and staying there. It’s not about social rankings for them, and it’s definitely not about money. It’s about creative control and life on their own terms.

Not all talented artists are natural artists. You see this when a promising young artist or writer turns into a hack after becoming famous and being invited into the Manhattan elite. There are intelligent people among our society’s corporate elite, but curiosity is frowned upon and if you spend too much time around them, you’ll end up as anti-intellectual as they are. For all their claims of sophistication, the people in the upper echelons are provincial and allergic to novelty unless it fits a narrow script.

For an example in the technology business, look at Paul Graham. He wrote transformative (if silly and hyperbolic) essays when he was young, but he hasn’t had an interesting idea in more than 10 years. Why? Well, most people turn back into rubes when they get the power or wealth they crave.

Text

What is it, in this exposition, that’s particular to writers? In particular, I’m talking about novelists more than screenwriters, but screenwriters more than visual artists. There’s a spectrum in human creative work between the cryptic and the ostensible (both terms that I’ll define in a minute) and I intend to explore it. What’s special about text? I contend that on the spectrum between the ultimately cryptic and the supremely ostensible, text has a privileged position.

Visual and auditory arts, as experienced by the average person, are ostensible. I’m not a painter, and I’m not aware of the dialogue between different pieces of art, but I can tell a beautiful painting from an ugly one. Now, that ugly one might be brilliant in a way that my rube mind– I believe I am a natural writer, but I’m still a rube in most ways, like anyone else– can’t comprehend. I don’t know. I do know that I like Beethoven and Mozart. I don’t always know why I like them, and I can’t defend the sophistication of my tastes, but I do.

Computer programming, on the other hand, is very cryptic. I’m a good programmer. Actually, that’s an understatement, but let’s start there. I can post twenty lines of “great code” and it’ll mean nothing to the average person. In fact, it’ll mean nothing to the average programmer. (In fact, it doesn’t mean much, because the difference between beautiful and ugly code doesn’t matter on twenty-line toy examples.) What I mean to say is that most people (including users of software that I write) will have to take it on faith (or not, because I really have no say in what they think of me) that I’m a good programmer for now, because I can’t show it in 20 lines of code. To tell a good programmer from a bad one, you need thousands of lines of code, and you need to look at them very closely.

Prose writing lives between those extremes. It’s possible to make writing more ostensible by overusing emphasis and Gratuitous Capitalization, but good writers often avoid that. Our tool is text. We’ll sometimes use italics. You can use them to inflect dialogue and give it a different meaning (e.g. “I didn’t ask you to come” means “I asked someone to come, but not you”). You can use them for internal monologue. You have to use them for book titles. You shouldn’t use them all the time. Text, preferably unadorned and linear, is our tool. We try to do as much with it as we can.

A visual picture is sub-linear in terms of the expectation of a reader’s effort as a function of how much is presented. A painting might have had hundreds of hours of effort put into it, but the goal is for the average viewer to think, “Yeah, that looks nice.” The clouds and the mountains and the tiny details matter, but no one expects the viewer to check out every one. That probably has something to do with ostensibility: you can show “100 times more painting” (whatever that means) and the viewer doesn’t have to do 100 times more work.

On the other hand, software is super-linear. A 100-line program is more than twice as complex as a 50-line program. Actually, the 100-line program has the potential to be way more than twice as complex. There are problems where the amount of effort required to understand a computational object grows more than exponentially as a function of its size and complexity. (It gets worse than that, in fact.) Reasoning about arbitrary software code is, in the general case, mathematically impossible; that’s the territory of the halting problem and Rice’s theorem.

The cryptic nature of code is a source of pain for programmers, because it denies us the chance to prove ourselves without demanding additional work of those who might evaluate what we produce. Programmers’ immediate bosses rarely know who their best programmers are. Opinions of peers and clients carry some signal, but the only way to judge a programmer is to read the work, and the super-linear scaling of software system complexity makes that extremely difficult. Many of the complaints of programmers about their industry come from being introverts in an industry where observable final quality (e.g. website performance, lack of errors) is objective, but where the quality question around a specific artifact is so hard to evaluate that, in practice, it’s never done at an individual level. Therefore, personal attribution of responsibility for (often, brutally objective) events comes down to social skills. Programmers hate that. The cryptic nature of what we do doesn’t make us geniuses and wizards. It puts us at a social disadvantage.

What about text? Is a novel sub-linear or super-linear? Or is it exactly linear?

Readers want and expect to expend linear effort. It’s quite possible to shove more-than-linear effort into prose, by creating layers of context that require looking back and even forward through the text. It’s not what they want, though, because they want to forget that they’re reading text at all.

In return, they want above-linear payoff in exchange for their efforts. If a 100,000-word novel doesn’t deliver more than twice as much enjoyment as a 50,000-word novel, it’s too long. Now, I won’t pretend that reading enjoyment or literary complexity can be quantified mathematically. I doubt that they can. My strongest suggestion is that text endures because it demands linear effort as a function of what’s presented. Text presents a challenge. How can a writer using a flat medium create pictures out of nothing?

After all, that’s what we as writers do. We make things called settings, characters, and plots out of thin air. Formally, they don’t exist. You can’t point to page 179 in Crime and Punishment and say “that’s where the plot is”. We create images out of some 30,000-ish symbols called “words”. We don’t always understand how they form. “Forks and knives” is different from “knives and forks”, even though both phrases denote the same thing.

Let’s get specific: In October, a girl sits under the orange tree. Most people have an image in mind already. How old is the girl? She could be six or sixteen or even twenty-six (we’ll side-step the political correctness issue around calling an adult woman a “girl”). What about the orange tree? I never said that the leaves were orange. A reader from Massachusetts assumes that based on the cue, “October”. A reader in a tropical climate might think that she’s sitting under a fruit tree. If that detail doesn’t matter, I’ve done my job. I don’t need the reader to picture the same mountain or bar or tree that I have. If the detail matters (her age probably matters) then I’ve under-described. Efficiency matters too, though. If we already know that the setting is Massachusetts, it might be better to say “orange tree” than “tree with orange leaves”. Or maybe not. It probably depends on the implied third person, the girl.

The visual artist might present as much detail (a few megabytes) as the novelist. However, the painter has no illusion of a viewer who’ll look at every brush stroke, or recognize that a new pigment was invented, or understand the brilliance behind a new shadowing technique. If no one likes the painting enough to give it a second glance, that’s on the artist. On the other hand, a computer program at a few megabytes may well be beyond our comprehension. Most real-world software systems can only be understood by running them and seeing what happens. (Billions of dollars are bet on such systems every day. Scared? You should be.) I’ve reviewed lots of source code, I command several hundred dollars per hour to do that work, and I’m arguably worth it; and even my abilities are pedestrian. A one-character change can make the difference between a running program and a catastrophic failure and, when it comes to reasoning about what a piece of code will actually do, we’re all out of our depth.

Text is linear. The act of reading is linear, unless we expect readers to continually look back for context, and that’s not being a very kind writer. Complexities emerge from this flat array of symbols. Characters and plots and settings and philosophies that wouldn’t otherwise exist (and, from first principles, don’t exist) emerge, almost magically. We paint pictures with words, sometimes few of them. This is hard to do well. It takes a lifetime to get sort-of decent at it, and there are a lot of ways to mess it up. Don’t believe me? Here’s an example: “John went downstairs after getting out of bed and waking up.” Logically, there’s nothing wrong with it. It’s not an incorrect sentence, but its reverse chronology is jarring to the reader. It doesn’t paint an image, because its order of presentation goes the wrong way. It reminds the reader that she’s reading writing– most likely, an amateur’s writing.

The linearity of text is a major reason why great writers are averse to, for example, overusing emphasis. The need to draw in a not-yet-committed reader with a “hook” in the first chapters, we accept with some grudge. However, we’re not going to highlight words like one weird trick, because we expect the reader to give every word the respect that it deserves. We’ll use emphasis for semantic benefit and structure, but the novelist doesn’t care much about the enjoyment of the skimmer.

All together

I’ve covered a lot of territory, so far. Some people are ill-adjusted, cranky, and creative enough to be natural artists. Similarly, a larger tension exists within our society around what should be the exchange rate between Legacy and Talent. In practice, extreme left-siders and right-siders are viewed with suspicion, and natural artists are constitutional left-siders who will reject any role that society tries to shoehorn them into. Natural artists and writers and philosophers, like Buddha, are just as likely to walk away from a throne as a cubicle.

There’s a sociological element to the struggle of the artist or writer, of course. How creative work is evaluated has a lot to do with the social status of the person who created it. Every creative person finds this infuriating, but it’s not going to change. In software, the effect of this is huge. Not only are software artifacts difficult to evaluate for work quality, but people have to get approval for their projects before any finished work exists. Socioeconomic status doesn’t matter that much, because intra-office political status dominates, but that produces a similar injustice, one that shapes the character of the entire industry. Just getting a company to use the right programming language for a project can be a herculean battle that ages a person five years in a month.

Writers aren’t as bad off as programmers in this regard, but are worse off than visual artists. Musical or visual talent is obvious in a way that literary talent isn’t. You can size up a painting quickly, but you have to actually read a novel (a 3- to 24-hour investment) to know if it’s any good. Sometimes, it’s not obvious whether the writer did a good job until the whole thing has been read.

Now, here’s the paradoxical and frustrating thing for a writer: it’s almost impossible to get a blind read. If you’re seen as a nobody, you can be immensely talented and still struggle to get an open-minded read. Literary agents aren’t exactly looking for new talent to “discover”. They’re booked. You might hear back in 9 months if the agent’s intern flagged your manuscript as worth a serious read. So, if you have low social status, your work will be prematurely rejected. It doesn’t get better if you have high social status. You get an audience, and you get easy publication, but nothing prevents you from embarrassing yourself. You’ll get, from most people, the same superficial read that you’d get as a nobody, but you’ll be prematurely accepted. (You may get some resentful lashing-out, but that’s another form of acceptance.) Impostor syndrome doesn’t go away in high-status, famous people. It gets worse as they realize that they can say or publish anything and have it called brilliant.

Visual artists know that their job is done or not done in the first few seconds. If the painting looks bad, it doesn’t matter if the brushwork is brilliant. Computer programmers accept that, with high likelihood, no one will ever trudge through their code to understand the details, because the point of software is not the code but what it does when a machine runs it. The important feedback is usually automatic: does the code run, does it run fast, and does it get the right answer? A writer, though? Writers have to wait for an audience. And they want impartial, honest audiences that are blind to their social status. For that reason, writers especially will live outside of, and at odds with, whatever socioeconomic topography the human world wishes to inflict.

Crossing the Equator 4: Small x Large, Publishing, and St. Petersburg Math

From the title, it doesn’t sound like this is an essay about writing. It is. More generally, it’s about successes and failures in publicity, and the mathematics involved.

Pop quiz: What’s a small number times a large number?

  • a large number.
  • a small number.
  • I have no idea.

There is a right answer. It’s “I have no idea.” It’s a poorly specified question. Yet, predicting the performance of creative work requires us to multiply small numbers by large ones. Let’s say that you’re a writer. What’s the probability that a person who goes into a bookstore looking to buy one book chooses yours? It’s small. It’s much less than 1 percent. What about the number of people who go into bookstores? It’s large. What does this mean? Well, it could mean anything.

I’ve spent more than a decade studying the economics of creativity. It pays– and it costs. The expected value (or mean return) of a creative effort is quite high. There are two problems:

  • Variance: most people would rather have $100,000 than a 50% chance at $200,000 or a 0.01% chance at $1 billion, even if all have the same average payoff. (A short simulation sketch after this list makes the point concrete.) Most creative endeavors have high variance and $0.00 is not an uncommon result. If this is hobby writing, that’s fine. If one relies on creativity for an income, it’s dangerous.
  • Value capture: people who are good at creating value tend to be below average in the social skills involved in capturing value. So, the hardest-working and best people see most of their efforts enrich other people. This is a depressing social problem that I don’t expect to see solved in my lifetime.
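
To make the variance point concrete, here is a minimal Python sketch (my own illustration, with an arbitrary trial count; it isn’t drawn from any publishing data). It simulates three gambles that all have a $100,000 expected value and shows how differently they behave.

import random

def simulate(gamble, trials=100_000):
    """Draw outcomes from a gamble given as a list of (probability, payoff) pairs."""
    outcomes = []
    for _ in range(trials):
        r, cumulative = random.random(), 0.0
        for p, payoff in gamble:
            cumulative += p
            if r < cumulative:
                outcomes.append(payoff)
                break
        else:
            outcomes.append(0.0)  # leftover probability mass pays nothing
    return outcomes

# Three gambles, each with a $100,000 expected value.
gambles = {
    "sure $100,000": [(1.0, 100_000)],
    "50% shot at $200,000": [(0.5, 200_000)],
    "0.01% shot at $1 billion": [(0.0001, 1_000_000_000)],
}

for name, gamble in gambles.items():
    outcomes = simulate(gamble)
    mean = sum(outcomes) / len(outcomes)
    zeroes = outcomes.count(0.0) / len(outcomes)
    print(f"{name}: simulated mean ~ ${mean:,.0f}; paid nothing {zeroes:.1%} of the time")

The third gamble’s simulated mean bounces around from run to run, which is itself the point: the same average payoff, a wildly different experience of risk.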

Probability

I’ll do my best to keep this discussion accessible. This is about psychology, more than math. In general, people are bad with probabilities less than 10%.

Another pop quiz: which of the following probabilities is closest to that of being killed by a shark on a beach trip?

  • 0.01% (10,000-to-1)?
  • 0.001% (100,000-to-1)?
  • 0.0001% (1-million-to-1)?
  • 0.0000001% (1-billion-to-1)?

The answer is… the last. It’s very rare. Beach traffic kills far more people than sharks. You’re probably sucking down more micromorts if you lie on the beach and drink than if you get in the ocean, so long as you know how to swim. As humans, we’re bad when it comes to low probabilities. That’s part of why lotteries make so much money. We don’t have a resonant model for probabilities like “1 in 160 million”. When we multiply an out-of-context large event by a small probability, we have no intuition for what the result should be.

Why is publishing so hard?

I’m putting a finishing polish on Farisa’s Courage. This has put a lot of my mental attention on figuring out the publishing process. It’s not to be taken lightly. Self-publishing and traditional publishing both have pitfalls. They’re both very hard to do well.

The median outcome, for people who try either approach, is near zero. Most people who try to find agents never will. Most self-published books sell fewer than 100 copies. It’s the upside potential that keeps them going. Very few people make money on books. Some will improve their careers in other ways, and some view writing as a public service. However, we ought to be frank about the fact that the median book fares poorly. Perhaps it deserves to do so, because it’s easy to throw one word after another until one has 100,000 or so, but writing a good book is hard.

If I got Farisa’s Courage in front of a “Big 5” editor (chosen at random) in New York, I’d give a very high chance (over 95 percent) that she recognizes it as publishable, professional-quality work. That doesn’t mean that I wouldn’t get rejected. Publishers can’t afford to take every good manuscript. What about the likelihood that she picks it up and reads it for pleasure? Probably 1 percent. It might not be a genre that she cares about. She might be distracted and put the book down at page 20, never to pick it up again. That happens with books. To be honest, that 1 percent is a good number. I’m rating myself favorably, there. Anyone who could sell a book to 1 percent of the reading public would be set for life.

The painful truth about trade publishing isn’t that rejection is common. It’s the realization that it ought to be more common. Let me explain. Editors will go to bat for books they read and fall in love with, but that’s rare. It’s rare for all of us. Getting a trade publisher doesn’t mean that anyone loves your book. It could mean that, but it could also mean that you’re a “might surprise” and you’ll get no promotion. If you fail to earn out your (paltry) advance, you’ll probably never sell another book, even if it’s not your fault. The goal isn’t to “get published”, because much of what’s published is in the “might surprise” category that the publisher doesn’t really believe in, but to get a real deal (i.e., “all in” versus “might surprise”) that comes with a six-figure promotion package. That matters a lot more than an advance, which is just a signal. Even with an excellent book, the probability that someone has the reaction necessary to get that treatment is small.

I start reading about 120 books each year. I complete about 40. Perhaps 5 leave that kind of “this is so awesome” impression that I will tell others about them. Keep in mind that these are published books, often from highly selective houses. Those 115 others weren’t bad books. A publisher thought that they were good, and so did I, and I may have still felt that way after finishing. It just wasn’t enough to make me recommend them. Someone else fell in love with them; just not me. That’s 4%, and nothing was wrong with the other 96%.

When an editor decides whether to publish a book, she has to guess at how many other people will have that falling-in-love reaction… or at least enough short-term limerence (or curiosity) to buy it. It’s a rare thing, but there are a lot of people in the world. So it’s a “small times large” calculation and, as humans, we often have no idea what we’re doing. Who can tell the difference between a book that 0.25% of people will fall in love with (enough to recommend it to their friends) and one that 5% will feel that way about? The former is a publishable book of above-average quality. The latter is the next Harry Potter.

When it comes to word of mouth, which is the only sustainable way to get a book out there, it’s the extreme but infrequent reactions that drive sales. A “7” and a “2” reaction are the same: no movement. But look at the 50 Shades series. People hate those books. The writing’s bad and the characters are terrible. Yet enough people really love them that they’ve sold hundreds of millions of copies.

St. Petersburg paradox

A lottery ticket that pays out $100 million with 1-in-200-million odds is worth 50 cents. A stock that has a 50 percent chance of being worth $78 tomorrow and a 50 percent chance of being worth $76 is worth $77. That’s expected value: 0.5 * 76 + 0.5 * 78 = 77.

Okay, so let’s say that we have a lottery ticket with a base payoff of $1. The bearer flips a fair coin and the payoff doubles for each head, until a tail comes up. So, if he flips three heads, then a tail, it pays off $8. Here’s a random sample of 20 payouts:

$4 $1 $2 $16 $1 $2 $1 $2 $2 $1 $1 $128 $8 $1 $4 $1 $1 $1 $2 $1

We see that the ticket’s worth at least $1, and probably not more than $20. In these twenty trials, we made $180, so we might estimate an expected value of $9.00. However, most of that value is in one trial where we got very lucky and made $128. Take that out, and we’re at about $2.74. So what’s it worth?

If one ticket is for sale, most people would pay about $5 for it. They’d lose 7/8 of the time, but win big when they win.

I ran a simulation: 20 trials in which I pooled together 100 St. Petersburg tickets. The median value was $501, or $5.01 per ticket. With 10,000 in each packet, the median payoff was $7.66 each. With 1,000,000 in each, the median was $9.56 each. In other words, the more tickets you can afford to buy, the more that they’re worth.
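
For the curious, here’s a minimal Python sketch of the kind of simulation described above. It’s my own toy code, not the original script; only the pool sizes and the 20-trial count come from the paragraph, and the exact medians will wobble from run to run.

import random
import statistics

def st_petersburg_ticket():
    """One ticket: a $1 base payoff that doubles for every head until a tail comes up."""
    payoff = 1
    while random.random() < 0.5:  # heads
        payoff *= 2
    return payoff

def median_per_ticket(pool_size, trials=20):
    """Median per-ticket value across `trials` pools of `pool_size` tickets each."""
    totals = [sum(st_petersburg_ticket() for _ in range(pool_size)) for _ in range(trials)]
    return statistics.median(totals) / pool_size

for pool_size in (100, 10_000, 1_000_000):  # the largest pool takes a while in pure Python
    print(f"{pool_size:,} tickets per pool: median value ~ ${median_per_ticket(pool_size):.2f} per ticket")

The pattern is what matters: the per-ticket value climbs as the pool grows, because a bigger pool is more likely to catch the rare, huge payoffs.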

This is counterintuitive. We see it because improbable high-impact events (like that $128 above) dominate.

To a large extent, publishing is driven by the St. Petersburg phenomenon. Some readers might love a book but not tell anyone about it. Others will tweet to their 50,000 followers. Some reviews will sell 20 books and others will sell 200,000. Making it harder for writer and publisher alike, no one knows what the rules are.

Some writers might bemoan the death of the advance. Writers used to be able to live off advances. Now, advances are low and mostly a source of anxiety, because even paltry advances are not always earned-out. Why are advances disappearing? Publishers no longer know what sells books. (Also, publishers lose if bookstores screw up and can’t move product, but that’s not new.) They used to know exactly whose numbers to dial when they thought they might have a best-seller on their hands. Now, it seems to be out of their control. Books with million-dollar advances and top-shelf reviews can flop.

A good publisher will deliver reviews, radio and TV appearances, and opportunities to speak without the author paying out-of-pocket. (If you’re not getting this package, and it’s increasingly rare, then self-publish. Trade is only worth it if they’re all-in on you. Otherwise, keep the rights. You might want them.) Promotion and intangibles still matter. They seem to matter less, is all.

What does this have to do with St. Petersburg math? Well, let’s say that you’ve written an excellent book and you convince 20 people to buy it. What happens then? Probably, nothing. Your book is great, but that “I love this and will tweet incessantly about it” reaction is uncommon. If it happens for 2 percent of people (and that would be a great book) there’s a 67 percent chance that it happened with no one. Of those 20, you might get four who sell one more book and one who sells three. That’s seven more. They might move two more copies. From that base of 20, you’ve sold a total of 29 copies. Now, let’s say that you convince 2,000 people to buy it (or give 2,000 copies away). Your likelihood of reaching a “super-spreader” is a lot higher. Then, you can set off a word-of-mouth explosion, reach escape velocity, et cetera.
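
To put numbers behind that 67 percent figure: if each reader independently has a 2 percent chance of being the evangelizing type, the chance that none of the first n readers is one is 0.98 raised to the n. Here’s a quick sketch (the 200-reader row is my own added data point, for comparison):

# Chance that none of the first n readers has the "tell everyone about it" reaction,
# assuming (as in the text) that 2 percent of readers react that way.
p_evangelist = 0.02

for n in (20, 200, 2_000):
    p_none = (1 - p_evangelist) ** n
    print(f"{n:,} initial readers: {p_none:.2%} chance that nobody evangelizes the book")

With 20 readers, the book misses every potential super-spreader about two-thirds of the time; with 2,000, the chance of missing them all is effectively zero, which is the escape-velocity point.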

In the old world of trade publishing, we knew who those super-spreaders were. They worked in prestigious houses, or they were reviewers at the New York Times, or they were agents who drank with the big fish. If you kept getting rejected, you saw them as choke-points, but once you got in, they were your biggest allies and the only people you needed to please. Those still exist, but they don’t have the propulsion that they once did. This is probably good for the world. Readers matter more. However, it makes for a more confusing world– and nobody really knows how it works.

The value, in trade publishing, was that of having a super-spreader in your corner. St. Petersburg math tells us that if you sell 50 copies of an excellent book, you’re still probably going to be forgotten, because super-spreaders are rare and you probably won’t reach one. If you sell 50,000 copies of a shitty book, you’ll be a blip that fades out (celebrity books tend to do this). On the other hand, if you sell 50,000 copies of a great book, it’s only a matter of time before you sell 50,000 more. You might be able to reach escape velocity with a smaller print run than that. There’s a critical point in initial propulsion (and, as importantly, initial social proof) that makes the difference between obscurity and success.

The death (?) of trade publishing

Is trade publishing going to die? No. It will evolve. I think that it’s going to be smaller and a lot more selective.

In every guild-like industry, the profits are made on people in their mid-career years. You lose money training novices, and paying them when they’re not producing salable work. You make some money from advanced professionals, but they’re free agents who can command market rate. Publishing used to have a similar model. They knew that they wouldn’t make their money back when they put a six-figure marketing campaign behind a first-time author, until she finished up her third or fourth book.

As in the rest of the corporate world, conditions have become somewhat better for free agents but worse for those expecting employers (or, in this case, publishers) to invest in them. The publisher’s not going to throw a six-figure marketing campaign behind a first-time author, out of fear that she’ll bounce to another publisher offering a better rate for her second book. Those relationships don’t seem to exist anymore.

So, how is trade publishing going to change? I think that we’ll see fragmentation as self-publication becomes more common, and as the infrastructure connecting readers and writers directly becomes more mature. Mass-market fiction authors who can sell millions and have turned their work into utilities will still find trade publishers. We’ll also see trade publishers holding on to historical nonfiction and biography, where the research and fact-checking demands are high and the validation of being traditionally published still matters. Established literary authors will probably continue to be traditionally published, even at a loss for the house. This is probably the last generation for which a first-time author can get traditional publishing. Look at where the system is now. Getting the right to submit to a publisher requires finding an agent. Now, agents don’t even read manuscripts. Interns, who function like agents for agents, do that. It’s decentralized and unplanned, but it’s a lot of bureaucracy. It’s still navigable, but just barely, and a lot of talented writers are looking at what it offers (small advances, little real promotion) and deciding that it isn’t worth the trouble.

That’s not to say that one shouldn’t pursue trade publishing and query agents. For one thing, there’s still a prestige benefit. That, actually, might grow. It’s a lot more competitive to get into Harvard now than it was in 1987. What does that mean? It means that everyone who went there in ’87 had his stock go up. Likewise, as trade publishing contracts, the prestige of having used it successfully might increase. Further, when it works for an author, it works very well. A deal that comes with a six-figure promotion budget is usually worth taking.

The standard package that most authors get isn’t worth giving up the rights. Let me explain why. Most trade-published books are just over the line and get a “might surprise” package. The publisher isn’t going to promote those books. In fact, they may ask for a marketing plan in addition to a manuscript. The author does the work. Essentially, it’s self-publishing but with important rights given up and shitty royalties– in exchange for a paltry advance. The only reason why authors take such deals is that the publishing process is so exhausting and time-consuming that they’re just relieved to be done with it.

If you’re in the “might surprise” category, publishers demand that you have a social media presence, because they’re not going to market the book. Now, social media marketing is going to do, in the 2020s, what traditional marketing did in the 2010s. That is, it’ll lose effectiveness. What is unlikely to lose effectiveness is giving high-quality stuff away: gift copies, lowered prices, free chapters, and search-engine indexing if free material is posted online. If you use trade publishing, you can’t do any of that. You’ve given up those rights. You can’t offer your e-book at 99 cents on your protagonist’s birthday. Your strongest option is to ask people in 140 characters to buy a rectangular solid made of plant matter and chemicals.

Ten years ago, few talented authors self-published. These days, most say that they will self-publish if they can’t find a trade publisher. Over time, self-publishing (which may be renamed to entrepreneurial publishing, because to do it right, one has to do things like hire professional editors and cover designers, and that’s business) will likely be the default way for talented authors to break out. Trade publishing will be for victory laps only.

Crossing the Equator 3: Why “she said” is divine, and about adverbs.

[W]hile to write adverbs is human, to write he said or she said is divine. — Stephen King in On Writing.

In writing, one can fall for guidelines-cum-misrules that novices over-learn. “Don’t end a sentence with a preposition.” “Don’t start a sentence with a conjunction.” “Don’t use contractions.” “Don’t split an infinitive.” All of these rules should be learned (there are reasons why they exist) and then broken skillfully.

To start, English isn’t Latin, so one can split infinitives. Sometimes it works. And there’s no reason you can’t start a sentence with a conjunction, because ideas are allowed to cross sentence boundaries. (If they don’t, your sentences are too long.) As for prepositional endings, that’s also fine: she put her clothes on. Contractions? Please. Shakespeare used contractions. They’re beautiful.

Repeated words can be grating. Here’s an example that’s bad unless the repetition serves some contextual purpose (let’s assume it doesn’t).

The night was dark. She didn’t like running in the dark. If she wore dark clothes, she couldn’t be seen in the dark, because it was too dark.

It sounds bad. It’s repetitive. It’s also bad writing quite apart from the repetition. The first question is: do we want to accentuate dark? Maybe. That’s contextual, of course. Who’s the character, and what is she afraid of? Even if we had four great synonyms for dark, it would still be bad writing, because it’s fluff. If no context gives us a benefit from the overuse of dark, the passage can be improved.

It was night– too dark for running. It was dangerous to be out along the road without bright clothes, which she didn’t have.

That said, repeated words can work beautifully, especially when they build parallelism.

She wanted to see him because she was horny. She wanted to see him because he was sweet. She wanted to see him because… she was falling in love.

It makes it clear: she wanted to see him. Imagine this rewritten without parallelism.

She wanted to see him because she was horny, he was sweet, and she was falling in love.

It doesn’t work as well. Not even close. It reads like a business document. You can almost see the bullets popping out on a PowerPoint presentation entitled “Why She Wanted To See Him”.

One of the worst things that people do, to avoid repeated words, is replace “said” with synonyms. “He exclaimed.” “She blurted out.” “He screamed.” “She spoke angrily.” It injects melodrama and it fails in an important literary dimension: proportion. Here’s the thing about the boring, worn-out old “said”. It’s almost invisible. That’s what we want. The reader should be able to focus on what is being said and who is saying it. How it is said should be obvious from the context. You want the reader to forget that he’s reading words (“exclaimed”) and to focus on the action.

Of course, there are exceptions. There are times in my writing when I’ll use “said, with a smile” or “asked” or “yelled”. Battle scenes have a lot of yelling. “Asked” is almost as invisible as “said”, redundant thanks to the question mark. For a question, to ask, rather than to say, is the lowest-entropy verb.

“Did you have anything,” Farisa asked, “to do with that [spoiler]?”

There’s something information-theoretic here. As writers, we think a lot about word count. Farisa’s Courage stands (as I write this) at 123,306 words, and it will become much easier to get published if I can cut 3,307 of those. That’s about 13 minutes of reader time. To be fair, time matters. A difference in efficiency can make the difference between a page-turner and an “okay read”. One can go too far with cutting; what matters most is getting the word count right. (For Farisa’s Courage, the right count is somewhere around 118-121k, I feel. I’m close. Early drafts are typically 10-30 percent over.) My point is only that a small difference in efficiency can have a major effect on reader enjoyment.
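If you want to check that “13 minutes” figure, here’s the arithmetic as a minimal Python sketch. It assumes the rough 240-milliseconds-per-word reading speed I cite a few paragraphs down; the numbers are ballpark illustrations, not measurements.

    # Rough reader-time arithmetic, assuming ~240 ms per word (a ballpark
    # figure, not a measurement of any particular reader).
    MS_PER_WORD = 240

    def minutes_saved(words_cut: int, ms_per_word: float = MS_PER_WORD) -> float:
        """Approximate reader time saved, in minutes, by cutting words."""
        return words_cut * ms_per_word / 1000.0 / 60.0

    print(round(minutes_saved(3_307), 1))  # ~13.2 minutes for a 3,307-word cut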

That said, not all words are created equal. I can write 10-word sentences that are impossible to parse. In fact, I can arguably write an 8-word sentence using one word (guess which) as each of the 8 parts of speech (as-preposition and as-conjunction require a little stretching). It’s nearly unreadable but translates, approximately, to “I say with expletive emphasis not to cheat this unlikable person in business, nor have sex with him.” What we actually care about is entropy. How much information are we shoving down the reader’s eye-gullet (and, much more importantly, what is the payoff ratio)?

Entropy is why we care about grammar and spelling. Grammatical mistakes shove extra bits of information through that don’t do any good. There are two spellings of the word: “color”/“colour”. If you’re consistent, it’s an upfront cost of 1 bit per book and it doesn’t matter. Use either; it’s fine. If you use them interchangeably, then you’re costing the reader 1 bit per usage. That adds up! The reader might wonder: is there a thematic reason why the spelling keeps changing? Am I missing something? You’ll always get the benefit of the doubt on the rare error– the reader will, at first, presume you competent and try to guess what you meant– but you don’t want to generate too much extra work like this.
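To make the “1 bit per usage” claim concrete, here’s a hypothetical sketch that models an inconsistent spelling as a coin flip per occurrence and computes the Shannon entropy of that choice. The probabilities are invented for illustration.

    # Entropy cost, in bits per occurrence, of a binary spelling choice.
    # p is the probability of writing "colour" rather than "color" each time.
    import math

    def entropy_bits(p: float) -> float:
        """Shannon entropy (in bits) of a binary choice with probability p."""
        if p in (0.0, 1.0):
            return 0.0  # a consistent spelling carries no information per use
        return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

    print(entropy_bits(1.0))  # always "colour": 0 bits per usage
    print(entropy_bits(0.5))  # a coin flip between spellings: 1 bit per usage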

The general assumption is that readers spend 240 milliseconds per word. I’d guess that a more accurate model is based on time-per-entropy, somewhere around 40-50 bits per second. (I make no claims about human consciousness bandwidth– only reading speed.) Speed readers clear more words but probably don’t take in more information. Grammar matters not because of our English-teacher superegos, but because the reader deserves to get the most out of those bits and seconds. If you use “said” (and, for questions, “asked”) as a matter of principle, you’re making your how-of-dialogue channel thin (nearly zero bits), and that’s a service, because it draws attention to who and what is being said.
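For what it’s worth, the two models are easy to relate. This sketch converts 240 milliseconds per word into an implied bits-per-second rate under an assumed per-character entropy for English (a Shannon-style ballpark, not a measurement), so treat the output as an order-of-magnitude comparison rather than a confirmation of my 40-50 bits-per-second guess.

    # Relating words-per-second to bits-per-second under assumed figures.
    MS_PER_WORD = 240        # rough average reading speed
    BITS_PER_CHAR = 1.0      # Shannon-style ballpark for English prose
    CHARS_PER_WORD = 5.7     # average word length plus the trailing space

    words_per_second = 1000.0 / MS_PER_WORD         # ~4.2 words/s
    bits_per_word = BITS_PER_CHAR * CHARS_PER_WORD  # ~5.7 bits/word
    print(f"{words_per_second:.1f} words/s ~ {words_per_second * bits_per_word:.0f} bits/s")
    # Prints roughly "4.2 words/s ~ 24 bits/s": the same order of magnitude as
    # the guess above, and closer if you assume nearer to 2 bits per character.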

Perfect writing doesn’t stop at wasting no words, but wastes no bits. It’s telepathic. That said, in the real world, we have to settle for great writing that wastes as few as possible.

Repetition focuses attention. Writing is nonlinear. Just as 20-word sentences are more than twice as complex as 10-word sentences, repetition amplifies nonlinearly. In the “dark” negative example, we’re amplifying the word dark, but we’re not getting anything for it. We already know that night is dark! With dialogue, we don’t get the same problem with said, because it establishes a low-entropy channel: 95+ percent of the time, the verb will be to say in past tense. It can be ignored, if one likes. We’re not worried about emphasizing to say because that’s what dialogue is: saying things.

What about adverbs? Grammatically speaking, there’s nothing wrong with adverbs. In fact, all good writers use them sometimes. I just did (“sometimes”). One of the most important adverbs is not. “Thou shalt not commit adultery.” It could be buffed for modern usage as “Do not commit adultery,” but there’s no simpler way to say it. “Refrain from committing adultery” both introduces a larger (higher-entropy) verb (refrain versus do) and summons the gerund “committing”. You’re introducing abstraction and costing the reader more bits, but you’re not communicating anything more.

Not is a great adverb. It costs about 1 bit and it negates whatever verb or adjective you want. That’s beautiful.

So what’s wrong with adverbs? Nothing, but they’re tricky. We have the same problem with prepositions. To piss off and to piss on differ in ways that have little to do with the on/off antonym pair. The former is idiomatic and has nothing to do with “piss” or “off”. To fuck down is to lower one’s sexual standards. To fuck up is to make a mistake, and may or may not involve intercourse. With prepositions, there are a couple of saving graces. First, they’re mostly short words like “on” and “off”, as opposed to “hopefully” or “egregiously”. Second, the verb modifications are usually restricted to a small set of well-known idioms. We’ve all been pissed off when someone fucked up in a way that screwed us over. Adverbs are larger words, and they function well when they’re adding precision– but only then. “Only three people came” is different from “three people came”, and “do not commit adultery” is very different from “do commit adultery”. Sometimes, though, adverbs fail to communicate what the writer intends. The worst culprits are false intensifiers: just, very, quite, and the worst criminal of them all when used by bad writers, literally.

“It’s literally freezing outside.” 32°F, by Midwestern standards, is a pleasant winter day. You de-emphasized.

“A literal ton of people came to her party.” You said, “Approximately 15”.

“He literally sleeps with his dog.” What’s wrong with that? I’d be disgusted if you said he figuratively slept with his dog.

“I literally got fired.” I’ve had some terrible employers and some unjust terminations, but I’ve never been burnt at the stake. I’m sure that many have had the thought, but the law protects me from that.

Let’s ignore the worst examples. What’s wrong with “it was quite hot”? Well, one problem is that quite means something different in the UK (where it genuinely diminishes) than in the US (where it may diminish but is intended to intensify). We can fix that: “It was very hot.”

There isn’t anything wrong with it, but might we do as well with “it was hot”? Is the weather important to the character? If it is, we might want to give it more words and say, “The sun was directly overhead and her brow was covered in sweat.” (Showing, not telling.) If it’s completely unimportant, we should cut it entirely. There are cases where I can defend “It was very hot.” as exactly the right sentence. If you’re writing a small child in third-person limited, you wouldn’t use “it was blistering”. If it’s 120°F, then “it was hot” might not be enough. If the weather matters enough to mention but doesn’t merit 5-50 words of showing, then you might just tell with “it was very hot”. It can work. Such examples are rare, though. If you’re writing exposition, or third-person omniscient where the narrator is expected to write and think like a writer, then you might want to cool it on the adverbs. Or maybe not. There are no rules, except one: the reader must enjoy herself.

Adverbs don’t emphasize in the way that one would expect them to. There’s a simplicity to “it was hot” that’s diminished by “it was very hot”. Like “said”, “it was” is a pair of invisible words (in this context) and you’ve roughly doubled the amount of entropy (“very hot” versus “hot”) for what is usually no gain. Without comparison or exposition (which may not be worth the words/bits if weather isn’t an important detail) the two sentences mean the same thing, and so the quicker one wins.

This said, one can go too far in cutting adverbs. Some, like not, don’t deserve to be cut. It’s often said that you should use a stronger verb, e.g. “she sprinted” instead of “she ran fast”. I agree. But there isn’t always such a word, and cutting adverbs isn’t a good reason to use a word that your reader won’t know. There are combinations for which stronger verbs don’t exist. You can end up replacing an adverb with an adverbial phrase. Some adverbial phrases are tight– “she said, with a smile”– and some are just clunky. Adverbial phrases, like adverbs, can be beautiful or horrible and it takes a keen eye (and, to be frank, copious revision) to know the difference.

I’m only scratching the surface when it comes to what’s possible by applying information theory to reading and writing. Now, read the last sentence again. I used a cliche! Was that bad or good? That’s also tricky. There are powerful idioms that, like cliches, have been said over and over. To fuck up or to piss off were clever and colorful when invented, but they’re common idioms now and should be used when they work (assuming a context in which profanity is acceptable). They communicate efficiently. Scratching the surface is more work for the reader, and (another cliche) right on the borderline. In my view, it’s okay to use a cliche if it saves effort for the reader. Unless there’s value in the exposition, don’t be clever and say, “He made a mistake of such severity that it reminded him of failed copulations past.” Just say, “He fucked up.” Maybe, “he screwed the pooch” or “he shat the bed”. Cliches are fine (in moderation) if you know what they are, know when you’re using them, don’t expect to be treated as clever for using them, and– by far, most importantly– don’t use only them.

Farisa’s Courage Public Query

When you submit to an agent, you use what’s called a query letter. Query letters get a bad rap, but I actually found mine fun to write.

There’s a chicken-and-egg problem with agents. You need an agent to get published in the trade world. You won’t be able to look out for your own interests without one, and many publishers won’t even talk to you unless you have an agent. That said, if you’re in the pool of unpublished writers who are an agent’s phone call away from greatness, well… get in line.

I don’t fault agents for this, because there are millions of bad manuscripts floating about, but at first, even getting your manuscript read is an accomplishment. You have to start your search for an agent with little to show anyone. Thus, it takes a long time. The whole publishing process is that way. If I pursue trade publishing, I’d be lucky to get Farisa’s Courage into the world’s hands by October 1, 2019. (October 1st is the titular protagonist’s birthday.)

I haven’t ruled out trade publishing. I’ll pursue it if I’m offered a deal that better suits my goals. I care about readership and cultural influence and beating up the world– not getting an advance. (Advances no longer exist for most writers, but that’s another discussion.)

If I self-publish, I’d like to hit October 1, 2017. I believe that I can achieve this date while producing a professional-quality book. To make it clear, self-published doesn’t mean “half-assed”. I’d hire an editor (more on that) and make sure the book was up to “Big 5” standards before sending it out. If it takes longer than I expect, then I’ll wait. Quality is worth waiting for.

I wouldn’t have written Farisa’s Courage if I didn’t think it had the potential to be a great book. Outselling George R. R. Martin is… not what I expect, but within my 95%-confidence interval. (I’m not saying that I’m better. I’m saying I have more than a 1-in-40 chance.) Still, just having a great book doesn’t guarantee a speedy process– especially if you pursue trade publishing. You’ll get rejected a lot. Everyone gets rejected a lot. Hell, I’d be thrilled to write a book that got rejected by (i.e., wasn’t bought by) only 99% of the reading public. That would mean selling about 1.5 million copies. (Selling 10,000 is no small accomplishment.) Rejection’s one thing, but you may have to wait a year before you find an agent. If that’s because your book isn’t ready, it’s worth the wait. That said, I’d like to believe that one grows more as a writer by getting great work out in a timely fashion than by waiting for publishing bureaucracies to recognize it as great and green-light it. I could be wrong. I don’t know this game, and the exciting (but risky) thing about self-publishing is that no one really knows how it works. This is a lot of words to say that, on self-publishing versus trade publishing, I’m undecided.

If I pursue self-publishing, I will probably use a crowdfunding platform to raise money for editing– no matter how good you are, you need an editor– as well as cover art and promotion. I will, of course, have to convince the public to buy into my idea.

On that note, it shan’t hurt to query the public as well. Here is the current version of the bottom (non-personalized) component of my query letter.

That said, I am at least two self-edit cycles and a professional edit away from considering the work “done”. It’s beyond readable and, quite frankly, already better than a decent percentage of prestigious published work. Still, I’ve been studying writing and editing with the goal of making it even better.

Farisa’s Courage is a complete, polished, 124,000-word epic fantasy novel. It tackles contemporary issues including race, inequality, demagogy, gender and sexuality. Although written with series potential in mind, it offers a satisfying ending and can be read as a standalone novel.

A girl runs through a city she’s never seen before. Her memory has been wrecked by magic gone awry. Confused and desperate, she knocks on a stranger’s door in the middle of the night. He identifies her by the scar on her left shoulder. “Get the hell in here,” he says, “before anyone else sees you.”

The next morning, she remembers her name: Farisa La’ewind. She stands accused of two crimes. One, she couldn’t have committed. The other, she did— as a child.  A powerful enemy— a company controlling 70 percent of the known world’s wealth, and in every business from alcohol to railroads to murder— has put out a bounty. Civil war is breaking out all over the world. Her best friend’s in danger. She is in danger. With guns, magic, intellect, and a power that comes from a place of deep love and even deeper mystery, Farisa will fight to survive. She’ll encounter creatures including orcs, skrums, ghouls and flayers— not to mention spies and hit men working for her enemies. Yet the greatest danger, to her and to millions of others, is something in her past. Her recent past: eight hours before. If only she could remember…

Between 2011 and 2016, I ran a technology-industry blog that received over 4,000 unique hits per day at its peak. Some essays garnered over 300,000 page views. I covered topics as diverse as artificial intelligence, programming languages, organizational dynamics, mathematics, business ethics, and the economics of software. When active, my blog was considered one of the most important blogs in its industry (many of my readers were Silicon Valley venture capitalists) and it was among the most popular ones without corporate backing. Farisa’s Courage, for which I am seeking representation, is my first attempt to publish fiction.