Realistic LinkedIn “Poetry”… the 99%-er Edition

I seem to have missed the LinkedIn Career Poetry that’s apparently a thing. See, I spent 2017 actually working– I have a full-time job, and I’m revising Farisa’s Crossing— and so, sadly, I had little time to pretend to be awesome on a website full of white-collar oversharers.

Apparently, this execrable new genre tends to feature a Zero’s Journey with the following sort of cadence:

I was homeless.

I was fired yesterday.

I was walking home.

I took an Uber.

Someone stopped me on the street.

My boss told me not to take a chance on anyone over 50, but I hired him anyway.

It was Elon Musk.

(Follow-up: he looked at my profile, saw that I was a state school grad, and told me to go fuck myself. We had shared the Uber; I paid.)

Only in San Francisco is a person homeless immediately after joblessness (n.b. “fired yesterday”). In any other city, it takes a while for a person’s life to go to shit. Also, how does one walk “home”, and why is one taking an Uber, if homeless? But, I digress.

Okay, you soppy fucks, now here’s some career poetry for the 99%. I shall master this genre, in order to kill it.

A “performance” plan?
Set up to fail; two weeks hence,
I cleaned out my desk.


No jobs, for it was,
they said, Series A winter.
“Your CV’s on file.”


I couldn’t afford
to keep my health insurance;
I’m now shitting blood.


can beat this bleak Depression.
“Recession,” I mean.


Don’t live in Musk’s ‘hood;
Can’t get no EIR job.
I’m still shitting blood.

In Defense Of Millennials

In early 2001, I read Millennials Rising, by William Strauss and Neil Howe, authors of Generations and The Fourth Turning.

They predicted, in the optimistic 1990s when it was unthinkable, that we’d approach a Fourth Turning, or crisis, this century. This seems to be coming true.

On the other hand, they predicted that my generation (Millennials) would rise to be civic heroes, reversing the trend of institutional decay that began in the 1980s. So far, that doesn’t seem to be happening.

Instead, the most successful among us are not reversing decay, but profiting from it. Mark Zuckerberg and Lena Dunham aren’t successful because they restored a troubled civilization to health, but because they’ve figured out how to thrive in this post-apocalyptic landscape: a world of economic decline, permanent immaturity, and cultural anomie.

The popular opinion about us Millennials seems to be the opposite of what Strauss and Howe predicted: it’s that we’re lazy, whiny, apathetic brats. That’s not true. Nor is it entirely false. The less-than-climactic revelation about each generation seems to me to be that none is worse or better than any other, taken in toto.

We look like shit right now– with Zuckerberg running for President, and Dunham commanding a $3-million book deal as “the voice of our generation”– because the people in the limelight are those who were promoted by Boomers. Given that, how would we not look like shit? It may change; give it time.

Ascribing moral value to a generation is a tricky business, and I have a hard time buying into it. After all, segregationists like George Wallace and Strom Thurmond were part of the “Greatest Generation”; arch-thug Bull Connor only missed it by a few years. They were repulsive! The organizationally adept Greatest Generation gave us the Rotary Club, but it also revived the Klan. Let’s not white-wash it.

Of course, it’s fashionable for young people to hate the Boomers. In general, I don’t think Boomers are an exception to my statement that no generation is, in an individual moral sense, better or worse than any other. The Boomer 1%, the current global leadership, has been an atrocious nightmare. We cannot tear them down fast enough. But most Boomers aren’t part of the elite that draws this (deserved) hatred. The worst-off victims of our nightmare society are, in fact, Boomers; many of the young will recover from this mess, but the 73-year-old who is bagging groceries despite his bad back, or the 59-year-old programmer who just got fired (“too old”) by his spoiled shithead Xer/Millennial bosses, will not. Toward the vast majority of Boomers who are merely middle or working class, we should feel empathy– not resentment.

Why Boomer Hatred Exists

Why do people hate Baby Boomers so much? They’re blamed for the abrupt decay in the quality of American economic and social leadership. The collapse was brutal, and it continues, but to blame it on one generation is, in my view, somewhat of a mistake. Decay started before Boomers were in charge, under the Silent Generation. It has continued, in Silicon Valley, under Gen X and the Millennials. No one generation deserves all the blame for this mess.

The standard narrative is this: Very Bad Things happened in the 1920s to ’40s, but the Greatest Generation heroically rose up, saved us from the Depression and Hitler, and built us a society with a large middle class. They saved capitalism by integrating what was good about socialism; they sent their soldiers off to college and became the generation of warrior-scholars that made America great. Then, the Boomers, never knowing hardship, came and ruined it because… instead of building on what their parents gave them, they wanted to smoke pot at Woodstock (in the ’60s) and snort coke on Wall Street (in the ’80s) and then rise to the top of Corporate America, poison the environment, and pull the ladder up behind them (in the ’00s). Self-indulgent and narcissistic all the way, they ran our society into the ground. Their elders said that of them, half a century ago; we’re saying it now. Is it true? Self-indulgent narcissists exist in every generation, and I find no evidence that their numbers are higher in any particular one. We should, instead, indict the cultural factors that brought such people, at one point in time, to the top of society.

What’s wrong with the “standard narrative” above? To start, it’s U.S.-centric. Include more countries, and generational theory becomes harder to hold together. I’m guessing that Germany doesn’t call its World War II veterans “The Greatest Generation”. As for the Baby Boomers, in this country, there’s no question that their leadership has been atrocious. In that regard, they may be the worst we’ve had. Yet, when we slag Boomers– painting them as that spoiled generation that had everything and left us nothing– we forget about black Boomers and gay Boomers and coal miner Boomers in West Virginia.

“Globalism” is sometimes given as an explanation for American decline, but it raises more questions. Globalism is both desirable and inevitable. As a creative, I say: we need every audience we can get. So yes, dammit, I’m a globalist. I wrote a card game, Ambition, that has been published in print… in Japan. I’m writing a book (Farisa’s Crossing, for publication in 2018 or ’19) and most of the readers I’ll want to reach are not in the U.S.

Globalism shall continue; we can’t ignore it. We can’t rewind our economy back to 1960. On globalism, we need to do it right.

There’s a perception in the U.S. that globalism occurs at the expense of the American (increasingly former) middle class. Is it true? Not really. The rich, including American rich, are making out like bandits while the middle class shrinks and suffers. We’re losing money to our own top 0.1 percent– not the people rising out of poverty. (Remember: that’s a good thing.) We’re not being stabbed in the back by the middle class of India; we’re being stabbed in the back by our own elite.

Some have argued that our morally restrained “national elite” lost out to the execrable “global elite”: the Davos Men who pine for the Germany of 1937, when fascism was good for business but before it started killing people; the Arab oil sheikhs with harems and child brides; the businessmen in China who bust unions with machine guns; the murderous dictators of sub-Saharan Africa.

For sure, the global elite is disgusting. We must face up to this, though: our national elite is, even today, a plurality contingent of the global elite. The crimes of the world do not come from “savage” people overseas. They come directly from the top of a socioeconomic order that our elite, even to this day, maintains. The global elite are not a cabal; they do not meet in one room, and self-interest explains their cohesion and operation. We do not need “Conspiracy theories” when lower-case-c conspiracies exist all around us and suffice to explain what’s happening. Though no upper-case-C “Conspiracy to rule them all” exists– that’s a fantasy, for if it were true one could blow up the room where they meet– the fact is that a tiny oligarchy (of, perhaps, a couple thousand people) now makes almost all of the important decisions.

We ascribe relative virtue to our national elite, as opposed to the global one, because… let me recite the popular narrative… they got us out of the Depression, they saved capitalism by tempering it with the best elements of socialism, they defeated the Nazis and Fascists and Japanese Imperialists, they gave us cars and spacious suburban homes, and they built a mid-century pax americana. They were charitable, their rich didn’t mind being taxed at marginal rates over 80 percent, and they founded companies using “Theory Y” management, because they cared about their workers. They made America Great, the story tells us, and it was the globalists or the Boomers or the liberals or conservatives (depending on whom one asks) who made us un-great.

We need to understand the era in which we had a relatively virtuous elite. What caused it? What made them operate with such (unusual, as elites go) restraint? Why did they allow the 1930s-80s prosperity to occur?

Our national elite was not born into superior virtue. The American elite of the First Gilded Age was just as crooked and onerous as the global elite of this Second Gilded Age. That should give us hope; if the American elite let up in the 1930s to ’70s, perhaps the global one will let up in the future. Our national elite (the “WASP Establishment”) grew content to be merely rich, as they were in the 1950s, rather than brutally hegemonic, as they are today. Why? During the Depression, there was a real threat, in every country, of communist overthrow. Being rational humans, people in the American national elite chose graceful relative decline rather than the guillotines. Smart call. That made life better for all of us. We got to a point where people, even of moderate means, could afford international air travel. Add technology to that, and we became a global society. It’s not a bad thing, and it couldn’t have been prevented.

Here’s what happened in the 1980s: our young rich met the young rich of other countries, and they felt they came up short. If you’re an American millionaire and you drive 150 miles an hour on the freeway, then crash and kill someone, you go to jail. If you’re an entertainment executive who sodomizes a 13-year-old girl, you’ll be charged with rape. Meanwhile, Arab oil sheikhs own harems, can murder the poor of their own countries with impunity, and import slave labor for domestic help. The mere two-digit millionaires of the American elite met the hegemonic billionaires of less evolved societies and asked themselves, “Why can’t we have that?”

Starting in the late 1970s, the American elite began shucking off moral restraint and pushing the bounds of decency. Drum circles and marijuana gave way to cocaine-fueled Studio 54 elitism. The rich manipulated politics to give themselves tax cuts, turning some of the most effective governments in the world (our federal, state, and local governments) into underfunded, dysfunctional messes. Those who’d climbed the proverbial corporate ladder pulled it up, then learned how to pit the people at the bottom against each other, so they’d ignore what was really happening. In the 2010s, dormant racial tensions re-emerged, as our upper classes relied on old techniques for keeping the poor divided and conquered.

This slow-motion national catastrophe, still grinding on, happened while the Baby Boomers were in charge. Did it happen because they were an evil generation? No. As I’ve said, they have no more or fewer scumbags than any other generation; but there has been a climate, over the past few decades, in which bad people have a disproportionate likelihood of rising into leadership roles. We’re becoming a global society and we haven’t yet figured out how to do it right.

Institutional decay: double or nothing

One of the reasons why the future’s hard to predict is that, in any era, there will be things that seem bizarre, out of place, or otherwise wrong. Call them anomalies. A digital something-or-other called a “bit coin” should not be worth $15,000, am I right? Oughtn’t that go right to zero, and soon, having less utility than a tulip bulb? Perhaps. Perhaps not. Tulip bulbs were a great investment for decades.

One might expect anomalies to mean-revert out of existence. Yet, each of those artifacts exists for some reason– little of impact is truly random– and it is often as probable that the anomaly will double itself up before it gets worked out of the system. Let me be concrete. In the late ’90s, people recognized that dot-com stocks were overvalued and short-sold them. Many of these short-sellers got hit with margin calls and were wiped out. They were right– there was a dot-com bubble– but they timed its end poorly and they lost. As John Maynard Keynes said, “The market can stay irrational longer than you can stay solvent.” Every anomaly has a force behind it, a positive-feedback loop that drives it to grow. So, when you bet on an anomaly, whether in concert with it or against it, you’re making a “double or nothing” bet.

Furthermore, it is difficult in the grand scheme to know what is anomalous and what is genuine permanent change.

For a brutally relevant example, a society with a large middle class, in which the richest people and the most powerful institutions behave with a reasonable degree of moral decency is, although desirable, anomalous. It existed in the U.S. between, approximately, 1940 and 2000. We are seeing an erosion of that society, as we revert to something more similar to the naked elitism of, say, 18th century Europe. Some have argued that a prosperous society of any kind is anomalous, and that a cause like global warming or fossil fuel depletion will imminently drive us back to the poverty that dominated most of human history. I don’t subscribe to that view, though it is intellectually defensible.

On the other hand, the historical trend of technical progress is nearly monotonic. Even in the Dark Ages (whose main losers were the Roman elite; average Europeans were hardly affected) technology improved. In 2017, birth control has permanently eliminated our tendency toward involuntary overpopulation. Automation has eradicated the need for many kinds of painful and dangerous labor. So, perhaps the bad times we’re having now are but a minor dip in an upward-sloping road. I find Donald Trump repugnant, but I don’t think he’s anywhere near Mussolini or Hitler. It’s reasonable to conclude that, while the first third of the 21st century will be unpleasant for the American middle class, our global progress toward prosperity shall go unimpeded. Will it? I don’t know. I hope so.

Which analysis is right? It’s hard to say. I’m a short-term pessimist and a long-term optimist. The atrocious economic, political, social and cultural leadership that the United States now experiences will not die out just because the Boomers vacate. Generation X and Millennials are fully capable of continuing the decay. The main reason Millennials have a bad reputation is that, right now, most Millennials in prominence are human garbage– because they’re the ones who were promoted by the Boomer elite. I believe that chaos and probable violence live in our future. The Class War– a necessary process, because the global elite needs to learn the same lesson that the American national one did in the 1930s– will be ugly.

For my part, ugliness is not what I ever wish for. I’d like to see the Class War won by the right side, without violence. Violence begets chaos, and the petty reward of vengeance (however deserving the target) is never worth the risk of harm to innocents. However, history cares not about my preference for nonviolent resolution; it will do what it wills itself to do.

Millennials Rising predicted that my generation would repair the institutional decay that started under the Baby Boomers– a decay that became inevitable once our national elite re-polarized and joined the global elite. I don’t see it happening yet. I see willful continuation of decay. It’s quite profitable; as Littlefinger said in Game of Thrones, chaos is a ladder.

Let’s look at that supposed bastion of innovation, Silicon Valley. The main innovation to come out of venture-funded technology has nothing to do with science, computation, or technology itself. It’s the disposable company. The true executives of this brave new economy are venture capitalists, and so-called “founders” are middle managers who must manage up into Sand Hill Road. The difference is the ease with which a company can be crumpled up and thrown in the wastebasket. Pesky workers want a union? No Series D for you! Founder-level sexual harassment issues causing bad press? Scrap the company, start again, and try not to get caught this time.

An old-style corporation, when it scrapped a project, would find something else for people to do. Workers on the failed project were deemed innocent and would be eligible for transfer to more promising work within the company. The postmodern corporate entity of Sand Hill Road, when it decides a company is unfundable– note that supposedly competing investors, in fact, collude and make decisions in concert– sentences it to death. Jobs end. One cannot meet investor expectations without unsustainable spending, which means that none of these companies will survive unless they continue to raise funding. This, of course, gives investors managerial power, so founders must preserve their reputations among the Sand Hill Road oligarchy at all costs. What happens, then, to the workers when a project or company gets scrapped? You might guess, “They get laid off.” True, but it’s worse than that. An old-style company would own up to an honest layoff. Venture-funded companies don’t want the negative press, so they claim they’re firing people for performance. The number of companies that claim never to have laid anyone off, but have politically-charged “low performer initiatives” (stack ranking) any time executives screw up and lose money, is astonishing.

In this less-than-ethical climate, institutions rust quickly. People realize their employers have no sense of loyalty or fair play, and they reciprocate. I’d guess that eighty percent of people lie on their CVs, and it’s hard to blame them in an industry where bait-and-switch hiring is the norm, and where dishonesty to employees is business-as-usual. (Lie to investors, though, and that’s “pound-me-in-the-ass” prison.) If a company can lie about the career benefits of a job it offers, can’t an employee fudge his own political success– or, shall we indulge the fiction and use the term “performance”?– at previous jobs? I don’t care to unpack this particular topic; what’s moral is one debate. What is, is what interests me here. We don’t have a culture that strengthens institutions or builds durable ones. We have one that builds flimsy companies that either decay rapidly or “disrupt” some other industry, capturing great wealth quickly at some external expense. We have a culture where everyone lies and no one trusts anyone, and where everything’s falling apart.

The Daily Anomaly

I expect Corporate America to melt down under the Millennials, but I can’t say when it’ll happen. As I’ve said, predicting the future is hard; anomalies can double up multiple times before they dissipate.

Corporate work is somewhat of a joke these days. People spend 8-12 hours per day defending an income and their professional status, and very little of that time is spent actually working. The weird irony of American life is that people’s leisure activities are more work-like than their paid jobs. They hunt, read, write, hike, run, garden, and sail on their weekends. What do they do at “work”? Sit in an office and try not to get fired. If that means slacking, they slack; if that means working, they work. Their only real goal is to protect an income. It’s not intellectually or physically demanding, but it’s obnoxiously stressful. Until we establish a universal basic income (which will save work, not destroy it, just as the New Deal saved capitalism) this will be a reality for most white-collar Americans. We recognize corporate “work” as a stupid game people are forced to play.

Automation will destroy jobs. Good. Fuck “jobs”. If we had a universal basic income, no one would shed a tear about the elimination of unpleasant labor from human life. We don’t miss death by “consumption” in 2017, and no one in 2117 will wish he’d been around to spend 50 hours per week in a box, doing a job that a robot can do using 53 cents’ worth of electricity. At some point, we won’t have to work in the way we do now. We can recognize the grand joke that is American-style office work as an anomaly. Will it go away soon, without pain? I doubt it.

Self-driving trucks are an unemployment time bomb. Consider not only the truck driving jobs, but the jobs in support of that industry. Hotels and restaurants in the Jobless Interior will fold. It’ll be a catastrophe.

Upper-middle-class office workers feel safe from this. Here’s what no one’s yet talking about, and it’s going to hit the whole middle class: inelasticity.

During the oil shocks of the 1970s, the fuel’s supply only decreased by about 5 percent, but prices went up several hundred percent. The same thing’s going to happen to wages, in the opposite direction. Laid-off truck drivers aren’t stupid. They’ll move into other trades, driving wages down. They’ll go into code boot camps. We’ll see wage inelasticity: a small increase in labor availability will cause wages to plummet, disproportionately, and beyond what most people expect. It will ripple throughout the entire middle-class job market. No job is safe. Will there be computer programmers in 2030? Without a doubt, there will be. Will they make the money they do now? I doubt it.
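The oil-shock arithmetic, and its mirror image for wages, can be sketched with a constant-elasticity approximation. The elasticity figures below are hypothetical illustrations of my own, not measured values:

```python
def implied_price_change(quantity_change, demand_elasticity):
    """Along a demand curve with elasticity e, %dQ = e * %dP,
    so a quantity shift of %dQ implies a price move of %dQ / e."""
    return quantity_change / demand_elasticity

# 1970s oil shock: a ~5% supply cut against highly inelastic demand
# (hypothetical e = -0.02) implies a price increase of several hundred percent.
oil_price_move = implied_price_change(-0.05, -0.02)   # ~2.5, i.e. +250%

# Labor market, same logic in reverse: a 5% influx of displaced workers
# against inelastic labor demand (hypothetical e = -0.1) implies a ~50% wage drop.
wage_move = implied_price_change(0.05, -0.1)          # ~-0.5, i.e. -50%
```

The point of the sketch: when demand is inelastic, the price response is the quantity shock *divided* by a small number, which is why a modest wave of displaced workers could move wages far more than its headcount suggests.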

This notion may seem far-fetched, but consider some of what our society does already to limit the labor supply, just because the stakes are so high. We imprison so many people, I would argue, for this reason; we are, then, preventing wage collapse, albeit at an unacceptable moral cost. Draconian drug laws keep people off the job market. Within the middle class, the arms race for educational credentials exists toward a similar end: society self-corrects against wage collapse by pushing people into school, thus out of the workforce, even though the individuals pushed into schooling (often, unnecessary for the jobs they’ll be able to get) must accrue unsustainable debt in order to be there.

Self-driving vehicles will save lives. Without a doubt, we want to see them developed and used. However, if we don’t have a universal basic income (and I doubt that we will) by the time this disemployment time-bomb goes off… we are so, so, so fucked.

Green-Eye Island

Millennials recognize corporate “work” for what it is. Yet, they still go. They have no other choice. Why do they put up with it?

Here’s a parable, perhaps a riddle, that explains it; and the counterintuitive answer gets to why it’s so hard to predict future human behavior.

On Green-Eye Island, it’s illegal to have blue eyes (to simplify, everyone has green or blue eyes). If you know that your eyes are blue, you must leave the island at sunset. However, no one discusses eye color, and there are no mirrors. Of the people who live there, exactly 10 have blue eyes. These people are perfectly logical and follow the rules to the letter.

They’ve lived in harmony, each blissfully ignorant of their own eye color, for years. Everyone can see the others’ eyes: a blue-eyed person sees 9 others with blue eyes; a green-eyed person sees 10.

One day an outsider, the Man In Black, comes to the island and says, “At least one of you has blue eyes.” What happens?

The intuitive answer is, “Nothing.” He is not telling them something they don’t already know. Right? In fact, the answer is: ten nights later, all the blue-eyed people leave. This is a weird result. On the surface of it, the Man In Black offers no new information; yet, he causes a change in behavior.

Why? It works like this. If there were only one blue-eyed person, the information that at least one person has blue eyes would be new to her; not seeing anyone else with blue eyes, she’d know that her eyes are blue, and leave the island that same night.

If one night passes and no one leaves, this means there are at least two people with blue eyes. If so, then each of them will see only one other person with blue eyes and know that they have to leave, on the second night.

So, let’s say that two nights have passed and no one has left. This means there are at least 3 people with blue eyes. And so on. In the example where 10 people have blue eyes, nine nights pass and no one leaves the island. Each person with blue eyes realizes that there are, in fact, at least 10 people with blue eyes… and seeing only nine others, they must leave.
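The induction above can be checked with a short simulation. A minimal sketch in Python; the agent model and night-counting convention are my own framing of the puzzle, not any standard formulation:

```python
def departure_night(num_blue, num_green):
    """Simulate Green-Eye Island after the Man In Black's announcement.

    Each islander is perfectly logical: one who sees s blue-eyed others
    deduces that their own eyes are blue on night s + 1, provided no one
    has left on an earlier night. Returns (night of first departure,
    number who leave that night).
    """
    assert num_blue >= 1  # the Man In Black speaks the truth
    eyes = ["blue"] * num_blue + ["green"] * num_green
    night = 0
    while True:
        night += 1
        # Who deduces their own eye color tonight?
        leaving = [i for i in range(len(eyes))
                   if night == 1 + sum(eyes[j] == "blue"
                                       for j in range(len(eyes)) if j != i)]
        if leaving:
            return night, len(leaving)
```

With ten blue-eyed islanders among any number of green-eyed ones, the simulation reports that the first (and only) departure happens on night 10, and all ten leave at once– a green-eyed islander, seeing ten blue-eyed others, would only deduce anything on night 11, which never arrives.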

Before the Man In Black came, everyone knew that at least one person had blue eyes, but it wasn’t common knowledge. Common knowledge is stronger than that: it requires that everyone knows, and that everyone knows that everyone knows, and that everyone knows that everyone knows that everyone knows, and so on. In the example above, we have nine levels of “everyone knows”, but not ten… until the Man In Black establishes common knowledge.

I doubt this game, played by real people, would go as described. People are not fully logical; they do not immediately deduce everything their information implies, as that would be computationally impossible. Nor can anyone reasonably assume that everyone else is perfectly logical. In practice, probably nothing would happen.

The example above shows, in principle, how the promotion of shared knowledge (everyone knows it) to common knowledge (everyone knows everyone knows everyone knows…) can be powerful.

In Corporate America, there seems to be a similar shift underway, from shared to common knowledge.

Most individuals recognize the absurdity. People enter this miserable contest, chasing the 0.01% chance of becoming so famous, rich, or important that they no longer have to play. It’s ridiculous: they club each other, with the goal of getting out of the room where clubbings occur.

The rewards are artificially scarce and delayed, the game is hopelessly corrupt, and the odds of success are pathetic. As far as game design goes, corporate work is best viewed as an antigame, like an antinovel, but far more artless. While it is (like a game) a process whose main purpose is competition, it lacks the intellectual fulfillment and harmless fun of regular games. Corporate work is not defined by the joy of exploring new territory or deploying strategies, but by the avoidance of artificial unpleasantness: late working hours due to deadlines that serve no purpose, emotionally-charged confrontations over nothing that can nonetheless result in a 100-percent drop in income if one misplays them, et cetera.

What makes games fun (or not) is beyond the scope of this essay, but one factor is their creation of a status hierarchy different from the one in the real world. In a game of skill, the fun is in exploring the game’s structure (architectural and emergent) and climbing the skill ladder– in a meritocracy where an unskilled rich person loses to a skilled pauper. In a game with more luck, the light-hearted fun comes from the fluctuations of the in-game status hierarchy. Even a beginning player might win and be queen for an hour.

The anti-game of work is designed toward neither of those goals. It elects as winners neither the people of merit (as in a skill game) nor the serendipitously lucky (as in a luck-enhanced “party” game); instead, it ratifies the socioeconomic status hierarchy that already exists in the world– it makes an oligarchy appear meritocratic.

Do we need office work for modern society? Probably. We don’t need so much of it. I’d guess that 75 percent of the time and 98 percent of the emotional suffering invested into it is pure waste.

Virtually every thinking person knows that what I’m saying is true. It’s shared knowledge. Yet when a person like me (a Man In Black) risks making it common knowledge, he becomes a pariah. It’s bad for morale. Her “tone” is “shrill”. He’s a bitter loser who just didn’t make it. Et cetera, et cetera, et cetera.

Millennials get a lot of flak for our role in “killing” travel, magazine subscriptions, restaurants, and other things we can’t afford because we don’t have the money that the Boomers stole when they ruined Corporate America. We’re called lazy because we don’t invest loyalty if we don’t expect reciprocation. We’re a hardscrabble, post-apocalyptic generation.

Generation X knew the corporate game was rigged, but it hadn’t become common knowledge yet. The morale problem had not achieved public liminality. We’re the ones destroying morale, one nonexistent avocado toast slice– I literally didn’t know that it was a thing; and is it toast with avocado in the dough, or as a topping?– at a time.

We’re the Men (and Women) In Black who come to the island and state what everyone already knows… and that’s why the people at the top of society hate us so much.

Why I Didn’t Do It

Forgive me if I don’t find the best order to use in telling this story. Life is chaos; chronological order may dissatisfy. And, since this narrative continues into the future, I have no idea how it ends.

I left a high-paying job in finance, early in 2008, to work at a startup. I had, one could say, a naive, rosy view of technology and the nobility of its place in society. I believed that if I became a great programmer, I’d both have a positive effect on the world, and earn my own reward. I wrote code, I wrote words, I read a lot, and I worked my ass off. That hasn’t changed.

My first taste of what one could call “fame” came in March 2011. A now-deleted essay hit Reddit and got 30,000 views in two days. In July 2012, I wrote “Don’t waste your time in crappy startup jobs.” At the time, I wasn’t advising people to avoid startups– only to be selective about which ones to work for. That post received 135,000 views in one day, and about 250,000 over its entire life.

Through this, I made some enemies. Advising tech employees to negotiate with employers does not earn love from all corners.

I removed those “hit” essays in February 2016, after receiving a few not-credible but disturbing physical threats. I intended to restore them, but a technical mistake (partly mine) led to their permanent deletion.

If one wants to find them, the Internet Archive (“Wayback Machine”) is what I’d recommend. The problem with my earlier writing on technology is that it has diverged from my interests and, to a lesser extent, from my values. I spent years trying to inject efficiency and integrity into venture-funded, private-sector technology. I no longer have faith that I, or anyone, can improve it. My aim in 2012 was to save it; in 2017, my goal is to minimize harm (especially, to myself) from its inevitable Untergang.

I experienced an aggressive public attack, starting in the fall of 2015. I was “de-platformed.” To wit, I was banned from Hacker News and Quora on false, defamatory pretenses. Why was I banned? It had nothing to do with my conduct on either site. First, I suggested that, instead of enduring the creep of micromanagement and surveillance, software engineers might consider collective bargaining. Second, I wrote a blog post that Paul Graham thought was about him– it wasn’t. Third, Y Combinator abused its power as an investor in Quora to force a ban on my account. It would have shut the company down, costing 120 innocents their jobs, had Quora not complied.

It must seem bizarre that I’m still upset about website bans from two years ago. In fact, I’m glad those sites banned me; they were monstrous wastes of time. I’m disgusted by the defamatory pretenses they used to do so, and the public statements they made. Their goal wasn’t to get me off the sites (I was a top contributor) but to damage my reputation. In a normal industry, such things would have no effect. How many industries or careers are there where a website ban could be used as a reason not to offer someone a contract or job? I can think of only one: venture-funded technology– that is, startups and ex-startups like Google and Facebook.

Leaving finance for a startup, in 2008, was a failure of career planning. That’s on me, and me alone. By doing so, I locked myself, for a time, in a weird, incestuous industry where petty gossip drives careers. It is also an industry whose values and mine have diverged.

In early 2016, I was informed that I had been turned down for a job because of these bans. The perception was that I’d been humiliated by Dan Gackle and Marc Bodnick and failed to strike back. This petty gangster shit ought to be beneath me.

I don’t want to “strike back”. I don’t want a damn thing to do with those sick fucks. Revenge keeps you involved. Life’s too short.

I spoke to a public relations specialist about that experience. She asked me what money I would have made if I had gotten the job. I told her. She laughed.

“As smart as you are, you’re concerned about a startup job making $XXX,000?”

It amused her that, the stakes being so low, I’d even care to consult a PR coach at all. Here’s what I had to explain to her: the rest of the economy doesn’t want people from the startup world. (There are good reasons for this; most of us are sociopaths.) Often, we get stuck in it. In the tech industry– startups and ex-startups– it’s usual that one has to change jobs every 18 months to have a career, because those companies don’t invest in their people or promote from within. In real careers, that’s a sociopath’s résumé.

There are many undisclosed dangers of private-sector technology. Yes, it pays well, relative to most other careers, in the first five years. Still, it maroons almost everyone who enters by middle age– and “middle age” in tech means 30. The job-hopping résumé that’s necessary in private-sector technology looks terrible anywhere else. Silicon Valley may think that it’s the future, but the rest of the country looks at five jobs in six years and says, “Nope.” Those who enter the startup scene often ignore the high probability of being stuck there. They think they’re younger and more invincible than they really are.

I ought to admit that I’ve never been great at processing the bizarre adversity that started with my first attempts to improve the tech industry. I have nightmares and panic attacks. I Google phone numbers I don’t recognize. I watch my back, especially in large cities.

The anonymous threats, the unjustifiable closing of doors, the necessary vigilance… that took a toll on me in 2015 and ’16. For an example of what I was going through, a homeless person in San Francisco chased me, brandishing a stick. He told me not to “fuck with” certain people, whom he named.

I hit rock bottom around March 2016. It wasn’t that I gave credibility to the death threats. Those came from high-placed people in Silicon Valley who had too much to lose, and I lived in Chicago, so I perceived myself as out of their way. Looking back on it, their objective wasn’t physical harm. Their work was incompetent, and intentionally so. Rather, they wanted me to speak up. They knew I would, and I played into it. Why? Because it sounds utterly fucking nuts. If I stand up and say that, one time in San Francisco, a person associated with Y Combinator sent a homeless man to harass me, I sound insane. It seems bizarre and unreasonable, because it is. However, it happened. I wish I were making it all up.

Even I have trouble integrating these experiences, years later. I’ll confess to this: the other-than-real aura of certain events in the 2010s has led me to seek professional help in processing them. Sometimes, the normal reaction to abnormal occurrences requires that.

At that rock-bottom point in March, I was considering my own exit. Why? When I wrote about open allocation, or organizational dynamics, or programming languages, I held a certain opinion. Namely, that private-sector technology was a well-intended but wayward industry. There were bad guys, sure, but good guys as well, and the good guys could win.

Quora seemed to be the good guys. (Ha!) Even Y Combinator seemed, at one time, to operate with moral decency. I had this sense of computer programming as a noble activity; we were automating away worlds’ worth of undesirable work. I learned, abruptly, that I was wrong about almost everything. I realized that I’d invested almost 10 years in an immoral career.

Our other favored debates seem so small, in comparison. One can argue about the merits of Haskell versus Python, or Bayesian models versus maximum likelihood, but to what point? These technical matters are hills of sand compared to the shit mountain that is our industry’s ethical failure.

I had a hard time accepting the role I had played. Yes, I experienced death threats and attempted blacklisting. From an objective external perspective, I’m not a sympathetic party. First, I chose to work in the tech industry. Second, by revealing unethical and illegal activities to the public, press, and authorities, I “bit the hand”. Third, my experiences raise questions but don’t answer them. I’ve proven corruption in Silicon Valley; do I have a fix for it? I don’t. Fourth, I must confess to my immaturity while the worst fights (2011 to ’15) were going down. In one case, my revelations of illegal practices led to numerous successful lawsuits against the company. Am I a hero? Nah; I did it to settle a grudge. I did a good thing, but my intentions were pedestrian. If I represent my story with honesty, I must admit this.

So, there I was, in March 2016, doubting whether I wanted to continue existing. Harassment and defamation from people who are powerful in one’s industry has that effect. Believing you’ll never get a decent job again (false, as was later proved) because of a Quora ban (tech is petty; it’s plausible) has that effect. Spoiler alert: I’m still alive. As a general rule, I’m not suicidal, for two reasons.

First, while I don’t subscribe to literal religion, I find it plausible-to-likely that (A) there’s more to consciousness than we see on the surface, and (B) that my conduct in this life matters. So, I see no upside in self-violence, even when it tempts. There’s no guarantee, in any event, that it provides the cessation of existence that, in darker moments, I might desire, whereas there’s a certainty of emotional harm to the people who remain.

Second, when I get to that point, I often pretend I am dead or dying, just for the exercise. “I’m dead already; what do I do now?” We’re all terminally ill, after all; we just don’t know the timeframe.

Usually, I can come up with something worth doing. Perhaps it’s as pedestrian as cleaning the cat’s litter box. I ask myself how much life, in the current state, I can tolerate… and then figure out what I can do in that amount of time. Let’s say I decide that I can tolerate 6 more months. If I rushed and left editing to posterity– I have too much pride to do that, unless necessary; but it’s what I’d do if diagnosed with a terminal illness– I could finish my novel, Farisa’s Crossing, in half that time. That’d give me a valid reason to kick around for a few months, right? I find that, once I get to work on something I care about, that wish for a long sleep (which may or may not be what death is) dissipates.

It was at this bottom of night that I started writing Farisa’s Crossing. I figured it’d be a 60,000-word book. After several rounds of revision, and several to go, I’m on target for 175,000. That’s only the first book. I expected the amount of work involved in writing a significant (as opposed to merely publishable) novel to be high. It’s much more than I expected, but it’s fun work. As Camus said of Sisyphus, “One must imagine [him] happy”.

I found that I enjoy fiction more than I enjoyed tech writing. I’ll be publishing it in a year or so. There’s a lot to figure out, on that front. We live in a time where some of the best work is self-published and where any celebrity could get a prestigious house to print garbage. So, I view the process as unpredictable. My job, though, is to write significant work– and maybe, for once, give some value to what I’ve experienced.

Over 2016, for reasons mixing protest and privacy, I accelerated my own de-platforming. It was bad for my reputation to be banned from Hacker News and Quora on the defamatory pretenses that were chosen, but it was good to be banned from them.

What I realized, that year, was that the addiction to internet microapprovals had damaged my focus. It became hard to read, much less write, significant work. Ten thousand words became “too long” to read. In online magazines, even for excellent, enjoyable articles, I’d find myself checking that side cursor for total length. “Are we there yet?” “Are we there yet?” Social media feeds the monkey mind. It leads to a loss of discipline.

I quit Twitter in November 2016. Like I said, there was an element of protest, and this may have been rash. When you’re publishing a book, you need “platform”. I burned mine down. I had 2,600 followers. If I joined again, I’d start at zero.

Now, I am facing the question of whether and how to “re-platform”, as I want Farisa’s Crossing to succeed. Should I rejoin the world of 140-character insights and @-mentions? Should I start batting out 750-word blog posts that say the same thing as one from three years ago, but might “go viral” this time?

I know I can “re-platform”. I could get 10 times the attention I had at my peak. But at what cost? When I used social media, I developed unhealthy obsessions: famous followers, page view counts, blue fucking checkmarks. Do I want that in my life again? My sense is that I don’t.

It might be my age, but I enjoy books more than websites these days. Some promises of technology have been fulfilled. Most have not. The industry sucks, and it’s not getting better.

What ought to have been the first sign of broad-based moral corruption was this: in 2011, I remember someone saying she wanted to “demolish” a competitor. Not “we’d like to build a better product” but “we want to end them.” (They’re still around.) See, it’s valid and usually moral in business to compete. If another firm suffers because one offers a superior product, that’s not something to be ashamed of. However, taking joy in the other’s destruction– or, in today’s language, “disruption”– seems perverse. Why wish for another’s failure, as opposed to pursuing one’s own excellence?

It’s a sad fact, but most of what we do in technology is destructive. Very few of us make new things under the sun. Most of us make business processes cheaper. There’s nothing wrong with that in itself. We might think, naively, that the value we create would be reinvested in research and development. That’s not what happens. Businessmen lay people off to pay their own bonuses. We’re the ones who make that possible. Society gets worse with each iteration, and it’s our fault.

Then, is it a surprise that we fail to arouse public sympathy when we can’t afford houses in the Bay Area? Or when we suffer age discrimination at 30?

I don’t know what life’s ultimate purpose is. Though I don’t subscribe to literal religion, I tend toward anti-nihilism, like Farisa. There must be a purpose, I can’t help but feel. What is it? It’s not to destroy.

Life’s purpose is not to code people out of jobs. It’s not to wreck the reputations of innocents on social media. It’s not to get people addicted to meaningless social microapprovals. Whatever imperative I can find, in the moments when the darkness goes away, points in the opposite direction.



There’s a game called Universal Paperclips in which one plays the villain: a paperclip maximizer, or an AI whose purpose is to make as many paperclips as possible, at any expense. The result of this, should the thought experiment become real, would be our own quick death; the machine would want our matter for its own work.

As someone who has worked in the tech industry for more than a decade and has nothing to show for it, I think of paperclip maximizers often. I didn’t get rich. I didn’t change the world. I know approximately 47 programming languages, but who cares? I’m 34 years old, and the vast majority of my time in this industry has been pure waste and an embarrassment.

There’s one thing I got from the tech industry. Although I developed the illness beforehand, my panic disorder really came into its own thanks to open-plan offices and startup health insurance. It didn’t help that, when I was finally on the mend in 2011, I joined Google and had a manager who provoked attacks for his own amusement. That was fun.

If I hadn’t gotten myself stuck in the tech industry, the condition would have fully remitted by now, if not several years ago. Instead, the fight has gone on for a decade, and I’m not fully out just yet.

So, my souvenir from the tech industry is, rather than some neat futuristic bauble, a defect in an ancient part of my brain, the amygdala.

When I grew up, in the 1980s, we learned about what technology might do one day: holiday lunar trips, robot servants, an end to illness, certainly an end to work except for the most rewarding kinds of it. And what have we actually achieved? Fucking Bitcoin. A 140-character President. Literal fake news. That’s what we have to show for ourselves.

As private-sector programmers, we’ve unemployed a lot of people: we’ve annihilated hundreds of millions of jobs. Some of these people got better jobs; many didn’t. We never cared when it was happening to other people. Now we have “Agile Scrum” and Jira and open-plan offices, and the surveillance system we built sits over us, its passive-management eye always watching.

In what we do, as private-sector programmers, where is the honor? There’s none. We are a failed tribe that has made rich people richer– even at our own expense. If we’re lucky, our work will be erased and we will be forgotten.

This may explain the Fermi Paradox. Perhaps there is a plateau of mediocrity at which, though a civilization could continue to innovate, it chooses not to. Perhaps it does not go the way of violence, but bored purposelessness. Perhaps we are not totally alone in the universe, but all those other supposedly intelligent civilizations are mired in thousands of years of user stories and TPS reports. Seems unlikely, right? Sure. But it’s even more absurd, if we could send a man to the Moon using 1969 computers, that we’re using supercomputers to run Jira and do “user stories” in 2017.

A Middle Manager Learns Zen

For a short break from my work on Farisa’s Crossing, I wrote this parable.

Zen and the Art of Middle Management

A middle manager went to a Zen master.

He said, “I suffer from anxiety. It’s holding me back in my career. With this problem, I’ll never become a True Executive.”

The Zen master said, “I’ll teach you how to overcome your anxieties.”

He studied under the master for a week, and learned how to control his fears and reduce his worry.

A year later, he returned to his mentor to thank him.

“You’ve helped me cut my anxieties to 25 percent. I’m smoother than silk in meetings. I’m Assistant Director now.”

The mentor smiled.

“May I study with you, for another week?”

The mentor nodded.

The manager studied. He meditated. He learned how to calm his own nerves and mute the darker bits of his mind.

He returned, a year later, with more thanks.

“You’ve helped me cut my anxieties to 10 percent. I’m a Vice President now. Almost a True Executive.”

The mentor smiled.

“May I study with you, for another week?”

The mentor nodded.

So the manager studied more. He meditated, from five in the morning to eleven at night, every day for a week.

After much work, he learned how to extinguish his anxieties, to tap into the universal calm, to pull the mind back to its sky-like nature.

A year later, the (ex-)manager returned– with a lawsuit.

“What’s this for?” the mentor asked.

“You ruined my career!”

“I don’t see how–”

“You’ve helped me cut my anxieties to 0.1 percent.”

“That’s good. You are learning to overcome samsara and its poisons.”

“No, it’s not! The job of a True Executive is not to overcome his anxieties, but to offload them to other people. How can I do that now?”

At that moment, the mentor was (dis)enlightened.

Bottom of the 4th. Or: How I Learned To Stop Worrying And Love Revision.

I’m a writer. I’ve been writing on the Internet for about 15 years, and I’ve said a few smart things, and a good number of stupid ones. I “was there” when a number of notorious Internet phenomena happened, and not all have been connected to me. I self-published a card game called Ambition in 2003 that I still get emails about. I’ve had mediocre blog posts get 300,000 hits, and I’ve had great ones get double digits. How? What determines which posts go viral and which don’t? To be honest, if I knew the patterns, then I’d sell said mysteries to people who value attention more than I do; so in full disclosure, I don’t.

I’ve probably pumped 30 million words of writing into the Internet. Good idea? Smart idea? Eh, not so sure. Not all of it has been of high quality. If I do come out with a literary masterpiece one day, I must also accept that stupid Reddit posts from the Bush Administration era may outlive me. It’s an upsetting thought. What if someone, a century from now, took the 100,000 stupidest words of Internet writing that I’ve ever done and made them into a book? Oh, it’d be full of howlers. I’ll probably be cremated, but even that wouldn’t stop me from rolling over in my grave (ash particle by particle) if that nonsense were to live.

For example, I used to get a lot of negative attention related to a bizarre hate page on Wikipedia that asserts I am responsible for some double-digit number of accounts (I shan’t go back and count; it’s not worth it). Some of those Wikipedia accounts never existed. The guy– and I have nothing against him at this point, because he didn’t intend long-lasting damage to my reputation, and because I might be the only person who’d care at this point– just made up accounts and claimed I created them. I did manage to figure out who he is, and what I know about him would discredit him, but I’d be the bad guy if I said more. That whole experience, now 12 years in the past, was just weird. The lesson? Fuck if I know. Stay off the Internet? Did not learn that one. Don’t write? Well… same.

Somehow, I became a successful tech blogger. I got death threats! More than one! At my peak, I was one of the top 10 independent bloggers in the technology industry. Yes, “tech blogger”. Throwing up in your mouth? Good. I am, to think that I once was “a tech blogger”. So, if you’re throwing up, and I’m throwing up, then… we’re “on the same page”. Ugh. I can’t believe I used those words. They came to me and I wrote them. “At the end of the day”, sometimes we “fire off” terrible snot-strings of office-coffee verbiage like we “shoot” emails. Ugh. Fuck this shit; let’s move on.

I burned down my platform in 2016. There are a lot of reasons for this. There are others that I haven’t disclosed. Doesn’t matter here. I decided to start writing fiction. Since I’d managed to get a boatload of attention just by writing here, and for a while I was the most-read non-celebrity contributor on Quora– a sleazy website run by an unethical company that everyone should stay the hell away from, but that’s another topic for some other time– I figured it’d be easy to write “my novel”, eh?

Spoiler: no. Fiction, if you want to write it well, is a much harder game. The standard is much higher.

March – April 2017: I was in between contracts (I had a $250/hour consulting gig, but the ethical ramifications of the work… I’ll just stop there). With the free time, I sat down, then I fucking wrote and wrote and wrote. Eleven days, 134,159 words. At that time, I titled the book Farisa’s Courage. Sent it out for beta reads. Not close relatives, not non-reader friends, but people I knew who read a lot of books and could offer critique. Overwhelming consensus was… it was a Six. Not put that way, not numerically rated, but… publishable, reasonably good, could be a lot better. Not great, not what I wanted. Back to the drawing board.

In truth, I sent that first version out to beta readers too soon (and I thank them, all of them, for having offered useful critique). Well, maybe. Quick feedback is a nice thing, but the book turned out to need more work than I thought it did. If it had been an Eight, I’d have needed only one round of beta reading, not a complete rewrite.

Something learned: writing 10,000 words per day is totally possible. It’s not even always a bad idea. Sometimes, a great chapter comes out of a 17-hour writing binge. It’s not sustainable to write that way, but it can work for short bursts.

When you revise, however, you need to be well-rested. I did perform a revision pass (after several days at a five-digit pace… whoops) before I sent the first version of the book out for beta reading, but I was naive to think that that was enough. For a blog post, one revision pass suffices and you can do it after you write the last word. For a nonfiction book, perhaps two: one organizational pass, and one line edit. For a novel, if you want to write to a literary standard? You gotta be fucking kidding me. There really is no shortcut. Not only do you need multiple revision passes, but you need time to pass so you can edit with a rested brain.

I once thought that great writers didn’t need to revise heavily. The more accurate assessment seems to be that great writers can revise heavily. I mean, anyone can; but great writers are the ones who can perform six to ten rounds of revision with the work’s quality increasing, whereas an average commercial writer wouldn’t get much utility after a certain point. They both start in the muck, the slush, the first-draft munge; but one group has a bullet’s chance in a butt of the 7th draft being better than the 3rd, while the commercial authors might as well stop at three and send it off for line editing.


Like I said, I was between jobs that spring, and I wasn’t satisfied with what I had written, so I spent about 100 hours per week reading books on self-editing, narrative structure, literary criticism, and even the publishing industry. When I needed a break from that, I’d pull out a favorite novel and try to get deeper into what about it worked and what didn’t. That was fun. I wish I got that opportunity more often.

So… I realized how much I had gotten wrong the first time around. Not grammar issues. A copy editor can fix those; a traditional publisher will assign one, and if you self-publish, you must hire one. Mostly, missed opportunities. Places where treasured characters (or loathsome ones) could “come out” more. Late-dropped reveals that were better placed earlier. Conversely, back story given up too soon– you shouldn’t give back story until you suspect a reader craves it, or unless the story would be unreasonable without it. A weak beginning, and a sagging middle-of-the-middle. Opportunities for symbolism and thematic strengthening.

Now it’s November 10th, 2017. I’ve got about 110,000 words written in my rewrite (which uses about a third of the old scenes, though many have been fully rewritten). I thought I had nearly mastered prose composition– narrative construction and characterization were another matter, until about a year ago, when a lot of things clicked at once and I started to understand fiction. Surprisingly, after reading several books on line- and copy-editing, I realized that almost everything I wrote last spring can be improved. Massively. So, I must improve it.

I have a spreadsheet of where each segment is in my revision process. I also use Scrivener. For me, it’s invaluable. I don’t know how I would organize my revision process without these tools. Not wanting to repeat my mistake of last spring, I don’t send anything to beta readers until the 4th draft. Yep, four. They still find typos and mistakes. I don’t know who said this, but there’s a law I’ve heard that 90% of work exists to counteract other work; that seems right. Revision corrects errors and creates (one hopes, a smaller number of) new ones. Anything newly added in the 4th draft is going to be rough-draft material. You can’t get around the fact that your 4th draft has improvements over the 3rd– otherwise, there’d be no point in doing the 4th– and that those revisions, themselves, live in a first-draft state. So, yeah… it’s humbling (if not a bit disappointing) to realize that even 4th- and 5th-draft material will have an error or few.

The key realization is that Sturgeon’s Law (“90 percent of everything is crap”) applies to everyone. Taste seems to be the key differentiator, and the thing that every writer must refine. The difference between great writers and mediocre ones isn’t an immunity to Sturgeon, but the ability to “de-Sturgeonize” themselves. Not 90, but probably 70 percent of the sentences in my first draft look like something that slithered out of a slush pile– publishable if commercial, but not literary. Second draft? 49 percent. Third draft? 34.3 percent. Fourth draft? 24.01 percent? Who’s happy with “75.99 percent good”? I’m not.
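Since I apparently know 47 programming languages, here’s the arithmetic as a throwaway sketch. This is an illustration of the decay I just described– assuming each pass fixes 30 percent of the weak sentences that remain– not any rigorous model of revision:

```python
# Illustration only: if each revision pass fixes 30% of the remaining
# weak ("Sturgeonized") sentences, the weak fraction after n drafts
# is 0.7 ** n -- matching the 70%, 49%, 34.3%, 24.01% figures above.

def weak_fraction(drafts: int, survival_rate: float = 0.7) -> float:
    """Fraction of sentences still slush-grade after `drafts` passes."""
    return survival_rate ** drafts

for n in range(1, 5):
    print(f"draft {n}: {weak_fraction(n) * 100:.2f}% still weak")
```

The depressing corollary: no finite number of passes gets you to zero. You just decide which power of 0.7 you can live with.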

Writing well takes lots of work. “One must imagine Sisyphus happy,” Camus said.

At any rate, right now my model predicts that my novel will come out to 174,300 words and that I’ll finish on June 19, 2018. I think it’s optimistic by a month. This work includes the total rewrite of last spring’s version, several passes of revision, beta reader feedback, multiple targeted revision passes (e.g. dialogue, mood, diction) and a line edit. I’ll probably do a self copy-edit first, in part because a cleaner manuscript is likely to get a better quote from a professional editor (whom, if I self-publish, I will hire).

Will all of this work be worth it? We’ll see. Perhaps I’ll finish my book and it’ll still be a Six. Obviously, I don’t think that to be the case.

If there’s one thing I’ve learned, it’s to embrace revision, up to and including the total rewrite; sometimes, you just have to make whole-book, tear-the-world-out-and-put-it-back-together changes. The only sentences and scenes that can’t be touched are the ones you’d remember if your hard drive died and you had no backup. (You should still back up your work; less-than-memorable fourth-draft prose is a much better working material than a year-old outline on a coffee-stained napkin.) Editing is part of the game. And it’s fun. It’s a different kind of fun from pantsing out a 7,800-word battle scene at 2:37 in the morning, but it’s just as worthy an endeavor as the original writing. In the first draft, you get to watch a movie in your insane little mind and write down what happens. In editing, you get to make it look like a real writer rather than an insane person wrote it.

A truth about writing is that excellence comes out intermittently. You can’t force it. You take risks and some of your sentences surprise you on the second read, whereas others make you want to throw your computer at the wall. Jokes are especially volatile; I’d guess that 30 percent survive revision; the other 70 percent served a purpose– paying myself for busting my ass– but don’t belong in the final product. Sometimes, you have to keep driving through the thick and accept that you’ll re-work everything in revision, more than once. Oh, much more than once.

It’s humbling, to realize that with four revision passes, I’m a better writer than I could ever be in one pass. I look at something I’ve worked on several times and I’m like “shit, I don’t know if I could write that again.” In one draft, I couldn’t.

My first draft, as I said, looks like upper-tier slush. My second draft (minus typos and spelling errors that’d come out in a copy edit) is mid-level commercial-quality prose: gets out of the way, with flashes of almost-goodness, but it’s not something I’d be proud of. My third draft starts to look slightly literary-ish; and by drafts four and five and six, I’m starting to have something that, five-and-a-half days out of seven, I wouldn’t mind seeing in print.

Perhaps, after revising and editing and continuing to read books written by experts on what-the-fuck-I’m-trying-to-do, Farisa’s Crossing will only be a Six. Or an Almost-Seven. That would suck, but one doesn’t write without knowing that disappointment is a possibility. Nine-tenths of the blog posts I start, I don’t finish, much less publish. We’ll see. But I’m going to fight hard against the abyss of non-prose, the blank page (the empty string, to a computer scientist) from which anything is possible but nothing emerges without force, as hard as I can.

This ongoing experience– which I’m writing this as a break from, because, as much as I enjoy it, I sometimes need to do something else– gives me a perspective on a common flamewar within fiction: the topic of “genre fiction”. Ask a bitter creative writing teacher about “genre”, and you’ll hear about how publishers only want to publish “genre trash” and celebrity books– while advances for “real writing” (and real writing, they say, is never self-published, but that’s another misconception in this tangle of buttfail) are low because no one wants to buy literary fiction.

There’s so much wrongness in that complex that I don’t know where to start. First of all, the often-cited claim that genre authors garner huge advances is off the mark. Most genre authors get crappy advances and the same negligent treatment by their publishers that non-bestsellers get in the literary world. If publishers offered $100,000 for every shitty romance novel, supply of shitty romance novels would increase and the price would crater. Second, literary novels often do get major advances and sell well. It’s rare that it happens, because it’s rare in general for any book to get that treatment, and it has more to do with agent clout and auctions than with the quality of the words themselves… but the idea that literary fiction is somehow maltreated is off the mark. Literary authors do get on Oprah and Charlie Rose sometimes. Third, the notion that contemporary neorealism is a superior genre is a bit silly. Its constraints are different, that’s all. When you create your own world, you give up the need to give verisimilitude to that small town in Ohio in 1895; but, in exchange, you have an equally hard job of creating an engaging world that you just made up.

I like literary fiction. If you presented me with a random critically-acclaimed literary novel versus a random self-published epic fantasy novel, I know which one I’d be more likely to pick, and there’s perhaps a bit of irony (although not hypocrisy, since I have no intention of writing an average or “random” book) there, since I’m writing epic fantasy that I’ll probably self-publish. However, I also have to say that the comparison is unfair. Why? Well, because we’re comparing the best contemporary neorealism– the stuff called literary instead of mainstream— against the whole genre of fantasy.

The notion of literary fiction conflates three notions:

  • (P) Fiction in a difficult but beloved (by writers and critics) and often popular genre: contemporary neorealism. No magic, no dragons, no interstellar travel.
  • (Q) Fiction where the prose is polished using an expense of time at least 3 (and often 5-10) times what’s required to make a novel commercially viable.
  • (R) Fiction that does not follow templates that, while often commercially reliable, are sometimes trite: boy-girl romances, murder mysteries, spy thrillers.

If you have P and not-Q, we call that mainstream fiction. If you have P and Q, and the talent to make Q show through, it might be literary fiction. R is a weird turn card here, because a lot of great literary novels have not-R, but generally you need P-and-Q-and-R to be considered fully literary.

If you have not-P, then you’re writing speculative fiction and not literary.

So, what do we call the (not-P)-and-Q writers like me? Well, you can just call me a fantasy author– once I finish the damn thing. Until then, you can use the word “aspiring” and I won’t flame you because I’m busy with, uh, writing. Still, I think the notion that fantasy and science fiction writers can’t be literary is misguided.

As I see it, of the distinctions above, the one that matters most is Q. As for R, I think it misses the point. Plots and characters shouldn’t be cliché, predictable, or one-dimensional. Can one write a book of literary quality in a time-worn genre, like a murder mystery or a romance? Of course. There will usually be more to it than the template, just as Farisa’s Crossing is about a lot more than magic, dragons, and steam-era technology. The truth is: every book has a genre. Great books, arguably, tend to have more than one.

I will say that there is a valid distinction between commercial and literary approaches to writing. Here, literary is divorced from the “literary versus genre” debate, and I’d like to pull out a different word, but I’m at a loss to come up with one that isn’t worse. (Artisanal might work by its original definition, but that word has become so commercialized that I’m going to gag just at the fact that I suggested it.) This distinction tends to be ill-formed in pre-professional writers, who don’t yet know what they want to do with their careers, and there’s nothing wrong with that. I’m not going to say that the literary approach is superior to the commercial one. It’s just different.

Once a writer is established enough to sell books with minimal effort– i.e., the years of rejection and the stupid agent querying process are behind him– does he stop revising at “good enough”, or does he push himself into new territory, wanting each book to be the best he can do, and better than the last?

Some people want to write six books per year. Others want to write one great book every six years. How hard is it to write a book? It can be quite easy, or it can be astoundingly difficult. The truth is that, once one can write at the minimally publishable commercial standard– which, to be fair, fewer than 1 percent of adults reach– it’s difficult to make an economic case for writing great novels. It’s not that mediocre novels invariably sell better than great ones; I don’t think that’s true. I think that literary quality is positively correlated with sales performance. Average readers might not be able to perceive superior prose, but it wouldn’t surprise me if they still respond to it. It has value; just not enough to justify itself on economic terms alone. You do 10 times the work, and you might get a 50-percent bump in your advance.

An author who is optimizing for income, especially if reliable income is the goal, will crank out potboilers, set up a diversified portfolio of work, and eventually have enough exposure and readership that, even if individual books backlist poorly, the total income will suffice. That’s how commercial writing works. What’s wrong with it? Nothing. These books entertain. Then, there are writers who want to play with prose, create memorable characters, experiment with the form of the novel, and have a shot at being remembered after they die. Some of them pursue contemporary neorealism (the “literary” genre) but not all of them do; I consider them (whether in that literary genre or not) to be literary writers. So, Tolkien and Jordan wrote literary fantasy; Orwell and Huxley wrote literary sci-fi; and I’m writing literary steampunk fantasy (and am almost certainly not the first person to do so).

It’s important to note that I make no claim that either approach is superior to the other. If I had to rely on writing for a stable, monthly income, I’d be inclined to spend more time on commercial work than on trying to write the Great World Novel. Given that I’ve spent close to 15 years between finance and private-sector technology… let’s just say that I’ve done far, far more distasteful things for money than writing airport books.

I’m writing epic fantasy, to a literary standard, not because I’m better, but largely because it’s what I want to do (and because I believe I can). Could I make a living writing commercial potboilers? Probably. I don’t look down on people who do. What I intend to do with Farisa’s Crossing— the first book of an epic fantasy series that’ll probably take me 20 years to finish, and that may or may not sell– isn’t necessarily better; it’s different.

I’m past 3,000 words, the point at which the revision demands of an essay on a topic like this seem to climb… and I think I’m done, and time’s scarce. Back to writing.