The Talent Crash

When we talk about technology, most of us assume that, while jobs will be automated out of existence and technological unemployment will become an issue, it’ll most likely be someone else’s problem. At least in technology, we associate ourselves with the unemployers rather than the unemployed. We’re talented, we note. We’ll be OK.

Is that so?

Let me ask a question: what is the universal sign of an untalented person?

Think about it. Write something down.

It’s not being bad at something, because even talented people are bad at many things. No one would pay to watch me play basketball.

Here’s the answer: an untalented person lives on his reputation. That’s it. That’s how it works. Talented people can succeed on their merits; untalented people succeed if they create and exploit feedback loops (wealth and prestige begetting same) in human societies. I suppose that if someone can do this intentionally (and a few people, I believe, can) that would be a talent of its own sort, but most often this is unintentional. With celebrities, it’s called “famous for being famous”. It’s the same thing with business executives, though. I will never be tapped for those jobs; I’m not part of that social network, I wasn’t born into that class, et cetera. But anything those guys could do, I could do better. They’re not talented people. They just have a track record (with support, often invisible, from their social class) that presents an image of professional competence that compels other people to do work on their behalf and keep that impression going. It’s nothing magical.

I don’t think it’s hard to make that case. Looking at the world economy, we can examine the intersection of class and talent. At the bottom, class-wise, the distinctions don’t matter so much because people survive by offering raw labor. There is a reputation component to getting these jobs– a felony conviction is nearly fatal– but it’s relatively easy to stay above that blade. In the middle and upper classes, it gets harder. Talent is hard enough to measure as it is, but talented people are usually doing things other than evaluating talent. This means that our society must trust talentless people to evaluate talent when filling coveted jobs. The results are just as laughable as one would expect, but industrialization is such a win that society can chug along (with 1-2 percent economic growth instead of 5+, but it’s not negative) with mediocre people at the helm. Reputation, and the manipulation thereof, become important in such a world.

Talent doesn’t matter in the workaday world because it’s been successfully managed out of the equation. An adept manager doesn’t bet his company on the intermittent availability of top talent. He tries to find a way to make sure the trains will still run with mediocre people driving them. This is a disturbing realization for me, but my existence on a job site means that, from a cost-cutting MBA’s perspective, someone fucked up. A more capable executive would find a way to replace the expensive, ornery high-talent person with a plug-and-play mediocrity.

Does this mean that society doesn’t need talent? Exactly that, at least in the short term. It should want talent– mediocrities are never going to cure cancer, nor are they going to fix global warming (although, with only mediocrities, we wouldn’t have global warming in the first place)– but that’s a separate discussion. The world benefits from top talent. Do individual hiring managers trying to protect their positions, within workaday corporations that would rather standardize mediocre processes than take a risk on excellence, get what they want from people like me? No. It’s a disturbing realization, but high-talent people need to be aware of it. Google and Facebook are advertising companies, not AI or social-engineering companies. They need a few high-talent people, no doubt, but the fewer of them these companies truly need, the better their executives are doing.

Our society might have want for high talent, but it doesn’t really have much economic demand for it. In light of the collapsing demand for top talent, reputation and social manipulation become more important than ever. Which means that the 85th, 95th, and possibly 99th percentiles are forced to live on their reputations, like talentless hacks. People who could once work with their talents are now forced to fall back on their reputations. Why? Because corporate management, on its own terms, works. The system runs well enough on mediocre inputs. It has become harder to be talented enough to rise above the reputation trap; there might soon be no level of talent that escapes it. That’s a scary notion.

Over the past 30 years, while we weren’t looking, reputation became something more malevolent and far more powerful. There are no fresh starts. The only way to reinvent oneself and try again is to break the last rule in a workplace world that has no real honor left, and to lie, and not just to lie but to support the lie with false social proof that can be bought, like any other commodity, on the Internet. I won’t take a position either way on whether it’s right for people to lie on their CVs or in job interviews. I’ll only note that most people have few other defenses against a more powerful adversary that can manipulate reputation against individual interests. It’s ideal not to have to lie to get jobs, but some people have no better way to fix their reputations and I don’t especially fault them for it. We live in a superficial, stupid world where “Mr. Kim” gets jobs that “Kim” can’t.

As the Internet came online, optimists viewed it as the most important publishing tool to come about since Gutenberg– and they turned out to be right. Yet, we ignored the risks: that our corporate masters would use this tool to surveil us. We now live in a world where not having a LinkedIn profile is, like some people’s natural hair, “political”; and where most people unknowingly sign away their legal rights when they interview for a job (the right to sue over bad references) as well as when they take one (if their company has a binding arbitration clause). Opting out isn’t really possible. The corporate masters have won. In a split second, they can (and do) manipulate this miasma of information that comprises “reputation” and destroy anyone they want. Anyone who doesn’t think the top corporate executives, hedge-fund supercapitalists, and Sand Hill Road king-makers use the same “troll farms” that splattered barf all over the 2016 election… isn’t paying attention.

When the world needed labor, most of us (often self-anointed) “talented” people were able to outrun “the other guy”, if not the bear. This time, the forest floor’s littered with the half-eaten corpses of “other guys”, and the bear’s still coming. We barely understand this surveillance/reputation capitalism beast we unwittingly created, and not one of us is really free of it.

So, let’s discuss Trump. First of all, there’s a perception that Trumpism is about white male “emasculation” as we move from an industrial to a service economy. I prefer not to think of it in terms of emasculation, because any humiliating thing that men should not have to put up with, women should also not have to put up with. However, surveillance/reputation capitalism is objectively humiliating. No one of any gender should have to live under it. What some people see in Trump is a man who beat a bad reputation (which, in his case, he earned) and won in spite of being despised by the upper-middle classes (whom the working classes conflate with “the rich”). By winning in spite of a negative reputation, he slew the dragon; it doesn’t matter (for some) that he’s a ghoul who leaned heavily on reputation’s perversities (e.g., the sexist assault on a woman’s character because of her husband’s failings) to win.

One can dissect Trump and his reputation in a variety of ways. His reputation is negative in the sense that his low character has been obvious for decades, but his “brand” is the only thing he actually built with any competence. How negative could his reputation be, if he could turn it into such a successful brand? He exploited an obvious, degenerate trait in our society: that after the 1980s, being an asshole became a status symbol. There’s probably more to analyze here than there is time to analyze it, and Trump’s defeat of the reputation monster is an illusion. With his wealth and contacts, he never had any trouble getting inside said monster and making it do what he wanted.

Demand for talent, and therefore its market value, seems to be decreasing. This scares me. Taking a long-term perspective, the world still needs talent. Only one-fifth of the world has been lifted out of miserable poverty, and the planet gets hotter every year. However, the world’s running just fine on the terms of the people in charge. From their perspective, they’d probably prefer a world with less talent, so they can’t be challenged. And although there’ve been no improvements in the accuracy of this tool called reputation, it now comes with shocking (if false) precision.

Perhaps not 85 or 95, but 100, percent of us will be forced to live on our reputations, like talentless hacks. It’s hard to come up with an alternative.


Back

What a winter it’s been. Not the weather. That’s been mild. Too mild. I miss the Midwestern winter, with its perfect 23 °F (-5 °C) days and snow. No, what has happened over the past few months wasn’t winter’s fault. My health hasn’t been ideal, though it seems to be springing back into place; and earlier this year, someone tried to extort me. Unfortunately but fortunately, I know this person and I’m not terribly worried.

Enough about that.

I fell behind on Farisa’s Crossing: a couple of months, with mediocre writing progress. That was unplanned, but probably necessary. I doubt I’ll have a ready book by October 1, 2018. If I pursue traditional publishing– and, although I’m likely to turn down anything but a lead-title deal, I ought to at least try it– then it certainly won’t be out till 2019 (possibly 2020). If I self-publish, early 2019 is the best bet. Writing a novel isn’t hard. I’ve written several “books”: essay series and the unpublished early drafts of Farisa; and I’m sure the 905 blog posts (there’ve got to be 40 or 50 good ones in there) that I’ve never published could be turned into some kind of salable book. I could turn out a 60,000-word potboiler in a week (and, if I can’t make it in tech for some reason, I may have to write dinosaur erotica, because I’ve heard there’s money in that particular banana stand) if I had the connections to make it economically feasible.

No, writing a book isn’t hard. Writing a significant novel is hard. I figure that one can expect two hours of reading/research for every hour writing (which includes revising) and five hours of revising for every hour of primary writing. So, divide 150,000 words by 1,500 words per hour to get 100 hours of primary writing, then multiply by eighteen– each hour of primary writing brings five of revision, and each of those six hours demands two more of reading and research– for 1,800 hours. Add 30 percent for administration (e.g., finding beta readers, hiring editors, promotion) and you’re around 2,300. A year of work. Not a year of office work– the 2-hour days that stretch out over 10 because of social nonsense– but a year of real work. Now, it’s easier than it sounds because it’s an enjoyable process and the time flies by. But even then, you’re haunted by the possibility that you might not be as good as you think you are, or people tell you that you are, et cetera. The only way to know for sure is to finish it… and then hope it sells, which is often uncorrelated to literary quality (and usually has the most to do with how well your book is promoted, which depends on internal politics at the publishing house) but of essential importance for getting the next book published.
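Spelled out, the same arithmetic:

    150,000 words ÷ 1,500 words/hour =   100 hours of primary writing
    100 × (1 + 5)                    =   600 hours once revision is included
    600 × (1 + 2)                    = 1,800 hours once reading/research is included
    1,800 × 1.3                      ≈ 2,340 hours with administration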

The good news is that, even in a slowdown on my main project, I’ve been active. I’ve managed to keep my side interests relevant to my overall direction. For example, one of my recent improvements to Farisa’s Crossing was to replace an awkward, on-the-nose conversation with a card game that does a better job of foreshadowing character development. Not afraid to toot my own horn, I used Ambition; although in Farisa’s world they call it ehrgeiz. There was a problem, though. For several years, I’d been aware of a design flaw in the game– and I wouldn’t want to risk my book drawing attention to the game in its flawed state. However, it took me a while (including about a month of trial and error) to figure out how to fix it.

What was wrong with Ambition, in its earlier form? What seemed innovative (a game-ending condition triggered by the losing player) and merciful– the ending at four (originally three) strikes– didn’t work well in practice. It created king-maker scenarios, it forced people into weird strategic positions, and it sometimes ended the game too early and sometimes too late. This wouldn’t necessarily annoy a first-time player, but as someone who wrote the game, it bothered me. Armed with a computer and the knowledge of how to use it, I ran various potential fixes through hundreds of thousands of simulated games and, eventually, figured out how to get the game to end at about the right time. (Obviously, this is a subjective question; but, there are statistically sound ways to look at it.) Now I’m writing an AI for Ambition, because I want people to be able to play the game (against something more competent than a random-legal-move player) without having to convince three other people to sit down for a card game they’ve probably never heard of.
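For the curious, the harness for that kind of experiment is simple. Here’s a minimal sketch– emphatically a toy stand-in, not the Ambition engine: each “round”, one random player of four takes a strike, and the game ends at the strike limit. The shape is what matters: play a large number of simulated games per candidate rule and compare the length distributions.

    #include <stdio.h>

    /* Tiny LCG, so results are reproducible with only the C standard library. */
    static unsigned lcg(unsigned *s) {
        *s = *s * 1664525u + 1013904223u;
        return *s >> 16;
    }

    /* Toy game: one random player takes a strike per round; the game
       ends when any player reaches the strike limit. */
    static int simulate_game(int strike_limit, unsigned *seed) {
        int strikes[4] = {0, 0, 0, 0};
        int rounds = 0;
        for (;;) {
            rounds++;
            int p = (int)(lcg(seed) % 4u);
            if (++strikes[p] >= strike_limit)
                return rounds;
        }
    }

    int main(void) {
        enum { TRIALS = 100000 };
        for (int limit = 3; limit <= 5; limit++) {   /* candidate ending rules */
            unsigned seed = 12345u;
            long total = 0;
            for (int t = 0; t < TRIALS; t++)
                total += simulate_game(limit, &seed);
            printf("strike limit %d: mean game length %.2f rounds\n",
                   limit, (double)total / TRIALS);
        }
        return 0;
    }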

To add to the challenge for the AI-for-Ambition project, I want it to run anywhere, so I’m not using any libraries but C’s standard ones. For example, I wrote my own linear regression solver. (I’m sure it’s slower and less numerically stable than what I’d use under other circumstances; but for my purposes, that’s not an issue.) The game itself doesn’t even use malloc and free, though the training process (which relies on evolving neural networks) does. So, if you want to train an AI game player on your toaster, keep that in mind.
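To give a flavor of what “standard library only” entails, here’s a minimal sketch of such a solver– normal equations plus Gaussian elimination, with toy data. It’s illustrative, not the solver in my codebase, and it inherits the numerical-stability caveat above.

    #include <stdio.h>
    #include <math.h>

    enum { N_SAMPLES = 5, N_FEATURES = 2 };  /* features include an intercept column */

    /* Solve (X^T X) w = X^T y by Gaussian elimination with partial pivoting. */
    static void least_squares(const double X[N_SAMPLES][N_FEATURES],
                              const double y[N_SAMPLES],
                              double w[N_FEATURES])
    {
        double A[N_FEATURES][N_FEATURES] = {{0}}, b[N_FEATURES] = {0};

        /* Normal equations: A = X^T X, b = X^T y. */
        for (int i = 0; i < N_SAMPLES; i++)
            for (int j = 0; j < N_FEATURES; j++) {
                b[j] += X[i][j] * y[i];
                for (int k = 0; k < N_FEATURES; k++)
                    A[j][k] += X[i][j] * X[i][k];
            }

        /* Forward elimination. */
        for (int c = 0; c < N_FEATURES; c++) {
            int p = c;
            for (int r = c + 1; r < N_FEATURES; r++)
                if (fabs(A[r][c]) > fabs(A[p][c])) p = r;
            for (int k = 0; k < N_FEATURES; k++) {
                double t = A[c][k]; A[c][k] = A[p][k]; A[p][k] = t;
            }
            double t = b[c]; b[c] = b[p]; b[p] = t;
            for (int r = c + 1; r < N_FEATURES; r++) {
                double f = A[r][c] / A[c][c];
                for (int k = c; k < N_FEATURES; k++)
                    A[r][k] -= f * A[c][k];
                b[r] -= f * b[c];
            }
        }

        /* Back-substitution. */
        for (int c = N_FEATURES - 1; c >= 0; c--) {
            w[c] = b[c];
            for (int k = c + 1; k < N_FEATURES; k++)
                w[c] -= A[c][k] * w[k];
            w[c] /= A[c][c];
        }
    }

    int main(void)
    {
        /* Noiseless points on y = 2x + 1; expect intercept 1, slope 2. */
        const double X[N_SAMPLES][N_FEATURES] = {{1,0},{1,1},{1,2},{1,3},{1,4}};
        const double y[N_SAMPLES] = {1, 3, 5, 7, 9};
        double w[N_FEATURES];
        least_squares(X, y, w);
        printf("intercept = %.3f, slope = %.3f\n", w[0], w[1]);
        return 0;
    }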

My first approach was to write the most naive player I could think of: a static, layered neural net (e.g., 558-350-150-50-1) that treated each game variable as an input, attempted to Q-learn a “heuristic” valuation for each action in a given state, and then chose the best action. There were about 550 variables; for example, there was a 0/1 variable corresponding to whether a player held the 7♦, and fifty-one more, one for each other card. How’d it work? Not very well, unfortunately.
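To make that encoding concrete, here’s a sketch of the input layout. The indexing scheme is hypothetical (mine differs in detail), and only the “cards in own hand” block is shown; the remaining ~500 inputs (cards seen in play, the current trick, scores, strikes) fill out the vector in the same style.

    #include <stdio.h>
    #include <string.h>

    enum { INPUTS = 558 };

    /* Hypothetical indexing: rank 0 = 2 ... 12 = A; suits 0..3, with 1 = diamonds. */
    static int card_index(int rank, int suit) { return suit * 13 + rank; }

    static void encode_state(const int hand[], int n_cards, double input[]) {
        memset(input, 0, INPUTS * sizeof input[0]);
        for (int i = 0; i < n_cards; i++)
            input[hand[i]] = 1.0;   /* block 0: cards in own hand */
        /* Blocks for cards already played, trick state, scores, etc.
           would fill indices 52..557 the same way; omitted here. */
    }

    int main(void) {
        double input[INPUTS];
        int seven_of_diamonds = card_index(5, 1);
        int hand[1] = { seven_of_diamonds };
        encode_state(hand, 1, input);
        printf("input[%d] = %.0f\n", seven_of_diamonds, input[seven_of_diamonds]);
        return 0;
    }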

That approach can work for perfect-knowledge, deterministic games, but I think the hidden information and probabilistic nature of the card game had the AI player bouncing about randomly, with each iteration of SGD, making little progress; the theoretical convergence of stochastic gradient descent is cold comfort when (a) the environment itself is highly stochastic, and (b) I have better things to do than muck about with learning rates. After 12 hours of training, the damn thing wasn’t playing any better than a random-legal-move player. To make sure the neural net was, in fact, training at all, I used it to fit a static dataset from a few hundred thousand games; while training error decreased (establishing that the neural net itself did work), neither test-set error nor in-game performance improved. My guess is that the neural net was over-parameterized, causing it to pass from an under-fit state to an over-fit state without any true learning.

Writing a decent AI for a card game isn’t hard. If that were the only goal, I’d probably be done: I’d get “dirty” and use rollouts and Monte Carlo tree search and my own knowledge about the game, and it wouldn’t take long to get a passable player. I’m still hoping, though, that I can get a model-free player to work. I want this damn thing to teach me about the game, rather than me having to teach it.

My next approach (what I’m working on now) is to hybridize gradient-based training and evolutionary techniques (e.g., NEAT) that favor simple network topologies. The change I intend to make to NEAT is in how speciation is handled. Rather than down-moderating fitness for large species in what appears to be an ad hoc way– a fitness function ought to be shift- and scale-invariant, meaning you get the same results from f(x) and 17*f(x) – 39, so dividing fitness by species population is mathematically meaningless– I intend to use speciation (if I need it, and if I’m lucky, I may not) to allocate training time. What I want is a sparse, genetic approach to network topology, but a gradient-based approach (with occasional clone-and-reinitialize-randomly steps) to weight selection.
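To illustrate what “speciation allocates training time” might mean– and this is an illustration, not a settled design– here’s one shift- and scale-invariant way to do it: rank species by best fitness and split a fixed budget of gradient steps by rank. Replacing f(x) with 17*f(x) – 39, or any other monotone rescaling, leaves the ranks, and therefore the allocation, unchanged.

    #include <stdio.h>

    enum { N_SPECIES = 4 };

    int main(void) {
        double best_fitness[N_SPECIES] = { 0.12, 3.40, 0.97, 2.10 };
        int rank[N_SPECIES];

        /* Rank each species by best fitness (0 = worst). */
        for (int i = 0; i < N_SPECIES; i++) {
            rank[i] = 0;
            for (int j = 0; j < N_SPECIES; j++)
                if (best_fitness[j] < best_fitness[i])
                    rank[i]++;
        }

        /* Split a budget of SGD steps by rank weight (rank + 1). */
        int budget = 100000, total_weight = 0;
        for (int i = 0; i < N_SPECIES; i++)
            total_weight += rank[i] + 1;
        for (int i = 0; i < N_SPECIES; i++)
            printf("species %d: %d steps\n",
                   i, budget * (rank[i] + 1) / total_weight);
        return 0;
    }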

We’ll see if it works. My prediction is that the naive model-free variant (i.e., here are 600 variables; figure it out) won’t work, but that with some explicit feature engineering (i.e., a pre-arranged design of the network that pools like information with like, similar to how a convolutional neural network exploits the 2D topology of an image) it will get there. I want to avoid rollouts and MCTS in training– they’ll slow everything down– but I’ll probably implement them when I write a user-facing player (in which case, milliseconds and even seconds are acceptable “thinking” time). Those, coupled with a simple heuristic (even linear regression based on 100,000 games of training data) could be enough.
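For reference, a flat-rollout player– a degenerate cousin of MCTS– is only a few lines. In this sketch, a toy game (a running total; eight legal moves; payoff = the final total) stands in for Ambition; the real thing would also have to sample hidden hands consistent with play so far.

    #include <stdio.h>

    typedef struct { int total; int turns_left; } State;   /* toy game state */

    static unsigned lcg(unsigned *s) {                      /* tiny portable PRNG */
        *s = *s * 1664525u + 1013904223u;
        return *s >> 16;
    }

    /* Play the rest of the game randomly; return the payoff. */
    static double rollout(State s, unsigned *seed) {
        while (s.turns_left-- > 0)
            s.total += (int)(lcg(seed) % 8u);
        return (double)s.total;
    }

    /* Try each legal move, average k random playouts, keep the best. */
    static int best_move(State s0, int k) {
        int best_m = 0;
        double best_avg = -1e300;
        for (int m = 0; m < 8; m++) {
            unsigned seed = 1234u + (unsigned)m;
            double sum = 0.0;
            for (int i = 0; i < k; i++) {
                State s = s0;
                s.total += m;        /* apply the candidate move */
                s.turns_left--;
                sum += rollout(s, &seed);
            }
            if (sum / k > best_avg) { best_avg = sum / k; best_m = m; }
        }
        return best_m;
    }

    int main(void) {
        State s = { 0, 10 };
        printf("rollouts choose move %d\n", best_move(s, 1000));  /* expect 7 */
        return 0;
    }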

The AI for Ambition may be one of the last “serious” hard programs I write. I’m approaching 35, which is half of the Biblical “three score and ten”. It’s time to put away childish things. Is programming a childish thing? Absolutely not. It’s a useful skill and I’m glad I have it. Corporate programming, however, I seem to have aged out of. Back when programming was an R&D job, instead of this Agile Scrotum nonsense, software was a perfectly fine thing to do into late middle age. However, the naive belief I once had that the life of a private-sector programmer could be fulfilling… that somehow the world would get back to a state where software engineers were trusted professionals, like researchers, as opposed to rent-a-coders constantly competing with unqualified cheap hacks… well, that’s gone. Silicon Valley is a resource-extraction culture, similar to the Gulf States, but instead of the finite resource being petroleum, it’s the good faith of the American middle class, and the memory of a reasonably meritocratic mid-century, that’s being tapped. It will run out soon; I’m surprised 2008 didn’t drink that milkshake outright. Anyway, as I said, I’m almost 35. Technology’s important and I’d like to still be a part of it, but… the crisis of conscience I have about my industry’s role in gutting the American middle class hits me every day now.

Also, I’m going to die soon. No, I’m not terminally ill; and by “soon”, I mean 50 years, more likely than 5. I’m not dying any faster than anyone else my age; but I cannot ignore that I will someday die.

I find most religion paradoxically nihilistic– “our god is good because he is powerful and he burns dead people who don’t believe he exists”– and I am closer to atheism than to any organized religion, but one of my personal beliefs is that reincarnation is likely true; that existence is eternal (or, at least, very long) but that every moment matters. Fifty years is the blink of an eye, and it’s on me to make my time mean something. I need to get back to work– real work– and finish my book.

One of the things that amuses me is that all adults (and most children, after about six) know that we’re going to die, and yet, we ignore this basic fact. We behave as if it’s not true or doesn’t matter. What would happen if people truly understood their mortality? Corporate America would fall within 5 minutes. Okay, okay; make it 10, because some people would have to finish up in the bathrooms, but the principle holds. Rich people would stop showing up at jobs that, even though they get to give orders, they still hate; the middle classes would leverage everything they’ve got to get some other kind of work; the poor who have no other options would unionize. This basic fact about life– that it ends, and that we have no idea what comes after it– is something we all know, but our denial of it keeps the whole system going. Hold aside for now the unexamined life; we, as a culture, can’t even examine a small (but important) part of life, which is death.

When you do anything important, you have to face death. See, most corporate work is so monotonous and meaningless that one can go completely numb, and thereby forget one’s own mortality. Death will at least be interesting, an escape from the idiocy. Die at your desk? Your boss has to deal with a corpse. That image is humorous enough that one laughs the scare away… for about forty seconds, which is long enough to get distracted and forget about the whole death thing.

Perhaps it’s not because they ignore death that people work in corporate jobs; perhaps they do it to ignore death. Aldous Huxley said that a heroin addict “lives his 24 months in eternity”, while the non-user lives “only in time”. I disagree with the sentiment; personally, I’d rather live in time, and make something of myself (as Huxley in fact did, in spite of this quote) because either eternity or nonexistence (which is not to be feared) awaits, no matter what I do. Yet, on Huxley’s point, I’d argue that corporate work has the same effect, without heroin’s deleterious effect on life expectancy.

The corporate illusion is one of monotonic progress. “We have a better culture.” “We’re going to crush our competitors.” “World domination.” The corporate eros is bland, colorless, and ultimately quite pathetic; but thanatos is completely absent. Companies no longer fire people or have layoffs; they “streamline” and “improve performance”. All of this is a total denial of impermanence and the inevitability of decay. Then, when companies meet reality and cannot continue monotonic expansion, their reaction is paroxysmal and harsh, so much so that individuals within fear for their jobs and standing, and focus only on short-term survival (because corporate cults have convinced people to conceive of a job ending as a death, to the point where we even use mortal lingo). Never, whether in the silly expansive mode or the nefarious struggle for political survival, does the corporate denizen have time to think about the fact that he’s imperceptibly approaching his own demise– the real one, after which job titles and bank accounts mean nothing.

Like Farisa, I’m more of an anti-nihilist than anything else. I oppose the devaluation of everything authentic that I see in the corporate world. I believe that what I do in this life matters. Whether it’s eternal, that’s not for me to know yet, but the possible finitude of existence doesn’t render it meaningless. My job, in this life, is to do something authentic. Something that matters. Something real.

If you do something real, though– something worth actually caring about, like finishing a novel and turning it into a series that stands a chance to outlive you– you confront your mortality. You ask, “What if I die tomorrow?” “What if I never get to finish this?” It’s a harrowing thought. My advice, though I’m in no position to give any, would be to focus on the creative process rather than the end results, especially when so many of those are under external control. That’s easier said than done, though. It’s largely advice to myself, that I should take.

I’ve still been thinking, ideas forming and others dissolving, and I’m back to writing.

Memento mori. Refuse to forget, as the corporate lords require that you do, that death is coming and every moment at least ought to matter.

Startups Don’t Fail (Or Succeed) Well

I’ve said more than my piece about startups, and here I include billion-dollar ex-startups that IPO’d decades ago, and more generally I intend to speak about the Silicon Valley culture. These venture-funded startups aren’t small businesses. They’re large companies, but disposable by design. VCs intentionally push these companies to favor rapid growth over sustainability, and therefore keep them in thrall to investors. That way, if one becomes inconvenient, it can be killed easily: just don’t do the next round. Workers unionize? Dead. Sexual harassment lawsuit? Dead. Founder disagrees with investors on strategy? Dead.

That’s all covered territory. People know that venture-funded startups often fail. It’s not news. Still, the more general 1990s-based mythos used to sell these companies to young people is dangerously inaccurate. They often have no idea what ‘success’ and ‘failure’ are going to look like. They go in unprepared. I must call it out.

People join venture-funded companies:

  • knowing that more than 50% of them (arguably 80-98%, depending on terminology) will fail (“Outcome 0”), and
  • knowing that people who get into successful startups, early, sometimes become extremely wealthy (“Outcome 1”), but:
  • believing (falsely) that their work improves their chances of being executives or founders in their next gig,
  • believing (falsely) that personal performance can increase one’s odds of Outcome 1 or 1A,
  • not knowing what nefarious drama unfolds as a startup matures (either to success or failure).

Let’s first talk about the happy case.

Success

What does success look like for a venture-funded company? Usually, it’s a tired slog to the finish line that leaves most people demoralized. Usually, the founders don’t have the experience or charisma necessary to bring about the required changes on a timetable their investors will accept. Going public may be profitable, but it isn’t fun.

Founders get rich, some of the time. Sometimes, companies succeed in a way where even the founders get screwed. Employees usually get screwed. There are so many ways it can happen; I’ll list a few, and while the list is not all-inclusive, all of these outcomes have happened. Executives (or founders, or investors) can fire people “for performance” and steal unvested equity. They can force “voluntary” de-equitization by using the above threat– effectively, extortion. They can dilute common shares and issue themselves preferred shares. They can sell the company at a loss (putting common shares to zero) while lining up management positions, in the acquiring company, for founders and executives. These are things that I’ve seen happen.

Most of the time, the only way to get considerable wealth (more than is lost in opportunity cost) from a startup is to be an executive, and venture-funded startups are so bad at promoting from within that the only way to make that happen, whether one gets in early or late, is to be good at politics. Can a software engineer make a million when a startup is acquired? Sure, it’s possible, but in addition to the 1-in-10 chance of a decent exit, she relies on the generosity of her bosses, of the founders, and of investors. She’s lucky if she has her job after the cards fall– and, remember, this is the case where everything goes right for the company and it succeeds.

Failure

Ah, failure. Silicon Valley worships it, without knowing what it means. Fail fast. Go fail, young man.

In the real world, failure is devastating. Jobs end. Sometimes, careers end. Houses get sold at a loss. Kids get taken out of schools and lose their futures. People blow their brains out. Though some of these things are rare, they actually happen, and they deserve address.

What does it look like when a startup fails? It’s ugly. Founders and executives put forward a mythos of being on the same ship together, but it rarely plays out that way. When a traditional company has a tough time, there’s a hiring freeze (often, for months before) and a formal layoff with severance. It sucks, but the company does what it can to minimize the damage to employees’ careers.

Venture-funded startups rarely have an honest layoff. They’d rather throw their own people under the bus. Their first instinct is to hide that anything is wrong. They’ll fire 20 percent of their people “for performance” before they admit that they’re in trouble. This is bad for those who lose their jobs. It’s a different kind of unpleasant if you’re in a managerial role, forced to pretend good employees are bad and run bullshit “performance improvement plans” against people who, through no fault of their own, were doomed from the start.

Oddly, hiring rarely freezes or even slows during the startup layoff (the one that the unethical firm fails to acknowledge as a layoff). Why is that? These companies know that they’re going to lose good people– unplanned losses whose gaps they’ll need to fill– when they start acting that way. In truth, showing loyalty to one of these firms is almost never rewarded. Venture-funded startups know they treat their people badly, and assume that any non-founder who’s with the company after four years just isn’t good enough to get another job.

Up until the point where a venture-funded startup’s in serious trouble, there will be no sign of anything wrong. No hiring freezes, no harrowing corporate speeches, no voluntary separation offers. The firm will be aggressively hiring, advertising, and singing its own praises… up until the day that payroll checks bounce.

Let’s talk about that outcome: the out-of-business, game-over scenario. In today’s startup world, it’s rare. I’ve seen companies fail three or four times– reinventing themselves, changing business models, somehow keeping the rank-and-file unaware that “pivoting” is code for “failing but surviving”– and stick around for 10 years. At a certain point, there’s always venture money in the banana stand. Final shutdown could, at least in principle, be graceful.

This is, perhaps, the most emotionally shocking thing people encounter in startup failure. It surprises no one that impersonal business circumstances end jobs. One might imagine it as a stern but respectful conversation in which one learns that one’s company has run out of money, and that one no longer has a job. That’s rare, because a venture-funded startup usually has some book value. The founders and executives, fully aware that the company’s shutting down (or, at least, in a bad enough state that its career value falls below opportunity cost), scramble to manage up to the investors, who will be their prime references and sources of capital in the next gig. Although people will be laid off in restructuring, the word “layoff” can’t be used, for fear of tipping off the few key people the company needs to retain that there might be trouble. Even if the executives and founders don’t believe they can turn the troubled company around, they’re in such an intense investor spotlight that a single unplanned departure can’t be allowed. So, they have to kill the possibility of (true) rumors that the company isn’t doing so well. The workers have to not talk. But they will.

Some of these people will be culled in the concealed layoff, but the company still needs some of its productive capacity. How does it prevent its employees from figuring out the company’s true situation? It divides them against each other: teams against teams, people on teams against each other, and so on. Internal competition doesn’t stop rumors from existing, but it floods the channel with so much bad and politically-motivated information that no one knows what’s going on. It also creates a cloud of hate and resentment that makes it easier to fire more people in case things get worse, which they usually do.

Startups tell the young people they’re trying to ensnare that they’ll either succeed or they’ll fail. That’s vacuously true. They promise that if they succeed, they’ll “hit it big”. That’s true only insofar as a moderate outcome will be treated as a failure by investors, who’ll gladly put an otherwise-prosperous midsize company at a 90+ percent existential risk if the numbers make sense to them. Startups also say that a success will make regular employees rich. That’s a lie. If people make it far enough to collect anything significant, that’s because they ceased being regular employees (which means they had to have been good at politics). Startups also lie about what happens when they fail. They prime employees to imagine failure as a regular business shutdown– impersonal, fair, and not especially embarrassing to the average employees– while masking the collateral damage of everything the founders and executives will do, as the company struggles and squirms on the way to that point.

Fascism– and What I Learned By Struggling Against It

According to this article from Business Insider, Trump has become subdued. He’s not tweeting about “Rocket Man” or making racist comments on the Internet. This is the new standard.

Let’s say that Trump stops acting like a clown, learns to control himself, and “becomes a president”. Or even just an adult. Then, this is when the dangerous times start.

Trump is not very good at fascism. I don’t think he possesses a coherent ideology or could be called a fascist. He is an opportunist and a person of low character who has re-invented some of the fascist’s tools. Whether he has advanced or humiliated the fascist cause, it is too early to tell. My view is that he has done both.

Seven Years Of Struggle

I experienced fascism years ahead of everyone else. It started in the Peak Weimar year of 2011, when the apparent threat of a full-blown fascist takeover of the United States was nonexistent. Even when I was personally fighting fascists, I had little cognizance of why I was in battle, or what their objectives were. I have resolved some of that, and I’ll discuss it here.

Why is my story relevant to such a broad-spectrum topic as fascism? We are now in times of such potential for extremity that individual stories, while they fall short of the standards of scientific data, are something we must rely on. Thus, my seven years of personal struggle against an emerging fascist adversary, one that probably does not yet know it is fascist, become relevant to the whole society.

Before we can have that discussion, I’ll have to discuss what fascism is, and why it’s already here. Donald Trump didn’t bring it. In fact, as I’ll discuss, Mr. Trump isn’t very good at the fascism thing and he’s largely a distraction from the deeper threat that has been building up over decades. Donald Trump’s rise to power is more of an expression of damage than a cause of it.

What Is Fascism?

Write down a 20-word description of what you think fascism is.

If you wrote nationalism or racism or sexism down, scratch those words out for now. Fascism’s nature dooms it to take a nationalistic, racist, and sexist course. However, it does not need to be racist or sexist. It is nationalistic, at its root, only in that it is often deployed as a strategy to govern a nation. It is harder to deploy fascism in a corporation that people enter and leave constantly. In fact, the fascist element of corporate capitalism lives not in the corporation, meaning a single company, but in the overstructure that now connects the fleet of them.

If you wrote down right-wing, scratch that out as well. Fascism is not overly concerned with economic notions of “left” or “right” and will use tools from the whole spectrum. It will be populist and elitist at the same time, with no fear of self-contradiction. Nazis called themselves “national socialists”. Fascism is two-faced and will talk out of both sides of its mouth. It often becomes a company union for a nation’s people, pretending to offer egalitarianism to the strong ethnic majority of the country, while consolidating meaningful power in a small number of people.

If you wrote down oligarchy, you’re getting warmer. But most societies tend toward oligarchy. For example, most monarchies become oligarchies, because one person cannot govern a complex society. Democracies become oligarchies because most people aren’t interested in most issues; representative democracy (controlled oligarchy) is one tool that addresses this. So, is fascism oligarchic? Yes, but not all oligarchies are fascist.

Here’s how I would put it, in a sentence: fascism encourages power to congeal while the people are divided against each other.

It’s the dual opposite of democracy. Democracy encourages us, in our capacity as citizens, to cooperate. In the ideal, we’re all working toward the same goals: we want high-quality infrastructure, a fair legal environment, and not to go to war unless it’s absolutely necessary. However, those who wish to hold or acquire administrative power must compete. The burden of competition is an equalizing force. If those in power serve the people poorly, they can be fired in a nonviolent, impersonal revolution called an election.

Voting, in truth, isn’t only about deciding who gets jobs. Statistically speaking, a single vote almost never has that effect. Rather, it’s about holding the people in those jobs accountable for doing them well. A narrow win is more threatening to the winner’s political future than a decisive one. An incumbent’s loss not only removes the loser from office, but sends a message about the will of the people.

Democracy establishes competition for and within power, in order to prevent unaccountable monopolies. It isn’t perfect, and it doesn’t need to be in order to be democratic. In the long term, democracy makes more good calls than bad. It took way too long, but the U.S. eventually realized that slavery was morally wrong and that mistreating people because of skin pigmentation is unacceptable.

Fascism is the dual opposite of democracy: power unifies, but the people compete. Corporate, political, social, cultural, and religious power merge and become an unyielding fasces – these elements cannot be asked to compete and it is treasonous to try– while the people are divided against each other in endless competition. It’s no coincidence that Nineteen Eighty-Four has Winston Smith perform calisthenics in front of a telescreen: people jostle for rank in such a society, even when it is meaningless.

Fascism, then, inverts democracy’s noble goal of demanding that power serve people, and replaces it with a dual-opposite evil one of holding people accountable to power.

“Holding people accountable.” Remember those words for a second. Doesn’t it sound… exactly like a familiar workplace euphemism?

I’ve described fascism in the abstract. We’ll get to the concrete, dismal artifacts that come out of it, soon. In any case, democracy’s weakness is that it’s slow and careful. It wants to destroy the persistent power relationships and feedback loops (e.g. “money makes money”) that exist in human organizations. Over time, democracy allows the natural human dislike for injustice– not as powerful a force as greed or fear, but a noble drive that exists in us– to drive change. In contrast, fascism wants to speed feedback loops up. It wants to move fast and break things, while at the same time managing society tightly enough that those in power experience no real risk of losing it.

So where do fascism’s racism and jingoism come from? As defined here, fascism’s initial focus is largely on domestic affairs: those in power want to keep it, at any cost. Race, national identity, and foreign aggression need not emerge just yet. Not only should the domestic fascist have no interest in war, but he should find its risks unpalatable. So why does fascism turn toward conquest?

Simply put, people won’t tolerate endless competition, especially if it’s a pointless ‘rat race’ to serve those in power, unless certain conditions are imposed. Fear of a made-up enemy can work for some time. Bordering nations and ethnic minorities can be demonized and blamed for the fascist society’s stagnation or decline. Walls that serve no purpose can be built. Over time, however, people wise up to the distractions. Their resistance grows.

Fascism, in its endgame, realizes the improbability of domestic self-repair. Every circumstance that will make the people tolerate endless competition is tried, except one: conquest. So, a narrative of national or racial superiority must be used. If the people believe themselves superior to everyone else, they might clamor for permission to spill out into the world at large (Lebensraum; Hakkō ichiu; arguably, Manifest Destiny) and plunder. In this case, people will tolerate domestic authoritarianism.

The United States in the 19th century was not fascist– the mechanisms of fascism rely on broadcast technologies that didn’t exist then– but the fascists learned lessons from that era. When the slave-holding elements of society started to lose their moral battles on the nation’s eastern coast, and failed to gain any sympathy in the country’s increasingly important lake-blessed central-north, they brought us into war with our southern neighbor, Mexico. Since Mexico’s heartland has never been its desert north, it had less interest in holding these then-sparse territories, and the U.S. was victorious. This delayed the brewing conflict, in the U.S., for a few years. The vast majority of Southern whites, then called “clay eaters” or “white trash”, were badly served by the existing system; but, the prospect of westward expansion and the more aggressive (and probably impractical) imperial aims of the traitors’ new government, which intended to conquer South America after becoming independent of its mother nation, gave these unfortunate people a sense of stake.

People tolerate endless competition, given its atrocious effects on the poor and weak, and its basic inhumanity, if and only if they have a racial or national predestination myth that sits in their favor. Fascism doesn’t make trains more timely, but it creates so many distractions that people accept a prohibition on calling late trains “late”. Domestic malfunctions seem so unimportant compared to foreign threats and opportunity. When convinced that an authoritarian government is righting the wrong that is the supposed oppression of their superior race, nation, or culture, people lose interest in timekeeping over trivial matters.

“We have a better culture here.” Is that a nation, or a tech company? “We’re so much better than those idiots, we’ll IPO in three years.” Ah, the destiny is manifest.

Fascism, like democracy, is more like an oligarchy than it wishes to admit, and oligarchies (unlike their differently dysfunctional, conservative cousins, aristocracies) always have irregular, loosely-defined borders. There isn’t a clear separation between “those in power” and “the governed”; there are tiers of hierarchy, inner parties and outer parties, and organizations that pretend to be one thing but are another (e.g. Hitler Youth). While the high divide the low against each other, the high also compete against each other. It’s inevitable. The impulse to fight, encouraged in the low, can’t be restrained in the high. Therefore, there is, in fact, plenty of competition for position and influence within a fascist society. Just as corporate managers fight over turf and influence, lieutenants scrap for rank. In fact, the higher prefer to pit the merely-high against other merely-high, and the highest do the same to the only-higher. The competition is not disallowed. Does this refute my claim that, under fascism, power colludes and unifies?

No. The key is that within-elite competition must happen in secret, behind a social barrier. Power can scrap against itself, but it must show a unified front. It is more important to be strong than to be right; ergo, debate and division (although they exist in secret, even up to the highest ranks) cannot be shown. The low must never see the reticence or in-fighting among the high. It may exist; it must be hidden.

Fascism in 2018

So, what is Corporate America?

Most people, when they find themselves under managerial adversity, mistakenly believe they can go over the boss’s head to the boss’s boss, or to the HR department, or seek transfer to another team. How often does that work? Almost never. What usually happens? The employee gets herself fired faster. Whether the manager was at fault rarely matters. The aggressive move must be punished. An example must be made.

Like police in a corrupt regime, managers are encouraged by most companies to protect their own. A manager who accepts a transfer candidate, if that employee is under managerial adversity elsewhere, invalidates the other manager’s judgment, and thereby breaks the expectations placed upon rank. So, it almost never happens.

To make it creepier, managers protect their own across companies, even when those companies are supposed to be in competition.

Consider the importance of reference checking in a person’s career: it shows us that a manager takes the word of a manager at a competing company over the account of the person who lived through something. If we expect competition (in this case, for labor) among employers, this is paradoxical. If we view corporate management as a nebulous, still-forming fascist party, this makes sense.

Middle managers are the outer party– the Winston Smiths, who hold belief most fervently in the system, because to lose it is traumatic and life-altering. Above them, the paradoxically brutal but preening– a synthesis of the worst masculine and the worst feminine traits– faux-mandarins called executives are the inner party.

An employee under managerial adversity rarely gets another chance: another manager, or a higher manager, or an executive, or an HR “professional”, is unlikely to contradict the word of a manager. The Party must endure for a thousand years, and nothing else matters. It cannot show division.

Corporate America is a one-party system: the managerial hierarchy never shows its debates to anyone below. While there are debates among managers over how things should be handled, they ought not be visible to the managed.

Wait. Surely I couldn’t mean…?

Yes, I do.

Corporate America is not “like” fascism. It is not “somewhat” fascist. I do not participate in the left’s annoying tendency to disparage regular conservatives with the label of fascism, but Corporate America is not conservative. Instead, it is nihilistic, radical, and abusive. It is as anti-conservative as were the left-authoritarian nightmares that blighted the 20th century in the name of communism.

Corporate America is fascist.

It has gone Stage IV and its metastases are all over our culture and politics, redirecting the blood supply and devouring healthy tissue. Notions of truth and decorum have been some of the first cultural organs to fail– now, we have a president using a term, “fake news”, that rightfully applied to internet tabloids, to disparage critical, legitimate coverage– and they won’t be the last ones to break down.

To wit: Donald Trump, a bullying birther Boomer billionaire businessman, successfully ran against decades of damage done by… bullying birther Boomer billionaire businessmen. Fascism is consistent in its dishonesty. Donald Trump is the corporate “turnaround expert” who is, in truth, a con artist.

On that, note that corporate turnarounds usually speed up the company’s demise. Why do they tend to make things worse? Often, a corporate board’s solution to the failures caused by executives is… not to turn the company over to the workers who have been failed, because that would be socialist!… but to hire new executives from the same toxic social elite. No one is truly accountable, nothing changes, and improvement is impossible.

The Capitalist Party is not especially capitalist, any more than the left-authoritarian disasters that called themselves Communist Parties were devoted to communism. The contemporary Capitalist Party is averse to capitalistic competition it cannot control, and will crush competitors even at an economic loss to the acting organization, because the Party must endure. Rather than being capitalistic in any meaningful sense, it’s a social elite, defined (like any other) by connections and corruption more than wealth, that finds democracy and even libertarian capitalism to be too volatile and has decided that the collapse of the American nation– politically, culturally, soon economically– is a fair price to pay for its own continued dominance.

Are we screwed? Well, we might be. There’s still time. Perhaps we’re five years away from the “fireside catch-up sync” meetings and ten years away from inconvenient minorities and intellectuals being sent to “performance improvement” camps. (“Meeting expectations will set you free.”) It is not too late to defuse the conflict. We must first acknowledge that it exists.

Make no mistake on this, though: we will finally solve our corporate problem or it will finally solve us. The difference is in the likely numbers: over the next 50 years, 50,000 of the elite, or 50 million of us. If these numbers seem ridiculously high, consider that the U.S. Healthcare Dead number about one million.

I can’t predict how this conflict will play out. Right up until the moment when it starts in earnest, it will seem like a distant threat or even an impossibility. Moreover, it’s going to involve quite a large number of countries. The fight against injustice and fascism is a global one. Though I am an economic populist, I cannot tolerate the racism of the Trumpist movement, and I find anti-globalism to be ridiculous. (I’m a writer and, as a creative person, I’m a globalist. The majority of the audience I want to reach is outside of my country.) Like technological automation, globalism is inevitable and too important to ignore. It must be done right.

“May you live in interesting times.” We do.

I hesitate to call the present moment, at any given time, the most dangerous or difficult of years; it is like the persistent bias humans have toward calling their own point in life the hardest one. This said, I find 2018 to be a dangerous year. We have lived for a year under Trump and, not only have no nuclear weapons been used, but our stock market’s in record-high territory. We even have low unemployment, if imprisoned African-Americans– and a person with a felony conviction might leave incarceration but, having to disclose the fact on job applications, never escapes the imprisonment of imposed social and economic disability– are not counted. Things feel sorta-okay, if you’re white and cisgendered and not under 30 and live 50 feet above sea level and (most importantly) are not paying attention. Things are not okay.

Our guard is at risk of fatigue. We are so exhausted by our president’s insane, racist, idiotic antics that many of us fail to notice: the government shutdowns; the declining American life expectancy; the unaffordable welfare checks– sorry, “tax relief”, as I forgot that welfare meant “those people”– for the rich; leftist complicity in a casual-sex culture that commoditizes human affection and alienates the young; the technology companies that mask layoffs as firings for performance; the 68 microkelvins of global warming per day; the re-emergence of racist tendencies that don’t belong in this century (and didn’t belong then); and so much more that a complete list would double the length of what I’m writing here. All these real misbehaviors, failures, and issues go ignored while Donald Trump shoots 140-character wads into the Kleenex of the Internet. Regular corporate evil, which at least accedes to the weird (and also too complicated for one essay) doctrine called political correctness, seems utterly moderate now.

Actually, let me say one thing about political correctness (“PC”), for one reason: the hatred directed at this straw dog helped elect Trump. PC is a form of divisive class warfare that (1) uses historical evils like sexism and racism as a bludgeon against the lower classes, by pumping up the pretense that our poor are the sole source of continuing prejudice, and (2) creates the impression among the lower classes that the left-leaning upper-middle classes are the aggressors. In fact, it’s extremely rare for a person to be fired over a stray dick joke– as opposed to, say, a persistent pattern of predatory behavior (actual sexual harassment) that merits far more than the loss of a job– but most people in Trump country think it’s commonplace. They feel surveilled and threatened.

PC lets the rich blame racism and sexism on the poor– to say that racism is a “redneck” or “Southern” problem rather than an American problem– and thereby convince us that we must accept these ills, unless we’re willing to kill the poor, presumed to be incorrigible. (That many women and non-whites are part of the working class is ignored.) PC is the corporate solution of saying the right thing but doing nothing, e.g., failing to pay and promote women fairly, but firing some low-status man for saying “bitch” on a Tuesday. Though it remains extremely rare that people are actually fired for stray dick jokes, PC has taken the smugness of my native social class (left-leaning, upper-middle-class, highly educated and therefore culturally conservative) and presented it as a cudgel that threatens people’s jobs. We must correct the record on this. No person with a heart can tolerate actual sexual (or racial) harassment, but no person with a brain wants someone to get fired over a stray joke.

In truth, the enemy isn’t the straw dog of political correctness. Rather, it’s a culture of so-called “performance” surveillance that workers face. Almost no one gets fired over a stray joke or swear word; but people do get fired when employer-owned weapons of war detect that they’re only delivering 9.99 packages per hour. Like Trump, PC and the rage it inspires are distractions.

Trump Trumps Trump

I am not especially worried about Donald J. Trump himself. He seems lazy, and I don’t think he could pull fascism off.

A fascist leader must appear sacrificial. Adolf Hitler remained a bachelor and presented himself as celibate; he claimed that he was married to Germany. The dictator must appear godlike, mechanically inevitable, and infallible, but also stoic and not the least bit self-indulgent. He can’t be seen to enjoy his wealth and power, any more than (to crib an apt quote from a terrible man) he can be seen in a bathing suit.

Mr. Trump, on the other hand, is a decaying narcissist, ridiculed and despised even by those around him, who uses crass antics to make up for his glaring lack of charisma. Is he dangerous? Absolutely; he is a powerful man who appears to be deranged. One cannot rest easy while this is the case. There’s a lot of random damage he can do, that may or may not be correlated to a fascist ideology that he doesn’t seem to understand well enough to possess; but, is Trump a credible risk, in terms of a fascist takeover? I would say “No”. A long-term national crisis, or even another 9/11, he would handle too incompetently for that. He could do a lot of harm through incompetence and petty malevolence, but he would trip over himself and be set aside before he could become the next Hitler.

Currently, the main element of Trump’s damage seems to be making his weird, self-involved proto-fascism seem less deranged and less harmful than it actually is. This has created a situation in which regular, corporate fascism might be able to sneak in undetected.

I don’t worry about Trump. Rather, I worry about some 39-year-old Silicon Valley tech founder– today, he might be a non-entity who hasn’t even raised a Series B yet– who will come after Trump. He will present himself as the face of moderation. Like Trump, he will use personality (but, in his case, an apparent bipartisan competence) to mask his lack of meaningful apparent ideology, knowing that his actual ideology cannot be revealed until he has gained power. And what ideology will he be hiding? I doubt, in his first year as president, he will conceive of himself as fascist. He will simply continue to live by the might-makes-right doctrine that corporate executives have held for decades. He may never form an ideological commitment to fascism; the tendency toward belligerent authoritarianism, inherited from his time as an executive when the stakes were jobs rather than lives, might forever go unexamined by him. It won’t matter.

When Hitler rose to power, Germany had a massive prison population due to long periods of poverty. It wasn’t the Nazis, nor Hitler, who first came up with the idea for an “innovative” (and, if it were invented today, it would be called “disruptive”) new prison layout: an open-air “re-education camp”. The German population knew of these camps, but did not associate them with genocide, since that wasn’t their original purpose. They existed for years as regular prisons– and were popular at the time, being held to improve the performance of industrial society– until they were transformed into weapons of war and, later, extermination.

It didn’t happen overnight. If it happens again, expect the same: a gradual relaxation of morality that delivers us to unanticipated calamity.

The Battles I Fought

Unfortunately, I know Silicon Valley. Therefore I know, with more precision than most, how the next generation of fascists is likely to operate. The breadth and scope of their tool kit would require at least 100,000 words to cover in detail; neither I nor my reader has the time for that.

I cannot predict the future. I do not know how strong their zeal for wealth, power, relevance, and immortality will be, or how long it will stay strong. I experienced a “zero-to-one” variety of Silicon Valley industrial fascism. What their “one-to-10,000,000” phase will look like is unclear.

Let me give a small subset of my personal experiences in the fight against fascism.

Before Battle: up to November 10, 2011.

I had an unpleasant six-month stint at Google. No, this wasn’t fascism– just poor management. The fascism part doesn’t come into play until after I left Google. But hang tight.

My manager there had at least a five-year track record of using phony performance problems to tease out people’s health problems, which he’d then use to mess with them. I believe he eventually got fired for doing this (long after I left) and is now in a regular-engineer role at a less prestigious company than Google. However, judging from his rank and tenure at Google, he probably doesn’t have to work, so the fact that he got fired isn’t much consolation.

As of 2011, the only thing that mattered at Google was your “Perf Score”, a numerical measure of whether you were working hard enough to be set free of present demands and control structures. At a 4.0, you become promotable and transferable around the company. Below 3.0, you’re an Untermensch, and transfer to a project or team that might be a better fit becomes impossible. So, I asked my boss what Perf Score I’d receive if I took on a certain risky project that he wanted me to do (and that I couldn’t say no to). He promised me that, in 2011’s Perfing, I’d get a 3.4. He instead inserted, without lubrication or consent, a 2.8 into my file.

I exposed his dishonesty on a mailing list with thousands of subscribers. People found this to be, and I am not making this word up, “un-Googley”.

That wasn’t fascism. It was a regular old scrap between an abusive manager and an employee who refused to take shit. The fascism comes later– like I said, it started in earnest after I left Google.

People have asked me what I think about James Damore’s “diversity memo”. I don’t. I tried to read one of the myriad versions and couldn’t get through it. If I want to read about ethics, I’d rather read a philosopher than a corporate executive; the executive’s expertise relates only to the question of how an offender evades detection. If I want “self-help”, I’d rather read a psychologist or a Buddhist monk who has meditated for thirty years, in order to get real insight into the mind, than read a rich white male who got hit by a car and had to walk on crutches for two years. By analogy, if I want to learn more about gender, a software engineer at a company is about the last person I’m going to offer hours of reading time, much less give any real authority.

It wasn’t PC leftist “social justice warrior” fascism (which doesn’t have much power) that got Damore fired. It was regular executive cowardice. Google claims to value internal dissent and largely does, insofar as internal dissent is almost always ineffectual. When that dissent has effects and executives take notice, the dissenter usually gets fired. I’ve been in Corporate America long enough to know that private-sector managers don’t fire “low performers”. They don’t fire high performers, either. Being lazy creatures, they rarely know the difference. They fire whoever costs them time.

There’s a lot of organizational dysfunction to pick apart here, but none of this stuff is fascist. Yet.

November 11, 2011

I was not fired by Google. I found another job, and I left.

Somehow, my name ended up on one of those “suspected unionist” lists that get passed around Silicon Valley. These are highly illegal, but Silicon Valley people do not follow laws. They “disrupt” them. It is not useful, here and now, to speculate about how my name ended up on such a list.

My intended next employer called me on November 11, to tell me that they’d rescinded the offer. Why? Someone from Google had called a hit. Beyond that, it took me years of private investigation to get the whole picture.

Was Google, the company itself or its executives, hell-bent on enforcing fascism? I highly doubt it. That’s the scary thing about fascism. The dictator doesn’t need to do all that work. He doesn’t need to know that you (or I, in this case) exist, and he probably doesn’t. There are plenty of brownshirts who will do the ground work for him. The person responsible for the November 2011 hit was not an executive at Google, and I would bet that no Google executives took part.

This wasn’t the only time I lost an opportunity because of an ex-Googler grinding an axe against my exposure of managerial misconduct. It was my first skirmish with brownshirts of the American fascist movement. It wouldn’t be the last. As for corporate fascists everywhere, their numbers increase and they cover more targets; while Google itself has been on good behavior since I left it, I am sure that my name is on numerous “union risk” lists. There was a period in which I couldn’t travel to San Francisco because it wasn’t safe.

February 20, 2013

I interviewed for a job at a large investment bank on February 20, 2013. I didn’t get it. That’s not surprising. I’ve not-gotten plenty of jobs, often for valid reasons: someone else did better, or the job required experience and skills that I lacked, or my interview performance simply wasn’t the best. It is not fascist when I do not get a job I don’t deserve. It’s what should happen.

In fact, noting the date of the interview and the state I was in, I am astonished that I got as far as I did. This was 17 days after my mother died. It would not have surprised me to have learned that I’d performed poorly in that interview, given the timing, and someone’s poor interview performance is a perfectly non-fascist reason to deny him the job. I went on the interview, despite the timing, because powering through these things is what a person does. I did not expect to succeed; I was not at my best.

I found out, several months later, that I had passed the interview. Perhaps I had even done quite well. Like I said, I was shocked. I learned that all but one of the interviewers wanted to hire me, and that the one dissenter objected not to my interview performance but to my having opposed the interests of a previous employer– “bad-mouthing”, and harmless bad-mouthing at that, because Google’s doing just fine.

Fascism draws near! (F)ight, (R)un, (S)pell, or (I)tem?

Were it not for that one tip, from a close friend of the hiring manager, I wouldn’t have known that fascist interference had been a factor. I would have readily believed, concerning any date in February 2013, that my interview performance was poor and that I had been denied the job for just reasons. In fact, it was what I said about Google that led to an unjust denial. That’s scary, to me. The acts of a fascist are rarely detected. When a fascist moves, one rarely knows. How can we fight an enemy whose weapons are lies, secrecy, and confusion?

The “never bad-mouth a former employer” rule is imposed not because it harms companies for employees to discuss them honestly– an individual disgruntled employee sharing truth does hardly any damage at all, unless the truth is so severe in its nature that the company deserves to die– but because our fascist elements believe that even the most harmless dissent must be punished with isolation, starvation, and violence.

Fascism, fascism, fascism. No exaggeration. Fascism, fascism, fascism. Not “like fascism”. Not “right-of-center, legitimate political activity that I am wrongly using the fascist epithet to disparage.” Fascism, fascism, fascism. Actual fascism. It’s not “coming”. It’s here and it has been here for a while. There are people who believe that someone who speaks truthfully about a prior manager, ever, deserves to be permanently unemployed. We must finally solve these people, before they finally solve us.

Why would a manager at an investment bank care that I had said something, two years ago, about a manager at Google? What connection is there between investment banking and web search? None. It is managers protecting their own across companies, because companies no longer compete in any meaningful way, and they haven’t done so for years. The Capitalist Party is more than one company, and it must protect its own high officers.

If the Capitalist Party were not planning harm to its own country, this would not be an issue that I ever faced. And this was not the only opportunity I lost, thanks to fascists.

In 2018, the corporate fascist’s most powerful tool is the threat of a negative reference. What the legal status of job references should be is complicated, and there’s a lot to unpack. At the least, just as New York City nobly banned questions about salary history, we need a law requiring that, any time a job is denied based on a negative reference, the afflicted candidate be told exactly what was said and by whom; we also need a public fund for appropriate legal action. We must deny fascists every tool they would use to divide workers against each other.

It does not matter that I was turned down for one job at one company for an illegitimate reason. The pattern matters. The fight matters.

The fascists’ fight is just getting started, and we seem not to have started at all. That’s a problem.

September 4, 2015

I did a stupid thing. By 2015, I knew that Silicon Valley was corrupt, but in my mind I still made exceptions. I put hundreds of hours into, and invested my reputation in, a question-and-answer website called Quora. Yes, Quora is a venture-funded startup, but I didn’t believe that they could all be bad. And what were they going to do to me, as a user?

I’m not exactly talentless as a writer. High-quality answers on Quora enabled me to bring my reputation closer into line with my actual character and competence.

Quora has faded from importance and no one really cares about it now. It’s shocking how quickly things like that change. But between 2013 and 2015, everyone in Silicon Valley seemed to read it. I was one of Quora’s top writers, frequently published on sites like Time and BBC (online, not print) that had partnerships with Quora, and had one of the largest follower counts for a non-celebrity.

Y Combinator bought Quora, in order to gain control of it. Paul Graham disliked me, falsely believing a 2013 blog post to be about him. Not much later, Quora banned me on false, libelous pretenses. It punished people who defended me. For some time, it erased my profile page. That I received its “Top Writer” distinction– which is, in fact, given out rather liberally, including to people I considered only average writers– has been scrubbed from the record, and so has my publication on partner sites. As in the Soviet Union, the picture was forever altered, with inconvenient people like me erased from history.

I am far from the only person to have been affected by Quora’s corruption. Feminists who live in India and China are often banned, not because Quora itself has an issue with what they say, but because it tires of fielding complaints about them from users in their countries. The site has chosen to align with anti-feminism out of economic convenience, not ideology. In fascism’s nebulous state, there is no ideological consistency, only an increasing preponderance of convenience that favors corruption and the conglomeration of power.

This is a lot of words for… a website ban. It sounds so ridiculous, doesn’t it? Indeed, it does. Why on earth would a person care about such a silly thing? Quora is a silly thing, but in the technology industry, it mattered once. I was actually denied job interviews because I had been banned on Quora. I hadn’t said a thing on Quora that merited a ban, but the fact that the ban existed wrecked my employability in my own industry. I have records of absurd conversations that prove it.

The Quora ban wasn’t driven by a desire to get a troublesome contributor off a website, because I wasn’t troublesome. It was an attempt, likely driven by Y Combinator, to wreck my reputation. That such things happen is important to everyone who relies on his reputation (read: the 99.9%) to survive. Fascism, when it arrives, won’t declare that it is here. Instead, it will wreck the reputations of people it deems threatening to it. By doing this, it will seem competent and inevitable. It is neither.

February 4, 2018

My scuffles with fascism appear to have ended. Have I won or lost? I’d say that I lost. My career isn’t what it would have been, had I not fought. Not much has changed in my wake. Really, what have I achieved? Have I staved off a threat? Please let me know if I have, because it’s not what I see.

To me, it looks like, by provoking fascists and letting them wreck my career, I’ve been made into an example. This is one reason why I don’t like to talk about my past experiences in the corporate world. I don’t mind the embarrassment. If something is good for the world, I’ll accept embarrassment. Yet to suffer in public only vindicates the power of the emerging fascist movement.

I’m not bitter about the lack of success I had at Google. Looking back on 2011, I see pretty clearly what mistakes I made, and I know why I made them.

However, I’m disgusted by the fact that, even quite recently, I’ve lost job opportunities and consulting clients due to things that happened at Google, and relatively mild things that I’ve said about my time there. The perpetuation of the corporate state– the health of the Capitalist Party, the stiffness of the fasces– is more important than an individual’s right to express truthfully what he has experienced.

People who believe that a person should be blacklisted for breaking the “no badmouthing” rule are nothing but moral filth. We have to fight moral filth. The methods we use will be selected in response to their methods. There is a space here that I don’t want to project my mind into, unless and until we are actually there.

I won’t belabor my own struggles. They took a long time to live through; why give them more time?

Here are some insights about fascism worth noting.

It won’t call itself fascism.

It did once: Italian Fascism gave its name to the generic concept. Still, fascist is such a charged term that modern fascists will almost certainly avoid it.

German Nazis called themselves national socialists. Japanese and German fascists during the 1940s each believed in their own racial and ideological superiority, but managed to work together. Fascism will use leftist and rightist regalia in order to present itself as new, innovative, and effective. It is anti-liberal as much as it is anti-conservative, as it believes the traditional political spectrum to be anachronistic.

In the United States, fascism will first present itself as moderate, competent, and pragmatic.

Though dishonest, it mirrors the society in which it lives.

If fascism had come to the United States in 2000, it would undoubtedly have incorporated religion into itself. Yet 2018’s fascists lack a consistent religious thrust. The “alt right” has extreme homophobic elements, but it also has gay elements. It has religious and anti-religious elements.

Fascism cares so little about traditional morality that it will either support or destroy a society’s cherished positions, depending on which confers more benefit: the safe adherence to a widely held belief, or the perceived (and false) courage of blind iconoclasm.

When playing from behind, fascists test the waters with a variety of approaches, discarding the failed ones (and the people associated with them) and doubling down on the ones that work– much like Silicon Valley’s venture capital business model, and much like Donald Trump’s belittling (but effective) nicknames for his opponents.

It will make itself seem inevitable.

Corporate employment used to be different. If you followed the rules and did some nonzero amount of work, you’d get promoted. The main thing to fight over was how fast it happened. This explains the era in which people bought first homes at 24 and retired at 49. Performance reviews were a formality in which people were rated “Excellent” and got 5 percent raises, or “Outstanding” and got 10 percent raises, and five years of merely “Excellent” was a sign that you might want to consider a less demanding firm for your next leadership role. It isn’t like that anymore.

On what date did the corporate world change into the surveillance-obsessed hellhole it is today? May 4, 1992? November 17, 1995? No one can say. It was gradual, taking place over decades, and no one noticed because each incremental change seemed inevitable. It has been said, over and over, that corporations are legally bound to maximize the short-term value captured by shareholders, and that all the corporate excesses and crimes follow from this legal mandate. It isn’t true, but it makes the dismantling of what decency once existed in capitalism seem as if it could not have been prevented, because no other options existed.

When it’s playing from behind, fascism doesn’t draw attention to itself. It does its first bit of work under the pretense that there are no other options, and its boldest moves unfurl so slowly (and after so much damage has been done) that they cannot be debated. Fascism moves either slowly or quickly– never at the moderate speed that might enable discussion.

Italian Fascism never did get the trains to run on time, but banning people from calling late trains “late” made the country’s evolution seem efficient, inevitable, and expressive of the nation’s power.

Destruction of the Feminine

This one’s tricky. It’s not limited to fascist societies, and it deserves its own essay.

Is fascism doomed to sexism? It’s hard to say. It’s impractical for any society to hate women, yet most fascist societies mistreat them. Fascism has no hatred for what is female, yet it seeks to obliterate what it perceives as feminine.

To note the distinction (and why these calls are so hard to make), consider that cats are no more female than dogs– sex ratios are about even in both species– and yet cats are perceived as feminine (rebellious, avoidant, useless) and dogs as masculine (loyal, courageous, stoic). I don’t know whether innate traits of either species play into these characterizations, and I highly doubt that “masculine virtues” are correlated with gender in humans. In the real world, courageous, stoic women are too numerous to count. Within-gender variation is so much greater than between-gender difference in real humans that notions of what is masculine and feminine, within a society, are nearly useless.

Yet fascist societies seem to crush all they perceive as weak– whatever is vulnerable, cautious, artistic, spontaneous, or nurturing. (No attempt shall be made to characterize these virtues, though stereotypically feminine, as female in any way.) For a concrete example, in most corporations, managers who value mentorship, employee development, and internal social justice are derided for doing “female work” or “womanaging”, whereas abusive “tough” managers are often promoted. Blood and iron, profit and loss; only those matter. Everything else is an object, either to be discarded or to be made into a weapon.

Fascism does not destroy all that is female, of course. It needs women to cast aside their “useless” femininity and become production workers: soldier factories on two legs.

The Destruction of the Feminine is not a women-only issue. Innately, men and women aren’t very different. We overlap far more than we diverge. I doubt that humans have significant innate psychological dimorphism at all; I suspect that our species’s race to become smart and versatile (e.g., large heads, opposable thumbs) forced it to put useful code in both men and women. In any case, a typical man might be 53% masculine and 47% feminine; a typical woman might be the other way around. Toxic (hyper-)masculinity and fascism destroy the man’s feminine part with the same disgust and militancy with which they destroy the woman’s femininity.

It will isolate, before it acts.

Difficult circumstances can bring people together, or they can separate, isolate, and exclude people. People react to adversity in a variety of ways, some more noble than others. Fascism manipulates circumstances so that both positive and negative trends feed its power and divide the people. A boon proves the people’s strength and validates the leader. A failure or calamity needs an individual scapegoat. “Heads, I win; tails, you lose.”

No matter how improbable the connection, the worst events are always blamed on manufactured saboteurs. The communists, the un-Googley, one or another ethnic minority– they did it. Fascist leadership’s purported infallibility requires the continuing selection and isolation of perceived enemies.

This achieves two things for the fascist. First, an outnumbered adversary (and, worth noting, an adversary chosen in part because it is outnumbered) can easily be defeated. Second, by isolating and then destroying a targeted set of people, the fascist society shows its strength. People do not fear death or injury, which cannot in the long run be avoided, one-tenth as much as they fear social rejection and abandonment.

Onward

This hasn’t been an easy essay to write. The topic is depressing.

I end with good news: there’s a decent chance that fascism doesn’t get much further in the United States. I am not saying, “Fascism is here and it will soon take us over”; but if I were, I’d hope to be wrong. No one can predict the future, and I will not claim exceptionality. We can learn from the past and assess the present. We can choose the best actions as we struggle in time and place. We don’t get to know what will actually happen until it happens.

The bad news is: fascism might also win.

How likely is it that our nation’s Fourth Turning involves a fascist takeover of the U.S. government, leading to domestic barbarity and foreign aggression, as we saw in the European 1940s? I’d put it between 5 and 15 percent. Those are higher numbers than I’d like to see, but below 50 percent. I think we’ll step back from this cliff. But in November 2015, I gave Trump a 1-in-250 chance of becoming President (1-in-10 primary; 1-in-25 general) when most people had him at zero.

Fascism is a funnel: if the adversary wins, options and liberties close. In that case, the best time to fight is now, when we have the most choices. If the adversary begins losing, nothing is lost by joining the battle before dawn. Either way, it is better to fight fascists now than to wait. When civilization and humanity come under threat, little else matters. Making “Executive Vice President” can be deferred for a few years.

I do not hold a simple political ideology. In the United States, I’m a liberal if not an outright leftist. If I lived in Sweden, I’d be considered conservative. I would rather fix capitalism than scrap it, and I am hawkish in my aversion to complexity and unintended consequences. I find value in ideas from both the traditional left and right. I am, however, aggressively and unapologetically antifascist.

The fight is important.

People have asked me, in the wake of the Google-related events described above, if I think Google is a fascist company. Of course not. I don’t think that Google is any worse (or any better) than any other large technology company. The boldest thing I will say is that there are fascists, and many more of them than we think. It was not Google but fascists at Google who attempted to damage my reputation because I issued mild criticism of the company. It was not Google but fascists not at Google, in other firms, who attempted to blacklist me because I had “bad mouthed” a company in a dispute that did not concern them. The enemy is not one company. It isn’t even capitalism. It’s not the political right or left. It is fascism, plain and simple, and we must destroy it.

This is a privileged moment. We know where the next attack will come from. We know a few things about the adversary, and my experience shows which techniques they’re likely to use in their quest to discredit their opponents. Five years ago, the nature of the Fourth Turning enemy was still anyone’s guess. On the other hand, we can still avert disaster. We can still take our country back from plutocrats, bullying psychopaths, and fascists. Five years hence, will that still be an option?

We do not ask that; we do not answer it; we fight, now.

The Time I Ruined Programming


Part I: The Personal

I remember the time I ruined programming. It was long enough ago (and I recovered) that I’m no longer embarrassed by the fact.

Nearly all programmers have slowdowns and periods where they just can’t get anything done. It’s why we hate two-week “sprint” nonsense and stack-ranking; no one wants to be watched by a guy in a guard tower with a gun, just waiting to pop the programmer who slows down the chain.

When people learn how to program, they do so enticed by what the job once was: an R&D job of pure creation, free of nonsensical deadlines and business-driven engineering. A computer science curriculum doesn’t prepare one to be a bored grunt; it prepares one for a life of work on projects like compilers, interactive games, and machine learning algorithms. Of course, only a lucky few get paid to do that kind of stuff. Most programmers end up writing and maintaining boring business programs, solving problems that are only challenging because of tight deadlines and the sheer weight of bad decisions made before.

It’s easy to ruin a creative activity: do it as a subordinate, and you’ll often grow to hate it. This is why, although I’m a decent enough writer, I’ve never considered writing jobs; in fact, I’m wary of traditional publishing and its increasing tendency to push writers into an unfavorable employee/employer relationship. Authors pay a lot for that “Not Entirely Awful” stamp of approval they get from being “published”, thinking it means more than it does. Publishers then drive terms that turn authors into subordinates, and agents aren’t going to push back on behalf of a midlist or unknown author. Getting dumped by a publisher– or worse, an agent– can be worse than being fired. You don’t just lose your relationships, but your reputation.

I don’t intend to imply that it invariably crushes passion to work for someone else. Everyone works for someone else. In the abstract, we’re all subordinates. The alternative is solipsistic madness– idiocy in the original Greek sense, by which the word idiot meant not a lack of inborn intelligence, but one who non-virtuously chose to ignore public life. An idiot was a man only into himself. The notion diametrically opposite the idiot, held up as the pinnacle of human character, is the hero. She’s not a subordinate in an organizational sense, but she still adheres to a set of ethical principles and works toward benefits she wishes to deliver to the world. She serves, but she takes agency over whom and how she serves.

Of course, workplace subordination is not heroic. To subordinate to the private benefit of someone else, who is under no obligation to return the loyalty (and probably won’t) is demoralizing and toxic.

The truth is that I have no use for people who are constitutionally insubordinate. To blindly disobey orders, because they are orders, is even more idiotic than blindly following orders. Civilization requires what I call operational subordination. An example would be stopping at red lights while driving. We do this not because we consider ourselves inferior to these robotic lights, but because driving wouldn’t be safe if we didn’t obey their directives. We don’t think of it as subordination; it’s just good sense.

Workplaces, of the Theory X variety that has become the norm since the downsizing epidemic of the past few decades, don’t settle for operational subordination. They want personal subordination. The good of the company (read: the careers and reputations of executives) must take a higher priority than the career goals and personal needs of the worker, and the worker is expected not simply to obey stated commands, but to internalize this sense of moral inferiority. If he has no orders, he must ask for more work. If he seems to value his own advancement over that of his superiors– a successful out-of-work side project suffices to create this impression– he’ll be terminated even if he does nothing wrong.

Programmers, by and large, don’t mind operational subordination. In fact, we have an affinity for it. We like to solve complex problems with simple rules that make sense. We operationally subordinate, every day, to the syntactical demands of a compiler that simply won’t do anything with code it cannot parse. When rules are sane, and the benefit in their existence is obvious, we eagerly follow them. It’s the personal subordination that burns us out. We’re smart enough to spot a system that demands personal loyalty from us, while refusing to reciprocate, and it disgusts us. We recognize that our rules-based, overly rational way of thinking is under attack; someone is trying to hack us and take advantage.

The nature of employment has changed for programmers, and for the worse. Software development used to be a highly-compensated job with an R&D flavor, where programmers were trusted professionals rather than overworked grunts subjected to the likes of Scrum. The open-allocation environment of a company like Valve used to be the norm. What changed? Employers realized that line-of-business code could be written by mediocre people, and replaced the high-talent curmudgeons with this continuing churn of fresh-faced halfwits who have never written a program from scratch, have no mathematical or theoretical insight, and don’t know what “buffer overflow” or “O(n^2)” mean.

By and large, it has worked. Employers have successfully dumbed programming down. Large companies may need a few excellent programmers, but line-of-business software can be done by people of mediocre talent who’ll accept bad wages and worse working conditions. Scrum is not going away. Does it produce excellent software? No, not even close. It doesn’t even produce secure or safe or maintainable software. It works just well enough that deliverables get deliverated– just barely, but at a close enough approximation to working software that executives get promoted away from their messes before anything starts falling apart at a macroscopically visible level.

For my part, I got through that code slowdown, long enough ago that I don’t mind writing about it.

These days, I’m more interested in small programs that exist to solve problems than large ones written to justify budgets or “prove” technical choices. (I don’t care to convince anyone that Haskell is “ready for production”. If we can’t agree on a language, that’s fine; I’ll write C. It’s old and a bit wonky, but it works.) One of my current projects is to build an AI for the card game Ambition (which I came up with in 2003, and have been refining since then) because I want to test out certain design changes; I use Ambition for a pivotal card game scene in Farisa’s Crossing, and I wanted to fix certain flaws before giving the game this added visibility. In order to test these changes, I need to run simulations with believable players; random-move players give some statistical insight, but they don’t explore any interesting strategy spaces. The “AI” doesn’t need to be AlphaGo, and it won’t be: if a simple Q-learner, backed by a basic backpropagation network, suffices, that’s what I’ll use. If I need something more, I’ll write something more complex. These days, I’m rarely (if ever) impressed by code. Lines of code are spent, not acquired.
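
For the curious, “a simple Q-learner” means something like the sketch below. This is a minimal tabular version under stated assumptions– the state encoding is whatever hashable summary of a game position you choose, and the function names are mine, invented for illustration; a neural network would replace the table only if the state space demanded it:

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch. `state` is any hashable encoding of a
# game position; the caller supplies the legal moves, the reward, and the
# successor state from the game engine.

ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount, exploration
Q = defaultdict(float)                    # Q[(state, move)] -> estimated value

def choose_move(state, moves):
    """Epsilon-greedy: usually exploit the best-known move, sometimes explore."""
    if random.random() < EPSILON:
        return random.choice(moves)
    return max(moves, key=lambda m: Q[(state, m)])

def update(state, move, reward, next_state, next_moves):
    """One Q-learning backup: nudge Q toward reward + discounted best future value."""
    best_next = max((Q[(next_state, m)] for m in next_moves), default=0.0)
    Q[(state, move)] += ALPHA * (reward + GAMMA * best_next - Q[(state, move)])
```

Plug a player like this into the simulator and it will, over many games, wander into strategies that random-move players never find– which is the whole point of the exercise.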

Now that I’m older, I have almost no passion for programming as an end in itself– an average corporate codebase is far more complex than anything I would intentionally write, and yet that complexity is wasteful and ugly– but, still, certain problems that can be solved with software interest me.

On the other hand: code for code’s sake, in million-line piles of corporate cruft and the dead shells of Jira tickets? Nah, I’m done with that. That really is a young idiot’s game.

Part II: The Fourth Turning

I reached a dark night of the soul with regard to software and technology. There were moments when I looked around and realized that my total contribution to humanity, by working for an increasingly maleficent industry, might be negative. The 21st century’s American theatre has featured the dismantling of the middle class, and I can’t say I had nothing to do with it.

In the early 2000s, we had to hear about those “South Park Republicans”. In 2018, I find that I’m a “Black Mirror Liberal”. I loathe and fear Silicon Valley, with its complete lack of morality, more than Donald Trump; the latter is, to be frank, too stupid and too transparent in his self-indulgence to pull fascism off. If we don’t learn our lesson this time, a 39-year-old startup founder, more reserved and competent than the orange idiot, could do a lot more damage.

I don’t view technology as evil; however, I fear what humans will do with it. Like the Black Mirror series, I have the conservative’s skepticism toward human nature. Technological progress without moral improvement will lead us straight to hell. That, to me, is the important lesson of Black Mirror, arguably the most important television series of our time. Yes, science and technology are good, and liberal economics is largely correct; but, without cultural and moral improvements as well, other forms of progress can only do so much.

To me, the quintessential Black Mirror episode is “Fifteen Million Merits”. I shan’t do it justice here, but let me sum it up briefly. It’s set in (probably) a far-future dystopia. The middle classes ride stationary bikes to generate power, and outside of work they’re doomed to a virtualized, entertainment-driven life in which one must pay not to see ads (many of which are offensive, even pornographic). Those unfit to ride the bikes fall into the lower classes (“lemons”) and are treated as subhuman. There seems to be no “outdoors” in this world; food is grown in a Petri dish using energy that comes from… the bikes. Or, at least, that’s what the bike slaves are told. I’ll get to the plausibility of that, in a second.

The main characters try to escape their position. There’s a television show called Hot Shot, modeled on shows like American Idol, through which talented people vie for a chance to join the celebrity upper class. Most of them fail, and some are forced into neoliberal prostitution. One character becomes a pornographic actress and her humiliation is broadcast to the entirety of her society; another, after a display of authenticity and rage, is enticed to make a performance out of his anger (thus commoditizing and killing his own authenticity).

There’s a scientific easter egg in “Fifteen Million Merits”. Consider the premise: humans must ride bikes to generate power for a society that, so it says, has been reduced to growing food artificially. This is physically impossible. Human efficiency is about 25 percent: if we eat 2000 kilocalories’ worth of food, we can generate 500 kilocalories of mechanical energy. That efficiency could conceivably increase, but it would never reach (much less exceed) 100 percent. No life form is a power source; autotrophs like plants and algae capture solar energy, and the rest of us eat them. Without an external energy source like the sun (a giant, faraway fusion reactor) we could not survive.
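
To make the arithmetic concrete, here’s a back-of-the-envelope check. The intake and efficiency numbers are the ones above; the farming-efficiency figure is my own deliberately generous placeholder:

```python
# Why a bike-powered food supply is thermodynamically doomed.
FOOD_INTAKE_KCAL = 2000    # daily food energy consumed by one rider
HUMAN_EFFICIENCY = 0.25    # fraction of food energy converted to pedaling

work_out = FOOD_INTAKE_KCAL * HUMAN_EFFICIENCY   # 500 kcal of mechanical work

# Assume (generously; a placeholder, not a real figure) that artificial
# food production returns half of its input energy as edible calories.
FARM_EFFICIENCY = 0.5
food_back = work_out * FARM_EFFICIENCY           # 250 kcal of food regrown

print(f"in: {FOOD_INTAKE_KCAL} kcal, back: {food_back:.0f} kcal")
# Each cycle returns an eighth of what it consumes; without an outside
# energy source (the sun), the system starves within a few iterations.
```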

Low-tech societies ran on human power (stored solar power; faraway nuclear power) because they had no alternatives. If, however, a society were forced to use human power to grow food to feed humans, it would die. Therefore, we conclude that the society in “Fifteen Million Merits” isn’t energy poor, but energy rich. It can waste human effort and physical energy on pointless drudgery. The bikers think they’re keeping the lights on and the food growing, but anyone scientifically literate would recognize that as a lie.

I don’t think this is a mistake in the show’s writing. Someone would have pointed it out, I imagine. (Maybe not; The Matrix got this wrong.) For my part, I think it’s intentional. This means that the society has eliminated the need for work, but it has kept scarcity and work around. Why? Sadism.

That’s the horrifying truth that sneaks out of “Fifteen Million Merits”. It’s not a post-apocalyptic society that needs every erg of human effort. Rather, I suspect that it’s a rich society that has kept scarcity around to dominate and humiliate people, as entertainment both for the put-upon middle classes (who must be given seemingly useful work, lest they rebel) and the vicious upper classes.

To me, that’s quite a plausible future. Technology will eliminate the need for human work. It will eliminate most forms of scarcity. Is this going to lead us to a wealthy, egalitarian utopia? It might, but there are few guarantees. The elites could keep scarcity in place, valuing dominance over others too much to let the rest share in the wealth. There’s a lot that we don’t know about so-called “human nature”, and we’ve never had that kind of rich world before.

At any rate, let’s get back to the 21st century. One disturbing trend is that work is becoming more subordinate. I know, because I’ve been involved in making it so.

A few years ago, I worked on a “performance management” system, imposed on truckers, that would track which drivers were too fast, which were too slow, and even which drivers were eating lunch off-route to save money or have time with their children. It doesn’t save much to prohibit a driver from eating off-route: how much does five miles’ worth of gas cost? But, thanks to technology, this surveillance costs even less.
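
To show how little machinery this kind of surveillance takes, here’s a hypothetical sketch– not the system I worked on; the waypoints, threshold, and names are all invented:

```python
# Hypothetical sketch: flag drivers whose lunch stop strays from the route.
# A real system would use map data; this one uses straight-line distance.
from math import dist

ROUTE = [(0.0, 0.0), (10.0, 0.0), (20.0, 5.0)]   # planned waypoints (toy units)
MAX_DEVIATION_MILES = 2.0                         # invented policy threshold

def off_route(ping):
    """True if a GPS ping is farther than the threshold from every waypoint."""
    return all(dist(ping, wp) > MAX_DEVIATION_MILES for wp in ROUTE)

lunch_ping = (12.0, 4.0)
if off_route(lunch_ping):
    print("flag driver: off-route stop")   # and someone's day gets worse
```

A dozen lines and a GPS feed: that’s all it costs to take a sandwich with one’s children and turn it into a write-up.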

One could argue that Agile and Jira are programmers’ own professional karma. What we’ve allowed to be done to every other class of worker is now being done to us. It shocks us more than it should. Many of us worked toward bad ends– some examples would be so-called “performance management” systems, misuses of data to corporate benefit and human detriment, systems that allowed health insurers to deny care– and it’s only fitting that the surveillance capitalism we created would, at some point, be turned on us.

Technology has, I’ll note, also damaged the world for professional writers. (It may fix it; self-publishing is becoming a viable alternative, but that’s another topic.) Mastering writing and mastering sales tend to happen at different times. In the 1980s, a strong novelist whose first four books had mediocre sales could easily get a fifth book deal. These days, it’s a lot harder, because surveillance capitalism has no qualms about using data toward bad, career-denying ends. It’s not publishers who are to blame, in fact; the chain bookstores did this. Publishers, in the old days, would offer a fifth deal, along with an advance and some promotion, to a talented author who hadn’t yet built a following or mastered the golden touch. In the late 1990s, though, bookstores started pushing back and gutting authors who stayed on the midlist. Why risk space for someone who has “failed” four times, when an unknown might be a breakout bestseller? The long-term effect has been detrimental. Publishers have lost money and many have been put out of business or forced into conglomeration; writers’ careers have been wrecked by this; and the literary world has become more commercial and overall worse in terms of quality. The true culprit in this clusterfuck is, yet again, technology used for worker (in this case, author) surveillance. In the 1980s, an author with mediocre sales got more chances; in 2018, the bookstores run her numbers and say, “No thanks”, even if she’s quite talented.

Technology used to be in alliance with human progress, and now it seems to be moving against us. What’s going on? I think we’re headed for a Fourth Turning, or a historical crisis. The benefits of technology and surveillance accrue to the rich overseers, and they’re not going to give their ill-gotten wealth or power up.

We can assess the Fourth Turning, toward which we careen, by looking at the crises of the 20th century: the World Wars and the Great Depression.

As we moved from the 19th century into the 20th, we got very good at making food. It’s hard to imagine this being a bad thing; yet, it led to a Great Depression in North America and long-running, total wars (of a kind that prior agricultural capabilities and supply chains couldn’t support) in Europe. Ill-managed prosperity is more dangerous than true scarcity, it seems. Scarcity’s bad, but at least it slows things down.

In North America, food prices started dropping in the 1920s. Farms couldn’t survive. Our later corrections persist and, in some ways, are pernicious; due to our perennial corn surplus, we spike soft drinks with high-fructose Franken-sugars. However, that came after the Depression. In the last Gilded Age, consensus was that it was best to let these farmers fail. So, rural small towns that served the farmers died along with them. Heavy industry got hit around 1925-27, and the stock market grew more volatile in turn. Then it crashed, notably, in October 1929, but that didn’t “cause” the Depression; we’ve had worse stock market crashes since then (e.g., 1987) to minimal effect. Rather, the Great Depression actually started in the early 1920s; it just wasn’t until about 1930 that it started hitting rich people in the cities. We learned that poverty wasn’t some “moral medicine” to shock people back into adhering to the so-called protestant work ethic; rather, it was a cancer that, left to its own devices, would keep spreading until it destroyed a society.

What killed the American economy in the 1930s? Conservative mismanagement of agricultural prosperity.

What’s killing the American middle class in 2018? Conservative mismanagement of technological prosperity.

What happened to farmers in the 1920s is happening to… all human labor. Let that sink in. The vast majority of Americans produce things of less usefulness than what farmers produce. If farmers weren’t safe in the ’20s, public relations managers and computer programmers aren’t safe today. We’ll die without food; we’ll be fine without TPS reports.

Globalization is inevitable and desirable, but we’re letting it unfold in a way that benefits the rich and eviscerates the middle class. Technological automation is wiping out jobs, and surveillance is turning high-autonomy, fulfilling jobs (like what programming used to be, when it had that R&D flavor) into drudge work. The high-surveillance culture doesn’t merely make the workplace unpleasant; it also lowers the bar for who can contribute– it becomes profitable to employ unskilled scabs, if surveillance becomes cheap enough– depressing wages further. Though Agile Scrum reduces the effectiveness of competent programmers, it turns incompetent ones into marginally employable code-cutters. So what happens? Well, employers replace the high-talent curmudgeonly experts with scrummy rent-a-coders, and wages nosedive. Furthermore, as jobs are wiped out in one industry, people who worked in it become “refugees” and flow into another, making that industry more competitive and driving wages down further.

The wage slide is going to be far more severe than most people predict, due to economic inelasticity. When people have a desperate need for something– like gas to go to work, or water in the desert, or illegal drugs to which they’re physically addicted– they will pay nearly any price, and therefore small drops in availability will drive massive price increases. During the oil shocks of the 1970s, the supply only dropped by about 5 percent, but prices quadrupled. That’s demand inelasticity, but supply (of labor) experiences the same phenomenon. If people desperately need jobs to survive, then we should expect that either a small increase in worker availability or a slight decrease in demand for labor will tank wages, disproportionate to the actual amount of change.

For example, self-driving trucks will put millions of drivers out of business in the next twenty years. There are millions more who serve those drivers: hotel owners in the Midwest, logistics coordinators, and so on. There are a lot of smug programmers who think that this won’t affect their wages. Wrong. Again, no one is safe.

Let’s say that half the truck drivers lose their jobs. We won’t even account for all that supporting labor: the hotels and diners. That’s 1.75 million workers, fresh on the market. Let’s be conservative and say that only 5% of them learn how to write code and become programmers; most will go into other industries. That’s 87,500 new programmers on the U.S. market– about a 2.5 percent increase in our numbers. Seems survivable, right? If we expected wages to drop only by 2.5 percent, yes; but keeping inelasticity in mind, they could drop by 20 percent, just due to that.
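
To see where a 20 percent figure could come from, here’s a toy constant-elasticity model. The supply numbers are the ones above; the elasticity values are illustrative assumptions, not measurements:

```python
# Toy model: % wage change needed to absorb a % increase in labor supply,
# assuming labor demand has constant wage elasticity e = %dQ / %dW.

def wage_change(supply_shock, demand_elasticity):
    """Approximate wage adjustment that clears the market after a supply shock."""
    return -supply_shock / demand_elasticity

shock = 87_500 / 3_500_000                  # 2.5% more programmers, per the text

print(f"{wage_change(shock, 1.0):+.1%}")    # unit-elastic demand:  -2.5% wages
print(f"{wage_change(shock, 0.125):+.1%}")  # inelastic demand:    -20.0% wages
```

The exact elasticity is beside the point; what matters is that when demand for labor is inelastic, a small surplus of workers produces a disproportionate collapse in wages.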

Now, add in the workers who support trucking: the hotel owners who see the writing on the wall. They won’t all go into programming. Some will go into other industries (and push out other workers, perhaps). Multiple industries will have refugee-driven inelasticity crises and wage collapses. It won’t be just us; no one will be safe. Look at what has happened to attorneys (once considered a safe, protected profession) over the past thirty years; many work long hours for mediocre salaries and are paying off student debt into their 40s. If it could happen to them, it can happen to us.

Is the world of 2050, even if we leave corporate capitalism to its own perverse devices, going to be jobless? No. Society will invent things for people to do. If nothing else, the rich will spend more time in school and the poor will spend more time in prison. Is it possible that continual downward wage shocks and industrial refugee crises will demolish the middle class? Yes. If we don’t recognize the danger that technological unemployment represents, and come up with proactive solutions, it will happen.

I used to hold the somewhat pleasant belief that technological progress would, if it hurt the economy enough, blow out its own flames. It ought to be self-limiting, right? If things get bad, people will stop creating (a sort of unprogrammed strike) and technical progress will slow… and we can find a way to clean up our messes. That was what I hoped for.

I’m no longer so sure of this. Technical progress, from a humanistic standpoint, has slowed down. Basic research funding has collapsed and we’re no longer putting people on the Moon; we’re tweeting about “covfefe” instead. Antibiotics did more to improve human life than a blog post about technological immortality written by some 24-year-old who optimizes ad placements; that much is true. Finally, the great technological marvel of the 21st century is a cloud. Not “the Cloud”; that’s just jargon for “the Internet”. I’m talking about the cloud of prehistoric algal carbon pumped out by the literal septillions of useless computations, performed to mine so-called “BitCoins”. Yes, that cloud, for anyone planning a tropical vacation to the North Pole. Ah, BitCoin and its ilk; this high-pitched electric whine, inaudibly belch-screaming carbon dioxide into our atmosphere, might be the siren song of a desperate middle class, seeing its impending demise at the hands of authoritarian capitalism, and not knowing what else to do but invest in libertarian wank tulips. Technology is becoming less useful and less friendly, from a humanist’s standpoint, but it does not seem to be self-limiting. Its direction leaves much to be desired, but its speed remains high.

Technology and globalization shall continue. There’s no other way. If we let our society and morality implode, this whole picture will take an undesirable direction. Without innovations that are currently untenable, such as a universal basic income and the eventual eradication of global poverty– in my view, in the next 50 years we ought to take a preemptive Marshall Plan attitude toward the elimination of third-world misery– Black Mirror is our future.

Technical achievement and beneficence have stalled. Sadly, the technological process will continue even if there’s a complete lack of human progress. There will, alas, always be money in helping rich people unemploy those pesky, expensive worker-serfs… and, also, helping the powerful surveil those whom they haven’t yet figured out how to unemploy.

Does Genius Exist?

Does genius exist?

I would argue that it does, but not in the way our society imagines it. Our society holds the notion that people like Albert Einstein, Nikola Tesla, or Andrew Wiles are natural geniuses, born apart from the rest of the world. It would be dishonest to ignore the unusual natural capability that such people have. However, are they born-apart geniuses, or people of high achievement? Does the distinction matter?

No such thing as “a genius”

The notion of a person as “a genius”, somehow separate from humanity and proto-angelic because of her superior intellect, I find harmful. As a discrete zero/one variable, this thing called “genius” doesn’t exist.

In 1999, I attended the Math Olympiad Summer Program, or MO(s)P. I met people with much more natural talent than me. To call them “geniuses”, however, would understate the sheer amount of work it took them to accomplish what they did. I asked an IMO Gold Medalist how much time he spent thinking about math. He said, “50 percent”. Remember that this was in high school. Did he have a lot of natural ability? Of course, but he also worked his ass off (although it may not have felt like “work” to him).

This binary notion of “genius” seems to be a justification for our society’s othering of highly intelligent people: mad scientists, failed prodigies, bitter adult geniuses seeking revenge against a society that never recognized them. Then, to make matters worse, we have the cultural blight that is venture-funded, private-sector technology (“Silicon Valley”) in which the supposed genius nerds have come around full circle and become the bullies. Silicon Valley is the worst PR for smart people since Nagasaki.

The standard Silicon Valley narrative is that the horrible men at the top of the VC-funded ecosystem are taking revenge for two or three decades of unpopularity. That’s mostly correct, but let me add a couple of notes. First of all, the nerds/jocks model of high school is largely incorrect. I went to high school. I remember it. Not all smart people were unpopular. (I was smack-average in the hierarchy; not prom king, but not picked on either. I wasn’t invited to all the parties, but people respected me.) Also, “jocks” never were the bad guys. Jocks worked as hard as nerds, and most of them were nice people. Bullies were the bad guys. Sure, a few bullying jocks existed, and so did bullying nerds, and so did bullying jockish nerds. Some of today’s tech barons were nerds and some were more fratty, some were unpopular and some were not, but most of all they’re bullying assholes and always have been. Forget the Hollywood narrative; most bullies were unpopular in high school. (Were there popular bullies? Sure, but all else being equal, bullying hurts one’s popularity.) It doesn’t take popularity to be a jerk.

Today’s Silicon Valley barons aren’t nerds, though they’ve appropriated nerd culture. They certainly aren’t geniuses; I’m much closer to that than they are, and I wouldn’t put that label on myself, because I’m hesitant to believe that it exists. Mostly, they are bullies who pretend to be nerds because it pays. As for bullying, I suspect they’ve always had that inclination, but corporate capitalism gives them a power they lacked in adolescence.

Are there people in Silicon Valley who approach or even exceed the level of cognitive excellence that we’d call “genius”? Sure, but they’re never the ones at the top of the pecking order, making millions of dollars or getting news articles written about them. They’re research scientists and software engineers no one has ever heard of, and they can’t afford a house less than an hour and a half from work. So it goes.

Okay, okay…

Highly intelligent people get othered. We’re nerds who “must” have poor social skills, because we spend so much time reading and thinking; at the same time, we’re geniuses who never had to work for anything. Note the inconsistency: we lack social skills because we work too hard on other things, but we’re lazy bastards to whom everything came easy. It’s a bit hard to win, in a society so focused on corporate back-biting that any positive trait or distinction gets a person torn down.

My experience leads me to conclude: (1) that natural ability is a continuous variable, not a discrete, binary one we could call “genius” and use to set people apart; (2) that meaningful natural ability is so hard to test beyond 2.5–3.0 standard deviations (IQ 137–145) that it’s hardly worth the bother; (3) that natural cognitive abilities tend to diverge below the level we’d call “genius”, so a mathematical genius might be only average at music composition or the visual arts; (4) that cognitive originality is separate from what we consider “IQ”; and (5) that said cognitive originality requires hard work far more than it requires natural talent. How can one be original without the sheer work it takes to study the work of others, imitating what is competent while rejecting what is overused? No intellectual achievement comes naturally; experience matters, and that takes hard work. I don’t write well because I was born with a silver semicolon; I write well because I’ve written millions of words, and some of that stuff, early-early on, was terrible.

I’m not going to lie. Does the average child, age seven, have a shot at winning the International Math Olympiad in ten years? No. It’s not fair to place that kind of expectation on him. Discrepancies of natural ability exist, and their importance grows as a technological society develops a use for the ability to grapple with abstraction. That’s clear. But does a person require a have-it-or-not single variable we can call “genius” in order to reach the highest levels of achievement? My experience has led me to conclude that the answer is “no”.

It’s a harmful notion, too, because most of us can’t agree on what “a genius” is. In business, people of rather bland cognitive ability (with nonexistent diligence and originality) routinely take credit for their subordinates’ work and are hailed as “visionaries” or “geniuses”, while the hundreds of smarter people doing the actual work toil in obscurity.

Sure, some people have more natural ability than others. Some have more drive. Some work harder. Some people are luckier in terms of what experiences life gives them. Over decades, these result in drastic differences in (perceived and real) capability. All of that seems to be true, without much controversy. Yet the archetype of “a genius” has so much baggage that it might be wise to get rid of it.

So what is genius?

Though I reject the notion of a person as “a genius”, I still prefer not to throw out the concept of genius– not as a binary personal variable, but as a notion we might affix to high intellectual performance. There are people who can reach a state wherein their cognitive originality, conceptual clarity, and productive competence are far beyond the norm. When they achieve the right state of consciousness, they reach a level of insight and capability that seems inaccessible to everyone else, like stage magic.

Natural ability plays a role in this, but it’s not as big a factor as we tend to think. Less than 0.01 percent of people in our society are truly high-performing intellectuals, as I see it, and yet I doubt that 99.99th percentile natural ability (IQ 156) is required. (In practice, we can’t measure intelligence that high in adults– and childhood IQs are both inflated and not as meaningful, since adult capability is what we care about.) Perhaps 0.5 percent, or even 2 percent, of people have the requisite natural ability. Perhaps it’s more, because we still don’t understand what intelligence is or where it comes from. Did Shakespeare have a 190 IQ? Or was he a 125 IQ person with a knack for wordplay and storytelling? Does it even matter? Not really.
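
For what it’s worth, the IQ figures in this discussion are just points on the conventional bell curve (mean 100, standard deviation 15). A two-line check of the percentile arithmetic:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # the conventional IQ scale
print(round(iq.inv_cdf(0.9999)))    # 156: the 99.99th percentile
print(round(iq.inv_cdf(0.995)))     # 139: the top 0.5 percent
```

None of which says the tests measure anything meaningful at those extremes; as noted above, in practice they don’t.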

In childhood, one’s creativity is high but competence is low. In adulthood, rote industrial competence is high, but creativity tends to be low. Society beats it out of most people, forcing them to subordinate themselves to a system run mostly by people of mediocre intellect. Workaday life crushes originality and excellence. So, we see that two ingredients for what we might call “genius” exist at separate times in life and, for most people, never the twain shall meet. Neither time is conducive to anything we’d consider meaningful genius: the child’s inexperienced creativity lacks the insight necessary to achieve originality, while the adult’s mechanical, risk-averse fairly-goodness, borne of the pervasive oppression of needing an income, clearly falls short.

Most gifted children seem like they might achieve creative excellence in adulthood; very few actually do. I’ve observed the careers of extremely intelligent (i.e., IQ 160+) people and the results are, at best, disappointing. About half go to graduate school; the other half go to Wall Street or Silicon Valley straight out of college. Either way, they expect to defeat the morons in business handily, retire within ten years, and dedicate the remainder of their lives to intellectual pursuits. It almost never works out that way. It’s not uncommon for highly intelligent people to be mobbed and bullied in their corporate jobs by resentful mediocrities, although it is even more common for them to disappear into the bland, beige fog and lose every element of originality they once had. Most often, they vanish somewhere in the folds of middle management, and do what they can to hide what they once were.

When highly creative people enter the corporate world, they perceive (correctly) that they are singled out for unfair treatment because of their abilities. However, they tend to attribute this to envy and resentment, as with the bullies they knew in school. It’s not so; there are two problems with that explanation. First, plenty of high-IQ people join in the bullying. While creatively and morally stunted, having acquiesced to mediocrity long ago, such people do not perceive themselves as inferior. Second, the upper tiers of intelligence and creativity do not bring the economic performance or social status that such people would envy. The corporate masters crush the brilliant not because they’re resentful, but because they’re afraid. Specifically, they’re afraid that the irreverence (if not the talent) of such people could become infectious and undermine their authority. Natural intellectual talent may or may not be a necessary condition for creative excellence – that element can be debated – but without an indifference toward the sort of arbitrary authority that exists in the corporate world, it is so difficult to protect creativity that it might as well be considered impossible. The corporate regime cannot allow authority to be seen protecting, or even tolerating, the irreverence that human excellence requires. Therefore, the excellent must be crushed until they lose the desire to excel. It is not envy that drives this; it is a calculated purge.

It’s important to retain the notion of genius – not as a separate category of human, because such a thing never existed, but as a rare combination of traits we ought to encourage rather than crush. Genius is not something that nature doles out in rare packets; rather, it’s a combination of traits that can be encouraged or defeated by circumstances. Our society has reached a state in which the latter is the norm; thus, our squalid culture and stagnant economy should surprise no one.

The rare element: arrogant humility

What makes cognitive originality so rare?

I’ll confess it. I’m not that smart. If I had to guess my adulthood IQ, it’d be somewhere between 140 and 160. Statistically speaking, it’s nearly a guarantee that there’s someone smarter than me on a typical subway train in New York, or in a traffic jam on I-95, or serving coffee at Starbucks.
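(That’s not false modesty; it’s arithmetic. Under the same Gaussian norming of IQ, the odds work out as below. The 1,000-rider figure for a packed train is my rough assumption, and treating riders as independent draws from the general population is a simplification.)

```python
# Sketch: probability that at least one person in a crowd of n exceeds a
# given IQ, assuming IQ ~ Normal(100, 15) and independent riders.
from scipy.stats import norm

def p_someone_smarter(iq, n):
    p_above = norm.sf((iq - 100) / 15)  # one person's chance of exceeding iq
    return 1 - (1 - p_above) ** n

for iq in (140, 150, 160):
    print(f"P(someone above IQ {iq} among 1,000 riders) = "
          f"{p_someone_smarter(iq, 1000):.0%}")
# IQ 140: 98%   IQ 150: 35%   IQ 160: 3%
```

Even at the top of my guessed range, a full train has a fair chance of carrying someone smarter; at the bottom of it, it’s close to certain.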

Yet, I am a misfit in human society. The corporate world, with its polite but militant brand of mediocrity, has tried to crush me; somehow, I’ve retained this diamantine cognitive integrity that holds negative economic value, flouting self-preservation for a reason even I don’t fully know. People like me become rarer with age; some of us just give in. At age 34, with my cultural and cognitive integrity intact, I’m an absurd, alien rarity for reasons that have little to do with natural ability. What makes me different from all the drones out there is not “IQ”; I know plenty of high-IQ people who lack it, and it’s plausible that people can retain it with only average cognitive gifts.

If not natural intelligence, what is the set of traits one needs in order to preserve a child’s cognitive integrity, while gaining the competence and experience of adulthood? In two words, I’d call it arrogant humility. One needs the pugnacious assertiveness of a child, coupled with the quiet diligence of an adult who knows how much she doesn’t know.

To be better than other people, you have to think that you’re better. You must set a higher standard for yourself than you’d set for anyone else. In addition, you have to fight all the people who want to bring you down to their level: the corporate vassals and barons who can’t stand to be outshined, society’s myriad talentless gatekeepers, and so on. That’s difficult. A person who does this seems aloof and will be disliked.

To be honest, I think that what we now call “mental illness” – many of these are physical illnesses with mental symptoms, but that’s a topic for another time – can be, in its milder forms, an asset. During a depression, the brain comes up with a certain brand of charismatically negative, but incorrect and damaging, thought: you’re a loser and you’ll never amount to anything, so just be content to bump along the bottom. If you’re used to telling this sort of impulse to fuck right off, because your survival has literally depended on it for more than a decade, then you’re more likely to hold your integrity when you enter the corporate world and hundreds of supposed social superiors send you the same sort of message (in more polite words). You recognize them as Team Depression and, just like the enemy within, full of nonsense.

To hold your cognitive integrity in a world that wants to break it, you’ve got to be a bit arrogant. You have to believe that you’re better than the mediocrity you see everywhere, especially in a time of organizational decay like this one. Maybe you are better; I am. I’m not afraid to say it, because I value honest perception more than politesse.

At the same time, believing in one’s superiority doesn’t make it so. It can lead to laziness and failure. The Dunning-Kruger results have been, in a bit of irony, misinterpreted by most people who cite them. (They indicate not that unskilled people invariably perceive themselves as superior, but that, when it comes to social skills in particular, there’s no correlation between skill and self-assessment.) There are domains in which people accurately estimate their level of capability, and others where they don’t. Nonetheless, if being arrogant were sufficient for genius, we’d see a lot more geniuses than we do. Most arrogant people lack insight into their own limitations and underestimate how hard they’ll still have to work, no matter how talented they are, if they want to achieve anything significant.

In my experience, the smartest people know what they are. They find workaday life oppressive, toxic, and boring. They mock the corporate world’s needless busyness and its bland replica of hypomania. At the same time, if they wish to sustain high cognitive performance and originality, they can’t get complacent. What keeps them sharp isn’t competitive pressure – we don’t compete, though others compete with us, and we’re more resistant than most to social impulses – it’s the recognition of how little they know, and how much is out there that they need to learn.

Cognitive excellence is an anomaly. I’ve often wondered why there seem to be severe differences in human intellectual capacity. It doesn’t seem that 140 IQ brains are larger or more metabolically expensive than 100 IQ brains, so why didn’t we rapidly evolve to (and then fly right past) the IQ 140 level? Why did human intelligence stop at “high, but no higher than needed”, when it could have kept going? There are three possible answers. One is that the premise is wrong, and human intelligence continues to increase. It’s plausible, though I suspect the Flynn Effect overstates it. A second possibility is that the world in which we evolved had no use for intelligence beyond IQ 100. To be honest, I doubt that. Though the difference between IQ 140 and IQ 130 probably doesn’t matter in the wild, I’d imagine that an IQ of 110 would provide an advantage over 100, even in the Stone Age. As with height, 99th-percentile intelligence might make one a freak; but in a species that evolved in tribal groups, 90th-percentile intelligence would make one the pre-eminent alpha. A third explanation, which I find most plausible, is that high intelligence correlates with undesirable genetic traits – in particular, mental illness.

IQ and mental illness don’t seem to correlate. Creativity (as far as it can be measured) and mental illness do, but there’s a question of which drives which, and how. Does mental illness provide a creative impulse, or are creative people so oppressed by an inferior society that they break down? I suspect that both are true in some degree.

Perhaps there is value, to a creative person, in mental illness. To be honest, I doubt that extreme states of consciousness such as frank mania or clinical depression offer much value, except perhaps when experienced once. (As a writer, I can write a panic attack scene. Not everyone can.) However, the constant need to reject one’s own thoughts in order to survive tends to imbue a certain humility that is otherwise rare in this world.

I’ve met people who perform at a cognitive level one would call genius. I’ve also met business executives. (The intersection between these two sets is almost empty.) What I’ve noticed in executives is their lack of introspection and a total absence of precision in their thinking but, most of all, their inability to reject errant impulses. They’re not mentally ill. However, they’re prone to biased thinking, emotional reasoning, and hubristic nonsense. These afflict everyone, especially children, and business executives seem to regress to the child’s level. They’re inexperienced at rejecting their own impulses (being neurotypical, they haven’t had to do so in order to survive), and they’re also surrounded by people who can’t tell them “No”. They revert to a juvenile state, though they lack the creativity they may have had as children. (They lost it while climbing the corporate ladder, and they won’t get it back.)

Corporate barons have the arrogance down pat, but they lack humility. Their capacity for original thought is, therefore, disappointing. Being an executive is like being stoned; you think you’re at the pinnacle of performance, but it’s a delusion. Clearly, arrogance alone does not lead to intellectual excellence. Unchecked by a broader acknowledgement of the world’s vastness, it pushes the other way.

How does one cultivate this paradoxical combination of arrogance and humility? To be honest, I’m not sure. Decided theists like Cantor or Tolkien might have a pat answer: humble before God, arrogant toward Mammon. Personally, I lean Buddhist and accept my lack of knowledge, though it is a hard thing to pray to. I don’t know, for sure, if there are gods or what happens after I die – and that’s, in fact, part of samsara. So long as I remain in samsara, I will have to experience death without truly knowing if anything’s on the other side of it. (I imagine that I have held many world religions, and none from time to time, in past lives.) Even if I could delude myself into believing that I understand the bulk of what’s important in this world – and, if I haven’t made this obvious, let me admit that I don’t – I’d have to contend with my complete lack of expertise on what, if anything, is to come.

When I examine the state of workaday society, I rightfully recognize myself as superior, and I cannot fully take part. Sometimes, I wish I could. Anti-mediocrity is a disability that has cost me millions. Still, I recognize it as built for something lesser than what I am, and I shall not condescend. However, when I look at a universe that existed for billions of years without me, or at the highest reaches of human potential, or even at the simple contentedness of a happy cat… I must be humble. I find myself not as an outsider, and neither inferior nor superior, but as a perpetual beginner.

Panic, Office Work, and What Each Taught Me

I have panic disorder. I get mild-to-moderate anxiety attacks two to five times per week, disruptive panic attacks about once per month, and the severe kind that wears me out (a “go-homer”) every six months or so. It sucks, but it used to be worse. Nine years ago, before treatment, I’d have the hallucinatory, dissociative kind that felt like my heart was about to explode. I haven’t had one of those for almost a decade. These days, they’re annoying interruptions.

The disorder remits slowly. It leaves a lesser shadow of what it once was, but never quite ends. After you experience the five hundred bizarre symptoms that panic can throw – phantom smells, visual flashing, light-headedness, vomiting – you lose your fear of them. The attacks become painful and annoying, like headaches, but they’re not especially scary to a person who’s been through several hundred. They’re not dangerous; they just suck.

There’s no cure or exit that always works. Sometimes, a glass of fruit juice helps; sometimes, it doesn’t. A walk can make it better, or worse. Usually better, but it feels like a gamble. Reading isn’t usually possible; writing might be, but expect typos. The peak of an attack only lasts for a minute or so; an attack may have one peak or ten spaced a few minutes apart, and the recovery is long: 30 to 90 minutes, usually, before the damn thing is conclusively and safely over.

Do It Calmly

There does seem to be an overarching principle to the long recovery: whatever you do, do it calmly. That’s not much to say, but it’s what I’ve got.

That’s harder than it sounds. When your anxiety goes up to 11, and the fight-or-flight response is out of control, the impulse is to sprint out of a place, not walk calmly as if everything were normal. Yet, walking is often better for escape than running, in this case. Though regular exercise reduces the frequency and severity of panic attacks, I wouldn’t recommend all-out effort during one.

I find that panic produces two desires: one helpful and one hurtful. The first is the desire to complete a task that might make it better: a walk in the cold, going home, playing a mindless video game, or getting to the bathroom. It gives the mind and body something to work on, not so taxing as to exacerbate anxiety, but engaging enough to pass the time and spend some of that surplus adrenaline and neuro-electrical activity. That can be useful. The second impulse, the harmful one, is that raging urge to do it quickly and “get out of” the panic experience as fast as possible. That blistering hatred for the experience keeps it going.

The panicked mind thinks that achieving some task (say, leaving a crowded place) will make the attack end. The process of working toward the task usually helps; but achieving the task in a hurried, tense, or reckless manner will create more anxiety. One escapes the situation that seemed to cause the attack, only to find the same panic in a new theatre – and then falsely attribute it to some other cause, ending up back where one started. Sure, you escaped the movie theater, but now you’re having the same attack in a parking lot.

I don’t have all the answers, but the best I’ve got for panic is: find something to do but, no matter what it is, do it calmly. There’s something zen about the notion; more important than what you do is how you do it. That’s an attitude I try to take to most of life. Between health issues and work and random events, I can’t control the sheer amount of output I produce. I don’t (and can’t) even know whether I have 60 years or 60 minutes of life left. The quality, I decide. Quantity isn’t up to me.

When I have a panic attack at home, I do housework. It takes twice as long as it otherwise would, and perhaps I do a less-than-stellar job, but the time isn’t wasted. If I have one while working out, I keep doing what I’m doing, in order to dump the adrenaline into exercise. At the office? Well, I still haven’t figured that out. The work never was the problem; it’s the subordinate context that makes office life so terrible. I suspect there’s no cure, for anyone, save sweeping societal overhaul (which will, though potentially beneficial, itself be stressful).

One good thing about panic attacks is that they end. Office life never does, really. Unemployed people have to search for jobs, which is just as stressful; retired people are so worn out by office existence that they often struggle to adapt, after having been institutionalized for so long. For this reason, the year after retirement has an above-normal rate of death from all causes: natural ones, and suicide. Such people have taken themselves out of the office – or, in most cases, been pushed out by ageism – but they can’t take the office out of themselves.

Do It Tensely (…Or Else)

Let’s invert the directive above. Instead of “do it calmly”, let’s examine the distinctively unhealthy advice, “do it tensely.” You now have a recipe for anxiety– but, to its credit, also one for office survival.

One of the most important social skills in the corporate world is mood mirroring. If the executives are tense, middle managers will be tenser, and workers who want to be upwardly mobile will be the tensest of all. Carrying around this anxiety does no good. Nothing gets done faster, and the quality of work is worse than it otherwise would have been. So what’s the purpose of holding this unpleasant mood? In most companies, management – and especially upper management – has no idea who’s good at his job and who’s just collecting a salary. Managers and executives are too busy managing up to pay much attention to what’s happening below them. By and large, it might be only 1 in 5 managers who has an intelligent sense of which of her reports are good and which are bad; as for upper management, two or three or eight levels above the workers: forget it. But they all have opinions. Those opinions are formed largely on appearances and perceptions: body language, verbal intonation, and various mood cues. As an executive sees it, the people who are running around anxious are team players; the relaxed people don’t give a shit.

High-ranking individuals within companies assess their subordinates based on what they perceive to be merit, but they’re usually picking up superficial cues. They respond strongly to social status. Their aversion to cognitive dissonance is strong enough that a person who reliably projects high social status will be viewed as competent and high-performing in any workplace. The problem is that this is hard to do. Everyone’s trying to project high status – it’s less taxing than projecting anxiety – but most people who play this game come across as low-status strivers. Also, groups define status in different ways, and one who projects the wrong kind of status will draw resentment. Being coolly aristocratic might work in a London investment bank; it will inspire anger if you’re a foreman at a steel mill or a programmer in Silicon Valley. Since it takes a long time to discern what kind of social status a group values, and because these tokens are often hard to fudge, it’s more prudent to fall back on the second element: mood mirroring. When the boss seems angry, be angry. If the boss relaxes, relax – but not quite as much as he does, for that’s sometimes a trap. Forget what companies say about themselves being casual, fun, or (gag) family-like places to work. The old “shit rolls downhill” rules – don’t leave before your boss, don’t sit while he stands – still apply in Corporate America.

If you seem relaxed at work, people assume you don’t care. If you’re the odd man out in the theatre of shared suffering, that’s points off. Substance doesn’t matter. Merit doesn’t matter. Merit? What, you think you’re still in college?

Often, this anxiety comes straight from the top. Corporate executives are like toddlers with guns. They can’t do anything for themselves, but they can cause incredible damage, quickly. As bad as it is to suffer their whims, there might be slight comfort (not much) in the observation that their position is also stressful. They hold notional responsibility, but can’t do any of the real work that powers the company. They’re constantly betting on the performance of the people below them – performance they cannot measure, reported by people they cannot trust to measure it. The personal stakes are low – if an executive fails and is fired, his buddies on the board will line up another job – but the anxiety is real. And, of course, executives take what anxiety they have and deliberately amplify it. As they see it, a display of a twitchy trigger finger will motivate the people at the bottom.

Thus, corporate capitalism will always be a game in which one of the most important skills is the ability to display anxiety in deference to the nervous brats up top. Of course, one shouldn’t go overboard. A full-on panic attack is embarrassing. It suggests that one can’t handle the stress. One who wishes to maximize his corporate survival ought to display about 1.2 times the anxiety level of his immediate superior: not so much as to become the long pole in the tent or seem unreliable, but enough to send the sycophantic message, “When you’re upset, I’m upset.”

To wind down (or avoid) a panic attack: find something (anything) to do, and do it calmly. Try to do it well.

To survive at the office: find something (anything) to do, and do it tensely. It doesn’t matter if you do it well.

One might see why office life wears people out. Most people can’t mirror anxiety without themselves becoming anxious. Making an angry face tends to make a person angry; the same applies to smiling, grimacing, and other expressions. Merely holding a facial pose tends to create the emotion. Even professional actors do best “in character” and need time to recover from tense scenes – and that’s when they know the events and emotions aren’t real. (In the office, the fear often is real, because most people live on the wrong end of arbitrary but brutal power relationships.) Being around people who at least have to pretend to be anxious will make most people, in turn, actually anxious.

Of course, though I hold a low opinion of corporate executives, I don’t intend to assert that they want people to be running around with panic attacks. They probably don’t think about that issue at all. (It is rare for a corporate executive to think about anything other than his own enrichment.) That a small percentage of the population would react to standard, mandatory office anxiety in a painful and counterproductive way is, from their perspective, tolerable suffering since they are not the ones who suffer. It is not within their intent, though. They are ignorant assholes, but almost never deliberate oppressors.

This particular divergence between what is mentally healthy and what is necessary to survive the subordination to rich people that we call “work” is not a unique one, and I’d argue that it’s the tip of a seemingly unrelated but fascinating iceberg: the notion of minimalism.

I can’t do the entire topic justice, but what I’ve realized as I’ve gotten older is that people are generally happiest with a few things of high quality, rather than a plethora of low-quality objects (“clutter” or “junk”). A sturdy small house might be better than a troublesome large one, since most people spend the bulk of their home time in a small number of places, while a house that’s falling apart is a constant source of cost and headaches. High-quality objects and experiences, even if few in number, make people happy. Unreliable junk makes people miserable, even if they have a lot of it.

Most people know that, I think. Further, when there are a small number of elements to contend with, one can transform them. Cold, if one is properly dressed and therefore in no danger from it, becomes invigorating. The difficulty and pain of physical exercise can be offset by the design and healthful simplicity of the task: run 10 miles, or deadlift 225 pounds, or swim 50 laps. People do these things for free, with no economic purpose, because they enjoy physical exertion. Yet I don’t know anyone who would move a typical apartment for free. That work is just unpleasant: packing boxes, awkward objects, possessions breaking. Likewise, virtually no one can “transform” a torrent of junk and clutter into something enjoyable.

Office work is the opposite of minimalism. In bulk, it forces workers to contend with low-quality communication, low-quality ideas, low-quality projects, low-quality power relationships and, in most cases, a low quality of delivered work. It’s busy, but it’s inefficient and often produces things of low or even negative social value. This incessant junk generation isn’t merely tolerated, but celebrated. If people weren’t spewing garbage into the stream of ideas, practices, records and discourse – well then, how would anyone know that they were working?

George Carlin decried our society as one where people buy shit they don’t need with money they don’t have. That’s bad enough, but one can opt out. Workplace clutter, on the other hand? Ha. Good luck with that.

Just as office social demands push people to treat minor shortfalls or risks as personal, existential crises, they deprive people of the right to recognize clutter as such. They force people to contend with, and generate, garbage so their managers can put together status reports on which they’ll be judged – not on what was accomplished, but on the number of bullet points and words the report contains.

Largely, this exists because the best way to survive in a corporate environment is to flood the channel. Executives are always looking to cut costs; this usually means cutting people. Savvy corporates learn that it’s best to have complete opacity about how much one does and how much effort it takes to achieve it. Obviously, it’s no good to be seen as lazy; but it’s also damaging to be seen as hard-working, since it can present a person as over-eager, and it can set high expectations. You don’t want executives to know if you work a 2-hour day; you also don’t want them to know if you work a 14-hour day; or if you work a 14-hour Tuesday and a 2-hour Wednesday; or if you work exactly 8 hours every day. There is no upside in being surveilled by people with a hard-on for cutting (and, in many cases, externalizing) costs. Corporates with any sense of self-preservation will ensure that their superiors cannot pinpoint them in time and space.

In the cat-and-mouse game between executives looking to cut costs and workers trying to survive, one of the most common defenses is a channel-flooding attack. Not communicating at all doesn’t work; it’s far more effective, in most cases, to flood executives with so much extraneous information that they can’t help but fall back on the default decision (to delay) with regard to anything that might cause harm.

Consequently, the main activity performed in corporate offices is not meaningful “work” but channel flooding. I don’t expect this to change any time soon.

But imagine how much better a world we’d have if this mandatory waste of time and emotional energy were eradicated.