Techxit (Part 2 of 2)

If you haven’t read Part 1, please do so. The story I’m in the process of telling is not one to enter in the middle. It is too strange for that–– without the scrupulous accounting I have painstakingly provided, it becomes nigh-unbelievable.

If you’re caught up, welcome to the fourth circle. Five more to go.

In case anyone forgot, Nazis are bad. That hasn’t changed in the past 48 hours.

Chapter 13: The Misappropriation of the Nerd Archetype

During its fifty-year reign, Silicon Valley has created one meaningful invention: the disposable company. That is its true product.

In order to understand how we got here, we need to look into one of the more irritating, inexact archetypes of the modern era–– the nerd, stereotypically associated with hyper-intelligence and middle-class authenticity. A nerd is endearingly harmless, straightforward and socially uncomplicated, and vaguely asexual. Nerds are authentic because they only have one mode of interaction–– they lack the social skill of keeping separate multiple versions of themselves. You might find the nerd in your life infuriating, but you can trust her.

In the 2000s and 2010s, this evolved into frank disability appropriation. Software executives–– bullies who swept into the tech industry to exploit nerds–– will often use “aspie chic”, despite being neurotypical, to excuse damage caused by their lack of empathy for other humans. This blurs an important distinction–– between a neuro-socially disabled person’s reduced capacity to appropriately express empathy, and the psychopath’s utter absence of it.

Software executives, for the most part, are people who wanted to be somewhere else. The top third of business school graduates go into hedge funds and develop trading strategies. The middle third go into management consulting or do “soft” work in private equity. The ones sent West to boss nerds around are the leftovers. They don’t like being there, and they view the people working for them as unlikable misfits, but over time they grow to view nerds as a puzzle–– how can this type of person’s earnestness, ego, and social inadequacy be used against him?

One failing of nerds is the desire to avoid “politics” and focus only “on the work”. To say, “I just want to code.” This results in programmers building systems without asking how they’ll be used; it gives us the weapons of mass unemployment, and it gives us the “performance” surveillance inflicted on honest workers.

Nerds, as I’ve noticed, don’t have a lot of leverage in today’s workplace, because they tend to fall behind the curve when it comes to performative emotion. They tend to fail at the effusive over-emotion that American culture expects. Neurotypical people understand that a person’s real job in a corporate workplace is to mirror management’s anxieties, without actually being affected–– to be Xanax in human form. Nerds, to their detriment, are straightforward and legible. They either shut out exterior anxieties (which management reads as disengagement) and focus on the work, or they let the nervousness get inside them, taking a hit to performance. They lack the essential two-facedness that workplace survival requires.

The neurological social ineptitude we observe in people with autism, and in the hyper-intelligent, is not what we find in software executives. Software executives know what the rules and expectations are, and they break them not out of unaware earnestness, but as a means of belligerence. The breaches of decorum, the microaggressions, and the brazen flashes of non-empathy all use the archetype of the nerd as air cover, but these people are something else.

What characterizes the Silicon Valley software executive is a deep-seated contempt for human “softness”–– for empathy and for what makes us human. His dream company employs zero people–– no emotional cooties–– and makes a trillion dollars per hour.

I’m not against technology itself, of course. At a nuclear or higher technology level, post-scarcity automated luxury communism is the only economic system that stands a chance, and we should race to it. Automation and globalization aren’t evil–– we have to do them right, to distribute the wealth decently. We cannot trust the current financio-technological elite to do it right. If we leave the job to them, we’ll watch as they build increasingly profligate toys, migrate to off-planet bases, strip mine the Earth, and leave the bulk of us to die.

Chapter 14: The Disposable Company

A corporation, legally, has all the rights of an embodied person (corporis). It has none of the weaknesses, however, that come with a human body. It is designed to live forever. It cannot be put in prison. It commands such an obscene share of society’s resources that it can become “too big to fail” and stake an economy’s health on its persistence. It is increasingly unaccountable to the nations in which it operates.

That’s not an artificial person. That’s an artificial god. Gods only die in one way: people stop believing. That’s what killed Ereshkigal, Zeus, Thor, and Enron. Financial markets tell us, in real time, how strongly society believes in each god–– and how willing a society is to overlook the failings of the corrupt priests who take for themselves what is sacrificed to these gods.

In the corporate gods, I’m a nonbeliever. For quite some time, though, I bought into the venture-funded technology industry (Silicon Valley). I let myself get duped. Silicon Valley is a god designed for nonbelievers.

There are thousands of venture capitalists, but only a few of them matter, and they mostly live in a small geographical area. The ones in the in-crowd, who can arrange publicity and introduce clients, decide as a group what gets funded, what gets bought at large companies, and what gets shut down. Silicon Valley is a factory for lightweight companies that can be inflated if circumstances demand it, but that can also be scrapped or mined if necessary. If the workers form a union, or if a founder goes to jail for domestic violence, the syndicate of investors will decline to participate in the next funding round, and redirect its resources and clients to another option in that space.

By design, these venture-funded companies cannot survive without a new infusion of cash every 18–24 months, because what these companies require is not a one-time investment. The bosses on Sand Hill Road give them clients and publicity, and hold sway over whether, should the company fail (as most do) to become an independent concern, the founders get a favorable job outcome in the acquisition.

Founders present themselves as entrepreneurs running independent companies, but they function as a middle management layer between the true executives (investors) and the workers. They have no choice but to accept the venture capitalist’s high-risk, aggressive growth plan. If the founders fail to keep the VCs happy, they won’t just lose their companies; they’ll lose their personal reputations as well.

Startups are risky, of course, but if you listen to people like Paul Graham, you shouldn’t fear this risk because even failure will advance your career. No, you won’t become an IPO billionaire this time around. You’ll have to take a time-out as a VP at a FaceGoog, and you’ll have to show up at a place once or twice a week, but you’ll be able to recover on your own terms. That’s not how it works at all.

Founders sometimes get “soft landings”–– most “acquired” companies are six months away from failure when bought–– but not because employers value their experience. When the VCs decide that it’s time for one of their companies to die–– they have no interest in funding it further, or sending it more clients, or pulling strings to arrange publicity–– they understand that founders typically don’t agree with the decision, and have some leverage in the shutdown process. Legally, founders are within their rights to fight, but that would delay the inevitable result, and damage reputations in the process. So, instead, the VCs make sure the founders will land in desirable jobs (e.g., VP-level sinecures at the acquiring company, “entrepreneur in residence” roles) and have acceptable financial outcomes. A startup acquisition is usually a hush fee paid to those who would otherwise air strong opinions–– unfavorable to their ex-bosses, the VCs–– on why the company didn’t work out.

What about employees? What about the engineers who build the product? Oh, they get shanked.

Chapter 15: What Startup Failure Actually Looks Like–– Or: Why Your CTO Drinks

Here’s the picture most people have of business failure. The boss comes in, calls a meeting, and says that the company is defunct and that everyone’s next paycheck is the last. It’s awful, but it isn’t personal. The laid-off worker’s reputation stays intact, and she gets a comparable or better job, because of the experience and contacts she’s acquired. This is what venture capitalists, when they spew claptrap about “embracing risk” and “failing fast”, want people to imagine. Consequently, by this narrative, a startup that is not defunct is nowhere near failure.

In truth, a venture-funded startup’s failure is an ugly, drawn-out process, often invisible to regular employees, that unfolds over years and sinks the careers of innocents by the tens or hundreds before anyone figures out what’s going on.

When a venture-funded company starts to fail, it’s still able to raise money, but it has to get capital from less-connected investors and the terms get worse over time. This is why technology companies are cagey about the details of the “equity” (in truth, illiquid call options on penny stocks) they offer to compensate for low salaries. Deal terms can be mind-bogglingly complex–– I won’t get into that here–– and it’s not uncommon for a startup to be acquired for $250 million while its regular workers, after liquidation preferences and several rounds of financial shenanigans, get nothing.
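To make the “acquired for $250 million, workers get nothing” arithmetic concrete, here is a minimal sketch of a liquidation-preference waterfall. Every number in it–– the amounts raised, the preference multiples, the employees’ share of common stock–– is hypothetical, chosen for illustration, not taken from any real deal:

```python
# Hypothetical liquidation-preference waterfall. Preferred investors are
# paid their contractual multiple first, in order; common shareholders
# (including employees) split only whatever remains.

def common_payout(sale_price, preferences, employee_fraction):
    """preferences: list of (amount_invested, multiple) per preferred round."""
    remaining = sale_price
    for invested, multiple in preferences:
        remaining -= invested * multiple  # each preference comes off the top
    remaining = max(remaining, 0.0)       # common stock can't go negative
    return remaining * employee_fraction

# A $250M acquisition. Investors put in $100M at a 1.5x preference and
# $120M more at 1x: $270M owed before common sees a cent.
print(common_payout(250e6, [(100e6, 1.5), (120e6, 1.0)], 0.10))  # -> 0.0
```

The deal makes headlines as a quarter-billion-dollar exit; the workers’ “equity” pays out exactly zero.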

The failure process of a venture-funded firm occurs in stages. Founders cede control, or initiate “pivots”–– complete changes in what the company exists to do–– and the result is a culture of constant reorganization. Upward mobility is rare because, as the company is forced to accept worse terms to raise capital, executive positions are sold off to friends of investors. Founders of foundering companies insist, while this is going on, that everything’s going exceedingly well, and they blame subordinates for shortfalls.

This results in jobs getting lost, and I’m not talking about layoffs. I’m talking about humiliating terminations “for cause” of innocents. The startup environment is a downhill highway, full of buses barreling down at a hundred miles per hour, and you don’t have to do anything wrong to be thrown under one.

The sociology of a churn-failing startup is fascinating, but for now, just trust me: this is how it always goes. Venture-funded founders do not admit they made mistakes. They blame the people under them. Technology is first to suffer blame, because it’s a soft target. Nerds don’t fight back, and nontechnical investors and bosses and clients don’t understand what they do.

To a young programmer, being a Chief Technology Officer (CTO) seems like a dream job, but it’s actually a high-stress position with a lot of turnover. When the company fails to execute, nerds get the blame.

An old-style company, when it had to lay people off, had the decency to admit that circumstances required it to terminate good workers. Often, firms would work with the press, at their own expense, to ensure that the reputations of departing workers were unharmed. That’s not how Silicon Valley does these things, because it’s run at the highest levels by empathy-deficient psychopaths–– who’ve taken the nerd label to give themselves plausible deniability.

When a tech founder founders, he presents himself as a visionary impervious to mistakes. Alas, his subordinates failed to implement his brilliant ideas. He didn’t fuck up; they sabotaged him!

This industry’s full of 25-year-old companies that claim they’ve never had to lay anyone off. Their history is one of monotonic progress that will never end. Dig deeper, and what you find is that these companies, during bad economic times, laid people off just as non-tech firms do. The difference is: tech firms disguise layoffs as firings for cause or for performance reasons–– protecting management’s reputation at the expense of now-jobless employees.

Technology founders present a mythology in which they either win big (get rich, buy boats) or die as a group. That’s not how it works, though. Startup failure takes 5–10 years to run its course, as it usually involves a slow deflation in the founders’ standing in the investor community–– and these people will macerate workers by the hundreds before they go down. The “prima donna” programmers screwed it up. “Technical difficulties.”

Founders survive startup failure if they do right by their investors. If they shut the company down, when and how their bosses ask them to do so, their reputations stay intact and they can be founders again. Workers? Fired, no severance, often for phony performance reasons to disguise a layoff.

The disposable company’s political appeal is right-wing: no matter how badly the workers are treated, its workforce is unlikely to unionize.

Most of Silicon Valley culture and mythology can be understood as anti-union prophylaxis. Programmers are led to believe they’re getting some “revenge of the nerds” against the girls who rejected them in high school by working on Jira tickets and making low six figures. Workers are pitted against each other–– tech versus non-tech, designers versus engineers, employees versus “red badges” (contractors), old hands versus entry-level–– in order to keep false consciousness strong.

Workers in a venture-funded company know that if they unionize, the VCs will simply nonexist it.

Chapter 16: Post-Truth

Corporate capitalism is a post-truth world.

I don’t love Jeb Bush, but his candidacy in 2016 was ended not on substance but because Donald Trump labelled him “Low Energy Jeb.” What does that mean? It’s hard to say. It doesn’t matter. There need be no factual truth to it. It stuck.

Donald Trump pulled a corporate move in presidential politics, and it worked.

In Chapter 7, I mentioned the Carly Simon Problem. Someone misinterpreted an old blog post and stabbed me in the back–– a reliable job reference turned negative. This raises an interesting question: why are negative references so damaging as to render otherwise excellent job candidates unhireable? It has nothing to do with the hiring manager believing the content of the negative reference “is true”. It’s probably not. In the corporate game of promotions, demotions, performance appraisal, and terminations, there is no truth–– there is only power. People get what they get.

Someone who gets a bad reference is unemployable because he “got got”. Was he bad at his job? It doesn’t matter. Donald Trump illustrated this viewpoint when he slagged a literal war hero, John McCain: “I like people who don’t get captured.”

Reputation, in the world of corporate false consciousness, is an entity unto itself. It need not respect what is true. Donald Trump’s success in life proves this. He has shown no talent in running businesses. He has shown no significant intelligence or creative ability. He has been a degenerate reputationeer for forty years and it has worked. All he needed to kill off most of his political opposition, in his rise to the presidency, was a knack for nasty nicknames.

My personal view is that one should never invest oneself, or trust, in reputation. It’s too easily destroyed. It is a volatile asset, increasingly controlled by the world’s worst people. But it is not only distant, very rich malefactors one must fear. If I were a bad human being, I could render unemployable any young professional I wanted–– with less than $1,000 and in under 48 hours. I won’t get into the strategy, for obvious reasons, but it doesn’t take brilliance. Such a person would be sidelined at his job and, over time, terminated. Prospective employers would Google him, see a slew of rumors and whispers, and pass. Truth doesn’t matter, in the corporate world. No one wants to hire someone with a bad reputation.

Chapter 17: Reputation Management–– Keeping the Gods Alive

Why is reputation so important in the modern economy?

Largely, it’s because the most highly compensated people in our workforce do absolutely nothing. There are workers and watchers–– most white-collar people are watchers who participate (sometimes indirectly) in the buying and selling of others. Those who do measurable work can be tracked, surveilled, and bargained against. The winning strategy is to get an advanced degree, keep one’s contributions abstract, destroy anyone who has the gall to point out the needlessness of one’s activity, and focus full time on the protection and expansion of one’s own reputation.

The most highly-compensated people justify their consumptive existences by saying they “allocate resources”. That is, they do nothing but “solve”, in political and suboptimal ways, problems that could be solved organically by a less oppressive system.

Napoleon may or may not have called England “a nation of shopkeepers”. Thanks to corporate capitalism, we’ve become one of reputation managers. A worker is promoted, demoted, passed over, or fired based on his contribution to his boss’s reputation. The middle manager is likewise rewarded or punished for his perceived effect on the reputations of the executives above him. A CEO’s job is to bolster the reputation of the company he supposedly also runs. Innovation is nonexistent; the work itself hardly matters at all. It’s all about reputation, but why?

Above, I mentioned that the modern business corporation isn’t an artificial person but an artificial god. What can kill a god? Disbelief. Whether a true God exists in the abstract is another discussion, but ethnic gods are beasts that exist in society because they have reputations for existing. Corporations are the same.

The business world runs on reputation–– a product of cognitive laziness, social inertia, and a low degree of respect for accuracy in information. Every white-collar worker’s job is the management of some reputation. This is why a rumor about someone, no matter how absurd or demonstrably false, renders him unemployable. The rumor’s existence shows poor performance in the management of the target’s own reputation. How can he be trusted to defend and expand a boss’s image, if he can’t even control his own?

Nothing is true or false, in the scatological agora of corporate life. There is no good or bad content; all is just content. Things are loud or quiet, amplified or ignored. Rank begets rank. The longest eigenvector wins the right to poke you in the eye, or somewhere else if it so desires.
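That “eigenvector” quip is not just a flourish: centrality measures like PageRank formalize how rank begets rank. Below is a minimal power-iteration sketch of the idea–– the four-node endorsement graph, the damping factor, and the iteration count are all invented for illustration:

```python
# "Rank begets rank": a toy PageRank via power iteration. Status flows
# along endorsements, so an endorsement from the already-ranked is worth
# far more than an endorsement from a nobody.

def pagerank(links, d=0.85, iters=100):
    """links[i] = list of nodes that node i endorses."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n
        for i, outs in enumerate(links):
            targets = outs if outs else list(range(n))  # dangling node
            share = d * rank[i] / len(targets)
            for j in targets:
                new[j] += share
        rank = new
    return rank

# Everyone endorses node 0; node 0 endorses only node 1.
links = [[1], [0], [0], [0]]
print(pagerank(links))  # node 0 ranks first; node 1, its sole protégé, second
```

Node 1’s only endorser is node 0, yet node 1 outranks nodes 2 and 3, whose voices count for nothing because no one amplifies them. Amplification, not truth, is what propagates.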

Chapter 18: Is the U.S. at Risk of State-Level Fascism?

I have no love for Richard Nixon, Ronald Reagan, or George W. Bush. This said, their professional ethics, while in office, were world-class compared to those of the typical corporate executive. Richard Nixon resigned over offenses that, in the private sector, would be everyday office politics. Government, we hold to a higher standard.

The Age of Reason, which began in Europe in the 1700s, led to the institution of rules-based, rational governments operating on laws rather than clerical fiat or the whims of charismatic individuals. The rich have largely accepted rational government as beneficial, the alternative being unpredictable; but in the 1800s, when this led to discussion of rational economy–– also known as socialism–– they did what they could to slam the brakes on progress. National governments could be democratic, constitutional, and legalistic… this would make their operation slow-paced and “boring”, which would be good for business… so long as no one interfered with the boss’s “right to manage” on the factory floor.

Rational government and pre-rational economic principles coexisted for some time, but modern technology has made this untenable. One or the other must go. Which?

The Age of Reason has always had its skeptics. A pervert and not-even-middling French intellectual, Donatien Alphonse François–– also known as the Marquis de Sade–– managed to gain relevance by his inflammatory anti-rationalism. He believed that, given the human thirst for power and delight in the suffering of others, rational government could not exist. (He was wrong.) Donald Trump, our first truly corporate president, has doubled down on anti-rationalism. In a perverse irony, his supporters find him to be “a straight shooter” even though half of what he says is untrue. He uses mendacity as a power move (a business-world trick he has, over decades, perfected) and to some people, by doing so he communicates the only truth that matters–– that he’s in charge.

Fascists do not believe in truth. They only believe in power. Power decides what is true; power makes the rules. Donald Trump was impeached (unsuccessfully) for abuse of power. To a fascist, the term “abuse of power” makes no sense. In their view, we who are out of power are “losers and haters” who hurl the word abuse at power because we do not have it. To a fascist, no rules should exist over power.

Donald Trump is racist, misogynistic, self-indulgent, mendacious, volatile, deleterious, incompetent, and stupid. Is he a fascist?

Chapter 19: What Is Fascism?

I described earlier that no one is truly a nihilist, because meaning voids get filled. A person can be unprincipled, but that is different. Systems can be nihilistic or even destructive of value (cf. corporate capitalism) but, in an individual, nihilism is untenable.

Political nihilism, when observed, has a flavor of might making right. This goes back to antiquity. Trial by combat, on its own terms, solves two problems at once: the party that wins goes free; guilt passes to the deceased. Everyone wins because no one is alive to lose. It’s a great system if you don’t believe in truth.

Fascism, of course, isn’t just moral nihilism; there is more to it than a belief that might makes right. Furthermore, while fascism is fundamentally empty, it presents itself not as nihilistic but as a rebellion against nihilism and relativism. Fascism promises answers. It is decadent and empty but blames society’s decay and emptiness on vulnerable minorities, or external enemies, and by doing this, it fills the failing society’s purpose void with hatred.

Corporate capitalism has little apparent interest in state-level fascism. It is amoral and nihilistic, but it is too lethargic to overthrow democratic societies if there is no profit in them. Much of what drives fascism to emerge in its wake, as it did in 1920s Italy and 1930s Germany–– and as it could have done in 1930s America–– is that people would rather live with a bad purpose than live, as they would under corporate capitalism, with no purpose.

Fascism doesn’t simply assert that might makes right. It celebrates the notion. Like philosophical sadism, it confronts something ugly in human nature (the problem of evil) that stymies well-intended philosophers and theologians and, instead of treating the malady as a flaw to be worked around, embraces it and declares it good.

Every time we encounter another person, we decide whether to cooperate or compete. Societies generate rules to determine whether we favor one or the other. A nation might use a market (competition) to price commodities but institute a basic income or welfare state (cooperation). Representative democracy holds that we cooperate as citizens, but that those who wish to gain and hold power must compete for it. Competition, then, is introduced to make power accountable to the governed.

Fascism is the dual opposite of that. The people are divided against each other, constantly measured and compared, and locked in endless battles for artificially scarce resources. Power, at the same time, unifies. State, cultural, religious, economic, and corporate power congeal into an inflexible fasces.

A fascist society introduces competition to make people accountable to the ruling elite. There will be competition in high places, but it must only be seen from above. Fascism’s ruling class must present a unified front at all times. There will only be one political party, one leader, and one vision for society. All else is the enemy–– the other.

In the corporate world, people with bad bosses think they can improve their situation by appealing to HR or a higher-level manager. I have never seen anyone make that work. Usually, they get themselves fired. To a fascist, the attempt to divide power against itself is unforgivable.

Chapter 20: Why Corporates Might Favor State-Level Fascism

It’s said that if you scratch a capitalist, a fascist bleeds.

Corporates, outwardly, like to play both sides. They take on liberal identity politics and conservative economics, while striving for an image of centrist pragmatism. They will almost always, however, favor a rightward lurch over even modest leftist progress. Why? They view fascism as an in-one-country problem. They will move their families if safety requires it, reallocate capital to take advantage, and wait out the storm. Genuine social progress is more of a threat to their capital and social status, and–– worse yet–– likely to have longevity. What is called “socialism” before it is implemented, people like once it is there, and then it becomes impossible to roll back.

The United States has always associated leftist politics with radicalism, but in our recent history, we’ve faced orders of magnitude more danger from the right. The Weather Underground, at its worst, was a nonentity compared to the horror of the Ku Klux Klan. We live under active threat–– school shootings, theater shootings, church shootings, synagogue shootings–– from a belligerent, far-right counterrevolution the corporates manufactured to divide the proletariat against itself, for the benefit of the ruling class, and to distract people from the widespread, but notionally centrist, looting of society by the executive class.

Why do corporates present themselves as centrists? Frame dragging. They want to nudge the Overton Window to the right, but they do so by holding on to the zero point. Despite Machiavelli, they’d rather be loved than feared. Machiavelli’s advice in The Prince pertained to an individual seeking to block short- and medium-term challenges to his power, but an owning class that wants to hold power forever will prefer, in peacetime, to make itself loved. That is the purpose false consciousness serves. In the event of active conflict, however, they will resort to fear.

The way I’ve discussed fascism may sound bloodless. With my focus on the unification of the ruling class–– and workers competing to serve the masters–– it might seem that I’m downplaying fascism’s end-stage horrors: racism, misogyny, religious bigotry, belligerent nationalism, and genocide. Not so. Those emerge as a matter of course. A fascist leader’s goal is not to rack up a body count per se, but to hold power at all costs. This said, the people governed will not tolerate ceaseless competition without a narrative of expansion. If the poor figure out that they’re being pitted against each other for the benefit of the rich, they’ll revolt. Instead, says the fascist, they’re being prepared for a grand conflict, an upcoming war–– in which they are predestined to win, because of national, racial, religious, or cultural supremacy–– in which the society will prosper and expand (Lebensraum) through the vanquishment of undeserving, “inferior” people.

The narratives in the startup’s corporate playbook are not especially different. The “lean” (understaffed and underfunded, with workers artificially divided against each other) startup is destined to drink the milkshake of its “bloated” competitor because “We have a better culture”, because “All they do over there is play politics”, because “No one over there works Sundays.” Sure, many of the workers–– the weak, the unworthy–– will burn out or get fired; but in the end, the startup will demolish its competitors because of its superior “culture”.

Chapter 21: Masculine Crisis

Economic systems like ours produce epidemics of masculine failure. High-status, rich males never need to grow up (that is, become men); low-status men are denied the opportunity. Men and women lose.

I recognize that I need to tread with care here. I make no absolute claims about men or women. I categorically reject any line of thinking that declares one sex superior to the other, or that encourages the stigmatization, exclusion, or punishment of those who do not conform to sex or gender roles.

It’s an inoffensive, common leftist position that gender is entirely socially constructed, but I don’t think that’s true. Much of it is, of course. That Brian is a boy’s name, and Emily is a girl’s name, that’s socially constructed. That pink is a feminine color, or that truck driving is a masculine job, that’s socially constructed. That said, there are patterns that recur in societies to such an extent as to suggest sex-linked differences in the aggregate–– in probability distributions, even if not relevant at the individual level.

I agree that gender roles do not suit everyone. This said, if one looks at the cultural mainstream, one finds deep-seated attitudes that, right or wrong, will not be abandoned by 90 percent of the population at once. Heterosexual men, in general, want to see themselves as masculine, and are attracted to women they perceive as feminine. Heterosexual women, in general, want to see themselves as feminine and are attracted to men they perceive as masculine. I’m making no statement on what should be–– only one on what is.

Corporate capitalism has a problem. It requires men to live on their reputations. That is not masculine. Subordinate men, in a courtier society, are seen as obsequious and supernumerary. Men do not want to see themselves that way; women are not attracted to such men. For this reason alone, corporate capitalism is unstable.

To be clear, I don’t think women should have to live that way either. I focus on the masculine side of this crisis because that, in my view, is what drives the lurch toward fascism. Men who support demagogues like Trump do so out of rage at their emasculation by corporate capitalism. Women who support demagogues like Trump do so out of rage at the destruction of the men in their lives.

Masculinity is, and will always be, the weak point of hierarchical courtier societies. Traditionally masculine endeavors (hunting, exploring, defending) do not pay. Corporate capitalism must sell the narrative that making money is a sex act. A real man provides, even at the expense of his own comfort. If this means he peddles drugs to children, or builds bombs, that’s what he does. If this means he supports a fascist regime, that’s what he does. Freedom is just another disposable comfort of lower rank than his obligation to “be a man” and provide.

The problem, of course, is that court life is emasculating. Men who earn their coin by subordinating to other men are useless. Women are the reproductive bottleneck, not us. In our role in the reproductive equation, we’re replaceable.

Corporate capitalism tells men they must provide, but only leaves them one way to do it, which is to be emasculated by other men. Eventually, men figure out that they’ve been duped. They get angry. Equally angry are the women who, in a decaying society where male adulthood is increasingly rare, cannot find husbands.

Is it emasculating to be an organizational subordinate? Five thousand years of human history has produced exactly one counterexample, one context in which a man can be subordinate and fully masculine–– the archetype of the soldier.

We see yet another reason why fascists love war.

I could write thousands of words on toxic masculinity, but I’d rather not. It’s disgusting and depressing. Our economic system induces toxic masculinity. The degradation of the feminine distracts men from the game being played against them and women both. At the same time, toxic masculinity is what keeps the corporate system going. Inertia does not suffice to explain it. The corporate system is always busy. It propagates false consciousness, enforces a social hierarchy, resists challenges, and bolsters the image of a hereditary elite disguised as meritocratic ubermenschen. That’s not a conspiracy–– it’s all done in the open, and legal even if its methods aren’t–– but it is a lot of work. Who keeps it going? What motivates the plutocrats and corporate executives who (unlike us) could easily retire from the world’s shittiest mini-game, but keep playing?

For the most part, the system’s raison d’être is to procure sexual access for rapacious, disgusting men. Harvey Weinstein, Roger Ailes, and Donald Trump wouldn’t have a lot to offer women if they had to compete on an even footing with socioeconomically inferior but otherwise superior men like me (and like 99% of my male readership). Corporate capitalism is a way for these odious men, using paperwork and poverty, to disempower their competition.

The reason we do not have national health insurance in the United States is that, in 1947, a bunch of racist Southern senators fought a movement that would have resulted in desegregated hospitals. Millions of people have died of lousy or nonexistent health insurance because a bunch of now-dead, inadequate white men feared losing “their” women to… not the British Broadcasting Corporation.

Chapter 22: Is Donald Trump Fascist?

This might be the only section where I don’t know the answer. Is Donald John Trump a fascist? I don’t know.

He is heinous and bizarre. We could be debating fifty years from now whether he is a fascist or an opportunist. His mental health is questionable, but I’m not qualified to opine on that topic. He seems to have no coherent ideology. There are fascists around him–– that’s clear. There are also opportunists around him. There may be one or two noble souls putting their careers at risk in a sacrificial effort to limit Trump’s damage. As for whether the man holds fascism in his heart, we’ll tackle that some other day.

Objectively, we can say that Donald Trump has normalized behaviors and practices that threaten democracy, making the job easier for any fascist who follows him. What about capability, though? Is he capable of making himself a fascist threat to this country? Three years ago, I would have said, “No, absolutely not.” On that, I must admit my confidence has waned.

Having studied fascism, I would have said in 2017 that Donald Trump would be unable to pull off the requisite image. Adolf Hitler was a wealthy, self-indulgent, flatulent buffoon who had a number of trysts, and Mussolini’s sexual perversions are now legendary, but the public images these men presented were more in line with stoic, traditional masculinity than the flagrantly toxic variety of Berlusconi, Bloomberg, and Trump. It was all a lie, but the Führer presented himself as a simple-living celibate bachelor, “married to Germany”. He himself said that a politician should never let himself be photographed in a bathing suit.

Donald Trump, by contrast, has lived like a clown for his entire adult life. I did not think, in 2017, that such a man could sustain enthusiasm of any kind, fascist or otherwise, for more than a couple of years. I expected his movement to die out as he became part of the establishment he railed against.

So far, time has proven me wrong. Toxic masculinity hasn’t been a liability for Trump. He has doubled down on it, to no cost to himself. Fascism has proven itself protean.

This acknowledged, I will not say that Donald Trump poses no fascist threat to our society. He clearly does. But I continue to believe that he hasn’t taken the most efficient or obvious route to fascism. In 2016, he nearly lost. His approval rating is lousy. If he wins in 2020, it will have had more to do with Democratic incompetence than with any appealing personal traits of his.

All of this said, and recognizing that a fascist can play either to traditional or to Trump’s overtly toxic masculinity, the greatest fascist threat in my view comes not from Trump, but from Silicon Valley. We could see, in 2024, a young technology founder running on an image of centrist competence, with a sterling reputation (because anyone who would say bad things about him has been silenced), who will present himself as “post-political” and an antidote to “these polarized times”. I would imagine that he would avoid the public self-indulgence of Donald Trump, while nonetheless bolstering his personal reputation (at the expense of others) using the same dirty tricks he learned in the corporate world.

Whatever Trump’s fate, what Trump represents will not go away. The corporate class has taken notes, and continues to take notes, on what works and what doesn’t. The owners of everything are watching his deleterious presidency and learning what can be gotten away with. So long as corporate capitalism remains our economic system, we shall always be one bad roll of the dice away from nation-level fascism.

Fascists fight dirty. I know, because I’ve seen how they fight.

For the purpose of this essay, I’m going to call militant fascists Nazis, differentiating them from the abstract notion of a person who might support fascism but not participate in enforcement. The far-right militants I’m about to discuss are not members of the German NSDAP, because it no longer exists. They may or may not be in that nonexistent racial category called “Aryan”, although most of them are white-male supremacists. The people on the intellectual fringe who spout odious politics on the internet, we’ll stick to calling fascists. The enforcers and dirtbags who–– let’s say–– send death threats to leftists and feminists, or who cause people to lose job opportunities they were qualified for, those are the modern-day Nazis. We will have to fight them.

Chapter 23: Panic Disorder (Trigger Warning–– Mental Illness)

If state-level fascism comes to the United States, I will be one of the first to die.

This issue, for me, is not about so-called virtue signaling. Whether I’m a virtuous person, that’s for another discussion. To be in this fight, for me, isn’t a choice.

I can’t become “not political”. In a more liberal time, I wrote political content under my real name. At this point, there is no harm in my continuing to do so. I am an outed leftist. My existence is political. I’ve been doxxed over and over. I assume I have no privacy. I don’t feel like I have anything to hide.

Far-right operatives got me banned from Hacker News and Quora on defamatory pretenses. Far-right operatives have sent me death threats. Far-right operatives have caused me to lose job opportunities even after successful interviews, leading to offers. The Nazis know who I am; they will not forget me.

Of course, I chose to speak politically in the open. There is no such thing as an “ethnic leftist”. To share my views is something I decided to do, not something I was born into. Were that the only factor pinning me inflexibly to one side, in any future conflict with fascism, I could not say “The fight chose me.” I would have to fess up to having entered it.

So, here’s the other part of the story.

I have a chronic neurological disability, manageable but not curable.

March 3, 2008, was an unseasonably warm, sunny day in New York. I was recovering well from an ordinary bout of influenza. Around 2:30 that afternoon, a stabbing sensation erupted in my throat, spreading throughout my body. Laryngospasm. Couldn’t breathe. Tried to drink water. Couldn’t swallow. I was sure that I was going to die, in front of my co-workers, right there on the floor. A woman, able to see my distress, called emergency services.

Diagnosis: panic attack. There was a physical cause to my illness; more on that later.

The second attack, on March 10, was the worst I ever experienced. I had written the March 3 attack off as a one-off, but now I realized I would keep having these things. It came in waves, for 23 hours, until fatigue took over after midnight the next day. During that one, I considered admitting myself to a psychiatric hospital.

I had more, tens or hundreds, over March and April. Often, I could not eat a meal because I could not swallow. After some time, I found a competent doctor, an ENT in Chinatown who found a bacterial plaque, left over from the flu–– an easy problem to treat.

Problem is, once the body and brain are “trained” into the panic process, it becomes a thing that can happen, without warning, at any time. Panic attacks, for the most part, aren’t “about” anything–– nothing in daily life merits such an extreme bodily reaction. These attacks don’t often have clear triggers and, at this point in my life, I don’t think panic is the right word for it. I don’t actually panic. I’ve cycled through the five hundred or so symptoms this horrible disease can throw–– chest pain, shortness of breath, auditory hallucinations, derealization, tachycardia, tremors, tingling, intrusive thoughts, sudden depression, vomiting, akathisia–– and, having survived all of this nonsense, I’m no longer scared of these attacks. It took me years to get to this point, mind you, but they’re more like severe headaches than anything that would cause me to “panic”.

Truth is, if I have a panic attack in public, I handle the episode better than anyone else. I’ve been through it, hundreds of times. I know that these things end.

I won’t mince words, though. A true panic attack is extremely unpleasant. Even now, I’d probably pay $500 not to have one. I would wager that a quarter of the population has had the movie version of a panic attack–– racing heart, hot flashes, mild visual disturbances, nausea and vomiting. I consider that a mere anxiety attack and would put it at 2.25 on the panic scale, as I’ve come to know it. At 4, the level I have about once per year, we’re talking about symptoms that would put a civilian in the emergency room–– if he could form the words to get himself there. At 6, every system in your body’s screaming, and you’re begging God for your own quietus… and you’ll be sore for a week afterward in muscles you didn’t know you had. As for 8… well, an 8 compares unfavorably to a bad salvia extract trip. It’s worse because, at the end of it, you know that it came from you, not some stupid chemical you ingested.

I haven’t had worse than 5 or so since 2010. In my experience, this sort of thing gets worse, and then it gets better.

Chapter 24: One Hit Point

I mentioned my mistake, in summer 2008, of leaving finance.

It became clear that I was not suited to work at a trading desk. By necessity, prop trading is done in a noisy, open-plan environment. I despise the software industry’s use of open-plan offices–– for programmers, they are unnecessary and qualify as hostile architecture–– but there are a couple of jobs that necessitate them. I’ll defend trading firms for using open spaces, because seconds matter in that game, and a traditional office layout would be untenable.

What irony that I left finance because of open-plan offices, just before the plague of Agile de-professionalization, one-sided transparency, and (of course) open-plan fetishism hit the software industry.

I never had an attack as bad as the second one, on March 10, 2008. The dreaded Big One that would render me permanently insane, never came because it does not exist. That said, attacks continued to come.

A severe, punishing experience leads your mind to look for patterns, even if none exist. This produces phobias. If you have an attack on a crowded subway, you might mistakenly attribute it to the environment. At bottom (autumn 2009) I was a shut-in. Home was safe. Work was mostly safe. I could go back home (Pennsylvania) with preparation, but I’d sometimes have a nasty attack on the train. I didn’t date, because any time my heart sped up, the fear of an attack (anticipatory anxiety) would hit me. “Safe spaces”, as they do, got smaller and smaller, because no such thing exists. As Confucius said: Wherever you go, there you are.

No one ever intends to become a shut-in, to become agoraphobic. It happens one day at a time. To have panic attacks on a regular basis produces lethargy, apathy, and aversion. Dysfunctional cognition and self-reinforcing superstition accrete over time. Eventually, the entire world feels unsafe for no good reason.

It was not easy, but I built myself back from scratch–– recovery from 1 hit point. Limit break after limit break. I re-established the confidence to do ordinary things. I started dating again and got married. There was a first-again airplane ride, a first-again ride on a bike, a first-again drive, a first-again long hiking trip. I rebuilt myself from zero and, in the end, built a better self than what had been there before.

Petty phobias disappear when you beat the monsters, as I have. Public speaking is said to be the number-one fear of most people. (Death ranks second.) It’s not an issue for me. I took up scuba diving in 2015. In 2018, I swam with sharks (no cage) under 78 feet of sea water, off the coast of Honduras. That’s not as dangerous as it sounds, but it seems to impress people, so we’ll use it.

I must speak on the issue of safety. I can drive during an attack–– it’s unpleasant, but it’s not unsafe. However, there are things, given my diagnosis, that I will never do. In open water, I can safely ascend from 130 feet (4 minutes) in the event of unexpected neuro-adrenal fuckery. Cave and wreck diving, those are out for good.
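For the curious, the arithmetic behind that four-minute figure is simple. Assuming the conservative recreational ascent-rate limit of 30 feet per minute (my assumption here–– training agencies’ numbers vary):

$$t = \frac{130\ \text{ft}}{30\ \text{ft/min}} \approx 4.3\ \text{minutes}$$

The same rate, half a foot per second, is also where the “160 seconds from the surface” figure for an 80-foot dive, quoted later, comes from.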

Chapter 25: Fearlessness (?)

The petty fears that restrain most people do not faze me.

It’s said that death and public speaking are the human creature’s two biggest fears. Death, I haven’t done yet, despite some half-hearted attempts by others. I can’t speculate on how much fear I’ll experience when I get to that point. In the abstract, I have no dread of it. If there is a hereafter, I look forward to meeting it; if there is not, I will not exist to be disappointed.

Public speaking, that one’s easier. I like it. I’m good at it.

Funny thing is, stress itself doesn’t cause the dysphoria that turns into panic. I can handle swimming with sharks, biking in heavy traffic, and the physical sequelae of an extreme workout. When there is purpose to the stress, I handle it well–– better than most people. It’s gnawing, pointless stress that angers me.

Inoculation against extreme, underworld-level fear has left me immune to the petty fears that rule most people. That is an asset in life. In corporate undeath, it is not. One achieves social success in the corporate world by mirroring management’s anxieties without becoming affected–– because if one becomes as dysfunctional as they are, one will be unable to perform. I am not good at this. I can, as corporate managers might desire, induce fear in myself based on minor discomfort and meaningless shit–– I am diagnosed as having a brain far too good at that–– but I have learned that it is unhealthy.

I’ve faced my own death, thousands of times in a body-brain mock execution, and quite a few life-threatening situations I haven’t talked about. Given this, I can’t force myself to care about “Sprint 31”. If a director’s worst fear is explaining to his VP that his software, version 7.0, doesn’t support the blink tag… he and I are not going to relate well.

I’m terminally one-faced. Mirroring another person’s anxieties without being affected by them, which is the most important office survival skill, is one I lack.

I don’t handle the open-plan office well.

Stress? Under 80 feet of water, surrounded by sharks, with a compressed-air canister at my back, I’m fine. Diving is pretty safe if you follow the rules and keep your wits about you. Worst-case scenario, I’m 160 seconds from the surface. Giving a presentation in front of hundreds, on three hours of sleep… no issue. If my nervous system bitches out, I’ll play it off as a headache.

The mandatory 9-hour economy-class flight from nowhere to nowhere, five times per week, is not physically stressful. Its main demand is that I sit in a chair, to be seen by other people, and hold in any farts. Hardly Herculean, that. The problem is not the level of stress–– it is that the stress is so pointless.

Chapter 26: The Open-Plan Virus

I won’t opine on Jordan Peterson’s lobster theories, but it is true that we as humans are attuned to social status. Public speaking is stressful, but it’s a positive stress–– the stress of giving a compelling presentation, of having something to say that merits the high-status position of being the speaker. There is a job to be done; there is a point to the stress.

Office culture is not illegible. To be visible from behind is a sign of low status. Though tech companies boast of their “egalitarian” office architectures, the truth is that you can figure out exactly who matters and who doesn’t by counting lines of sight. Yes, the managers work in the open space, but they all have walls and windows at their back. This is how the company shows they are trusted, supported, and (no pun intended) backed up. The people whose monitors are visible to the largest number of people are the most disliked, least trusted, and first to be fired.

Additionally, the claim that open-plan offices are egalitarian is infantilizing, because executives can come and go as they please, while workers cannot.

Open-plan offices are not productive. People get less done, perform worse on tasks requiring concentration, and get sick more often. Technology executives cite “collaboration” as a reason for using these horrible offices. That’s bullshit. The topic has been studied to death. People do not become more collaborative when they are enervated by constant unwanted visibility and contact. In truth, these offices breed low-grade hostilities due to noises, odors, and invasions of personal space.

What’s the real reason for technology executives to prefer open-plan offices? Never assume malevolence where ignorance suffices; I think 70 percent of it is that these offices are cheap, in all senses of the word. Another 10 percent is showmanship. The open-plan fetish began in the startup world as a means for founders to showcase how many busy nerds they have working under them. In this light, open-plan programmers are valued as carbon-based office furniture more than for the code they produce. (Having seen the quality of code these startups produce, I… nah, let’s skip this one.) A further 10 percent is classic managerial malignancy: control and surveillance. Finally, the last 10 percent of the motivation is the diametrical opposite of “fostering collaboration”. If personal space becomes another artificially scarce resource for the proles to fight over, they will grow to loathe each other’s company, and this drives to zero the probability of their cohesion around collective interests.

Open-plan offices, for programmers, exist to humiliate them–– to remind them that they’re unimportant and untrusted.

In that environment, my skin crawls, because I feel like I don’t belong, because in fact I know I don’t belong. In an open-plan, Agile Scrotum software shop, where it’s normal for people to interview for their own jobs every day as if they were interns or on PIPs, I feel like an adult sitting at the kids’ table.

Chapter 27: Trump’s America

A lot happened in 2016.

Far-right attacks on my career became common. I had to start hiding my tracks. I skipped a couple tech conferences because I couldn’t safely go to the cities where they were held. I was assaulted twice.

By this point, I was planning a “techxit” from the private-sector software industry. I had a strategy that would have probably worked but, due to post-2016 dysfunction in the public sphere, was unsuccessful. During this time, I joined one of the so-called “artificial intelligence” startups, a venture-funded outfit in Reston, Virginia, as a software engineering manager.

I’ve wrestled, over the past month, with the question of whether to name this company. Its founders are absolute fecal garbage. If I could name them without collateral damage, I would. If a time comes when that is the case, perhaps I will. The operation was one of chaos induced from the top by a culture of childish management and dishonesty to investors and employees alike. Why have I chosen not to name this company? Past experience.

The problem, when you slag a company, is that the people responsible get off. Barring a criminal conviction or an eight-figure lawsuit, the scumbags will always be supported and protected by other scumbags. The people responsible for a company’s terrible culture, they get off scot-free. The ones at risk of being hurt are regular workers–– fellow proletarians–– who did nothing wrong, but now have a tarnished name on their résumé.

Soon enough, I will expose by name an unethical organization (not an ex-employer) because it will be in the public interest for me to do so. In the case of this so-called “artificial intelligence” company, I see no public-interest reason to name it. I have already made its investors aware of the founders’ unethical behaviors. I have done my job.

I ran a team of 17 people, and I must say this. The people working for me were professional, capable, intelligent, and all-around great to work with. It was a pleasure to have such a high calibre of people under me, and I would hire any one of them again. Not one of them is at fault, in any way, for the ethical faults of the company where they and I worked.

Like most software companies in the late 2010s, this outfit used an open-plan office and discouraged working from home. The environment was tolerable, for a while, because as a manager I had the right to use one of the unallocated side offices (“breakout” spaces). As a supervisor, I sometimes used it for one-on-one conversations. As a person who diligently tried to excel at his work–– and did, in fact, excel at it–– I sometimes used the side office to get my job done. As a person with a neurological disability, I sometimes used it to ride out an attack.

Chapter 28: Open-Planic!

Panic attacks, as I’ve said, aren’t “about” anything, although patterns exist. Phobias develop because anxiety about panic attacks can, itself, induce panic attacks. Trying not to have a panic attack can, in fact, cause one.

Open-plan offices and micromanagement (Agile) exist on the theory that petty inducements of anxiety nudge the lethargic and unmotivated into marginal employability. That may be. I’m not an expert on the lethargic and unmotivated. For self-motivated people like me, though, those petty insults reduce performance. We don’t need to be watched; we’re at our best when left alone.

Such offices create anxiety in the neurotypical; in people like me, they feed into the anticipatory anxiety, the anxiety about anxiety. Have a panic attack in your living room, and you’ll probably be fine in an hour. Have a panic attack in an open-plan office, and you’ll be working somewhere else in six months. Perhaps they’ll find a way to fire you. More likely, you’ll be demoted and gimp-tracked. I’ve had it happen to me; I’ve seen it happen to other people. Bosses hear “panic attack” and do not think “manageable neurological problem” but “personal weakness”.

I dislike the term “mental illness”. I think it gets a key thing wrong. It is not the mind (or, if you will, soul) that is sick, in a person with depression, bipolar disorder, generalized anxiety, panic disorder, schizophrenia, or any other of these terrible diseases. We’ve moved beyond the four humors, and it’s time to move beyond sick souls.

These diseases are physical, but have mental symptoms. Thing is, we know today that most physical diseases have some mental symptoms. The lack of a clear causative mechanism does not merit a leap into superstition and stigma. The truth is that disorders like mine deserve the status of “boring” health problems like atrial fibrillation or cluster headaches. They’re unpleasant and can be dangerous, but they deserve neither the romance nor stigma assigned to them.

One of the reasons panic disorder gets easier to deal with, over the years, is that one learns that the attacks are physically harmless. I’ve had them while driving–– hellish, but not unsafe. Problem is, in the corporate workplace, a panic attack is not harmless. It can become cause for the bosses to presume personal weakness or reduced leverage, leading to termination or reduced opportunities–– gimp-tracking.

The rule of the open-plan office is simple: don’t panic. Don’t panic. DO NOT PANIC. Don’t panic. Panic? Don’t panic. Don’t panic don’t panic ¿panic? don’t panic they’ll see you they’ll judge you. DO NOT PANIC WHAT IS WRONG WITH YOU. Don’t. Don’t. Don’t don’t panic. DON’T PANIC YOU CRAZY MOTHERFUCKER. Breathe. Not so fast. Not so slow. Breathe. If you forget to breathe you die. If you breathe too fast you panic you lose your job. Don’t. Don’t panic. Stop staring it’s creepy don’t panic. Stop it stop the panic this cannot happen here it is getting the better of you. Don’t panic don’t panic stop your panic they all see you. They all see you. You haven’t written a line of code for ¡¡¡13!!! minutes you panicky broken motherfucker you soon-to-be-jobless motherfucker they see you they see you as a they see you they ¡see! you. Panic, don’t. Don’t. PANIC. You cannot grasp the true form of… et cetera et cetera.

One might ask: is there medication for this sort of thing? There is. High doses of SSRIs reduce the frequency and intensity of the attacks, although the side effects are unpleasant. Benzodiazepines are a good short-term treatment, but they’re not a panacea. For one thing, it takes time for them to have an effect–– you take the drug to put an upper bound on the attack’s duration, and to smooth your recovery, but there is no way to abort an in-progress attack. You still have to get through the next five or ten minutes.

Furthermore, regular use of benzodiazepines (say, for prophylaxis rather than treatment) carries a high risk of tolerance and dependency. These drugs are a lifesaver when needed, but addiction is hellish and I do not use them except when necessary. My first-line prophylaxis, if I begin to feel raw at 2:00 in the afternoon, is to take a side office. Perhaps the attack will never come. If it comes, I’ll ride it out and get back to work as soon as I can.

At this particular company, the fake-news AI company in Reston, that’s what I did: used a side office. Most of my team was remote, so it didn’t matter where I worked. Other people began to use side offices, too. I did not intervene; who was I to say they didn’t have a legitimate reason for using them? (I hate open-plan offices and think everyone has a legitimate reason to break away.) Anyway, executives took notice of people using the side offices to get work done, and HR got involved. I was labelled the one who “started the trend”, which I suppose technically I was.

The CTO and my immediate manager pulled me into a meeting, late in the afternoon on January 25, 2018. I was admonished about the use of side offices, even though I (and possibly others who used them) had legitimate reasons to do so. I was told that the CEO had “concerns” about the frequency of my doctor visits–– not for panic disorder, but for a physical problem that would later turn out to be gallbladder disease necessitating emergency surgery. Changes, therefore, would be made to my job duties.

Some of the changes I agreed with. I had a large team doing complex work and was excited about the prospect of running a smaller team and becoming an “individual contributor” (non-manager). I counter-offered with a proposal that was mostly identical, except that my non-managerial contributions would be in data science–– a natural fit for an AI specialist at a company that claims to do AI. The CTO’s refusal of this offer, and his explanation, made it clear that he was aware of my disability and presumed lesser leverage on the job market–– the ol’ gimp track.

Recognizing the obvious demotion, I confirmed in writing that I suffer from a disability, but would continue to give advance notice of doctor appointments (which, as I planned, might be interviews).

I was fired the next morning. Illegal? Yes. Expected? Not so quickly, no. Usually, when a company wants to fire someone for an illegal reason, it offers severance in exchange for an agreement not to sue or disparage the company. This firm, instead, decided to place a bet on extortion.

Hold on to your hats. Keep your arms, legs, and tentacles inside the car at all times.

Chapter 29: Octopus Royalty

Not long after this, I spoke to an attorney. She described my case, barring perjury, as a slam dunk. The problem, of course, is that “barring perjury, 100 percent” does not mean a sure thing.

Suing an employer is not like suing a tire manufacturer. You’re going up against an organization that can–– and, knowing the founders of this company, I am sure they would–– threaten people’s jobs to make them lie about your performance and professional ethics. Unless you think a seven-figure award is possible–– and for a highly skilled 35-year-old whose disability is mild and intermittent, that ain’t likely–– you are often better off to make like Frozen and let it go, especially when the adversary is a startup that has the option of just not paying. If you win a judgment against a FaceGoog, you’ll probably collect. Against a money-losing startup? Remember what I said about the disposable company. It’s hard to collect on a judgment after suing a hole in the ground.

Nonetheless, the company perceived it had a lot to fear from me, so they made threats–– the usual negative publicity and frivolous litigation, nothing I took too seriously. What I did take seriously was my ex-manager telling former colleagues things about my departure that were not true. I informed him that I would not tolerate illegal, defamatory statements.

Threats continued. I dug up what I could about the founders and executives; they dug up a few things (minor shit) about me. Most of what I found doesn’t matter and is not well-enough sourced for me to get into it, even without naming them. I will only say this. One of the people involved in their extortion effort, I was able to link to a racist, far-right organization that advocates violence.

Great. Nazis in my life.

Chapter 30: Techxit Achieved (?)

That was how 2018 began. After that, I did some consulting, some weightlifting, and some work on Farisa. The next part of this story occurs mainly in May 2019.

Before 2016, this would have been front-page news in the Washington Post, the kind of scandal that would have led to public resignations of top management. Now, we’re so used to public dysfunction that I don’t know if it even registers. But it’s my story, so I’m going to tell it.

Given my views of the technology industry, it shouldn’t be surprising that I tried to get out of it. In April 2019, I applied for a job at the MITRE Corporation, a federally funded research and development center (FFRDC) in McLean, Virginia.

For reasons that will become clear, I will stick to what is factual.

My application led to a phone interview on April 26, one that was mutually successful. I left the conversation excited about opportunities MITRE had to offer–– about returning to R&D. MITRE invited me to an in-person interview on May 10. In about three days, I put together a presentation on set theory and why it matters to computer science. I felt like I did very well, and MITRE seemed to agree. On May 13, I received a job offer from MITRE, to join as a senior simulation and modeling engineer on June 3.

I accepted the offer that day, and put in my two-week notice at my then-employer on May 17. So far, standard job-change story.

I was thrilled to be out of the corporate world. Agile and the short-term nonsense, no more. Instead of working on sprint work for which I’m more than a decade overqualified… I’d get to work in R&D again.

Techxit successful? Or would this be like Michael Corleone’s “just when I thought I was out” moment?

Chapter 31: Nazis in McLean

“Out” leftists, feminists, and antiracists deal with people on the far right who follow our careers and try to interfere. In the startup world, it’s 50–50 whether you’ll still have a job after that happens. It’s not that employers are dumb enough to consider right-wing keyboard warriors a reliable source–– they just don’t want to take a stand.

I do not consider radical the assertion that MITRE, an FFRDC that relies on the trust of the government–– the trust of the American people–– should be held to a higher standard than a fly-by-night tech startup.

“Mr. W” would have been my manager at MITRE. Between May 14 and May 30, a leak of information occurred: a far-right operative–– likely the one discussed in Chapter 29, because no one else fits the pattern of motive and opportunity–– informed Mr. W that I suffer from, and have been treated for, panic disorder.

Mr. W emailed me on May 30 to arrange a time when we could discuss what he falsely represented as a benign administrative detail. “Nothing to worry about,” he said.

We spoke at 9:00 am on May 31. He informed me that he had become aware of my disability status. He said, “I don’t see you ever getting a security clearance” with a diagnosis of panic disorder. (More on that, below.) Moreover, since I did not disclose my diagnosis–– I was never legally required to do so–– he rescinded the offer.

I will not argue against the federal government itself having the right to apply increased scrutiny, on the matter of security clearances, to people with psychiatric diagnoses. When lives are possibly at risk, the rules are different.

MITRE is not the federal government. Not all the work it does requires a security clearance. It is legal for a government contractor to terminate someone who applies for a security clearance and fails to get one. It is legal for a contractor to make an offer contingent on a clearance (a conditional job offer, or CJO). It is not legal for a hiring manager to discriminate against people with disabilities on the suspicion that they might take longer to clear.

I don’t know Mr. W’s politics. Is he a far-right operative? A Nazi? That’s between him and God. As I said, I’m sticking to the facts. Some time between May 14 and May 30, he spoke to a Nazi and made a decision based on information furnished by said Nazi. Whether he is guilty of mere irresponsibility, or bears blame for something deeper and more shameful, I do not intend to say. Here, it is essential that I stick to the facts.

During the conversation of May 31, Mr. W mentioned that he could only get away with rescinding the offer because of “our current political time.” Trump.

As it happens, I had a full SF-86 looked over by one of the nation’s top security clearance attorneys. I have had no recreational drug use (including alcohol) since March 2008. I have no criminal record, no financial mishaps. Foreign contacts, the attorney said, might be an issue. My health problems, he told me, would be “absolutely no issue” for the level of clearance under discussion, and “would not significantly delay” the process.

It’s possible that the illegal rescission of the offer was motivated not by the stated reason, but by a far-right infiltration of one of the nation’s most important government contractors.

Either way, it was insanely fucking illegal.

I would, of course, put the probability of Donald J. Trump’s personal involvement at zero. I don’t think he makes time on his calendar to call up MITRE and screw over leftists with mild disabilities. I doubt he even knows, or cares, that MITRE exists. But he normalized the might-makes-right moral filth of corporate America, and brought it into the public sector, and by doing so, created a problem for me.

In May 2019, a literal outed fascist emerged from the woodwork to attack my career by sending true (I do suffer from panic disorder) but irrelevant information to Mr. W. This led to MITRE allowing the illegal rescission of a job offer made to a non-radical, non-violent leftist with a job-irrelevant disability.

The Nazis won.

Epilogue 1

Government, until 2016, was supposed to be immune to the bush-league chicanery we encounter in the startup industry. Illegal terminations and illegal rescissions of job offers are not supposed to happen there. Today, all bets are off. Be afraid.

I’ve studied fascism. I know what it means when a government tells people of a certain kind that they now live under a five o’clock curfew. I know what it means when people like me have job offers rescinded. Civilization’s enemy, fascism, starts with minor stuff–– boycotts, union-busting, blacklisting–– before it builds up a seven- or eight-figure body count.

It’s difficult to predict which ethnicities and minorities will be targeted, in what order. We know that fascism takes the accessible first. It hits unionists and leftists and feminists–– people who speak out. It attacks people with disabilities–– whom it perceives as weak.

It’s tempting for people in the non-aligned majority to take comfort in the notion that they need not outrun the bear–– they need only outrun the other guy. For me, the fight’s not optional. I am that other guy.

The metaphor of a bear is not adequate, however. A bear, once sated, will cease to feed. Rather, what advances is a rising tide of ethical failure–– a saturated, soaking mud of moral filth that, if not opposed, will drag civilization to oblivion.

I have about an hour of video, audio, and picture media pertaining to the matters discussed in Chapters 27–31. There’s plenty of detail that, for the sake of brevity, I haven’t shared, but that makes my case even stronger.

MITRE’s illegal rescission of my job offer is exactly the sort of thing that happens before a far-right flashover. In the battle against the far right–– against fascists and Nazis, against infiltrators of trusted institutions–– we are at eleven fifty-nine and a sweeping second hand.

Epilogue 2

It wasn’t easy to tell that story. Thank you for taking the time to hear it.

The word count nears novella territory. Unfortunately, I could not have told this story in two thousand words. I doubt I could have told it in six thousand. At that brevity, it would have read as a paranoid rant. Extreme claims require justification and analysis. Consider post-2016 politics from a pre-2015 vantage point: some things are hard to believe unless every detail is backed up.

It is not with pleasure that I write on an existential threat to this great nation, and to civilization and its future. It is not with pleasure that I write on the probable infiltration by far-right militants of an organization that relies on the trust of the federal government.

We have a world to win; we also have one to lose. We do not have to live in a world where experiences like mine, relayed above in twenty thousand words of horror, are the norm.

For the love of God, fight.

Techxit (Part 1 of 2)

(For Part 2, go here.)

Nazis are bad. This is going to be a plot point, much later in this essay, so if you weren’t aware of the fact, write it down: Nazis are bad.

Chapter 1: A Kind of Reckoning

Some stories start with mistakes. This one does. In the summer of 2008, I left a lucrative career in finance to join a technology startup.

At the time I did this, I believed strongly in technological capitalism. I figured we were 20–40 years away from a post-scarcity society in which to be “poor” meant sitting on a two-week waiting list to go to the Moon. We, the programmers who implement human progress, were the good guys.

Our record shows that we’re not. We created fake news. The companies we create–– and, because our purpose is to unemploy people, those are often profitable and draw attention–– have juvenile, toxic cultures. We’ve normalized witch hunts over trivialities, and people lose jobs over jokes about devices called dongles. We’ve built this so-called “new economy” in which recessions destroy workers’ finances and careers, but recoveries are jobless.

Our major contribution to the world, as private-sector programmers, is to push the balance of power between takers (capital) and makers (labor) in the wrong direction.

We have built an empire of garbage. It has not been pleasant for me, in my 30s, to come to the realization that I have unwittingly chosen a career path in opposition to the welfare of society.

What I plan to do with my life, that’s for another day. I’d like to have Farisa’s Crossing ready for publication in early 2021. The project’s been a lot of fun, a lot of work, and I can’t wait to have a finished product. I should be honest about its prospects, though. It’s a very high-potential book, but some of the best writers I know (people who will be remembered, I am sure of it, in 100 years) are still unable to subsist on book sales. So, I have kept my mathematical and computational skills sharp. I have no intention to abandon those. I enjoy programming quite a lot, and I’m still good at it, so long as I’m working on a real project rather than Jira tickets.

The software industry itself? I’ll be honest. I’d rather starve than work in another company where “Agile” is taken seriously. It’s not that I imagine COVID-19 to be a lot of fun, but at least I’d only have to do it once, not every two weeks.

I have written about 250,000 lines of code in my career, in at least 20 different programming languages, and in spite of all this, the sum contribution of my work to society comes out in the red. It doesn’t matter what technology can do. It matters what it does. We need to stop fantasizing about our 200-line open-source monad tutorials somehow advancing the state of human knowledge enough to cancel out the harm done by the WMUs (weapons of mass unemployment) we build at our day jobs.

Over the past 30 years, the balance of power in our society has shifted toward capital and away from labor, toward employers and away from workers. We can’t blame all of this on politics; someone taught the machines how to run the dystopia. This means: we’re the bad guys.

Chapter 2: Understanding Automation

We have been here before. Ill-managed prosperity caused the Great Depression, and it caused the rise of fascism.

In the first two decades of the 20th century, we became far better than we’d ever been at making food (nitrogen fixation). A boon, right? What could go wrong? Capitalism does not handle boons well. In 1920s North America, the pattern unfolded like this: prices for agricultural commodities declined, bankrupting farmers and communities that served them, leading to cascading rural poverty, which eventually reduced demand for the products of industry, and finally this became known as “the Great Depression” when it tanked the stock market (October 1929) and thereby hit rich people in the cities.

What happened to farming, to agricultural goods, in the 1920s… is happening again to all human labor.

The global rich prosper. Everyone else suffers under malevolent mismanagement and a concentration of power that would not be possible without the tools that we, as programmers, have built.

Not all markets have legible, objective moral states. I do not think it is of great ethical importance whether a tube of toothpaste costs $3.49 or $3.59–– it seems that supply and demand can be trusted to figure that out. If God exists, she likely has no opinion on what should be the price of palladium or platinum. We are not entitled by divine fiat to a $47 price on a barrel of combustible hydrocarbons that, in any case, we ought to be using less of. Markets determine exchange rates of various assets–– how much of one thing is worth how much of another–– and most of these exchange rates do not carry primary moral weight. But one does.

The exchange rate between capital and labor–– between property and talent, between past and future, between the whims of the dead and the needs of the living–– has a clear, objective, morally favored direction. That’s the one, if God exists and has a will aligned with the health of human culture, that matters.

As I’ve said, quite a number of new technologies push the balance of power to favor employers, not workers. This is objectively evil work.

When I look at how life has changed in the past 20 years, I don’t think the smartphone is more than an incremental improvement, and I’m not impressed by the eggplant emoji or the $1,500 embarrassment that was Google Glass. What most people have experienced is an increase in their feeling squeezed, and it’s not just a feeling. The major contribution of private-sector technology to daily life has been a slew of surveillance tools sold to health insurers, authoritarian governments, and employers.

There is an open hypocrisy at play in the workplace. A worker in constant search for better options will be disliked–– he’s “not a team player”. That seems fair. No one likes someone who’s only out for himself. Yet, companies expend a considerable share of resources to figure out which workers can be replaced and how quickly. There are people in our society who collect a salary by finding ways to take salaries from others in the companies where they work. Doubly weird is the expectation, within the so-called corporate “family”, to treat these people as teammates rather than adversaries.

A worker who changes jobs as soon as a better offer comes along is a “job hopper”. He’ll get bad references and rumors will spread that he failed up or was fired. Yet, our employers spend a significant fraction of their funds (wealth we generated) looking at us from every angle to see whether we can be replaced.

Social media has played a central role in this dystopia. We now live in a world where one needs a public reputation–– an asset that 99.9 percent of people should not want, because reputation is an asset easily destroyed by some of the world’s worst people–– to get a job. Gone are the days when anyone able to speak in complete sentences could call up a CEO and talk his way into a high-paying position. In today’s world, it’s impossible for workers to reinvent themselves–– every detail can be checked, and people who opt out (who don’t have “a Facebook” or “a LinkedIn”) are assumed to have something to hide.

Social media promises a path to influence, but for employers its purpose is to ratify the lack of influence that most people have. In the old world, a terminated employee got three months of severance and glowing references, because a boss never knew if he was letting go someone who had powerful friends and could bring the pain back. In the new world, an employer can look up a target’s Twitter feed, see a lack of blue-check followers, and confidently presume that person to be in the 99 percent of people who can safely be treated like garbage.

Chapter 3: Tech–– Not Even Once

I mentioned before that I left a lucrative career in finance, in 2008, to join “the tech industry”. This was, financially, a seven-figure mistake. Possibly eight. It was the stupidest decision I ever made, and I assure you there’s a lot of competition for that distinction.

Private sector technology (“tech”) is not a career. There is no stability in it. You are only as good as your last job; your job is only as good as your last “sprint”. Unless you become a successful founder, you will not be respected. You’re a thousand times more likely to end up like me–– 36 years old with no clear path to where I want to be–– than even to become a modest millionaire.

You might think, like I did, that you’re going to beat the odds because you’re smarter than the average hoser. Not so. Compared to the people in charge of this industry, I’m a black swan seven-wingèd eidolon of merit. It does not fucking matter, how smart you are.

Your IQ doesn’t matter because you’re not going to be using machine learning to cure cancer. You’re going to be working on Jira tickets to build a product that corporate executives will use to unemploy fellow proletarians. Any idiot can do that kind of work. Furthermore, at a salary higher than idiots can get elsewhere, many idiots will try. Unless you are 21 and have no obligations, quite a few of those idiots will be able to work longer hours than you.

Private-sector technology is not “meritocracy”. It’s a fart in a cave that has not ceased to echo.

I’ve had the whole spectrum of tech-industry experiences. I’ve worked at companies that have failed. I’ve also worked at companies that succeeded, whose founders went on to fail those who got ’em there. At a “Big 4”, I worked for a manager with an 8-year track record of using phony performance issues to tease out people’s personal health issues, which he would blab about to colleagues. (I was told that he was fired for this, but after a five-year absence, he returned to that company.) As a middle manager, I sat and listened as two executives threatened physical violence on someone who reported to me (someone who was, in truth, quite good at his job) because of unavoidable delays on a project. One of my favorite people (of note, a black female) was harassed out of a company–– her manager, a personal friend of the CEO, was not fired, and went on to be a VP in his next job. I’ve seen tech companies offer the same leadership role–– title and responsibilities the same–– to multiple new hires, with the intention that they fight for the job they were promised. In March 2012, I was fired for refusing to commit a felony that would have cost its victims hundreds of thousands of dollars. In the mid- and late 2010s, I got death threats related to this blog–– and (as a public leftist) my name became known to some scary far-right fuckers–– a topic I’ll cover at length in Part 2.

All of this, and for what? Nothing.

Yes, I know how to program. I have taste, and I have the (rare, apparently) skill of knowing how to do it right. I can talk a great game about functional programming, artificial intelligence, and programming language design. I have a solid understanding of what the various abstraction layers (e.g., the operating system) are doing. Here’s the problem. My peers, in the middling years of legitimate careers, are able to buy houses and start families. They’re in a position to move through the economy as, at the least, the upper-middle class. Me? I’m stuck in a trade where even people with “senior” and “principal” in their titles have to interview daily for their own jobs. What a fucking joke.

I left Wall Street, and joined this career, because I bought into the Paul Graham Lie: that if you join a startup and it fails, it won’t hurt you, because you’ll be respected for being “entrepreneurial”. You won’t get your IPO today, and you’ll “have to settle” for a $500,000-per-year VP position at a FaceGoog, or an EIR role at a venture fund, but you can use your time out to recover your finances and energies until you’re ready to play again.

There is no truth in the Paul Graham Lie. There are too many failing startups, and the people coming out of them do not, for the most part, become VPs at FaceGoogs. They get regular crappy jobs.

I found no meritocracy in the technology industry. I had a slew of intensely negative experiences. I must be honest on this, though: I got exactly what I deserved.

Whether I’m a good man, that’s not for me to say; it is true (and perhaps a weakness) that I lack the stomach for evil. Yes, I am a person of merit. Compared to the people running the tech industry, I am seven-S-Tier merit. However, I entered a line of work that, in the final analysis, has dedicated itself to the advancement of the power held by employers over my fellow human beings. Failure is what I deserved. Misery is what I deserved.

My youthful self-deception about the true nature of corporate capitalism is no excuse. When one who desires to be a good man, nonetheless, works for the baddies… what else can be expected?

Chapter 4: Artificial Stupidity

The last thing I intend here is to tell a pity-me story. Until 2018, none of my experiences with injustice stepped outside the range that is typical. I’ve seen people smarter and better than I am get screwed far worse than I ever have.

Do not pity me, because I don’t pity myself. Learn from my experiences and make better choices than I did. The takeaway from all this should be that, if a person of eminent merit can have a terrible time in the tech industry, it can happen to anyone. Most people get screwed; few have the private privileges I have that enable them to talk about it.

There cannot be “meritocracy” in private-sector technology, because we serve a purpose without merit. We can opt for self-deception and tell ourselves that our work is advancing the state of knowledge about database indexing, but if our work’s real purpose is to allow the rich to “disrupt” the poor out of their incomes, then a negative multiplier applies to our efforts, and diligence only means we drive fast in the wrong direction.

John Steinbeck made a brilliant comment on American false consciousness–– that socialism never took hold here because of self-deceptive workers who see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires. Having worked in technology, I understand the private-sector software programmer’s mind pretty well. We see ourselves as temporarily embarrassed research scientists, philanthropists, public intellectuals, and scholars. We assume there is an exponential growth curve to our production and therefore it is immaterial what we’re doing now, because in 20 years, when we’re calling the shots, we’ll make moral choices.

Employers indulge our wounded egos with the promise that, if we programmers put our heads down and plow through some ugly work–– just up to this next “milestone”, guys!–– we’ll eventually be restored to glory. That’s the promise used to pull some of the best minds of my generation (and, to be honest, quite a few not-best minds) into socially detrimental work–– performance surveillance employers use to squeeze workers, propaganda machines for capitalists and authoritarians, and weapons of mass unemployment.

I’d like to talk about artificial intelligence. I’ve been studying it since the early 2000s, when the field was considered a land of misfit toys, a bucket of ideas that didn’t work–– when neural networks were considered a bad joke ill-told. I don’t consider myself an actual expert in this very deep field, but I’ll note that quite a number of the “data science” consultants earning $350 per hour come to me for advice. (I left a doctoral program after one year, so I don’t have the paperwork to get such jobs.) There has been, in the 2010s, a plethora of startups raising venture capital on the claim that they do “artificial intelligence research”. In the vast majority of cases, they’re not.

I’ve been in more than one of these fake-news AI startups. Usually, the AI approach doesn’t work–– at least, it doesn’t scale up to real-world problems on a timeframe investors or clients will accept. The founder starts with an idea that’s usually an expansion of ideas from a college thesis (sometimes his own) and pulls a family connection to get seed funding, then hires a few rent-a-nerds to implement his “brilliant” idea. When the AI approach fails–– genuine AI research is demanding, expensive, and intermittent–– the company “pivots” away from the original project and moves into business process automation. The startup becomes a portable back office–– it failed to automate an ugly task, but by squeezing extra hours out of H1-Bs, it manages to make the work cheaper.

This switcheroo isn’t a surprise to investors. In fact, they’re usually the first ones to step in and tell the spring chicken founders that it’s time to put away childish things. Once founders realize their job is to delegate, rather than do, the work, they don’t really object to the notion of pivoting to something more mundane.

It is not immoral, of course, for a business to change its strategy. The issue here is in the continuing deception. These companies claim to be doing “next-generation machine learning” when they’re actually running on cheap manual labor. Clients buy into something that appears to have more long-term upside than it actually does–– they take the early adoption risk of something that’s unlikely to merit it.

The biggest losers in the fake-news AI con, though, are employees. It’s hard to get smart people to work at no-name companies for below-market salaries on the low-status, boring line-of-business problems encountered by a startup serving as a portable back office. The trick is to tell these programmers that if they bear down and endure 6–12 months of drudgery, they’ll graduate into the research positions they were originally promised. In reality, what lives at the end of that 6–12 months of drudgery is a middle manager saying “we just need” 6–12 months more.

I’ve worked on Wall Street. I’ve worked for venture-funded companies. I’ll say without reservation that the ethics on Wall Street are far better. Often, VC-funded founders are people with MBAs who failed out of Wall Street (if you can believe this) for being too toxic and unethical.

Say what you will about finance. There are plenty of things to dislike about its culture. I’m no fan of the noisy environments, or of the constant wagering on everything, or of the occasional encounter with the openly-asshole politics of someone who read Ayn Rand at too young an age to get the joke, or of the sense–– though, I assure you, financial workers are treated better than tech workers–– that the job is still paperwork for rich people. To suggest that Wall Street is some workplace utopia would murder my credibility. It isn’t. I only mean to say that the ethical and intellectual quality of people in finance is higher, on average, than in the private-sector technology world.

Why, then, does Wall Street have a worse reputation than Silicon Valley? Finance, unlike Jira tickets, is for adults. Ethical failures on Wall Street make news. When a bank collapses or a market fails, people learn about it. In my experience, traders are no more or less honest than the general population. The major difference is that traders are smart enough, at least when it comes to careers, to play the long game. The narrow-minded taskmasters who run daily operations in technology, for a contrast, think in terms of two-week “sprints”.

The person who promises you the moon but, three weeks after you’ve moved across the country to join his operation, changes your job description and puts you on sprint work, that guy’s going to be a techie.

Chapter 5: Teabagged by an Agile Scrotum–– Or, Why Programming Is a Dead Career

The non-career of private-sector programming calls itself “software engineering” to give itself the aura of being a profession. It isn’t one.

A profession is a set of traditions and institutions setting forth (that is, professing) ethical obligations that supersede managerial authority and short-term expediency. That is only possible–– because professionals aren’t any better or worse than anyone else, and the need to survive will push anyone to extremes–– if those who work in the profession are protected from compromising positions.

For example, a doctor must obey the Hippocratic Oath, even if it requires him to defy superior orders. This is only tenable if the medical profession makes it so a doctor can survive losing his job–– he can get another one; he is still a doctor–– and that will only be the case if entry is limited, lifting all professionals above the daily (and ethically compromising) need to survive. The profession puts a floor on wages by limiting entry to the qualified, and it puts a floor on credibility by giving its workers institutional support.

If a profession collapses and any hungry loser can get in, the cheapest people drive out the skilled. Workers lose, clients lose, and society as a whole loses. The only winners are employers. They benefit from de-professionalization because a professional executive’s real trade is the buying and selling of others’ work, and a debased talent pool enables higher trading volume.

Software engineering has been thoroughly de-professionalized. Highly-compensated specialists have been driven out in favor of rent-a-coders who don’t understand computing or mathematics, but will accept two-week sprints and tolerate the daily “interview for your own job” meetings. I’ve referred to Agile as Beer Goggles Software Management–– the unemployable 2’s become marginally productive 4’s, but the 6+ see a drunken loser and want nothing to do with it–– but I’ve realized, over time, that the Agile Beer Goggles are here to stay. The software business has successfully refit itself to run on low-grade talent; this will not be reversed.

A boss’s incentive isn’t to hire the best people; it’s to stay in charge. Daily status meetings remind the plebeians that they’re not trusted professionals, and that they can’t invest in their own development “on the clock” but should think of themselves as day laborers who will be replaced–– there’s an army of hungry losers lined up outside the door–– as soon as their “story points” per week (or per “sprint”) drop below a certain threshold.

I tried to save this industry from this Agile madness, but I failed.

Chapter 6: This Story Peaks Early, Guys

I wrote a few posts in 2012–13 about the startup economy, although I was still figuring it out myself at the time. One concept in which I invested a lot of hope is open allocation–– the notion that workers are better judges of projects’ relative merits than the bosses of an authoritarian, command-driven company, and that, therefore, trusting them to vote with their feet makes excellence more likely. I didn’t invent the concept, but I named and evangelized it. I still believe that open allocation fundamentally works, but I have no hope in its eventual adoption. The genuine malevolence within global corporate capitalism has, since 2015, shown itself with such force that issues greater than the inefficient allocation of talent dominate my concern.

Still, I was thrilled to see my theories on open allocation get traction. Tom Preston-Werner quoted me at Oscon 2013 (go to 13:37). This blog, in 2013–15, began to get hundreds, then thousands, of unique views per day. On my best days, I broke 100,000; my Alexa ranking in the San Francisco metropolitan area was, for a long time, in the four digits.

There were stressful moments during this time. A mistake I made in 2011 got more publicity than it deserved, for reasons largely my fault. My left-leaning (and, increasingly, fully leftist) politics attracted death threats from various far-right elements–– a topic we’ll return to in Part 2. I’ve been doxxed so many times, and in so many different ways, that I assume I have no secrets–– but, then again, I have nothing to hide. Still, on the whole, the good outweighed the bad.

One place where I achieved prominence was Quora. Today, we know Quora as a buggy, name-walled Yahoo! Answers clone that generates privacy violations as reliably as summer humidity generates swamp ass. In 2015, however, Quora had (in spite of itself) an excellent community. It showed flashes of potential that, in the end, it would never really meet–– but, from 2013–15, there was a high quality of questions posted, and a high quality of people answering them.

I achieved the “Top Writer” distinction in 2013, 2014, and 2015. I was frequently consulted by the site’s moderators on policy and community management. I had about 8,500 followers. I don’t know what that number means now, but at the time it ranked me third or fourth among non-celebrities (depending on what we call a “celebrity”–– I should be forgiven for having fewer followers than Barack Obama) and first (breaking seven figures, some weeks) for answer views. A number of my responses, mini-essays in which I’d sometimes invest several hours, were published by partner sites such as Forbes, Time magazine, and the BBC’s online edition.

On June 15, 2015, I was an “It Programmer”, as much as one can exist. (There turns out to be a low ceiling on a non-founder’s status; by stepping above it, I got myself in trouble.) People all over the world reached out, sight unseen, and offered to fly me out to discuss positions at their companies. Often, I was called “the conscience of Silicon Valley,” even though I never lived there.

The next day (June 16) an event occurred that has nothing directly to do with me, and involves a man who probably does not know that I exist.

I lived in Chicago. Seven hundred and fifteen miles east-and-slightly-south, on a cloudy Manhattan morning, a deranged real estate baron descended an escalator, like Kefka in the last battle of Final Fantasy VI, and gave a circuitous, self-promoting, and racist speech in which he announced a presidential campaign that would ultimately be successful.

I’ll talk later on–– this story gets dark, my friends–– about fascism and whether I think Donald Trump constitutes or fits into a credible fascist threat to this country. Some people consider Trump a fascist; others view him as a mere opportunist. For now, observe that there were, at the least, coincidences in timing. Trump’s rise to power occurred as the far right, or “alt-right”–– a morass of tribalism, pseudo-academic racism, and might-makes-right attitudes toward topics ranging from international relations to corporate conduct–– evolved from an incel affectation into a full-fledged, mainstream political movement. The private attitudes of venture-funded tech founders were now finding public voice in a presidential candidate.

I did not expect Trump to become president. I remember well a conversation with some friends about him, in late 2015. Most people said he had no chance of becoming president. I gave him a 1-in-250 shot, but I would have given him a 4-in-5 shot, even then, of performing well enough in the primaries to speak at the convention in Cleveland.

It wasn’t hard to see what Trump was doing. His schlock about Mexican “rapists” was old-school miscegenation panic. The left blames societal failures caused by corporate capitalism on corporate capitalism; the right blames societal failures caused by corporate capitalism on women, minorities, and immigrants. Trump played the demagogue game disgustingly well. His victory, I did not expect, but I knew that Trumpism was going to be with us for a long time, even if he lost in November 2016. Having worked in the tech industry, I saw it coming.

Chapter 7: The Man Who Killed Paul Graham… Is Screwed

No, I didn’t murder Paul Graham. As far as I know, he’s very much alive. He’s only “dead” insofar as his relevancy (like, by my own choice, mine) has taken a precipitous dive.

I take credit in jest. Substantial evidence exists that his decision, in February 2014, to step down as president of Y Combinator, and thereby reduce his relevance in the tech industry, was driven in part by his dislike of the skepticism he faced among the public and media. Though I was a tipster and source for a Gawker story he disliked, I did not intend to “kill” Paul Graham. Most of this happened by accident. Still, I know, based on private conversations with people in shared circles, that my work contributed to his decision.

One of the worst things about fame, or even semi-fame, is the Carly Simon Problem. “You’re so vain,” she sang, “you probably think this song is about you.” In that case, there was a person intended as the target of the song, so he would be correct in believing the song to be about him. That’s not the issue here. The Carly Simon Problem exists because some people, as I’ve observed, think all songs are about them. People see themselves where they aren’t, they get butthurt, and then they fuck up your life.

When I publish Farisa’s Crossing, I am terrified that ex-girlfriends from the Bush Era will come out to say Farisa is based on them. Let me address that now: she isn’t. What Farisa represents, that’s a secret I’ll take to my grave.

I’ve been burned by the Carly Simon Problem more than once. I’ll give two examples here.

Number one: an ex-colleague managed a successful return to finance–– he got a job as the head quant at a hedge fund. I considered this guy a friend; we played board games together on multiple occasions, and I’ve been over to his house to have dinner with his wife and kids. For a position at a Chicago hedge fund, I used him as a reference.

Little did I know that he had read one of my blog posts and believed it to be about a place where we had worked. He found it “bad form” to write about our shared prior employer–– to be clear, I wasn’t writing about it. The post in question was about a 1990s corporate meltdown I had studied in my research on open allocation.

I got shanked. He gave me a bad reference, and I didn’t get the job.

I grew up in central Pennsylvania. Unlike these soft-faced preppies who dominate the upper echelons of the corporate world, I grew up understanding the notion of respect. You fight; or, you shut up and walk away. There is absolutely no shame in walking away from a fight. Almost always, walking away is what you want to do, because most serious fights don’t have any winners. One should, for similar reasons, avoid the conduct (such as throwing around bad references) that necessitates a physical fight. This being the case, I have zero patience for white-collar, lily-livered, passive-aggressive failmen who pretend to be a friend, but throw around bad references and sink people’s job prospects. Don’t like what I have to say? Confront me. I’ll stick to words as long as you do–– no one needs to know, either, what we argued about, or that we ever argued.

I can respect a wide variety of people, but I cannot respect a craven crud-ball who thinks that an acceptable response to an anodyne blog post is to give bad job references like a fucking dirtbag. If I ever get cancer, I will name it after this guy.

The Carly Simon Problem is one of the main reasons I nearly quit writing in 2016. I’m more than willing to go the distance in a fair fight, if that’s where we are. I cannot tolerate being stabbed in the back by cowards–– especially cowards who weren’t even in a conversation, but who took offense to it on the incorrect assumption of a song being about them. Sometimes, the song is about someone else. Sometimes, the song is about no one. Sometimes, the song is just a damn song.

The second major encounter I had with the Carly Simon Problem involves Paul Graham.

I know, I know: I promised Nazis, and here I am talking about Paul Graham. (I don’t think Paul Graham is a Nazi.) There’s some back story, some buildup. Unfortunately, this means I have to get into events that sound like petty drama, but that will in fact lead into something major and criminal.

Even now, I don’t harbor strong opinions about Paul Graham. I would be happy to mend fences with him, if he apologized for the conduct described below, almost all of which was committed not by him but by his subordinates at Y Combinator.

There is a lot to like and respect about the man. For a start, in his prime he wrote some excellent books on the programming language Lisp. He got more right than he got wrong. Unlike me, he won the birth-year lottery and walked away from Viaweb (Yahoo! Store) with a boatload of Boomer Bucks. He’s an above-average writer and, although I haven’t always agreed with what he’s had to say, his contributions to technology discussions have, at times, been insightful.

A business model that thrived in the 1990s technology boom was the so-called “startup incubator”, which made small investments in tiny companies and thereby made a diversified wager on the startup economy as a whole. Incubators always had a seedy reputation–– they promised mentorship and introduction to venture capitalists, while rarely providing more than office space and coffee–– but the business model isn’t prima facie evil.

After the 2001 tech crash, internet startups developed the reputation of being a goofy 1990s fad that would never return–– the “new economy”, conventional wisdom said, was a short con that had failed. Incubators, as well, went out of fashion and became a symbol of 1990s clownery.

Paul Graham, having become rich enough to retire in the 1990s, continued to evangelize the startup economy while the rest of the world’s faith in it sat at a nadir. He cheer-led the notion of a small technology company when no one else would. In 2005, he opened up an incubator called Y Combinator–– named after a computer science construct discovered by a distant relative of mine–– or “YC”.
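
An aside, since the name will come up again: the Y combinator is a fixed-point combinator from the lambda calculus–– a way to define recursion in a language with no built-in self-reference. Here is a two-line illustration of my own (strictly, the Z combinator, the variant that works in an eagerly evaluated language like Python); nothing here comes from YC itself.

```python
# The Z combinator: a fixed-point combinator usable in a strict language.
Z = lambda f: (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# Usage: factorial defined without ever naming itself.
fact = Z(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))
print(fact(5))  # 120
```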

I dislike Y Combinator. I think it has done more harm than good to the world, because it has exacerbated the ageism and clubby classism of the technology industry, and because it has inadvertently given credence to “new economy” ideas that actually haven’t worked very well. This being said, I don’t think Y Combinator is the typical, seedy incubator. I’ve researched Paul Graham and his operation, and everything convinces me that he makes good-faith efforts to truly back the companies he picks–– and quite a number have gone on to be successful. We can debate another time whether Y Combinator’s strong track record proves its merit or merely its founders’ social connections, but it became unique among incubators in developing a prestige that no other has.

I met Mr. Graham in person once (March 2007). No one had any reason then to know who I was, so I doubt he remembers me. He seemed like a nice guy; I liked him then, and until 2015 I still liked him, even though we disagreed on many things.

So why, in late 2013, did he suddenly dislike me? Again, it’s the Carly Simon Problem, because of course it is.

Chapter 8: There Are Chickenhawks Among Us

A logic puzzle goes like this. One hundred people live on an island; ninety have brown eyes and ten have blue eyes. No mirrors exist and no one talks about eye color, because there’s a rule that, while it is not illegal to have blue eyes, anyone who knows he has blue eyes must, at dawn the next day, leave the island forever.

They live in peace, until one day, an outsider (“oracle”) known never to lie comes to the island, calls an assembly of all hundred inhabitants, and says, “At least one of you has blue eyes.”

What happens? You would think: Nothing. No new information is introduced, so you would imagine that the oracle has no effect.

The answer is: 10 days later, all 10 blue-eyed people leave the island. The oracle introduces something they all know (since everyone sees either 9 or 10 blue-eyed people) into common knowledge, and that changes everything. To see why, start small. If there were exactly one blue-eyed islander, he would see no blue eyes, conclude his own must be blue, and leave on day 1. If there were two, each would wait a day, see that the other had not left, conclude his own eyes are blue, and leave on day 2. By induction, ten blue-eyed islanders all leave on day 10.

In this way, saying something that everyone knows (introducing no new knowledge) can have a social effect.
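
For the programmers in the audience, the inductive strategy can be simulated. A minimal sketch of my own, in Python–– it hard-codes the strategy the induction justifies (leave on day k+1 if you see k blue-eyed people and nobody has left yet) rather than modeling the epistemic logic itself:

```python
# Simulate the blue-eyed islanders puzzle under the inductive strategy:
# an islander who sees k blue-eyed people, and has watched k days pass
# with no departures, concludes his own eyes are blue and leaves.
def departure_day(n_blue: int, n_brown: int) -> int:
    """Return the day on which the blue-eyed islanders leave."""
    islanders = ["blue"] * n_blue + ["brown"] * n_brown
    day = 0
    while any(c == "blue" for c in islanders):
        day += 1
        # Each islander counts the blue-eyed people he can see.
        leavers = [
            i for i, _ in enumerate(islanders)
            if sum(1 for j, c in enumerate(islanders) if j != i and c == "blue") == day - 1
        ]
        islanders = [c for i, c in enumerate(islanders) if i not in leavers]
    return day

print(departure_day(10, 90))  # prints 10: all ten leave at dawn on day 10
```

Run it with any number of blue-eyed islanders, and the answer is always that number of days–– the brown-eyed, who each see one more blue-eyed person than the blue-eyed do, never hit the threshold.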

In December 2013, I wrote a blog post about chickenhawking. A chickenhawk is a business executive who expresses his midlife crisis not by purchasing a sports car or having an affair, but by investing in the career of a younger man–– usually, for reasons that will be discussed, a certain type of younger man–– and living vicariously through him.

A chickenhawk gives his young protege (or “chicken”) rapid career advancement and a high income, in exchange for exciting stories. There is a revenge drive in play; the “hawk” punishes women who rejected him 20 years ago by inflating the economic virility of a sociopath who will–– as I then put it, capable even in barely-trigenarian literary infancy of the occasional limit break–– “tear through party girls like a late-April tornado”.

A fictional example occurs in The Office. The show has to stay PG-rated and humorous, so there’s a lot left unsaid, but Michael Scott harbors a vaguely homoerotic (and non-reciprocated) obsession with subordinate Ryan Howard, one that leads him to assist the latter’s career (and eventually be surpassed). He takes an interest in his protege’s personal life; he lives out his midlife crisis through a younger man with the social skills, courage, and resources (due to the hawk’s support of the chicken’s career) to do things that, in the hawk’s twenties, he couldn’t pull off.

Silicon Valley is ageist and sexist. VCs “pattern match” to a certain type of person–– a young, unattached, usually heterosexual, male sociopath–– and one cannot understand the venture-funded software industry without an understanding of why. Sand Hill Road ought to be renamed Chickenhawk Alley.

Of course, this isn’t unique to technology. The corporate system’s raison d’être is to funnel sexual access to unattractive, rapacious men who have nothing to offer women outside of the social status induced by their control of resources. Without this motivation in play, the corporate system would have likely collapsed, leading to socialism, several decades ago. The rich do not hold on to the corporate system because they enjoy TPS reports; they do it because it gives them an advantage over other men (especially younger men) and thins out the competition. Chickenhawks tend to be too timid to abuse their control of resources in the way a more typical corporate executive would; they do it vicariously through someone else.

Paul Graham took offense to my December 2013 post about chickenhawking–– but what does chickenhawking have to do with Paul Graham? I don’t know. I still don’t know. I don’t think he is a chickenhawk. I do not accuse him of being one. I never have. That song was never, ever about him.

No evidence exists of Paul Graham being a chickenhawk. Nor is there evidence of him being pro-chickenhawk.

Except what follows.

Chapter 9: The Vultures Chickenhawks Attack!

I make this analysis in good faith. In discussing Paul Graham’s personality, I find common ground. What could be called faults are traits I share.

I’ve been told on good authority that, at least at one time, he spent 6 hours per day on Hacker News, a news aggregator and community created around Y Combinator. Obsessive? I am not one to talk here–– I have also suffered unhealthy addictions to internet communities that consumed similar quantities of my free time. It takes a sort of obsessive mind to excel at detail-oriented crafts like programming and writing.

Creative people have another flaw: we tend to take criticism and skepticism around our ideas personally. It would not surprise me to learn that others’ skepticism of him was a primary reason for (a) his actions in 2013–15, to be discussed, and (b) his decision to step down as president of Y Combinator in early 2014.

My writing got to him. As I said before, Paul Graham is an above-average writer who won the birth-year lottery and whose optimism about the startup economy played a major role in restoring public faith in it. Some time later, I showed up on the scene. I’m also an above-average writer, but I did not win the birth-year lottery and I did not make millions for showing up at a place. My experiences in 2008–15 (detailed above) led me to conclude that the “new economy” was an ersatz replica of the old one. As my skepticism grew, I did not hesitate to express it.

My comments frequently rose to the top on Hacker News. Whether this means I was right, or merely wrote well, I shan’t say. I’ll only observe that often I achieved top comment.

And then, because I had the nerve to say something everyone already knew–– that there are chickenhawks in Silicon Valley–– I suffered the dreaded Hacker News “rankban”.

What the fuck’s a Lommy rankban? In a less stupid world, you wouldn’t have to care about this sort of thing. In today’s world, though, where opaque algorithms determine the placement and implied social proof of user-created content, and in which these reputation measurements make the difference between “influence” and unemployable obscurity, this kind of thing matters.

As I said, Hacker News (or “HN”) is a news aggregator and discussion hub for private-sector programmers. Even to be in the running for serious programming jobs–– not low-end rent-a-coder sprint work where you’re competing with sweatshop workers–– you need a pre-existing reputation. Hacker News is where a lot of people go to build one.

Y Combinator, a startup incubator, owns it. The conflict of interest should be obvious. It is a news aggregator owned by a baby-league venture capitalist. It is a PR organ that papers the reputations of YC-backed companies. It punishes those who express skepticism of these startups, or of the (defective) ecosystem in which they exist.

Someone banned from Hacker News is not notified of his offense (and there is no appeal). In most cases, he does not even know he is banned. He’s “hellbanned”, which means that his comments and posts are visible to him but to no one else. This is contraindicated by the psychiatric community–– it’s a form of gaslighting. Less drastic is the “slowban”, in which the site deliberately performs poorly for the targeted user. You see this a lot in the venture-funded world–– in real estate and personal finance, a number of venture-funded companies use slowbanning to redline. Rankban, the most insidious, exists when a site’s opaque content-ranking algorithms systematically degrade one’s posts and comments–– even successful content is represented as unsuccessful, and suffers reduced readership.

An anonymous tipster, in January 2014, informed me that I had been put on slowban and rankban by Paul Graham. I did not believe it at first–– I thought better of the man, and failed to see why he would have a strong opinion of me–– but these were relatively easy to test. Slowban, I verified by comparing response times on HTTP requests when logged in versus logged out. Rankban was harder to prove–– this I tested by digging up old high-performing posts and verifying that (years later) they had fallen to the bottom, where they would go unread.
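
To make the slowban test concrete: a minimal sketch, assuming a generic HTTP client. The URL and session cookie below are placeholders of mine, not details from the actual test; the idea is simply to compare response-time distributions for the same page with and without credentials.

```python
# Compare response times for the same page, logged in vs. logged out.
# The URL and cookie are hypothetical placeholders.
import statistics
import requests

URL = "https://news.example.com/item?id=123"
COOKIES = {"user": "SESSION_TOKEN_GOES_HERE"}  # placeholder session cookie

def sample_latencies(n: int = 30, logged_in: bool = False) -> list:
    """Fetch the page n times; record seconds until response headers arrive."""
    return [
        requests.get(URL, cookies=COOKIES if logged_in else None).elapsed.total_seconds()
        for _ in range(n)
    ]

print("logged out, median:", statistics.median(sample_latencies(logged_in=False)))
print("logged in,  median:", statistics.median(sample_latencies(logged_in=True)))
# A consistently, substantially higher logged-in median points to an
# artificial per-account delay rather than ordinary network noise.
```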

I’ll confess that this is minor shit–– I only bring it up to prove that Paul Graham held an animus toward me as early as 2013 because of my anti-chickenhawk stance.

Rather than bog you down, dear reader, in more petty drama, let’s catch up to 2015 and the rise of Trump–– of note is that his increasing success (long before he won the presidency) validated a certain might-makes-right attitude toward publicity and business; long before November 2016, corporate executives were taking note.

In August 2015, I suggested, based on things Travis Kalanick said about his own motivations for starting Uber, that the company likely had a toxic culture. (Two years later….) This got me banned–– actually banned–– from Hacker News.

Banned from Hacker News! By this, I was truly, deeply… sorry, it is still too much….

Nah. It didn’t bother me. I was 32 at the time; I had outgrown the Hacker News community and the mentality it serves. Being banned from that place was no big deal–– a liberation of time, to be honest about it. The only issue was that Dan Gackle misrepresented the reason for banning me, taking an entirely different comment out of context in a way that any court in the U.S. would classify as defamatory.

Perhaps a week later, Paul Buchheit, a man who jokes about gun violence as a means of handling business negotiations, attacked me on Quora.

Worth noting is that Y Combinator bought a piece of Quora in May 2014 at a fire-sale price. It seemed an odd deal at the time, and still does, but I think both parties saw themselves as getting the better end of that one. Quora got to claim it was “YC” at the peak of the incubator’s prestige. Y Combinator, at the same time, gained the ability to “moderate” Quora’s community and content so as to favor YC-backed companies.

After this nonsense–– the “rankban”; Dan Gackle libeling me on Hacker News after banning me; the bizarre personal attacks from Paul Buchheit; and various other factors I shan’t get into–– I could tell there was a pattern. If nothing else, Paul Graham was doing a poor job of controlling his puppies.

I challenged Paul Graham to (wait for it) a rap duel. I’m not a stellar rapper; I did some freestyle in college and I’m half-decent for a white guy, nothing to write home about, but I felt confident that I could beat Paul Graham. I was, on one hand, extending an olive branch. Not having anything against Paul Graham himself–– he was negligent in failing to call off his puppies, but that could be fixed–– I felt that a public rap battle would be an opportunity to show that, despite our differences, we could respect each other well enough to put on a mutually beneficial (and entertaining) show. At the same time, I needed to make it clear that, if Paul Graham couldn’t control his puppies and the embarrassment they were causing, I would continue to demonstrate this incapacity.

On September 4, in retaliation for the rap-duel challenge, YC-backed Quora banned me–– again, in a common pattern, on false pretenses. My account, which had more than 8,500 followers, had been turned into a defamation page with a bright red text block saying, “This user has been banned.”

Mucho internet drama. I won’t blame you if your eyes glazed over. You’d think such things wouldn’t matter in the real world. You’d think. Don’t worry–– the stakes are about to go up, and the Nazis aren’t far behind.

Chapter 10: When Nonsense Decides To Matter

I interviewed for a job in January 2016 where it came up–– not as a stupid thing to laugh about, but as a serious concern–– that I’d been banned from Hacker News. A Chicago-based hedge fund decided not to hire me for a quant role because–– as I have it from a headhunter who was decent enough to give me the real reason–– an MD observed that, “apparently this Paul Graham fellow doesn’t like him.”

This is an objective moral fact: internet drama like that should never affect someone’s ability to earn an income.

Unfortunately, the world has a surfeit of immature, deficient men who, on the basis of something as minuscule as a website ban, will close doors–– even, if not especially, doors that are not theirs to close.

I have seen all sides of this Black Mirror–level idiocy. I’ve been a manager. I’ve been involved in hiring decisions. I’ve made calls; I’ve defended people; I’ve also failed at defending people.

More than once, I’ve seen irrelevant internet activity–– as minor as rumors on sites like the blessedly-defunct Juicy Campus–– come up as cause to deny candidates jobs, to reduce their offers on the assumption of lesser leverage, or to fire otherwise excellent employees.

Also, though I never cared about job candidates’ politics, such things are not difficult for employers to discern. It’s something they care about for “cultural fit” reasons, but not in the ways one might expect. I’ve never seen anyone hosed for being a Republican or Democrat, or for supporting a mainstream presidential candidate–– it’s possible that it happens; I just haven’t seen it–– but I have frequently seen people denied opportunities for “being political”, and it is almost always the left that is penalized.

Overt racism will get someone dinged, true, but if the candidate’s a white guy who retweets Breitbart articles, an executive will always step in and say, “We don’t know that he supports those views.” On the other hand, someone who’s anti-racist–– say she’s active in Black Lives Matter–– will get similarly dinged, not for her politics per se, but for the fear of hiring a “troublemaker”. Once I overheard a conversation in which an executive described a colleague as “terminal” (not promotable into management) because “you can never trust a male feminist”.

Corporates don’t show their far-right colors often, but anti-leftism is the payload of their aversion to “the political”. They’ll fire a racist because it’s good for publicity, but their real fear is of the left–– of truth and justice.

Chapter 11: Morality

Does God exist?

That’s the easiest question there is. Yes. God–– the God of the Torah, the Bible, the Quran–– exists. Zeus also exists. Osiris exists. Iago, in Shakespeare’s Othello, exists. Farisa will exist, once I finish the damn story. They exist as much as the number 2 or the color “magenta”. They may exist only in our minds, but they exist as concepts.

The harder question is: are there supernatural humanoids who interfere with the observed laws of physics? On that one, I’ve seen absolutely no evidence, so I’m going to profess non-belief. More interesting is: is there an afterlife? I’m on the happier side of 50–50, on that one. My reasons would require another essay, but I find accessible grounds to believe there is one. I might be wrong; if I am, I won’t have to bear the disappointment, since I won’t exist.

Does absolute morality exist? I think so. Most ethical mandates are situational and relative, but their underlying reasons for existence seem less flexible. I am unable to articulate precisely the moral principles of existence, but I believe they exist.

I’m not a nihilist, and I go further. I don’t believe nihilists exist. At least, I don’t think a person can stay nihilistic for very long. Meaning vacuums get filled.

Let’s say someone who considers himself a nihilist, but who is a good person, is offered $5,000 to torture a kitten. He’ll refuse, because even a self-styled nihilist finds some actions acceptable and others repulsive. Meaning is a weird term. Perhaps “purpose” or “value” is better. I would not torture the kitten, not because I expect the kitten to “mean” anything, but because I value the creature’s existence and welfare.

Nihilism is dangerous because it’s unstable. The meaning void will fill itself with something, but not always something good. Ultra-nihilistic villains like the Joker (Batman franchise) or Kefka (Final Fantasy VI) fill it with hatred and blood lust. Fascism, an outgrowth of might-makes-right nihilism, sells itself to the masses by presenting itself as aggressively anti-nihilistic–– thereby disavowing the decadence of which it is a culmination.

A person doesn’t stay nihilistic for long; but systems can be nihilistic. Corporate capitalism is a belligerent nihilism machine. It does not hate its victims; it simply does not value their subjective experience. A tree will be cut down unless it can pay not to be cut down.

Chapter 12: The Two-Stroke Nihilism Engine

Global corporate capitalism was not designed, technically speaking, but I cannot think of a better way to design an economic system to destroy things humans value–– a self-replicating monument to nihilism, a belligerent anti-meaning device.

The first thing to understand about global corporate capitalism is that it’s totalitarian. If the people in one nation are unfree, others must compete on wages and working conditions and will be unfree. It’s important to discuss economic totalitarianism, because while leftism has had a bad run for the past 35 years, almost all of the negativity directed at “communism” is more accurately blamed on left-wing economic totalitarianism (old-style tankie socialism). Right-wing economic totalitarianism is no better.

We’ve been pushed, over previous decades, to accept corporate rule on account of disingenuous claims that “communism killed 100 million people”. Did it? Not really. Mao Zedong’s incompetence killed some, Stalinist repression killed some, and anticommunist reaction (including fascism and World War II) killed a lot of people–– deaths that have been blamed on “communism”, even though none of those societies were communist.

A difference at issue is that capitalism has no memory and takes no responsibility; socialism, to the ill-health of its image, has far too much memory and responsibility. Americans who were unable to secure health insurance, and Pakistanis who were “freedomed” by drones, are not considered to be killed “by capitalism”. There’s a whole lot of dishonest accounting that goes on; the truth is that capitalism’s record is just as bad, if not worse.

In either case, the true enemy isn’t an economic system’s baseline principles, but totalitarian application. Global corporate capitalism is totalitarian because the employer is not happy to make a modest profit. It must make the highest profit, at any moral cost. It must have the worker’s indivisible loyalty. It takes everything it can get.

Global corporate capitalism wants all things humans value to be “converted into dollars”. Who gets to live by the lake? The highest bidder. A “view” created by God or by Nature becomes just another form of money. Who gets the bulk of people’s time and attention? The people and organizations (often, authoritarian organizations) that specialize in the buying and selling of others–– employers. People’s friends and families get the leftovers.

Cultural influence, educational experiences, and personal relationships become nothing but “capital” in new forms. Everything gets converted into money, and if it resists such conversion, it’s marginalized to the point of nonexistence. Rebellions get bought. Sexual and cultural expressions of marginalized people are exoticized and appropriated by the rich. Social media, for a concrete example, has become a mechanism through which corporate marketing departments can buy the perception of grassroots authenticity.

Corporate capitalism’s first move is to convert all things humans value–– sexuality, social connectedness, leisure, culture, opportunity–– into an abstract quantity called money, measured in units called dollars or euros or yen. That’s the nihilism engine’s first stroke.

The second stroke: find the place where the dollars (euros, yen) will have the least utility, and put as many of them there as possible. The rich get richer; the poor get poorer. The well-resourced have full-time staff to manufacture their reputations and appearances, so they present themselves as cosmopolitan ubermenschen (when they are, in fact, as provincial as the yokels they despise) while the poor become socially and culturally isolated.

If all things humans value are “converted into dollars”, all things humans value will go to those who have the dollars.

What is a dollar’s value? Of course, it’s not a constant. One dollar represents 8 minutes of a minimum wage worker’s time, but only half a second of a CEO’s time. If a dollar’s parked in the garage of someone who already has a billion, it’s being put where it isn’t needed. Its value is being minimized.

This shows that corporate capitalism seeks to turn all things humans value into a tradable form (money) and then to put every dollar of the money into the coffers of a person or corporation who does not need it. Since they have an excess of it, they use it to buy not things they need, but a future excess of money. This is a belligerent, nearly unstoppable utility minimizer–– an ever-advancing nothingness and pointlessness.

In 2011, Marc Andreessen said that “software is eating the world”. Having worked in the software industry during that time, I can refine this observation: corporate capitalism continues to be what’s eating the world. Software is merely what it shits out.

Technological growth of a kind that would benefit everyone has disappeared. We don’t have flying cars or robot maids. We have time-tracking software. We have Jira. The major innovations of our time have been surveillance technologies (weapons) for the benefit of health insurers, despotic governments, and authoritarian employers. That’s who’s buying this stuff.

Employers used to fear their workers, at least a little, but these days they share information (contrary to law) about suspected unionists. Workers in the trades–– in the “blue-collar” jobs displaced office workers are told to consider–– often suffer belligerent performance tracking enabled by devices running code written by people like me. Retail workers often have less than 24 hours notice of when they will work, because their shifts are determined algorithmically. The working world has gotten worse, has gotten more fascistic, and it’s our fault as private-sector programmers.

I mentioned the “Agile” garbage that makes a typical programmer’s life hell. It’s not only that we implement the weapon designs of psychopaths who profit by immiserating workers. We are also the first subjects of many such experiments, the first to taste the poisons (and stupid/earnest enough to refine them) before they are rolled out into the broader economy. “Scrum” is the same malevolent performance management inflicted on truck drivers and factory workers; it merely takes that name when applied to low-status programmers. Nowhere is it written in the Cannibal Bible that a cannibal cannot be consumed by other cannibals.

End of Part 1–– What’s to Come in Part 2

So far, we’ve covered the technology industry during 2008–15 and my experiences within it. We know of the emergence of might-makes-right politics (Trump) and we can see that it is a natural extension of global corporate capitalism.

In the first half of this exploration, I told a story with political, moral, and personal threads, all of which have diverged. In the second, we’ll arrive at the convergence. We’ll discuss the acceleration of capitalistic disease under Trump. We’ll cover purposes of the technology industry (and the Silicon Valley business model) of which most people are unaware. We’ll deepen our understanding of fascism–– what it is, why it emerges, and my own experiences in the fight against it. At the end, I’ll explain why I believe the probability of violent conflict with the fascist elements that exist within our society right now is high.

There is much that has happened in the past five years that must be revealed. I will establish (with verifying details) something heinous about an organization of middling profile but high importance. In so doing, I may put my life in danger, but public service demands it. Names will be named; events will be explained.

Farisa’s Crossing Final Round Beta Reading

On January 3, 2020, I’ll be opening up a round of beta reading for Farisa’s Crossing. Since I intend this to be the last round I do– I would like to begin serialization in April 2020, with the entire book published by February 2021– there’ll be more slots than in previous rounds.

What does a beta reader do? You’re not asked to copyedit the manuscript; copyeditors do a much more intensive read and that’s a paid service. As a beta reader, your job is to read the manuscript as you would any other book, and give feedback on what works for you and what doesn’t. You’re not expected to do more than that. It’s a time commitment of about an hour per week over 3–4 months.

Tentatively, I’m looking for 10–12 readers, preferably a diverse set with regard to age, gender, sexual orientation, and disability (since there are major LGBT characters, as well as characters with disabilities).

More to follow. For now, if you’re interested in being a beta reader for an epic fantasy novel, my email address is michael.o.church at Google’s email service.

“Eat The Babies”

Congresswoman Alexandria Ocasio-Cortez held a town hall on October 3 at which an ostensibly mentally ill woman demanded, as a solution to climate change, “we got to start eating babies”. AOC, who has always handled adversity and difficulty with poise, addressed legitimate concerns around climate change and, while visibly disturbed by the spectacle, did not resort to ridicule or immediate denouncement, as lesser people would be wont to do. I think she handled the situation as well as anyone could have. Bravo, Ocasio.

I usually wake up between 3:00 and 5:00. At 3:27 this morning, I found #EatTheBabies trending on Twitter, and I had the distinct displeasure of reading right-wing tweets about how “leftist” climate change “hysteria” is triggering dangerous mental-health crises. According to the right, Ocasio-Cortez’s choice not to immediately denounce infantivory represents leftist endorsement of the notion. That is, of course, absurd.

Take note of this. The right wing, which has already labelled this woman a “climate activist” rather than a person likely in need of psychiatric care, is testing out a false equivalency. Americans are fed up with the stochastic terrorism and bad-faith argumentation used by our society’s reactionary, authoritarian, and upper-class elements to stoke the nation’s right-wing, racist, paranoid counterrevolution (designed to be “populist” in a way that is no threat to the upper class, because it cannot help but punch down). The conservative and pro-corporate elements of our society want nothing more than to associate AOC’s moderate socialism with baby-eating, as if Cormac McCarthy’s The Road were a leftist how-to manual.

We live in a strange time. I’ve seen enough strange stuff to have a sense of the nation’s politics, and I’ll try to answer the question, What the hell is going on? There are three possibilities.

The first possibility for what happened on October 3 is that a woman in poor mental health had an outburst. Alexandria Ocasio-Cortez, for security reasons if nothing else, had to operate on this assumption and respond to the woman with compassion while not confirming her assertions. Here’s the thing to remember: AOC is not your typical bourgeois liberal who preaches compassion but avoids conflict with the afflicted or poor. She’s seen real shit. She’s worked with the public– as a waitress in Manhattan. She knows that ridicule is not how to handle a volatile situation.

The second possibility, because of the time we live in, and because of the exceedingly low character of the corporate upper class and today’s political right, is that this spectacle was created to humiliate the left. Several right-wingers are claiming that climate change activism is triggering “hysteria”. Others are arguing (in bad faith) that because Ocasio-Cortez did not immediately denounce infantivory and the bombing of Russia, leftists and liberals are secretly cozy with these terrifying ideas. It is possible (although I don’t consider it likely) that this woman was an actor commissioned to damage the image of the left, of the environmental movement, and of Ocasio-Cortez’s proposed Green New Deal.

The last thing I want to do is accuse an ostensibly mentally ill person of acting in bad faith. As I said, I don’t consider it likely that she’s a “troll”, but the possibility requires discussion insofar as the right has already used the event as an opportunity to argue in bad faith. Yes, people are actually saying that intelligent moderate socialists are on the fence about cannibalism.

Again, this is all an effort to create a false equivalency between leftist non-denial of climate change and the deranged right-wing conspiracy theories (like “white genocide”) that lead to violence.

So, while I don’t think it’s likely, I consider it possible that certain upper-class, conservative, or pro-employer authoritarian elements in our society have arranged this spectacle.

A third possibility, a more likely variant on the second, is: it’s a mix of both. It’s possible that this woman was a mentally ill person (not a bad-faith “troll”) but also that she was directed to Ocasio-Cortez’s town hall for the purpose of having a mental health crisis in public, harming the left. Does this seem far-fetched? Perhaps, but it’s the Silicon Valley playbook. It’s only a matter of time before the entire alt-right begins to use it.

For several years, it has been a common practice in the venture-funded technology industry (“Silicon Valley”) to recruit the homeless to harass guests at, for example, a rival’s launch party. I don’t think this is done to disrupt business operations, because I don’t think it’s very effective; it’s more of a mechanism to threaten and annoy one’s adversaries. I bring it up not only because it’s underreported, but also because it’s in extremely bad taste.

I know about this tactic from personal experience. Some of my readers know that, since 2011, I’ve had to deal with fascist attacks on my career and reputation. I’m at least half a million dollars poorer (and that’s a conservative estimate) than I would be, had I not been fending off assaults from literal fascists my entire career. I’ve had written job offers rescinded, more than once, when a fascist sympathizer (and, likely, actual fascist) discovered online that I have said positive things about antifa (which literally means no more and no less than antifascism). We live in a time when to be rational and humane (both of which stand in opposition to fascism) is to be political; but we also live in a time when to “be political” upsets proudly and self-assertedly “apolitical” entities like corporate employers.

I’ll share just one example. A few years ago, I interviewed for a machine learning engineer position at a reputable Chicago trading firm. The team wanted to make an offer; an executive blocked it. Why? He discovered my mild criticism of an unethical (and, likely, illegal) corporate practice unrelated to trading or finance. He formed the opinion that if a publicly “political” person were discovered to work at the firm (even in a non-executive role) it would be a publicity risk. Here’s the kicker, though: this fascist assault occurred in June 2013, long before fascism was part of the national conversation. In 2013, a right-wing takeover of this nation (already well in progress, but through private employers rather than government) seemed unthinkable.

Authoritarians do not have an ideology in the classical sense. It is not about “free markets” for them; it is not about tradition or philosophical conservatism. They are the win-at-all-costs players. They will gladly weaponize our society’s most vulnerable people.

I have stood in vocal opposition to Silicon Valley’s coziness with fascism– before 2016, the employer-nucleated fascism-lite our society tolerated because it was not overtly racist, misogynist, or warlike; after 2016, the expressed literal fascism that is killing people– and, as a result of this, I’ve endured considerable harassment. I was banned from Hacker News and Y Combinator–owned Quora in the summer of 2015 on false pretenses that, if the claims were true, would cause embarrassment. I’ve been harassed on the streets by deranged, ostensibly homeless, people. In many cases, it’s been clear that they “knew” (or, more likely, had recently been told) my name and affiliations. I’ve been ordered by people I’ve never met to stop writing about certain topics. There was a period of time when I could not go to San Francisco– I feared for my life.

I bring this up because it is not a new tactic for the right wing to weaponize our society’s most vulnerable and unfortunate people. Is that what happened at Alexandria Ocasio-Cortez’s town hall, which the right is disingenuously using to equate moderate socialism with infantivory? I don’t know. We’ll probably never know. But it happens, and it will happen more in the future as our society’s corporate, fascist, and authoritarian interests gear up for the fight of their lives– which happens to be, as it were, the fight of our lives as well.

A Killing in Menlo Park: Qin Chen’s Death and the Need for Justice

Qin Chen, a 38-year-old software engineer working at Facebook, jumped to his death from a high building on September 19, 2019. One might say that, in proximate terms, the death was a suicide. I’m averse to that word in this case– it seems highly likely, given the evidence, that it is far more appropriate to say: he was killed.

It’s important to get this right. The word “suicide”, in our culture, implies personal failure and individual fault. It’s not appropriate, therefore, to say that someone who leapt from the World Trade Center on 9/11, preferring impact over immolation, “committed suicide”, even if the mechanism of death was chosen by the deceased. A fox that chews off its own leg to escape a trap is not engaging in self-harm. Likewise, I don’t think it’s appropriate to present Aaron Swartz’s death as a suicide, without mentioning the malicious prosecution that led to his demise. As Quinn Norton said, “the old world killed him.” We tend to be quick to focus on the mechanism of death– far too quick to call an event an aberration and blame it on “mental health”– out of a misguided desire not to hang blame on the living.

If Patrick Shyu’s account is accurate, Qin Chen was killed, and his survivors have a right to justice. He was killed by his manager’s petty retaliation over his desire to do something else at Facebook. He was killed by HR officials who refused to override the libel of a rogue manager, who refused to let an employee acting in good faith restore his reputation. He was killed by his employer’s indifference, which allowed the institution of a cruel system under which he could not transfer to a team more appropriate to his skills and interests.

The account linked gives a credible narrative. First, it alleges that Qin’s manager enticed him to stay on a project he disliked, in exchange for a guaranteed positive performance review. Having worked in large technology companies, I can attest that side deals pertaining to “perf” scores are remarkably common. (It is also not uncommon for people in the HR office to accept side payments in order to fix an internal record.) That number means everything– it is one’s total human worth as assessed by the organization. Likewise, managers often break these arrangements, and they do so without consequence.

Though I don’t know whether Qin Chen’s story is true, I’ve encountered so many people with stories just like it that I see no reason to disbelieve it. Here’s the thing: managers running less desirable teams have chips on their shoulders and are quick to punish “disloyal” employees who deign to seek transfer. It’s disgustingly common for a naive software engineer, assuming good faith on the part of his manager and company, to make known his interest in internal mobility– and be shocked when he is slagged with a negative “performance” review, often without explanation.

I’m getting old. I’m 36, which is 0x7F in software years. I’ve seen people repeat the same stupid mistakes over and over. Investigations into Enron’s culture of mendacity held responsible a style of high-stakes performance review, one notorious for creating a culture of suspicion, politicking, dishonesty, and widespread cheating of all forms on all levels. Funny thing is, Enron’s widely-hated performance review system (“stack ranking”) wouldn’t be out of the ordinary in a technology company. The buzzwords change every five years. The behaviors don’t, and as fascism– both the public nation-level variety, and its more contained private corporate form– becomes normalized, we should only expect this to get worse. We have to fight back. We have to crush fascism, and we have to bring unlawful killers to justice.

Facebook’s HR, by the account that has emerged to this point, did not repair the damage done to Qin Chen’s internal reputation by a malicious manager. They did not restore his right of internal mobility. As a result of their criminal negligence, Qin’s professional situation and reputation deteriorated to the point where he saw death as the only option.

This should not be pinned on a “difficult” or “tough” or (gag) “high-performance” company culture. Qin Chen was killed. His manager literally killed him. The HR “business partners” who did not intervene on his behalf are, at best, accessories and, at worst, killers themselves. Although intentional murder seems unlikely, these malefactors put a man in lethal danger– and he died.

Sunlight, they say, is the best disinfectant. The public has a right to know who Qin Chen’s managers were, and which HR officers were involved in his case.

I am available as michael.o.church on Google’s email service. If I am furnished reliable information pertaining to the identities of Qin Chen’s managers, as well as HR officers involved in his case, I will publish it here. Furthermore, it is the public’s right to seek justice not only for this death but in future cases like it. Therefore, any personal or contact information about guilty individuals, I will also publish– after review and verification.

I do this without condemning or condoning specific approaches to public justice. Whatever justice the affected portion of the public chooses to seek, it is not my call to make.

Projected Release Dates for Farisa’s Crossing

I’m serializing Farisa’s Crossing. Conceptually, the story divides neatly into five segments: The Forest, The City, The Road, The Dead, and The Lovers.

The first segment will be available on Amazon for the lowest possible price (99c) and for free elsewhere (stay tuned) in April 2020. As of now, my intention is to make each segment (at the time of release) free; a short reading comprehension quiz (designed to be easy for anyone who has read the previous segment) will be used to make availability of each segment conditional on having completed the last.

When the last segment is made available, the intermediate segments will no longer be available for free, but the complete book will be accessible at a reasonable price (between $5 and $8).

The planned release dates, in which I’d place a high degree of confidence (85+ percent), are:

  • April 26, 2020: “The Forest”
  • July 3, 2020: “The City”
  • September 4, 2020: “The Road”
  • November 15, 2020: “The Dead”
  • January 17, 2021: “The Lovers”

I intend to release the complete book on January 17, 2021, unless further editing is (for some unforeseen reason) necessary. I’ll be encouraging early readers to discuss the book and form theories on the subreddit r/antipodes.

How I Would Fix (and End) Game of Thrones [Spoiler Warning]

My first novel, Farisa’s Crossing, is on track for serialization beginning April 26, 2020, with the full book available by January 17, 2021.

(Minor editing for clarity was made on the morning of May 19.)

Pre(r)amble

I hope I’m wrong about this, and a brilliant last episode could change everything. That said, it appears so far that the final season of Game of Thrones has failed on account of hasty writing. The Night King plot seems to have been under-explained in service to a Long Night–inspired prequel; the ending of Season 8 feels more like a homework exercise, designed to hit plot points without much attention to craft, than it does a story.

Let me remark that it’s entirely possible that what appear to be lapses in story craft are, in fact, artistic debt. With every word of a reader’s time the writer uses, the author incurs such debt; only when a story is complete can a final judgment be made about whether the debt is paid off. It is possible that the last episode, to air on May 19, fixes the apparent problems. As of now, though, I see so many issues that it’s hard for me to picture a resolution of all of what, so far, appear to be frank artistic errors.

What are those flaws?

To start, stupidity has been used far too often as a plot device. Rhaegal is Daenerys’s son (in spirit) and a prime military asset. Thirty seconds of thought, by any of her allies, would have prevented him from dying in such a facile way. War is hard to write, sure, and no one’s asking for (or, in fantasy, wants) perfect realism, but stupidity’s utility as a plot device has been overdrawn. I suspect I know why this error was made– it, and other apparent failures of craft, derive from changes made by HBO to Euron Greyjoy– and I will analyze that further, below.

Suspension of disbelief collapsed utterly, for me, in Episode 5 with the burning of King’s Landing. Don’t get me wrong. Daenerys was always a severely flawed heroine. I could have seen her turning into an antagonist. A war criminal, though? It seemed like a plot contrivance. To use Missandei’s death as cause for her turn toward atrocity is unbelievable. Although she is dogmatic, arrogant, and temperamental, Daenerys has always been principled, and it violates one of her core principles to take that path– it’s not something she’d do lightly. Moreover, Daenerys has seen (and brought) so much death and war that it is unreasonable to suggest that the death of another confidante would lead her to murder tens of thousands.

I’m going to try to give the final season a developmental edit. I will stick to the mostly-tragic sort of ending that I believe both George R. R. Martin and the HBO showrunners intended and that is almost certainly correct for the series. I will, however, turn away from the specific, obnoxious, poorly executed ending we got.

What I’m Not Writing About

This analysis is not a criticism of the unfinished book series, A Song of Ice and Fire. The book series is different altogether, and I consider myself unqualified to criticize it beyond the impression of an educated reader. Why so? It’s not what I would write. I intend to keep my career in technology, so The Antipodes could possibly be the only novel-length fiction I publish. I have no interest in writing grimdark fantasy. As grimdark isn’t what I write, and I don’t much enjoy reading it, my opinion of Martin’s work doesn’t mean very much. It would be a pointless exercise in ego to give a “how I would write it” synopsis of Martin’s work, in which I risk comparing it to some imagined but entirely different piece of work. Instead, shouldn’t I go and write my own book?

I feel more qualified to assess the HBO series because the writers clearly did want to garner increased viewership with the promise of traditional heroism. The arc of platonic affection between Brienne and Jaime– which culminated in his knighting of her in Episode 2, and was then trashed to make the woman his one-night stand– does not keep the spirit of grimdark. As the HBO series led us on with promises more in line with traditional fantasy (in which we’ll tolerate tragic endings, but demand traditional value-positive heroism) I feel more equipped to criticize it.

There are a few other reasons why I want to be careful in what I say about George R. R. Martin.

First, I feel that he is often unduly criticized, especially on a personal level, for what he chooses to write. I don’t believe that he’s a misogynist and he’s certainly not a feudalist. I do not assume he is a nihilist or narcissist, simply because he writes about nihilistic, narcissistic people. It is too often assumed that the moral flaws of protagonists reflect on their writers, and although this has been proven true too often for comfort in literary fiction (not to mention Hollywood; see: Woody Allen)– I do not like the stereotype. It limits what people can write. One especially loathsome portrayal of George R. R. Martin– I was physically angry as I watched this– was in the otherwise-enjoyable TV series Younger, focused on the publishing industry. (“Beware the wrath of the sky.”) The character of “Edward L. L. Moore” was sloppily-written and offensive to the fantasy genre as a whole; it was irresponsible. The fantasy genre is no more juvenile, prima facie, than another “literary” novel about a 57-year-old male professor of literature sleeping with undergrads. The only thing we know about George R. R. Martin from his work is that he writes dark fantasy (“grimdark”) and that he writes it well. Speculation about him as a person should stop.

Second: although Martin’s vision of fantasy is different from mine, the author of Ice and Fire has done a great deal for the genre. He is far from the first to write fantasy to an adult standard and he won’t be the last (what up, yo) but he has shown to a massive audience that it can be done. He and I are certainly not competitors in any way. On the contrary, a successful, competent author brings others up.

Third and relatedly, I am not an envious author– but I don’t want that look. As a Bayesian, I judge it far more likely than not that his series will outsell mine, for decades to come. (I would only have a chance of outselling him if The Antipodes were also adapted for the screen. The spring of 2019 has left me under-attracted to that notion.)

Fourth, I respect Martin’s vision. He has created a compelling world that captures late-medieval ideology (Westeros), Renaissance-era politics (Free Cities and the smarter Westerosi), and Lovecraftian horror-fantasy (Essos; the Land of Always Winter). The originality and precision of his worldbuilding are admirable.

Fifth, as I mentioned before: since grimdark isn’t what I choose to write, I can’t possibly give suggestions for the book series without the risk of turning it into something else– which, again, would be an expense of time better used on my own work.

Sixth: in a way, I’m indebted to George R. R. Martin. His work takes what many of us feel to be an extreme position on a spectrum between (a) Tolkien’s brand of fantastic heroism and (b) the moral relativism favored by his “grimdark” as well as by modern literary fiction. (He has, at least in the popular perception, created this spectrum within fantasy.) By doing so, he has opened a dialogue on what the fantasy genre should be. Farisa’s Crossing lives, I would like to think, in the middle of this continuum. My work is quite dark– it takes place in a steampunk world where the Pinkertons won, now control the entire economy, and are fast becoming Nazis– but I strive to give hope that a moral north star still exists and can be seen on the one night out of four that isn’t cloudy. Farisa is not perfect, but she is genuinely good.

George R. R. Martin is not the first to write complex, adult fantasy, but he has shown the world that it is possible, and for that I feel I owe him a great deal.

I don’t see Martin making basic mistakes of craft. The thing I dislike most about his book series, to be honest about it, is his tendency to end on cliffhangers. His ensemble approach, with rotating points of view, worked beautifully in the first two books– it gave us different and often opposing perspectives on shared experiences– but as the plotlines separated (geographically and thematically) I found it to be borderline untenable. I don’t think he wrote with manipulative intent; by the modern standard, it is preferred to end chapters on cliffhangers and push of-the-episode denouement (“sequel”) into the next one. This gives a can’t-put-it-down feel to books with one or two plotlines but it fails when 250 pages exist before a plotline is resumed. But this is a subtle mistake (if it can be called that; arguably, it is not even that but a difference in style) and de minimis compared to the Writing 101 mistakes of the HBO series.

What Went Wrong With Season 8?

I want to make it clear. I believe the staff writers who worked on Game of Thrones, including the ill-fated Season 8, are competent. They are not dumb, lazy, or inexperienced people. On the contrary, I believe they’re excellent at their jobs, relative to constraints. They worked on an ambitious existing series, without source material, under incredible deadlines. Although 20 months elapsed between the end of Season 7 and the beginning of this one, I doubt the writers had more than two or three months of “blue sky” writing time, given the immense complexity of the project.

Also, let’s be real here: great writing takes time. Commercial novels are measured in pages per hour; literary novels in hours per page. Seven to ten rounds of revision (before line and copy editing) is usual when writing to a literary standard. The process takes years even for the most skilled authors. I expect The Antipodes to require 15–25 years. The HBO staff writers did not have this kind of flexibility with their schedule.

In fact, I believe that one major decision caused the story arc of Game of Thrones to fail, and it’s one I don’t think the staff writers had much say in.

I won’t opine on the pacing, either. Pacing is quite subjective. All forms of narrative, whether on stage or in print, speed up as they near the end. Readers demand it. As the anvil falls, the rope at the other end whips around. Role-playing games give the players an airship (or equivalent) in the late-middle for a reason. I don’t fault HBO for the rapid pacing of Seasons 7 and 8. It’s not an artistic failure because it’s what readers and viewers want at this point.

There have been serious omissions (e.g., of motivation) in the late seasons, but those are not pacing problems. In fact, compared to a traditional movie (in which an entire story is told in two hours) the pacing of Thrones remains slow even in the final season. There are plenty of elements that could have been cut to make room for what’s missing.

So, with all of that said: why did Game of Thrones fail toward the ending?

As I said, George R. R. Martin writes dark fantasy (“grimdark”) that approaches the nihilistic. I do not intend to say that his series (which is not yet complete) “is nihilistic” and, again, I make no observation about him as a person. However, he writes about nihilistic, narcissistic, and narrow people. In characterization, he is closer to the MFA-educated metrorealism that is today called “literary fiction” than many in that camp would like to admit. When it comes to characterization, he doesn’t write heroes; for personality traits, he gives us the same middle-heavy bell curve that we encounter in real life.

Robert Baratheon defines the good life as “crack[ing] skulls and fuck[ing] girls”. Illyrio Mopatis, a fat old man, weeps in front of a statue of himself at age sixteen. Plenty of Martin’s characters admit they enjoy killing, although few people (even, if not especially, among those who must do it lawfully in war) actually do. Drinking, food, sex, fighting, and especially power seem to be the things that matter most to the characters in Martin’s world.

The author also indulges a trope that I find tiresome in fantasy: Adults Are Evil. (Arya, in the books, is a prepubescent girl.) As a middle-aged man, I am too old to hate that trope, but I must laugh at its absurdity. (Those over sixty can apply for “wise mentor” slots if willing to die in late Act II. Everyone between twenty-one and fifty-nine must be incompetent or malignant, and usually both.) Living in the real world, I find that age has no correlation with moral decency or value. If anything, one of the major improvements HBO has made on the source material is their aging of the characters: Jon, Sansa, and Arya are adults at the series end. I usually find “Adults Are Evil” to be exquisitely unskillful, but I give Martin a pass for it, because it actually fits his nihilistic, depressing world quite well.

The truth is that Martin’s brand of grimdark nihilism isn’t palatable to a large audience. People make jokes about Cthulhu, but few people actually sit down and read Lovecraft. The HBO series, wisely, pulled away from the unpalatable grimdark roots of the book series. It turned true-neutral Arya into the face of chaotic good– and made her an adult with agency rather than an unlucky child. Tyrion and Varys, whose actions in the books were reprehensible, were made into genuinely good humans in the HBO series. Between Jaime and Brienne, we got a beautifully depicted arc of platonic affection culminating before the Long Night. The HBO series sold us hope in a series about more than “cracking skulls and fucking girls” and that stupid iron chair.

The Northern Crisis, it seemed, brought a few of the series’s characters to focus on what was truly important in life. Daenerys, for all her flaws, chose to go north and fight in the war that mattered rather than the petty squabble in the south. We saw hope emerging, despite a desolate world. This is something I do with Farisa, too; I think I’m far from alone in that approach. We love it when beautiful, competent people exist despite a horrible world. When it comes to George R. R. Martin, I’d respect him for sticking with grimdark; I will also respect him if he gives us a more traditionally meaningful ending.

The HBO series broke away from grimdark. They seemed committed to the pro-meaning side. So, while we knew we’d get a tragic ending, we expected to be freed from the depressing nihilism of grimdark. Right?

Nopers! Why? Because fuck you, that’s why.

The Walkers were dispatched in a “Long Night” that seemed to last a regular night. Brienne became Jaime’s one-night-stand– because women are totally at their best when used as plot pieces to showcase a man’s moral depravity. Daenerys went on the rag and torched fifty thousand innocent people, because that is totally a realistic depiction of mental illness, and because women are emotionally unstable and that’s why they commit most of the violent crimes. (Oh, wait. No. That’s wrong.) Villains like Euron and Cersei who deserved humiliating deaths got I-die-but-I-win Heisenbergian endings, without the positive character traits that made it acceptable for Mr. White to get such an end.

It would have been fine (as I’ve said) to stick with grimdark and continue on to what appear to be Martin’s depressing plot points. There is no artistic reason why Daenerys couldn’t become a war criminal; it was foreshadowed that she might. It would have also been fine to continue to diverge from the apparent moral nihilism of the early work, and stick with the traditional heroism we saw toward the end. I feel utterly manipulated by HBO, though, for the switchback and the utter repeal of character growth.

The ultra-cynical view of this is that HBO deliberately gave us a more palatable (less nihilistic) series– knowing that high production values and copious nudity would only keep viewers invested for so long in an otherwise nihilistic story– only to swerve back into nihilism, perhaps in order to anger people and maximize buzz for the show.

1 + 1 = -5

Even though there are no rules in writing, there are rules. At least, there are guidelines. Sol Stein famously says, “One plus one equals one-half.” This is an observation of what I call rhetorical non-monotonicity.

To explain the notion, consider that a die-hard logician (or, say, a number-crunching Bayesian) must perceive each additional piece of supporting evidence as only making an argument stronger. That’s monotonicity. A strong argument followed by a weak argument is strictly more evidence than a strong argument alone.

Of course, that’s not how we respond to real arguments made by real humans, and there are strong social reasons for us to be that way. For example, if I’m trying to sell you on creationism (which I’m not, because while a theist/deist, I believe evolution by natural selection is true) I might point out all the “holes” in evolution. I believe that irreducible complexity arguments are flawed and lead to an incorrect conclusion, but there is enough meat to them to merit further study and (likely) refutation. Now, compare Presentations A and B. Under A, the proponent of creationism focuses on biological arguments alone; under B, he makes all the same arguments and continues on to say, “Evolution is also untrue because it contradicts the Christian Bible, which millions of people believe to be literally true.” Which is more convincing? If we expect logical monotonicity, then B (which presents everything in A, plus one more argument) is. However, most of us would find Presentation A more convincing; B supplements A’s case with an additional social-proof argument– and we know from our history that those are nearly useless in assessing scientific truth. We feel, after Presentation B, that we’ve endured a sales pitch. That’s rhetorical non-monotonicity in action.

Rhetorical non-monotonicity applies to art, as well, and writing in particular. One of the oft-cited “rules” to writing is “Show, don’t tell”. Actually, writers “tell” all the time. Showing one element often involves telling three supporting details. Show-don’t-tell expansions have to end at some point, lest the story ramble on for a million words. What often falls flat is to show and tell.

For example, consider this snippet: “It was a beautiful day. The sun shone, the clouds were brighter than clean linen, and the faint golden cast of the October wood suggested treasure within.” In most cases, one of those two sentences (the first tells; the second shows) ought to be cut. Which one? It depends on context. But, together, they weaken each other. That’s a case of one plus one equaling one-half.

Intensifying adjectives and adverbs function the same way. Overselling only works with ironic intent; when a serial killer narrates, “It was a beautiful day”, he is not commenting on the weather. There’s nothing wrong with the sentence “We knew we were totally safe here”, so long as the POV character is not safe there.

It is, in my view, extraordinarily difficult for a writer to know when one plus one is one-half and when it is two. In the second book of a series, the writer must offer a “reboot” that repeats details of book one. And offering exactly three details (“rule of three”) in escalating power, although arguably repetitive in stating the same principle three ways, seems remarkably effective.

There are times when one plus one equals one-half. “It was a very beautiful day” is a weaker sentence than “It was a beautiful day”. There are times, though, when one plus one equals minus five. Unskillful writing draws attention to itself. Metafictional elements and fourth-wall breaks can spice up the middle of a story, but toward the end they approach the sin of bathos. Most readers or viewers aren’t aware of the specific artistic sin; they just have a sense that the work “feels off” or is bad. Although Daenerys’s turn toward irrational evil is not bathos, and although it has been foreshadowed that she may become a destructive force or an antagonist, her turn feels “off” because of writing that has called too much attention to itself.

M. Night Shyamalan is infamous for his overuse of plot twists– his twists often call so much attention to themselves that they feel forced, like they exist for the sake of plot and are not organic. In general, twists follow a “zero, one, or many” principle. A straightforward story with no twists can work; a story with a single twist can work. Thrillers use frequent twists as a matter of course. What rarely works is to have two major twists– especially when the second negates the first. In that case, plot will always draw attention to itself and “feel off”. One plus one will equal minus five.

That’s what we got with HBO’s Game of Thrones. The show diverged from apparent nihilism and toward a more traditional heroic epic. A character we hated from the first episode, Jaime Lannister, showed his nobility in the Northern Crisis. And then, to hit what appear to be Martin’s original plot points, the show swerved back into grimdark nihilism, leaving us as viewers to feel cheated.

Let me propose one fix to the plot of Thrones.

The Rules of the Fix

I don’t want to add personal touches. I’m playing the role of a subordinate editor whose job is to make the product better. So I’ll aim for minimalism in my fixes. In particular, I’ll keep the plot mostly as-is, including the burning of King’s Landing by Drogon. The characters who die will still die, and around the same time.

I will, however, attempt to fix the egregious flaws of craft, without making major changes to the story as made.

Here goes.

Euron (and Rhaegal)

Of all the characters the showrunners changed for the worse, Euron Greyjoy ranks at the top. They seem to have gotten him and his purpose entirely wrong.

Sure, he’s evil; but Euron of the books isn’t “just evil”. We’ve already grown tired of regular evil: Joffrey, Ramsay Bolton, and Cersei Lannister. We’ve endured horrible people for eight seasons.

In the books, Euron steps beyond petty sadism and bland ambition; he has something the TV series has written out of him: magic.

In the books, he’s not just a bad guy. He’s a menace equivalent in threat level to the Others (in the show, White Walkers). He has a magical horn that is believed to control dragons. He’s been to Valyria, a dangerous ruined city that has become Ice and Fire’s version of hell; and he’s been trained in Asshai, the ultra-Lovecraftian capital of magic. He could probably teach Melisandre a thing or two.

The Others seem to use inhuman ice magic; Euron brings the opposite: fire magic, with the distinctly human elements of narcissism, cruelty, and ambition.

However, for some reason I’ll never understand, HBO took away his magic. The showrunners turned him into a dopey pirate and a dirtbag pickup artist. Epic fail.

Stripping Euron of his magic broke other bits of the show. For one thing, it required Daenerys’s stupidity to bring about the death of Rhaegal. (I think it was right to have Euron kill Rhaegal, as he is the ice/fire dual of the Night King and therefore ought to kill one out of symmetry; but it should have been better executed.) Furthermore, since the bad guys lost their mage, Bran had to be rendered mostly useless, lest the good side be over-powered.

I would have killed and abused the dragons in a different way.

Two of the dragons (Viserion, Drogon) were named after bad men. It made sense for them to be put to evil purposes, one way or another.

Rhaegal, though, was named after a good man who died with a bad reputation. Most fitting, I think, would be not to let Euron control that dragon, but to allow Euron the image of a dragon. For Euron to use Rhaegal’s visage for illusionism would fit the theme of the series (“power resides where men think it resides”). Euron could become a tyrant of King’s Landing on the hologram of a dragon alone.

There’s more to say about fixing Euron; I’ll get to that later on when I cover Daenerys and Drogon.

Brienne

I like that HBO made Brienne interesting. In the books, her chapters are a boring slog. The series put life into an underutilized character. Good on them for that.

Unfortunately, a genuine arc of platonic affection and mutual respect was trashed in favor of supernumerary on-screen sexing. We ought to be beyond the point in time where writers use female characters as props to showcase a man’s moral failures. It’s the current year, people.

I would rip out everything that happens after the Long Night. Brienne deserves better and so, for that matter, does Jaime.

Brienne has a new home in the North and, certainly, a role to play in rebuilding society after the catastrophe. If Sansa becomes Queen of the North, she can be Queensguard.

Cersei

Don’t get me wrong: it’s painful to be crushed by rocks. In prose, that could be written as the sort of death we feel Cersei deserves, with her screaming in the darkness until she runs out of oxygen. Slow carbon dioxide poisoning is (quite likely) worse than mere strangulation; it’s a terminal panic attack that can go on for hours. Burial alive is a frightening way to go.

Still, Cersei’s death isn’t cathartic on camera. It’s far too impersonal. We feel let down; we deserve more.

Moreover, the “emotional” reunion between the ex-twin/lovers was unskillful beyond description. I am more moved by an average fart than I was by that scene.

To get Cersei’s death right is challenging. I believe I know what the “perfect death” for that character is, but before I get into that, let’s consider what doesn’t work.

First, an ultra-violent death (meaning, one that exceeds the regular violence of the show) would fail; the gore would get in the way of catharsis. If we took joy in Cersei’s torture (or worse) we’d be as bad as she is. We don’t actually want to see her get the same treatment she and “Robert Strong” gave Septa Unella. That would make us the bad guys.

Second: Cersei can’t get the “villain dies laughing” death (that was unskillfully given to Euron) because that only works for chaotic evil. A true chaotic-evil villain is like a rabid dog that must be destroyed. There’s no joy in seeing such a villain suffer. But when the villain is humanly evil– neutral evil– we demand that she suffer. The ending need not be violent (and it is most skillful, often, when it is not) but it must tear her apart, psychologically, like the Bastille was dismantled brick by brick on July 14, 1789.

Public shame can work, but in Cersei’s case, it was already done (Sept of Baelor) and she came back from it. So that’s not enough, when it comes to Cersei’s death.

Noting these challenges, I believe the perfect death for Cersei is… to die at the hands of King’s Landing’s street children.

Her death ought to repay debts (as Lannisters do) to the poorest of the poor– and the orphans her wars have made. I’ll leave the level of violence to the writers. It could be an off-camera clubbing, where the viewers only get one terrified scream. Or it could be a bloody, protracted slicing-apart with dirty seashells, one that leaves her flesh in ribbons. It doesn’t matter, from my artistic perspective, whether it’s a painful death or a regular (by Thrones standards) one that she gets.

This death is perfect for Cersei. To start, she hates the common people and they hate her. Moreover, her conceit is that her evil is all done for her children. (Side question: why did viewers hate Skyler on Breaking Bad, even though she saw through that conceit, when they love Tyrion, who fell for that lie?) Thus, she ought to be killed by children among the millions of other peoples’ kids that she didn’t give a damn about.

That notion of justice is satisfying, too, in the context of today’s world. Consider that the entire global-corporate-capitalist system is powered not by otherworldly evil (Euron) or stupid sadism (Joffrey) but by those who do not think of themselves as evil, but nonetheless do evil things to acquire and preserve zero-sum advantages for their own progeny. I’m not an anti-natalist, but the kibbutzniks who disavowed the purported “virtue of family”– the narcissistic tendency of humans to care about the future only insofar as it concerns a tiny number of people– were on to something. More crimes are committed to keep corporate executives’ children in private schools than for the “evil” reasons more typical of narrative. To have Cersei killed by Flea Bottom urchins offers the same poetic justice as does the humiliation of parents who participated in the recent college admissions scandal.

Cersei ruined the world for everyone else’s children, so her children could rule. I can think of no better death for her.

Cleganebowl

Some people said this fight was “fan service” that didn’t belong. I disagree.

Sure, it was an over-the-top, comic-book battle, set against a background of fire, on a stairway to nowhere. It served little narrative purpose, but that’s OK. In fact, I appreciate the irony of Sandor’s quest to kill (again) what is already dead and repulsive. It fits Sandor’s style of dark humor.

No changes. Cleganebowl stays.

Arya

We’ll probably see Arya assassinate someone in the last episode of Thrones, but I would have used her in a specific way that makes sense in the context of Ice and Fire.

Arya’s chaotic good, so she must assassinate someone who deserves it. She killed the Night King, the “big bad” of ice magic. Who deserves to die, and furthermore functions as a “big bad” of fire magic? Euron, if his original powers were restored.

Jaime killing Euron makes little sense. That was a useless fight. If its purpose was to demonstrate the value of the prize that is Cersei, through the social proof of two failed men fighting over her, well… the whole device failed. Throw that out. Jaime doesn’t kill Euron at all; they never meet.

Arya kills Euron. Whose face does she use? Cersei’s, after the street children are done with her. It fits Cersei, too.  The queen cared about her image; for Arya to (literally) rip her face away from her is (once again) poetic justice.

After that, I like the idea of Arya retiring from killing, and going out for a more nonviolent sort of adventure. Though Thrones ought to remain a tragedy, not everyone needs to get a miserable ending.

Gendry

I would have Arya and Gendry end up together. The scene between the two, under the belief that it would lead to a romantic relationship, was an example (rare in Thrones) of positive sexuality. I don’t know why anyone would have a problem with it. Don’t get me wrong; I’d be furious if an author explicitly said I could name my daughter after a character, then paired her with a bad boy. Gendry, though, is about as far from a bad boy as it gets.

Arya’s initial rejection of him (or, more precisely, her rejection of the role as a traditional lady) makes sense and is not unusual by the standards of a romantic arc. The characters ought to be able to make it work. Gendry seems like a genuinely good human; I don’t think he’d force her to choose between adventure (whatever that means in her next phase of life) and romantic love. He should want for her to have both. And she should be able to have both. Again, it’s the current year.

Daenerys

The diablo ex machina of the penultimate episode made this woman’s arc a joke.

I expected Daenerys to become, if not a villain, an antagonist toward the end of the series. There was plenty of foreshadowing for that. What I find absurd and a bit offensive is the suggestion that “madness” causes principled people to snap and sic their nuke lizards on cities. That is not how actual mental illness works. It sets in gradually, and people with it are no more violent, in general, than the rest of the population.

We also endure the “bitches be cray” trope; a woman proves herself unfit to rule by committing a war crime, just because she can.

There are contexts in which I would believe Dany would make the wrong decision. Game of Thrones did not provide one. Daenerys appeared to snap because the plot needed her to snap.

Furthermore, there’s nowhere to go from this terrible plot point. What can really be done with Daenerys? She can be killed; we can watch her die; that’s about it. The penultimate episode’s twist traded an interesting heroine for a villain whose defining traits are (a) that she was once semi-good and (b) that she has a dragon. Boring.

She deserved better. I don’t mind heroes becoming villains over time. I don’t mind twist antagonists. I don’t mind Daenerys taking a tragic turn. There’s nothing wrong with any of that.

The twist in her character was implausible. Yes, the beheading of her confidante was terrible, but this woman has seen (and brought) war. If she had pure evil in her, we would have seen it long ago. We saw a principled but ruthless character that it was hard to root for, but we did not see pure evil.

Moral mystery characters, like Snape in Harry Potter and Rhaegar in this series, are easier to turn for good than evil. To do the latter takes far more skill and setup time than we saw.

As I would edit the final series, I’d make Daenerys innocent of the major crime (which still occurs). That’s not because I love the character. I don’t. I think she’s immature, dogmatic, and (like Jon, and unlike Sansa) unfit to rule. However, I don’t buy her sudden change into a war criminal. Antagonist, sure. Person who makes a bad decision under pressure, sure. Villain, maybe. Psychopath; no. We have already seen that she is not one.

The Burning of King’s Landing

The burning of King’s Landing is thematically necessary. I’m not going to shoehorn Thrones into a happy ending. It is a tragedy and it does not shrink from the horrors of war, including its grim effects on the common people. Millions were going to starve already, because of the dreadful conflict; but starvation isn’t nearly as cinematic as a city going up in flames. The Sack of King’s Landing stays.

So, if the burning still happens but Daenerys isn’t responsible, then who is?

We return to Euron.

His magic makes him far more menacing than the Golden Company. The Golden Company is easy to hate. People who’ll fight for Cersei’s money are like hit men in Sin City: no matter what you do to ’em, you don’t feel bad. They serve two purposes. One is (as mentioned) as mooks to get blown up or burned. Two: Cersei’s resorting to mercenaries shows that she has lost the respect of her people.

Machiavelli, though, gave us the final word on mercenaries: they don’t fight during winter, and it’s winter.

The Golden Company gives us the right to cheer when some people are burned alive, but they’re not that scary– especially given that HBO took away their elephants, because mahouts plus dragons > budget. Euron, as a magician, is terrifying.

Make him a warg.

After Arya kills Euron, he uses his last breath to warg into Drogon. Bran knows it’s coming, so he wargs in as well. Now we get a mind fight, inside a dragon, between two of the most powerful mages in the world: a Stark warg (ice mage) who has been north of the Wall, and a fire mage who’s been to Valyria and Asshai.

Daenerys fights to pound sense into her child. Drogon (while Euron is in control) ruins King’s Landing. Bran wins the warg-fight but the dragon ends up dead (as well as Euron). Daenerys lives.

She’s innocent, but the world doesn’t know that.

HBO appears to have turned this heroine into a war criminal. It’s possible that Drogon was warged by some evil mage, but that hasn’t been foreshadowed. It appears that Daenerys has broken bad and it’s a fait accompli. There’s nothing left to do but kill her, and there’ll be little emotion or catharsis because people have died for far less. By Westerosi logic, to execute Daenerys-the-war-criminal is the only thing that can be done. There’s no choice and, when there’s no choice, there’s no drama.

Daenerys-the-innocent, who nearly died riding a warged dragon, and who appears to have done something unforgivable… now she’s a source of drama. An innocent who appears to the world to be guilty? That’s painful to watch, sure, but good writers make it hard for the heroes.

Tyrion

Tyrion’s bright and he loves to argue. We’ve seen him argue for himself. He sometimes loses, but he’s good at it. Now, in the past, he’s been competent– but a competent person becomes heroic when she uses her strength not for herself, but toward another’s benefit.

The best lawyer in all of Westeros ought to be the first to believe Daenerys’s account– that something outside her comprehension happened, and she lost control of Drogon. We’re putting Tyrion into 12 Angry Men. All of Westeros thinks she’s guilty; he thinks (knows) she’s innocent.

We’ve learned that Tyrion’s brilliance was of the old world, when reputation and the throne mattered most. He’s not very competent in the new one. Considering that, and the fact that Ice and Fire is almost certainly a tragedy and will hold Daenerys accountable for bad decisions in the past (including the burning-alive of Dickon Tarly), I have to conclude that Tyrion is unsuccessful and Daenerys must die.

Personally, I’d put Samwell Tarly on the tribunal that must decide on Daenerys’s innocence, and although I like that character, I’d have him get it wrong (just as he was a liability in the Battle of Winterfell). His vote for her guilt, in (say) a 4–3 decision, dooms her.

Tyrion, as Daenerys’s advocate, does not die; in fact, his reputation improves in the following years as Westeros learns of her innocence. But he lives out the rest of his life in the Free Cities. Tyrion’s tragic turn, in the series, began when he rejected the opportunity to flee with Shae (who genuinely loved him, unlike in the books) to the Free Cities. His exile from Westeros (possibly self-imposed) must atone for this.

Jon Snow

Jon Snow, as interim ruler of Westeros, recuses himself from the vote on Daenerys’s innocence. He does not want to kill her, but by honor he must. “He who passes the sentence should swing the sword.”

After killing Daenerys, Jon is in no emotional state to rule and installs in power a man with a weak personality who has as many reasons as the others to be disgusted by the game of thrones: Edmure Tully. He’s the interim leader of the Seven (or Six?) Kingdoms while a constitution is drafted.

Jon goes home to rebuild the North. Samwell Tarly goes with him.

He (or Samwell) demands to know, from Bran, whether Daenerys was innocent. They find out that they were wrong. Jon and Samwell, stricken by grief and guilt, decide to honor Daenerys’s memory by building a better Westeros. Jon offers his services to the north and reconstitutes the Night’s Watch, this time focused on protecting rather than warring against the Free Folk. He strives for positive relations with the Children of the Forest and (if relevant) the Others/White Walkers.

Alternate, darker possibility: the White Walkers aren’t dead. In fact, the dragons’ death has made their spooky and possibly malignant ice magic stronger. Jon Snow enlists their aid for the purpose of reviving Daenerys. As in “The Monkey’s Paw” by W. W. Jacobs, it doesn’t end well.

Sansa

Edmure, as interim ruler of (southern) Westeros, is largely ineffectual. Jon is an emotional wreck after (a) killing his lover-aunt and (b) learning she was innocent of the crime for which he killed her. Though both men have the wish to build a better Westeros, Sansa must take the lead at rebuilding society.

(This assumes that HBO doesn’t erase her character growth, as it has done far too often, and make her a naif again.)

Sansa would be good at this. She’d probably enjoy it, too. Sansa might be elected the first president of the Constitutional Republic of Westeros.

Davos

Davos began the series illiterate, but he was always smart. He should become a maester. Perhaps he is the one (rather than Samwell) who writes the final account of what happened in Westeros. Having seen war and the unnecessary death of an innocent girl (Shireen), Davos writes an account of the myriad Westerosi disasters so they never occur again. Samwell may play a role in this effort, but I don’t see him taking the lead in it; he is too shaken by his role in Daenerys’s death.

Bran’s ending

It’s hard to give Bran a satisfying ending. In my series, Farisa (if she lives) presents a similar problem. After a “happy ending”, banal human evil must still exist; what role is there for an ultra-powerful but good mage? Does the mage intervene (and thereby deprive humans of autonomy) or retire from practice? I don’t want to self-spoil Farisa (although I have not fully decided on an ending, being several books away from it) so I’ll tread carefully.

With Bran, we assume that he’s “a good guy” because of his family name, but we know (a)  that he no longer considers himself a Stark, and (b) from the books that many historical Starks were not good. Bloodraven was a morally complicated character before he became the Three-Eyed Raven and there’s no reason to assume he “became good” after his transition. So, Bran’s moral status remains opaque, and what did Old Nan say? “Crows are liars.” (And what does the name Bran mean?) Perhaps Bran’s agenda is darker than we expected.

Still, little foreshadows Bran turning evil, and it seems unlikely that ultra-good characters such as Jojen and Meera Reed would deliver him to a destiny they knew to be malign. So, it seems more likely than not that Bran should end as a force for good.

Perhaps Bran is the first to recognize that the Northern Crisis (from Walkers, or the Children of the Forest, or something new like those ice spiders we were promised) isn’t over. He uses his magic to rebuild the wall… like Bran the Builder.

There are darker possibilities, too: I find the theory that Bran accidentally created and in some sense “is” the Night King to be (in some form) fairly credible. Without such a tragic element, the affliction of Hodor seems a bit of a boondoggle. This being said, I think it would take several episodes to properly service this element of the story. Under the six-episode constraint, I’d use Bran to support the political happy ending (as Westeros’s memory, he helps the world heal) rather than giving him a personal tragic turn.

The Prophecies

I’m not going to focus on “the Prince who was Promised” or any of those other prophecies. I don’t much care about them. The view I take of Martin’s work is that all the gods are equally nonexistent. If prophesied plot points can be hit, great; I don’t find it worth it to bend the story out of shape to service those, though.

Jaime

Jaime, as I would end the series, still returns to King’s Landing– but to kill Cersei, and not because she sent Bronn to kill him, but because she sent him after Tyrion. Still, when Jaime sees the street children killing his twin sister, he considers this too cruel a death for her and tries to intervene. But he’s an aging knight whose power (like Tyrion’s old-world learning) is no good these days, and he cannot.

After being unable to protect her, he either kills himself or (more hopefully) returns to the North to build his new life.

Qyburn

No changes. That man was (I believe) heavily inspired by Dr. Mengele, and I’m all for that sick fuck being killed by his own morbid creation rather than dying in South America or Pentos.

The Iron Throne

The true villains of Game of Thrones are outmoded ideas: a “true king” (divine right) or, in Daenerys’s case, a true queen. If anyone sits on the Iron Throne, we perceive Westeros as getting a thousand years of (to quote Jon Snow) “more of the same”.

No family (Tyrion excepted) represents “more of the same” as much as the Lannisters. Gold, greed, and extreme conservatism are their trade. Theirs is the nihilism of Tyrion’s alcoholism, and of modern-day corporate America. Ice and Fire is about many things, but one among them is the delicious fall of the Lannisters. Tywin wanted a “dinn-a-stee” that would last a thousand years; well, nope.

So who gets the Iron Throne? Well, we could put it in a museum, but someone should still get to sit on it. It just seems right.

There is one living Lannister who is innocent: King Tommen’s cat, Ser Pounce.

Illyrio Mopatis, the slave trader who started all of this, is captured and lives out his days as “The Paw” (or, alternately, “The Scoop”) and cleans out the litter box.

[Image: Ser Pounce on the Iron Throne]

A Computational Understanding of ZFC Set Theory

When I first encountered the ZFC set theory axioms, the notion that they in particular should be the foundation of mathematics struck me as not making a great deal of sense. What was this Axiom of Foundation:

∀x ((∃y (y ∈ x)) → (∃z ((z ∈ x) ∧ (∀w ~((w ∈ z) ∧ (w ∈ x))))))

and why would anyone care? What do we learn about practical mathematics from this logical sentence? What makes ZFC the mathematics we care about?

Further study in mathematics tells us: not all that much. ZFC isn’t special. Set theorists study all sorts of axiomatic systems, some strong and others weaker; ZFC happens to be a point of comfort, a level of mathematical proving strength that comfortably gives us the mathematics we want (and, arguably, strange mathematical results that a few might not want). It is not so strong as to prove all that is true– no consistent logical system rich enough to express arithmetic can do that (Gödel’s first incompleteness theorem). Nor can it, without appealing to a stronger system, prove that it is free of contradiction (the second).

So, why ZFC? The axioms suffice to construct a mathematics that does not seem to diverge in any meaningful way from the “real world mathematics” (to be discussed later) that powers the sciences. A point in space, it seems, requires three real numbers to define it. We do not actually know the fine-grained nature of physical space, but it feels like a continuum.

The truth is that ZFC isn’t “the foundation of mathematics” but the foundation of a mathematics, one that (a) arguably suffices for real-world purposes, (b) allows us to reason about infinite objects far beyond what seems to exist in the real world, while (c) being small enough for its axioms(*) to fit on a blackboard.

(*) Except, of course, for the need to introduce two axiom schemas, but that shall be covered.

What is ZFC?

Zermelo-Fraenkel set theory with Choice is a first-order logic with equality. Its syntax includes:

  • The connectives of propositional logic: ~ ∧ ∨ → ⇔.
  • An equality symbol: =.
  • Grouping symbols ( and ) are used to specify order of operations. Spaces are used for clarity but have no meaning.
  • As many variable symbols (lower-case letters x, y, z, and so on) as are needed.
  • The universal and existential quantifiers: ∀ and ∃.

All of the above are called logical symbols. To these, ZFC adds one domain-specific non-logical symbol: ∈, for set membership.

The logical symbols’ meanings and corresponding axioms are not included among ZFC’s axioms. All propositional tautologies are considered theorems of ZFC, but we shall not discuss them further. The same holds for first-order sentences that can be proven without using any set theory, like:

(x = y) → (y = x)

which is invariably presented as an axiom attendant to equality, and

∀x (x = z) → ∃x (x = z)

which follows from the quantifiers being defined in the standard way (if something is true of all x, it must be true of at least one x, since first-order domains are assumed non-empty).

When we discuss ZFC, we’ll present the axioms that define for us:

  • what things are and are not sets.
  • how they relate to each other, using the added non-logical symbol, ∈.

When we speak of formal logic, we do not need to interpret ∈. Formal logical systems do not describe what is “true” or “false” in any real world, but what is provable (a theorem) from the axioms. Which statements are theorems and which are not is derivable syntactically (or “typographically”) from the symbols themselves, without caring about their meaning. A machine could do the job.

All we know is that:

  • Some sentences, like ∃x∀y ~(y∈x) (there is an empty set) are theorems of ZFC.
  • Some valid sentences, like ∃x (x∈x), are non-theorems of ZFC. They may be negations of known theorems, or they may be unprovable.
  • Some strings of logical symbols, like ∃∈∈x→~, are not valid sentences and shall not be considered further.

Our lack of commitment to interpretations of these symbols– using only the familiar axioms for the logical symbols, with ∈ to be introduced and applied according to ZFC’s axioms, to be given– is necessary because, from first principles, we don’t know anything yet about the collection of values over which the variables– the x‘s quantified over by existential (∃x) and universal (∀x) quantifiers– range. We haven’t built a model that tells us that.

In set theory, these variables always range over sets. But what is a set? We haven’t determined that yet.

Equality has a special meaning: if x = y, they are the same object and perfectly interchangeable with one another. That is, they are elements of the same things and have the same elements, or, more formally:

(x = y) → ∀z ((x ∈ z) ⇔ (y ∈ z)), and

(x = y) → ∀z ((z ∈ x) ⇔ (z ∈ y))

Now that we’ve done this work, let’s introduce ZFC’s axioms.

Axiom 0: Sets Exist

In fact, we’ll make a stronger claim: the empty set exists.

∃x∀y ~(y ∈ x)

Literally, “there exists a set that all sets are not in”; more legibly, there exists a set with nothing in it.

We could derive the existence of this set from the Axiom of Infinity (which declares that an infinite set exists) together with Comprehension (using a logical formula that is always false). We technically do not need it. So, why include it? From a programmer’s perspective, it helps. What’s the first thing one wants to know about a data structure K? It is: how does one create it? We now have the empty function (or method) of the Set class. Furthermore, since sets are immutable values, we can treat empty as a value.
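
From here on, where it helps, I’ll sketch the axioms in runnable Python, using frozenset as a stand-in for hereditarily finite sets: frozenset is immutable and hashable, so sets can contain sets, mirroring set theory’s value semantics. All function names in these sketches (empty and friends) are my own illustrative inventions, not any standard library’s.

# A computational sketch, not the axiom itself: model hereditarily
# finite sets with Python's immutable, hashable frozenset.
empty = frozenset()                   # Axiom 0: an empty set exists.
assert not any(True for _ in empty)   # there is nothing in it
assert empty == frozenset()           # and it is a first-class value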

Axiom 1: Extensionality, which lets us prove things equal.

We have an empty set. How many do we have?

It would be annoying if we had to keep track of a myriad of empty sets, all of which had differing “personalities” irrelevant to the theory. Computers can handle this extraneous detail; they must keep track of equivalent data structures if those are mutable. In set theory, though, we deal with objects that never change.

Two sets are equal if they contain the same elements. This is not innate to logical systems; we must make it an axiom of our set theory.

∀x∀y (∀z ((z ∈ x) ⇔ (z ∈ y))) → (x = y)

This, in essence, gives us an equals function– or, at least, the notion required to build one. We also know everything about a set if we know its elements.

So, there is only one empty set, and we know all we need to know– there is nothing in it.

By Extensionality, there is no meaningful distinction between {x} and {x, x} or between {x, y} and {y, x}. We’ll discuss how to handle that with our next axiom.
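
A quick check, in the frozenset sketch, that equality is already extensional– the {x} versus {x, x} identification falls out for free. Nothing here is mandated by the axiom; it’s just the model behaving as Extensionality demands.

# Extensionality in the sketch: frozensets are equal exactly when
# they have the same elements.
x = frozenset()                       # {}
y = frozenset({x})                    # {{}}
assert frozenset({x, x}) == frozenset({x})     # {x, x} is {x}
assert frozenset({x, y}) == frozenset({y, x})  # order is irrelevant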

Axiom 2: Pairing, which lets us build sets.

The Axiom of Pairing gives us the unordered world’s equivalent of a cons cell; given two sets, we can always create the set containing both.

∀x∀y ∃z ∀w (((w = x) ∨ (w = y)) ⇔ (w ∈ z))

This will have one or two elements; one, in the special case where x = y.

We shall later see that Pairing is redundant. It could be proven using Infinity, Comprehension, and Replacement, to be discussed below; but that would be monstrously inelegant. From a computational perspective, we would not construct an ordered pair by constructing an infinite data structure, then choosing two elements.

Pairing upholds the notion of sets as containing more than one thing. It also indicates that there is only one set universe; there are no x and y that live in disjoint universes, because ZFC will always contain {x, y}.

From this, we can build the notion of an ordered pair: we interpret {{x}, {x, y}} as <x, y>.
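
In the frozenset sketch, Pairing and the ordered-pair encoding are one-liners (pair and opair are illustrative names):

def pair(x, y):
    # Axiom of Pairing: {x, y}; one element in the case x == y.
    return frozenset({x, y})

def opair(x, y):
    # Ordered pair <x, y> := {{x}, {x, y}}.
    return frozenset({frozenset({x}), pair(x, y)})

a, b = frozenset(), frozenset({frozenset()})
assert pair(a, a) == frozenset({a})   # the degenerate one-element case
assert opair(a, b) != opair(b, a)     # order now matters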

So far, though, we can only build sets with zero, one, or two elements. We’re quite limited in what we can do.

Axiom 3: Union, which flattens set structures and yields bigger sets.

The Axiom of Union allows us to collect all the sets that live in a given set into one.

∀x ∃u ∀y ((y ∈ u) ⇔ ∃t ((y ∈ t) ∧ (t ∈ x)))

So, from {{x, y}, {z, w}} we can make {x, y, z, w}. We can think of this as a flatten function that removes (or interprets) one level of set structure, or as set theory’s natural notion of what functional programming calls a reduce.

The binary union operator (∪) is a special case of this axiom: x ∪ y is the set that must exist by applying Union to {x, y}, which must exist by Pairing.

This gives us the ability to generate the fundamental objects of arithmetic, the natural numbers.

0     := {}

x + 1 := x ∪ {x}

It may not be meaningful to say, in an absolute sense, that the natural number 3 is the set constructed in this way; but we can interpret that set as 3. (Building the rest of arithmetic, such as addition and multiplication, can be done in the first-order logic of ZFC; but it is tedious and will not be done here.) Note that this 3 is defined as the set {0, 1, 2}; this practice continues with infinite ordinals: for example, the smallest infinite ordinal ω is identified with the set of all natural numbers.
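
In the frozenset sketch, Union is a one-level flatten, and the von Neumann naturals fall out of it (union and succ are my names):

def union(x):
    # Axiom of Union: flatten one level of set structure.
    return frozenset(a for t in x for a in t)

def succ(x):
    # x + 1 := x ∪ {x}, built from Pairing and Union.
    return union(frozenset({x, frozenset({x})}))

zero = frozenset()
one, two = succ(zero), succ(succ(zero))
three = succ(two)
assert three == frozenset({zero, one, two})   # 3 = {0, 1, 2}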

Axiom (Schema) 4: Comprehension, the filter of sets.

We’ve developed tools for making sets larger. Comprehension gives us a tool for making them smaller. A definable subset of a set ought also to be a set.

For any well-formed logical formula with one free variable, F, we have:

∀x ∃y ∀z ((z ∈ y) ⇔ ((z ∈ x) ∧ F[z]))

In other words, given a set x and a definable logical property F, the y containing exactly those elements z of x for which F[z] holds is a set. This corresponds to filter in functional programming, with F taking the role of the function argument.
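
In the sketch, Comprehension really is filter– Python even calls its native syntax for this a comprehension. The function name is mine, and the integers stand in for sets to keep the example readable.

def comprehension(x, F):
    # Axiom Schema of Comprehension: {z in x : F(z)} -- i.e., filter.
    return frozenset(z for z in x if F(z))

x = frozenset({0, 1, 2, 3, 4, 5})
evens = comprehension(x, lambda z: z % 2 == 0)
assert evens == frozenset({0, 2, 4})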

This is not, technically speaking, an axiom. It’s an axiom schema. It represents a countably infinite set of axioms, one for each F that can be defined. In this way, ZFC doesn’t have nine (or so) axioms but infinitely many. That turns out to be no problem in practice. As long as a computer can check within finite time whether a sentence is or is not an axiom, we’re okay.

We do not need Filter (or Replacement, to be discussed) in the hereditarily finite world– the world where all sets (including the sets within sets) are finite. Nor do we need the Axiom of the Power Set. We will need these axioms if we want to wrangle with infinite sets, which of course we do.

Axiom (Schema) 5: Replacement or map

One of the most subtle axioms of set theory is that of Replacement. It seems less powerful than it is. In fact, the largest infinities known to ZFC can only be constructed with Replacement. This is counterintuitive. In the finite world, the Power Set Axiom generates “much larger” sets (from size n to size 2^n) but Replacement does not; used alone, it creates a set no larger than an existing set.

The Axiom Schema of Replacement gives us, for every logical formula F with two free variables such that F[a, b] holds for exactly one b per a, this sentence:

∀x ∃y ((∀a ∀b (((a ∈ x) ∧ F[a, b]) → (b ∈ y))) ∧ (∀c ((c ∈ y) → ∃d ((d ∈ x) ∧ F[d, c]))))

The F described behaves like a function from the elements of x to those of y. It says that if {a, b, c, …} is a set, then so is {f(a), f(b), f(c), …}.

You can get a simpler formulation if you demand that F be one-to-one; then you have:

∀x ∃y ∀a ∀b (F[a, b] → ((a ∈ x) ⇔ (b ∈ y)))

In ZFC, both presentations– the simpler one mandating a one-to-one F, or the more complex one without that restriction– generate the same sets, and result in the same theory.

In computational terms, this is akin to that crown jewel of functional programming, map.
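
The sketch makes the kinship direct (replacement is my name; the Python function f stands in for the functional formula F[a, b]):

def replacement(x, f):
    # Axiom Schema of Replacement: {f(a) : a in x} -- i.e., map.
    return frozenset(f(a) for a in x)

x = frozenset({1, 2, 3})                  # ints standing in for sets
doubled = replacement(x, lambda a: 2 * a)
assert doubled == frozenset({2, 4, 6})
assert len(doubled) <= len(x)   # alone, it never makes a bigger set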

Axiom 6: Power Set

The axiom of the Power Set is stated thus:

∀x ∃p ∀y ((∀z ((z ∈ y) → (z ∈ x))) ⇔ (y ∈ p))

In other words, for every x, there is a larger set p that contains all of x‘s subsets y. In the finite world, this is strictly larger: we jump from n elements to 2^n elements. That turns out to be true in the infinite world, where size is a more complicated matter, as well. There is always a bigger set. Since there is no largest set, there is no “set of all sets”. We’ll get more specific on this when we discuss Foundation.
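
In the finite sketch, the n-to-2^n jump can be computed outright (powerset is hand-written here; it is not a Python built-in):

from itertools import combinations

def powerset(x):
    # Axiom of the Power Set: the set of all subsets of x.
    elems = list(x)
    return frozenset(frozenset(c)
                     for r in range(len(elems) + 1)
                     for c in combinations(elems, r))

x = frozenset({1, 2, 3})
p = powerset(x)
assert len(p) == 2 ** len(x)         # n elements -> 2^n subsets
assert frozenset() in p and x in p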

Axiom 7: Infinity, where the fun begins.

We have the tools to build up (and tear down) all the natural numbers, and all finite sets of natural numbers, all finite sets of finite sets of natural numbers, and so on.

We don’t yet have infinite sets. None of our axioms provides that one must exist. By the tools we have, we can’t get to the infinite world from what we have.

We have infinitely many sets already. We have the natural numbers. That’s an outside-the-system judgment about what exists in our world thus far. We don’t yet have that {0, 1, 2, …} is itself a set. Since the purpose of set theory is to wrangle with notions of infinity, and since we need an axiomatic statement that one exists, we add:

∃n ((∃z (z ∈ n)) ∧ (∀x ((x ∈ n) → ∃s ((s ∈ n) ∧ (x ∈ s) ∧ ∀y ((y ∈ x) → (y ∈ s))))))

What does this say? It says that there exists some set n that is not empty (that is, it contains at least one element z) and that, for every x in it, n also contains an s that includes x and everything in x– in effect, s(x) = x ∪ {x}. This will be a larger set than x, so long as we never have x ∈ x, which we’ll establish with Foundation, next.

We could, alternatively, make it an axiom that N, the set of natural numbers, exists– this amounts to pinning z down as the empty set. In that case, we don’t need to use ∀x ~(x ∈ x), which we haven’t yet established. In ZFC, it is irrelevant whether we use the weaker statement that “an infinite set exists” or construct specifically the set of natural numbers; with Replacement, they are equivalent.

We’ve now reached a space beyond what we can compute. We can conceive of the infinite and we can write programs that never terminate (using a while loop) but we cannot store a completed infinity in a computer. As infinities get larger, and sets more complicated, the divergence between what “exists mathematically” and what we can realize, computationally, grows.
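
One computational echo of that divergence: a generator can enumerate the von Neumann naturals forever, but the completed infinite set N never materializes in memory (a sketch; naturals is my name for it):

def naturals():
    # Lazily enumerate 0, 1, 2, ... as von Neumann ordinals.
    # Each yielded set is finite; the whole of N is never stored.
    n = frozenset()
    while True:
        yield n
        n = n | frozenset({n})    # successor: n union {n}

gen = naturals()
first_four = [next(gen) for _ in range(4)]
assert len(first_four[3]) == 3    # 3 = {0, 1, 2}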

An aside on Replacement, massive infinities, and “ordinary mathematics”.

We define two sets x and y to have the same size (or cardinality) if there is a function from x to y that is invertible (one-to-one) and covers all of y (onto). It turns out that for every set x, the power set P(x) is larger. So from the smallest infinity of the natural numbers, N, we get the larger infinities P(N), P(P(N)), P(P(P(N))), and so on.

If we exclude the Axiom of Replacement, we still have a mathematical universe that contains:

  • the natural numbers,
  • negative, rational, real, and complex numbers, which can be constructed using set-theoretic tools,
  • sets of (and sets of sets of, and sets of sets of sets of) mathematical entities above,
  • algebraic entities like groups, rings, fields, and vector spaces,
  • functions on the mathematical entities above (e.g., functions C^8 → C),
  • and higher-order functions that can accept as arguments or return the entities above.

So, even bizarre mathematical objects like {7, 3.14159…, 3+4i, {(λx → x + 3), -17, {}}} exist in our universe. We have ordinary mathematics– almost all of mathematics excluding set theory.

What don’t we have, without Replacement? We don’t have this thing as a set:

{N, P(N), P(P(N)), P(P(P(N))), … }

or its union; both of which, most of us feel, deserve to be sets. That massive infinite set, bigger than anything we encounter in ordinary mathematics, may not be something we need in daily math… but it seems that it deserves to exist in the mathematical universe.

Axiom 8: Foundation, the limiter.

So far, we’ve discussed axioms that create sets. We have the tools to create the sets we want. How do we avoid calling things sets that we don’t want?

First-order logic can’t limit size. We can’t say that our mathematical universe is “only yay big”. For example, ZFC without Replacement cannot generate the large set described above, but it also cannot prove that entity not to be a set. So, there will always be more things in the heaven and earth of our first-order logic than are dreamt of in our philosophy; but we can limit undesirable behaviors.

In programming terminology, we can impose a contract that sets must follow.

The Axiom of Foundation is the only one to place limits on set-ness, the only one that prevents us from making sets.

I put it toward the end because its formulation is the most opaque. It is:

∀x ((∃y (y ∈ x)) → (∃z ((z ∈ x) ∧ (∀w ~((w ∈ z) ∧ (w ∈ x))))))

To which an appropriate response might be, What the hell does that mean?

The answer is: For every set x that is not empty (that is, has a y in it), there is a z in it that is disjoint from x.

We know what it means; the question, then, is: Why the hell do we care?

The most important consequence of this might be that no set contains itself. Therefore, we cannot define the set x = {x}. In computer science, self-containing data structures are admissible. (To paraphrase Trump, when you’re a *, pointers let you do it.) Mathematicians, however, don’t want their foundations to be self-referential. (There are mathematical structures, like graphs, that allow such recursion and self-reference, but sets themselves should not.) Why? Russell’s Paradox. If sets can contain themselves, and all collections are sets, then define a “Russell set” as any set that doesn’t contain itself. Is the set of all Russell sets a Russell set? If it is, then it contains itself, so it’s not a Russell set; if it’s not, then it is. Mathematics becomes inconsistent and the world blows up. Clearly, mathematics can’t afford to confer set-ship on just anything.

The more general result of Foundation: there is no infinite descending chain of sets a ∋ b ∋ c ∋ ⋯. It is a consequence of this that there are no self-containing sets or set-membership cycles; e.g., there are no x, y, z for which x ∈ y, y ∈ z, and z ∈ x.

An alternate way to think of this is that one can always “get to the bottom of” a set, even if it’s infinite. Or: imagine a single-player game like Nim, but instead of stones on a table (from which a player takes some, each turn) there is a set X “on the table”. Each turn, the player selects an element a of X and “places” that a on the table, unless a is the empty set, in which case the game terminates.

For example, letting N be the set of natural numbers, we could have the following state evolution:

Table: {3, 17, {4, 29, {6}}, P(N)}
... player chooses P(N)
... Table : P(N) = {{}, {0}, {1}, {0, 1}, ... }
... player chooses {2, 3, 5, 7, 11, ...}
... Table : {2, 3, 5, 7, 11, ...}
... player chooses 173
... Table:  173 = {0, 1, 2, ..., 172}
... no more than 173 steps later...
... Table : 0

Foundation means that this game always terminates in finite time. It can take an arbitrarily long time– the player could have chosen 82,589,933, or the first prime number larger than TREE(4,567), instead of 173– but there are no infinite paths, assuming that what’s on the table is a well-founded set.
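
In the frozenset sketch, the termination guarantee is visible in a rank function (my name), whose recursion bottoms out precisely because a frozenset must be fully built before it can become an element of another– the model is well-founded by construction:

def rank(x):
    # Rank of a well-founded set: 0 for {}, else 1 + the max rank
    # of its elements. Terminates: no infinite descending chains.
    if not x:
        return 0
    return 1 + max(rank(e) for e in x)

zero = frozenset()
one = frozenset({zero})
two = frozenset({zero, one})
assert (rank(zero), rank(one), rank(two)) == (0, 1, 2)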

Axiom 9: Choice, the controversial one.

I’ve saved the “controversial” Axiom of Choice, the namesake for the “C” in ZFC, for last.

Is it controversial? Well, let’s understand what a mathematical controversy is. Mathematicians do not, by and large, get into arguments about whether Choice is true or false; it is neither. There are valid mathematical systems with it, and others without it. It is subjective, which systems one prefers to study. But no mathematician will say that the Axiom of Choice is “wrong”.

Although I have worked entirely in the bare language of first-order logic plus equality and the “non-logical” ∈, I will, to make notation easier, include the ordered-pair notation <a, b> = {{a}, {a, b}}. Extending the alphabet of a logical language can be done in a principled, conservative way (that is, one that does not produce new theorems) but I will skip over the details. One who wants them can read Kenneth Kunen’s 2011 book, Set Theory.

The Axiom of Choice is:

∀x ∃f ∀a ((a ∈ x) → ((∃c (c ∈ a)) → ∃b ∃p ((p ∈ f) ∧ (p = <a, b>) ∧ (b ∈ a))))

This means that for any set x we have a choice function f– a set of ordered pairs– that for all non-empty a in x contains an <a, b> for which b ∈ a.

In other words, from any set of sets, we can derive a choice function that maps each of its nonempty sets to one of its elements (that is, it “picks” one).
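
For a finite family of sets, a choice function is trivially computable– pick any element– which is exactly why Choice only has teeth for infinite, structureless families. A sketch (choice_function is an illustrative name; no such algorithm need exist in the infinite case):

def choice_function(x):
    # Map each non-empty a in x to some element of a.
    # Finite case only: for arbitrary infinite families of
    # "indistinguishable" sets, no rule for picking need exist.
    return {a: next(iter(a)) for a in x if a}

family = frozenset({frozenset({1, 2}), frozenset({3}), frozenset()})
f = choice_function(family)
assert all(f[a] in a for a in family if a)  # each pick lands in its set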

Choice, of all the mathematical axioms, is the most flagrant in its divergence from the real world. Infinity can be problematic, but physics forces us to grapple with its possibility. We do not know the fine-grained nature of physical space, but it seems to exhibit rotational symmetry (that is, there are no privileged x-, y-, and z-axes beyond the basis vectors we choose), which suggests strongly that it is R^3. Infinite sets, we need for geometry to work.

Choice, however, is counter-physical. If we agree that choosing an element from a set is computational, and therefore requires energy (or increases the entropy of the universe, or costs in some currency), then to realize a choice function has an infinite cost. In other words, we are calling things “sets” that have no presence in the physical world.

To wit, in a world with Choice, events can be described (in theory) that have no probability. That is not to say they have zero probability; rather, it creates a contradiction to assign them probability zero or to assign any positive probability to them. More troubling, a sphere can be decomposed into five pieces and rearranged into two spheres of the same size. While this absurdity cannot be realized with a real-world object, it does establish the behavior of mathematics, with Choice, to be counter-physical.

So, why have it? Well, there are mathematical objects that do not exist in a world without Choice. A vector space might not have a basis, in ZF + ~C. Mathematics isn’t supposed to be about what we can make in the real world, but what exists theoretically.

One source of trouble with Choice, I think, is that beginning or amateur mathematicians are tempted to think of an agent actively choosing. There’s a quip about the Axiom of Choice: you don’t need it to choose one shoe each from an infinite set of pairs of shoes, but you do need it to choose one sock each from an infinite set of pairs of socks. (The shoes are distinguishable; the socks are not. Analogously, Comprehension and Replacement derive from existing logical formulae; Choice is fully independent of how the choosing is done, and may be arbitrary.) The problem is that Choice is not about “you can choose…” because mathematics exists in an ethereal world where there is no “you” or “I”, nor an energetic process of “choosing”. The real debate is not about “you can…” or “you cannot…” but about what we choose to call a set. If we want to include among sets arbitrary selections, we need Choice.

Mathematical existence, itself, is a slippery and tricky notion. For example, we’ve covered many of the sets that exist and that we can build. Now, this is a set:

{n : 2^2^n + 1 is prime}

However, which set? Is it finite or infinite? We don’t know. Most likely, it’s the very simple set, {0, 1, 2, 3, 4}– the n for which the Fermat number 2^2^n + 1 is known to be prime. At present, though, we don’t know what it is.

Computational Analogue

To wrap this up, let’s think of Set as a type (or a class) in a programming language. We’ll pretend we have a computer with infinite space, so infinity is no issue. The axioms of set theory support the following interface:

empty :: Set
-- () A set with no elements.
elementOf :: Set -> Set -> Bool
-- (x, y) Returns True if x ∈ y, False if not.
equal :: Set -> Set -> Bool
-- (x, y) Two sets are equal if they contain exactly the same elements.
pair :: Set -> Set -> Set
-- (x, y) Returns {x, y}.
union :: Set -> Set
-- (x) Returns {a : a ∈ t ∈ x for some t}. Or: flattens a set of sets by one level.
filter :: Set -> WFP -> Set
-- Comprehension
-- (x, F) F is a well-formed predicate with one free variable; returns {a : a ∈ x where F[a]}.
map :: Set -> WFF -> Set
-- Replacement
-- (x, F) F is a well-formed binary predicate for which F(a, b) holds for exactly one b per a, corresponding to a function f; returns {f(a) : a ∈ x}.
powerSet :: Set -> Set
-- (x) Returns {y : y is a subset of x}.
infinity :: Set
-- () A set of infinite cardinality.
validate :: Set -> ()
-- Foundation
-- (x) Asserts that x is empty or has an element disjoint from x. If all sets created pass validation, then no infinite descending chains (or cycles) of ∈ exist; every set is well-founded.
choice :: X@Set -> (A@X -> Either () A)
-- (x) Returns a function on x that maps every non-empty a ∈ x to Right b for some b ∈ a, and the empty set to Left ().

Conclusion

Set theory has a reputation for being painfully abstract and opaque. Much work is required to get from the expression of, say, Foundation, to what it means and why we care about it.

We learn later that the axiom exists because set theorists did not want to contend with the notion of unwrapping sets ad infinitum; they did not want a theory that wasn’t well-founded, or whose subtle imprecisions could destroy all of mathematics (as Russell’s Paradox did naive set theory). What makes for a compact formalism is:

∀x ((∃y (y ∈ x)) → (∃z ((z ∈ x) ∧ (∀w ~((w ∈ z) ∧ (w ∈ x))))))

but without additional context, the formula is calamitously uninteresting. The human element (why we care) and the story behind axiomatic set theory are (as they should be, in formalism) absent.

I bring this up because, for me, Foundation was the axiom whose motivation was hardest to understand. (Replacement was difficult to motivate, until I realized it was just map or, in the Python world, a list comprehension.) Foundation’s often presented early in set theory, and not without good reason. It is the only axiom that mandates certain possibilities not be sets, ensuring we don’t have to worry about Russell’s Paradox.

An interesting consequence of this is that we can now discuss what the variables in ZFC, a first-order logic, range over. When we say ∀x something, in ZFC, what is the space we’re quantifying over? All sets. But there is no “set of all sets” in ZFC. We need a bigger logical system to think about what ZFC actually means. This does not invalidate mathematics; arguably, this sort of distinction is what saves it.

Set theory’s implications are complicated and abstruse, but most of the axioms on which it’s built have computational analogues that a programmer uses every day.  Comprehension is like filter, Replacement like map.

What about Choice? Well, Choice gives us that weird, hackish hand-written function that may be a giant switch/case block. On many infinite sets, we cannot hope to code it by hand or realize it in physical space. However, it exists in set theory, so long as we want set theory to contain everything that mathematics can imagine.

Set theory, in all its glory, isn’t the easiest subject to comprehend. Its simplicity, as a first-order logic with only one added symbol (∈), makes it difficult to reconcile with the complexities of full mathematics, much in the way that quantum physics is not hard to comprehend but difficult to apply to real systems. The more perspectives we can bring to this, the easier it might be to learn. I hope that by focusing on computation and construction, as imperfect as those lenses are, I’ve added one more perspective and that it helps.

Idle Rich Are the Best Rich. Here’s Why.

The college-admissions cheating scandal of 2019 has provided plenty of opportunities for schadenfreude at the expense of the lower-upper class: hangers-on and minor celebrities who needed a bit of lift to get their underachieving children into elite colleges. (The true upper class does not struggle with educational admissions; those are negotiated before birth, and often involve buildings.) The fallout has stoked discussion, internet-wide, about social class in the United States.

I’m 35 years old and don’t have any children, so college admissions are not (at least, not now) my fight. I care quite little about the topic itself because, to be honest, I find all this noise irrelevant. There’s the global climate crisis. There’s the imminent collapse of the wage market, due to automation. I have my own personal projects– I’m a year behind schedule on Farisa’s Crossing, my novel. (This is good lateness, insofar as I continue to discover ways to make the book better, but it is still lateness.) So, with all the things that actually matter, I don’t have a whole lot of cognitive bandwidth for the topic of college admissions. That issue’s mostly one of parental narcissism, and I’m not a parent.

Besides, just as the 1929–45 crisis (“Fourth Turning”, if you will) made irrelevant who went to Harvard versus Michigan in 1923, I believe the near future will find today’s obsessive attention given to minor differences in educational institutions to be absurd.

Still, not all of my readers are in the United States, and so many lack the privilege of knowing how bizarre and corrupt American college admissions are. It’s not that admissions officers intend it to be this way. They don’t. But there are a lot of absurd non-academic factors that go into college admissions, and the rich have far more time to assess and exploit the process than the poor.

Is this our society’s biggest issue? Hardly. Rich parents do all sorts of unethical things– often, completely legal ones– to give unfair advantages to their progeny. It has been going on for centuries, and it will likely continue for some time. College admissions fraud is a footnote in that narrative. I am glad to see the laws enforced here, but there are bigger issues in society than this.

Instead, I want to talk about the problem exposed by this scandal. See, it’s not enough for the American rich to have more money than we do, and all the material comforts that follow: bigger houses, speedier cars, golden toilets. They have to be smarter than us, too. But God did a funny thing: when She was handing out talents, she didn’t even look in the daddies’ bank accounts. So, here we are. We live in a world where people make six- and seven-figure incomes helping teenagers cheat on tests. This isn’t new, either.

As a society, we suffer for this. Having to pretend talentless people from wealthy backgrounds are much more capable than they are, as I’ll argue, has major social costs. It keeps people of genuine ability in obscurity, and it leads to bad decisions that have brought the economy to stagnation. It would be better if we were rid of such ceremony and obligation.

How I Stopped Worrying and Learned to Love the Idle Rich

I want to talk about “the idle rich”, the aristocratic goofballs who don’t amount to much. They get a bad rap. I don’t know why.

They’re my favorite rich people, to be honest about it.

I don’t much like the guys (and, yes, they’re pretty much all “guys”, because our upper class is not at all progressive) at Davos. They do significant damage to the world. We’d be better off without them. Their neoliberal nightmare, a 21st-century upgrade of colonialism, has produced unwinnable wars, a healthcare system that has exsanguinated the middle class, and an enfeebled, juvenile culture that has laid low what was once the most prosperous nation in human history.

If we as a society decide to do something about the corporate executives who’ve gutted our culture and downsized the middle class out of existence, I’ll volunteer to clean up the blood. These people have a lot to answer for, and the quicker we can stop further damage, the better.

Do I hate “rich people”, though? I’ve met quite a number of them. Billionaires, three-digit millionaires, self-made as well as generationally blue-blooded, I’ve met all kinds. The truth is: no, I don’t hate them. Not really. What is “rich people”? Someone with more money. But, to 99.9 percent of humans who have ever lived, I am (along with most of my readers) “rich people”, just because I was born in the developed world after antibiotics. People born in 2200 may never know scarcity, and live for thousands of years. Good for them. I won’t burden the reader with my own philosophy on life, death, the future, spirituality, or life’s meaning (or lack thereof); suffice it to say, I believe that in the grand picture, material inequalities are fairly minuscule. If I’m still flying “klonopin class” in ten years, I’ll deal. The health of society, on the other hand, is of major importance. We only have one planet and, today, we are one civilization. Getting the big things right matters.

I don’t especially want to “eat the rich”. I don’t even care that much about making them less rich (although the things I do want will make them less rich, at least in relative terms, by making others less poor). I want our society to have competent, decent, humane leadership. That’s what I care about. Eradicating poverty is what matters; small differences and social status issues, we can deal with that later.

American society seems to have a time-honored, historic hatred for the “idle rich”. Why? It does seem unfair that some people are exempt from the Curse of Adam, often solely because of who their parents were. It’s hard to accept that a few people don’t need to work while the rest are thrown into back-breaking toil. From a 17th-century perspective, which is when the so-called puritan work ethic formed, this attitude makes sense. It was better for morale for communities to see their richest doing something productive.

In the 21st century, though, do these attitudes toward work apply?

First, we can toss away the assumption that everyone ought to work.

We already afford a minimal basic income to people with disabilities, but most of these people aren’t incapable of work, and plenty of them even want to work. They’re incapable of getting hired. There’s a difference. Furthermore, as the labor market is especially inhospitable to the unskilled and young, it is socially acceptable for people of wealth to remove their progeny from the labor market, for a time, if they invest in education (real or perceived).

I support a universal basic income. On a related topic, there’s a problem with minimum wage laws, one that conservatives correctly point out: they argue that price floors on labor result in unemployment. That’s absolutely true. Some people simply do not have the ability to render work that our current labor market values at $15 per hour. A minimum wage is, in essence, a basic income (often, a shitty one) subsidized by low-end employers. They respond by cutting jobs. We can debate endlessly why some work is valued so low, but the truth of wages seems to be a trend toward zero.

As technology progresses, the fate of an everyone-must-work society is grim. The most important notion here is economic inelasticity. Desperation produces extreme nonlinearities in price levels. If the gasoline supply shrunk by 5 percent, we wouldn’t see prices go up by 5 percent; they’d likely quadruple, because people need to get to work. (This happened in the 1970s oil crisis.) We’re seeing it in the 2010s with medicines, where prices are often malignantly manipulated. It doesn’t take much effort to do that. Desperation drives extremity, and people are (in our society) desperate for jobs. Are we at any risk of all jobs being automated by 2030? No. But it takes far less than that to tank wages. No matter how much the technology improves, I guarantee that there will be trucking jobs in 2030. There might be fewer, though. Let’s say that 40 percent of the industry’s 8 million jobs disappear. That’s 3.2 million workers on the market. No matter how smart you think you are, some of those 3.2 million can do your job. And some will. As displaced workers leave one industry, they’ll tank wages where they land, causing a chain reaction of refugee crises. No job is safe. The jobs will exist, yes, but they’ll get worse.

In our current society, where everyone must work to live– or else– the market value of almost everything humans produce (at least, as subordinates) is doomed to decrease by about 5 percent per year. That’s not necessarily a bad thing; another way to look at it is that things are getting cheaper. It’s only bad in a world where work is a typical person’s main source of income.

Ill-managed prosperity was a major cause of the Great Depression. In the early 20th century, we figured out how to turn air into food. That advance, among others, led to us getting very good at producing agricultural commodities. Cheap, abundant food. As a result… people starved. Wait, what? Well, it happened like this: farmers couldn’t turn a profit at lower prices, leading to rural poverty by the early 1920s, causing a slowdown in heavy industry in the middle of that decade. It was finally called a “Depression” when this problem (along with “irresponsible” speculation that is downright prosaic compared to what occurs on derivatives exchanges today) hit the stock market and affected rich people in the cities.

Why did we let rural America fester in the 1920s? Because the so-called “puritan work ethic” had us believing poverty was a sort of bitter moral medicine that would drive people to better behavior. Wrong. Poverty is a cancer; it spreads.

Ill-managed prosperity hit us hard in the 1930s; it’s likely to do the same in the 2020s, if we’re not careful.

All else being equal, for a person to show up to work doesn’t make society better. What she does at work, that might. The showing-up part, though… well, that in fact depresses wages for everyone else. It would be better for workers if there were fewer of them. That there are so many workers willing to tolerate low wages and terrible conditions devalues them.

For many workers out there, their bulk contribution to society is… to make everything worse for other workers. It is not their fault, it is not a commentary on them. It is just a fact: by being there, they cause an infinitesimal decrease in wages. And, today, we have this mean-spirited and anachronistic social model that has turned automation’s abundance into a possible time bomb.

Really, though, do we need so many TPS reports?

Obviously, if no one worked, that would be catastrophic. We don’t really need to force everyone to work, though. People work for reasons other than need: to have extra spending power, to gain esteem in the community, or because it gives their lives a sense of meaning. Fear of homelessness, though, doesn’t make anyone’s work better. It always makes things worse.

We could get at least 90 percent of society’s current revenues without forcing people to work. There’s no reason we couldn’t have a generous basic income and a free-market economy. That’s quite possibly the best solution. And, while I say “at least 90”, I really mean “more likely than not, more than 100”. That is, I think we’d probably have a more productive economy if people were free to allocate their time and talents to efforts they care about, rather than the makework they have to do, to stay in the good graces of the techno-feudal lords called “employers”.

It is not such a travesty for a person to remove himself from the labor market; as for the rich, who already can, I see no reason to shame them for doing so.

The second problem with the everyone-even-rich-people-must-work model is that it fails to create any real equality. Let’s be honest about it. “Going to work” is not the same for all social classes. Working-class workers are treated like the machines that will eventually replace them. Middle-class workers have minuscule autonomy but are arguably worse off, since it is the mind that is put into subordination rather than the body. For the rich, though, work is a playground, a wondrous place where they can ask strangers to do things, and those things (no matter how trivial or humiliating) will be done, without complaint. The wizards of medieval myth did not have this much power.

In other words, the idea that we are equalizing society by forcing the offspring of the rich to fly around in corporate jets and give PowerPoint presentations (which their underlings put together) is absurd. It would be better to let them live in luxury while slowly declining into irrelevance. When rich kids work difficult jobs, it’s toward one end: getting even richer, which makes our inequality problem worse.

Third, when we force rich kids to work, they take up most of the good jobs. There are about 225 million working-age adults. Whatever one may think of his own personal brilliance, the truth is that the corporate world has virtually no need for intelligence over IQ 130 (top 2.2%). We could debate, some other time, the differences between 130, 150, 170 and up– whether those distinctions are meaningful, whether they can be measured reliably, and the circumstances (of which there are not many) where truly high intelligence is necessary– but, for corporate work, 130 is more than enough to do any job. I don’t intend to say that no corporate task ever existed that required higher intelligence; it is, however, possible to ascend even the more technical corporate hierarchies with that much or less. So, using our somewhat arbitrary (and probably too high) cutoff of 130, there are still 5 million people who are smart enough to complete any corporate job. For the record, this is not an indictment of corporate management’s capability. A manager’s job is to reduce operational risk and uncertainty, and dependence on rare levels of talent is a source of risk.
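(Back-of-envelope, for the skeptical: a quick sketch of that arithmetic, assuming only the standard convention that IQ is normally distributed with mean 100 and standard deviation 15.)

    # Rough check of the "5 million" figure, assuming the standard
    # convention: IQ ~ Normal(mean=100, sd=15). Numbers are rounded.
    from statistics import NormalDist

    working_age_adults = 225_000_000
    share_above_130 = 1 - NormalDist(mu=100, sigma=15).cdf(130)
    print(f"{share_above_130:.2%}")                     # ~2.28%
    print(round(working_age_adults * share_above_130))  # ~5.1 million

About 2.3 percent of 225 million is a hair over 5 million– that’s where the number comes from.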

There are 5 million people smart enough to competently fill every role, in any corporation in America, on the path from entry level to CEO. 5 million. By contrast, there aren’t 5 million people in the U.S. with meaningful connections. I doubt there are even 500,000. Talent is fairly common. Connections are rare, and therefore more valued. The rich and the well-connected get first dibs on jobs. The rest can’t possibly compete, no matter how smart they are.

Frankly, it’s of infinitesimal importance that Jared Kushner slithered into a spot at Harvard, causing a more-deserving late-1990s 17-year-old to have to settle for Columbia or Chicago. Whatever 17-year-old got bumped, I doubt she cares all that much. We all know that college admissions are more about the parents, anyway. No one intelligent really believes that American college admissions are all that meritocratic in the first place.

On the other hand, the U.S. corporate world is a self-asserted “meritocracy” (and, should you point out its non-meritocracy, you will soon be without income) despite being thicker with nepotism and corruption than third-world governments. That, I care about. Admissions corruption might lead a talented student to have to take her roughly-identical undergraduate classes from slightly less prestigious professors. In the work world, though, personal health, finances, and reputation are on the line. The false meritocracy of college admissions is a joke; the false meritocracy of the corporate world kills people.

Fourth and finally, when rich kids go to work, what do they do? Damage the world, mostly. A large number of them are stupid and incompetent; the result is bad decisions that cause corporate scandals and failures that vaporize jobs. Worse yet, most of the smart ones are evil. See: the Koch Brothers, Roger Ailes, and Erik Prince.

We would have a much better world if we convinced these guys it was OK to goof off.

The European aristocrats, to their credit, were content to be rich. Our ruling class has to be seen as smarter than us, too.

I don’t mind that the corporate executives fly business class and I don’t. I do mind being forced to indulge their belief that their more fortunate social placement is a result of their being, intellectually speaking, what they think they are but are not– and what I actually am. That galls me. If these people could admit to their mediocrity and step aside, it’d be better for all of us. The adults could get to work; everyone would win.

I don’t hate “the rich”. In fact, I wish everyone were rich. There may be a time in our future when that is, in effect, the reality. I hope so, because we seem thus far to be an “up or out” creature and, in 2019, we are effectively one civilization. Our current state is unsustainable. In the next two hundred years, we either get rich or we die (at least as a culture, though we may survive in the biological sense). In the former case, I do not forecast utopia. There will always be disparities of wealth and social position. More likely than not, those advantages will be uncorrelated with personal merit– this is as true of today’s upper-class “meritocrats” as it was of medieval lords. On its own, that’s benign. So, to hate “rich people” is like hating a tornado.

Still, though I do not hate the rich, I hate their effects. I hate living in a society run by morons and criminals– one where housing in major cities is unaffordable for almost everyone; one where people have to buy insurance plans on their own bodies that cost $1,000 per month and provide half-assed coverage; one where bullshit jobs and managerial feudalism are the norm.

Furthermore, I do not think it makes the rich happy that we force them to work. It certainly does not make us happy to be shoved out of important decision-making roles because the well-connected incompetents (or, far worse, the self-serving and evil) need those scarce jobs.

We, as a society, have reached a point where idleness is the most harmless of vices. We do not need more people hunting on the Serengeti; we do not need more internal combustion engines hauling people around to say “Yes, sir”.

Most so-called “work” has trivial, nonexistent, or even negative social value. The vast majority of corporate jobs exist to perpetuate or enhance private socioeconomic inequalities, rather than to better society. The so-called “protestant work ethic” would have us predict that price signals (salaries) correlate with the moral value of work. They don’t. Anyone who thinks they might needs to leave the 1970s, because Studio 54 closed a long time ago.

If rich people stop working, they stop hogging the good jobs. They stop hogging the investment capital and wasting it on artisanal dog food delivery companies. Since they’re enjoying life more, they will feel less desire to exact revenge on society for forcing them to make four PowerPoint presentations per year, which will make them less aggressive in squeezing employees. So they’ll hog less of the damn money, too. People will start leaving their office jobs while the sun is shining, writing their own performance reviews because the bosses are skiing all month, and everyone will be better off for it.

Let’s not eat the rich. Instead, let’s get them fat, and roll them out of the way, so competent adults can take the reins of this society, before it’s too late.

American Fascism 2– Is the United States Fascist?


In Part 1, I discussed the four political impulses– communalism, libertarianism, republican democracy, and fascism– that seem to be the base elements of which more complex ideologies are made. Of course, an entire society can be communalist in some ways, but libertarian in others. To ask whether the United States “is fascist” may seem simplistic. The question might be phrased better as, “How established is fascism?”

Upsettingly, fascism is the most limber of the four in its self-presentation. Fascists lie. They will, if it is convenient, use ideas from other ideologies to push their agendas. We’ve seen fascists in leftist, rightist, religious, and anti-religious costumes before. Corporate fascism asserts “meritocracy”. Donald Trump managed to step over his personal elitism to run as a populist. Rarely does one spot a fascist by his stated ideology; we have to observe what he does.

We are not at the point yet where the United States has been afflicted by state-level fascism. One hopes that it never will be. Are we under threat? Yes, and to understand the problem, we’ll have to know why fascism has emerged.

Is Donald Trump a fascist threat?

Donald Trump’s victory was the culmination of a bizarre irony: a man running against forty years of economic damage wrought by Boomers, bullies, and billionaires… despite being all three.

Establishment politicians represent, in today’s dysfunctional political environment, the disingenuous, effete, and hypocritical superego of the corporate system. In 2016, people decided to try out that system’s id.

How did this all happen? The mechanics of it deserve another essay, probably not in this series, but the short version is that Trump managed to unify, for a time, the otherwise disparate in-authority and out-of-authority fascisms. Corporate executives and race-war preppers do not go to the same parties, and they express their thuggish inclinations in different ways, but Trump managed to draw support from both crowds.

All of this being said, I don’t think of Donald Trump as a high-magnitude fascist threat to this country. I never supported him, did not vote for him, and was displeased (to put it mildly) when he won the election (which surprised me). He has done a lot of damage, especially on the environmental front. He has embarrassed us in front of the entire world. Still, he lacks the image necessary to pull off sustained, effortless authoritarianism.

Donald Trump puts explicitly what is subtle in corporate fascism. He doesn’t think differently from those people; he just can’t filter himself. In general, corporate fascism is effective because of its bloodlessness. Few people notice that it’s there unless they think deeply about it; corporate fascism presents itself as “not political”. (The corporate fascist’s enemies are the ones “being political.” That’s why they were fired.) Trump’s authoritarianism, belched out 280 characters at a time, is too flagrant and plain-spoken for either the emasculated robot fascism of the corporate world or the lawfully-masculine (in presentation) inevitability of the brutal dictator.

Donald Trump, though, has an even bigger flaw as a would-be fascist: his lifestyle. He’s been a self-indulgent man-child for his entire life. On-camera fuckery built “the Trump brand”, which he’s cited as his most valuable asset. This was great for him when he embodied the zeitgeist of unapologetic, gangster capitalism. That brand is repugnant, and so is fascism, but the two brands of malignancy could not be more different.

By contrast, the proper fascist dictator appears austere. He cannot be self-indulgent in public. If he enjoys his power and wealth in front of people, he’ll be seen to have an appetite for comfort, which kills the aura of masculine inevitability that a fascist leader requires. Adolf Hitler was, in fact, a rich man late in life– Mein Kampf was a bestseller– and he likely had several mistresses. To the public, however, he presented himself as a simple-living, celibate man. He was married, he said, to the German people. The fascist’s sacrificial austerity gives credence to the perceived inevitability of his reign.

Donald Trump could not pull that off. He has been a volatile, self-absorbed clown in public for longer than many of us have been alive. His own history destroys him. Trump is the sort who thrives in disorder and damage, but sustained fascism requires a damaging order– and that’s quite different.

If fascism comes to the United States, it won’t come via the self-indulgent, emotionally incontinent septuagenarian in the White House. Instead, it’ll come under the aegis of a 39-year-old Silicon Valley tech founder whom few of us have heard of.

He’ll arrive with a pristine reputation, because (like anyone who succeeds in Silicon Valley) he will have preserved his image at any cost, destroying the careers of those who opposed him. The same sleazy tactics that founders, executives, and venture capitalists use to protect and expand their reputations, he’ll have mastered before he even considered going into politics.

He’ll use his dirty corporate tricks, more subtly than Trump, as well as the resources within his companies to build up an image of centrist, pragmatic, and professional competence. He’ll likely present himself as a bipartisan figure– a unifier “in these divided times”, a centrist capitalist who can also “speak nerd”. He may or may not hold racist views– he’s probably too smart to believe that shit– but when it suits him, he’ll use any racial tension he can to divide people, just as he used factional tensions within companies to build his corporate career.

State of the States

We can assess our current fascist risk by asking: what keeps fascism at bay? We have a constitutional government. That’s good, but what that means inevitably comes down to us. Societies can be assessed on several planes: the cultural, the political, the economic, and the social. I’ll cover each of them; doing so gives us a clear sense of how much danger exists, and whether it’s getting worse.

Center-leftists have underestimated the corporate and fascist threats over the past ten years, because they believe that we are winning the culture wars. That’s true enough right now. The religious right is dying out. Marijuana legalization once seemed impossibly radical; now it’s mainstream. Same-sex marriage support is strong among the young. These are all very good signs. So, can’t we let time do its thing, given our cultural tailwinds?

No, we can’t. The cultural is driven, over time, by the economic. The economic and political drive each other; that arrow goes in both directions and sometimes it is hard to tell the planes apart. In turn, the economic and political are driven by the social: who knows whom and in what context, which groups are favored for various opportunities, et cetera. It suits us best to analyze the cultural, social, political, and economic planes separately and, in each, ask, in terms of the four elemental political impulses– communalism, libertarianism, republican democracy, and fascism– “Are we fascist?”

Culturally, we are mostly communalistic. Division and exclusion are frowned upon. A center-left coalition won the culture wars of the late 20th century. Two-thirds of Americans support gay marriage, and there’s no strong desire to prosecute harmless pot smokers. Racism still exists, but it’s largely detested. It’s more acceptable, by far, to err on the side of inclusivity than otherwise.

Sometimes, the right refers to our culture as being “politically correct”. Our popular culture is, for good and bad, deliberately inoffensive. This is likely tied to the importance of our popular culture to our self-definition and economic standing; it is the most effective export we’ve ever had. To start, we would be an irrelevant European knock-off without the cultural influences of once-disparaged minorities. More importantly, if our popular culture were racist, misogynistic, or belligerently nationalistic, the rest of the world would be unlikely to buy it.

Culture, however, changes quickly; it did in the German 1930s, when Weimar liberalism fell, like so much else, to the Nazis. Environmental, political, economic, and social forces can crush cultural defenses. That happens all the time.

Politically, we remain a democratic republic. Our elections work. They do so imperfectly, but they work well enough that, when plutocrats cheat, they still bother to hide it. Voters have the power to fire representatives who become unaccountable to their constituents, and although it’s not used often enough, it is used. Though there are issues with our electoral system on account of its age, they’re not so severe that one would call us, at this point, a non-democracy.

For now, we’re on the better side of this one.

Economically, we are a market-driven libertarian society. That is not all bad. Many have argued that this is as it should be. Do we need public control of the economy? To some degree, yes; total control is undesirable. Government should prevent poverty, but we can trust markets to, say, decide the price of toothpaste. Command economies are not innovative, they don’t work well at scale, and they’re too easily corruptible. When well-structured, and used in a society that takes care of the big-picture issues (e.g., basic income, job guarantees) so everyone has a vote, markets work.

It is not evil that our economy uses libertarian, market dynamics. It probably should. The evil is the totalitarian influence that economic life (not to mention artificial scarcity) has over everything else. Where people live, how they structure their time, and what careers are available to them are all dictated by a closed social elite of unaccountable, often-malignant bureaucrats called “executives”.

When an economy functions well, it recedes. Economic life becomes less a part of daily existence as people become richer, freer, and more productive in their (fewer, usually) working hours. We’ve seen the opposite. We’ve seen dysfunction spreading. We’ve seen people sacrificing more of their life on the altar of the economic, without much progress.

It has been said to the young, “You don’t hate Mondays; you hate capitalism”. That’s not quite right. Working Americans aren’t miserable at their jobs because, say, oil prices are set by free markets. They’re miserable because of corruption. They’re miserable because they are forced by circumstance to work for a malignant elite– a predominantly social rather than economic one– that despises them.

We’ve covered the good news: we are culturally communalistic. We are politically republican. We are economically libertarian. Generally, this is how things should be. So what’s wrong?

Socially, we are fascist. On the social plane, we are not “becoming fascist”. We are not “at risk of fascism”. We are there. A malignant upper class has won.

As discussed, the social drives the political; the political often drives the economic; economic forces drive culture far more than the other way around. Since we are thoroughly corrupt in the social plane, we should understand that we are not out of danger until there is a radical overhaul of our current upper class. State-level fascism isn’t here yet, but we’re governed by an elite (“the 0.1 percent”) that would make it so, if it were in their personal interests. Everything could fall, and it wouldn’t take long.

For example, we’ve already lost freedom of speech. The federal government cannot bar political disagreement or peaceful opposition. But employers can– and do. Job opportunities are stolen from people over social media posts; at the same time, job opportunities can be denied to people because they don’t use social media.

One of the key revelations of the 2010s is that only one social class distinction matters in the United States: between those with generational wealth and social connections (“the 0.1 percent”) and those without. The higher-income supposed upper-middle and middle classes will be just as screwed, if a significant percentage of jobs are automated out of existence, as the poor. In any case, the upper class has all the important land and runs all the important institutions. It decides, monopolistically, what jobs people get: who works on what, when, and where. Some people get to be VPs of Marketing and university presidents who earn $1 million per year for three hours per week of work; others get blacklisted and become unemployable. There are people who make those decisions; most of us are not among them.

Under fascism, the governed compete while power unifies. That’s what we’re observing in the corporate world right now. “Performance” is a myth. “Meritocracy” is a malevolent joke on the middle class (and “middle class” is itself, under our fascist society, a distinction invented to make us upper-proles feel better about ourselves, and to divide us against lower-proles). What actually matters, in corporate jobs? Not performance. Not even profits. (I’ll come back to that.) Loyalty to the existing upper class. Corporate workers do not work for shareholders; in practice, they work for their management.

Corporate executives, in truth, have insulated themselves from meaningful competition. On occasion, one must be replaced. When this happens, they arrange a soft landing for the outgoing executive, while ensuring another member of their class steps in. Positions are shuffled around, but the overpaid positions stay confined to a small elite. None of us really have a chance at those jobs; the idea that anyone can make it is just a cruel joke they play.

These people set each other’s pay. They use clever systems to hide the class’s rapacious self-dealing. For example, venture capital allows a rich man’s son to manufacture the appearance of success in a competitive market– he’s an entrepreneur, he says– when, in truth, the clients and resources are furtively delivered by his backers. This ruse and many others make it appear merit-based when their children succeed, at the expense of ours.

There is some competition allowed within the upper class, but there are rules to it. No one can damage the image or fortune of the class. Corporate executives are far more vicious in their competition against their workers than against nominally antagonistic firms: competitors in the classical sense.

Executives self-deal and get away with it, because their bosses are other executives, who are doing the same. Is all this self-dealing good for corporate profits? It’s hard to say. Executive-level fascism reduces performance, but it also seems to reduce variance. The left is often too quick to assert that social evils derive from the “profit motive” when it is, in fact, executive self-dealing that is the essence of the corporate problem. Profit maximizing has its own moral issues, but they’re not the most relevant ones.

Do executives care about profit? They want to make enough profit to appease shareholders, and not a dollar more. If they’re making outsized profits, it means they could have paid themselves more. They could have hidden money in the company, to be drawn out in bad times. They could have used those profits to push efforts that would improve and expand their personal reputations. To an executive, a dollar of profit is waste, because he wasn’t able to find a way to take it for himself. In Corporate America, no executive works for a company. Companies (and their workers) work for executives.

What about shareholders? Why don’t they step in and drop a pipe on these self-dealing, comfort-addicted executives? The answer is that the shareholders who matter are… wait for it… rich people. How did they get rich? By sitting in overpaid executive positions, peddling connections, and ingratiating themselves to the upper class. They will never quash the executive swindle. That game keeps them rich, and ensures that their children are even richer. Perhaps it would do good for “companies” in the abstract if someone stepped in on executive excess, but it would be so bad for the upper class that it will never happen.

Of course, if returns to shareholders are abysmal– enough for the press and public to take notice– there will be executive shuffling, but it’s engineered so that no one really gets hurt. A CEO can be fired, yes, but with generous severance, and his career will be handed back to him (plus interest) within a year or two. The only thing that would put an executive on the outs would be disloyalty to the upper class itself. That, they would never forgive; he would likely be suicided.

What about when firms compete, as they’re nominally supposed to? Firms will compete for customers; that is true. Sometimes, they do so ruthlessly. It is not bad, from the consumer’s end, to live under capitalism. What firms cannot stand is having to compete for workers or their loyalty. They will ruin the careers of people who try to make them do that. Sure, they whine from time to time about a tight labor market and a lack of domestic talent, usually in order to scam the government into allowing them to hire more indentured servants from abroad, but their incessant whining about competition is a part of their strategy to ensure they never face it. They consider “job hopping” a sin, because they can’t tolerate the idea of having to compete for a subject’s loyalty. They share data on personnel and compensation, often in violation of the law (which they do not care about, since they own the most expensive attorneys). Most companies, before finalizing a job offer, call references: other managers at nominally competing firms. This would make no sense if there were real competition between companies. It makes complete sense if there is not.

Executives are not rewarded or punished based on their loyalty to shareholders, but rather to the upper class. Middle managers (who are not part of the upper class, and have no reason to care about it) are, in turn, rewarded or punished based on loyalty to their superiors’ careers. Workers, by and large, know that in today’s one-chance, fast-firing corporate culture, they don’t work for “companies” at all; they work for managers. The explicit theme of class domination is obscured to some degree, leaving workers unsure whether their failure to advance is a personal failing, and therefore reluctant to admit in public the otherwise prosaic fact: the game has been rigged against them. Only one in a thousand who tries for corporate entry into the upper class will be accepted, and this will require total moral self-deletion.

I’ve mentioned the loss of one’s freedom of speech under corporate rule and that, at the same time, many people must nonetheless have social media profiles to have a career. It “looks weird” to people in HR not to have “a LinkedIn” or “a Twitter”. Opting out of technological surveillance is not an option for many people. They’ve been tricked and extorted into rendering unto current and future employers– corporate capitalism, that is– information that will only be used against them.

Mainstream corporate employers are not especially tolerant. It is bad to be the office liberal, the office conservative, the office Christian, the office atheist, or the office Jew. To win at corporate self-presentation, one must be prolifically bland. In profanity, one should avoid both excess and abstinence. One should avoid the topic of labor rights at all costs. What about our other cultural institutions, though? What about our press, our universities, and our sundry nonprofit organizations? Yes, mainstream magazines will publish center-left views. Universities in particular house more leftist than conservative voices. How much will this protect us? Not that much, I’m afraid. Most people will not be part of those institutions for life, and therefore still rely on the Adversary for their careers. Even outside of the for-profit world, many are trained to turn on those who threaten the hegemony of the generationally well-connected. This is a shame, because that hegemony is our society’s number-one problem right now.

State-level fascism hasn’t arrived yet, but our social elite has been preparing for it for decades. They are in no hurry to make it happen, but they will if they judge it to favor their interests. Why have they been fomenting right-wing populism– using racial resentments, religious bigotry, and the frank irrationality that emerges from stunted masculinity and (economically enforced) permanent adolescence? To ensure that, no matter what else happens during a populist uprising, they’ll have an easy time getting their money out of it. The upper class has convinced the rabble that generational wealth and connections– neither of which the rabble themselves have– are a right, and that leftists and racial minorities, meanwhile, are the source of their misery.

This society is set up so that, if such events come to pass, the most armed and ready militants will be on the right wing. Not only will this support the elite’s economic goals and keep the proletariat divided against itself, but it will also mean that any revolutionary effort is likely to be overcome by people with such repugnant ideological and cultural aims that they will never gain global sympathy. The upper class would rather have a 95 percent chance of a rightist-racist revolt that no one (present company included) would support than a 25 percent chance of a leftist revolt that would quickly gain global sympathy.

Do today’s generationally well-connected want to live under state-level fascism? They don’t care. They wouldn’t be living under it; they’d be running it. I do not think they are, down to a man, ardent fascists. I imagine that the vast majority are individually apathetic on the matter. So long as they live in a world where they don’t have to compete for what they have, they remain uninterested in ideology. If fascism rises, they will quickly support it, not because of prior ideological commitment, but because it is practically designed for them; though fascism presents itself as popular indignation, it is deliberately constructed to keep the powerful (except for a few, who may be scapegoated) out of harm’s way.

Socially, we already have fascism. The generationally well-connected live with impunity. They do not tolerate division within their ranks, and do whatever they can to divide us against each other. This includes the division between so-called “red” and “blue” America, which are allegiances to manufactured brands– bloodless center-leftism and right-wing indignation, both of which are harmless to the entrenched upper class– more than coherent ideologies. Meanwhile, our society is almost entirely constructed so that no one can pose a significant threat to upper-class interests and keep his career, reputation, and life intact.

In the next installment, we’ll discuss how we got here. Our turn toward fascism in the social sphere occurred around 1975; it is often blamed (hyperbolically, oversimplistically) on the Baby Boom generation. In truth, the sequence of events that led us there was, if not inevitable, at least predictable, and cannot be blamed on a specific generation. So in Part 3, we’ll get a handle on how our current fascist mess was made– and how it might be unmade.