Techxit (Part 1 of 2)

(For Part 2, go here.)

Nazis are bad. This is going to be a plot point, much later in this essay, so if you weren’t aware of the fact, write it down: Nazis are bad.

Chapter 1: A Kind of Reckoning

Some stories start with mistakes. This one does. In the summer of 2008, I left a lucrative career in finance to join a technology startup.

At the time I did this, I believed strongly in technological capitalism. I figured we were 20–40 years away from a post-scarcity society in which to be “poor” meant sitting on a two-week waiting list to go to the Moon. We, the programmers who implement human progress, were the good guys.

Our record shows that we’re not. We created fake news. The companies we create–– and, because our purpose is to unemploy people, those are often profitable and draw attention–– have juvenile, toxic cultures. We’ve normalized witch hunts over trivialities, and people lose jobs over jokes about devices called dongles. We’ve built this so-called “new economy” in which recessions destroy workers’ finances and careers, but recoveries are jobless.

Our major contribution to the world, as private-sector programmers, is to push the balance of power between takers (capital) and makers (labor) in the wrong direction.

We have built an empire of garbage. It has not been pleasant for me, in my 30s, to come to the realization that I have unwittingly chosen a career path in opposition to the welfare of society.

What I plan to do with my life, that’s for another day. I’d like to have Farisa’s Crossing ready for publication in early 2021. The project’s been a lot of fun, a lot of work, and I can’t wait to have a finished product. I should be honest about its prospects, though. It’s a very high-potential book, but some of the best writers I know (people who will be remembered, I am sure of it, in 100 years) are still unable to subsist on book sales. So, I have kept my mathematical and computational skills sharp. I have no intention to abandon those. I enjoy programming quite a lot, and I’m still good at it, so long as I’m working on a real project rather than Jira tickets.

The software industry itself? I’ll be honest. I’d rather starve than work in another company where “Agile” is taken seriously. It’s not that I imagine COVID-19 to be a lot of fun; but at least I’d only have to do it once, not every two weeks.

I have written about 250,000 lines of code in my career in at least 20 different programming languages, and in spite of all this, the sum contribution of my work to society comes out in the red. It doesn’t matter what technology can do. It matters what it does. We need to stop fantasizing about our 200-line open-source monad tutorials somehow advancing the state of human knowledge enough to cancel out the harm done by the WMUs (weapons of mass unemployment) we build at our day jobs.

Over the past 30 years, the balance of power in our society has shifted toward capital and away from labor, toward employers and away from workers. We can’t blame all of this on politics; someone taught the machines how to run the dystopia. This means: we’re the bad guys.

Chapter 2: Understanding Automation

We have been here before. Ill-managed prosperity caused the Great Depression, and it caused the rise of fascism.

In the first two decades of the 20th century, we became far better than we’d ever been at making food (nitrogen fixation). A boon, right? What could go wrong? Capitalism does not handle boons well. In 1920s North America, the pattern unfolded like this: prices for agricultural commodities declined, bankrupting farmers and communities that served them, leading to cascading rural poverty, which eventually reduced demand for the products of industry, and finally this became known as “the Great Depression” when it tanked the stock market (October 1929) and thereby hit rich people in the cities.

What happened to farming, to agricultural goods, in the 1920s… is happening again to all human labor.

The global rich prosper. Everyone else suffers under malevolent mismanagement and a concentration of power that would not be possible without the tools that we, as programmers, have built.

Not all markets have legible, objective moral states. I do not think it is of great ethical importance whether a tube of toothpaste costs $3.49 or $3.59–– it seems that supply and demand can be trusted to figure that out. If God exists, she likely has no opinion on what should be the price of palladium or platinum. We are not entitled by divine fiat to a $47 price on a barrel of combustible hydrocarbons that, in any case, we ought to be using less of. Markets determine exchange rates of various assets–– how much of one thing is worth how much of another–– and most of these exchange rates do not carry primary moral weight. But one does.

The exchange rate between capital and labor–– between property and talent, between past and future, between the whims of the dead and the needs of the living–– has a clear, objective, morally favored direction. That’s the one, if God exists and has a will aligned with the health of human culture, that matters.

As I’ve said, quite a number of new technologies push the balance of power to favor employers, not workers. This is objectively evil work.

When I look at how life has changed in the past 20 years, I don’t think the smartphone is more than an incremental improvement, and I’m not impressed by the eggplant emoji or the $1,500 embarrassment that was Google Glass. What most people have experienced is a growing sense of being squeezed, and it’s not just a feeling. The major contribution of private-sector technology to daily life has been a slew of surveillance tools sold to health insurers, authoritarian governments, and employers.

There is an open hypocrisy at play in the workplace. A worker in constant search for better options will be disliked–– he’s “not a team player”. That seems fair. No one likes someone who’s only out for himself. Yet, companies expend a considerable share of resources to figure out which workers can be replaced and how quickly. There are people in our society who collect a salary by finding ways to take salaries from others in the companies where they work. Doubly weird is the expectation, within the so-called corporate “family”, to treat these people as teammates rather than adversaries.

A worker who changes jobs as soon as a better offer comes along is a “job hopper”. He’ll get bad references and rumors will spread that he failed up or was fired. Yet, our employers spend a significant fraction of their funds (wealth we generated) looking at us from every angle to see whether we can be replaced.

Social media has played a central role in this dystopia. We now live in a world where one needs a public reputation–– an asset that 99.9 percent of people should not want, because reputation is an asset easily destroyed by some of the world’s worst people–– to get a job. Gone are the days when anyone able to speak in complete sentences could call up a CEO and talk his way into a high-paying position. In today’s world, it’s impossible for workers to reinvent themselves–– every detail can be checked, and people who opt out (who don’t have “a Facebook” or “a LinkedIn”) are assumed to have something to hide.

Social media promises a path to influence, but for employers its purpose is to ratify the lack of influence that most people have. In the old world, a terminated employee got three months of severance and glowing references, because a boss never knew if he was letting go someone who had powerful friends and could bring the pain back. In the new world, an employer can look up a target’s Twitter feed, see a lack of blue-check followers, and confidently presume that person to be in the 99 percent of people who can safely be treated like garbage.

Chapter 3: Tech–– Not Even Once

I mentioned before that I left a lucrative career in finance, in 2008, to join “the tech industry”. This was, financially, a seven-figure mistake. Possibly eight. It was the stupidest decision I ever made, and I assure you there’s a lot of competition for that distinction.

Private sector technology (“tech”) is not a career. There is no stability in it. You are only as good as your last job; your job is only as good as your last “sprint”. Unless you become a successful founder, you will not be respected. You’re a thousand times more likely to end up like me–– 36 years old with no clear path to where I want to be–– than even to become a modest millionaire.

You might think, like I did, that you’re going to beat the odds because you’re smarter than the average hoser. Not so. Compared to the people in charge of this industry, I’m a black swan seven-wingèd eidolon of merit. It does not fucking matter how smart you are.

Your IQ doesn’t matter because you’re not going to be using machine learning to cure cancer. You’re going to be working on Jira tickets to build a product that corporate executives will use to unemploy fellow proletarians. Any idiot can do that kind of work. Furthermore, at a salary higher than idiots can get elsewhere, many idiots will try. Unless you are 21 and have no obligations, quite a few of those idiots will be able to work longer hours than you.

Private-sector technology is not “meritocracy”. It’s a fart in a cave that has not ceased to echo.

I’ve had the whole spectrum of tech-industry experiences. I’ve worked at companies that have failed. I’ve also worked at companies that succeeded, whose founders went on to fail those who got ’em there. At a “Big 4”, I worked for a manager with an 8-year track record of using phony performance issues to tease out people’s personal health issues, which he would blab about to colleagues. (I was told that he was fired for this, but after a five-year absence, he returned to that company.) As a middle manager, I sat and listened as two executives threatened physical violence on someone who reported to me (someone who was, in truth, quite good at his job) because of unavoidable delays on a project. One of my favorite people (of note, a black female) was harassed out of a company–– her manager, a personal friend of the CEO, was not fired, and went on to be a VP in his next job. I’ve seen tech companies offer the same leadership role–– title and responsibilities the same–– to multiple new hires with the intention of their fighting for the job they were promised. In March 2012, I was fired for refusing to commit a felony that would have cost its victims hundreds of thousands of dollars. In the mid- and late 2010s, I got death threats related to this blog–– and (as a public leftist) my name became known to some scary far-right fuckers–– a topic I’ll cover at length in Part 2.

All of this, and for what? Nothing.

Yes, I know how to program. I have taste and I have the (rare, apparently) skill of knowing how to do it right. I can talk a great game about functional programming, artificial intelligence, and programming language design. I have a solid understanding of what the various abstraction layers (e.g., operating system) are doing. Here’s the problem. My peers, in the middling years of legitimate careers, are able to buy houses and start families. They’re in a position to move into at least the upper-middle of the economy. Me? I’m stuck in a trade where even people with “senior” and “principal” in their title have to interview daily for their own jobs. What a fucking joke.

I left Wall Street, and joined this career, because I bought into the Paul Graham Lie: that if you join a startup and it fails, it won’t hurt you, because you’ll be respected for being “entrepreneurial”. You won’t get your IPO today, and you’ll “have to settle” for a $500,000-per-year VP position at a FaceGoog, or an EIR role at a venture fund, but you can use your time out to recover your finances and energies until you’re ready to play again.

There is no truth in the Paul Graham Lie. Too many startups fail, and most of the people who come out of them do not become VPs at FaceGoogs. They get regular crappy jobs.

I found no meritocracy in the technology industry. I had a slew of intensely negative experiences. I must be honest on this, though: I got exactly what I deserved.

Whether I’m a good man, that’s not for me to say; it is true (and perhaps a weakness) that I lack the stomach for evil. Yes, I am a person of merit. Compared to the people running the tech industry, I am seven-S-Tier merit. However, I entered a line of work that, in the final analysis, has dedicated itself to the advancement of the power held by employers over my fellow human beings. Failure is what I deserved. Misery is what I deserved.

My youthful self-deception about the true nature of corporate capitalism is no excuse. When one who desires to be a good man, nonetheless, works for the baddies… what else can be expected?

Chapter 4: Artificial Stupidity

The last thing I intend here is to tell a pity-me story. Until 2018, none of my experiences with injustice stepped outside the range that is typical. I’ve seen people smarter and better than I am get screwed far worse than I ever have.

Do not pity me, because I don’t pity myself. Learn from my experiences and make better choices than I did. The takeaway from all this should be that, if a person of eminent merit can have a terrible time in the tech industry, it can happen to anyone. Most people get screwed; few have the private privileges I have that enable them to talk about it.

There cannot be “meritocracy” in private-sector technology, because we serve a purpose without merit. We can opt for self-deception and tell ourselves that our work is advancing the state of knowledge about database indexing, but if our work’s real purpose is to allow the rich to “disrupt” the poor out of their incomes, then a negative multiplier applies to our efforts, and diligence only means we drive fast in the wrong direction.

John Steinbeck made a brilliant comment on American false consciousness–– that socialism never took hold here because of self-deceptive workers who see themselves not as an exploited proletariat, but as temporarily embarrassed millionaires. Having worked in technology, I understand the private-sector software programmer’s mind pretty well. We see ourselves as temporarily embarrassed research scientists, philanthropists, public intellectuals, and scholars. We assume there is an exponential growth curve to our production and therefore it is immaterial what we’re doing now, because in 20 years, when we’re calling the shots, we’ll make moral choices.

Employers indulge our wounded egos with the promise that, if we programmers put our heads down and plow through some ugly work–– just up to this next “milestone”, guys!–– we’ll eventually be restored to glory. That’s the promise used to pull some of the best minds of my generation (and, to be honest, quite a few not-best minds) into socially detrimental work–– performance surveillance employers use to squeeze workers, propaganda machines for capitalists and authoritarians, and weapons of mass unemployment.

I’d like to talk about artificial intelligence. I’ve been studying it since the early 2000s, when the field was considered a land of misfit toys, a bucket of ideas that didn’t work–– when neural networks were considered a bad joke ill-told. I don’t consider myself an actual expert in this very deep field, but I’ll note that quite a number of the “data science” consultants earning $350 per hour come to me for advice. (I left a doctoral program after one year, so I don’t have the paperwork to get such jobs.) There has been, in the 2010s, a plethora of startups raising venture capital on the claim that they do “artificial intelligence research”. In the vast majority of cases, they’re not.

I’ve been in more than one of these fake-news AI startups. Usually, the AI approach doesn’t work–– at least, it doesn’t scale up to real-world problems on a timeframe investors or clients will accept. The founder starts with an idea that’s usually an expansion of ideas from a college thesis (sometimes his own) and pulls a family connection to get seed funding, then hires a few rent-a-nerds to implement his “brilliant” idea. When the AI approach fails–– genuine AI research is demanding, expensive, and intermittent–– the company “pivots” away from the original project and moves into business process automation. The startup becomes a portable back office–– it failed to automate an ugly task, but by squeezing extra hours out of H-1Bs, it manages to make the work cheaper.

This switcheroo isn’t a surprise to investors. In fact, they’re usually the first ones to step in and tell the spring chicken founders that it’s time to put away childish things. Once founders realize their job is to delegate, rather than do, the work, they don’t really object to the notion of pivoting to something more mundane.

It is not immoral, of course, for a business to change its strategy. The issue here is in the continuing deception. These companies claim to be doing “next-generation machine learning” when they’re actually running on cheap manual labor. Clients buy into something that appears to have more long-term upside than it actually does–– they take the early adoption risk of something that’s unlikely to merit it.

The biggest losers in the fake-news AI con, though, are employees. It’s hard to get smart people to work at no-name companies for below-market salaries on the low-status, boring line-of-business problems encountered by a startup serving as a portable back office. The trick is to tell these programmers that if they bear down and endure 6–12 months of drudgery, they’ll graduate into the research positions they were originally promised. In reality, what lives at the end of that 6–12 months of drudgery is a middle manager saying “we just need” 6–12 months more.

I’ve worked on Wall Street. I’ve worked for venture-funded companies. I’ll say without reservation that the ethics on Wall Street are far better. Often, VC-funded founders are people with MBAs who failed out of Wall Street (if you can believe this) for being too toxic and unethical.

Say what you will about finance. There are plenty of things to dislike about its culture. I’m no fan of the noisy environments, or of the constant wagering on everything, or of the occasional encounter with the openly-asshole politics of someone who read Ayn Rand at too young an age to get the joke, or of the sense–– though, I assure you, financial workers are treated better than tech workers–– that the job is still paperwork for rich people. To suggest that Wall Street is some workplace utopia would murder my credibility. It isn’t. I only mean to say that the ethical and intellectual quality of people in finance is higher, on average, than in the private-sector technology world.

Why does Wall Street, then, have a worse reputation than Silicon Valley? Finance, unlike Jira tickets, is for adults. Ethical failures on Wall Street make news. When a bank collapses or a market fails, people learn about it. In my experience, traders are no more or less honest than the general population. The major difference is that traders are smart enough, at least when it comes to careers, to play the long game. The narrow-minded taskmasters who run daily operations in technology, by contrast, think in terms of two-week “sprints”.

The person who promises you the moon but, three weeks after you’ve moved across the country to join his operation, changes your job description and puts you on sprint work, that guy’s going to be a techie.

Chapter 5: Teabagged by an Agile Scrotum–– Or, Why Programming Is a Dead Career

The non-career of private-sector programming calls itself “software engineering” to give itself the aura of being a profession. It isn’t one.

A profession is a set of traditions and institutions setting forth (that is, professing) ethical obligations that supersede managerial authority and short-term expediency. That is only possible–– because professionals aren’t any better or worse than anyone else, and the need to survive will push anyone to extremes–– if those who work in the profession are protected from compromising positions.

For example, a doctor must obey the Hippocratic Oath, even if it requires him to defy superior orders. This is only tenable if the medical profession makes it so a doctor can survive losing his job–– he can get another one; he is still a doctor–– and that will only be the case if entry is limited, lifting all professionals above the daily (and ethically compromising) need to survive. The profession puts a floor on wages by limiting entry to the qualified, and it puts a floor on credibility by giving its workers institutional support.

If a profession collapses and any hungry loser can get in, the cheapest people drive out the skilled. Workers lose, clients lose, and society as a whole loses. The only winners are employers. They benefit from de-professionalization because a professional executive’s real trade is the buying and selling of others’ work, and a debased talent pool enables higher trading volume.

Software engineering has been thoroughly de-professionalized. Highly-compensated specialists have been driven out in favor of rent-a-coders who don’t understand computing or mathematics, but will accept two-week sprints and tolerate the daily “interview for your own job” meetings. I’ve referred to Agile as Beer Goggles Software Management–– the unemployable 2’s become marginally productive 4’s, but the 6+ see a drunken loser and want nothing to do with it–– but I’ve realized, over time, that the Agile Beer Goggles are here to stay. The software business has successfully refit itself to run on low-grade talent; this will not be reversed.

A boss’s incentive isn’t to hire the best people; it’s to stay in charge. Daily status meetings remind the plebeians that they’re not trusted professionals, and that they can’t invest in their own development “on the clock” but should think of themselves as day laborers who will be replaced–– there’s an army of hungry losers lined up outside the door–– as soon as their “story points” per week (or per “sprint”) drop below a certain threshold.

I tried to save this industry from this Agile madness, but I failed.

Chapter 6: This Story Peaks Early, Guys

I wrote a few posts in 2012–13 about the startup economy, although I was still figuring it out myself at the time. One concept in which I invested a lot of hope is open allocation–– the notion that workers are better judges of the relative merits of projects than the executives of an authoritarian, command-driven company; that, therefore, trusting them to vote with their feet makes excellence more likely. I didn’t invent the concept, but I named and evangelized it. I still believe that open allocation fundamentally works, but I have no hope in its eventual adoption. The genuine malevolence at work in global corporate capitalism has shown itself, since 2015, with such force that issues greater than the inefficient allocation of talent dominate my concern.

Still, I was thrilled to see my theories on open allocation get traction. Tom Preston-Werner quoted me at Oscon 2013 (go to 13:37). This blog, in 2013–15, began to get hundreds, then thousands, of unique views per day. On my best days, I broke 100,000; my Alexa ranking in the San Francisco metropolitan area was, for a long time, in the four digits.

There were stressful moments during this time. A mistake I made in 2011 got more publicity than it deserved, for reasons largely my fault. My left-leaning (and, increasingly, fully leftist) politics attracted death threats from various far-right elements–– a topic we’ll return to in Part 2. I’ve been doxxed so many times and in so many different ways, I assume I have no secrets–– but, then again, I have nothing to hide. Still, on the whole, the good outweighed bad.

One place where I achieved prominence was Quora. Today, we know Quora as a buggy, name-walled Yahoo! Answers clone that generates privacy violations as reliably as summer humidity generates swamp ass. In 2015, however, Quora had (in spite of itself) an excellent community. It showed flashes of potential that, in the end, it would never really meet–– but, from 2013–15, there was a high quality of questions posted, and a high quality of people answering them.

I achieved the “Top Writer” distinction in 2013, 2014, and 2015. I was frequently consulted by the site’s moderators on policy and community management. I had about 8,500 followers. I don’t know what that number means now, but at the time it ranked me third or fourth among non-celebrities (depending on what we call a “celebrity”–– I should be forgiven for having fewer followers than Barack Obama) and first (breaking seven figures, some weeks) for answer views. A number of my responses, mini-essays in which I’d sometimes invest several hours, were published by partner sites such as Forbes, Time magazine, and the BBC’s online edition.

On June 15, 2015, I was an “It Programmer”, as much as one can exist. (There turns out to be a low ceiling on a non-founder’s status; by stepping above it, I got myself in trouble.) People all over the world reached out, sight unseen, and offered to fly me out to discuss positions at their companies. Often, I was called “the conscience of Silicon Valley,” even though I never lived there.

The next day (June 16) an event occurred that has nothing directly to do with me, and involves a man who probably does not know that I exist.

I lived in Chicago. Seven hundred and fifteen miles east-and-slightly-south, on a cloudy Manhattan morning, a deranged real estate baron descended an escalator, like Kefka in the last battle of Final Fantasy VI, and gave a circuitous, self-promoting, and racist speech in which he announced a presidential campaign that would ultimately be successful.

I’ll talk later on–– this story gets dark, my friends–– about fascism and whether I think Donald Trump constitutes or fits into a credible fascist threat to this country. Some people consider Trump a fascist; others view him as a mere opportunist. For now, observe that there were, at the least, coincidences in timing. Trump’s rise to power occurred as the far right, or “alt-right”–– a morass of tribalism, pseudo-academic racism, and might-makes-right attitudes toward topics ranging from international relations to corporate conduct–– evolved from an incel affectation into a full-fledged, mainstream political movement. The private attitudes of venture-funded tech founders were now finding public voice in a presidential candidate.

I did not expect Trump to become president. I remember well a conversation with some friends about him, in late 2015. Most people said he had no chance of becoming president. I gave him a 1-in-250 shot, but I would have given him a 4-in-5 shot, even then, of performing well enough in the primaries to speak at the convention in Cleveland.

It wasn’t hard to see what Trump was doing. His schlock about Mexican “rapists” was old-school miscegenation panic. The left blames societal failures caused by corporate capitalism on corporate capitalism; the right blames societal failures caused by corporate capitalism on women, minorities, and immigrants. Trump played the demagogue game disgustingly well. His victory, I did not expect, but I knew that Trumpism was going to be with us for a long time, even if he lost in November 2016. Having worked in the tech industry, I saw it coming.

Chapter 7: The Man Who Killed Paul Graham… Is Screwed

No, I didn’t murder Paul Graham. As far as I know, he’s very much alive. He’s only “dead” insofar as his relevancy (like, by my own choice, mine) has taken a precipitous dive.

I take credit in jest. Substantial evidence exists that his decision, in February 2014, to step down as president of Y Combinator, and thereby reduce his relevance in the tech industry, was driven in part by his dislike for the skepticism he faced among the public and media. Though I was a tipster and source for a Gawker story he disliked, I did not intend to “kill” Paul Graham. Most of this happened by accident. Still, I know, based on private conversations with people in shared circles, that my work contributed to his decision.

One of the worst things about fame or even semi-fame is the Carly Simon Problem. “You’re so vain,” she sang, “you probably think this song is about you.” In this case, there was a person intended as the target of the song, so he would be correct in believing the song to be about him. That’s not the issue here. The Carly Simon Problem exists because some people, as I’ve observed, think all songs are about them. People see themselves where they aren’t, they get butthurt, and then they fuck up your life.

When I publish Farisa’s Crossing, I am terrified about ex-girlfriends from the Bush Era coming out to say Farisa is based on them. Let me address that now: she isn’t. What Farisa represents, that’s a secret I’ll take to my grave.

I’ve been burned by the Carly Simon Problem more than once. I’ll give two examples here.

Number one: an ex-colleague managed a successful return to finance–– he got a job as the head quant at a hedge fund. I considered this guy a friend; we played board games together on multiple occasions, and I’ve been over to his house to have dinner with his wife and kids. For a position at a Chicago hedge fund, I used him as a reference.

Little did I know that he had read one of my blog posts and believed it to be about a place where we had worked. He found it “bad form” of me to write about our shared prior employer–– to be clear, I wasn’t writing about it. The post in question was about a 1990s corporate meltdown I studied in my research on open allocation.

I got shanked. He gave me a bad reference and I didn’t get the job.

I grew up in central Pennsylvania. Unlike these soft-faced preppies who dominate the upper echelons of the corporate world, I grew up understanding the notion of respect. You fight; or, you shut up and walk away. There is absolutely no shame in walking away from a fight. Almost always, walking away is what you want to do, because most serious fights don’t have any winners. One should, for similar reasons, avoid the conduct (such as throwing around bad references) that necessitates a physical fight. This being the case, I have zero patience for white-collar, lily-livered, passive-aggressive failmen who pretend to be a friend, but throw around bad references and sink people’s job prospects. Don’t like what I have to say? Confront me. I’ll stick to words as long as you do–– no one needs to know, either, what we argued about, or that we ever argued.

I can respect a wide variety of people, but I cannot respect a craven crud-ball who thinks that an acceptable response to an anodyne blog post is to give bad job references like a fucking dirtbag. If I ever get cancer, I will name it after this guy.

The Carly Simon Problem is one of the main reasons I nearly quit writing in 2016. I’m more than willing to go the distance in a fair fight, if that’s where we are. I cannot tolerate being stabbed in the back by cowards–– especially cowards who weren’t even in a conversation, but who took offense to it on the incorrect assumption of a song being about them. Sometimes, the song is about someone else. Sometimes, the song is about no one. Sometimes, the song is just a damn song.

The second major encounter I had with the Carly Simon Problem involves Paul Graham.

I know, I know: I promised Nazis, and here I am talking about Paul Graham. (I don’t think Paul Graham is a Nazi.) There’s some back story, some buildup. Unfortunately, this means I have to get into events that sound like petty drama, but that will in fact lead into something major and criminal.

Even now, I don’t harbor strong opinions about Paul Graham. I would be happy to mend fences with him, if he apologized for the conduct described below, almost all of which was committed not by him but by his subordinates at Y Combinator.

There is a lot to like and respect about the man. For a start, in his prime he wrote some excellent books on the programming language Lisp. He got more right than he got wrong. Unlike me, he won the birth-year lottery and walked away from Viaweb (Yahoo! Store) with a boatload of Boomer Bucks. He’s an above-average writer and, although I haven’t always agreed with what he’s had to say, his contributions to technology discussions have, at times, been insightful.

A business model that thrived in the 1990s technology boom was the so-called “startup incubator”, which made small investments in tiny companies and thereby made a diversified wager on the startup economy as a whole. Incubators always had a seedy reputation–– they promised mentorship and introduction to venture capitalists, while rarely providing more than office space and coffee–– but the business model isn’t prima facie evil.

After the 2001 tech crash, internet startups developed the reputation of being a goofy 1990s fad that would never return–– the “new economy”, conventional wisdom said, was a short con that had failed. Incubators, as well, went out of fashion and became a symbol of 1990s clownery.

Paul Graham, having become rich enough to retire in the 1990s, continued to evangelize the startup economy while the rest of the world’s faith in it sat at a nadir. He cheer-led the notion of a small technology company when no one else would. In 2005, he opened up an incubator called Y Combinator–– named after a computer science construct discovered by a distant relative of mine–– or “YC”.
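For the curious, the construct in question is the fixed-point combinator from the lambda calculus, which lets an anonymous function recurse without ever being named. A minimal sketch in Python (strictly speaking this is the Z combinator, the eagerly-evaluated variant, since Python is not a lazy language):

```python
def Y(f):
    # Fixed-point (Z) combinator: Y(f) returns a function g such that
    # g == f(g), i.e. f's recursive knot is tied without self-reference.
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# A factorial that never refers to itself by name:
fact = Y(lambda rec: lambda n: 1 if n == 0 else n * rec(n - 1))

print(fact(5))  # → 120
```

The inner `lambda v: x(x)(v)` wrapper delays evaluation just enough to keep Python’s strict evaluation from recursing forever.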

I dislike Y Combinator. I think it has done more harm than good to the world, because it has exacerbated the ageism and clubby classism of the technology industry, and because it has inadvertently given credence to “new economy” ideas that haven’t actually worked very well. This being said, I don’t think Y Combinator is the typical, seedy incubator. I’ve researched Paul Graham and his operation, and everything convinces me that he makes good-faith efforts to truly back the companies he picks–– and quite a number have gone on to be successful. We can debate another time whether Y Combinator’s strong track record proves its merit or merely its founders’ social connections, but it became unique among incubators in developing a prestige that none of the others has.

I met Mr. Graham in person once (March 2007). No one had any reason then to know who I was, so I doubt he remembers me. He seemed like a nice guy, I liked him and, until 2015, I still liked him, even though we disagreed on many things.

So why, in late 2013, did he suddenly dislike me? Again, it’s the Carly Simon Problem, because of course it is.

Chapter 8: There Are Chickenhawks Among Us

A logic puzzle goes like so. One hundred people live on an island; ninety have brown eyes and ten have blue eyes. No mirrors exist and no one talks about eye color, because there’s a rule that, while it is not illegal to have blue eyes, anyone who knows he has blue eyes must, at dawn the next day, leave the island forever.

They live in peace, until one day, an outsider (“oracle”) known never to lie comes to the island, calls an assembly of all hundred inhabitants, and says, “At least one of you has blue eyes.”

What happens? You would think: Nothing. No new information is introduced, so you would imagine that the oracle has no effect.

The answer is: 10 days later, all 10 blue-eyed people leave the island. The oracle introduces something they know (since everyone sees either 9 or 10 blue-eyed people) into common knowledge and that changes everything. For a full explanation, click the link above.
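The induction can be sketched in code. In this simulation (a simplification: agents are reduced to ruling out candidate totals as days pass without departures), the ten blue-eyed islanders all leave on day 10 and the brown-eyed ones never do:

```python
def simulate(colors, max_days=None):
    """colors: list of 'blue'/'brown'. Returns {agent_index: departure_day}.

    Each agent sees everyone else's eyes and knows the oracle's fact
    (at least one blue). An agent leaves once only one candidate total
    remains consistent with the history, and that total implies she
    herself is blue-eyed.
    """
    n = len(colors)
    max_days = max_days or n + 1
    departures = {}
    for day in range(1, max_days + 1):
        leaving = []
        for i in range(n):
            if i in departures:
                continue
            blue_seen = sum(1 for j in range(n)
                            if j != i and j not in departures
                            and colors[j] == 'blue')
            # Totals consistent with agent i's view; the oracle rules out 0.
            candidates = {blue_seen, blue_seen + 1} - {0}
            # Induction: a world with t blue-eyed islanders empties of them
            # at dawn on day t, so worlds with t < day are now ruled out.
            candidates = {t for t in candidates if t >= day}
            if candidates == {blue_seen + 1}:  # the extra pair of eyes is mine
                leaving.append(i)
        for i in leaving:
            departures[i] = day
    return departures

print(simulate(['blue'] * 10 + ['brown'] * 90))
# → agents 0–9 (the blue-eyed) all depart on day 10; no one else departs.
```

With a single blue-eyed islander, she leaves on day 1; with two, both leave on day 2; and so on up the induction.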

In this way, saying something that everyone knows (introducing no new knowledge) can have a social effect.

In December 2013, I wrote a blog post about chickenhawking. A chickenhawk is a business executive who expresses his midlife crisis not by purchasing a sports car or having an affair, but by investing in the career of a younger man–– usually, for reasons that will be discussed, a certain type of younger man–– and living vicariously through him.

A chickenhawk gives his young protege (or “chicken”) rapid career advancement and a high income, in exchange for exciting stories. There is a revenge drive in play; the “hawk” punishes women who rejected him 20 years ago by inflating the economic virility of a sociopath who will–– as I then put it, capable even in barely-tricenarian literary infancy of the occasional limit break–– “tear through party girls like a late-April tornado”.

A fictional example occurs in The Office. The show has to stay PG-rated and humorous, so there’s a lot left unsaid, but Michael Scott harbors a vaguely homoerotic (and non-reciprocated) obsession with subordinate Ryan Howard, one that leads him to assist the latter’s career (and eventually be surpassed). He takes an interest in his protege’s personal life; he lives out his midlife crisis through a younger man with the social skills, courage, and resources (due to the hawk’s support of the chicken’s career) to do things that, in the hawk’s twenties, he couldn’t pull off.

Silicon Valley is ageist and sexist. VCs “pattern match” to a certain type of person–– a young, unattached, usually heterosexual, male sociopath–– and one cannot understand the venture-funded software industry without an understanding of why. Sand Hill Road ought to be renamed Chickenhawk Alley.

Of course, this isn’t unique to technology. The corporate system’s raison d’être is to funnel sexual access to unattractive, rapacious men who have nothing to offer women outside of the social status induced by their control of resources. Without this motivation in play, the corporate system would have likely collapsed, leading to socialism, several decades ago. The rich do not hold on to the corporate system because they enjoy TPS reports; they do it because it gives them an advantage over other men (especially younger men) and thins out the competition. Chickenhawks tend to be too timid to abuse their control of resources in the way a more typical corporate executive would; they do it vicariously through someone else.

Paul Graham took offense to my December 2013 post about chickenhawking–– but what does chickenhawking have to do with Paul Graham? I don’t know. I still don’t know. I don’t think he is a chickenhawk. I do not accuse him of being one. I never have. That song was never, ever about him.

No evidence exists of Paul Graham being a chickenhawk. Nor is there evidence of him being pro-chickenhawk.

Except what follows.

Chapter 9: The Vultures Chickenhawks Attack!

I make this analysis in good faith. In discussing Paul Graham’s personality, I find common ground. What could be called faults are traits I share.

I’ve been told on good authority that, at least at one time, he spent 6 hours per day on Hacker News, a news aggregator and community created around Y Combinator. Obsessive? I am not one to talk here–– I have also suffered unhealthy addictions to internet communities that consumed similar quantities of my free time. It takes a sort of obsessive mind to excel at detail-oriented crafts like programming and writing.

Creative people have another flaw: we tend to take criticism and skepticism around our ideas personally. It would not surprise me to learn that others’ skepticism of him was a primary reason for (a) his actions in 2013–15, to be discussed, and (b) his decision to step down as president of Y Combinator in early 2014.

My writing got to him. As I said before, Paul Graham is an above-average writer who won the birth-year lottery and whose optimism about the startup economy played a major role in restoring public faith in it. Some time later, I showed up on the scene. I’m also an above-average writer, but I did not win the birth-year lottery and I did not make millions for showing up at a place. My experiences in 2008–15 (detailed above) led me to conclude that the “new economy” was an ersatz replica of the old one. As my skepticism grew, I did not hesitate to express it.

My comments frequently rose to the top on Hacker News. Whether that means I was right, or merely that I wrote well, I shan’t say.

And then, because I had the nerve to say something everyone already knew–– that there are chickenhawks in Silicon Valley–– I suffered the dreaded Hacker News “rankban”.

What the fuck’s a Lommy rankban? In a less stupid world, you wouldn’t have to care about this sort of thing. In today’s world, though, where opaque algorithms determine the placement and implied social proof of user-created content, and in which these reputation measurements make the difference between “influence” and unemployable obscurity, this kind of thing matters.

As I said, Hacker News (or “HN”) is a news aggregator and discussion hub for private-sector programmers. Even to be in the running for serious programming jobs–– not low-end rent-a-coder sprint work where you’re competing with sweatshop workers–– you need a pre-existing reputation. Hacker News is where a lot of people go to build one.

Y Combinator, a startup incubator, owns it. The conflict of interest should be obvious. It is a news aggregator owned by a baby-league venture capitalist. It is a PR organ that burnishes the reputations of YC-backed companies. It punishes those who express skepticism of these startups, or of the (defective) ecosystem in which they exist.

Someone banned from Hacker News is not notified of his offense (and there is no appeal). In most cases, he does not even know he is banned. He’s “hellbanned”, which means that his comments and posts are visible to him but no one else. The psychiatric community warns against this sort of treatment–– it’s a form of gaslighting. Less drastic is the “slowban”, in which the site is deliberately made to perform poorly for the targeted user. You see this a lot in the venture-funded world–– in real estate and personal finance, a number of venture-funded companies use slowbanning to redline. Rankban, the most insidious, exists when a site’s opaque content-ranking algorithms systematically degrade one’s posts and comments–– even when the content is successful, it is represented as unsuccessful, and suffers reduced readership.

An anonymous tipster, in January 2014, informed me that I had been put on slowban and rankban by Paul Graham. I did not believe it at first–– I thought better of the man, and failed to see why he would have a strong opinion of me–– but these were relatively easy to test. Slowban, I verified by comparing response times on HTTP requests when logged in versus logged out. Rankban was harder to prove–– this I tested by digging up old high-performing posts and verifying that (years later) they had fallen to the bottom, where they would go unread.

I’ll confess that this is minor shit–– I only bring it up to prove that Paul Graham held an animus toward me as early as 2013 because of my anti-chickenhawk stance.

Rather than bog you down, dear reader, in more petty drama, let’s catch up to 2015 and the rise of Trump–– of note is that his increasing success (long before he won the presidency) validated a certain might-makes-right attitude toward publicity and business; long before November 2016, corporate executives were taking note.

In August 2015, I suggested, based on things Travis Kalanick said about his own motivations for starting Uber, that the company likely had a toxic culture. (Two years later….) This got me banned–– actually banned–– from Hacker News.

Banned from Hacker News! By this, I was truly, deeply… sorry, it is still too much….

Nah. It didn’t bother me. I was 32 at the time; I had outgrown the Hacker News community and the mentality it serves. Being banned from that place was no big deal–– a liberation of time, to be honest about it. The only issue was that Dan Gackle misrepresented the reason for banning me, taking an entirely different comment out of context in a way that any court in the U.S. would classify as defamatory.

Perhaps a week later, Paul Buchheit, a man who jokes about gun violence as a means of handling business negotiations, attacked me on Quora.

Worth noting is that Y Combinator bought a piece of Quora in May 2014 at a fire-sale price. It seemed an odd deal at the time, and still does, but I think both parties saw themselves as getting the better end of that one. Quora got to claim it was “YC” at the peak of the incubator’s prestige. Y Combinator, at the same time, gained the ability to “moderate” Quora’s community and content so as to favor YC-backed companies.

After this nonsense–– the “rankban”; Dan Gackle libeling me on Hacker News after banning me; the bizarre personal attacks from Paul Buchheit; and various other factors I shan’t get into–– I could tell there was a pattern. If nothing else, Paul Graham was doing a poor job of controlling his puppies.

I challenged Paul Graham to (wait for it) a rap duel. I’m not a stellar rapper; I did some freestyle in college and I’m half-decent for a white guy, nothing to write home about, but I felt confident that I could beat Paul Graham. I was, on one hand, extending an olive branch. Not having anything against Paul Graham himself–– he was negligent in failing to call off his puppies, but that could be fixed–– I felt that a public rap battle would be an opportunity to show that, despite our differences, we could respect each other well enough to put on a mutually beneficial (and entertaining) show. At the same time, I needed to make it clear that, if Paul Graham couldn’t control his puppies and the embarrassment they were causing, I would continue to demonstrate this incapacity.

On September 4, in retaliation for the rap-duel challenge, YC-backed Quora banned me–– again, in a common pattern, on false pretenses. My account, which had more than 8,500 followers, had been turned into a defamation page with a bright red text block saying, “This user has been banned.”

Mucho internet drama. I won’t blame you if your eyes glazed over. You’d think such things wouldn’t matter in the real world. You’d think. Don’t worry–– the stakes are about to go up, and the Nazis aren’t far behind.

Chapter 10: When Nonsense Decides To Matter

I interviewed for a job in January 2016 where it came up–– not as a stupid thing to laugh about, but as a serious concern–– that I’d been banned from Hacker News. A Chicago-based hedge fund decided not to hire me for a quant role because–– as I heard from a headhunter who was decent enough to give me the real reason–– an MD observed that, “apparently this Paul Graham fellow doesn’t like him.”

This is an objective moral fact: internet drama like that should never affect someone’s ability to earn an income.

Unfortunately, the world has a surfeit of immature, deficient men who, on the basis of something as minuscule as a website ban, will close doors–– even, if not especially, doors that are not theirs to close.

I have seen all sides of this Black Mirror–level idiocy. I’ve been a manager. I’ve been involved in hiring decisions. I’ve made calls; I’ve defended people; I’ve also failed at defending people.

More than once, I’ve seen irrelevant internet activity–– as minor as rumors on sites like the blessedly-defunct Juicy Campus–– come up as cause to deny candidates jobs, reduce their offers on the assumption of lesser leverage, or to fire otherwise excellent employees.

Also, though I never cared about job candidates’ politics, this is not a difficult matter for employers to discern. It’s something they care about for “cultural fit” reasons, but not in the ways one might expect. I’ve never seen anyone hosed for being a Republican or Democrat, or for supporting a mainstream presidential candidate–– it’s possible that it happens; I just haven’t seen it–– but I have frequently seen people denied opportunities for “being political”, and it is almost always the left that is penalized.

Overt racism will get someone dinged, true, but if the candidate’s a white guy who retweets Breitbart articles, an executive will always step in and say, “We don’t know that he supports those views.” On the other hand, someone who’s anti-racist–– say she’s active in Black Lives Matter–– will get similarly dinged, not for her politics per se, but for the fear of hiring a “troublemaker”. Once I overheard a conversation in which an executive described a colleague as “terminal” (not promotable into management) because “you can never trust a male feminist”.

Corporates don’t show their far-right colors often, but anti-leftism is the payload of their aversion to “the political”. They’ll fire a racist because it’s good for publicity, but their real fear is of the left–– of truth and justice.

Chapter 11: Morality

Does God exist?

That’s the easiest question there is. Yes. God–– the God of the Torah, the Bible, the Quran–– exists. Zeus also exists. Osiris exists. Iago, in Shakespeare’s Othello, exists. Farisa will exist, once I finish the damn story. They exist as much as the number 2 or the color “magenta”. They may exist only in our minds, but they exist as concepts.

The harder question is: are there supernatural humanoids who interfere with the observed laws of physics? On that one, I’ve seen absolutely no evidence, so I’m going to profess non-belief. More interesting is: is there an afterlife? I’m on the happier side of 50–50, on that one. My reasons would require another essay, but I find accessible reasons to believe there is one. I might be wrong; if I am, I won’t have to bear the disappointment, since I won’t exist.

Does absolute morality exist? I think so. Most ethical mandates are situational and relative, but their underlying reasons for existence seem less flexible. I am unable to articulate precisely the moral principles of existence, but I believe they exist.

I’m not a nihilist, and I go further. I don’t believe nihilists exist. At least, I don’t think a person can stay nihilistic for very long. Meaning vacuums get filled.

Let’s say someone who considers himself a nihilist, but who is a good person, is offered $5,000 to torture a kitten. He’ll refuse, because some actions he accepts and others he finds repulsive. Meaning is a weird term. Perhaps “purpose” or “value” is better. I would not torture the kitten, not because I expect the kitten to “mean” anything, but because I value the creature’s existence and welfare.

Nihilism is dangerous because it’s unstable. The meaning void will fill itself with something, but not always something good. Ultra-nihilistic villains like the Joker (Batman franchise) or Kefka (Final Fantasy VI) fill it with hatred and blood lust. Fascism, an outgrowth of might-makes-right nihilism, sells itself to the masses by presenting itself as aggressively anti-nihilistic–– thereby disavowing the decadence of which it is a culmination.

A person doesn’t stay nihilistic for long; but systems can be nihilistic. Corporate capitalism is a belligerent nihilism machine. It does not hate its victims; it simply does not value their subjective experience. A tree will be cut down unless it can pay not to be cut down.

Chapter 12: The Two-Stroke Nihilism Engine

Global corporate capitalism was not designed, technically speaking, but I cannot think of a better way to design an economic system to destroy things humans value–– a self-replicating monument to nihilism, a belligerent anti-meaning device.

The first thing to understand about global corporate capitalism is that it’s totalitarian. If the people in one nation are unfree, others must compete on wages and working conditions and will be unfree. It’s important to discuss economic totalitarianism, because while leftism has had a bad run for the past 35 years, almost all of the negativity directed at “communism” is more accurately blamed on left-wing economic totalitarianism (old-style tankie socialism). Right-wing economic totalitarianism is no better.

We’ve been pushed, over previous decades, to accept corporate rule on account of disingenuous claims that “communism killed 100 million people”. Did it? Not really. Mao Zedong’s incompetence killed some, Stalinist repression killed some, and anticommunist reaction (including fascism and World War II) killed a lot of people–– deaths that have been blamed on “communism”, even though none of those societies were communist.

A difference at issue is that capitalism has no memory and takes no responsibility; socialism, to the ill-health of its image, has far too much memory and responsibility. Americans who were unable to secure health insurance, and Pakistanis who were “freedomed” by drones, are not considered to be killed “by capitalism”. There’s a whole lot of dishonest accounting that goes on; the truth is that capitalism’s record is just as bad, if not worse.

In either case, the true enemy isn’t an economic system’s baseline principles, but totalitarian application. Global corporate capitalism is totalitarian because the employer is not happy to make a modest profit. It must make the highest profit, at any moral cost. It must have the worker’s indivisible loyalty. It takes everything it can get.

Global corporate capitalism wants all things humans value to be “converted into dollars”. Who gets to live by the lake? The highest bidder. A “view” created by God or by Nature becomes just another form of money. Who gets the bulk of people’s time and attention? The people and organizations (often, authoritarian organizations) that specialize in the buying and selling of others–– employers. People’s friends and families get the leftovers.

Cultural influence, educational experiences, and personal relationships become nothing but “capital” in new forms. Everything gets converted into money, and if it resists such conversion, it’s marginalized to the point of nonexistence. Rebellions get bought. Sexual and cultural expressions of marginalized people are exoticized and appropriated by the rich. Social media, for a concrete example, has become a mechanism through which corporate marketing departments can buy the perception of grassroots authenticity.

Corporate capitalism’s first move is to convert all things humans value–– sexuality, social connectedness, leisure, culture, opportunity–– into an abstract quantity called money, measured in units called dollars or euros or yen. That’s the nihilism engine’s first stroke.

The second stroke: find the place of least utility for the dollars (euros, yen) and put as many of them there as possible. The rich get richer; the poor get poorer. The well-resourced have full-time staff to manufacture their reputations and appearances, so they present themselves as cosmopolitan ubermenschen (when they are, in fact, as provincial as the yokels they despise) while the poor become socially and culturally isolated.

If all things humans value are “converted into dollars”, all things humans value will go to those who have the dollars.

What is a dollar’s value? Of course, it’s not a constant. One dollar represents 8 minutes of a minimum wage worker’s time, but only half a second of a CEO’s time. If a dollar’s parked in the garage of someone who already has a billion, it’s being put where it isn’t needed. Its value is being minimized.
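A back-of-envelope check of those figures, using assumed pay rates–– the $7.25/hour federal minimum wage, and a hypothetical CEO package of about $14 million over a 2,000-hour work year:

```python
# Both pay rates are assumptions for illustration, not sourced figures.
min_wage_per_hour = 7.25                  # U.S. federal minimum wage
ceo_pay_per_hour = 14_000_000 / 2_000     # hypothetical: $7,000/hour

dollar_in_minutes = 60 / min_wage_per_hour    # minutes of minimum-wage work per $1
dollar_in_seconds = 3600 / ceo_pay_per_hour   # seconds of CEO time per $1

print(f"$1 is about {dollar_in_minutes:.1f} min of minimum-wage work")
print(f"$1 is about {dollar_in_seconds:.2f} s of CEO time")
```

At those rates, a dollar buys roughly eight minutes of one person’s working life and roughly half a second of another’s.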

This shows that corporate capitalism seeks to turn all things humans value into a tradable form (money) and then to put every dollar of the money into the coffers of a person or corporation who does not need it. Since they have an excess of it, they use it to buy not things they need, but a future excess of money. This is a belligerent, nearly unstoppable utility minimizer–– an ever-advancing nothingness and pointlessness.

In 2011, Marc Andreessen said that “software is eating the world”. Having worked in the software industry during that time, I can refine this observation: corporate capitalism continues to be what’s eating the world. Software is merely what it shits out.

Technological growth of a kind that would benefit everyone has disappeared. We don’t have flying cars or robot maids. We have time-tracking software. We have Jira. The major innovations of our time have been surveillance technologies (weapons) for the benefit of health insurers, despotic governments, and authoritarian employers. That’s who’s buying this stuff.

Employers used to fear their workers, at least a little, but these days they share information (contrary to law) about suspected unionists. Workers in the trades–– in the “blue-collar” jobs displaced office workers are told to consider–– often suffer belligerent performance tracking enabled by devices running code written by people like me. Retail workers often have less than 24 hours’ notice of when they will work, because their shifts are determined algorithmically. The working world has gotten worse, has gotten more fascistic, and it’s our fault as private-sector programmers.

I mentioned the “Agile” garbage that makes a typical programmer’s life hell. It’s not only that we implement the weapon designs of psychopaths who profit by immiserating workers. We are also the first subjects of many such experiments, the first to taste the poisons (and stupid/earnest enough to refine them) before they are rolled out into the broader economy. “Scrum” is the name the same malevolent performance management takes when applied to low-status programmers; truck drivers and factory workers get it under other names. Nowhere is it written in the Cannibal Bible that a cannibal cannot be consumed by other cannibals.


End of Part 1–– What’s to Come in Part 2

So far, we’ve covered the technology industry during 2008–15 and my experiences within it. We know of the emergence of might-makes-right politics (Trump) and we can see that it is a natural extension of global corporate capitalism.

In the first half of this exploration, I told a story with political, moral, and personal threads, all of which have diverged. In the second, we’ll arrive at the convergence. We’ll discuss the acceleration of capitalistic disease under Trump. We’ll cover purposes of the technology industry (and the Silicon Valley business model) of which most people are unaware. We’ll deepen our understanding of fascism–– what it is, why it emerges, and my own experiences in the fight against it. At the end, I’ll present why I believe the probability of a violent conflict, with fascist elements that exist within our society right now, is high.

There is much that has happened in the past five years that must be revealed. I will establish (with verifying details) something heinous about an organization of middling profile but high importance. In so doing, I may put my life in danger, but public service demands it. Names will be named; events will be explained.

Farisa’s Crossing Final Round Beta Reading

On January 3, 2020, I’ll be opening up a round of beta reading for Farisa’s Crossing. Since I intend this to be the last round I do–– I would like to begin serialization in April 2020, with the entire book published by February 2021–– there’ll be more slots than in previous rounds.

What does a beta reader do? You’re not asked to copyedit the manuscript; copyeditors do a much more intensive read and that’s a paid service. As a beta reader, your job is to read the manuscript as you would any other book, and give feedback on what works for you and what doesn’t. You’re not expected to do more than that. It’s a time commitment of about an hour per week over 3–4 months.

Tentatively, I’m looking for 10–12 readers, preferably a diverse set with regard to age, gender, sexual orientation, and disability (since there are major LGBT characters as well as characters with disabilities).

More to follow. For now, if you’re interested in being a beta reader for an epic fantasy novel, my email address is michael.o.church at Google’s email service.

“Eat The Babies”

Congresswoman Alexandria Ocasio-Cortez held a town hall on October 3 at which an ostensibly mentally ill woman demanded, as a solution to climate change, “we got to start eating babies”. AOC, who has always handled adversity and difficulty with poise, addressed legitimate concerns around climate change and, while visibly disturbed by the spectacle, did not resort to ridicule or immediate denouncement, as lesser people would be wont to do. I think she handled the situation as well as anyone could have. Bravo, Ocasio.

I usually wake up between 3:00 and 5:00. At 3:27 this morning, I found #EatTheBabies trending on Twitter, and I had the distinct displeasure of reading right-wing tweets about how “leftist” climate change “hysteria” is triggering dangerous mental-health crises. According to the right, Ocasio-Cortez’s choice not to immediately denounce infantivory represents leftist endorsement of the notion. That is, of course, absurd.

Take note of this. The right wing, which has already labelled this woman a “climate activist” rather than a person likely in need of psychiatric help, is testing out a false equivalency. Americans are fed up with the stochastic terrorism and bad-faith argumentation used by our society’s reactionary, authoritarian, and upper-class elements to stoke the nation’s right-wing, racist, paranoid counterrevolution (designed to be “populist” in a way that is no threat to the upper class, because it cannot help but punch down). The conservative and pro-corporate elements of our society want nothing more than to associate AOC’s moderate socialism with baby-eating, as if Cormac McCarthy’s The Road were a leftist how-to manual.

We live in a strange time. I’ve seen enough strange stuff to have a sense of the nation’s politics, and I’ll try to answer the question, What the hell is going on? There are three possibilities.

The first possibility for what happened on October 3 is that a woman in poor mental health had an outburst. Alexandria Ocasio-Cortez, for security reasons if nothing else, had to operate on this assumption and respond to the woman with compassion while not confirming her assertions. Here’s the thing to remember: AOC is not your typical bourgeois liberal who preaches compassion but avoids conflict with the afflicted or poor. She’s seen real shit. She’s worked with the public–– as a waitress in Manhattan. She knows that ridicule is not how to handle a volatile situation.

The second possibility, because of the time we live in, and because of the exceedingly low character of the corporate upper class and today’s political right, is that this spectacle was created to humiliate the left. Several right-wingers are claiming that climate change activism is triggering “hysteria”. Others are arguing (in bad faith) that because Ocasio-Cortez did not immediately denounce infantivory and the bombing of Russia, leftists and liberals are secretly cozy with these terrifying ideas. It is possible (although I don’t consider it likely) that this woman was an actor commissioned to damage the image of the left, of the environmental movement, and of Ocasio-Cortez’s proposed Green New Deal.

The last thing I want to do is accuse an ostensibly mentally ill person of acting in bad faith. As I said, I don’t consider it likely that she’s a “troll”, but the possibility requires discussion insofar as the right has already used the event as an opportunity to argue in bad faith. Yes, people are actually saying that intelligent moderate socialists are on the fence about cannibalism.

Again, this is all an effort to create a false equivalency between leftist non-denial of climate change and the deranged right-wing conspiracy theories (like “white genocide”) that lead to violence.

So, while I don’t think it’s likely, I consider it possible that certain upper-class, conservative, or pro-employer authoritarian elements in our society have arranged this spectacle.

A third possibility, a more likely variant on the second, is: it’s a mix of both. It’s possible that this woman was a mentally ill person (not a bad-faith “troll”) but also that she was directed to Ocasio-Cortez’s town hall for the purpose of having a mental health crisis in public, harming the left. Does this seem far-fetched? Perhaps, but it’s the Silicon Valley playbook. It’s only a matter of time before the entire alt-right begins to use it.

For several years, it has been a common practice in the venture-funded technology industry (“Silicon Valley”) to recruit the homeless to harass guests at, for example, a rival’s launch party. I don’t think this is done to disrupt business operations, because I don’t think it’s very effective; it’s more of a mechanism to threaten and annoy one’s adversaries. I bring it up because it’s underreported, and also because it’s in extremely bad taste.

I know about this tactic from personal experience. Some of my readers know that, since 2011, I’ve had to deal with fascist attacks on my career and reputation. I’m at least half a million dollars poorer (and that’s a conservative estimate) than I would be, had I not been fending off assaults from literal fascists my entire career. I’ve had written job offers rescinded, more than once, when a fascist sympathizer (and, likely, actual fascist) discovered online that I have said positive things about antifa (which literally means no more and no less than antifascism). We live in a time when to be rational and humane (both of which stand in opposition to fascism) is to be political; but we also live in a time when to “be political” upsets proudly and self-assertedly “apolitical” entities like corporate employers.

I’ll share just one example. A few years ago, I interviewed for a machine learning engineer position at a reputable Chicago trading firm. The team wanted to make an offer; an executive blocked it. Why? He discovered my mild criticism of an unethical (and, likely, illegal) corporate practice unrelated to trading or finance. He formed the opinion that if a publicly “political” person were discovered to work at the firm (even in a non-executive role) it would be a publicity risk. Here’s the kicker, though: this fascist assault occurred in June 2013, long before fascism was part of the national conversation. In 2013, a right-wing takeover of this nation (already well into progress, but through private employers rather than government) seemed unthinkable.

Authoritarians do not have an ideology in the classical sense. It is not about “free markets” for them; it is not about tradition or philosophical conservatism. They are the win-at-all-costs players. They will gladly weaponize our society’s most vulnerable people.

I have stood in vocal opposition to Silicon Valley’s coziness with fascism– before 2016, the employer-nucleated fascism-lite our society tolerated because it was not overtly racist, misogynist, or warlike; after 2016, the expressed literal fascism that is killing people– and, as a result of this, I’ve endured considerable harassment. I was banned from Hacker News and Y Combinator–owned Quora in the summer of 2015 on false pretenses that, if the claims were true, would cause embarrassment. I’ve been harassed on the streets by deranged, ostensibly homeless, people. In many cases, it’s been clear that they “knew” (or, more likely, had recently been told) my name and affiliations. I’ve been ordered by people I’ve never met to stop writing about certain topics. There was a period of time when I could not go to San Francisco– I feared for my life.

I bring this up because it is not a new tactic for the right wing to weaponize our society’s most vulnerable and unfortunate people. Is that what happened at Alexandria Ocasio-Cortez’s town hall, which the right is disingenuously using to equate moderate socialism with infantivory? I don’t know. We’ll probably never know. But it happens, and it will happen more in the future as our society’s corporate, fascist, and authoritarian interests gear up for the fight of their lives– which happens to be, as it were, the fight of our lives as well.

A Killing in Menlo Park: Qin Chen’s Death and the Need for Justice

Qin Chen, a 38-year-old software engineer working at Facebook, jumped to his death from a high building on September 19, 2019. One might say that, in proximate terms, the death was a suicide. I’m averse to that word in this case– it seems highly likely, given the evidence, that it is far more appropriate to say: he was killed.

It’s important to get this right. The word “suicide”, in our culture, implies personal failure and individual fault. It’s not appropriate, therefore, to say that someone who leapt from the World Trade Center on 9/11, preferring impact over immolation, “committed suicide”, even if the mechanism of death was chosen by the deceased. A fox that chews off its own leg to escape a trap is not engaging in self-harm. Likewise, I don’t think it’s appropriate to present Aaron Swartz’s death as a suicide, without mentioning the malicious prosecution that led to his demise. As Quinn Norton said, “the old world killed him.” We tend to be quick to focus on the mechanism of death– far too quick to call an event an aberration and blame it on “mental health”– out of a misguided desire not to hang blame on the living.

If Patrick Shyu’s account is accurate, Qin Chen was killed, and his survivors have a right to justice. He was killed by his manager’s petty retaliation over his desire to do something else at Facebook. He was killed by HR officials who refused to override the libel of a rogue manager, who refused to let an employee acting in good faith restore his reputation. He was killed by his employer’s indifference, allowing the institution of a cruel system under which he could not transfer to a team more appropriate to his skills and interests.

The account linked gives a credible narrative. First, it alleges that Qin’s manager enticed him to stay on a project he disliked, in exchange for a guaranteed positive performance review. Having worked in large technology companies, I can attest that side deals pertaining to “perf” scores are remarkably common. (It is also not uncommon for people in the HR office to accept side payments in order to fix an internal record.) That number means everything– it is one’s total human worth as assessed by the organization. It is also not uncommon for managers to break these arrangements, and they do so without consequence.

Though I don’t know whether Qin Chen’s story is true, I’ve encountered so many people with stories just like it that I see no reason to disbelieve it. Here’s the thing: managers running less desirable teams have chips on their shoulders and are quick to punish “disloyal” employees who deign to seek transfer. It’s disgustingly common for a naive software engineer, assuming good faith on the part of his manager and company, to make known his interest in internal mobility– and be shocked when he is slagged with a negative “performance” review, often without explanation.

I’m getting old. I’m 36, which is 0x7F in software years. I’ve seen people repeat the same stupid mistakes over and over. Investigations into Enron’s culture of mendacity found responsible a style of high-stakes performance review notorious for creating a culture of suspicion, politicking, dishonesty, and widespread cheating of all forms on all levels. Funny thing is, Enron’s widely-hated performance review system (“stack ranking”) wouldn’t be out of the ordinary in a technology company. The buzzwords change every five years. The behaviors don’t, and as fascism– both the public nation-level variety, and its more contained private corporate form– becomes normalized, we should only expect this to get worse. We have to fight back. We have to crush fascism, and we have to bring unlawful killers to justice.

Facebook’s HR, by the account that has emerged to this point, did not repair the damage done to Qin Chen’s internal reputation by a malicious manager. They did not restore his right of internal mobility. As a result of their criminal negligence, Qin’s professional situation and reputation deteriorated to the point where he saw death as the only option.

This should not be pinned on a “difficult” or “tough” or (gag) “high-performance” company culture. Qin Chen was killed. His manager literally killed him. The HR “business partners” who did not intervene on his behalf are, at best, accessories and, at worst, killers themselves. Although intentional murder seems unlikely, these malefactors put a man in lethal danger– and he died.

Sunlight, they say, is the best disinfectant. The public has a right to know who Qin Chen’s managers were, and which HR officers were involved in his case.

I am available as michael.o.church on Google’s email service. If I am furnished reliable information pertaining to the identities of Qin Chen’s managers, as well as HR officers involved in his case, I will publish it here. Furthermore, it is the public’s right to seek justice not only for this death but in future cases like it. Therefore, any personal or contact information about guilty individuals, I will also publish– after review and verification.

I do this without condemning nor condoning specific approaches to public justice. Whatever justice the affected portion of the public chooses to seek, it is not my call to make.

Projected Release Dates for Farisa’s Crossing

I’m serializing Farisa’s Crossing. Conceptually, the story divides neatly into five segments: The Forest, The City, The Road, The Dead, and The Lovers.

The first segment will be available on Amazon for the lowest possible price (99c) and for free elsewhere (stay tuned) in April 2020. As of now, my intention is to make each segment (at the time of release) free; a short reading comprehension quiz (designed to be easy for anyone who has read the previous segment) will be used to make availability of each segment conditional on having completed the last.

When the last segment is made available, the intermediate segments will no longer be available for free, but the complete book will be accessible at a reasonable price (between $5 and $8).

The planned release dates, which I’d give a high degree of confidence (85+ percent) I will make, are:

  • April 26, 2020: “The Forest”
  • July 3, 2020: “The City”
  • September 4, 2020: “The Road”
  • November 15, 2020: “The Dead”
  • January 17, 2021: “The Lovers”

I intend to release the complete book on January 17, 2021, unless further editing is (for some unforeseen reason) necessary. I’ll be encouraging early readers to discuss the book and form theories on the subreddit r/antipodes.

How I Would Fix (and End) Game of Thrones [Spoiler Warning]

My first novel, Farisa’s Crossing, is on track for serialization beginning April 26, 2020, with the full book available by January 17, 2021.

(Minor editing for clarity was made on the morning of May 19.)

Pre(r)amble

I hope I’m wrong about this, and a brilliant last episode could change everything. That said, it appears so far that the final season of Game of Thrones has failed on account of hasty writing. The Night King plot seems to have been under-explained in service to a Long Night–inspired prequel; the ending of Season 8 feels more like a homework exercise, designed to hit plot points without much attention to craft, than it does a story.

Let me remark that it’s entirely possible that what appear to be lapses in story craft are, in fact, artistic debt. With every word of a reader’s time the writer uses, the author incurs such debt; only when a story is complete can a final judgment be made about whether the debt is paid off. It is a possibility that the last episode, to be aired on May 19, fixes the apparent problems. As of now, though, I see so many issues that it’s hard for me to picture a resolution of all of what, so far, appear to be frank artistic errors.

What are those flaws?

To start, stupidity has been used far too often as a plot device. Rhaegal is Daenerys’s son (in spirit) and a prime military asset. Thirty seconds of thought, by any of her allies, would have prevented him from dying in such a facile way. War is hard to write, sure, and no one’s asking for (or, in fantasy, wants) perfect realism, but stupidity’s utility as a plot device has been overdrawn. I suspect I know why this error was made– it, and other apparent failures of craft, derive from changes made by HBO to Euron Greyjoy– and I will analyze that further, below.

Suspension of disbelief collapsed utterly, for me, in Episode 5 with the burning of King’s Landing. Don’t get me wrong. Daenerys was always a severely flawed heroine. I could have seen her turning into an antagonist. A war criminal, though? It seemed like a plot contrivance. To use Missandei’s death as cause for her turn toward atrocity is unbelievable. Although she is dogmatic, arrogant, and temperamental, Daenerys has always been principled, and it violates one of her core principles to take that path– it’s not something she’d do lightly. Moreover, Daenerys has seen (and brought) so much death and war that it is unreasonable to suggest that the death of another confidante would lead her to murder tens of thousands.

I’m going to try to give the final season a developmental edit. I will stick to the mostly-tragic sort of ending that I believe both George R. R. Martin and the HBO showrunners intended and that is almost certainly correct for the series. I will, however, depart from the obnoxious and poorly executed ending we actually got.

What I’m Not Writing About

This analysis is not a criticism of the unfinished book series, A Song of Ice and Fire. The book series is different altogether, and I consider myself unqualified to criticize it beyond the impression of an educated reader. Why so? It’s not what I would write. I intend to keep my career in technology, so The Antipodes could possibly be the only novel-length fiction I publish. I have no interest in writing grimdark fantasy. As grimdark isn’t what I write, and I don’t much enjoy reading it, my opinion of Martin’s work doesn’t mean very much. It would be a pointless exercise in ego to give a “how I would write it” synopsis of Martin’s work, in which I risk comparing it to some imagined but entirely different piece of work. Instead, shouldn’t I go and write my own book?

I feel more qualified to assess the HBO series because the writers clearly did want to garner increased viewership with the promise of traditional heroism. The arc of platonic affection between Brienne and Jaime– which culminated in his knighting of her in Episode 2, and was then trashed to make the woman his one-night stand– does not keep the spirit of grimdark. As the HBO series led us on with promises more in line with traditional fantasy (in which we’ll tolerate tragic endings, but demand traditional value-positive heroism) I feel more equipped to criticize it.

There are a few other reasons why I want to be careful in what I say about George R. R. Martin.

First, I feel that he is often unduly criticized, especially on a personal level, for what he chooses to write. I don’t believe that he’s a misogynist and he’s certainly not a feudalist. I do not assume he is a nihilist or narcissist, simply because he writes about nihilistic, narcissistic people. It is too often assumed that the moral flaws of protagonists reflect on their writers, and although this has been proven true too often for comfort in literary fiction (not to mention Hollywood; see: Woody Allen)– I do not like the stereotype. It limits what people can write. One especially loathsome portrayal of George R. R. Martin– I was physically angry as I watched this– was in the otherwise-enjoyable TV series Younger, focused on the publishing industry. (“Beware the wrath of the sky.”) The character of “Edward L. L. Moore” was sloppily-written and offensive to the fantasy genre as a whole; it was irresponsible. The fantasy genre is no more juvenile, prima facie, than another “literary” novel about a 57-year-old male professor of literature sleeping with undergrads. The only thing we know about George R. R. Martin from his work is that he writes dark fantasy (“grimdark”) and that he writes it well. Speculation about him as a person should stop.

Second: although Martin’s vision of fantasy is different from mine, the author of Ice and Fire has done a great deal for the genre. He is far from the first to write fantasy to an adult standard and he won’t be the last (what up, yo) but he has shown to a massive audience that it can be done. He and I are certainly not competitors in any way. On the contrary, a successful, competent author brings others up.

Third and relatedly, I am not an envious author– but I don’t want that look. As a Bayesian, I judge it far more likely than not that his series will outsell mine, for decades to come. (I would only have a chance of outselling him if the Antipodes were, as well, adapted for the screen. The spring of 2019 has left me under-attracted to that notion.)

Fourth, I respect Martin’s vision. He has created a compelling world that captures late-medieval ideology (Westeros), Renaissance-era politics (Free Cities and the smarter Westerosi), and Lovecraftian horror-fantasy (Essos; the Land of Always Winter). The originality and precision of his worldbuilding are admirable.

Fifth, as I mentioned before: since grimdark isn’t what I choose to write, I can’t possibly give suggestions to the book series without the risk of turning it into something else– which, again, would be an expense of time better used on my own work.

Sixth: in a way, I’m indebted to George R. R. Martin. His work takes what many of us feel to be an extreme position on a spectrum between (a) Tolkien’s brand of fantastic heroism and (b) the moral relativism favored by his “grimdark” as well as by modern literary fiction. (He has, at least in the popular perception, created this spectrum within fantasy.) By doing so, he has opened a dialogue on what the fantasy genre should be. Farisa’s Crossing lives, I would like to think, in the middle of this continuum. My work is quite dark– it takes place in a steampunk world where the Pinkertons won, now control the entire economy, and are fast becoming Nazis– but I strive to give hope that a moral north star still exists and can be seen on the one night out of four that isn’t cloudy. Farisa is not perfect, but she is genuinely good.

George R. R. Martin is not the first to write complex, adult fantasy, but he has shown the world that it is possible, and for that I feel I owe him a great deal.

I don’t see Martin making basic mistakes of craft. The thing I dislike most about his book series, to be honest about it, is his tendency to end on cliffhangers. His ensemble approach, with rotating points of view, worked beautifully in the first two books– it gave us different and often opposing perspectives on shared experiences– but as the plotlines separated (geographically and thematically) I found it to be borderline untenable. I don’t think he wrote with manipulative intent; by the modern standard, it is preferred to end chapters on cliffhangers and push of-the-episode denouement (“sequel”) into the next one. This gives a can’t-put-it-down feel to books with one or two plotlines but it fails when 250 pages exist before a plotline is resumed. But this is a subtle mistake (if it can be called that; arguably, it is not even that but a difference in style) and de minimis compared to the Writing 101 mistakes of the HBO series.

What Went Wrong With Season 8?

I want to make it clear. I believe the staff writers who worked on Game of Thrones, including the ill-fated Season 8, are competent. They are not dumb, lazy, or inexperienced people. On the contrary, I believe they’re excellent at their jobs, relative to constraints. They worked on an ambitious existing series, without source material, under incredible deadlines. Although 20 months elapsed between the end of Season 7 and the beginning of this one, I doubt the writers had more than two or three months of “blue sky” writing time, given the immense complexity of the project.

Also, let’s be real here: great writing takes time. Commercial novels are measured in pages per hour; literary novels in hours per page. Seven to ten rounds of revision (before line and copy editing) is usual when writing to a literary standard. The process takes years even for the most skilled authors. I expect The Antipodes to require 15–25 years. The HBO staff writers did not have this kind of flexibility with their schedule.

In fact, I believe that one major decision caused the story arc of Game of Thrones to fail, and it’s one I don’t think the staff writers had much say in.

I won’t opine on the pacing, either. Pacing is quite subjective. All forms of narrative, whether on stage or in print, speed up as they near the end. Readers demand it. As the anvil falls, the rope at the other end whips around. Role-playing games give the players an airship (or equivalent) in the late-middle for a reason. I don’t fault HBO for the rapid pacing of Seasons 7 and 8. It’s not an artistic failure because it’s what readers and viewers want at this point.

There have been serious omissions (e.g., of motivation) in the late seasons, but those are not pacing problems. In fact, compared to a traditional movie (in which an entire story is told in two hours) the pacing of Thrones remains slow even in the final season. There are plenty of elements that could have been cut to make room for what’s missing.

So, with all of that said: why did Game of Thrones fail toward the ending?

As I said, George R. R. Martin writes dark fantasy (“grimdark”) that approaches the nihilistic. I do not intend to say that his series (which is not yet complete) “is nihilistic” and, again, I make no observation about him as a person. However, he writes about nihilistic, narcissistic, and narrow people. In characterization, he is closer to the MFA-educated metrorealism that is today called “literary fiction” than many in that camp would like to admit. When it comes to characterization, he doesn’t write heroes; for personality traits, he gives us the same middle-heavy bell curve that we encounter in real life.

Robert Baratheon defines the good life as “crack[ing] skulls and fuck[ing] girls”. Illyrio Mopatis, a fat old man, weeps in front of a statue of himself at age sixteen. Plenty of Martin’s characters admit they enjoy killing, although few people (even, if not especially, among those who must do it lawfully in war) actually do. Drinking, food, sex, fighting, and especially power seem to be the things that matter most to the characters in Martin’s world.

The author also indulges a trope that I find tiresome in fantasy: Adults Are Evil. (Arya, in the books, is a prepubescent girl.) As a middle-aged man, I am too old to hate that trope, but I must laugh at its absurdity. (Those over sixty can apply for “wise mentor” slots if willing to die in late Act II. Everyone between twenty-one and fifty-nine must be incompetent or malignant, and usually both.) Living in the real world, I find that age has no correlation with moral decency or value. If anything, one of the major improvements HBO has made on the source material is their aging of the characters: Jon, Sansa, and Arya are adults at the series end. I usually find “Adults Are Evil” to be exquisitely unskillful, but I give Martin a pass for it, because it actually fits his nihilistic, depressing world quite well.

The truth is that Martin’s brand of grimdark nihilism isn’t palatable to a large audience. People make jokes about Cthulhu, but few people actually deign to read Lovecraft. The HBO series, wisely, pulled away from the unpalatable grimdark roots of the book series. It turned true-neutral Arya into the face of chaotic good– and made her an adult with agency rather than an unlucky child. Tyrion and Varys, whose actions in the books were reprehensible, were made into genuinely good humans in the HBO series. Between Jaime and Brienne, we got a beautifully depicted arc of platonic affection culminating before the Long Night. The HBO series sold us hope in a series about more than “cracking skulls and fucking girls” and that stupid iron chair.

The Northern Crisis, it seemed, brought a few of the series’s characters to focus on what was truly important in life. Daenerys, for all her flaws, chose to go north and fight in the war that mattered rather than the petty squabble in the south. We saw hope emerging, despite a desolate world. This is something I do with Farisa, too; I think I’m far from alone in that approach. We love it when beautiful, competent people exist despite a horrible world. When it comes to George R. R. Martin, I’d respect him for sticking with grimdark; I will also respect him if he gives us a more traditionally meaningful ending.

The HBO series broke away from grimdark. They seemed committed to the pro-meaning side. So, while we knew we’d get a tragic ending, we expected to be freed from the depressing nihilism of grimdark. Right?

Nopers! Why? Because fuck you, that’s why.

The Walkers were dispatched in a “Long Night” that seemed to last a regular night. Brienne became Jaime’s one-night-stand– because women are totally at their best when used as plot pieces to showcase a man’s moral depravity. Daenerys went on the rag and torched fifty thousand innocent people, because that is totally a realistic depiction of mental illness, and because women are emotionally unstable and that’s why they commit most of the violent crimes. (Oh, wait. No. That’s wrong.) Villains like Euron and Cersei who deserved humiliating deaths got I-die-but-I-win Heisenbergian endings, without the positive character traits that made it acceptable for Mr. White to get such an end.

It would have been fine (as I’ve said) to stick with grimdark and continue on to what appear to be Martin’s depressing plot points. There is no artistic reason why Daenerys couldn’t become a war criminal; it was foreshadowed that she might. It would have also been fine to continue to diverge from the apparent moral nihilism of the early work, and stick with the traditional heroism we saw toward the end. I feel utterly manipulated by HBO, though, for the switchback and the utter repeal of character growth.

The ultra-cynical view of this is that HBO deliberately gave us a more palatable (less nihilistic) series– knowing that high production values and copious nudity would only keep viewers invested for so long in an otherwise nihilistic story– only to swerve back into nihilism, perhaps in order to anger people and maximize buzz for the show.

1 + 1 = -5

Even though there are no rules in writing, there are rules. At least, there are guidelines. Sol Stein famously says, “One plus one equals one-half.” This is an observation of what I call rhetorical non-monotonicity.

To explain the notion, let’s consider that a die-hard logician (or, say, a number-crunching Bayesian) must perceive additional evidence as only making an argument stronger. That’s monotonicity. A strong argument followed by a weak argument is strictly more evidence than a strong argument alone.

Of course, that’s not how we respond to real arguments made by real humans, and there are strong social reasons for us to be that way. For example, if I’m trying to sell you on creationism (which I’m not, because while a theist/deist, I believe evolution by natural selection is true) I might point out all the “holes” in evolution. I believe that irreducible complexity arguments are flawed and lead to an incorrect conclusion, but there is enough meat to them to merit further study and (likely) refutation. Now, compare Presentations A and B. Under A, the proponent of creationism focuses on biological arguments alone; under B, he makes all the same arguments and continues on to say, “Evolution is also untrue because it contradicts the Christian Bible, which millions of people believe to be literally true.” Which is more convincing? If we expect logical monotonicity, then B (which presents a stronger argument, then a weak one) is. However, most of us would find presentation A more convincing; B supplements A’s case with an additional social-proof argument– and we know from our history that those are nearly useless in assessing scientific truth. We feel, after Presentation B, that we’ve endured a sales pitch. That’s rhetorical non-monotonicity in action.

Rhetorical non-monotonicity applies to art, as well, and writing in particular. One of the oft-cited “rules” to writing is “Show, don’t tell”. Actually, writers “tell” all the time. Showing one element often involves telling three supporting details. Show-don’t-tell expansions have to end at some point, lest the story ramble on for a million words. What often falls flat is to show and tell.

For example, consider this snippet: “It was a beautiful day. The sun shone, the clouds were brighter than clean linen, and the faint golden cast of the October wood suggested treasure within.” In most cases, one of those two sentences (the first tells; the second shows) ought to be cut. Which one? It depends on context. But, together, they weaken each other. That’s a case of one plus one equaling one-half.

Intensifying adjectives and adverbs function the same way. Overselling only works with ironic intent; when a serial killer narrates, “It was a beautiful day”, he is not commenting on the weather. There’s nothing wrong with the sentence, “We knew we were totally safe here,” so long as the POV character is not safe there.

It is, in my view, extraordinarily difficult for a writer to know when one plus one is one-half and when it is two. In the second book of a series, the writer must offer a “reboot” that repeats details of book one. And to offer exactly three details (“rule of three”) in escalating power seems (although it is arguably repetitive, to suggest the same principle in three ways) remarkably effective.

There are times when one plus one equals one-half. “It was a very beautiful day” is a weaker sentence than “It was a beautiful day”. There are times, though, when one plus one equals minus five. Unskillful writing draws attention to itself. Metafictional elements and fourth-wall breaks can spice up the middle of a story, but toward the end they approach the sin of bathos. Most readers or viewers aren’t aware of the specific artistic sin; they just have a sense that the work “feels off” or is bad. Although Daenerys’s turn toward irrational evil is not bathos, and although it has been foreshadowed that she may become a destructive force or an antagonist, her turn feels “off” because of writing that has called too much attention to itself.

M. Night Shyamalan is infamous for his overuse of plot twists– his twists often call so much attention to themselves that they feel forced, like they exist for the sake of plot and are not organic. In general, twists follow a “zero, one, or many” principle. A straightforward story with no twists can work; a story with a single twist can work. Thrillers use frequent twists as a matter of course. What rarely works is to have two major twists– especially when the second negates the first. In that case, plot will always draw attention to itself and “feel off”. One plus one will equal minus five.

That’s what we got with HBO’s Game of Thrones. The show diverged from apparent nihilism and toward a more traditional heroic epic. A character we hated from the first episode, Jaime Lannister, showed his nobility in the Northern Crisis. And then, to hit what appear to be Martin’s original plot points, the show swerved back into grimdark nihilism, leaving us as viewers to feel cheated.

Let me propose one fix to the plot of Thrones.

The Rules of the Fix

I don’t want to add personal touches. I’m playing the role of a subordinate editor whose job is to make the product better. So I’ll aim for minimalism in my fixes. In particular, I’ll keep the plot mostly as-is, including the burning of King’s Landing by Drogon. The characters who die will still die, and around the same time.

I will, however, attempt to fix the egregious flaws of craft, without making major changes to the story as aired.

Here goes.

Euron (and Rhaegal)

Of all the characters the showrunners changed for the worse, Euron Greyjoy ranks at the top. They seem to have gotten him and his purpose entirely wrong.

Sure, he’s evil; but Euron of the books isn’t “just evil”. We’ve already grown tired of regular evil: Joffrey, Ramsay Bolton, and Cersei Lannister. We’ve endured horrible people for eight seasons.

In the books, Euron steps beyond petty sadism and bland ambition; he has something the TV series has written out of him: magic.

In the books, he’s not just a bad guy. He’s a menace equivalent in threat level to the Others (in the show, White Walkers). He has a magical horn that is believed to control dragons. He’s been to Valyria, a dangerous ruined city that has become Ice and Fire’s version of hell; and he’s been trained in Asshai, the ultra-Lovecraftian capital of magic. He could probably teach Melisandre a thing or two.

The Others seem to use inhuman ice magic; Euron brings the opposite: fire magic, with the distinctly human elements of narcissism, cruelty, and ambition.

However, for some reason I’ll never understand, HBO took away his magic. The showrunners turned him into a dopey pirate and a dirtbag pickup artist. Epic fail.

Stripping Euron of his magic broke other bits of the show. For one thing, it required Daenerys’s stupidity to bring about the death of Rhaegal. (I think it was right to have Euron kill Rhaegal, as he is the ice/fire dual of the Night King and therefore ought to kill one out of symmetry; but it should have been better executed.) Furthermore, since the bad guys lost their mage, Bran had to be rendered mostly useless, lest the good side be over-powered.

I would have killed and abused the dragons in a different way.

Two of the dragons (Viserion, Drogon) were named after bad men. It made sense for them to be put to evil purposes, one way or another.

Rhaegal, though, was named after a good man who died with a bad reputation. Most fitting, I think, would be not to let Euron control that dragon, but to allow Euron the image of a dragon. For Euron to use Rhaegal’s visage for illusionism would fit the theme of the series (“power resides where men think it resides”). Euron could become a tyrant of King’s Landing on the hologram of a dragon alone.

There’s more to say about fixing Euron; I’ll get to that later on when I cover Daenerys and Drogon.

Brienne

I like that HBO made Brienne interesting. In the books, her chapters are a boring slog. The series put life into an underutilized character. Good on them for that.

Unfortunately, a genuine arc of platonic affection and mutual respect was trashed in favor of supernumerary on-screen sexing. We ought to be beyond the point in time where writers use female characters as props to showcase a man’s moral failures. It’s the current year, people.

I would rip out everything that happens after the Long Night. Brienne deserves better and so, for that matter, does Jaime.

Brienne has a new home in the North and, certainly, a role to play in rebuilding society after the catastrophe. If Sansa becomes Queen of the North, she can be Queensguard.

Cersei

Don’t get me wrong: it’s painful to be crushed by rocks. In prose, that could be written as the sort of death we feel Cersei deserves, with her screaming in the darkness until she runs out of oxygen. Slow carbon dioxide poisoning is (quite likely) worse than mere strangulation; it’s a terminal panic attack that can go on for hours. Burial alive is a frightening way to go.

Still, Cersei’s death isn’t cathartic on camera. It’s far too impersonal. We feel let down; we deserve more.

Moreover, the “emotional” reunion between the ex-twin/lovers was unskillful beyond description. I am more moved by an average fart than I was by that scene.

To get Cersei’s death right is challenging. I believe I know what the “perfect death” for that character is, but before I get into that, let’s consider what doesn’t work.

First, an ultra-violent death (meaning, one that exceeds the regular violence of the show) would fail; the gore would get in the way of catharsis. If we took joy in Cersei’s torture (or worse) we’d be as bad as she is. We don’t actually want to see her get the same treatment she and “Robert Strong” gave Septa Unella. That would make us the bad guys.

Second: Cersei can’t get the “villain dies laughing” death (that was unskillfully given to Euron) because that only works for chaotic evil. A true chaotic-evil villain is like a rabid dog that must be destroyed. There’s no joy in seeing such a villain suffer. But when the villain is humanly evil– neutral evil– we demand that she suffer. The ending need not be violent (and it is most skillful, often, when it is not) but it must tear her apart, psychologically, like the Bastille was dismantled brick by brick on July 14, 1789.

Public shame can work, but in Cersei’s case, it was already done (Sept of Baelor) and she came back from it. So that’s not enough, when it comes to Cersei’s death.

Noting these challenges, I believe the perfect death for Cersei is… to die at the hands of King’s Landing’s street children.

Her death ought to repay debts (as Lannisters do) to the poorest of the poor– and the orphans her wars have made. I’ll leave the level of violence to the writers. It could be an off-camera clubbing, where the viewers only get one terrified scream. Or it could be a bloody, protracted slicing-apart with dirty seashells, one that leaves her flesh in ribbons. It doesn’t matter, from my artistic perspective, whether it’s a painful death or a regular (by Thrones standards) one that she gets.

This death is perfect for Cersei. To start, she hates the common people and they hate her. Moreover, her conceit is that her evil is all done for her children. (Side question: why did viewers hate Skyler on Breaking Bad, even though she saw through that conceit, when they love Tyrion, who fell for that lie?) Thus, she ought to be killed by children among the millions of other people’s kids that she didn’t give a damn about.

That notion of justice is satisfying, too, in the context of today’s world. Consider that the entire global-corporate-capitalist system is powered not by otherworldly evil (Euron) or stupid sadism (Joffrey) but by those who do not think of themselves as evil, but nonetheless do evil things to acquire and preserve zero-sum advantages for their own progeny. I’m not an anti-natalist, but the kibbutzniks who disavowed the purported “virtue of family”– the narcissistic tendency of humans to care about the future only in the context of a tiny number of its citizens– were on to something. More crimes are committed to keep corporate executives’ children in private schools than for the “evil” reasons more typical of narrative. To have Cersei killed by Flea Bottom urchins offers the same poetic justice as does the humiliation of parents who participated in the recent college admissions scandal.

Cersei ruined the world for everyone else’s children, so her children could rule. I can think of no better death for her.

Cleganebowl

Some people said this fight was “fan service” that didn’t belong. I disagree.

Sure, it was an over-the-top, comic-book battle, set against a background of fire, on a stairway to nowhere. It served little narrative purpose, but that’s OK. In fact, I appreciate the irony of Sandor’s quest to kill (again) what is already dead and repulsive. It fits Sandor’s style of dark humor.

No changes. Cleganebowl stays.

Arya

We’ll probably see Arya assassinate someone in the last episode of Thrones, but I would have used her in a specific way that makes sense in the context of Ice and Fire.

Arya’s chaotic good, so she must assassinate someone who deserves it. She killed the Night King, the “big bad” of ice magic. Who deserves to die, and furthermore functions as a “big bad” of fire magic? Euron, if his original powers were restored.

Jaime killing Euron makes little sense. That was a useless fight. If its purpose was to demonstrate the value of the prize that is Cersei, through the social proof of two failed men fighting over her, well… the whole device failed. Throw that out. Jaime doesn’t kill Euron at all; they never meet.

Arya kills Euron. Whose face does she use? Cersei’s, after the street children are done with her. It fits Cersei, too.  The queen cared about her image; for Arya to (literally) rip her face away from her is (once again) poetic justice.

After that, I like the idea of Arya retiring from killing, and going out for a more nonviolent sort of adventure. Though Thrones ought to remain a tragedy, not everyone needs to get a miserable ending.

Gendry

I would have Arya and Gendry end up together. The scene between the two, under the belief that it will lead into a romantic relationship, was an example (rare in Thrones) of positive sexuality. I don’t know why anyone would have a problem with it. Don’t get me wrong; I’d be furious if an author explicitly said I could name my daughter after a character, then paired her with a bad boy. Gendry, though, is about as far from a bad boy as it gets.

Arya’s initial rejection of him (or, more precisely, her rejection of the role as a traditional lady) makes sense and is not unusual by the standards of a romantic arc. The characters ought to be able to make it work. Gendry seems like a genuinely good human; I don’t think he’d force her to choose between adventure (whatever that means in her next phase of life) and romantic love. He should want for her to have both. And she should be able to have both. Again, it’s the current year.

Daenerys

The diablo ex machina of the penultimate episode made this woman’s arc a joke.

I expected Daenerys to become, if not a villain, an antagonist toward the end of the series. There was plenty of foreshadowing for that. What I find absurd and a bit offensive is the suggestion that “madness” causes principled people to snap and sic their nuke lizards on cities. That is not how actual mental illness works. It sets in gradually, and people with it are no more violent, in general, than the rest of the population.

We also endure the “bitches be cray” trope; a woman proves herself unfit to rule by committing a war crime, just because she can.

There are contexts in which I would believe Dany would make the wrong decision. Game of Thrones did not provide one. Daenerys appeared to snap because the plot needed her to snap.

Furthermore, there’s nowhere to go from this terrible plot point. What can really be done with Daenerys? She can be killed; we can watch her die; that’s about it. The penultimate episode’s twist traded an interesting heroine for a villain whose defining traits are (a) that she was once semi-good and (b) that she has a dragon. Boring.

She deserved better. I don’t mind heroes becoming villains over time. I don’t mind twist antagonists. I don’t mind Daenerys taking a tragic turn. There’s nothing wrong with any of that.

The twist in her character was implausible. Yes, the beheading of her confidante was terrible, but this woman has seen (and brought) war. If she had pure evil in her, we would have seen it long ago. We saw a principled but ruthless character that it was hard to root for, but we did not see pure evil.

Moral mystery characters, like Snape in Harry Potter and Rhaegar in this series, are easier to turn for good than evil. To do the latter takes far more skill and setup time than we saw.

As I would edit the final series, I’d make Daenerys innocent of the major crime (which still occurs). That’s not because I love the character. I don’t. I think she’s immature, dogmatic, and (like Jon, and unlike Sansa) unfit to rule. However, I don’t buy her sudden change into a war criminal. Antagonist, sure. Person who makes a bad decision under pressure, sure. Villain, maybe. Psychopath? No. We have already seen that she is not one.

The Burning of King’s Landing

The burning of King’s Landing is thematically necessary. I’m not going to shoehorn Thrones into a happy ending. It is a tragedy and it does not shrink from the horrors of war, including its morose effects on the common people. Millions were going to starve already, because of the dreadful conflict; but starvation isn’t nearly as cinematic as for a city to go up in flames. The Sack of King’s Landing stays.

So, if the burning still happens but Daenerys isn’t responsible, then who is?

We return to Euron.

His magic makes him far more menacing than the Golden Company. The Golden Company is easy to hate. People who’ll fight for Cersei’s money are like hit men in Sin City: no matter what you do to ’em, you don’t feel bad. They serve two purposes. One is (as mentioned) as mooks to get blown up or burned. Two: Cersei’s resorting to mercenaries shows that she has lost the respect of her people.

Machiavelli, though, gave us the final word on mercenaries: they don’t fight during winter, and it’s winter.

The Golden Company gives us the right to cheer when some people are burned alive, but they’re not that scary– especially given that HBO took away their elephants, because mahouts plus dragons > budget. Euron, as a magician, is terrifying.

Make him a warg.

After Arya kills Euron, he uses his last breath to warg into Drogon. Bran knows it’s coming, so he wargs in as well. Now we get a mind fight, inside a dragon, between two of the most powerful mages in the world: a Stark warg (ice mage) who has been north of the Wall, and a fire mage who’s been to Valyria and Asshai.

Daenerys fights to pound sense into her child. Drogon (while Euron is in control) ruins King’s Landing. Bran wins the warg-fight but the dragon ends up dead (as well as Euron). Daenerys lives.

She’s innocent, but the world doesn’t know that.

HBO appears to have turned this heroine into a war criminal. It’s possible that Drogon was warged by some evil mage, but that hasn’t been foreshadowed. It appears that Daenerys has broken bad and it’s a fait accompli. There’s nothing left to do but kill her, and there’ll be little emotion or catharsis because people have died for far less. By Westerosi logic, to execute Daenerys-the-war-criminal is the only thing that can be done. There’s no choice and, when there’s no choice, there’s no drama.

Daenerys-the-innocent, who nearly died riding a warged dragon, and who appears to have done something unforgivable… now she’s a source of drama. An innocent who appears to the world to be guilty? That’s painful to watch, sure, but good writers make it hard for the heroes.

Tyrion

Tyrion’s bright and he loves to argue. We’ve seen him argue for himself. He sometimes loses, but he’s good at it. Now, in the past, he’s been competent– but a competent person becomes heroic when she uses her strength not for herself, but toward another’s benefit.

The best lawyer in all of Westeros ought to be the first to believe Daenerys’s account– that something outside her comprehension happened, and she lost control of Drogon. We’re putting Tyrion into 12 Angry Men. All of Westeros thinks she’s guilty; he thinks (knows) she’s innocent.

We’ve learned that Tyrion’s brilliance was of the old world, when reputation and the throne mattered most. He’s not very competent in the new one. Considering that, and the fact that Ice and Fire is almost certainly a tragedy and will hold Daenerys accountable for bad decisions in the past (including the burning-alive of Dickon Tarly); I have to conclude that Tyrion is unsuccessful and Daenerys must die.

Personally, I’d put Samwell Tarly on the tribunal that must decide on Daenerys’s innocence, and although I like that character, I’d have him get it wrong (just as he was a liability in the Battle of Winterfell). His vote for her guilt, in (say) a 4–3 decision, dooms her.

Tyrion, as Daenerys’s advocate, does not die; in fact, his reputation improves in the following years as Westeros learns of her innocence. But he lives out the rest of his life in the Free Cities. Tyrion’s tragic turn, in the series, began when he rejected the opportunity to flee with Shae (who genuinely loved him, unlike in the books) to the Free Cities. His exile from Westeros (possibly self-imposed) must atone for this.

Jon Snow

Jon Snow, as interim ruler of Westeros, recused himself from the vote on Daenerys’s innocence, and he does not want to kill her, but by honor he must. “He who passes the sentence should swing the sword.”

After killing Daenerys, Jon is in no emotional state to rule and installs in power a man with a weak personality who has as many reasons as the others to be disgusted by the game of thrones: Edmure Tully. He’s the interim leader of the Seven (or Six?) Kingdoms while a constitution is drafted.

Jon goes home to rebuild the North. Samwell Tarly goes with him.

He (or Samwell) demands to know, from Bran, whether Daenerys was innocent. They find out that they were wrong. Jon and Samwell, stricken by grief and guilt, decide to honor Daenerys’s memory by building a better Westeros. Jon offers his services to the north and reconstitutes the Night’s Watch, this time focused on protecting rather than warring against the Free Folk. He strives for positive relations with the Children of the Forest and (if relevant) the Others/White Walkers.

Alternate, darker possibility: the White Walkers aren’t dead. In fact, the dragons’ death has made their spooky and possibly malignant ice magic stronger. Jon Snow enlists their aid for the purpose of reviving Daenerys. As in “The Monkey’s Paw” by W. W. Jacobs, it doesn’t end well.

Sansa

Edmure, as interim ruler of (southern) Westeros, is largely ineffectual. Jon is an emotional wreck after (a) killing his lover-aunt and (b) learning she was innocent of the crime for which he killed her. Though both men have the wish to build a better Westeros, Sansa must take the lead at rebuilding society.

(This assumes that HBO doesn’t erase her character growth, as it has done far too often, and make her a naif again.)

Sansa would be good at this. She’d probably enjoy it, too. Sansa might be elected the first president of the Constitutional Republic of Westeros.

Davos

Davos began the series illiterate, but he was always smart. He should become a maester. Perhaps he is the one (rather than Samwell) who writes the final account of what happened in Westeros. Having seen war and the unnecessary death of an innocent girl (Shireen), Davos writes an account of the myriad Westerosi disasters so they never occur again. Samwell may play a role in this effort, but I don’t see him taking the lead in it; he is too shaken by his role in Daenerys’s death.

Bran’s ending

It’s hard to give Bran a satisfying ending. In my series, Farisa (if she lives) presents a similar problem. After a “happy ending”, banal human evil must still exist; what role is there for an ultra-powerful but good mage? Does the mage intervene (and thereby deprive humans of autonomy) or retire from practice? I don’t want to self-spoil Farisa (although I have not fully decided on an ending, being several books away from it) so I’ll tread carefully.

With Bran, we assume that he’s “a good guy” because of his family name, but we know (a)  that he no longer considers himself a Stark, and (b) from the books that many historical Starks were not good. Bloodraven was a morally complicated character before he became the Three-Eyed Raven and there’s no reason to assume he “became good” after his transition. So, Bran’s moral status remains opaque, and what did Old Nan say? “Crows are liars.” (And what does the name Bran mean?) Perhaps Bran’s agenda is darker than we expected.

Still, little foreshadows Bran turning evil, and it seems unlikely that ultra-good characters such as Jojen and Meera Reed would traffic him to a destiny they knew to be malign. So, it seems more likely than not that Bran should end as a force for good.

Perhaps Bran is the first to recognize that the Northern Crisis (from Walkers, or the Children of the Forest, or something new like those ice spiders we were promised) isn’t over. He uses his magic to rebuild the wall… like Bran the Builder.

There are darker possibilities, too: I find the theory that Bran accidentally created and in some sense “is” the Night King to be (in some form) fairly credible. Without such a tragic element, the affliction of Hodor seems a bit of a boondoggle. This being said, I think it would take several episodes to properly service this element of the story. Under the six-episode constraint, I’d use Bran to support the political happy ending (as Westeros’s memory, he helps the world heal) rather than giving him a personal tragic turn.

The Prophecies

I’m not going to focus on “the Prince who was Promised” or any of those other prophecies. I don’t much care about them. The view I take of Martin’s work is that all the gods are equally nonexistent. If prophesied plot points can be hit, great; I don’t find it worth it to bend the story out of shape to service those, though.

Jaime

Jaime, as I would end the series, still returns to King’s Landing– but to kill Cersei, and not because she sent Bronn to kill him, but because she sent him after Tyrion. Still, when Jaime sees the street children killing his twin sister, he considers this too cruel a death for her and tries to intervene. But he’s an aging knight whose power (like Tyrion’s old-world learning) is no good these days, and he cannot.

After being unable to protect her, he either kills himself or (more hopefully) returns to the North to build his new life.

Qyburn

No changes. That man was (I believe) heavily inspired by Dr. Mengele, and I’m all for that sick fuck being killed by his own morbid creation rather than dying in South America or Pentos.

The Iron Throne

The true villains of Game of Thrones are outmoded ideas: a “true king” (divine right) or, in Daenerys’s case, a true queen. If anyone sits on the Iron Throne, we perceive Westeros as getting a thousand years of (to quote Jon Snow) “more of the same”.

No family (Tyrion excepted) represents “more of the same” as much as the Lannisters. Gold, greed, and extreme conservatism are their trade. They are the nihilism of Tyrion’s alcoholism as well as modern-day corporate America. Ice and Fire is about many things, but one among them is the delicious fall of the Lannisters. Tywin wanted a “dinn-a-stee” that would last a thousand years; well, nope.

So who gets the Iron Throne? Well, we could put it in a museum, but someone should still get to sit on it. It just seems right.

There is one living Lannister who is innocent: King Tommen’s cat, Ser Pounce.

Illyrio Mopatis, the slave trader who started all of this, is captured and lives out his days as “The Paw” (or, alternately, “The Scoop”) and cleans out the litter box.


A Computational Understanding of ZFC Set Theory

When I first encountered the ZFC set theory axioms, the notion that they in particular should be the foundation of mathematics struck me as not making a great deal of sense. What was this Axiom of Foundation:

∀x ((∃y (y ∈ x)) → (∃z ((z ∈ x) ∧ (∀w ~((w ∈ z) ∧ (w ∈ x))))))

and why would anyone care? What do we learn about practical mathematics from this logical sentence? What makes ZFC the mathematics we care about?

Further study in mathematics tells us: not all that much. ZFC isn’t special. Set theorists study all sorts of axiomatic systems, some stronger and others weaker; ZFC happens to be a point of comfort, a level of mathematical proving strength that comfortably gives us the mathematics we want (and, arguably, strange mathematical results that a few might not want). It is not strong enough to prove all that is true, because no logical system is. Nor can it, without appealing to a stronger logical system, prove that it contains no contradictions.

So, why ZFC? The axioms suffice to construct a mathematics that does not seem to diverge in any meaningful way from the “real world mathematics” (to be discussed later) that powers the sciences. A point in space, it seems, requires three real numbers to define it. We do not actually know the fine-grained nature of physical space, but it feels like a continuum.

The truth is that ZFC isn’t “the foundation of mathematics” but the foundation of a mathematics, one that (a) arguably suffices for real-world purposes, (b) allows us to reason about infinite objects far beyond what seems to exist in the real world, while (c) being small enough for its axioms(*) to fit on a blackboard.

(*) Except, of course, for the need to introduce two axiom schemas, but that shall be covered.

What is ZFC?

Zermelo-Fraenkel set theory with Choice is a first-order logic with equality. Its syntax includes:

  • The connectives of propositional logic: ~ ∧ ∨ → ⇔.
  • An equality symbol: =.
  • Grouping symbols ( and ) are used to specify order of operations. Spaces are used for clarity but have no meaning.
  • As many variable symbols (lower-case letters x, y, z, and so on) as are needed.
  • The universal and existential quantifiers: ∀ and ∃.

All of these are called logical symbols; there is also one domain-specific non-logical symbol, ∈, denoting set membership.

The logical symbols’ meanings and corresponding axioms are not included among ZFC’s axioms. All propositional tautologies are considered theorems of ZFC, but we shall not discuss them further. The same holds for first-order sentences that can be proven without using any set theory, like:

(x = y) → (y = x)

which is invariably presented as an axiom attendant to equality; or:

∀x (x = z) → ∃x (x = z)

which follows from the quantifiers being defined in the standard way (if something is true of all x, it must be true of at least one x by definition).

When we discuss ZFC, we’ll present the axioms that define for us:

  • what things are and are not sets.
  • how they relate to each other, using the added non-logical symbol, ∈.

When we speak of formal logic, we do not need to interpret ∈. Formal logical systems do not describe what is “true” or “false” in any real world, but what is provable (a theorem) from the axioms. Which statements are theorems and which are not is derivable syntactically (or “typographically”) from the symbols themselves, without caring about their meaning. A machine could do the job.

All we know is that:

  • Some sentences, like ∃x∀y ~(y∈x) (there is an empty set) are theorems of ZFC.
  • Some valid sentences, like ∃x (x∈x), are non-theorems of ZFC. They may be negations of known theorems, or they may be unprovable.
  • Some strings of logical symbols, like ∃∈∈x→~, are not valid sentences and shall not be considered further.

Our lack of commitment to interpretations of these symbols– using only that the logical symbols have the familiar axioms, with ∈ to be introduced and applied according to ZFC’s axioms, to be given below– is necessary because, from first principles, we don’t yet know anything about the collection of values over which the variables– the x‘s quantified over by existential (∃x) and universal (∀x) quantifiers– range. We haven’t built a model that tells us that.

In set theory, these variables always range over sets. But what is a set? We haven’t determined that yet.

Equality has a special meaning: if x = y, they are the same object and perfectly interchangeable with one another. That is, they are elements of the same things and have the same elements or, more formally:

(x = y) → ∀z ((x ∈ z) ⇔ (y ∈ z)), and

(x = y) → ∀z ((z ∈ x) ⇔ (z ∈ y))

Now that we’ve done this work, let’s introduce ZFC’s axioms.

Axiom 0: Sets Exist

In fact, we’ll make a stronger claim: the empty set exists.

∃x∀y ~(y ∈ x)

Literally, “there exists a set that all sets are not in”; more legibly, there exists a set with nothing in it.

We could derive the existence of this set from the Axiom of Infinity (which declares that an infinite set exists) and Comprehension (using a logical formula that is always false). We technically do not need it. So, why include it? From a programmer’s perspective, it helps. What’s the first thing one wants to know about a data structure? How does one create it? We now have the empty function (or method) of the Set class. Furthermore, since sets are immutable values, we can treat empty as a value.
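The constructor analogy can be sketched in Python (my own illustration, using the immutable frozenset to stand in for sets-as-values; nothing here is part of ZFC itself):

```python
# Model sets as immutable values, per the "empty as a value" point.
# Axiom 0 plays the role of a constructor: the empty set exists.
empty = frozenset()

assert len(empty) == 0       # "there exists a set that all sets are not in"
assert empty == frozenset()  # being a value, it can be compared freely
```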

Axiom 1: Extensionality, which lets us prove things equal.

We have an empty set. How many do we have?

It would be annoying if we had to keep track of a myriad of empty sets, all of which had differing “personalities” irrelevant to the theory. Computers must handle this extraneous detail: equivalent data structures have to be tracked separately when they are mutable. In set theory, though, we deal with objects that never change.

Two sets are equal if they contain the same elements. This is not innate to logical systems; we must make it an axiom of our set theory.

∀x∀y ((∀z ((z ∈ x) ⇔ (z ∈ y))) → (x = y))

This, in essence, gives us an equals function– or, at least, the notion required to build one. We also know everything about a set if we know its elements.

So, there is only one empty set, and we know all we need to know– there is nothing in it.

By Extensionality, there is no meaningful distinction between {x} and {x, x} or between {x, y} and {y, x}. We’ll discuss how to handle that with our next axiom.
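Python’s frozenset is extensional in exactly this sense, which makes for a quick illustration (my own, not from the text):

```python
# Extensionality: a set is determined entirely by its elements.
a = frozenset({1, 2, 3})
b = frozenset({3, 2, 1, 1})  # repetition and order are invisible

assert a == b                         # same elements, therefore the same set
assert frozenset() == frozenset([])   # and there is only one empty set
```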

Axiom 2: Pairing, which lets us build sets.

The Axiom of Pairing gives us the unordered world’s equivalent of a cons cell; given two sets, we can always create the set containing both.

∀x∀y ∃z ∀w (((w = x) ∨ (w = y)) ⇔ (w ∈ z))

This will have one or two elements; one, in the special case where x = y.

We shall later see that Pairing is redundant. It could be proven using Infinity, Comprehension, and Replacement, to be discussed below; but that would be monstrously inelegant. From a computational perspective, we would not construct an ordered pair by constructing an infinite data structure, then choosing two elements.

Pairing upholds the notion of sets as containing more than one thing. It also indicates that there is only one set universe; there are no x and y that live in disjoint universes, because ZFC will always contain {x, y}.

From this, we can build the notion of an ordered pair: we interpret {{x}, {x, y}} as <x, y>.

So far, though, we can only build sets with zero, one, or two elements. We’re quite limited in what we can do.
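A sketch of Pairing and the Kuratowski ordered pair in Python (the function names are my own; frozensets stand in for sets):

```python
# Axiom of Pairing: from sets x and y, the set {x, y} exists.
def pair(x, y):
    return frozenset({x, y})

# Kuratowski encoding: interpret {{x}, {x, y}} as the ordered pair <x, y>.
def ordered_pair(x, y):
    return frozenset({frozenset({x}), pair(x, y)})

e = frozenset()
assert len(pair(e, e)) == 1                      # {x, x} collapses to {x}
assert ordered_pair(1, 2) != ordered_pair(2, 1)  # order now matters
```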

Axiom 3: Union, which flattens set structures and yields bigger sets.

The Axiom of Union allows us to collect all the sets that live in a given set into one.

∀x ∃u ∀y ((y ∈ u) ⇔ ∃t ((y ∈ t) ∧ (t ∈ x)))

So, from {{x, y}, {z, w}} we can make {x, y, z, w}. We can think of this as a flatten function that removes (or interprets) one level of set structure, or as set theory’s natural notion of what functional programming calls a reduce.

The binary union operator (∪) is a special case of this axiom: x ∪ y is the set that must exist by applying Union to {x, y}, which must exist by Pairing.

This gives us the ability to generate the fundamental objects of arithmetic, the natural numbers.

0     := {}

x + 1 := x ∪ {x}

It may not be meaningful to say, in an absolute sense, that the natural number 3 is the set constructed in this way; but we can interpret that set as 3. (Building the rest of arithmetic, such as addition and multiplication, can be done in the first-order logic of ZFC; but it is tedious and will not be done here.) Note that this 3 is defined as the set {0, 1, 2}; this practice continues with infinite ordinals: for example, the smallest infinite ordinal ω is identified with the set of all natural numbers.
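The flatten/reduce reading of Union, and the von Neumann naturals, can be sketched in Python (my own illustration; frozensets model sets and the function names are mine):

```python
from functools import reduce

# Axiom of Union: collect the members-of-members of x into one set.
def union(x):
    return reduce(lambda acc, t: acc | t, x, frozenset())

# Von Neumann naturals: 0 = {}, n + 1 = n ∪ {n}.
def succ(n):
    return n | frozenset({n})

zero = frozenset()
one = succ(zero)
two = succ(one)
three = succ(two)

# flatten: {{x, y}, {z, w}} becomes {x, y, z, w}
assert union(frozenset({frozenset({1, 2}), frozenset({3, 4})})) == frozenset({1, 2, 3, 4})
# the natural number 3, under this interpretation, is the set {0, 1, 2}
assert three == frozenset({zero, one, two})
```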

Axiom (Schema) 4: Comprehension, the filter of sets.

We’ve developed tools for making sets larger. Comprehension gives us a tool for making them smaller. A definable subset of a set ought also to be a set.

For any well-formed logical formula with one free variable, F, we have:

∀x ∃y ∀z ((z ∈ y) ⇔ ((z ∈ x) ∧ F[z]))

In other words, given a set x and a definable logical property F, the set y containing exactly those elements z of x for which F[z] holds is itself a set. This corresponds to filter in functional programming, with F taking the role of the function argument.

This is not, technically speaking, an axiom. It’s an axiom schema. It represents a countably infinite set of axioms, one for each of the F that can be defined. In this way, ZFC doesn’t have nine (or so) axioms but infinitely many. That turns out to be no problem in practice. As long as a computer can check within finite time whether a sentence is or is not an axiom, we’re okay.

We do not need Comprehension (or Replacement, to be discussed) in the hereditarily finite world– the world where all sets (including the sets within sets) are finite. Nor do we need the Axiom of the Power Set. We will need these axioms if we want to wrangle with infinite sets, which of course we do.

Axiom (Schema) 5: Replacement or map

One of the most subtle axioms of set theory is that of Replacement. It seems less powerful than it is. In fact, the largest infinities known to ZFC can only be constructed with Replacement. This is counterintuitive. In the finite world, the Power Set Axiom generates “much larger” sets (from size n to size 2^n) but Replacement does not; used alone, it creates a set no larger than an existing set.

The Axiom Schema of Replacement gives us, for every logical formula F with two free variables such that F[a, b] is true for exactly one b per a, the following sentence.

∀x ∃y (∀a ∀b  (a ∈ x) → F[a, b] → (b ∈ y)) ∧ (∀c (c ∈ y) → ∃d ((d ∈ x) ∧ F[d, c]))

The F described behaves like a function from the elements of x to those of y. It says that if {a, b, c, …} is a set, then so is {f(a), f(b), f(c), …}.

You can get a simpler formulation if you demand that F be one-to-one; then you have:

∀x ∃y ∀a ∀b (F[a, b] → ((a ∈ x) ⇔ (b ∈ y)))

In ZFC, both presentations– the simpler one mandating a one-to-one F, or the more complex one without that restriction– generate the same sets, and result in the same theory.

In computational terms, this is akin to that crown jewel of functional programming, map.
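A Python sketch of the analogy (replacement is my name; the definable F is represented by an ordinary function f):

```python
def replacement(x, f):
    """Replacement: the image {f(a) : a ∈ x} of a set under a function."""
    return frozenset(f(a) for a in x)

x = frozenset({1, 2, 3})
print(sorted(replacement(x, lambda a: a * a)))    # [1, 4, 9]

# Used alone, Replacement never enlarges a set: images can only
# shrink (when f collides) or stay the same size.
print(len(replacement(x, lambda a: 0)))           # 1
```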

Axiom 6: Power Set

The axiom of the Power Set is stated thus:

∀x ∃p (∀y (∀z (z ∈ y) → (z ∈ x)) ⇔ (y ∈ p))

In other words, for every x, there is a larger set p that contains all of x’s subsets y. In the finite world, this is strictly larger: we jump from n elements to 2^n elements. That turns out to be true in the infinite world, where size is a more complicated matter, as well. There is always a bigger set. Since there is no largest set, there is no “set of all sets”. We’ll get more specific on this when we discuss Foundation.
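In the finite world, the axiom is easy to realize as code (a sketch; power_set is my name for it):

```python
from itertools import combinations

def power_set(x):
    """Power Set: the set of all subsets of x."""
    elems = list(x)
    return frozenset(
        frozenset(c)
        for r in range(len(elems) + 1)
        for c in combinations(elems, r)
    )

x = frozenset({1, 2, 3})
p = power_set(x)
print(len(p))                          # 8 == 2**3: strictly bigger than x
print(frozenset() in p and x in p)     # True -- both {} and x itself are subsets
```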

Axiom 7: Infinity, where the fun begins.

We have the tools to build up (and tear down) all the natural numbers, and all finite sets of natural numbers, all finite sets of finite sets of natural numbers, and so on.

We don’t yet have infinite sets. None of our axioms provides that one must exist. By the tools we have, we can’t get to the infinite world from what we have.

We have infinitely many sets already. We have the natural numbers. That’s an outside-the-system judgment about what exists in our world thus far. We don’t yet have that {0, 1, 2, …} is itself a set. Since the purpose of set theory is to wrangle with notions of infinity, and since we need an axiomatic statement that one exists, we add:

∃n (∃z (z ∈ n) ∧ ∀x ((x ∈ n) → ∃s ((s ∈ n) ∧ (x ∈ s) ∧ ∀y ((y ∈ x) → (y ∈ s)))))

What does this say? It says that there exists some set n that is not empty (that is, it contains at least one element z) and that, for every x in it, contains the s(x) defined as x ∪ {x}. This will be a larger set than x, so long as we never have x ∈ x, which we’ll establish to be the case with Foundation, next.

We could, alternatively, make it an axiom that N, the set of natural numbers, exists. This is achieved by setting z to the empty set. In that case, we don’t need to use ∀x ~(x ∈ x), which we haven’t yet established. In ZFC, it is irrelevant whether we use the weaker statement that an infinite set exists or construct specifically the set of natural numbers; with Replacement, they are equivalent.

We’ve now reached a space beyond what we can compute. We can conceive of the infinite and we can write programs that never terminate (using a while loop) but we cannot store a completed infinity in a computer. As infinities get larger, and sets more complicated, the divergence between what “exists mathematically” and what we can realize, computationally, grows.

An aside on Replacement, massive infinities, and “ordinary mathematics”.

We define two sets x and y to have the same size (or cardinality) if we can define a function from x to y that is invertible (one-to-one) and covers all of y (onto). It turns out that for every set x, the power set P(x) is larger. So from the smallest infinity of the natural numbers, N, we get the larger infinities P(N), P(P(N)), P(P(P(N))), and so on.

If we exclude the Axiom of Replacement, we still have a mathematical universe that contains:

  • the natural numbers,
  • negative, rational, real, and complex numbers, which can be constructed using set-theoretic tools,
  • sets of (and sets of sets of, and sets of sets of sets of) mathematical entities above,
  • algebraic entities like groups, rings, fields, and vector spaces,
  • functions on the mathematical entities above (e.g., functions C^8 → C),
  • and higher-order functions that can accept as arguments or return the entities above.

So, even bizarre mathematical objects like {7, 3.14159…, 3+4i, {(λx → x + 3), -17, {}}} exist in our universe. We have ordinary mathematics– almost all of mathematics excluding set theory.

What don’t we have, without Replacement? We don’t have this thing as a set:

{N, P(N), P(P(N)), P(P(P(N))), … }

or its union; both of which, most of us feel, deserve to be sets. That massive infinite set, bigger than anything we encounter in ordinary mathematics, may not be something we need in daily math… but it seems that it deserves to exist in the mathematical universe.

Axiom 8: Foundation, the limiter.

So far, we’ve discussed axioms that create sets. We have the tools to create the sets we want. How do we avoid calling things sets that we don’t want?

First-order logic can’t limit size. We can’t say that our mathematical universe is “only yay big”. For example, ZFC without Replacement cannot generate the large set described above, but neither can it prove that entity not to be a set. So, there will always be more things in the heaven and earth of our first-order logic than are dreamt of in our philosophy; but, we can limit undesirable behaviors.

In programming terminology, we can impose a contract that sets must follow.

The Axiom of Foundation is the only one to place limits on set-ness, the only one that prevents us from making sets.

I put it toward the end because its formulation is the most opaque. It is:

∀x ((∃y (y ∈ x)) → ∃z ((z ∈ x) ∧ ∀w ~((w ∈ z) ∧ (w ∈ x))))

To which an appropriate response might be, What the hell does that mean?

The answer is: For every set x that is not empty (that is, has a y in it), there is a z in it that is disjoint from x.

We know what it means; the question, then, is, Why the hell do we care?

The most important consequence of this might be that no set contains itself. Therefore, we cannot define the set x = {x}. In computer science, self-containing data structures are admissible. (To paraphrase Trump, when you’re a *, pointers let you do it.) Mathematicians, however, don’t want their foundations to be self-referential. (There are mathematical structures, like graphs, that allow such recursion and self-reference, but sets themselves should not be.) Why? Russell’s Paradox. If sets can contain themselves, and all collections are sets, then define a “Russell set” as any set that doesn’t contain itself. Is the set of all Russell sets a Russell set? If the set of all Russell sets is a Russell set, then it contains itself, so it’s not a Russell set; if it’s not a Russell set, then it is. Mathematics becomes inconsistent and the world blows up. Clearly, mathematics can’t afford to confer set-ship on just anything.

The more general result of Foundation: there is no infinite descending chain of sets … ∈ c ∈ b ∈ a. It is a consequence of this that there are no self-containing sets or set-membership cycles; e.g., there are no x, y, z for which x ∈ y, y ∈ z, and z ∈ x.

An alternate way to think of this is that one can always “get to the bottom of” a set, even if it’s infinite. Or: imagine a single-player game like Nim, but instead of stones on a table (from which a player takes some, each turn) there is a set X “on the table”. Each turn, the player selects an element a of X and “places” that a on the table, unless a is the empty set, in which case the game terminates.

For example, letting N be the set of natural numbers, we could have the following state evolution:

Table: {3, 17, {4, 29, {6}}, P(N)}
... player chooses P(N)
... Table: P(N) = {{}, {0}, {1}, {0, 1}, ... }
... player chooses {2, 3, 5, 7, 11, ...}
... Table: {2, 3, 5, 7, 11, ...}
... player chooses 173
... Table: 173 = {0, 1, 2, ..., 172}
... no more than 173 steps later...
... Table: 0

Foundation means that this game always terminates in finite time. It can take an arbitrarily long time– the player could have chosen 82,589,933, or the first prime number larger than TREE(4,567), instead of 173– but there are no infinite paths, assuming that what’s on the table is a well-founded set.
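For hereditarily finite sets, we can make the termination argument concrete: every well-founded set has a rank, and each move in the game strictly decreases it. A Python sketch (rank here is the usual set-theoretic rank; the implementation is mine):

```python
def rank(x):
    """Rank: 0 for {}, else one more than the largest rank of an element.
    The recursion terminates precisely because membership is well-founded."""
    if not x:
        return 0
    return 1 + max(rank(e) for e in x)

zero = frozenset()
one = frozenset({zero})          # 1 = {0}
two = frozenset({zero, one})     # 2 = {0, 1}
print(rank(two))                 # 2 -- no descending ∈-chain from 2 has length > 2
```

Notably, frozenset cannot even express x = {x}: a set must be fully built before it can become an element of another, which mirrors what Foundation demands.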

Axiom 9: Choice, the controversial one.

I’ve saved the “controversial” Axiom of Choice, the namesake for the “C” in ZFC, for last.

Is it controversial? Well, let’s understand what a mathematical controversy is. Mathematicians do not, by and large, get into arguments about whether Choice is true or false; it is neither. There are valid mathematical systems with it, and others without it. It is subjective, which systems one prefers to study. But no mathematician will say that the Axiom of Choice is “wrong”.

Although I have worked entirely in the bare language of first-order logic plus equality and the “non-logical” ∈, I will, to make notation easier, include the ordered-pair notation <a, b> = {{a}, {a, b}}. Extending the alphabet of a logical language can be done in a principled, conservative way (that is, one that does not produce new theorems) but I will skip over the details. One who wants them can read Kenneth Kunen’s 2011 book, Set Theory.

The Axiom of Choice is:

∀x ∃f ∀a (((a ∈ x) ∧ ∃c (c ∈ a)) → ∃p ∃b ((p ∈ f) ∧ (p = <a, b>) ∧ (b ∈ a)))

This means that for any set x we have a choice function f– a set of ordered pairs– that, for every non-empty a in x, contains an <a, b> for which b ∈ a.

In other words, from any set of sets, we can derive a choice function that maps each of its nonempty sets to one of its elements (that is, it “picks” one).
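For a finite family of sets, a choice function is trivially computable; no axiom is needed. Choice earns its keep only for infinite families where no definable rule picks the elements. A Python sketch (choice is my name for it):

```python
def choice(x):
    """Return a choice function on x: each non-empty member a
    is mapped to some element of a. (next(iter(a)) picks arbitrarily.)"""
    return {a: next(iter(a)) for a in x if a}

family = [frozenset({1, 2}), frozenset({3}), frozenset()]
f = choice(family)
print(all(b in a for a, b in f.items()))    # True -- the defining property f(a) ∈ a
print(f[frozenset({3})])                    # 3 -- a singleton leaves no freedom
```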

Choice, of all the mathematical axioms, is most flagrant in divergence from the real world. Infinity can be problematic, but physics forces us to grapple with its possibility. We do not know the fine-grained nature of physical space, but it seems to exhibit rotational symmetry (that is, there are no privileged x-, y-, and z-axes, only the basis vectors we choose), which suggests strongly that it is R^3. Infinite sets, we need for geometry to work.

Choice, however, is counter-physical. If we agree that choosing an element from a set is computational, and therefore requires energy (or increases the entropy of the universe, or costs in some currency) then to realize a choice function has an infinite cost. In other words, we are calling things “sets” that have no presence in the physical world.

To wit, in a world with Choice, events can be described (in theory) that have no probability. That is not to say they have zero probability; rather, it creates a contradiction to assign them probability zero or to assign them any positive probability. More troubling, a sphere can be decomposed into five pieces and rearranged into two spheres of the same size. While this absurdity cannot be realized with a real-world object, it does establish the behavior of mathematics, with Choice, to be counter-physical.

So, why have it? Well, there are mathematical objects that do not exist in a world without Choice. A vector space might not have a basis, in ZF + ~C. Mathematics isn’t supposed to be about what we can make in the real world, but what exists theoretically.

One source of trouble with Choice, I think, is that as beginning or amateur mathematicians, it’s tempting to think of an agent actively choosing. There’s a quip about the Axiom of Choice: you don’t need it to choose one shoe each from an infinite set of pairs of shoes, but you do need it to choose one sock each from an infinite set of pairs of socks. (The shoes are distinguishable; the socks are not. Analogously, Comprehension and Replacement derive from existing logical formulae; Choice is fully independent of how the choosing is done, and may be arbitrary.) The problem is that Choice is not about “you can choose…” because mathematics exists in an ethereal world where there is no “you” or “I”, nor an energetic process of “choosing”. The real debate is not about “you can…” or “you cannot…” but about what we choose to call a set. If we want to include among sets arbitrary selections, we need Choice.

Mathematical existence, itself, is a slippery and tricky notion. For example, we’ve covered many of the sets that exist and we can build. Now, this is a set:

{n : 2^(2^n) + 1 is prime}

However, which set? Is it finite or infinite? We don’t know. Most likely, it’s the very simple set, {0, 1, 2, 3, 4}. At present, though, we don’t know what it is.
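We can verify the known members computationally: 2^(2^n) + 1 is prime for n = 0 through 4 (the Fermat primes 3, 5, 17, 257, 65537) but composite for n = 5, since 2^32 + 1 = 641 × 6700417. A sketch using naive trial division:

```python
def is_prime(m):
    """Naive trial-division primality test; fine for small inputs."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

known = [n for n in range(6) if is_prime(2 ** (2 ** n) + 1)]
print(known)    # [0, 1, 2, 3, 4] -- whether any n > 4 belongs is an open question
```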

Computational Analogue

To wrap this up, let’s think of Set as a type (or a class) in a programming language. We’ll pretend we have a computer with infinite space, so infinity is no issue. The axioms of set theory support the following interface:

empty :: Set
-- () A set with no elements.
elementOf :: Set -> Set -> Bool
-- (x, y) Returns True if x ∈ y, False if not.
equal :: Set -> Set -> Bool
-- (x, y) Two sets are equal if they contain exactly the same elements.
pair :: Set -> Set -> Set
-- (x, y) Returns {x, y}.
union :: Set -> Set
-- (x) Returns {a : a ∈ t ∈ x for some t}. Or: flattens a set of sets by one level.
filter :: Set -> WFP -> Set
-- Comprehension
-- (x, F) F is a well-formed predicate with one free variable; returns {a : a ∈ x where F[a]}.
map :: Set -> WFF -> Set
-- Replacement
-- (x, F) F is a well-formed binary predicate for which F(x, y) holds for exactly one y per x, corresponding to a function f; returns {f(a) : a ∈ x}.
powerSet :: Set -> Set
-- (x) Returns {y : y is a subset of x}.
infinity :: Set
-- () A set of infinite cardinality.
validate :: Set -> ()
-- Foundation
-- (x) Asserts that x has an element disjoint from itself. If all sets created pass validation, then no infinite descending chains (or cycles) of ∈ exist; every set is well-founded.
choice :: X@Set -> (A@X -> Either () A)
-- (x) Returns a function on x that maps every non-empty a ∈ x to [Right b] for some b ∈ a, and the empty set to [Left ()].

Conclusion

Set theory has a reputation for being painfully abstract and opaque. Much work is required to get from the expression of, say, Foundation, to what it means and why we care about it.

We learn later that the axiom exists because set-theorists did not want to contend with the notion of unwrapping sets ad infinitum; they did not want a theory that wasn’t well-founded or whose subtle imprecisions could destroy all of mathematics (as Russell’s Paradox did naive set theory). What makes for compact formalism is:

∀x ((∃y (y ∈ x)) → ∃z ((z ∈ x) ∧ ∀w ~((w ∈ z) ∧ (w ∈ x))))

but without additional context, the formula is calamitously uninteresting. The human element (why we care) and the story behind axiomatic set theory are (as they should be, in formalism) absent.

I bring this up because, for me, Foundation was the axiom whose motivation was hardest to understand. (Replacement was difficult to motivate, until I realized it was just map or, in the Python world, a list comprehension.) Foundation’s often presented early in set theory, and not without good reasons. It is the only axiom that mandates certain possibilities not be sets, ensuring we don’t have to worry about Russell’s Paradox.

An interesting consequence of this is that we can now discuss what the variables in ZFC, a first-order logic, range over. When we say ∀x something, in ZFC, what is the space we’re quantifying over? All sets. But there is no “set of all sets” in ZFC. We need a bigger logical system to think about what ZFC actually means. This does not invalidate mathematics; arguably, this sort of distinction is what saves it.

Set theory’s implications are complicated and abstruse, but most of the axioms on which it’s built have computational analogues that a programmer uses every day.  Comprehension is like filter, Replacement like map.

What about Choice? Well, Choice gives us that weird, hackish hand-written function that may be a giant switch/case block. On many infinite sets, we cannot hope to code it by hand or realize it in physical space. However, it exists in set theory, so long as we want set theory to contain everything that mathematics can imagine.

Set theory, in all its glory, isn’t the easiest subject to comprehend. The simplicity of it, as a first-order logic with only one added symbol (∈), renders it difficult to reconcile with the complexities of full mathematics, much in the way that quantum physics is not hard to comprehend but difficult to apply to real systems. The more perspectives we can take to this, the easier it might be to learn. I hope that by focusing on computation and construction, as imperfect as those lenses are, I’ve added one more perspective and that it helps.

Idle Rich Are the Best Rich. Here’s Why.

The college-admissions cheating scandal of 2019 has provided plenty of opportunities for schadenfreude at the expense of the lower-upper class: hangers-on and minor celebrities who needed a bit of lift to get their underachieving children into elite colleges. (The true upper class does not struggle with educational admissions; those are negotiated before birth, and often involve buildings.) The fallout has stoked discussion, internet-wide, about social class in the United States.

I’m 35 years old and don’t have any children, so college admissions are not (at least, not now) my fight. I care quite little about the topic itself because, to be honest, I find all this noise irrelevant. There’s the global climate crisis. There’s the imminent collapse of the wage market, due to automation. I have my own personal projects– I’m a year behind schedule on Farisa’s Crossing, my novel. (This is good lateness, insofar as I continue to discover ways to make the book better, but it is still lateness.) So, with all the things that actually matter, I don’t have a whole lot of cognitive bandwidth for the topic of college admissions. That issue’s mostly one of parental narcissism, and I’m not a parent.

Besides, just as the 1929–45 crisis (“Fourth Turning”, if you will) made irrelevant who went to Harvard versus Michigan in 1923, I believe the near future will find today’s obsessive attention given to minor differences in educational institutions to be absurd.

Still, not all of my readers are in the United States, and so many lack the privilege of knowing how bizarre and corrupt American college admissions are. It’s not that admissions officers intend it to be this way. They don’t. But there are a lot of absurd non-academic factors that go into college admissions, and the rich have far more time to assess and exploit the process than the poor.

Is this our society’s biggest issue? Hardly. Rich parents do all sorts of unethical things– often, completely legal ones– to give unfair advantages to their progeny. It has been going on for centuries, and it will likely continue for some time. College admissions fraud is a footnote in that narrative. I am glad to see the laws enforced here, but there are bigger issues in society than this.

Instead, I want to talk about the problem exposed by this scandal. See, it’s not enough for the American rich to have more money than we do, and all the material comforts that follow: bigger houses, speedier cars, golden toilets. They have to be smarter than us, too. But God did a funny thing: when She was handing out talents, She didn’t even look in the daddies’ bank accounts. So, here we are. We live in a world where people make six- and seven-figure incomes helping teenagers cheat on tests. This isn’t new, either.

As a society, we suffer for this. Having to pretend talentless people from wealthy backgrounds are much more capable than they are, as I’ll argue, has major social costs. It keeps people of genuine ability in obscurity, and it leads to bad decisions that have brought the economy to stagnation. It would be better if we were rid of such ceremony and obligation.

How I Stopped Worrying and Learned to Love the Idle Rich

I want to talk about “the idle rich”, the aristocratic goofballs who don’t amount to much. They get a bad rap. I don’t know why.

They’re my favorite rich people, to be honest about it.

I don’t much like the guys (and, yes, they’re pretty much all “guys”, because our upper class is not at all progressive) at Davos. They do significant damage to the world. We’d be better off without them. Their neoliberal nightmare, a 21st-century upgrade of colonialism, has produced unwinnable wars, a healthcare system that has exsanguinated the middle class, and an enfeebled, juvenile culture that has lain low what was once the most prosperous nation in human history.

If we as a society decide to do something about the corporate executives who’ve gutted our culture and downsized the middle class out of existence, I’ll volunteer to clean up the blood. These people have a lot to answer for, and the quicker we can stop further damage, the better.

Do I hate “rich people”, though? I’ve met quite a number of them. Billionaires, three-digit millionaires, self-made as well as generationally blue-blooded, I’ve met all kinds. The truth is: no, I don’t hate them. Not really. What is “rich people”? Someone with more money. But, to 99.9 percent of humans who have ever lived, I am (along with most of my readers) “rich people”, just because I was born in the developed world after antibiotics. People born in 2200 may never know scarcity, and live for thousands of years. Good for them. I won’t burden the reader with my own philosophy on life, death, the future, spirituality, or life’s meaning (or lack thereof); suffice it to say, I believe that in the grand picture, material inequalities are fairly minuscule. If I’m still flying “klonopin class” in ten years, I’ll deal. The health of society, on the other hand, is of major importance. We only have one planet and, today, we are one civilization. Getting the big things right matters.

I don’t especially want to “eat the rich”. I don’t even care that much about making them less rich (although the things I do want will make them less rich, at least in relative terms, by making others less poor). I want our society to have competent, decent, humane leadership. That’s what I care about. Eradicating poverty is what matters; small differences and social status issues, we can deal with that later.

American society seems to have a time-honored, historic hatred for the “idle rich”. Why? It does seem unfair that some people are exempt from the Curse of Adam, often solely because of who their parents were. It’s hard to accept that a few people don’t need to work while the rest are thrown into back-breaking toil. From a 17th-century perspective, which is when the so-called puritan work ethic formed, this attitude makes sense. It was better for morale for communities to see their richest doing something productive.

In the 21st century, though, do these attitudes toward work apply?

First, we can toss away the assumption that everyone ought to work.

We already afford a minimal basic income to people with disabilities, but most of these people aren’t incapable of work, and plenty of them even want to work. They’re incapable of getting hired. There’s a difference. Furthermore, as the labor market is especially inhospitable to the unskilled and young, it is socially acceptable for people of wealth to remove their progeny from the labor market, for a time, if they invest in education (real or perceived).

I support a universal basic income. On a related topic, there’s a problem with minimum wage laws, one that conservatives correctly point out: they argue that price floors on labor result in unemployment. That’s absolutely true. Some people simply do not have the ability to render work that our current labor market values at $15 per hour. A minimum wage is, in essence, a basic income (often, a shitty one) subsidized by low-end employers. They respond by cutting jobs. We can debate endlessly why some work is valued so low, but the truth of wages seems to be a trend toward zero.

As technology progresses, the fate of an everyone-must-work society is grim. The most important notion here is economic inelasticity. Desperation produces extreme nonlinearities in price levels. If the gasoline supply shrunk by 5 percent, we wouldn’t see prices go up by 5 percent; they’d likely quadruple, because people need to get to work. (This happened in the 1970s oil crisis.) We’re seeing it in the 2010s with medicines, where prices are often malignantly manipulated. It doesn’t take much effort to do that. Desperation drives extremity, and people are (in our society) desperate for jobs. Are we at any risk of all jobs being automated by 2030? No. But it takes far less than that to tank wages. No matter how much the technology improves, I guarantee that there will be trucking jobs in 2030. There might be fewer, though. Let’s say that 40 percent of the industry’s 8 million jobs disappear. That’s 3.2 million workers on the market. No matter how smart you think you are, some of those 3.2 million can do your job. And some will. As displaced workers leave one industry, they’ll tank wages where they land, causing a chain reaction of refugee crises. No job is safe. The jobs will exist, yes, but they’ll get worse.

In our current society, where everyone must work to live– or else– the market value of almost everything humans produce (at least, as subordinates) is doomed to decrease by about 5 percent per year. That’s not necessarily a bad thing; another way to look at it is that things are getting cheaper. It’s only bad in a world where work is a typical person’s main source of income.

Ill-managed prosperity was a major cause of the Great Depression. In the early 20th century, we figured out how to turn air into food. That advance, among others, led to us getting very good at producing agricultural commodities. Cheap, abundant food. As a result… people starved. Wait, what? Well, it happened like this: farmers couldn’t turn a profit at lower prices, leading to rural poverty by the early 1920s, causing a slowdown in heavy industry in the middle of that decade. It was finally called a “Depression” when this problem (along with “irresponsible” speculation that is downright prosaic compared to what occurs on derivatives exchanges today) hit the stock market and affected rich people in the cities.

Why did we let rural America fester in the 1920s? Because the so-called “puritan work ethic” had us believing poverty was a sort of bitter moral medicine that would drive people to better behavior. Wrong. Poverty is a cancer; it spreads.

Ill-managed prosperity hit us hard in the 1930s; it’s likely to do the same in the 2020s, if we’re not careful.

All else being equal, for a person to show up to work doesn’t make society better. What she does at work, that might. The showing-up part, though… well, that in fact depresses wages for everyone else. It would be better for workers if there were fewer of them. That there are so many workers willing to tolerate low wages and terrible conditions devalues them.

For many workers out there, the bulk of their contribution to society is… to make everything worse for other workers. It is not their fault; it is not a commentary on them. It is just a fact: by being there, they cause an infinitesimal decrease in wages. And, today, we have this mean-spirited and anachronistic social model that has turned automation’s abundance into a possible time bomb.

Really, though, do we need so many TPS reports?

Obviously, if no one worked, that would be catastrophic. We don’t really need to force everyone to work, though. People work for reasons other than need: to have extra spending power, to gain esteem in the community, or because it gives their lives a sense of meaning. Fear of homelessness, though, doesn’t make anyone’s work better. It always makes things worse.

We could get at least 90 percent of society’s current revenues without forcing people to work. There’s no reason we couldn’t have a generous basic income and a free-market economy. That’s quite possibly the best solution. And, while I say “at least 90”, I really mean “more likely than not, more than 100”. That is, I think we’d probably have a more productive economy if people were free to allocate their time and talents to efforts they care about, rather than the makework they have to do, to stay in the good graces of the techno-feudal lords called “employers”.

It is not such a travesty for a person to remove himself from the labor market; for the rich, who already can, I don’t see it as a reason to shame them.

The second problem with the everyone-even-rich-people-must-work model is that it fails to create any real equality. Let’s be honest about it. “Going to work” is not the same for all social classes. Working-class workers are treated like the machines that will eventually replace them. Middle-class workers have minuscule autonomy but are arguably worse off, since it is the mind that is put into subordination rather than the body. For the rich, though, work is a playground, a wondrous place where they can ask strangers to do things, and those things (no matter how trivial or humiliating) will be done, without complaint. The wizards of medieval myth did not have this much power.

In other words, the idea that we are equalizing society by forcing the offspring of the rich to fly around in corporate jets and give PowerPoint presentations (which their underlings put together) is absurd. It would be better to let them live in luxury while slowly declining into irrelevance. When rich kids work difficult jobs, it’s toward one end: getting even richer, which makes our inequality problem worse.

Third, when we force rich kids to work, they take up most of the good jobs. There are about 225 million working-age adults. Whatever one may think of his own personal brilliance, the truth is that the corporate world has virtually no need for intelligence over IQ 130 (top 2.2%). We could debate, some other time, the differences between 130, 150, 170 and up– whether those distinctions are meaningful, whether they can be measured reliably, and the circumstances (of which there are not many) where truly high intelligence is necessary– but, for corporate work, 130 is more than enough to do any job. I don’t intend to say that no corporate task ever existed that required higher intelligence; it is, however, possible to ascend even the more technical corporate hierarchies with that much or less. So, using our somewhat arbitrary (and probably too high) cutoff of 130, there are still 5 million people who are smart enough to complete any corporate job. For the record, this is not an indictment of corporate management’s capability. A manager’s job is to reduce operational risk and uncertainty, and dependence on rare levels of talent is a source of risk.

There are 5 million people who are smart enough, in any corporation in America, to competently fulfill an entire path from the entry level to the CEO. 5 million. By contrast, there aren’t 5 million people in the U.S. with meaningful connections. I doubt there are even 500,000. Talent is fairly common. Connections are rare, and therefore more valued. The rich and the well-connected get first dibs on jobs. The rest can’t possibly compete. No matter how smart they are, it doesn’t really matter.

Frankly, it’s of infinitesimal importance that Jared Kushner slithered into a spot at Harvard, causing a more-deserving late-1990s 17-year-old to have to settle for Columbia or Chicago. Whatever 17-year-old got bumped, I doubt she cares all that much. We all know that college admissions are more about the parents, anyway. No one intelligent really believes that American college admissions are all that meritocratic in the first place.

On the other hand, the U.S. corporate world is a self-asserted “meritocracy” (and, should you point out its non-meritocracy, you will soon be without income) despite being thicker with nepotism and corruption than third-world governments. That, I care about. Admissions corruption might lead a talented student to have to take her roughly-identical undergraduate classes from slightly less prestigious professors. In the work world, though, personal health, finances, and reputation are on the line. The false meritocracy of college admissions is a joke; the false meritocracy of the corporate world kills people.

Fourth and finally, when rich kids go to work, what do they do? Damage the world, mostly. A large number of them are stupid and incompetent, the result of which is: bad decisions that cause corporate scandals and failures that vaporize jobs. Most of the smart ones, worse yet, are evil. See: the Koch Brothers, Roger Ailes, and Erik Prince.

We would have a much better world if we convinced these guys it was OK to goof off.

The European aristocrats, to their credit, were content to be rich. Our ruling class, not content with riches, insists it is also smarter than the rest of us.

I don’t mind that corporate executives fly business class and I don’t. I do mind being forced to indulge their belief that their more fortunate social placement comes from being something, intellectually speaking, that they think they are but are not– and that I actually am. That galls me. If these people could admit to their mediocrity and step aside, it’d be better for all of us. The adults could get to work; everyone would win.

I don’t hate “the rich”. In fact, I wish everyone were rich. There may be a time in our future when that is, in effect, the reality. I hope so, because we seem thus far to be an “up or out” creature and, in 2019, we are effectively one civilization. Our current state is unsustainable. In the next two hundred years, we either get rich or (at least as a culture, although we may survive in the biological sense) we shall die. In the former case, I do not forecast utopia. There will always be disparities of wealth and social position. More likely than not, those advantages will be uncorrelated with personal merit– this is as true of today’s upper-class “meritocrats” as it was of medieval lords. On its own, that’s benign. So, to hate “rich people” is like hating a tornado.

Though I do not hate the rich, I hate their effects. I hate living in a society run by morons and criminals– one where housing in major cities is unaffordable for almost everyone; one where people have to buy insurance plans on their own bodies that cost $1,000 per month and provide half-assed coverage; one where bullshit jobs and managerial feudalism are the norm.

Furthermore, I do not think it makes the rich happy that we force them to work. It certainly does not make us happy to be shoved out of important, decision-making roles because the well-connected incompetents (or, far worse, the self-serving and evil) need those scarce jobs.

We, as a society, have reached a point where idleness is the most harmless of vices. We do not need more people hunting on the Serengeti; we do not need more internal combustion engines hauling people around to say “Yes, sir”.

Most so-called “work” has trivial, nonexistent, or even negative social value. The vast majority of corporate jobs exist to perpetuate or enhance private socioeconomic inequalities, rather than to better society. The so-called “Protestant work ethic” would have us predict that price signals (salaries) correlate with the moral value of work. They don’t. Anyone who thinks they might needs to leave the 1970s, because Studio 54 closed a long time ago.

If rich people stop working, they stop hogging the good jobs. They stop hogging the investment capital and wasting it on artisanal dog food delivery companies. Since they’re enjoying life more, they will feel less desire to exact revenge on society for forcing them to make four PowerPoint presentations per year, which will make them less aggressive in squeezing employees. So they’ll hog less of the damn money, too. People will start leaving their office jobs while the sun is shining, writing their own performance reviews because the bosses are skiing all month, and everyone will be better off for it.

Let’s not eat the rich. Instead, let’s get them fat, and roll them out of the way, so competent adults can take the reins of this society, before it’s too late.

American Fascism 2– Is the United States Fascist?


In Part 1, I discussed the four political impulses– communalism, libertarianism, republican democracy, and fascism– that seem to be the base elements of which more complex ideologies are made. Of course, an entire society can be communalist in some ways, but libertarian in others. To ask whether the United States “is fascist” may seem simplistic. The question might be phrased better as, “How established is fascism?”

Upsettingly, fascism is the most limber in its self-presentation. Fascists lie. They will, if it is convenient, use ideas from other ideologies to push their agendas. We’ve seen fascists in leftist, rightist, religious, and anti-religious costumes before. Corporate fascism asserts “meritocracy”. Donald Trump managed to step over his personal elitism to run as a populist. Rarely does one spot a fascist by his professed ideology; we must observe what he does.

We are not at the point yet where the United States has been afflicted by state-level fascism. One hopes that it never will be. Are we under threat? Yes, and to understand the problem, we’ll have to know why fascism has emerged.

Is Donald Trump a fascist threat?

Donald Trump’s victory was the culmination of a bizarre irony: a man running against forty years of economic damage wrought by Boomers, bullies, and billionaires… despite being all three.

Establishment politicians represent, in today’s dysfunctional political environment, the disingenuous, effete, and hypocritical superego of the corporate system. In 2016, people decided to try out that system’s id.

How did this all happen? The mechanics of it deserve another essay, probably not in this series, but the short version is that Trump managed to unify, for a time, the otherwise disparate in-authority and out-of-authority fascisms. Corporate executives and race-war preppers do not go to the same parties, and they express their thuggish inclinations in different ways, but Trump managed to draw support from both crowds.

All of this being said, I don’t think of Donald Trump as a high-magnitude fascist threat to this country. I never supported him, did not vote for him, and was displeased (to put it mildly) when he won the election (which surprised me). He has done a lot of damage, especially on the environmental front. He has embarrassed us in front of the entire world. Still, he lacks the image necessary to pull off sustained, effortless authoritarianism.

Donald Trump puts explicitly what is subtle in corporate fascism. He doesn’t think differently from those people; he just can’t filter himself. In general, corporate fascism is effective because of its bloodlessness. Few people notice that it’s there unless they think deeply about it; corporate fascism presents itself as “not political”. (The corporate fascist’s enemies are the ones “being political.” That’s why they were fired.) Trump’s authoritarianism, belched out 280 characters at a time, is too flagrant and plain-spoken for either the emasculated robot fascism of the corporate world or the lawfully-masculine (in presentation) inevitability of the brutal dictator.

Donald Trump, though, has an even bigger flaw as a would-be fascist: his lifestyle. He’s been a self-indulgent man-child for his entire life. On-camera fuckery built “the Trump brand”, which he’s cited as his most valuable asset. This was great for him when he embodied the zeitgeist of unapologetic, gangster capitalism. It’s repugnant, and so is fascism, but the brands of malignancy could not be more different.

By contrast, the proper fascist dictator appears sacrificial. He cannot be self-indulgent in public. If he enjoys his power and wealth in front of people, he’ll be seen to have an appetite for comfort, which kills the aura of masculine inevitability that a fascist leader requires. Adolf Hitler was, in fact, a rich man late in life– Mein Kampf was a bestseller– and he likely had several mistresses. To the public, however, he presented himself as a simple-living, celibate man. He was married, he said, to the German people. The fascist’s sacrificial austerity gives credence to the perceived inevitability of his reign.

Donald Trump could not pull that off. He has been a volatile, self-absorbed clown in public for longer than many of us have been alive. His own history destroys him. Trump is the sort that thrives in disorder and damage, but sustained fascism requires a damaging order– and that’s quite different.

If fascism comes to the United States, it won’t come via the self-indulgent, emotionally incontinent septuagenarian in the White House. Instead, it’ll come under the aegis of a 39-year-old Silicon Valley tech founder whom few of us have heard of.

He’ll arrive with a pristine reputation, because (like anyone who succeeds in Silicon Valley) he will have preserved his image at any cost, destroying the careers of those who opposed him. The same sleazy tactics that founders, executives, and venture capitalists use to protect and expand their reputations, he’ll have mastered before he even considered going into politics.

He’ll use his dirty corporate tricks, more subtly than Trump, as well as the resources within his companies to build up an image of centrist, pragmatic, and professional competence. He’ll likely present himself as a bipartisan figure– a unifier “in these divided times”, a centrist capitalist who can also “speak nerd”. He may or may not hold racist views– he’s probably too smart to believe that shit– but when it suits him, he’ll use any racial tension he can to divide people, just as he used factional tensions within companies to build his corporate career.

State of the States

We can assess our current fascist risk by asking: what keeps fascism at bay? We have a constitutional government. That’s good, but what that means ultimately comes down to us. Societies can be assessed on several planes: the cultural, the political, the economic, and the social. I’ll cover each of them; doing so gives us a clear sense of how much danger exists, and whether it’s getting worse.

Center-leftists have underestimated the corporate and fascist threats over the past ten years, because they believe that we are winning the culture wars. That’s true enough right now. The religious right is dying out. Marijuana legalization once seemed impossibly radical. Same-sex marriage support is strong among the young. These are all very good signs. So, can’t we let time do its thing, considering our cultural tailwinds?

No, we can’t. The cultural is driven, over time, by the economic. The economic and political drive each other; that arrow goes in both directions and sometimes it is hard to tell the planes apart. In turn, the economic and political are driven by the social: who knows whom and in what context, which groups are favored for various opportunities, et cetera. It suits us best to analyze the cultural, social, political, and economic planes separately and, in each, ask, in terms of the four elemental political impulses– communalism, libertarianism, republican democracy, and fascism– “Are we fascist?”

Culturally, we are mostly communalistic. Division and exclusion are frowned upon. A center-left coalition won the culture wars of the late 20th century. Two-thirds of Americans support gay marriage, and there’s no strong desire to prosecute harmless pot smokers. Racism still exists, but it’s largely detested. It’s more acceptable, by far, to err on the side of inclusivity than otherwise.

Sometimes, the right refers to our culture as “politically correct”. Our popular culture is, for good and bad, deliberately inoffensive. This is likely tied to the importance of our popular culture to our self-definition and economic standing; it is the most effective export we’ve ever had. To start, we would be an irrelevant European knock-off without the cultural influences of once-disparaged minorities. More importantly, if our popular culture were racist, misogynistic, or belligerently nationalistic, the rest of the world would be unlikely to buy it.

Culture, however, changes quickly; it did in the German 1930s, when Weimar liberalism fell, like so much else, to the Nazis. Environmental, political, economic, and social forces can crush cultural defenses. That happens all the time.

Politically, we remain a democratic republic. Our elections work. They do so imperfectly, but they work well enough that, when plutocrats cheat, they still bother to hide it. Voters have the power to fire representatives who become unaccountable to their constituents, and although it’s not used often enough, it is used. Though there are issues with our electoral system on account of its age, they’re not so severe that one would call us, at this point, a non-democracy.

For now, we’re on the better side of this one.

Economically, we are a market-driven libertarian society. That is not all bad. Many have argued that this is how it should be. Do we need public control of the economy? To some degree, yes; total control is undesirable. Government should prevent poverty, but we can trust markets to, say, set the price of toothpaste. Command economies are not innovative, they don’t work well at scale, and they’re too easily corrupted. When well-structured, and used in a society that takes care of the big-picture issues (e.g., basic income, job guarantees) so everyone has a vote, markets work.

It is not evil that our economy uses libertarian, market dynamics. It probably should. The evil is the totalitarian influence that economic life (not to mention artificial scarcity) has over everything else. Where people live, how they structure their time, and what careers are available to them are all dictated by a closed social elite of unaccountable, often-malignant bureaucrats called “executives”.

When an economy functions well, it recedes. Economic life becomes less a part of daily existence as people become richer, freer, and more productive in their (fewer, usually) working hours. We’ve seen the opposite. We’ve seen dysfunction spreading. We’ve seen people sacrificing more of their life on the altar of the economic, without much progress.

It has been said to the young, “You don’t hate Mondays; you hate capitalism”. That’s not quite right. Working Americans aren’t miserable at their jobs because, say, oil prices are set by free markets. They’re miserable because of corruption. They’re miserable because they are forced by circumstance to work for a malignant elite– a predominantly social rather than economic one– that despises them.

We’ve covered the good news: we are culturally communalistic. We are politically republican. We are economically libertarian. Generally, this is how things should be. So what’s wrong?

Socially, we are fascist. On the social plane, we are not “becoming fascist”. We are not “at risk of fascism”. We are there. A malignant upper class has won.

As discussed, the social drives the political; the political often drives the economic; economic forces drive culture far more than the other way around. As we are thoroughly corrupt in the social plane, we should understand that we remain in danger until there is a radical overhaul of our current upper class. State-level fascism isn’t here yet, but we’re governed by an elite (“the 0.1 percent”) that would make it so, if it were in their personal interests. Everything could fall, and it wouldn’t take long.

For example, we’ve already lost freedom of speech. The federal government cannot bar political disagreement or peaceful opposition. But employers can– and do. Job opportunities are stolen from people based on social media posts but, at the same time, job opportunities can be stolen from people because they don’t use social media.

One of the key revelations of the 2010s is that only one social class distinction matters in the United States: between those with generational wealth and social connections (“the 0.1 percent”) and those without. The higher-income supposed upper-middle and middle classes will be just as screwed as the poor, if a significant percentage of jobs are automated out of existence. In any case, the upper class has all the important land and runs all the important institutions. It decides, monopolistically, what jobs people get: who works on what, when, and where. Some people get to be VPs of Marketing and university presidents who earn $1 million per year for three hours per week of work; others get blacklisted and become unemployable. There are people who make those decisions; most of us are not among them.

Under fascism, the governed compete while power unifies. That’s what we’re observing in the corporate world right now. “Performance” is a myth. “Meritocracy” is a malevolent joke on the middle class (and “middle class” is itself, under our fascist society, a distinction invented to make upper-proles feel better about ourselves, and to divide us against lower-proles). What actually matters, in corporate jobs? Not performance. Not even profits. (I’ll come back to that.) Loyalty to the existing upper class. Corporate executives do not work for shareholders; in practice, they work for their management.

Corporate executives, in truth, have insulated themselves from meaningful competition. On occasion, one must be replaced. When this happens, they ensure a soft landing for the outgoing executive, while ensuring another member of their class steps in. Positions are shuffled around, but these overpaid positions remain confined to a small elite. None of us really have a chance at those jobs; the idea that anyone can make it is a cruel joke they play.

These people set each other’s pay. They use clever systems to hide the class’s rapacious self-dealing. For example, venture capital allows a rich man’s son to manufacture the appearance of success on a competitive market– he’s an entrepreneur, he says– when, in truth, the clients and resources are furtively delivered by his backers. This ruse and many others make it appear merit-based when their children succeed, at the expense of ours.

There is some competition allowed within the upper class, but there are rules to it. No one can damage the image or fortune of the class. Corporate executives are far more vicious in their competition against their workers than against nominally antagonistic firms: competitors in the classical sense.

Executives self-deal and get away with it, because their bosses are other executives, who are doing the same. Is all this self-dealing good for corporate profits? It’s hard to say. Executive-level fascism reduces performance, but it also seems to reduce variance. The left is often too quick to assert that social evils derive from the “profit motive” when it is, in fact, executive self-dealing that is the essence of the corporate problem. Profit maximization has its own moral issues, but they’re not the most relevant ones.

Do executives care about profit? They want to make enough profit to appease shareholders, and not a dollar more. If they’re making outsized profits, they could have paid themselves better. They could have hidden money in the company, to be drawn out in bad times. They could have used those profits to push efforts that would improve and expand their personal reputations. To an executive, a dollar of profit is waste, because he wasn’t able to find a way to take it for himself. In Corporate America, no executive works for a company. Companies (and their workers) work for executives.

What about shareholders? Why don’t they step in and drop a pipe on these self-dealing, comfort-addicted executives? The answer is that the shareholders who matter are… wait for it… rich people. How did they get rich? By sitting in overpaid executive positions, peddling connections, and ingratiating themselves to the upper class. They will never quash the executive swindle. That game keeps them rich, and ensures that their children are even richer. Perhaps it would do good for “companies” in the abstract if someone stepped in on executive excess, but it would be so bad for the upper class that it will never happen.

Of course, if returns to shareholders are abysmal– enough for the press and public to take notice– there will be executive shuffling, but it’s engineered so that no one really gets hurt. A CEO can be fired, yes, but with generous severance, and his career will be handed back to him (plus interest) within a year or two. The only thing that would put an executive on the outs would be disloyalty to the upper class itself. That, they would never forgive; he would likely be suicided.

What about when firms compete, as they’re nominally supposed to? Firms will compete for customers; that is true. Sometimes, they do so ruthlessly. It is not bad, from the consumer’s end, to live under capitalism. What firms cannot stand is having to compete for workers or their loyalty. They will ruin the careers of people who try to make them do that. Sure, they whine from time to time about a tight labor market and a lack of domestic talent, usually in order to scam the government into allowing them to hire more indentured servants from abroad, but their incessant whining about competition is a part of their strategy to ensure they never face it. They consider “job hopping” a sin, because they can’t tolerate the idea of having to compete for a subject’s loyalty. They share data on personnel and compensation, often in violation of the law (which they do not care about, since they own the most expensive attorneys). Most companies, before finalizing a job offer, call references: other managers at nominally competing firms. This would make no sense if there were real competition between companies. It makes complete sense if there is not.

Executives are not rewarded or punished based on their loyalty to shareholders, but rather to the upper class. Middle managers (who are not part of the upper class, and have no reason to care about it) are, in turn, rewarded or punished based on loyalty to their superiors’ careers. Workers, by and large, know that in today’s one-chance, fast-firing corporate culture, they don’t work for “companies” at all; they work for managers. The explicit theme of class domination is obscured to some degree, leaving workers unsure whether their failure to advance is a personal failure, and therefore unwilling to admit publicly the otherwise prosaic fact: the game has been rigged against them. Only one in a thousand who tries for corporate entry into the upper class will be accepted, and this will require total moral self-deletion.

I’ve mentioned the loss of one’s freedom of speech under corporate rule and that, at the same time, many people must nonetheless have social media profiles to have a career. It “looks weird” to people in HR not to have “a LinkedIn” or “a Twitter”. Opting out of technological surveillance is not an option for many people. They’ve been tricked and extorted into rendering unto current and future employers– corporate capitalism, that is– information that will only be used against them.

Mainstream corporate employers are not especially tolerant. It is bad to be the office liberal, the office conservative, the office Christian, the office atheist, or the office Jew. To win at corporate self-presentation, one must be prolifically bland. In profanity, one should avoid both excess and total abstinence. One should avoid the topic of labor rights at all costs. What about our other cultural institutions, though? What about our press, our universities, and our sundry nonprofit organizations? Yes, mainstream magazines will publish center-left views. Universities in particular house more leftist than conservative voices. How much will this protect us? Not that much, I’m afraid. Most people will not be part of those institutions for life, and therefore still rely on the Adversary for their careers. Even outside of the for-profit world, many are trained to turn on those who threaten the hegemony of the generationally well-connected. This is a shame, because that’s our society’s number-one problem right now.

State-level fascism hasn’t arrived yet, but our social elite has been preparing for it for decades. They are in no hurry to make it happen, but they will if they judge it to favor their interests. Why have they been fomenting right-wing populism– using racial resentments, religious bigotry, and the frank irrationality that emerges from stunted masculinity and (economically enforced) permanent adolescence? To ensure that, no matter what else happens during a populist uprising, they’ll have an easy time getting their money out of it. The upper class has convinced the rabble that generational wealth and connections– neither of which the rabble themselves have– are a right, and that leftists and racial minorities are the source of their misery.

This society is set up so that, if such events come to pass, the most armed and ready militants will be on the right wing. Not only will this support the elite’s economic goals and keep the proletariat divided against itself, but it will also mean that any revolutionary effort is likely to be overcome by people with such repugnant ideological and cultural aims that they will never gain global sympathy. The upper class would rather have a 95 percent chance of a rightist-racist revolt that no one (present company included) would support than a 25 percent chance of a leftist revolt that would quickly gain global sympathy.

Do today’s generationally well-connected want to live under state-level fascism? They don’t care. They wouldn’t be living under it; they’d be running it. I do not think they are, down to a man, ardent fascists. I imagine that the vast majority are individually apathetic on the matter. So long as they live in a world where they don’t have to compete for what they have, they remain uninterested in ideology. If fascism rises, they will quickly support it, not because of prior ideological commitment, but because it is practically designed for them; though fascism presents itself as popular indignation, it is deliberately constructed to keep the powerful (except for a few, who may be scapegoated) out of harm’s way.

Socially, we already have fascism. The generationally well-connected live with impunity. They do not tolerate division within their ranks, and do whatever they can to divide us against each other. This includes the division between so-called “red” and “blue” America, which are allegiances to manufactured brands– bloodless center-leftism and right-wing indignation, both of which are harmless to the entrenched upper class– more than coherent ideologies. Meanwhile, our society is almost entirely constructed so that no one can represent significant harm to upper-class interests and keep his career, reputation, and life intact.

In the next installment, we’ll discuss how we got here. Our turn toward fascism in the social sphere occurred around 1975; it is often blamed (hyperbolically, oversimplistically) on the Baby Boom generation. In truth, the sequence of events that led us there was, if not inevitable, predictable and cannot be blamed on a specific generation. So in Part 3, we’ll get a handle on how our current fascist mess was made– and how it might be unmade.

American Fascism 1– What Is Fascism, and How Did It Get Here?


This series of essays shall cover one of the most depressing topics I’ve ever written about: fascism. The truth is, I’ve been writing and rewriting “the fascism essay” for almost two years. I’ve worked on version after version, polished each a bit… only to decide not to publish. It’s such a dreary, demoralizing subject.

When fascism descends, one is faced with a fight– probably a losing one– that a person of conscience still owes the world to fight.

I promise that this series will not focus on Donald Trump. It would be a mistake to conflate him with the more general fascist threat. More than he is a fascist, he’s an opportunist. Inevitably, someone would have tried what he did. Perhaps we are lucky. For reasons that will be discussed later on, he is quite ineffective when it comes to fascism. He has damaged this country, and he will probably damage it more before he is gone, but it would be going a lot worse if the game he is playing were played competently.

I’ve had to fight fascists for 7 years. In 2011, a comment I made about a product at a large tech company received far too much internal publicity, after which my name was placed on the list of suspected unionists that circulates in Silicon Valley. I got death threats– I still get death threats. More than once, a job offer was rescinded after someone found my name on the list. I’ve been libeled in various corners of the Internet, and this libel has had a negative effect on my career.

Having fought fascists for 7 years, and having to continue fighting them, I am well aware of our nation’s fascist energies. Donald Trump did not create them out of thin air, and we will not be rid of the threat after he is gone.

In fact, as I’ll establish over the next few essays, it is the nature of end-stage corporate capitalism to become fascist.

We have been lucky with Trump, at least so far. Two years have passed and he has not instituted state-level fascism. I don’t think he can. We would be in much worse shape if we had been saddled with a polished 39-year-old tech founder instead of an emotionally incontinent, openly racist septuagenarian who tried to trademark the phrase, “You’re fired.”

Fascism is an immense and unpleasant topic, so I’ve broken this essay up into several pieces. The planned schedule is to release one every three days, in eight installments. I shall cover:

  • What is fascism?
  • Is the United States fascist?
  • Fascism and capitalism.
  • Why fascism appeals to people.
  • Fascism’s endgame.
  • Why we have to fight fascism– now.
  • How we must fight fascism.
  • When it is acceptable, and when it is not, to use violence against it.

Before we can discuss fascism, we must ask: what is it, and where does it come from?

Ideologies are as numerous as human cultures, but complex societies tend to establish and differentiate themselves in their handling of four elemental impulses that recur in human politics, and probably have for all time. Those are: communalism, libertarianism, republican democracy, and fascism.

We can understand each of the four from first principles by noting that much of politics comes down to one question, which we face on a daily basis in economic and social life: does one cooperate, or compete? Do we honor social contracts, or break them for personal gain? When we encounter other tribes, is our instinct to share resources and allow further specialization, or do we fight until we’ve chased them off– or killed them all?

Communalism

In general, those who cooperate are better off, in the aggregate, than those who fight. “Winning” a war often means losing less. The communalist sees this sort of competition as unsavory and would prefer that it never happen. Of course, communalists have no issue with competition in games and sports– it’s well understood that sportsmanlike, low-stakes competition has a place in any society– but they do not want to see the high-stakes fights in which people, businesses, and nations work to actually hurt each other.

A team, tribe, or group does better if its members cooperate than if they suffer in-fighting. An example I know far too well is that of programmers, who have low status in the workplace– even in software companies and startups, where they ought to be in charge. There’s a well-known reason for this: despite their superior individual intelligence, they have zero collective intelligence, which makes it easy for their bosses to pit them against each other.

The communalist view has a lot to recommend it. The toughest global problems– climate change, public health, avoidance of international conflict– are cooperative in nature.

Libertarianism

No matter what, though, people will compete. Rules will be broken. Interests diverge. The communalist view is that we should cooperate all the time, but the libertarian counterargument needs only four words, followed by a mic drop: Have you met people?

Arrangements that seem to lack competition, on closer inspection, have unsavory varieties thereof. Foremost in mind would be a business monopoly, which is not a true absence of competition– it is certainly not a cooperative arrangement where everyone wins– but an asymmetric and socially harmful conflict where an in-group (the monopolist) holds all the cards, and the public loses. The situation would improve if others could enter competition with the monopolist.

Libertarians don’t want governments to eradicate competition, but to protect the individual’s right to enter. In general, libertarians want government to be limited, transparent, and simple.

We might consider the communalist impulse to be a sort of ancestral left, while the libertarian one represents the primordial right. Just as most of us call ourselves centrists, we generally recognize the value in both impulses.

Republican democracy

Communalism proposes an ideal, but the libertarian reminds us of an uncomfortable truth: competition– of the serious kind, where people can get hurt– is inevitable. Therefore, it’s better to have well-structured and fair competition than pretend that none exists.

How do we reconcile a communalist ideal with competitive reality?

Republican democracy, the third elemental impulse, puts it like so: as citizens, we cooperate. We share information in order to make the best decision, and largely want the same things: good government, prosperous daily life. However, anyone who wants to acquire or retain power must compete for it. Additionally, a private citizen who believes he can do better in a leadership role than the person currently there may run for the office.

The above, we take for granted. We shouldn’t. Workplaces, for example, are not run this way. Someone who even jokingly suggested running for his boss’s position would be summarily fired.

In sum, the republic holds the communalist idea, but introduces competition to hold political leaders accountable to the public.

The communalist would not have anyone compete; we should all cooperate. The libertarian’s worldview is one in which everyone competes for everything. The republican impulse is the only one of the three introduced thus far, as expressed by the table below, that distinguishes between someone in power (or seeking it) and the general public.

Who Competes?

Political System    Leadership    The Public
Communalism         No            No
Libertarianism      Yes           Yes
Republic            Yes           No
????                No            Yes

Communalists and libertarians both have a blind spot: the fact that power relationships and leadership roles emerge almost immediately in human societies. Communalists underestimate what people will do to compete for position. This is easy enough to see. Libertarians have a blind spot, too, and in some ways it’s a bigger one.

The libertarian mindset approaches governance with a mathematician’s conservatism, by which I mean it starts from a small set of rules (analogous to mathematical axioms) and wants to restrict government’s role to what can be proven from those rules. No distinction is made between rich and poor, in-crowd or underprivileged. Everyone competes, all of the time– survival of the fittest. But, what is the first thing people do after winning in socioeconomic competition? See, the libertarian believes that past behavior predicts future results and that people who achieved socioeconomic success will double down on whatever worked… but that’s not what happens. Instead, those who’ve won (often, by pure luck) will do everything they can to insulate themselves (and their progeny) from future competition, and stay “winning” forever. At absolute most, society gets one generation of rule by the fittest. After that, a self-protecting, effete, useless oligarchy sets in.

Republican democracy does better. It says: cooperate as citizens, but compete for office. Then, it invests resources to make these competitions– which happen at regular times and are subjected to rules to prevent corruption– as fair as possible. This seems to be the best solution. A well-structured republic uses the competitive energies of the ambitious for the greater good. In the republic, power is self-limiting, as it comes with increased scrutiny, responsibility, and competition. The objective here is that no one seeks power just to have it, and people contend for office only if they have a higher moral or public goal they wish to achieve.

Does the republic have a blind spot? In a way, it does. The objective of the republic is to make government reliable, trustworthy, and therefore boring. Such systems are engineered to prevent the emergence of feedback loops that otherwise dominate human systems. The issue is that feedback loops emerge anyway. We seem, as humans, to be primed to recognize and react to them quickly, although this exacerbates the problem. For example, when one side of a conflict appears to be winning, many of us begin to act as if that side has already won. It is through these feedback loops that the mere suggestion of a person’s popularity (or stigma) can become fact, and billions of dollars are spent every year to induce them.

The republican element of human politics tends toward self-limitation, but other elements emerge and dominate. Those tend to be unanticipated feedback loops that weren’t known to exist until someone exploited them. Republics will, from time to time, have to contend with a sort of Jungian shadow: a dual-opposite mentality asserting the right of the rich to get richer, and of those with power to use it however they want (including, notably, to acquire more power).

Fascism

The dual opposite of a republic would be a society where the governed must compete, merely to survive. Meanwhile, the powerful are immune to challenge from below. There is only one political party and it will always be that way. Those with power have no responsibilities to those below them, because power is subject to no appeal but itself.

That sounds like an unimaginable dystopia, right? That would never, ever emerge from a free society. Right?

It has already done so. Consider the corporate workplace. Regular employees are ranked and pitted against each other– and against the hungry masses, for management is happy to remind its subjects of the desperate millions ready to take more abuse and less pay. Stack ranking and annual reviews exist largely as a mechanism through which executives remind the little people that they aren’t a permanent part of the company– they are a resource that will be used up and discarded. Meanwhile, corporations rely on a self-dealing one-party government called “management” that uses every bit of power it has (which is, all of it) to keep the underlings where they are. Power begets power. It does not accept limitation; who has the right to limit it? Certainly, there shall be no separation of powers. Power is allowed and expected to unify– managers protect their own, and those who do not learn this one rule do not remain in management for long.

Of course, individual corporations are too small to indulge in the end-stage horrors for which fascism is known: international belligerence, extreme racism, repression and disinformation. In comparison to state-level fascism, the corporation’s fascism-lite seems benign. Is it? It’s hard to say, because state-level fascism seems, likewise, harmless to the general public when it sets in.

The core of fascism, I would argue, is not to be found in the end-stage calamities to which it often inexorably leads. Rather, it is this: the people compete against each other, endlessly, but power unifies.

Under fascism, power’s disparate forms– cultural, political, religious, state, economic, legal, and social power– congeal into an inflexible fasces. Industrialists, political officials, media personalities, and sundry middling bureaucrats and managers form a one-party system that cannot be appealed. At the same time, people are divided against each other, ranked in ceaseless competition. Those judged to rank at the bottom– a small percentage that must be called “work-shy”, or “below expectations”, or Lebensunwertes Leben— must be punished. This is not always done out of hatred for the unlucky; it’s done to terrify the middle-ranking majority.

Fascism is neither leftist nor rightist in any traditional sense. Fascists learn that they can lie with impunity; there’s no one above them for the public to appeal to. The fascist will use socialist, capitalist, royalist, revanchist, communist, populist, nationalist, or religious symbology as needed. A corporation will declare itself a meritocracy and punish anyone who says it is not so. Truth doesn’t matter; the closest thing there is, is reputation, which the fascist manipulates masterfully.

Donald Trump lies so frequently not because it is part of a political strategy, but because he’s taking his corporate tricks into the public theater– with mixed results. His lies are of a kind that would pass easily in the corporate world; it is good for us that, in presidential politics, he’s out of his depth. What one must understand about Trumpian lies is that anyone who would recognize them as lies is not part of his intended audience. These lies exist to rally the loyal and to frighten– not convince– the opposition. Loyalists see a man so fervid he occasionally gets a detail wrong; opponents see a person unconstrained by truth or apparent logic. When intelligent people are called out on their support for someone so obviously divorced from truth, they often use the Thiel defense: they’re taking him seriously, but not literally.

A corporate executive (and an established fascist) can say anything, because he’s in a milieu that admires bullies– “tough leadership” is the corporate term of art for the sorts of people who smashed science projects in grade school– and because he’s surrounded by people who are paid to behave as if they believe every word he says (and to rat out nonbelievers). Trump’s problem is that he still has to deal with the 50-plus percent of the population that won’t put up with his mendacity. A president cannot, at the current time, fire the public.

Republics are set up to force politicians to compete, in an effort to make sure that elected officials work on behalf of the public. Ours isn’t perfect, but the system does a decent job. Voters don’t fire incumbents often enough, one might argue, but political officials know that they can.

While republics strive for responsible government, fascism imposes competition on the people, to render them accountable to the elite– against which no one and nothing can compete.

What about competition within the elite? Surely, that must happen, even under fascism– right? Of course, it does. The same divide-and-conquer techniques that fascism uses against the public, the dictator will use against his lieutenants and middle managers. Such bureaucrats and seneschals are happy to squabble for the boss’s favor. However, there’s one rule, and it’s absolute: the competition can never be seen from below. (As a corollary, mid-ranking hierarchs cannot court popular support.) Court intrigue within power is fine, so long as it stays there. To the public, though, they must present a unified front.

Fascism requires this unity among power because it does not present itself as a brand of politics. Rather, fascism is bigger (as in, more totalitarian) but also harder to see than regular politics, toward which it projects disdain. It presents itself as post-political. Current exigencies, it argues, require a union of power to make swiftly the decisions that are inevitable and beyond appeal. Those decisions could not, it must always say, have been made any other way. If people became aware of a debate within power, this would suggest that alternatives existed, and the sense of inevitability in the fascist’s movement would be compromised.

When fascism runs smoothly, the governed do not perceive themselves as under a self-serving elite, or having a repressive government. Authority assures them that, for each concession it demands of them, there were no other options. We had to shoot the protesters, because if hostile nations found out about internal dissent, they’d take advantage of our weakness. We have to fire 5% of our workers every year, because otherwise nothing will get done.

It is shocking how readily people will accept authoritarianism if fed a halfway-coherent argument that there are no alternatives.

I used to write a lot, between 2010 and 2015, about organizational dynamics. As a result, I got a lot of letters from people facing managerial adversity at their workplaces.

I mentioned that fascist governments are mendacious and will present themselves as needed: if they need to seem populist, they’ll seem populist. If socialism is en vogue, they’ll become left-authoritarians. If a veneer of capitalism suits their needs, they’ll take the right. The corporation’s lie is meritocracy, and it’s so pervasive that people believe in it. So, when they face managerial adversity, they believe that “performance” can save them. (It can’t.) Or, they go over the boss’s head, or to the company’s HR department. After all, if it were a meritocracy, it would reward a good employee who rats out a bad manager, right? Of course, that move almost never works. If anything, the afflicted employee gets fired faster.

Corporate “performance” is mythical. It’s a word they made up that sounds objective but, in fact, means whatever the corporates want it to mean. (It is, arguably, unintentionally honest. Succeeding in the corporate world has nothing to do with performance in the sense of being good at one’s job; but it is a performance in the theatrical sense.) Corporate “meritocracy” is a litmus test for ideological compliance and personal loyalty to management. One must not only follow orders, but pledge fealty to inflexible managerial supremacy with every action. In the United States, one must remember that managers do not work for companies. (I’ll bust the “shareholder” myth, some other time.) Rather, companies work for their managers.

So, what happens when these unfortunate people, suffering managerial adversity, attempt to appeal to higher “meritocracy”? They are crushed; the system requires it. The unspoken agreement among corporate bosses is never to let the little people pit them against each other. Whether the little people are right is immaterial. Anyone who tries this must be destroyed. Even if the worker could somehow prove to HR that he was a “high performer” (whatever that means) who had a bad boss, his “boss-killer” reputation would follow him, he would be unable to join another team, and he’d be terminated in time for that reason alone. To do that is to break the one rule the corporates actually care about. Ethics, laws, and even public perceptions have flexibility, but managerial unity must never be challenged.

Fascism, like corporate management, requires a one-party system. It will never allow real elections. It will use the strangest lies to test loyalty; those who value truth too much become a problem that must be dealt with. Even when disloyalty is deserved, for the bureaucrat or manager was incompetent or abusive, fascism will not tolerate it. Fascism would rather kill innocents than risk division from below.

Understanding Complex Societies

In the next essay, I’ll answer the question, “Is the United States fascist?”

The short answer is: No. Not yet, and I hope not ever. The United States is a republic with serious problems, but none even approach the magnitude of state-level fascism.

The longer answer is… more complicated. Whether the 21st-century corporate system’s effete brand of fascism-lite can be transmuted into full-bore national fascism is a matter that remains untested. Our first true “President Corporate America” has been unpopular and largely ineffective. On the left, we ought to use his continuing failure whenever possible to embarrass the milieu from which he came.

It’s easy to understate the corporate threat, because we’ve had “corporate capitalism” for a long time, and for decades it represented no threat to our nation’s freedom at all. Why it has changed requires further analysis, and I’ll cover that in a future essay.

For now, we observe that corporate existence has primed people to accept life under, at the very least, fascism-lite. Our adversaries– people who would impose fascism if they could benefit from doing so– are collecting data, as I write this, on their workers. What do they see? I’ve been in the corporate world, so I’ve seen it as well. To impose fascism is easy. It’s like taking freedoms from a baby.

In the corporate world, when someone is fired unjustly, what do her colleagues do? Do they encourage customer boycotts? Do they threaten to quit unless the wrongly-fired employee is reinstated (or, at least, offered a reasonable severance)? Do they storm the manager’s office, as people did back when they had the courage to handle these things properly? None of the above. They get back to work, as if it had never happened.

What about the increasing totalitarianism that corporate jobs assert over a worker’s time, living arrangements, and (in the age of technology) reputation? Have any of these people pushed back against that? No.

We feel safe, in the United States, because our “professional” middle and upper-middle classes remain notionally liberal. We should not. Their politics is the politics of not being political, which fascists (who present their own aggressive politics as not-political) love. We’ve seen how they respond when put to ethical tests in the lower-stakes corporate game: they reliably fail. What’s going to happen, then, if the stakes become high? If we can’t count on them when jobs are at risk, we surely can’t count on them when freedom and lives are on the line.

The corporate world is full of of-course-I-would-hide-Anne-Frank quasi-liberals who, nonetheless, nod in agreement when some mid-level managerial thug calls one of their colleagues “a low performer”. They probably make up 85–90 percent of corporate denizens, because people of conscience don’t last long. Forgive me for not trusting them to hold society up, should it ever endure an attack of national scope.

In the next essay, we’ll assess in more detail the fascist threats to the United States, as well as why the ostensible liberalism of our popular culture is unlikely to protect us. We’ll also answer one of the most important questions that I have not yet addressed, which is motivation. Why would anyone want to turn this country fascist? What would be in it for them?

It has often been argued that a system like ours is resistant to fascism because it would not bring comfort or wealth to the current elite. To take over such a large country requires massive effort, and the financial rewards are minuscule (at absolute best) from the perspective of an upper class that, materially speaking, already has everything.

That argument is wrong. A nuanced picture of our society, and a psychographic profile of our elite, both of which will come in later essays, will establish their perceived gain– and it’s terrifying.

More relevantly, the vast majority of us, should fascism come to pass, will lose. Some of us, including me, will lose everything. This could become the fight of our lives. For me, for seven years, it already has been.