Panic, Office Work, and What Each Taught Me

I have panic disorder. I get mild-to-moderate anxiety attacks two to five times per week, disruptive panic attacks about once per month, and the severe kind that wear me out (a “go-homer”) every six months or so. It sucks, but it used to be worse. Nine years ago, before treatment, I’d have the hallucinatory, dissociative kind that felt like my heart was about to explode. I haven’t had one of those for almost a decade. These days, they’re annoying interruptions.

The disorder remits slowly. It leaves a lesser shadow of what it once was, but never quite ends. After you experience the five hundred bizarre symptoms that panic can throw– phantom smells, visual flashing, light-headedness, vomiting– you lose your fear of them. The attacks become painful and annoying, like headaches, but they’re not especially scary, per se, to a person who’s been through several hundred. They’re not dangerous; they just suck.

There’s no cure or exit that always works. Sometimes, a glass of fruit juice helps; sometimes, it doesn’t. A walk can make it better, or worse. Usually better, but it feels like a gamble. Reading isn’t usually possible; writing might be, but expect typos. The peak of an attack only lasts for a minute or so; an attack may have one peak or ten spaced a few minutes apart, and the recovery is long: 30 to 90 minutes, usually, before the damn thing is conclusively and safely over.

Do It Calmly

There does seem to be an overarching principle to the long recovery: whatever you do, do it calmly. That’s not much to say, but it’s what I’ve got.

That’s harder than it sounds. When your anxiety goes up to 11, and the fight-or-flight response is out of control, the impulse is to sprint out of a place, not walk calmly as if everything were normal. Yet, walking is often better for escape than running, in this case. Though regular exercise reduces the frequency and severity of panic attacks, I wouldn’t recommend all-out effort during one.

I find that panic produces two desires: one helpful and one hurtful. The first is the desire to complete a task that might make it better: a walk in the cold, going home, playing a mindless video game, or getting to the bathroom. It gives the mind and body something to work on, not so taxing as to exacerbate anxiety, but engaging enough to pass the time and spend some of that surplus adrenaline and neuro-electrical activity. That can be useful. The second impulse, the harmful one, is that raging urge to do it quickly and “get out of” the panic experience as fast as possible. That blistering hatred for the experience keeps it going.

The panicked mind thinks that achieving some task (say, leaving a crowded place) will make the attack end. The process of working toward the task usually helps; but, achieving the task in a hurried, tense, or reckless manner will create more anxiety. One escapes the situation that caused the attack, only to find the same panic in a new theatre (and, then, falsely attribute it to some other cause, back where one started). Sure, you escaped the movie theater, but now you’re having the same attack in a parking lot.

I don’t have all the answers, but the best I’ve got for panic is: find something to do but, no matter what it is, do it calmly. There’s something zen about the notion; more important than what you do is how you do it. That’s an attitude I try to take to most of life. Between health issues and work and random events, I can’t control the bulk of the output I produce. I don’t (and can’t) even know whether I have 60 years or 60 minutes left of life. The quality, I decide. Quantity isn’t up to me.

When I have a panic attack at home, I do housework. It takes twice as long as it otherwise would, and perhaps I do a less-than-stellar job, but the time isn’t wasted. If I have one while working out, I keep doing what I’m doing, in order to dump the adrenaline into exercise. At the office? Well, I still haven’t figured that out. The work never was the problem; it’s the subordinate context that makes office life so terrible. I suspect there’s no cure, for anyone, save sweeping societal overhaul (which will, though potentially beneficial, itself be stressful).

One good thing about panic attacks is that they end. Office life never does, really. Unemployed people have to search for jobs, which is just as stressful; retired people are so worn out by office existence that they often struggle to adapt, after having been institutionalized for so long. For this reason, the year after retirement has an above-normal rate of death by all causes: natural ones, and suicide. Such people have taken themselves out of the office– or, in most cases, been pushed out due to ageism– but they can’t take the office out of themselves.

Do It Tensely (…Or Else)

Let’s invert the directive above. Instead of “do it calmly”, let’s examine the distinctively unhealthy advice, “do it tensely.” You now have a recipe for anxiety– but, to its credit, also one for office survival.

One of the most important social skills in the corporate world is mood mirroring. If the executives are tense, middle managers will be tenser, and workers who want to be upwardly mobile will be most tense of all. Carrying around this anxiety does no good. Nothing gets done faster, and the quality of work is worse than what it otherwise would have been. So what’s the purpose of holding this unpleasant mood? In most companies, management and especially upper management has no idea who’s good at his job and who’s just collecting a salary. Managers and executives are too busy managing up to pay much attention to what’s happening below them. By and large, it might be only 1 in 5 managers who has an intelligent sense of which of her reports are good and which are bad; as for upper management, two or three or eight levels above the workers: forget it. But, they all have opinions. Those opinions are largely formed based on appearances and perceptions: body language, verbal intonation, and various mood cues. As an executive sees it, the people who are running around anxious are team players; the relaxed people don’t give a shit.

High-ranking individuals within companies assess their subordinates based on what they perceive to be merit, but they’re usually picking up superficial cues. They respond strongly to social status. The aversion to cognitive dissonance is sufficient that a person who reliably projects high social status will be viewed as competent and high-performing in any workplace. The problem is that this is hard to do. Everyone’s trying to project high status– it’s less taxing than projecting anxiety– but most people who play this game come across as low-status strivers. Also, groups define status in different ways, and one who projects the wrong kind of status will draw resentment. To be coolly aristocratic might work in a London investment bank; it will inspire anger if you’re a foreman at a steel mill or a programmer in Silicon Valley. Since it takes a long time to discern what kind of social status a group values, and because these tokens are often hard to fudge, it’s more prudent to fall back on the second element: mood mirroring. When the boss seems angry, be angry. If the boss relaxes, relax– but not quite as much as he does, for that’s sometimes a trap. Forget what companies say about themselves being casual, fun, or (gag) family-like places to work. The old “shit rolls downhill” rules about not leaving before one’s boss, and not sitting while he stands, still apply in Corporate America.

If you seem relaxed at work, people assume you don’t care. If you’re the odd man out in the theatre of shared suffering, that’s points off. Substance doesn’t matter. Merit doesn’t matter. Merit? What, you think you’re still in college?

Often, this anxiety comes straight from the top. Corporate executives are like toddlers with guns. They can’t do anything for themselves, but they can cause incredible damage, quickly. As bad as it is to suffer their whims, there might be slight comfort (not much) in the observation that their position is also stressful. They hold notional responsibility, but can’t do any of the real work that powers the company. They’re constantly in a betting position with regard to the performance of the people below them– performance they cannot measure themselves, and which the people they must trust to measure it cannot be trusted to report. The personal stakes are low– if an executive fails and is fired, his buddies on the board will line up another job– but the anxiety is real. And, of course, executives take what anxiety they have and deliberately amplify it. As they see it, a display of a twitchy trigger finger will motivate the people at the bottom.

Thus, corporate capitalism will always be a game where one of the most important skills is the ability to display anxiety in deference to the nervous brats up top. Of course, one shouldn’t go overboard. A full-on panic attack is embarrassing. It suggests that one can’t handle the stress. One who wishes to maximize his corporate survival ought to display about 1.2 times the anxiety level of his immediate superior: not so much as to become the long pole in the tent or seem unreliable, but enough to send the sycophantic message, “When you’re upset, I’m upset.”

To wind down (or avoid) a panic attack: find something (anything) to do, and do it calmly. Try to do it well.

To survive at the office: find something (anything) to do, and do it tensely. It doesn’t matter if you do it well.

One might see why office life wears people out. Most people can’t mirror anxiety without themselves becoming anxious. Making an angry face tends to make a person angry; the same applies to smiling, grimacing, and other expressions. Just holding one’s facial pose tends to create the emotion. Even professional actors do best “in character” and need time to recover from tense scenes, and that’s when they know the events and emotions aren’t real. (In the office, the fear often is real, because most people live on the wrong end of arbitrary but brutal power relationships.) Being around people who at least have to pretend to be anxious will make most people, in accord, actually anxious.

Of course, though I hold a low opinion of corporate executives, I don’t intend to assert that they want people to be running around with panic attacks. They probably don’t think about that issue at all. (It is rare for a corporate executive to think about anything other than his own enrichment.) That a small percentage of the population would react to standard, mandatory office anxiety in a painful and counterproductive way is, from their perspective, tolerable suffering since they are not the ones who suffer. It is not within their intent, though. They are ignorant assholes, but almost never deliberate oppressors.

This particular divergence between what is mentally healthy and what is necessary to survive the subordination to rich people that we call “work” is not a unique one, and I’d argue that it’s the tip of a seemingly unrelated but fascinating iceberg: the notion of minimalism.

I can’t do the entire topic justice, but what I’ve realized as I’ve gotten older is that people are generally happiest with few things of high quality, rather than a plethora of low-quality objects (“clutter” or “junk”). A sturdy small house might be better than a troublesome large one, since most people spend the bulk of their home time in a small number of places, while a house that’s falling apart is a constant source of cost and headaches. High-quality objects and experiences, even if few in number, make people happy. Unreliable junk makes people miserable, even if they have a lot of it.

Most people know that, I think. Further, when there are a small number of elements to contend with, one can transform them. Cold, if one is properly dressed and therefore in no danger from it, becomes invigorating. The difficulty and pain of physical exercise can be offset by the design and healthful simplicity of the task: run 10 miles, or deadlift 225 pounds, or swim 50 laps. People do these things for free, with no economic purpose, because they enjoy physical exertion. Yet I don’t know anyone who would move a typical apartment for free. That work is just unpleasant: packing boxes, awkward objects, possessions breaking. Likewise, virtually no one can “transform” a torrent of junk and clutter into something enjoyable.

Office work is the opposite of minimalism. In bulk, it forces workers to contend with low-quality communication, low-quality ideas, low-quality projects, low-quality power relationships; and, in most cases, a low quality of delivered work. It’s busy, but it’s inefficient and often produces things of low or even negative social value. This incessant junk generation isn’t merely tolerated, but celebrated. If people weren’t spewing garbage into the stream of ideas, practices, records and discourse; well then, how would anyone know that they were working?

George Carlin decried our society as one where people buy shit they don’t need with money they don’t have. That’s bad enough, but one can opt out. Workplace clutter, on the other hand? Ha. Good luck with that.

Just as office social demands push people to treat minor shortfalls or risks as if they were personal, existential crises, they deprive people of the right to recognize clutter as such. They force people to contend with, and generate, garbage so their managers can put together status reports on which they’ll be judged not on what was accomplished, but on the number of bullet points and words the report contains.

Largely, this exists because the best way to survive in a corporate environment is to flood the channel. Executives are always looking to cut costs; this usually means cutting people. Savvy corporates learn that it’s best to have complete opacity about how much one does and how much effort it takes to achieve it. Obviously, it’s no good to be seen as lazy; but it’s also damaging to be seen as hard-working, since it can present a person as over-eager, and it can set high expectations. You don’t want executives to know if you work a 2-hour day; you also don’t want them to know if you work a 14-hour day; or if you work a 14-hour Tuesday and a 2-hour Wednesday; or if you work exactly 8 hours every day. There is no upside in being surveilled by people with a hard-on for cutting (and, in many cases, externalizing) costs. Corporates with any sense of self-preservation will ensure that their superiors cannot pinpoint them in time and space.


In the cat-and-mouse game between executives looking to cut costs and workers trying to survive, one of the most common defenses is a channel-flooding attack. Refusing to communicate at all doesn’t work; it’s far more effective, in most cases, to flood executives with so much extraneous information that they can’t help but fall back on the default decision (to delay) with regard to anything that might cause harm.

Consequently, the main activity performed in corporate offices is not meaningful “work” but channel flooding. I don’t expect this to change any time soon.

But, imagine how much better of a world we’d have if this mandatory waste of time and emotional energy were eradicated.


The Seasonality of Workplace Conflict

January is supposed to mark the beginning of a new year, and yet as I get older, I’m shocked by how little changes from year to year. We don’t have flying cars, we still die of the same diseases at around the same age, and most of us still have to go to offices and perform bureaucratic chores for rich people in the name of “Work”. We had a rapid era of progress in society between 1945 and 2000 and then it just… ended. We don’t have more recessions than we used to (in fact, we have fewer) but we also fail to recover from them. Our economy continues to grow, though at a mediocre pace; our society is falling to pieces.

Every year, we each get closer to death, we burn a few billion barrels of oil, and our progress is, on the whole, disappointing. Our corporate masters get richer and their parties more lavish; the upper-middle class sees stagnation, while the middle and lower classes face outright decline. It’s an ugly picture. Yet, people have to spend 2,000 hours per year pretending that something else is the case. They go to work and put forth efforts as if they’re on a fast track to promotion and wealth; most of them (and all of their superiors) know they’re not. Social expectations force them to work as hard as they would if they were to be made CEOs tomorrow, even when it’s clearly not true.

I’ve seen enough of the corporate world to know that there are three seasons in which conflict is more likely than in the rest of the year. One needs to watch out every day in the corporate world; not a day goes by where someone, somewhere, isn’t stabbed in the back. Yet, there are three periods that are dangerous enough to merit special concern: the post-holiday depths of winter, the first warm days in the spring, and those waning dog days of summer that linger on into an autumn of disappointment. The conflicts in each hunting season are different– so are those who tend to hunt and be hunted. It’s worth exploring each.

All of these assessments are based on an American work culture and a New York climate. It’s probable that they apply differently to other climates and cultures.

A Sky Blue As Death (Jan. 2 to Jan. 31)

The holidays end with a crash; what was once a proud and perfectly good young tree is left tossed into billowing darkness, landing with a thud on the ice-caked street, destined either for a landfill, or to be pulp for tomorrow’s TPS reports. People have tired of festivities and fake merriment, but it’s still going to be winter for two or three months.

I’ll discuss this later: poor people pin their futile hopes on the summer, while rich people pin theirs on the early winter: ski season, free-flowing spicy booze, the kiss-at-midnight of the romance novel. (The middle classes venerate autumn, reminiscent of the elite New England schools they want their kids to attend; hence, the “pumpkin spice” craze.) Disappointment abounds everywhere; it just comes at different times for different people.

Executives return to work on January 2nd, having failed to reconnect with their work-orphaned children and having been asked too many unwanted questions about their work (since they’d freely admit, after a few drinks, that they don’t know what they do, but that it’s clearly socially harmful). They’re annoyed by the failure of the holiday season, but also amped. It’s a New Year; a new chance.

Corporate executives are predatory creatures. Their primary goal is to take risks where the benefits accrue to them and the (probable) losses are externalized to someone else. Usually, their companies suffer, but this doesn’t bother them. If they’re any good at being executives, they’ll be promoted away from their messes before anyone notices what happened. January is the perfect time to make a mess and try to turn it into personal profit. If it doesn’t work, one has the whole rest of the year to clean it up. So, what do executives do? The only thing they know how to do: reorganize. This often involves firings, demotions, unwanted changes in responsibilities, and various forms of cost-cutting.

People tend to reflect most on their mortality at two times: their birthday, and the New Year. A year has passed, they think, and what have I done with it? Executives aren’t immune to this. They see VPs who’ve become SVPs and EVPs and C-Words in record time, or founders who’ve raised $100 million while they’re scrounging along on a Series A, and they compare their lap times to others, and they get pissed off. They take it out on their workers. Clearly, they reason, their inferior career results (relative to what they consider their peer group) are the fault of the people below them.

If there’s something good to be said for the January conflicts, it’s that they tend to stay within the first month of the year. The anger tends to flow downward from the top, which means that the people most pissed off can act quickly, and so it ends fast as well. Workers get fired and move on, and by the beginning of February, it’s over. The same can’t be said of the other two conflict seasons, which come at unpredictable times and tend to linger.

Slushy Times — Low Conflict (Feb. 1 to March 15)

After the January reorganizations, companies find a new, slightly lesser rhythm, and it becomes “the new normal”. The sky is gray, the sun is faint, and the world whirrs along, the season less oppressive each morning. Situation normal. Ah, back to work. Add a shot of espresso to my chai, please.

Then spring comes along and everything gets fucked up again.

Breakup Season (March 15 to May 15)

This is the only conflict season, of the three, that seems to be driven by weather rather than the cultural aspects we ascribe to seasons. Office people spend so much time indoors that climatic seasons aren’t that relevant. Holiday stress and post-holiday misery will be about the same in San Diego or Miami as in Boston or New York. Summer, as well, is cultural more than climatic; it’s the time of freedom for schoolchildren, the period of barbecues and long evenings, and in North America, the time of our high national holidays (May 5 in Mexico; July 1 in Canada; July 4 in the U.S.)

“Spring fever”, on the other hand, seems to come from the weather itself, and spring weather in the U.S. is notoriously unpredictable. March 15 might be 80 degrees; it might be the day of a blizzard.

In college, spring was “breakup season”. People tended to leave unsatisfying relationships in search of better partners. “Men shed their beards, women shed their men,” the saying was. It’s also a rutting period. “Alpha male” conflicts are common, and since men and women are far more similar than they are different, I imagine that alpha female conflicts occur with similar frequency. This “spring fever” period is short– perhaps three weeks– but it’s unpredictable when it will arrive. It may not exist in tropical climates; I’m not sure.

Winter conflicts tend to come from above: management is pissed off and acts; and, although the company is worse for it– because, please note, executives do things for the good of themselves and never that of the company– it’s over quickly. Summer conflicts tend to come from below; dashed proletarian hopes make August anything but. Springtime hostilities come from everywhere. No one is safe. As people come out of hibernation, they assess their social status and, if unhappy– which, in the artificial scarcity of office life, they always will be– they will fight over turf.

Not all of these conflicts are directed at improving salary or career potential. If two men are competing over a “work wife”, one of them will get the other fired. There’s no economic reasoning to this sort of thing, and that makes it dangerously unpredictable.

This is perhaps the meanest of the three conflict seasons, since it seems to lack purpose. Most of the fights, though their results are brutal and jobs are lost in them, are over nothing. Though 90 percent of reorganizations and layoffs are poorly thought-out, ill-intended, and harmful, people understand that corporations actually need to do these things in order to survive. Winter conflicts tend to be impersonal: people get laid off, but they get severance. In the typical January conflict, no one’s trying to hurt anyone. Spring conflicts, in the workplace, have a personal flavor to them: people are trying to take their rivals down. They’ll fight over the silliest things; or they’ll fight over nothing. The least politically astute tend to fail hardest in this time, because they’re drawn into conflicts that seem to be about one thing but are, in fact, about something else. Efforts to resolve the notional problem often worsen the real one, as the makers of the problem profit when their rivals misestimate their true motives.

Intern Season — Low Conflict (May 16 to Aug. 15)

Spring’s conflicts die out, the fever breaks, and people find themselves exhausted enough to be satisfied, for a while, with their new positions in the pecking order. Those who’ve lost rank (if not fired) will start to look for other jobs, and those who’ve gained position will take a breather. As a result, there’s a low-conflict period in the early summer. People start to go on vacations.

This low-conflict period may be endangered. Why? As the American work culture gets meaner and stupider, taking vacation gets more dangerous. Startups offer “unlimited vacation” knowing people won’t take more than one week at a time. People eschew long vacations not because they’re scared of their bosses, but because they’re afraid of their co-workers snatching turf while they’re gone: better responsibilities taken away, and worse ones delegated onto them. If American work culture continues as it has, and vacations go extinct as they have among Japanese salarymen, we can expect this to become a meaner and more bitter time, as people grow pissed off over losing what used to be a relaxed season.

In the more prestigious companies, there’s another factor: interns. The company wants to put on a nice face. Elite college students, and recent graduates, have the backing of what’s effectively a union for smart people. If one Harvard or MIT kid has a bad experience, recruiting at the whole college will be more difficult for years. Corporations like to play nice during the summer, because they don’t want to look bad in front of the young people they need to recruit (bait, then switch) in order to get their grunt work done.

Failed Harvest (Aug. 16 to Nov. 15)

Ah, summer. Catching fireflies, building tree houses, telling ghost stories. Lemonade, water balloon fights, and freedom that lasts forever…

…then you grow up, and become an office drone, and summer is just shit. Hazy, humming, tepid shit.

As I said, the rich executive types tend to pin their false hope on the holiday season, while the poors and subordinates venerate the summer. It disappoints. It always does. The weekend gets rained out, or is too hot; that 78-degree sunny day happens on Tuesday and is therefore worthless. The stack of novels one intended to “get around to” remains unread, because who has time to read after capitalism eats its fill of one’s time and energy?

January conflicts tend to be top-down; executives are looking to make cuts and changes that will enable them, if random fluctuations turn in their favor, to claim credit. Summer conflicts tend to come from the bottom. The poors collectively find themselves thinking, Well, summer is over, and what a lousy one it was. This begins around August. Company-wide mailing lists and Slack chats blow up. Passive aggression mounts. Terminal middle managers, stewing about their lot, decide which peons they’re willing to sacrifice in the next “harvest”. One doesn’t feel the snappy bitterness of the executive-initiated January conflicts, or the aggressive head-butting of spring fever; this is more of a dull, dog-days warping, like train rails bent and made useless by heat. The intensity is low, but it goes on for a long time.

As the daylight wanes in August, people realize how little time they spent outdoors because they were cooped up in an office, because that’s what adults are supposed to do. September’s just sad; it used to be a time of new beginnings, and now it’s more-of-the-same with less daylight. Then comes October, when one sees the pretty foliage on the way to work and promptly forgets that it’s out there. Sneezy, drippy November whirls in and, in an open-plan office, everyone gets the sniffles.

Dismal Gray Merriment — Low Conflict (Nov. 16 to Jan. 1)

To be an office peon in November is truly depressing. It’s dark when one leaves work. Yet it’s a shared depression. It hits everyone the same. Summer is a distant memory (until it is spring and that failed summer was just yesterday) and the weather’s not that cold, but it’s not warm either. Then come holidays and travel (to the often colder Jobless Interior, where most people grew up, and which would be a decent place to live if there were, you know, still jobs there… like in the ’70s) and alcohol-fueled parties.

This is a period of low conflict. The holidays are distracting, and performance expectations are low. It’s unpleasant to have to go to work, because the sunny, warm afternoons are becoming rarer, but it’s not too bad to be there, because there aren’t a lot of executive hard-ons poking around either.

In American work culture, people try to take turf in a colleague’s absence. If someone goes to Australia for three weeks, he might come back to find that someone else has become his boss, or that some smiling dickhead took over his plum project. Yet, most absences in the holiday period are due to the mild but annoying illnesses that circulate in the winter. Because performance expectations are low toward the end of the year, it’s not a time when there’s high demand to go on vacation.

That most absences are due to transient illness makes it difficult for people who would vie for the absentee’s turf, because they don’t know how long a person will be out. It’s hard to campaign against someone when you don’t know how much time you have. Moreover, a young person who never takes vacation can attack someone who’s in another country, without fearing retaliation; but colds and flus and stomach bugs get everyone. It’s socially acceptable, in American work culture, to fuck someone over while he’s on vacation, but someone who steals from the sick will have it done right back to him when he gets the dog flu a month later.


This topic deserves no conclusion. After all, the droning cycle of office competition– with petty motives; but, often, catastrophic results– does not end. If it has no conclusion, with one misspent year rolling soundlessly into the next, should an essay on the topic? What is there to say?

I describe here what is, not what should be. Office culture should not be. It’s a blight. It’s useless. It’s not even very productive; robots will obviate it soon, and any fight against them will be a losing one. Yet, so long as people must survive and endure it, there will be value available to the world in analyzing its ebbs and flows. That said, this cycle of winter bitterness, spring mania, summer disappointment, and autumn wretchedness serves no purpose. It is not natural; in failing to extinguish subordination to the rich as a necessity for survival, we as a species have created it. It sucks, and I do not expect it to change. Storms and droughts end, but their causes do not.

Happy New Year?

2017 Revealed Executive Predators in Entertainment; High Water Mark for Corporate Capitalism?

I made the joke earlier that, at the end of 2016, someone asked God to stop killing celebrities. She listened, and spent 2017 revealing the creeps.

This could be– I use the words could be, because predicting the future is impossible and the risk of future embarrassment is high when one tries to do so– the end of an era; it’s possible that 2017 is the high water mark for Corporate America.

I will probably never be a billionaire. It’s not my inclination. If I could get a reliable middle-class life by writing, and never have to subordinate in the false name of “work”, I’d probably do that instead. I care more about cultural influence and personal integrity than a bunch of expensive toys I’d get used to in fifteen minutes. Yet I know that others are not like me. There are people in the world who wish to dominate, to humiliate, and to grow in power without bound. It’s not material wealth that drives them; like anyone else would, they bore of the toys quickly. It’s the subordination of the world that energizes them: a private view of a well-bred, Ivy-educated girl willing to debase herself.

We can’t end sexual harassment in Corporate America without ending Corporate America. This is no obstacle; we should end them both.

Let’s take note of what we’re up against. Harvey Weinstein infamously said, “that was the culture then.” With more secrecy in it, it’s the culture now. Understand this: to the sorts of people who become corporate executives, there is no such thing as an “abuse of power”. What we call abuse, they call power. Their imagined right to use their social and economic position for sexual gratification is, for many of them, the executive perk they care the most about. They’ll give up the private flights and bully pulpits before they give up hope of debasing a woman (or, in some cases, a man) half their age.

Everyone gets funneled into the corporate game, and most decent people hate it. Some decent people beat the odds against them and succeed, and even after that, they hate it. Yet there are a few people who are energized by endless social competition, zero-sum status games, and the plethora of pointless nonsense that characterizes high bureaucratic capitalism. They will win. The whole process takes energy from normal people until they cease to compete, but there are some who gain lift from it. They don’t mind “working” 100-hour weeks because the courtiers’ games are the only thing they care to do. They’ll always have an advantage. It cannot be fixed. Narcissists and psychopaths will, in the current system, inexorably win and drive out everyone else.

This is why I mock so-called “corporate feminism”, the Lean In nonsense. Of course, corporate feminism is not completely self-contradictory. In theory, a matriarchy could exist that is as oppressive as the patriarchy that exists today (which is, though this does not excuse bad behavior, relatively mild compared to patriarchies of antiquity). Such a society could be just as corporate as this one. A vision of corporate dominion coupled with female supremacy would be internally consistent. But, to most people, feminism means female equality, not supremacy. Gender equality is not compatible with the corporate system as it exists today, because the projection of power matters more than merit and more than anything else that might be a source of power, and society will always deem one gender more capable (contrary to reality, which seems to have endowed the genders with roughly equal amounts of competence) and powerful than the other. The only major difference between men and women is that we look different (obvious and underwhelming, right?), but the post-truth corporate system runs on appearances.

Furthermore, the corporate system cannot survive without executive payoffs that decent people would consider distasteful and wrong. Seven- and eight-figure salaries are not enough. People get used to those, mighty quick. After three months, a person’s salary becomes, to him, an objective entitlement. Material wealth is not enough for these people; they must dominate.

One might say that if a model of society requires people to be mistreated, and that this falls disproportionately on one-half of the population that tends to hold less power for historical reasons, then we ought to scrap that model of society. I agree. Let’s end executive capitalism. In Scandinavia, managers work for companies; in the U.S., companies work for managers. Let’s become more civilized. Do we really need an overpaid, self-indulgent feudal overclass? I doubt it. Society would run better without those idiots in charge.

It is possible– I do not say certain or even likely, but possible— that 2017 represents a high water mark for the self-indulgent corporate capitalism that Donald Trump has exemplified for the past forty years. I’ve heard people crack jokes about “Hollywood executives”. Don’t make that mistake; it’s not just Hollywood. We hear the Hollywood stories, because those tend to have prettier women and uglier men involved, but this runs deep and for every creep who gets caught, there are twenty who’ve gotten away with it. Our system exists in its current form because the most powerful people go to work for reasons other than money (they’ve got enough). Take away their perceived “right” to abuse power, and they’ll lose motivation and drop out of the system, and that’ll be good for everyone. Our society is broken in more ways than I can count right now; we’ve got a lot of earnest work to do, in order to fix it.

Crazy Is Better Than Stupid

Something I’ve observed in the corporate world is that most people lose intelligence and creativity after 5 to 10 years. Sometimes they burn out even faster. The flame goes out; a robot stands where a person once was.

A few of us don’t go that way; we’re the rebellious ones who refuse to turn stupid. We rarely rise up the ranks. On the contrary, our self-protection often gets us disliked, excluded, and even fired.

This midlife cognitive decline is far from inevitable; it’s not biological. Our society romanticizes youth and exception, as if it were the norm to know everything at 25, but people who surround themselves with intelligent company, who continue to read and learn, and who find opportunities for meaningful contribution to the world around them, will continue to grow quite late in life. It’s true that Keats and Galois peaked young; they also died young. The norm is to peak after 40, even in mathematics, which is notable (more in romance than truth) for its early blossoming. Writers, on average, seem to peak after 50. Some of the great ones started after 50.

In other words, people who specialize at getting smart continue to get smarter as they grow older. People who specialize in climbing organizational ladders were probably the smartest when they were in school.

My suspicion is that the midlife cognitive decline one observes in Corporate America is, in addition, a low-grade and otherwise subclinical variety of depression. Is it the severe depression that requires electroconvulsive therapy and leads to catatonia and literal bed-shitting? No, of course not. It’s a rational adaptation to defeat. Though I imagine that major depression was always pathological, its milder cousin might be an adaptive, life-preserving response to transient low social status. Archaeological evidence and present-day observation suggest that violence related to social status killed about 1-in-5 adult men in the primitive world. Mild depression would keep one out of conflict. That dimming and shutting down saved lives. Is it useful now? I doubt it. It seems to have left us with a sleepy, bored, corporate-dominated society that can barely keep its own lights on.

In the business world, we see midlife cognitive decline not only among the vast majority who lose, but also in the small number of winners. Power gets to them and, though it has different effects, it seems just as capable of rendering formerly smart people stupid.

The rebels stay smart. The people who recognize the corporate game as pointless bullshit, who know that even if they get rich it’ll come with a sacrifice of identity that’s hardly worth it because the part of the person that matters will have died, they’re the ones who hit 30, 45, 60, and can still hold an interesting conversation. They’re not immune to depression, but they seem built to fight it. They don’t get stupid. Morose? Yeah. Anxious? Sure. Miserable? Sometimes.

If these rebels speak out, they’ll be labelled crazy. I shan’t unpack that word, nor shall I discuss mental illness and how poorly most people understand what it is and is not. There isn’t time for that, today. The accusation of craziness hurts us, not because we are insecure about our mental health, but because we are moralistic and self-critical, and our negative experiences pile on– that is the essence of any fight, and rebellion is no exception– and this causes a divergence between what we want to be and what we are.

We aren’t, in general, crazy. I’ll talk about that some other time. If we were, though, it’d still be better than turning stupid.

State of Society, 2017 (Part 1)

Writ large, what’s happening in the United States? How did we get from Roosevelt and Eisenhower to Trump? What’s going wrong, what’s going right, and how can we fix it?

The Disappointment

There’s been progress, for sure. We elected a black President, we’ve legalized gay marriage, and the computers we carry as phones are superior to the mainframes of not long ago.

In terms of economic life, though, the 21st century has been one of disappointment. The poor are getting poorer. The middle-class is getting less upwardly mobile. The upper-middle-class is growing out of touch, so much that the lower middle-class hate-voted for a rich psychopath to spite us. The rich are not just getting richer, but greedier and stupider.

In the 1960s, the world economy grew at 5-6 percent per year, and U.S. research and development was a major reason for this. We led. After winning a war in the 1940s, we rebuilt the vanquished countries, instead of exploiting their position of weakness for some short-sighted, zero-sum gain. That approach worked so well that the countries disallowed (by the Soviet Union) from participating are still suffering for it. In truth, the main reason Eastern Europe is poor is not that “communism didn’t work” but, more specifically, that those countries weren’t allowed to participate in the Marshall Plan.

Yet, our country ran off the rails. It’s hard to say if it’s even capitalistic anymore. The Capitalist Party, the boardroom elite, seems no more devoted to capitalism and free markets than the Communist Party of Stalin’s Russia. They, in the Capitalist Party, collude with each other, they vote up each other’s “executive incentive plans” (and golden parachutes), and they generally have the system rigged so a social elite wins, irrespective of market forces or the decidedly middle-class notion of “merit”.

Subtly, in the 1970s, old-style Gilded Age elitism came back, with more force in the 1980s and ’90s, and now it dominates the business environment outright. In my view, this had not much to do with traditional left-right politics; Reagan didn’t cause it, Clinton didn’t slow it, the Bushes were oblivious, and Obama couldn’t stop it.

Trump’s America wasn’t born overnight. Like Trump, it’s been around for decades, and it never really hid.

The Satanic Trinity

For now, let’s ignore the big cultural issues. Plenty of people do. The story of the failing American middle-class comes down to the Satanic Trinity: housing, healthcare, and education costs. No one really gets to avoid what’s happening on these fronts. They affect everyone. So, we’ll talk about all three.

Healthcare costs are spiraling for a simple reason: people want not to be in pain, not to die before their time, and not to inflict sadness and misery on people who depend on them. We have this fiction in mind of efficient markets where fair prices are discovered based on abstract principles: supply (how hard is it to make widgets?) and demand (what value can people extract from widgets?). Often, this works. In many cases, markets are the best solution to the pricing problem.

Here’s an issue, though: in bad times, value becomes binary. What’s the value of water? If you’re fifteen minutes away from dying of thirst, it’s infinite. If you’re healthy and live in normal conditions, it’s practically free. I’d argue that this applies even to money itself. When people feel they have enough, they spend it without much thought. When money’s tight, they watch every cent.

For most commodities, including money itself, the marginal value is either nearly zero or nearly infinite; and, it’s never good to be in the latter situation.

What fuels the hospital-industrial complex? Fear, obviously. People who can afford it will pay whatever it costs not to die in pain of a tooth infection that leads to sepsis. It’s hard to put a number on not dying in the streets of a preventable illness. Moreover, it functions as a middle-class inheritance tax. The rich, who bequeath $10-million estates to their heirs, are largely unaffected by medical bills. Middle-class people who die with $250,000 to $2 million in net worth? Often, their finances are wiped out by end-of-life medical bills. Their kids are too deep in grief to care about money right then; so, that’s the best time for the hospital-industrial complex to strike.

The hospital-industrial complex is, financially, not very different from another, much older and more nakedly barbaric, wealth transfer from semi-rich old people to unscrupulous young people: witch hunting. I don’t believe for a second that educated people in the 17th century actually believed that these old women were hanging out with Satan in the forests of Massachusetts or Bavaria. Witch hunting existed not because everyone believed in that nonsense, but because it was so profitable for the hunter (and the church) to seize the wealth of a person who’d amassed significant money, but who lacked the vigor (and, in a time when a woman’s testimony was given little value, male gender) to defend it.

The medical-industrial complex exists for one reason: old people have money they can’t use and that they have neither the vigor nor reason to defend.

Let’s go to tuition next.

First, let’s admit that college education always had an economic purpose. High-minded ideals? Bullshit. We might wish it were otherwise, but less than 1 percent of people can afford to take four years off from the subtle, gray-beige oppression of economic life.

What differentiated colleges from trade schools is the obliquity of the approach to the individual economic problem. Trade schools provide knowledge that’s profitable now. So long as the trade stays strong, the tradesman is well-off. That’s always been true and it’s still true. There are probably more six-figure plumbers than six-figure art history professors. The issue is that no one can predict which trades will be strong, 20 years out. Trucking is a solid middle-class job that might be automated out of existence in a decade. So-called “blue-collar” jobs are commodity work and no one pretends otherwise. Wages are at the mercy of the market.

The selling point of a college degree is insurance against the vicissitudes of commodity values. Officer of a horse-carriage company, in 1902? Well, you might lose that job soon; but, thanks to your college degree, you have all the upper-middle-class skills necessary to run a car company.

We’re now at a point where there are so many college degrees, and the quality of graduated students is so variable, that college degrees bring very little. Largely, their job-winning purpose is to wash off the negative social signal of not having one.

To be fair in this assessment, the quality of education that’s available has probably never been higher. Just as a $5 bottle of wine today is superior to what Napoleon drank, someone who attends a no-name school and actually does the work will get a better education, in most ways, than someone who went to Harvard in 1980. The Internet has changed so much; it makes the smart, smarter, and the dumb, dumber. (That’s a topic for another time.) I’d even argue that the objective differences between colleges have shrunk– the difference in quality between the elite colleges and the next tier, which was never all that much, is one-third of what it used to be– but the social ramifications have spread out.

I work in the technology industry. In Silicon Valley, there are three schools that matter: Stanford, MIT, and Harvard. It has nothing to do with lay prestige (“Ivy League”) nor with the quality of the schools themselves. In venture-funded software, even Yale (or Princeton, or Dartmouth) is indistinguishable from the University of Michigan, which is indistinguishable from Northern Illinois University in De Kalb, which is indistinguishable from a community college, which is indistinguishable from nine weeks at a “code boot camp”. It’s not about the quality of the educational programs. It’s about venture capital and it’s about connections. If you want to raise a $2,000,000 seed round at 22, go to Stanford and not Yale; in the tech industry, Yale is Siberia.

These prejudices vary by industry. Consider traditional publishing. (I mean, don’t consider it as a career, unless your uncle is a power agent; but let’s examine it.) Yale and Princeton have incredible pull, whereas Stanford is openly mocked; MIT might get you a 6-figure advance, but only if you’re writing science fiction. If you want the New York Times to review your memoir, I Took Classes, then go to Yale and not Stanford; in trade publishing, Stanford is Siberia.

I’m sure that Stanford’s English program is excellent… and, besides, one doesn’t become a great writer by taking classes, but by reading and writing, which you can do anywhere. I’m sure that Yale’s undergraduate computer science is very good. You can get a rock-solid education at a state flagship college, a top-50 or so liberal arts college, et cetera. If you have the work ethic, you don’t necessarily need to go to college; there’s plenty of excellent material online. It’s probably no surprise, then, that the tuition bubble isn’t about education. It’s about connections. It’s about extracting personal benefit from the low-level corruption that exists in almost any society.

So why are tuitions rising? Why do people pay $65,000 per year to go to college, in addition to hundreds of thousands spent on elite boarding schools, private admissions consultants, and donations? It’s not like connections “weren’t a thing” back in the 1980s. Small-scale corruption has always existed and always will. So what changed? Why is there a market for private college admissions counseling, to the tune of $50,000 per head?

It’s the panic purchase of credibility and connections, as the U.S. middle class dies.

We’ve covered two items of the Satanic Trinity: healthcare and tuition.

I suppose that there is some good news in both. With healthcare, the costs are usually exacted when we have no personal use for the money. There are exceptions. People get cancer at 47 (whoops!) but it’s rare. Most people in Vichy France didn’t die in camps, after all; likewise, the majority of people won’t suffer egregiously from the U.S. medical-industrial complex and its extortive billing practices. Relax, you’ll be fine… well, 95 percent of you.

People live in terror that if they don’t toe the line and stay employed and insured, they’ll be left to die on a hospital doorstep. That happens but, statistically, it’s far more likely that a person will suffer the opposite: being kept alive longer than, if they had their full faculties, they would want.

What about tuition? I suppose we can take solace in the fact that the spending is self-limiting to a degree. Colleges will still admit low- and middle-income students– it helps the rich feel better about themselves, and they can self-segregate by social class once behind the gates– and offer price reductions (to what people can actually just-barely afford) under the guise of “financial aid”. The prices will be calibrated to drain, but not break, the parents.

Behind healthcare billing, the soothing whisper is, “You don’t need this; you’re going to die soon.” Behind tuition, it’s, “Relax, this is what a good parent does; pay out, today, and your kids will be set for life.”

What about housing? Housing is… much more complex.

If you took an unweighted geographic average, you’d find that houses in the U.S. are reasonably priced. Cheap, even. There isn’t anything wrong, either, with most of the places where one can find affordable houses. I’d rather live in the mountains of North Carolina than some overpriced, depressing suburb in the Bay Area that’s three hours (accounting for traffic) from anything worth looking at (i.e. the parts of California that are actually beautiful). Houses are only obnoxiously expensive… where there are jobs.

Oops. This country wasn’t supposed to have a Jobless Interior.

This is where it gets super touchy. It brings in gender and race and politics– why do people in so-called “red” versus “blue” states seem to live in different realities?– and there’s no easy solution to it. I’m about to get into trouble here.

Let’s take a break. We’ll talk about housing in the next installment.


Realistic LinkedIn “Poetry”… the 99%-er Edition

I seem to have missed the LinkedIn Career Poetry that’s apparently a thing. See, I spent 2017 actually working– I have a full-time job, and I’m revising Farisa’s Crossing— and so, sadly, I had little time to pretend to be awesome on a website full of white-collar oversharers.

Apparently, this execrable new genre tends to feature a Zero’s Journey with the following sort of cadence:

I was homeless.

I was fired yesterday.

I was walking home.

I took an Uber.

Someone stopped me on the street.

My boss told me not to take a chance on anyone over 50, but I hired him anyway.

It was Elon Musk.

(Follow-up: he looked at my profile, saw that I was a state school grad, and told me to go fuck myself. We had shared the Uber; I paid.)

Only in San Francisco is a person homeless immediately after joblessness (n.b. “fired yesterday”). In any other city, it takes a while for a person’s life to go to shit. Also, how does one walk “home”, and why is one taking an Uber, if homeless? But, I digress.

Okay, you soppy fucks, now here’s some career poetry for the 99%. I shall master this genre, in order to kill it.

A “performance” plan?
Set up to fail; two weeks hence,
I cleaned out my desk.


No jobs, for it was,
they said, Series A winter.
“Your CV’s on file.”


I couldn’t afford
to keep my health insurance;
I’m now shitting blood.


can beat this bleak Depression.
“Recession,” I mean.


Don’t live in Musk’s ‘hood;
Can’t get no EIR job.
I’m still shitting blood.

In Defense Of Millennials

In early 2001, I read Millennials Rising, by William Strauss and Neil Howe, authors of Generations and The Fourth Turning.

They predicted, in the optimistic 1990s when it was unthinkable, that we’d approach a Fourth Turning, or crisis, this century. This seems to be coming true.

On the other hand, they predicted that my generation (Millennials) would rise to be civic heroes, reversing the trend of institutional decay that began in the 1980s. So far, that doesn’t seem to be happening.

Instead, the most successful among us are not reversing decay, but profiting from it. Mark Zuckerberg and Lena Dunham aren’t successful because they restored a troubled civilization to health, but because they’ve figured out how to thrive in this post-apocalyptic landscape: a world of economic decline, permanent immaturity, and cultural anomie.

The popular opinion about us Millennials seems to be the opposite of what Strauss and Howe predicted: it’s that we’re lazy, whiny, apathetic brats. That’s not true. Nor is it entirely false. The less-than-climactic revelation about each generation seems to me to be that none is worse or better than any other, taken in toto.

We look like shit right now– with Zuckerberg running for President, and Dunham commanding a $3-million book deal as “the voice of our generation”– because the people in the limelight are those who were promoted by Boomers. Given that, how would we not look like shit? What else would one expect? It may change; give it time.

Ascribing moral value to a generation is a tricky business, and I have a hard time buying into it. After all, segregationists like George Wallace and Strom Thurmond were part of the “Greatest Generation”; arch-thug Bull Connor only missed it by a few years. They were repulsive! The organizationally adept Greatest Generation gave us the Rotary Club, but it also revived the Klan. Let’s not white-wash it.

Of course, it’s fashionable for young people to hate the Boomers. In general, I don’t think Boomers are an exception to my statement that no generation is, in an individual moral sense, better or worse than any other. The Boomer 1%, the current global leadership, has been an atrocious nightmare. We cannot tear them down fast enough. But most Boomers aren’t part of the elite that draws this (deserved) hatred. The worst-off victims of our nightmare society are, in fact, Boomers; many of the young will recover from this mess, but the 73-year-old who is bagging groceries despite his bad back, or the 59-year-old programmer who just got fired (“too old”) by his spoiled shithead Xer/Millennial bosses, will not. Toward the vast majority of Boomers who are merely middle or working class, we should feel empathy– not resentment.

Why Boomer Hatred Exists

Why do people hate Baby Boomers so much? They’re blamed for the abrupt decay in the quality of American economic and social leadership. The collapse was brutal, and it continues, but to blame it on one generation is, in my view, somewhat of a mistake. Decay started before Boomers were in charge, under the Silent Generation. It has continued, in Silicon Valley, under Gen X and the Millennials. No one generation deserves all the blame for this mess.

The standard narrative is this: Very Bad Things happened in the 1920s to ’40s, but the Greatest Generation heroically rose up and saved us from the Depression and Hitler, and built us a society with a large middle class. They saved capitalism by integrating what was good about socialism, they sent their soldiers off to college and became the generation of warrior-scholars that made America great. Then, the Boomers, never knowing hardship, came and ruined it because… instead of building on what their parents gave them, they wanted to smoke pot at Woodstock (in the ’60s) and snort coke on Wall Street (in the ’80s) and then rise to the top of Corporate America, poison the environment, and pull the ladder up behind them (in the ’00s). Self-indulgent and narcissistic all the way, they ran our society into the ground. Their elders said that of them, half a century ago; we’re saying it now. Is it true? Self-indulgent narcissists exist in every generation, and I find no evidence that their numbers are worse in any particular one. We should, instead, indict the cultural factors that brought such people, at one point in time, to the top of society.

What’s wrong with the “standard narrative” above? To start, it’s U.S.-centric. Include more countries, and generational theory becomes harder to keep together. I’m guessing that Germany doesn’t call its World War II veterans, “The Greatest Generation”. As for the Baby Boomers, in this country, there’s no question that their leadership has been atrocious. In that regard, they may be the worst we’ve had. Yet, when we slag Boomers– painting them as that spoiled generation that had everything and left us nothing– we forget about black Boomers and gay Boomers and coal miner Boomers in West Virginia.

“Globalism” is sometimes given as an explanation for American decline, but it raises more questions. Globalism is both desirable and inevitable. As a creative, I say: we need every audience we can get. So yes, dammit, I’m a globalist. I wrote a card game, Ambition, that has been published in print… in Japan. I’m writing a book (Farisa’s Crossing, for publication in 2018 or ’19) and most of the readers I’ll want to reach are not in the U.S.

Globalism shall continue; we can’t ignore it. We can’t rewind our economy to 1960. We can’t stop globalism, so we need to do it right.

There’s a perception in the U.S. that globalism occurs at the expense of the American (increasingly former) middle class. Is it true? Not really. The rich, including American rich, are making out like bandits while the middle class shrinks and suffers. We’re losing money to our own top 0.1 percent– not the people rising out of poverty. (Remember: that’s a good thing.) We’re not being stabbed in the back by the middle class of India; we’re being stabbed in the back by our own elite.

Some have argued that our morally restrained “national elite” lost out to the execrable “global elite”: the Davos Men who pine for the Germany of 1937, when fascism was good for business but before it started killing people; the Arab oil sheikhs with harems and child brides; the businessmen in China who bust unions with machine guns; the murderous dictators of sub-Saharan Africa.

For sure, the global elite is disgusting. We must face up to this, though: our national elite is, even today, a plurality contingent of the global elite. The crimes of the world do not come from “savage” people overseas. They come directly from the top of a socioeconomic order that our elite, even to this day, maintains. The global elite are not a cabal; they do not meet in one room, and self-interest explains their cohesion and operation. We do not need “Conspiracy theories” when lower-case-c conspiracies exist all around us and suffice to explain what’s happening. Though no upper-case-C “Conspiracy to rule them all” exists– that’s a fantasy, for if it were true one could blow up the room where they meet– the fact is that a tiny oligarchy (of, perhaps, a couple thousand people) now makes almost all of the important decisions.

We ascribe relative virtue to our national elite, as opposed to the global one, because… let me recite the popular narrative… they got us out of the Depression, they saved capitalism by tempering it with the best elements of socialism, they defeated the Nazis and Fascists and Japanese Imperialists, they gave us cars and spacious suburban homes, and they built a mid-century pax americana. They were charitable, their rich didn’t mind being taxed at marginal rates over 80 percent, and they founded companies using “Theory Y” management, because they cared about their workers. They made America Great, the story tells us, and it was the globalists or the Boomers or the liberals or conservatives (depending on whom one asks) who made us un-great.

We need to understand the era in which we had a relatively virtuous elite. What caused it? What made them operate with such (unusual, as elites go) restraint? Why did they allow the 1930s-80s prosperity to occur?

Our national elite was not born into superior virtue. The American elite of the First Gilded Age was just as crooked and onerous as the global elite of this Second Gilded Age. That should give us hope; if the American elite let up in the 1930s to ’70s, perhaps the global one will let up in the future. Our national elite (the “WASP Establishment”) grew content to be merely rich, as they were in the 1950s, rather than brutally hegemonic, as they are today. Why? During the Depression, there was a real threat, in every country, of communist overthrow. Being rational humans, people in the American national elite chose graceful relative decline rather than the guillotines. Smart call. That made life better for all of us. We got to a point where people, even of moderate means, could afford international air travel. Add technology to that, and we became a global society. It’s not a bad thing, and it couldn’t have been prevented.

Here’s what happened in the 1980s: our young rich met the young rich of other countries, and they felt they came up short. If you’re an American millionaire and you drive 150 miles an hour on the freeway, then crash and kill someone, you go to jail. If you’re an entertainment executive who sodomizes a 13-year-old girl, you’ll be charged with rape. Meanwhile, Arab oil sheikhs own harems, can murder the poor of their own countries with impunity, and import slave labor for domestic help. The mere two-digit millionaires of the American elite met the hegemonic billionaires of less evolved societies and asked themselves, “Why can’t we have that?”

Starting in the late 1970s, the American elite began shucking off moral restraint and pushing the bounds of decency. Drum circles and marijuana gave way to cocaine-fueled Studio 54 elitism. The rich manipulated politics to give themselves tax cuts, turning some of the most effective governments in the world (our federal, state, and local governments) into underfunded, dysfunctional messes. Those who’d climbed the proverbial corporate ladder pulled it up, then learned how to pit the people at the bottom against each other, so they’d ignore what was really happening. In the 2010s, dormant racial tensions re-emerged, as our upper classes relied on old techniques for keeping the poor divided and conquered.

This slow-motion national catastrophe, still grinding on, happened while the Baby Boomers were in charge. Did it happen because they were an evil generation? No. As I’ve said, they have no fewer or more scumbags than any other generation; but, there has been a climate over the past few decades in which bad people have a disproportionate likelihood of rising into leadership roles. We’re becoming a global society and we haven’t yet figured out how to do it right.

Institutional decay: double or nothing

One of the reasons why the future’s hard to predict is that, in any era, there will be things that seem bizarre, out of place, or otherwise wrong. Call them anomalies. A digital something-or-other called a “bit coin” should not be worth $15,000, am I right? Oughtn’t it go right to zero, and soon, having less utility than a tulip bulb? Perhaps. Perhaps not. Tulip bulbs were a great investment for decades.

One might expect anomalies to mean-revert out of existence. Yet, each of those artifacts exists for some reason– little of impact is truly random– and it is often as probable that the anomaly will double itself up, before it gets worked out of the system. Let me be concrete. In the late ’90s, people recognized that dot-com stocks were overvalued and short-sold them. Many of these short-sellers got hit with margin calls and were wiped out. They were right– there was a dot-com bubble– but they timed its end poorly and they lost. As John Maynard Keynes said, “The market can stay irrational longer than you can stay solvent.” Every anomaly has a force behind it, a positive-feedback loop, that drives it to grow. So, when you bet on an anomaly, whether in concert with it or against it, you’re making a “double or nothing” bet.

Furthermore, it is difficult in the grand scheme to know what is anomalous and what is genuine permanent change.

For a brutally relevant example, a society with a large middle class, in which the richest people and the most powerful institutions behave with a reasonable degree of moral decency is, although desirable, anomalous. It existed in the U.S. between, approximately, 1940 and 2000. We are seeing an erosion of that society, as we revert to something more similar to the naked elitism of, say, 18th century Europe. Some have argued that a prosperous society of any kind is anomalous, and that a cause like global warming or fossil fuel depletion will imminently drive us back to the poverty that dominated most of human history. I don’t subscribe to that view, though it is intellectually defensible.

On the other hand, historical trends show technical progress to be nearly monotonic. Even in the Dark Ages (the main losers of which were the Roman elite; average Europeans were hardly affected) technology improved. In 2017, birth control has permanently eliminated our tendency toward involuntary overpopulation. Automation has eradicated the need for many kinds of painful and dangerous labor. So, perhaps the bad times we’re having now are but a minor dip in an upward-sloping road. I find Donald Trump repugnant, but I don’t think he’s anywhere near Mussolini or Hitler. It’s reasonable to conclude that, while the first third of the 21st century will be unpleasant for the American middle class, our global progress toward prosperity shall go unimpeded. Will it? I don’t know. I hope so.

Which analysis is right? It’s hard to say. I’m a short-term pessimist and a long-term optimist. The atrocious economic, political, social and cultural leadership that the United States now experiences will not die out just because the Boomers vacate. Generation X and Millennials are fully capable of continuing the decay. The main reason Millennials have a bad reputation is that, right now, most Millennials in prominence are human garbage– because they’re the ones who were promoted by the Boomer elite. I believe that chaos and probable violence live in our future. The Class War– a necessary process, because the global elite needs to learn the same lesson that the American national one did in the 1930s– will be ugly.

For my part, ugliness is not what I ever wish for. I’d like to see the Class War won by the right side, without violence. Violence begets chaos, and the petty reward of vengeance (however deserving the target) is never worth the risk of harm to innocents. However, history cares not about my preference for nonviolent resolution; it will do what it wills itself to do.

Millennials Rising predicted that my generation would repair the institutional decay that started under the Baby Boomers– a decay that became inevitable once our national elite re-polarized and joined the global elite. I don’t see it happening yet. I see willful continuation of decay. It’s quite profitable; as Littlefinger said in Game of Thrones, chaos is a ladder.

Let’s look at that supposed bastion of innovation, Silicon Valley. The main innovation to come out of venture-funded technology has nothing to do with science, computation, or technology itself. It’s the disposable company. The true executives of this brave new economy are venture capitalists, and so-called “founders” are middle managers who must manage up into Sand Hill Road. The difference is the ease with which a company can be crumpled up and thrown in the wastebasket. Pesky workers want a union? No Series D for you! Founder-level sexual harassment issues causing bad press? Scrap the company, start again, and try not to get caught this time.

An old-style corporation, when it scrapped a project, would find something else for people to do. Workers on the failed project were deemed innocent and would be eligible for transfer to more promising work within the company. The postmodern corporate entity of Sand Hill Road, when it decides a company is unfundable– note that supposedly competing investors, in fact, collude and make decisions in concert– sentences it to death. Jobs end. One cannot meet investor expectations without unsustainable spending, which means that none of these companies will survive unless they continue to raise funding. This, of course, gives investors managerial power, so founders must preserve their reputations among the Sand Hill Road oligarchy at all costs. What happens to the workers, then, when a project/company gets scrapped? You might guess, “They get laid off.” True, but it’s made worse than that. An old-style company would own up to an honest layoff. Venture-funded companies don’t want the negative press, so they claim they’re firing people for performance. The number of companies that claim never to have laid anyone off, but have politically-charged “low performer initiatives” (stack ranking) any time executives screw up and lose money, is astonishing.

In this less-than-ethical climate, institutions rust quickly. People realize their employers have no sense of loyalty or fair play, and they reciprocate. I’d guess that eighty percent of people lie on their CVs, and it’s hard to blame them in an industry where bait-and-switch hiring is the norm, and where dishonesty to employees is business-as-usual. (Lie to investors, though, and that’s “pound-me-in-the-ass” prison.) If a company can lie about the career benefits of a job it offers, can’t an employee fudge his own political success– or, shall we indulge the fiction and use the term “performance”?– at previous jobs? I don’t care to unpack this particular topic; what’s moral is one debate. What is, is what interests me here. We don’t have a culture that strengthens institutions or builds durable ones. We have one that builds flimsy companies that either decay rapidly or “disrupt” some other industry, capturing great wealth quickly at some external expense. We have a culture where everyone lies and no one trusts anyone, and where everything’s falling apart.

The Daily Anomaly

I expect Corporate America to melt down under the Millennials, but I can’t say when it’ll happen. As I’ve said, predicting the future is hard; anomalies can double up multiple times before they dissipate.

Corporate work is somewhat of a joke these days. People spend 8-12 hours per day defending an income and their professional status, and very little of that time is spent actually working. The weird irony of American life is that people’s leisure activities are more work-like than their paid jobs. They hunt, read, write, hike, run, garden, and sail on their weekends. What do they do at “work”? Sit in an office and try not to get fired. If that means slacking, they slack; if that means working, they work. Their only real goal is to protect an income. It’s not intellectually or physically demanding, but it’s obnoxiously stressful. Until we establish a universal basic income (which will save work, not destroy it, just as the New Deal saved capitalism) this will be a reality for most white-collar Americans. We recognize corporate “work” as a stupid game people are forced to play.

Automation will destroy jobs. Good. Fuck “jobs”. If we had a universal basic income, no one would shed a tear about the elimination of unpleasant labor from human life. We don’t miss death by “consumption” in 2017, and no one in 2117 will wish he’d been around to spend 50 hours per week in a box, doing a job that a robot can do using 53 cents’ worth of electricity. At some point, we won’t have to work in the way we do now. We can recognize the grand joke that is American-style office work as an anomaly. Will it go away soon, without pain? I doubt it.

Self-driving trucks are an unemployment time bomb. Consider not only the truck driving jobs, but the jobs in support of that industry. Hotels and restaurants in the Jobless Interior will fold. It’ll be a catastrophe.

Upper-middle-class office workers feel safe from this. Here’s what no one’s yet talking about, and it’s going to hit the whole middle class: inelasticity.

During the oil shocks of the 1970s, the fuel’s supply only decreased by about 5 percent, but prices went up several hundred percent. The same thing’s going to happen to wages, in the opposite direction. Laid-off truck drivers aren’t stupid. They’ll move into other trades, driving wages down. They’ll go into code boot camps. We’ll see wage inelasticity: a small increase in labor availability will cause wages to plummet, disproportionately, and beyond what most people expect. It will ripple throughout the entire middle-class job market. No job is safe. Will there be computer programmers in 2030? Without a doubt, there will be. Will they make the money they do now? I doubt it.
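The arithmetic behind that inelasticity claim can be sketched with a constant-elasticity demand curve. This is a toy model: the functional form and the elasticity value of -0.05 are my assumptions, chosen to roughly match the oil-shock numbers above, not figures from any dataset.

```python
def clearing_price_change(supply_change, elasticity):
    """Fractional price change that clears the market after a supply
    shock, assuming constant-elasticity demand: Q = k * P**elasticity."""
    # Market clearing: the quantity ratio (1 + supply_change) must
    # equal the price ratio raised to the elasticity.
    return (1.0 + supply_change) ** (1.0 / elasticity) - 1.0

# A 5% supply cut with an (assumed) demand elasticity of -0.05:
# the clearing price rises by roughly 179 percent.
oil_shock = clearing_price_change(-0.05, -0.05)

# The same formula run in reverse: a 5% increase in labor supply,
# with similarly inelastic (assumed) demand for labor, drives wages
# down by more than 60 percent.
wage_shock = clearing_price_change(0.05, -0.05)
```

The exact magnitudes depend entirely on the assumed elasticity; the point of the sketch is only that, when demand is inelastic, a small shift in supply produces a disproportionately large swing in price– or in wages.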

This notion may seem far-fetched, but consider some of what our society does already to limit the labor supply, just because the stakes are so high. We imprison so many people, I would argue, for this reason; we are, then, preventing wage collapse, albeit at an unacceptable moral cost. Draconian drug laws keep people off the job market. Within the middle class, the arms race for educational credentials exists toward a similar end: society self-corrects against wage collapse by pushing people into school, thus out of the workforce, even though the individuals pushed into schooling (often, unnecessary for the jobs they’ll be able to get) must accrue unsustainable debt in order to be there.

Self-driving vehicles will save lives. Without a doubt, we want to see them developed and used. However, if we don’t have a universal basic income (and I doubt that we will) by the time this disemployment time-bomb goes off… we are so, so, so fucked.

Green-Eye Island

Millennials recognize corporate “work” for what it is. Yet, they still go. They have no other choice. Why do they put up with it?

Here’s a parable, perhaps a riddle, that explains it; and the counterintuitive answer gets to why it’s so hard to predict future human behavior.

On Green-Eye Island, it’s illegal to have blue eyes (to simplify, everyone has green or blue eyes). If you know that your eyes are blue, you must leave the island at sunset. However, no one discusses eye color, and there are no mirrors. Of the people who live there, exactly 10 have blue eyes. These people are perfectly logical and follow the rules to the letter.

They’ve lived in harmony, each blissfully ignorant of their own eye color, for years. People see others with blue eyes (if they have blue eyes, they see 9 others with blue eyes; if they have green eyes, they see 10).

One day an outsider, the Man In Black, comes to the island and says, “At least one of you has blue eyes.” What happens?

The intuitive answer is, “Nothing.” He is not telling them something they don’t already know. Right? In fact, the answer is: ten nights later, all the blue-eyed people leave. This is a weird result. On the surface of it, the Man In Black offers no new information; yet, he causes a change in behavior.

Why? It works like this. Consider the case where only one person has blue eyes: the information (that at least one person has blue eyes) would be new to her. Not seeing anyone else with blue eyes, she’d know that her eyes are blue, and leave the island that same night.

If one night passes and no one leaves, this means there are at least two people with blue eyes. If so, then each of them will see only one other person with blue eyes and know that they have to leave, on the second night.

So, let’s say that two nights have passed and no one has left. This means there are at least 3 people with blue eyes. And so on. In the example where 10 people have blue eyes, nine nights pass and no one leaves the island. Each person with blue eyes realizes that there are, in fact, at least 10 people with blue eyes… and seeing only nine others, they must leave.

Before the Man In Black came, everyone knew that at least one person had blue eyes, but it wasn’t common knowledge. Common knowledge is stronger than that: it requires that everyone knows, and that everyone knows that everyone knows, and that everyone knows that everyone knows that everyone knows, and so on. In the example above, we have nine levels of “everyone knows”, but not ten… until the Man In Black establishes common knowledge.
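The induction above can be written as a minimal simulation. The only state it tracks is the common-knowledge lower bound on the number of blue-eyed islanders; everything else follows from it.

```python
def departure_night(n_blue):
    """Night on which all blue-eyed islanders leave, after the
    Man In Black announces 'at least one of you has blue eyes'."""
    known_minimum = 1  # common knowledge after the announcement
    night = 0
    while True:
        night += 1
        # Each blue-eyed islander sees n_blue - 1 blue-eyed others.
        # If that count is below the known minimum, the missing
        # blue-eyed person can only be themselves: they leave tonight.
        if n_blue - 1 < known_minimum:
            return night
        # No one left, so everyone infers the minimum is higher.
        known_minimum += 1

# A lone blue-eyed islander leaves on night one; ten leave on night ten.
```

Calling `departure_night(10)` reproduces the result above: with ten blue-eyed islanders, nine uneventful nights pass, and all ten leave on the tenth.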

Played by real people, I doubt this simulation would go on as described. People are not fully logical; they do not immediately deduce all things they could know from the information they have, as that would be computationally infeasible. What would happen if this game were played out with real people? Probably nothing. Not only are we not perfectly logical, but we cannot reasonably assume that everyone else is perfectly logical.

The example above shows, in principle, how the promotion of shared knowledge (everyone knows it) to common knowledge (everyone knows everyone knows everyone knows…) can be powerful.

In Corporate America, there seems to be a similar shift underway, from shared to common knowledge.

Most individuals recognize the absurdity. People enter this miserable contest, chasing the 0.01% chance of becoming so famous, rich, or important that they no longer have to play. It’s ridiculous: they club each other, with the goal of getting out of the room where clubbings occur.

The rewards are artificially scarce and delayed, the game is hopelessly corrupt, and the odds of success are pathetic. As far as game design goes, corporate work is best viewed as an antigame, like an antinovel, but far more artless. While it is (like a game) a process whose main purpose is competition, it lacks the intellectual fulfillment and harmless fun of regular games. Corporate work is not defined by the joy of exploring new territory or deploying strategies, but by the avoidance of artificial unpleasantness: late working hours due to deadlines that serve no purpose, emotionally-charged confrontations over nothing that can nonetheless result in a 100-percent drop in income if one misplays them, et cetera.

What makes games fun (or not) is beyond the scope of this essay, but one factor is their creation of a status hierarchy different from the one in the real world. In a game of skill, the fun is in exploring the game’s structure (architectural and emergent) and climbing the skill ladder– in a meritocracy where an unskilled rich person loses to a skilled pauper. In a game with more luck, the light-hearted fun comes from the fluctuations of the in-game status hierarchy. Even a beginning player might win and be queen for an hour.

The anti-game of work serves neither of those purposes. It is designed not to elect people of merit as winners (as in a skill game), nor to allow serendipitous wins (as in a luck-enhanced “party” game), but to ratify the socioeconomic status hierarchy that already exists in the world– to make an oligarchy appear meritocratic.

Do we need office work for modern society? Probably. We don’t need so much of it. I’d guess that 75 percent of the time and 98 percent of the emotional suffering invested into it is pure waste.

Virtually every thinking person knows that what I’m saying is true. It’s shared knowledge. Yet when a person like me (a Man In Black) risks making it common knowledge, he becomes a pariah. It’s bad for morale. Her “tone” is “shrill”. He’s a bitter loser who just didn’t make it. Et cetera, et cetera, et cetera.

Millennials get a lot of flak for our role in “killing” travel, magazine subscriptions, restaurants, and other things we can’t afford because we don’t have the money that the Boomers stole when they ruined Corporate America. We’re called lazy because we don’t invest loyalty if we don’t expect reciprocation. We’re a hardscrabble, post-apocalyptic generation.

Generation X knew the corporate game was rigged, but it hadn’t become common knowledge yet. The morale problem had not achieved public liminality. We’re the ones destroying morale, one nonexistent avocado toast slice– I literally didn’t know that it was a thing; and is it toast with avocado in the dough, or as a topping?– at a time.

We’re the Men (and Women) In Black who come to the island and state what everyone already knows… and that’s why the people at the top of society hate us so much.

Why I Didn’t Do It

Forgive me if I don’t find the best order to use in telling this story. Life is chaos; chronological order may dissatisfy. And, since this narrative continues into the future, I have no idea how it ends.

I left a high-paying job in finance, early in 2008, to work at a startup. I had, one could say, a naive, rosy view of technology and the nobility of its place in society. I believed that if I became a great programmer, I’d both have a positive effect on the world, and earn my own reward. I wrote code, I wrote words, I read a lot, and I worked my ass off. That hasn’t changed.

My first taste of what one could call “fame” came in March 2011. A now-deleted essay hit Reddit and got 30,000 views in two days. In July 2012, I wrote “Don’t waste your time in crappy startup jobs.” At the time, I wasn’t advising people to avoid startups– only to be selective about which ones to work for. That post received 135,000 views in one day, and about 250,000 over its entire life.

Through this, I made some enemies. Advising tech employees to negotiate with employers does not earn love from all corners.

I removed those “hit” essays in February 2016, after receiving a few not-credible but disturbing physical threats. I intended to restore them, but a technical mistake (partly mine) led to their permanent deletion.

If one wants to find them, the Internet Archive (“Wayback Machine”) is what I’d recommend. The problem with my earlier writing on technology is that it has diverged from my interests and, to a lesser extent, from my values. I spent years trying to inject efficiency and integrity into venture-funded, private-sector technology. I no longer have faith that I, or anyone, can improve it. My aim in 2012 was to save it; in 2017, my goal is to minimize harm (especially, to myself) from its inevitable Untergang.

I experienced an aggressive public attack, starting in the fall of 2015. I was “de-platformed.” To wit, I was banned from Hacker News and Quora on false, defamatory pretenses. Why was I banned? It had nothing to do with my conduct on either site. First, I suggested that, instead of enduring the creep of micromanagement and surveillance, software engineers might consider collective bargaining. Second, I wrote a blog post that Paul Graham thought was about him– it wasn’t. Third, Y Combinator abused its power as an investor in Quora to force a ban on my account. It would have shut the company down, costing 120 innocents their jobs, had Quora not complied.

It must seem bizarre that I’m still upset about website bans from two years ago. In fact, I’m glad those sites banned me; they were monstrous wastes of time. I’m disgusted by the defamatory pretenses they used to do so, and the public statements they made. Their goal wasn’t to get me off the sites (I was a top contributor) but to damage my reputation. In a normal industry, such things would have no effect. How many industries or careers are there where a website ban could be used as a reason not to offer someone a contract or job? I can’t think of any, but one: venture-funded technology– that is, startups and ex-startups like Google and Facebook.

Leaving finance for a startup, in 2008, was a failure of career planning. That’s on me, and me alone. By doing so, I locked myself for a time in a weird, incestuous industry where petty gossip drives careers. It is also an industry whose values and mine have diverged.

In early 2016, I was informed that I had been turned down for a job because of these bans. The perception was that I’d been humiliated by Dan Gackle and Marc Bodnick and failed to strike back. This petty gangster shit ought to be beneath me.

I don’t want to “strike back”. I don’t want a damn thing to do with those sick fucks. Revenge keeps you involved. Life’s too short.

I spoke to a public relations specialist about that experience. She asked me what money I would have made if I had gotten the job. I told her. She laughed.

“As smart as you are, you’re concerned about a startup job making $XXX,000?”

It amused her that, the stakes being so low, I’d even care to consult a PR coach at all. Here’s what I had to explain to her: the rest of the economy doesn’t want people from the startup world. (There are good reasons for this; most of us are sociopaths.) Often, we get stuck in it. In the tech industry– startups and ex-startups– it’s usual that one has to change jobs every 18 months to have a career, because those companies don’t invest in their people or promote from within. In real careers, that’s a sociopath’s résumé.

There are many undisclosed dangers of private-sector technology. Yes, it pays well, relative to most other careers, in the first 5 years. Still, it maroons almost everyone by middle age– and “middle age” in tech means 30. The job-hopping résumé that’s necessary in private-sector technology looks terrible anywhere else. Silicon Valley may think that it’s the future, but the rest of the country looks at five jobs in six years and says, “Nope.” Those who enter the startup scene often ignore the high probability of being stuck there. They think they’re younger and more invincible than they really are.

I ought to admit that I’ve never been great at processing the bizarre adversity that started with my first attempts to improve the tech industry. I have nightmares and panic attacks. I Google phone numbers I don’t recognize. I watch my back, especially in large cities.

The anonymous threats, the unjustifiable closing of doors, the necessary vigilance… that took a toll on me in 2015 and ’16. For an example of what I was going through, a homeless person in San Francisco chased me, brandishing a stick. He told me not to “fuck with” certain people, whom he named.

I hit rock bottom around March of that year. It wasn’t that I gave credibility to the death threats. Those came from high-placed people in Silicon Valley who had too much to lose, and I lived in Chicago, so I perceived myself as out of their way. Looking back on it, their objective wasn’t physical harm. Their work was incompetent and that was intentional. Rather, they wanted me to speak up. They knew I would, and I played into it. Why? Because it sounds utterly fucking nuts. If I stand up and say that, one time in San Francisco, a person associated with Y Combinator sent a homeless man to harass me, I sound insane. It seems bizarre and unreasonable, because it is. However, it happened. I wish I were making it all up.

Even I have trouble integrating these experiences, years later. I’ll confess to this: the other-than-real aura of certain events in the 2010s has led me to seek professional assistance in their processing. The normal reaction to abnormal occurrences, sometimes, requires that.

At that rock-bottom point in March, I was considering my own exit. Why? When I wrote about open allocation, or organizational dynamics, or programming languages, I held a certain opinion. Namely, that private-sector technology was a well-intended but wayward industry. There were bad guys, sure, but good guys as well, and the good guys could win.

Quora seemed to be the good guys. (Ha!) Even Y Combinator seemed, at one time, to operate with moral decency. I had this sense of computer programming as a noble activity; we were automating away worlds’ worth of undesirable work. I learned, abruptly, that I was wrong about almost everything. I realized that I’d invested almost 10 years in an immoral career.

Our other favored debates seem so small, in comparison. One can argue about the merits of Haskell versus Python, or Bayesian models versus maximum likelihood, but to what point? These technical matters are hills of sand compared to the shit mountain that is our industry’s ethical failure.

I had a hard time accepting the role I had played. Yes, I experienced death threats and attempted blacklisting. From an objective external perspective, I’m not a sympathetic party. First, I chose to work in the tech industry. Second, by revealing unethical and illegal activities to the public, press, and authorities, I “bit the hand”. Third, my experiences raise questions but don’t answer them. I’ve proven corruption in Silicon Valley; do I have a fix for it? I don’t. Fourth, I must confess to my immaturity while the worst fights (2011 to ’15) were going down. In one case, my revelations of illegal practices led to numerous successful lawsuits against the company. Am I a hero? Nah; I did it to settle a grudge. I did a good thing, but my intentions were pedestrian. If I represent my story with honesty, I must admit this.

So, there I was, in March 2016, doubting whether I wanted to continue existing. Harassment and defamation from people who are powerful in one’s industry has that effect. Believing you’ll never get a decent job again (false, proved later) because of a Quora ban (tech is petty; it’s plausible) has that effect. Spoiler alert: I’m still alive. As a general rule, I’m not suicidal, for two reasons.

First, while I don’t subscribe to literal religion, I find it plausible-to-likely that (A) there’s more to consciousness than we see on the surface, and (B) that my conduct in this life matters. So, I see no upside in self-violence, even when it tempts. There’s no guarantee, in any event, that it provides the cessation of existence that, in darker moments, I might desire. Whereas there’s a certainty of emotional harm to people who remain.

Second, when I get to that point, I often pretend I am dead or dying, just for the exercise. “I’m dead already; what do I do now?” We’re all terminally ill, after all; we just don’t know the timeframe.

Usually, I can come up with something worth doing. Perhaps it’s as pedestrian as cleaning the cat’s litter box. I ask myself how much life, in the current state, I can tolerate… and then figure out what I can do in that amount of time. Let’s say I decide that I can tolerate 6 more months. If I rushed and left editing to posterity– I have too much pride to do that, unless necessary; but it’s what I’d do if diagnosed with a terminal illness– I could finish my novel, Farisa’s Crossing, in half that time. That’d give me a valid reason to kick around for a few months, right? I find that, once I get to work on something I care about, that wish for a long sleep (which may or may not be what death is) dissipates.

It was at this bottom of night that I started writing Farisa’s Crossing. I figured it’d be a 60,000-word book. After several rounds of revision, and several to go, I’m on target for 175,000. That’s only the first book. I expected the amount of work involved in writing a significant (as opposed to merely publishable) novel to be high. It’s much more than I expected, but it’s fun work. As Camus said of Sisyphus, “One must imagine [him] happy”.

I found that I enjoy fiction more than I enjoyed tech writing. I’ll be publishing it in a year or so. There’s a lot to figure out, on that front. We live in a time where some of the best work is self-published and where any celebrity could get a prestigious house to print garbage. So, I view the process as unpredictable. My job, though, is to write significant work– and maybe, for once, give some value to what I’ve experienced.

Over 2016, for reasons mixing protest and privacy, I accelerated my own de-platforming. It was bad for my reputation to be banned from Hacker News and Quora on the defamatory pretenses that were chosen, but it was good to be banned from them.

What I realized, that year, was that the addiction to internet microapprovals had damaged my focus. It became hard to read, much less write, significant work. Ten thousand words became “too long” to read. In online magazines, even for excellent, enjoyable articles, I’d find myself checking that side cursor for total length. “Are we there yet?” “Are we there yet?” Social media feeds the monkey mind. It leads to a loss of discipline.

I quit Twitter in November 2016. Like I said, there was an element of protest, and this may have been rash. When you’re publishing a book, you need “platform”. I burned mine down. I had 2,600 followers. If I joined again, I’d start at zero.

Now, I am facing the question of whether and how to “re-platform”, as I want Farisa’s Crossing to succeed. Should I rejoin the world of 140-character insights and @-mentions? Should I start batting out 750-word blog posts that say the same thing as one from three years ago, but might “go viral” this time?

I know I can “re-platform”. I could get 10 times the attention I had at my peak. But at what cost? When I used social media, I developed unhealthy obsessions: famous followers, page view counts, blue fucking checkmarks. Do I want that in my life again? My sense is that I don’t.

It might be my age, but I enjoy books more than websites these days. Some promises of technology have been fulfilled. Most have not. The industry sucks, and it’s not getting better.

What ought to have been the first sign of broad-based moral corruption was this: in 2011, I remember someone saying she wanted to “demolish” a competitor. Not “we’d like to build a better product” but “we want to end them.” (They’re still around.) See, it’s valid and usually moral in business to compete. If another firm suffers because one offers a superior product, that’s not something to be ashamed of. However, taking joy in the other’s destruction– or, in today’s language, “disruption”– seems perverse. Why wish for another’s failure, as opposed to pursuing one’s own excellence?

It’s a sad fact, but most of what we do in technology is destructive. Very few of us make new things under the sun. Most of us make business processes cheaper. There’s nothing wrong with that in principle; we might think, naively, that the value we create would be invested into research and development. That’s not what happens. Businessmen lay people off to pay their own bonuses. We’re the ones who make that possible. Society gets worse with each iteration, and it’s our fault.

Then, is it a surprise that we fail to arouse public sympathy when we can’t afford houses in the Bay Area? Or when we suffer age discrimination at 30?

I don’t know what life’s ultimate purpose is. Though I don’t subscribe to literal religion, I tend toward anti-nihilism, like Farisa. There must be a purpose, I can’t help but feel. What is it? It’s not to destroy.

Life’s purpose is not to code people out of jobs. It’s not to wreck the reputations of innocents on social media. It’s not to get people addicted to meaningless social microapprovals. Whatever imperative I can find, in the moments when the darkness goes away, points in the opposite direction.



There’s a game called Universal Paperclips in which one plays the villain: a paperclip maximizer, or an AI whose purpose is to make as many paperclips as possible, at any expense. The result of this, should the thought experiment become real, would be our own quick death; the machine would want our matter for its own work.

As someone who’s worked in the tech industry for more than a decade and has nothing to show for it, I think of paperclip maximizers often. I didn’t get rich. I didn’t change the world. I know approximately 47 programming languages, but who cares? I’m 34 years old, and the vast majority of my time in this industry has been pure waste and an embarrassment.

There’s one thing I got from the tech industry. Although I developed the illness beforehand, my panic disorder really came into its own thanks to open-plan offices and startup health insurance. It didn’t help that, when I was finally on the mend in 2011, I joined Google and had a manager who provoked attacks for his own amusement. That was fun.

If I hadn’t gotten myself stuck in the tech industry, the condition would have fully remitted by now, if not several years ago. Instead, the fight has gone on for a decade, and I’m not fully out just yet.

So, my souvenir from the tech industry is, rather than some neat futuristic bauble, a defect in an ancient part of my brain, the amygdala.

When I grew up, in the 1980s, we learned about what technology might do one day: holiday lunar trips, robot servants, an end to illness, certainly an end to work except for the most rewarding kinds of it. And what have we actually achieved? Fucking Bitcoin. A 140-character President. Literal fake news. That’s what we have to show for ourselves.

As private-sector programmers, we’ve unemployed a lot of people: we’ve annihilated hundreds of millions of jobs. Some of these people got better jobs; many didn’t. We never cared when it was happening to other people, but now it has come for us: “Agile Scrum”, Jira, open-plan offices. The surveillance system we built sits over us, its passive-management eye always watching.

In what we do, as private-sector programmers, where is the honor? There’s none. We are a failed tribe that has made rich people richer– even at our own expense. If we’re lucky, our work will be erased and we will be forgotten.

This may explain the Fermi Paradox. Perhaps there is a plateau of mediocrity at which, though a civilization could continue to innovate, it chooses not to. Perhaps it goes the way not of violence, but of bored purposelessness. Perhaps we are not totally alone in the universe, but all those other supposedly intelligent civilizations are mired in thousands of years of user stories and TPS reports. Seems unlikely, right? Sure. But it’s even more absurd that, having sent a man to the Moon using 1969 computers, we’re using supercomputers to run Jira and do “user stories” in 2017.