Java is Magic: the Gathering (or Poker) and Haskell is Go (the game)

It may be apocryphal, but there’s a parable in the Go (for this essay, I will never refer to Google’s programming language, so I’m talking about the ancient board game) community in which a strong player boasts about his victory over a well-known professional player, considered one of the best in the world. He says, “Last month I finally beat him, by two points!” His conversation partner, also a Go player, is unimpressed. She says, “I’ve also played him, and I beat him by one point.” Both acknowledge that her accomplishment is superior. The best victory is a victory with control, and a margin of one point shows the most control.

Poker, on the other hand, is a game in which ending a night $1 up is not worthy of mention, unless the betting increment is measured in pennies. The noise in the game is much greater. The goal in Poker is to win a lot of money, not to come out slightly ahead. Go values an artful, subtle victory in which a decision made fifty moves back suffices to bring the one-point advantage that delivers the game. Poker encourages obliterating the opponents. Go is a philosophical debate where one side wins but both learn from the conversation. Poker is a game where the winner fairly, ethically, and legally picks the loser’s pocket.

Better yet, I could invoke Magic: the Gathering, which is an even better example for this difference in what kinds of victories are valued. Magic is a duel in which there are an enormous number of ways to humiliate your opponent: “burn decks” that enable you to do 20 points of damage (typically, a fatal sum) in one turn, “weenie decks” that overrun him with annoying creatures that prick him to death, land and hand destruction decks that deprive him of resources, and counterspell decks that put everything the opponent does at risk of failure. There are even “decking decks” that kill the opponent slowly by removing his cards from the game. (A rarely-triggered rule in Magic is that a player unable to draw a card, because his active deck or “library” has been exhausted, loses.) If you’re familiar with Magic, then think of Magic throughout this essay; otherwise, just understand that (like Poker) it’s a very competitive game that usually ends with one side getting obliterated.

If it sounds like I’m making an argument that Go is good or civilized and that Magic or Poker are barbaric or bad, that’s not my intention, because I don’t believe that comparison makes sense, nor am I implying that those games are bad. The fun of brutal games is that they humiliate the loser in a way that is (usually) fundamentally harmless. The winner gets to be boastful and flashy; the loser will probably forget about it, and certainly live to play again. Go is subtle and abstract and, to the uninitiated, impenetrable. Poker and Magic are direct and clear. Losing a large pot on a kicker, or having one’s 9/9 creature sent to an early grave by a 2-mana-cost Terror spell, hurts in a way that even a non-player, unfamiliar with the details of the rules, can observe. People play different games for different reasons, and I certainly don’t consider myself qualified to call one set of reasons superior to any other.

Ok, so let’s talk about programming. Object-oriented programming is much like Magic: there are so many optional rules and modifications available, many of them contradictory. There are far too many strategies for me to list them here and do them justice. Magic, just because its game world is so large, has inevitable failures of composition: cards that are balanced on their own but so broken in combination that one or the other must be banned by Magic‘s central authority. Almost no one alive knows “the whole game” when it comes to Magic, because there are about twenty thousand different cards, many introducing new rules that didn’t exist when the original game came out, and some pertaining to rules that exist only on cards printed in a specific window of time. People know local regions of the game space, and play in those, but the whole game is too massive to comprehend. Access to game resources is also limited: not everyone can have a Black Lotus, just as not everyone can convince the boss to pay them to learn and use a coveted and highly-compensated but niche technology.

In Magic, people often play to obliterate their opponents. That’s not because they’re uncivilized or mean. The game is so random and uncontrollable (as opposed to Go, with perfect information) that choosing to play artfully rather than ruthlessly is volunteering to lose.

Likewise, object-oriented programmers often try to obliterate the problem being solved. They aren’t looking for the minimal sufficient solution. It’s not enough to write a 40-line script that does the job. You need to pull out the big guns: design patterns that only five people alive actually understand (four of whom have since decided that they were huge mistakes). You need to have Factories generating Factories, like Serpent Generators popping out 1/1 Serpent tokens. You need to use Big Products like Spring and Hibernate and Mahout and Hadoop and Lucene regardless of whether they’re really necessary to solve the problem at hand. You need to smash code reviews with “-1; does not use synchronized” on code that will probably never be multi-threaded, and you need to build up object hierarchies that would make Lord Kefka, the God of Magic from Final Fantasy VI, proud. If your object universe isn’t “fun”, with ZombieMaster classes whose constructors immediately increment fields in all Zombies in the heap and whose finalizers decrement those same fields, then you’re not doing OOP right, at least as it is practiced in the business world, because you’re not using any of the “fun” stuff.

Object-oriented programmers play for the 60-point Fireballs and for complex machinery. The goal isn’t to solve the problem. It’s to annihilate it and leave a smoldering crater where that problem once stood, and to do it with such impressive complexity that future programmers can only stand in awe of the titanic brain that built such a powerful war machine, one that has become incomprehensible even to its creator.

Of course, all of this that I am slinging at OOP is directed at a culture. Is object-oriented programming innately that way? Not necessarily. In fact, I think it’s pretty clear that Alan Kay’s vision (“IQ is a lead weight”) was the opposite of that. His point was that, when complexity occurs, it should be encapsulated behind a simpler interface. That idea, now uncontroversial and realized within functional programming, was right on. Files and sockets, for example, are complex beasts in implementation, but manageable specifically because they tend to conform to simpler and well-understood interfaces: you can read without having to care whether you’re manipulating a robot arm in physical space (i.e. reading a hard drive), pulling data out of RAM (a memory file), or taking user input from the “file” called “standard input”. Alan Kay was not encouraging the proliferation of complex objects; he was simply looking to build a toolset that enables people to work with complexity when it occurs. One should note that major object-oriented victories (concepts like “file” and “server”) are no longer considered “object-oriented programming”, just as “alternative medicine” that works is recognized as just “medicine”.
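As a small sketch of that payoff (my example, not from the original argument, written here in Haskell): a function written against the Handle interface neither knows nor cares what device backs it.

```haskell
import System.IO

-- A line counter written against the Handle interface: it works identically
-- whether the handle is backed by a spinning disk, a pipe, or the "file"
-- called standard input.
countLines :: Handle -> IO Int
countLines h = do
  contents <- hGetContents h
  return (length (lines contents))
```

The same function can be handed `stdin`, the result of `openFile`, or a socket-backed handle; that uniformity is exactly the kind of object-oriented victory the paragraph describes.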

In opposition to the object-oriented enterprise fad, which is losing air but not fast enough, we have functional programming. I’m talking about Haskell and Clojure and ML and Erlang. In them, there are two recommended design patterns: noun (immutable data) and verb (referentially transparent function), and because functions are first-class citizens, one is a subcase of the other. Generally, these languages are simple (so simple that Java programmers presume you can’t do “real programming” in them) and light on syntax. State is not eliminated, but the language expects a person to actively manage what state exists, and to eliminate it when it’s unnecessary or counterproductive. Erlang’s main form of state is communication between actors; it’s shared-nothing concurrency. Haskell uses a simple type class (Monad) to tackle head-on the question of “What is a computational effect?”, one that most languages ignore. (The applications of Monad can be hard to tackle at first, but the type class itself is dead-boring simple, with two core methods, one of which is almost always trivial.) While the implementations may be very complex (the Haskell compiler is not a trivial piece of work), the computational model is simple, by design and intention. Lisp and Haskell are languages where, as with Go or Chess, it’s relatively easy to teach the rules while it takes time to master good play.
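To make the “dead-boring simple” claim concrete, here is the type class sketched with its two core methods, renamed (`Monad'`, `return'`, `bind'`) to avoid clashing with the real class in Control.Monad, with Maybe as the instance whose effect is possible failure:

```haskell
-- The two core methods of Monad, under primed names to avoid a clash with
-- the real class in Control.Monad.
class Monad' m where
  return' :: a -> m a                  -- wrap a value; almost always trivial
  bind'   :: m a -> (a -> m b) -> m b  -- feed a result into the next effect

-- Maybe as an instance: the "computational effect" is possible failure.
instance Monad' Maybe where
  return' = Just
  bind' Nothing  _ = Nothing           -- failure short-circuits the chain
  bind' (Just x) f = f x

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing
safeDiv x y = Just (x `div` y)
```

Chaining `safeDiv 10 2 `bind'` \q -> safeDiv q 0` evaluates to `Nothing`: the second failure propagates automatically, with no null checks in sight.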

While the typical enterprise Java programmer looks for an excuse to obliterate a simple ETL process with a MetaModelFactory, the typical functional programmer tries to solve almost everything with “pure” (referentially transparent) functions. Of course, the actual world is stateful, and most of us are, contrary to the stereotype of functional programmers, quite mature about acknowledging that. Working with this “radioactive” stuff called “state” is our job. We’re not trying to shy away from it. We’re trying to do it right, and that means keeping it simple. The $200/hour Java engineer says, “Hey, I bet I could use this problem as an excuse to build a MetaModelVisitorSingletonFactory, bring my inheritance-hierarchy record into the double digits, and use Hibernate and Hadoop, because if I get those on my CV, I can double my rate.” The Haskell engineer thinks hard for a couple of hours, probably gets some shit during that time for not seeming to write a lot of code, but just keeps thinking… and then realizes, “that’s just a Functor”, fmaps out a solution, and the problem is solved.
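The “that’s just a Functor” moment looks like this in practice (a minimal sketch with invented names): one `fmap` works over any container or effect, so the same pure function solves the list, Maybe, and IO versions of the problem at once.

```haskell
import Data.Char (toUpper)

-- One pure function...
shout :: String -> String
shout = map toUpper

-- ...fmapped over different shapes of "context", with no unpacking by hand.
overList :: [String]
overList = fmap shout ["ack", "syn"]      -- over a list of values

overMaybe :: Maybe String
overMaybe = fmap shout (Just "hello")     -- over a possibly-absent value

overIO :: IO String
overIO = fmap shout getLine               -- over the result of an action
```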

While not every programmer lives up to this expectation at all times, functional programming values simple, elegant solutions that build on a small number of core concepts that, once learned, are useful forever. We don’t need pre-initializers and post-initializers; tuples and records and functions are enough for us. When we need big guns, we’ve got ’em. We have six-parameter hyper-general type classes (like Proxy in the pipes library), Rank-N types, Template Haskell, and even the potential for metaprogramming. (Haskell requires the program designer to decide how much dynamism to include, but a Haskell program can be as dynamic as is needed. A working Lisp can be implemented in a few hundred lines of Haskell.) We even have Data.Dynamic in case one absolutely needs dynamic typing within Haskell. If we want what object-oriented programming has to offer, we’ll build it using existential types (as is done to make Haskell’s exception types hierarchical, with SomeException encompassing all of them) and Template Haskell, and be off to the races. We rarely do, because we almost never need it, and because using so much raw power usually suggests a bad design: a design that won’t compose well or, in more blunt terms, won’t play well with others.
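The SomeException trick mentioned above can be sketched in a few lines. In this illustration (names invented, and Show standing in for the real Exception class from Control.Exception), an existentially quantified wrapper erases the concrete type while keeping the operations we care about:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- An existential wrapper in the style of SomeException. Show stands in for
-- the real Exception class; SomeError and describe are invented names.
data SomeError = forall e. Show e => SomeError e

describe :: SomeError -> String
describe (SomeError e) = show e

-- Heterogeneous "exceptions" in one list, their concrete types erased:
caught :: [SomeError]
caught = [SomeError (404 :: Int), SomeError "disk full", SomeError True]
```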

The difference between games and programming

Every game has rules, but Games (as a concept) has no rules. There’s no single principle that every game must share. There are pure-luck games and pure-skill games; there are competitive games and cooperative games (where players win or lose as a group). There are games without well-defined objective functions. There are even games where some players have objective functions and some don’t, as with Two Rooms and a Boom‘s “Drunk” role. Thus, there isn’t an element of general gameplay that I can single out and say, “That’s bad.” Sometimes, compositional failures and broken strategies are a feature, not a bug. I might not like Magic‘s “mana screw” (most people consider it a design flaw), but I could also argue that the intermittency of deck performance is part of what makes the game addictive (see: variable-schedule reinforcement, and slot machines) and that it’s conceivable that the game wouldn’t have achieved a community of such size had it not featured that trait.

Programming, on the other hand, isn’t a game. Programs exist to do a job, and if they can’t do that job, or if they do that job marginally well but can never be improved because the code is incomprehensible, that’s failure.

In fact, we generally want industrial programs to be as un-game-like as possible. (That is not to say that software architects and game designers can’t learn from each other. They can, but that’s another topic for another time.) The things that make games fun make programs infuriating. Let me give an example: NP-complete problems are those where checking a solution can be done efficiently but finding a solution, even at moderate problem size, is (probably) intractable. Yet NP-complete (and harder) problems often make great games! Generalized Go is at least PSPACE-hard, meaning that it’s (probably) harder than NP-complete, so exhaustive search will most likely never be an option. Microsoft’s addictive puzzle game Minesweeper is NP-complete. Tetris and Sudoku are likewise computationally hard. (Chess is harder to analyze in this way, because computational hardness is defined in terms of asymptotic behavior and there’s no incontrovertibly obvious way to generalize it beyond the standard-issue 8-by-8 board.) It doesn’t have to be that way: human brains are very different from computers, so there’s no solid reason why a game’s NP-completeness (or lack thereof) would bear on its enjoyability to humans, yet the puzzle games that are most successful tend to be the ones that computers find difficult. Games are about challenges like computational difficulty, imperfect information (network partitions), timing-related quirks (“race conditions” in computing), unpredictable agents, unexpected strategic interactions and global effects (e.g. compositional failures), and various other things that make a human social process fun, but often make a computing system dangerously unreliable. We generally want games to have traits that would be intolerable imperfections in any other field of life. Soccer is a sport in which one’s simulated fortunes depend on the interactions between two teams and a tiny ball.
Fantasy role-playing games are about fighting creatures like dragons and beholders and liches that would cause us to shit our pants if we encountered them on the subway because, in real life, even a Level 1 idiot with a 6-inch knife is terrifying.

When we encounter code, we often want to reason about it. While this sounds like a subjective goal, it actually has a formal definition. The bad news: reasoning about arbitrary code is mathematically impossible. Or, more accurately, even the simplest questions (“does it terminate?”, “is this function’s value ever zero?”) about an arbitrary program in any Turing-complete language (as all modern programming languages are) cannot be answered in general. We can write programs for which it is impossible to know what they do, except empirically, and that’s deeply unsatisfying. If we run a program that fails to produce a useful result for 100 years, we still cannot necessarily differentiate between a program that produces a useful result after 100.1 years and one that loops forever.
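A concrete taste of how hard “does it terminate?” can be, even for tiny programs: the function below iterates the classic Collatz rule, and it has terminated on every input anyone has ever tried, yet whether it terminates on all positive integers is a famous open problem (the Collatz conjecture).

```haskell
-- Counts iterations of the Collatz rule (halve if even, 3n+1 if odd) until
-- reaching 1. Whether this terminates for every positive input is an open
-- question, despite the definition being four lines long.
collatzSteps :: Integer -> Integer
collatzSteps 1 = 0
collatzSteps n
  | even n    = 1 + collatzSteps (n `div` 2)
  | otherwise = 1 + collatzSteps (3 * n + 1)
```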

If the bad news is that reasoning about arbitrary code is impossible, the good news is that humans don’t write arbitrary code. We write code to solve specific problems. Out of the entire space of possible working programs on a modern machine, less than 0.000000001 percent (with many more zeros) of possible programs are useful to us. Most syntactically correct programs generate random garbage, and the tiny subspace of “all code” that we actually use is much more well-behaved. We can create simple functions and effects that we understand quite well, and compose them according to rules that are likewise well-behaved, and achieve very high reliability in systems. That’s not how most code is actually written, especially not in the business world, the latter being dominated by emotional deadlines and hasty programming. It is, however, possible to write specific code that isn’t hard to reason about. Reasoning about the code we actually care about is potentially possible. Reasoning about randomly-generated syntactically correct programs is a fool’s errand and mathematically impossible to achieve in all cases, but we’re not likely to need to do that if we’re reading small programs written with a clear intention.

So, we have bad news (reasoning about arbitrary code is formally impossible) and good news (we don’t write “arbitrary code”), but there’s more bad news. As software evolves, and more programmers get involved, all carrying different biases about how to do things, code has a tendency to creep toward “arbitrary code”. The typical 40-year-old legacy program doesn’t have a single author, but tens or hundreds of people who were involved. This is why Edsger Dijkstra declared the goto statement harmful. There’s nothing mathematically or philosophically wrong with it; in fact, computers use it in machine code all the time, because that’s what branching is, from a CPU’s perspective. The issue is the dangerous compositional behavior of goto (you can drop program control into a place where it doesn’t belong and get nonsensical behavior) combined with the tendency of long-lived, multi-developer programs using goto to “spaghettify” and reach a state where the code is incomprehensible, reminiscent of a randomly-generated (or, worse yet, “arbitrary” in the mathematician’s sense) program. When Dijkstra came out against goto, his doing so was as controversial as anything that I might say about the enterprise version of object-oriented programming today, and yet he’s now considered to have been right.

Comefrom 10

Where is this whole argument leading? First, there’s a concept in game design of “dryness”. A game that is dry is abstract, subtle, and generally avoids or limits the role of random chance; while the game may be strategically deep, it doesn’t have immediate thematic appeal. Go is a great game, and it’s also very dry. It has white stones and black stones and a board, but that’s it. No wizards, no teleportation effects, not even castling. You put a stone on the board and it sits there forever (unless its group is surrounded and dies). Go also values control and elegance, as programmers should. We want our programs to be “dry” and boring. We want the problems that we solve to be interesting and complex, but the code itself should be so elegant as to be “obvious”, and elegant, obvious things are (in this way) “boring”. We don’t want a ZombieMaster coming into play (or into the heap) and causing all the Zombies to have different values in otherwise immutable fields. That’s “fun” in a game, where little is at stake and injections of random chance (unless we want a very dry game like Go) are welcome. It’s not something that we want in our programs. The real world will throw complexity and unpredictability at us: nodes in our networks will fail, traffic will spike, and bugs will occur in spite of our best intentions. The goal of our programs should be to manage that, not to create more of it. The real world is so damn chaotic that programming is fun even when we use the simplest, most comprehensible, “driest” tools like immutable records and referentially transparent functions.

So, go forth and write more functions and no more SerpentGeneratorGenerators or VibratorVisitorFactory patterns.

Academia, the Prisoner’s Dilemma, and the fate of Silicon Valley

In 2015, the moral and cultural failure of American academia is viewed as a fait accompli. The job market for professors is terrible and will remain so. The academy has sold out two generations already, and shows no sign of changing course. At this point, the most prominent function of academia (as far as the social mainstream is concerned) isn’t to educate people but to sort them so the corporate world knows who to hire. For our society, this loss of academia is a catastrophe. Academia has its faults, but it’s too important for us to just let it die.

To me, the self-inflicted death of academia underscores the importance of social skills. Now, I’m one of those people who came up late in terms of social interaction. I didn’t prioritize it when I was younger. I focused more on knowledge and demonstration of intelligence than on building up my social abilities. I was a nerd, and I’m sure that many of my readers can relate to that. What I’ve learned, as an adult, is that social skills matter. (Well, duh?) If you look at the impaired state that academia has found itself in, you see how much they matter.

I’m not talking about manipulative social skills, nor about becoming popular. That stuff helps an individual in zero-sum games, but it doesn’t benefit the collective or society at large. What really matters is a certain organizational (or, to use a term I’ll define later, coordinative) subset of social skills that, sadly, isn’t valued by people like academics or software engineers, and both categories suffer for it.

How did academia melt down? And why is it reasonable to argue that academics are themselves at fault? To be clear, I don’t think that the current generation of dominant academics is to blame. I’d say that academia’s original sin is the tenure system. To be fair, I understand why tenure is valuable. At heart, it’s a good idea: academics shouldn’t lose their jobs (and, in a reputation-obsessed industry, such a loss often ends their careers) because their work pulls them in a direction disfavored by shifting political winds. The problem is that tenure allowed the dominant, entrenched academics to adopt an attitude (research über alles) that hurt the young, especially in the humanities. Academic research is genuinely useful, whether we’re talking about particle physics or medieval history. It has value, and far more value than society believes it has. The problem? During the favorable climate of the Cold War, a generation of academics decided that research was the only part of the job that mattered, and that teaching was grunt work to be handed off to graduate students or minimized. Eventually, we ended up with a system that presumed that academics were mainly interested in research, and that therefore devalued teaching in the evaluation of academics, so that even the young, rising academics (graduate students and pre-tenure professors) who might not share this attitude still had to act according to it, because the “real work” that determined their careers was research.

The sciences could get away with the “research über alles” attitude, because intelligent people understand that scientific research is important and worth paying for. If someone blew off Calculus II but advanced the state of nuclear physics, that was tolerated. The humanities? Well, I’d argue that the entire point of humanities departments is the transmission of culture: teaching and outreach. So, while the science departments could get away with a certain attitude toward their teaching and research and the relative importance of each (a “1000x” researcher really is worth his keep even if he’s a terrible teacher), there was no possible way for humanities departments to pull it off.

To be fair, not every academic individually feels negatively about teaching. Many understand its importance, find it upsetting that teaching is so undervalued, and wish that it were otherwise, but they’re stuck in a system where the only thing that matters, from a career perspective, is where they can get their papers published. And this is the crime of tenure: the young who are trying to enter academia are suffering for the sins of their (tenured, safe) predecessors.

Society responded to the negative attitude taken toward teaching. The thinking was: if professors are so willing to treat teaching as commodity grunt work, maybe they’re right and it is commodity grunt work. Then, maybe we should have 300 students in a class and we should replace these solidly middle-class professorships with adjunct positions. It’s worth pointing out that adjunct teaching jobs were never intended to be career jobs for academics. The purpose of adjunct teaching positions was to allow experienced non-academic practitioners to promote their professional field and to share experience. (The low salaries reflect this. These jobs were intended for successful, wealthy professionals for whom the pay was a non-concern.) They were never intended to facilitate the creation of an academic underclass. But, with academia in such a degraded state, they’re now filled with people who intended to be career academics.

Academia’s devolution is a textbook case of a prisoner’s dilemma. The individual’s best career option is to put 100% of his focus on research, and to do the bare minimum when it comes to teaching. Yet if every academic does that, academia becomes increasingly disliked and irrelevant, and the academic job market gets even worse for the next cohort. The health of the academy requires a society in which the decision-makers are educated and cultured, which we don’t have. People won’t continue to pay for things that seem unimportant to them, because they’ve never been taught them. So, in a world where even most Silicon Valley billionaires can’t name seven of Shakespeare’s plays and many leading politicians couldn’t even spell the playwright’s name, what should we expect other than academia’s devolution?

Academia still exists, but in an emasculated form that plays by the rules of the corporate mainstream. Combining this with the loss of vision and long-term thinking in the corporate world (the “next quarter” affliction) we have a bad result for academia and society as a whole. Those first academics who created the “research über alles” culture doomed their young to a public that doesn’t understand their value, declining public funding, adjunct hell and second and third post-docs. With the job market in tatters, professors became increasingly beholden to corporations and governments for grant money, and intellectual conformism increased.

I am, on a high level, on the side of the academics. There should be more jobs for them, and they should get more respect, and they’re suffering for an attitude that was copped by their privileged antecedents in a different time, with different rules. A tenured professor in the 1970s had a certain cozy life that might have left him feeling entitled to blow off his teaching duties. He could throw 200 students into an auditorium, show up 10 minutes late, make it obvious that he felt he had better things to do than to teach undergraduates… and it really didn’t matter to him that one of those students was a future state senator who’d defund his university 40 years later. In 2015, hasty teaching is more of an effect of desperation than arrogance, so I don’t hold it against the individual academic. I also believe that it is better to fix academia than to write it off. What exists that can replace it? I don’t see any alternatives. And these colleges and universities (at least, the top 100 or so most prestigious ones) aren’t going to go away– they’re too rich, and Corporate America is too stingy to train or to sort people– so we might as well make them more useful.

The need for coordinative action

Individuals cannot beat a Prisoner’s Dilemma. Coordination and trust are required in order to get a positive outcome. Plenty of academics would love to put more work into their teaching, and into community outreach and other activities that can increase the relevance and value assigned to their work, but they don’t feel like they’ll be able to compete with those who put a 100% focus on research and publication (regardless of the quality of the research, because getting published is what matters). And they’re probably right. They’re in a broken system, and they know it, but opposing it is individually so damaging, and the job market is so competitive, that almost no one can do anything but the individually beneficial action.

Academic teaching suffers from the current state of affairs, but the quality of research is impaired as well. It might have made sense, for individual benefit, for a tenured academic in the 1970s to blow off teaching. But this, as I’ve discussed, only led society to undervalue what was supposed to be taught. The state of academia has become so bad that researchers spend an ungodly amount of time begging for money. Professors spend so much time on fundraising that many of them no longer perform research themselves; they’ve become professional managers who raise money and take credit for their graduate students’ work. To be truthful, I don’t think that this dynamic is malicious on the professors’ part. It’s pretty much impossible to put yourself through the degrading task of raising money and to do creative work at the same time. It’s not that they want to step back and have graduate students do the hard work; it’s that most of them can’t, due to external circumstances that they’d gladly be rid of.

If “professors” were a bloc that could be ascribed a meaningful will, it’s possible that this whole process wouldn’t have happened. If they’d perceived that devaluing teaching in the 1970s would lead to an imploded job market and funding climate two decades later, perhaps they wouldn’t have made the decisions that they did. Teach now, or beg later. Given that pair of choices, I’ll teach now. Who wouldn’t? In fact, I’m sure that many academics would love to put all the time and emotional energy wasted on fundraising into their teaching, instead, if it would solve the money problem now instead of 30 years from now. But the tenure system allowed a senior generation of academics to run up a social debt and hand their juniors the bill, and academia’s stuck in a shitty situation that it can’t work its way out of. So what can be done about it?

Coordinative vs. manipulative social skills

It’s well understood that academics have poor social skills. By “well understood”, I don’t mean that it’s necessarily true, but that it’s the prevailing stereotype. Do academics lack social skills? In order to answer this question, I’m going to split “social skills” into three categories. (There are certainly more, and these categories aren’t necessarily mutually exclusive.) The categories are:

  • interpersonal: the ability to get along with others, be well-liked, make and keep friends. This is what most people think of when they judge another person’s “social skills”.
  • coordinative: the ability to resolve conflicts and direct a large group of people toward a shared interest.
  • manipulative: the ability to exploit others’ emotions and get them to unwittingly do one’s dirty work.

How do academics stack up in each category? I think that, in terms of interpersonal social skills, academics follow the standard trajectory of highly intelligent people: severe social difficulty when young that is worst in the late teens, and resolves (mostly) in the mid- to late 20s. Why is this so common a pattern? There’s a lot that I could say about it. (For example, I suspect that the social awkwardness of highly intelligent people is more likely to be a subclinical analogue of a bipolar spectrum disorder than a subclinical variety of autism/Asperger’s.) Mainly, it’s the 20% Time (named in honor of Google) Effect. That 10 or 20 percent social deficit (whether you attribute it to altered consciousness, via a subclinical bipolar or autism-spectrum disorder, or whether you attribute it to just having other interests) that is typical in highly intelligent people is catastrophic in adolescence but a non-issue in adulthood. A 20-year-old whose social maturity is that of a 17-year-old is a fuckup; a 40-year-old with the social maturity of a 34-year-old would fit in just fine. Thus, I think that, by the time they’re on the tenure track (age 27-30+) most professors are relatively normal when it comes to interpersonal social abilities. They’re able to marry, start families, hold down jobs, and create their own social circles. While it’s possible that an individual-level lack of interpersonal ability (microstate) is the current cause for the continuing dreadful macrostate that academia is in, I doubt it.

What about manipulative social skills? Interpersonal skills probably follow a bell curve, whereas manipulative social skill seems to have a binary distribution: you have civilians, who lack them completely, and you have psychopaths, who are murderously good at turning others into shrapnel. Psychopaths exist, as everywhere, in academia, and they are probably not appreciably less or more common than in other industries. Since academia’s failure is the result of a war waged on it by external forces (politicians devaluing and defunding it, and corporations turning it toward their own coarser purposes) I think it’s unlikely that academia is suffering from an excess of psychopaths within its walls.

What academia is missing is coordinative social skill. It has been more than 30 years since academia decided to sell out its young, and the ivory tower has not managed to fix its horrendous situation and reverse the decline of its relevance. Academia has the talent, and it has the people, but it doesn’t have what it takes to get academics working together to fight for their cause, and to reward the outreach activities (and especially teaching) that will be necessary if academia wants to be treated as relevant, ever again.

I think I can attribute this lack of coordinative social skill to at least two sources. The first is an artifact of having poor interpersonal skills in adolescence, which is when coordinative skills are typically learned. This can be overcome, even in middle or late adulthood, but it generally requires that a person reach out of his comfort zone. Interpersonal social skills are necessary for basic survival, but coordinative social skills are only mandatory for people who want to effect change or lead others, and not everyone wants that. So, one would expect that some number of people who were bad-to-mediocre, interpersonally, in high school and college, would maintain a lasting deficit in coordinative social skill– and be perfectly fine with that.

The second is social isolation. Academia is cult-like. It’s assumed that the top 5% of undergraduate students will go on to graduate school. Except for the outlier case in which one is recruited for a high-level role at the next Facebook, smart undergraduate students are expected to go straight into graduate school. Then, to leave graduate school (which about half do, before the PhD) is seen as a mark of failure. Few students actually fail out for lack of ability (if you’re smart enough to get in, you can probably do the work) but a much larger number lose motivation and give up. Leaving after the PhD for, say, finance is also viewed as distasteful. Moreover, while it’s possible to resume a graduate program after a leave of absence or to join a graduate program after a couple years of post-college life, those who leave the academic track at any time after the PhD are seen as damaged goods, and unhireable in the academic job market. They’ve committed a cardinal sin: they left. (“How could they?”) Those who leave academia are regarded as apostates, and people outside of academia are seen as intellectual lightweights. With an attitude like that, social isolation is expected. People who have started businesses and formed unions and organized communities could help academics get out of their self-created sand trap of irrelevance. The problem is that the ivory tower has such a culture of arrogance that it will never listen to such people.

Seem familiar?

Now, we focus on Silicon Valley and the VC virus that’s been infecting the software industry. If we view the future as linear, Silicon Valley seems to be headed not for irrelevance or failure but for the worst kind of success. Of course, history isn’t linear and no one can predict its future. I know what I want to happen. As for what will, and when? Some people thought I made a fool of myself when I challenged a certain bloviating, spoiled asshat to a rap duel– few people caught on to the logic of what I was doing– and I’m not going to risk making a fool of myself, again, by making predictions.

Software engineers, like academics, have a dreadful lack of coordinative social skill. Not only that, but the Silicon Valley system, as it currently exists, requires exactly that deficit. If software engineers had the collective will to fight for themselves, they’d be far better treated and be running the place, and it would be a much better world overall, but the current VC kingmakers wouldn’t be happy. Unfortunately, the Silicon Valley elite has done a great job of dividing makers on all sorts of issues: gender, programming languages, the H1-B program, and so on… all the while, the well-connected investors and their shitty paradrop executive friends make tons of money while engineers get abused– and respond by abusing each other over bike-shed debates like code indentation. When someone with no qualifications other than going to high school with a lead investor is getting a $400k-per-year VP/Eng job and 1% of the equity, and engineers are getting 0.02%, who fucking cares about tabs versus spaces?

Is Silicon Valley headed down the same road as academia? I don’t know. The analogue of “research über alles” seems to be a strange attitude that mixes young male quixotry, open-source obsession– and I think that open-source software is a good thing, but less than 5% of software engineers will ever be paid to work on it, and not everyone without a Github profile is a loser– and crass commercialism couched in allusions to mythical creatures. (“Billion-dollar company” sounds bureaucratic, old, and lame; “unicorn” sounds… well, incredibly fucking immature if you ask me, but I’m not the target market.) If that culture seems at odds with itself, that’s an accurate perception. It’s intentionally muddled, self-contradictory, and needlessly divisive. The culture of Silicon Valley engineering is one created by the colonial overseers, and not by the engineers. Programmers never liked open-plan offices and still don’t like them, and “Scrum” (at least, Scrum in practice) is just a way to make micromanagement sound “youthy”.

Academia in the 1970s faced no external force that tried to ruin it or (as has been done with Silicon Valley) turn it into an emasculated colonial outpost for the mainstream business elite. Academia created its own destruction, and the tenure system allowed it by enabling the arrogance of the established (which ruined the job prospects of the next generation). It was, I would argue, purely a lack of coordinative social skill, brought on by a cult-like social isolation, that did this. Silicon Valley, by contrast, was destroyed (and so far, the destruction is moral but not yet financial, insofar as money is still being made, just by the wrong people) intentionally. We only need examine one dimension of social skill– a lack of coordinative skill– to understand academia’s decline. In Silicon Valley, there are two at play: the lack of coordinative social skill among the makers who actually build things, and the manipulative social skills deployed by psychopaths, brought in by the mainstream business culture, to keep the makers divided over minutiae and petty drama. What this means, I am just starting to figure out.

Academia is a closed system and largely wants to be so. Professors, in general, want to be isolated from the ugliness of the mainstream corporate world. Otherwise, they’d be in it, making three times as much money on half the effort. However, the character of Silicon Valley makers (as opposed to the colonial overseers) tends to be ex-academic. Most of us makers are people who were attracted to science and discovery and the concept of a “life of the mind”, but left the academy upon realizing its general irrelevance and decline. As ex-academics, we simultaneously have an attitude of rebellion against it and a nostalgic attraction to its better traits, including its “coziness”. What I’ve realized is that the colonial overseers of Silicon Valley are very adept at exploiting this. Take the infantilizing Google Culture, which provides ball pits and free massage (one per year) but has closed allocation and Enron-style performance reviews. Google, knowing that many of its best employees are ex-academics– I consider grad-school dropouts to be ex-academic– wants to create the cult-like, superficially cozy world that enables people to stop asking the hard questions or putting themselves outside of their comfort zones (which seems to be a necessary prerequisite for developing or deploying coordinative social skills).

In contrast to academia, Silicon Valley makers don’t want to be in a closed system. Most of these engineers want to have a large impact on the world, but a corporation can easily hack them (regardless of the value of the work they’re actually doing) by simply telling them that they’re having an effect on “millions of users”. This enables the corporation to get a lot of grunt work done by people who’d otherwise demand far more respect and compensation. This ruse is similar to a cult that tells its members that large donations will “send out positive energy waves” and cure cancer. It can be appealing (and, again, cozy) to hand one’s own moral decision-making over to an organization, but it rarely turns out well.


I’ve already said that I’m not going to try to predict the future, because while there is finitude in foolishness, it’s very hard to predict exactly when a system runs out of greater fools. I don’t think that anyone can do that reliably. What I will do is identify points of strain. First, I don’t think that the Silicon Valley model is robust or sustainable. Once its software engineers realize on a deep level just how stacked the odds are against them– that they’re not going to be CEOs inside of 3 years– it’s likely either to collapse or to be forced to evolve into something that has an entirely different class of people in charge of it.

Right now, Silicon Valley prevents engineer awakening through aggressive age discrimination. Now, ageism is yet another trait of software culture that comes entirely from the colonial overseers. Programmers don’t think of their elders as somehow defective. Rather, we venerate them. We love taking opportunities to learn from them. No decent programmer seriously believes that our more experienced counterparts are somehow “not with it”. Sure, they’re more expensive, but they’re also fucking worth it. Why does the investor class need such a culture of ageism to exist? It’s simple. If there were too many 50-year-old engineers– who, despite being highly talented, never became “unicorn” CEOs, either because of a lack of interest or because CEO slots are still quite rare– kicking around the Valley, then the young’uns would start to realize that they, too, were unlikely to become billionaires from their startup jobs, and that the odds didn’t justify the 90-hour weeks. Age discrimination is about hiding the 50th-percentile future from the quixotic young males that Silicon Valley depends on for its grunt work.

The problem, of course, with such an ageist culture is that it tends to produce bad technology. If there aren’t senior programmers around to mentor the juniors and review the code, and if there’s a deadline culture (which is usually the case) then the result will be a brittle product, because the code quality will be so poor. Business people tend to assume that this is fixable later on, but often it’s not. First, a lot of software is totaled, by which I mean it would take more time and effort to fix it than to rewrite it from scratch. Of course, the latter option (even when it is the sensible one) is so politically hairy as to be impractical. What often happens, when a total rewrite (embarrassing the original architects) is called for, is that the team that built the original system throws so much political firepower (justification requests, legacy requirements that the new system must obey, morale sabotage) at it that the new-system team is under even tighter deadlines and suffers from more communication failures than the original team did. The likely result is that the new system won’t be any good either. As for maintaining totaled software for as long as it lives, those become the projects that no one wants to do. Most companies toss legacy maintenance to their least successful engineers, who are rarely people with the skills to improve it. With these approaches blocked, external consultants might be hired. The problem there is that, while some of these consultants are worth ten times their hourly rate, many expensive software consultants are no good at all. Worse yet, business people are horrible at judging external consultants, while the people who have the ability to judge them (senior engineers) have a political stake and therefore, in evaluating and selecting external code fixers, will be affected by the political pressures on them.

The sum result of all of this is that many technology companies built under the VC model are extremely brittle and “technical debt” is often impossible to repay. In fact, “technical debt” is one of the worst metaphors I’ve encountered in this field. Debt has a known interest rate that is usually between 0 and 30 percent per year; technical debt has a usurious and unpredictable interest rate.
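To make the metaphor concrete, here is a toy sketch (entirely my own illustration; all the rates are invented) of the difference between a loan with a known rate and a “debt” whose effective rate is unknowable in advance:

```python
import random

def compound(principal, rate, years):
    """Fixed-rate debt: the cost of carrying it grows predictably."""
    for _ in range(years):
        principal *= 1 + rate
    return principal

def tech_debt(principal, years, seed=0):
    """'Technical debt': the effective rate in any given year is unknown
    in advance and can spike (say, when the only person who understood
    the module leaves). The rates below are invented for illustration."""
    rng = random.Random(seed)
    for _ in range(years):
        rate = rng.choice([0.05, 0.30, 0.80, 2.00])  # usurious and unpredictable
        principal *= 1 + rate
    return principal

# 100 units of deferred "fix-it-now" effort, carried for 5 years:
print(round(compound(100, 0.10, 5)))  # a 10% loan: about 161
print(round(tech_debt(100, 5)))       # the unpredictable kind: often far more
```

The point of the sketch is only that the second function’s output cannot be budgeted for, which is exactly what makes the “debt” framing misleading.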

So what are we seeing, as the mainstream business culture completes its colonization of Silicon Valley? We’ve seen makers get marginalized, we’ve seen an ageism that is especially cruel because it takes so many years to become any good at programming, and we’ve seen increasing brittleness of the products and businesses created, due to the colonizers’ willful ignorance of the threat posed by technical debt.

Where is this going? I’m not sure. I think it behooves everyone who is involved in that game, however, to have a plan should that whole mess go into a fiery collapse.

Employees at Google, Yahoo, and Amazon lose nothing if they unionize. Here’s why.

Google, Yahoo, and Amazon have one thing in common with, probably, the majority of large, ethically-challenged software companies. They use stack-ranking, also known as top-grading, also known as rank-and-yank. By top-level mandate, some pre-ordained percentage of employees must fail. A much larger contingent of employees face the stigma of being labelled below-average or average, which not only blocks promotion but makes internal mobility difficult. Stack ranking is a nasty game that executives play against their own employees, forcing them to stab each other in the back. It ought to be ended. Sadly, software engineers do not seem to have the ability to get it abolished. They largely agree that it’s toxic, but nothing’s been done about it, and nothing will be done about it so long as most software engineers remain apolitical cowards who refuse to fight for themselves.
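The mechanics are simple enough to sketch. Here is a toy model (my own illustration; the bucket names and quota numbers are invented, not any company’s actual policy) of how a forced curve assigns labels regardless of absolute performance:

```python
def stack_rank(scores, quotas=None):
    """Toy forced-curve ranking: a fixed fraction of the team must land
    in each bucket, no matter how well everyone actually performed."""
    quotas = quotas or {"top": 0.20, "bottom": 0.10}  # invented numbers
    ranked = sorted(scores, key=scores.get, reverse=True)
    n = len(ranked)
    n_top = round(n * quotas["top"])
    n_bottom = round(n * quotas["bottom"])
    labels = {}
    for i, name in enumerate(ranked):
        if i < n_top:
            labels[name] = "top"
        elif i >= n - n_bottom:
            labels[name] = "bottom"  # someone must "fail", by mandate
        else:
            labels[name] = "middle"
    return labels

# Even on a uniformly strong team, the bottom bucket is filled by decree:
team = {"ann": 97, "bo": 96, "cy": 95, "di": 94, "ed": 93,
        "fi": 92, "gus": 91, "hal": 90, "ira": 89, "jo": 88}
labels = stack_rank(team)
print(labels["jo"])  # "bottom", despite scoring 88 out of 100
```

Note that nothing in the input forces anyone to be labeled a failure; the quota does that all by itself.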

I’ve spent years studying the question of whether it is good or bad for software engineers in the Valley to unionize. The answer is: it depends. There are different kinds of unions, and different situations call for different kinds of collective action. In general, I think the way to go is to create guilds like Hollywood’s actors’ and writers’ guilds, which establish minimum terms of work and provide representation and support in case of unfair treatment by management, without interfering with meritocracy through seniority systems or compensation ceilings. Stack ranking, binding mandatory arbitration clauses, non-competes, and the mandatory inclusion of performance reviews in a candidate’s transfer packet for internal mobility could be abolished if unions were brought in. So what stands to be lost? A couple hundred dollars per year in dues? Compared to the regular abuse that software engineers suffer in stack-ranked companies, that has got to be the cheapest insurance plan that there is.

To be clear, I’m not arguing that every software company should be unionized. I don’t think, for example, that a 7-person startup needs to bring in a union. Nor is it entirely about size. It’s about the relationship between the workers and management. The major objections to unionization come down to the claim that they commoditize labor; what once could have had warm-fuzzy associations about creative exertion and love of the work is now something where people are disallowed from doing it more than 10 hours per day without overtime pay. However, once the executives have decided to commoditize the workers’ labor, what’s lost in bringing in a union? At bulk levels, labor just seems to become a commodity. Perhaps that’s a sad realization to have, and those who wish it were otherwise should consider going independent or starting their own companies. Once a company sees a worker as an atom of “headcount” instead of an individual, or a piece of machinery to be “assigned” to a specific spot in the system, it’s time to call in the unions. Unions generally don’t decelerate the commoditization of labor; instead, they accept it as a fait accompli and try to make sure that the commoditization happens on fair terms for the workers. You want to play stack-ranking, divide-and-conquer, “tough culture” games against our engineers? Fine, but we’re mandating a 6-month minimum severance for those pushed out, retroactively striking all binding mandatory arbitration clauses in employment contracts should any wrongful termination suits occur, offering to pay legal expenses of exiting employees, and (while we’re at it) raising salaries to a minimum of $220,000 per year. Eat it, biscuit-cutters.

If unions come roaring into Silicon Valley, we can expect a massive fight from its established malefactors. And since they can’t win in numbers (engineers outnumber them) they will try to fight culturally, claiming that unions threaten to create an adversarial climate between engineers and management. Sadly, many young engineers will be likely to fall for this line, since they tend to believe that they’re going to be management inside of 30 months. To that, I have two counterpoints. First, unions don’t necessarily create an adversarial climate; they create a negotiatory one. They give engineers a chance to fight back against bad behaviors, and also provide a way for them to negotiate terms that would be embarrassing for the individual to negotiate. For example, no engineer, while he’s negotiating a job offer, can talk about ripping out the binding mandatory arbitration clause (it signals, “I’m considering the possibility, however remote, that I might have to sue you”) or fight against over-broad IP assignments (“I plan on having side projects which won’t directly compete with you, but that may compete for my time, attention and affection”) or non-competes (“I haven’t ruled out the possibility of working for a competing firm”). Right now, the balance of power between employers and employees in Silicon Valley is so demonically horrible that simply insisting on having one’s natural and legal rights makes that prospective employee, in HR terms, a “PITA” risk and that will end the discussion right there. Instead, we need a collective organization that can strike these onerous employment terms for everyone.

When a company’s management plays stack-ranking games against its employees, an adversarial climate between management and labor already exists. Bringing in a union won’t create such an environment; it will only make the one that exists more fair. You absolutely want a union whenever it becomes time to say, “Look, we know that you view our labor as a commodity– we get it, we’re not special snowflakes in your eyes, and we’re fine with that– so let’s talk about setting fair terms of exchange”.

Am I claiming that all of Silicon Valley should be unionized? Perhaps an employer-independent and relatively lightweight union like Hollywood’s actors’ and writers’ guilds would be useful. With the stack-rank companies in particular, however, I think that it’s time to take the discussion even further. While I don’t support absolutely everything that people have come to associate with unions, the threat needs to be there. You want to stack-rank our engineers? Well, then we’re putting in a seniority system and making you unable to fire people without our say-so.

At Google, for example, engineers live in perennial fear of “Perf” and “the Perf Room”. (There actually is no “Perf Room”, so when a Google manager threatens to “take you into the Perf Room” or to “Perf you”, it’s strictly metaphorical. The place doesn’t actually exist, and while the terminology often gets a bit rapey– an employee assigned a sub-3.0 score is said to be “biting the pillow”– all that actually happens is that a number is inserted into a computerized form.) Perf scores, which are often hidden from the employee, follow her forever. They make internal mobility difficult, because even average scores make an engineer less desirable as a transfer candidate than a new hire– why take a 50th- or even 75th-percentile internal hire and risk angering the candidate’s current manager, when you can fill the spot with a politically unentangled external candidate? The whole process exists to deprive the employee of the right to state her own case for her capability, and to represent her performance history on her terms. And it’s the sort of abusive behavior that will never end until the executives of the stack-ranked companies are opposed with collective action. It’s time to take them, and their shitty behaviors, into the Perf Room for good.

Anger’s paradoxical value, and the closing of the Middle Path in Silicon Valley


Anger is a strange emotion. I’ve made no efforts to conceal that I have a lot of it, and toward such vile targets (such as those who have destroyed the culture of Silicon Valley and, by extension due to that region’s assigned status of leadership, the technology industry) that most would call it “justified”. Anger is, however, one of those emotions that humans prefer to ignore. It produces (in roughly increasing order of severity) foul language, outbursts, threats, retaliations and destroyed relationships, and frank physical violence. The fruits of anger are disliked, and for good reason: most of those byproducts are horrible. Most anger is, additionally, a passing and somewhat errant emotion; the target of the anger might not be deserving of violence, retaliation, or even insults. In fact, some anger is completely unjustified; so it’s best not to act on anger until we’ve had a chance to process and examine it. The bad kind of anger tends to be short-lived but, if humans acted on it when it emerged, we wouldn’t have made it this far as a species. Still, most of us agree that much anger, especially the long-lived kind that doesn’t go away, is justified in some moral sense. To be angry, three years later, at an incompetent driver is deemed silly. To be angry over a traumatic incident or a life-altering injustice is held as understandable.

However, is justified anger good? The answer, I would say, is paradoxical. For the individual, anger isn’t good. I’m not saying that the emotion should be ignored or “bottled in”. It should be acknowledged and allowed to pass. Holding on to it forever is, however, counterproductive. It’s stressful and unpleasant and sometimes harmful. As Buddha said, “holding on to anger is like grasping a hot coal with the intent of throwing it at someone else; you are the one who gets burned.” Anger, held too long, is a toxic and dreadful emotion that seems to be devoid of value– to the individual. This isn’t news. So what’s the issue? Why am I interested in talking about it? Because anger is extremely useful for the human collective.

Only anger, it often seems, can muster the force that is needed to overthrow evil. Let’s be honest: the problem has its act together. We aren’t going to overthrow the global corporate elite by beaming love waves at them. No one is going to liberate the technology industry from its Damaso overlords with a message of hope and joy alone. We can probably get them to vacate without forcibly removing them, but it’s not going to happen without a threatening storm headed their way. Any solution to any social problem will involve some people getting hurt, if only because the people who run the world now are willing to hurt other people, by the millions, in order to protect their positions.

Anger is, I’m afraid, the emotion that spreads most quickly throughout a group, and sometimes the only thing that can hold it together. Of course, this can be a force for good or for evil. Many of history’s most noted ragemongers were people who did bad things to the world. I would, however, say that this fact supports the argument that, if good people shy away from the job of spreading indignation and resentment, then only evil people will be doing it. For me, that’s an upsetting realization.

Whether we’re talking about “yellow journalism” or bloggers or anyone else who fights for social change, spreading anger is a major part of what they do. It’s something that I do, often consciously. The reason, when I discuss Silicon Valley’s cultural problems, for me to mention Evan Spiegel or Lucas Duplan (for the uninitiated, they are two well-connected, rich, unlikeable and unqualified people who were made startup founders) is that they inspire resentment and hatred. Dry discussions of systemic problems don’t lead to social change; they lead to more dry debate, and that debate leads to more debate, but nothing ever gets done until someone “condescends” to talk to the public and get them pissed off. For that purpose, a Joffrey figure like Evan Spiegel is just much “catchier”. This is why founder-quality issues like Duplan and Spiegel, and “Google Buses”, are a better vector of attack against Sand Hill Road than the deeper technical reasons (e.g. principal-agent problems that take kilowords to explain in detail) for that ecosystem’s moral failure. It’s hard to get people riled up about investor collusion, and much easier to point to this picture of Lucas Duplan.

This current incarnation of Silicon Valley needs to be pushed aside and discarded, because it’s hurting the world. The whole ecosystem– the shitty corporate cultures with the age discrimination and open-plan fetishism, the juvenile talk about “unicorns” because it’s a cute way of covering up the reality of an industry that only cares about growth for its own sake, the insane short-term greed, the utter lack of concern for ethics, the investor collusion, and the founder-quality issues– needs to be burned to the ground so we can build something new. And I have enough talent that, while I can’t change anything on my own, I can contribute. When I (unintentionally) revealed the existence of stack-ranking at Google to the public, I damaged that company’s reputation. The degree to which I did so is probably not significant, relative to its daily swings on the stock market, but with enough people in the good fight, victory is possible.

Here’s what I don’t like. Clearly, anger is painful for the person experiencing it. As an individual, I would do better to let it pass. I can personally deal with the pain of it, but it leads me to question whether there is social value in disseminating it. And yet, without people like me spreading and multiplying this justified anger at the moral failure of Silicon Valley, no change will occur and evil will win. This is what makes anger paradoxical. As an individual, the prudent thing to do is to let it go. For society, moral justice demands that it spread and amplify. Even if we accept that collective anger can just as easily be a force for bad (and it can) we still have to confront the fact that if good people decline to spread and multiply anger against evil, then the sheer power of collective anger will be wielded only by evil. We need, as a countervailing force, for the good people to comprehend and direct the force of collective anger.

The Middle Path

Why do I detest Silicon Valley? I don’t live there, and I have better options than to take a pay cut in exchange for 0.03% of a post-A startup, so why does that cesspool matter to me at this point? In large part, it’s because the Bay Area wasn’t always a cesspool. It used to be run by lifelong engineers for engineers, and now it’s some shitty outpost of the mainstream business culture, and I find that devolution to be deplorable. The Valley used to be a haven for nerds (here, meaning people who value intellectual fulfillment more than maximizing their wealth and social status) and now it’s become a haven for MBA-culture rejects who go West to take advantage of the nerds. It’s a joke, it’s awful, and it’s very easy to get angry at it. But why? Why is it worth anger? Shouldn’t we divest ourselves, emotionally, and be content to let that cesspool implode?

I don’t care about Silicon Valley, meaning the Bay Area, but I do care about the future of the technology industry. Technology is just too important to the future of humanity for us to ignore it, or to surrender it to barbarians. The technology industry used to represent the Middle Path between the two undesirable options of (a) wholesale subordination to the existing elite and (b) violent revolt. It was founded by people who neither wanted to acquiesce to the Establishment nor to overthrow it with physical force. They just wanted to build cool things, to indulge their intellectual curiosities, and possibly to outperform an existing oligarchy and therefore refute its claims of meritocracy.

Unfortunately, Silicon Valley became a victim of its own success. It outperformed the Establishment and so the Establishment, rather than declining gracefully into lesser relevance, found a way to colonize it through the good-old-boy network of Bay Area venture capital. To be fair, the natives allowed themselves to be conquered. It wasn’t hard for the invaders to do, because software engineers have such a broken tribal identity and such a culture of foolish individualism that divide-and-conquer tactics worked easily. (For a modern example that illustrates how fucked we are as a tribe, consider “Agile”/Scrum, which has evolved into a system where programmers rat each other out to management for free.) Programmers are, not surprisingly, prone to a bit of cerebral narcissism, and the result of this is that they lash out with more anger at unskilled programmers and bad code than against the managerial forces (lack of interest in training, deadline culture) that created the bad programmers and awful legacy code in the first place. It’s remarkably easy for a businessman to turn a group of programmers against itself, so much so that any collective action (either a labor union, or professionalization) by programmers remains a pipe dream. The result is a culture of individualism and arrogance where almost every programmer believes that most of his colleagues are mouth-breathing idiots (and, to be fair, most of them are severely undertrained). There’s a joke in Silicon Valley about “flat” software teams where every programmer considers himself to be the leader, but it’s not entirely a joke. In the typical venture-funded startup, the engineers each believe that they’ll have investor contact within 6 months and founder/CEO status inside of 3 years. (They wouldn’t throw down 90-hour weeks if it were otherwise.)
By the time programmers are old enough to recognize how rarely that happens (and how even more rarely people actually get rich in this game, unless they were born into the contacts that put them on the VC side or can have them inserted in high positions in portfolio companies, allowing diversification) they’re judged as being too old to program in the Valley. That is too convenient for those in power to be written off as coincidence.

Sand Hill Road needs to be taken down because it has blocked the Middle Path that used to exist in Silicon Valley, and that should exist, if not in that location, in the technology industry somewhere. The old Establishment would have its territory chipped away (harmlessly, most often, because large corporations don’t die unless they do it to themselves) by technology startups, and it was content to let this happen because, so often, the territory it lost was what it didn’t understand well enough to care about. The new Establishment, on Sand Hill Road, is harder to outperform because, if it sees you as a threat, it will fund your competitors, ruin your reputation, and render your company unable to function.

I don’t believe that Silicon Valley’s closing of the Middle Path will be permanent, and it’s best for all of us that it not be. I am obviously not in favor of subordination to the global elite. They are the enemy, and something will have to be done about, or at least around, them in order to reverse the corruption and organizational decay that they’ve inflicted on the world. On the other hand, I view violent revolution as an absolute last resort. Violence is preferable to subordination and defeat, but nonetheless it is usually the absolute worst possible way to achieve something. Disliking the extremes, I want the moderate approach: effective opposition to the enemies of progress, without the violence that so easily leads to chaos and the harm of innocents. So when the mainstream business elite enters a space (like technology) in which it does not belong, colonizes it, and thereby blocks the Middle Path, it’s a scary proposition. Of course I cannot predict the future, but I can perceive risks; and the closing of the Middle Path represents too much of a risk for us to allow it. If the Middle Path has closed in venture-funded technology in the Valley, it’s time to move on to something else.

Do I think that humanity is doomed, simply because a man-child oligarchy in one geographical area (“Silicon Valley”) has closed the Middle Path when it existed in their location? Of course not. Among those in the know, the VC-engorged monstrosity that now exists in the Valley has ceased to inspire, or even to lead. It seems, then, that it is time to move past it, and to figure out where to open a new Middle Path.

If getting people to do this– to recognize the importance of doing this– requires a bit of emotional appeal along a vector such as anger or resentment, I’ll be around and I know how to pull it off.

Technology is run by the wrong people

I have a confession to make: I have a strong tendency to “jump”, emotionally and intellectually, to the biggest problem that I see at a given time. I’ve tempered it with age, because it’s often counterproductive. In organizational or corporate life, solving the bigger problem, or jumping to the biggest problem that you have the ability to solve, often gets you fired. Most organizations demand that a person work on artificial small problems in a years-long evaluative period before he gets to solve the important problems, and I’ve personally never had the patience to play that game (and it is a political game, and recognizing it as such has been detrimental, since it would be harder to resent it had I not realized what it was) at all, much less to a win. The people who jump to the biggest problem are received as insubordinate and unreliable, not because they actually are unreliable, but because they reliably do something that those without vision tend both not to understand, and to dislike. There are too many negative things (whether there is truth or value in them, or not) that can be said, in the corporate theater, about a person who immediately jumps to the biggest problem– she only wants to work on the fun stuff, she’s over-focused and bad at multi-tasking, she’s pushy and obsessive, she wants the boss’s boss’s job and isn’t good at hiding it– and it’s only a matter of time before many of them are actually said.

Organizations need people like this, if they wish to survive, and they know this; but they also don’t believe that they need very many of them. Worse yet, corporate consistency mandates that the people formally trusted (i.e. those who negotiated for explicitly-declared trust in the form of job titles) be the ones who are allowed to do that sort of work. The rest, should they self-promote to a more important task than what they’ve been assigned, are considered to be breaking rank and will usually be fired. People dislike “fixers”, especially when their work is what’s being fixed. It’s probably no surprise, then, that modern organizations, over time, become full of problems that most people can see but no one has the courage to fix.

Let’s take this impulse– attack the bigger problem or, better yet, find an even bigger one– and discuss the technology industry. Let’s jump away from debates about tools and get to the big problems. What is the biggest problem with it? Tabs versus spaces? Python versus Ruby? East Coast versus West versus Midwest? Hardly. Don’t get me wrong: I enjoy debating the merits and drawbacks of various programming languages. I mean, I may not like the language Spoo as much as my favored tools, but I’d never suggest that the people promoting Spoo are anything but intelligent people with the best intentions. We may disagree, but in good faith. Except in security, discussion of bad-faith players and their activity is rare. It’s almost taboo to discuss that they exist. In fact, Hacker News now formally censors “negativity”, which includes the assertion or even the suggestion that there are many bad actors in the technology world, especially in Silicon Valley and even more especially at the top. But there are. There is a level of power, in Silicon Valley, at which malevolent players become more common than good people, and it’s people at that level of power who call the most important shots. If we ignore this, we’re missing the fucking point of everything.

There is room for this programming language and that one. That is a matter of taste and opinion, and I have a stance (static as much as possible) but there are people of equal or superior intellectual and moral quality who disagree with me. There is room for functional programming as well as imperative programming. Where there is no nuance (unless one is a syphilitic psychopath) is on this statement: technology, in general, is run by the wrong people. While this claim (“wrong people”) is technically subjective only insofar as color is technically subjective, we can treat it as a working fact, just as the subjectivity of color does not excuse a person who runs a red light on the argument that he perceived it as green. Worse, the technology industry is run by bad people, and by bad, I don’t mean that they are merely bad at their jobs; I mean that they are unethical, culturally malignant, and belong in jail.

Why is this? And what does it mean? Before answering that, it’s important to understand what kind of bad people have managed to push themselves into the top ranks of the technology industry.

Sadly, most of the people who comprise the (rising, and justified) anti-technology contingent don’t make a distinction between me and the actual Bad Guys. To them, the $140k/year engineers and the $400k/year VP/NTWTFKs (Non-Technical Who-The-Fuck-Knows) getting handed sinecures in other peoples’ companies by their friends on Sand Hill Road are the same crowd. They perceive classism and injustice, and they’re right, but they’re oblivious to the gap between the upper-working-class engineers who create technological value (but make few decisions) and the actually upper class pedigree-mongers who capture said value (and make most of the decisions, often badly) and who are at risk of running society into the ground. (If you think this is an exaggeration, look at house prices in the Bay Area. If these fuckers can’t figure out how to solve that problem, then who in the hell can trust them to run anything bigger than techie cantrips?) Why do the anti-technology protestors fail to recognize their true enemies, and therefore lose an opportunity to forge an alliance with the true technologists whose interests have also been trampled by the software industry’s corporate elite? Because we, meaning the engineers and true technologists, have let them.

As I see it, the economy of the Bay Area (and, increasingly, the U.S.) has three “estates”. In the First Estate are the Sand Hill Road business people. They don’t give a damn about technology for its own sake, and they’re an offshoot of the mainstream business elite. After failing in private equity or proving themselves not to be smart enough to do statistical arbitrage, they’re sent West to manage nerds, and while they’re poor in comparison to the hedge-fund crowd, they’re paid immensely by normal-people (or even Second Estate) standards. As in the acronym “FILTH” (Failed In London, Try Hong Kong), they are colonial overseers who weren’t good enough for leadership positions in the colonizing culture (the mainstream business/MBA culture) so they were sent to California to wave their dicks in the air while blustering about “unicorns“. In the Second Estate are the technologists and engineers who actually write the code and build the products; their annual incomes tend to top out around $200,000 to $300,000– not bad at all, but not enough to buy a house in the Bay Area– and becoming a founder (due to lack of “pedigree”, which is a code word for the massive class discrepancy between them and the VCs they need to pitch) is pretty much out of the question. The Third Estate are the people, of average means, who feel disenfranchised as they are priced out of the Bay Area. They (understandably) can’t quite empathize with Second-Estate complaints about the cost of housing and pathetic equity slices, because they actually live on “normal people” (non-programmer, no graduate degree) salaries. As class tensions have built in San Francisco, the First Estate has been exceptionally adept at diverting Third-Estate animosity toward the Second, hence the “Google Bus” controversies. This prevents the Second and Third Estates from realizing that their common enemy is the First Estate, and thereby getting together and doing something about their common problem.

This echoes a common problem in technology companies. If a tech-company CEO in France or Germany tried to institute engineer stack ranking, an effigy would be burned on his own front lawn, his vehicle would be vandalized if not destroyed, and the right thing would happen (i.e., he’d revert the decision) the next day. An admirable trait that the European proletariat has, and that the American one lacks, is an immunity to divide-and-conquer tactics. The actual enemies of the people of San Francisco are the billionaires who believe in stack ranking and the NIMBYs, not 26-year-old schlubs who spend 3 hours per day on a Google bus. Likewise, when software engineers bludgeon each other over Ruby versus Java, they’re missing the greater point. The enemy isn’t “other languages”. It’s the idiot executive who (not understanding technology himself, and taking bad advice from a young sociopath who is good at pretending to understand software) instituted a top-down one-language policy that was never needed in the first place.

Who are the right people to run technology, and why are the current people in charge wrong for the job? Answering the first question is relatively easy. What is technology? It’s the application of acquired knowledge to solve problems. What problems should we be solving? What are the really big problems? Fundamentally, I think that the greatest evil is scarcity. From the time of Gilgamesh to the mid-20th century, human life was dominated by famine, war, slavery, murder, rape and torture. Contrary to myths about “noble savages”, pre-industrial men faced about a 0.5%-per-year chance of death in violent conflict. Aberrations aside, most of the horrible traits that we attribute to “human nature” are actually attributable to human nature under scarcity. What do we know about human nature without scarcity? Honestly, very little. Even the lives of the rich, in 2015, are still dominated by the existence of scarcity (and the need to protect an existence in which it is absent). We don’t have a good idea of what “human nature” is when human life is no longer dominated either by scarcity or the counter-measures (work, sociological ascent) taken to avoid it.

The goal of a technologist is to make everyone rich. Obviously, that won’t happen overnight, and it has to be done in the right way. It’s better to do it with clean energy sources and lab-grown meat than with petroleum and animal death. The earth can’t afford to have people eating like Americans and able to fly thousands of miles per year until certain technological problems are solved (and I, honestly, believe that they can be solved, and aren’t terribly difficult). We have a lot of work to do, and most of us aren’t doing the right work, and it’s hard to blame the individual programmer because there are so few jobs that enable a person to work on fundamental problems. Let’s, however, admit to a fundamental enemy: scarcity. Some might say that death is a fundamental enemy, especially in the Singularitarian crowd. I strongly disagree. Death is an unknown– I look forward to “the other side”, and if I am wrong and there is nothing on the other side, then I will not exist to be bothered by the fact– but I see no reason to despise it. Death will happen to me– even a technological singularity can only procrastinate it for a few billion years– and that is not a bad thing. Scarcity, on the other hand, is pretty fucking awful– far more deserving of “primal enemy” status than death. If scarcity in human life should continue indefinitely, I don’t want technological life extension. Eighty years of a mostly-charmed life in a mostly-shitty world, I can tolerate. Two hundred? Fuck that shit. If we’re not going to make major progress on scarcity in the next fifty years, I’ll be fucking glad to be making my natural exit.

Technologists (and, at this point, I’m speaking more about a mentality and ideology than a profession, because quite a large number of programmers are anti-intellectual fuckheads just as bad as the colonial officers who employ them) are humanity’s last resort in the battle against scarcity. Scarcity has been the norm, along with the moral corrosion that comes with it, for most of human history, and if we don’t kill it soon, we’ll destroy ourselves. We learned this in the first half of the 20th century. Actual scarcity was on the wane even then, because the Industrial Revolution worked; but old, tribalistic ideas– ideas from a time when scarcity was the rule– caused a series of horrendous wars and the deployment of one of the most destructive weapons ever conceived. We ought to strive to break out of such nonsense. There will always be inequalities of social status, but we ought to aim for a world in which being “poor” means being on a two-week waiting list to go to the Moon.

Who are the right people to run technology? Positive-sum players. People who want to make everyone prosperous, and to do so while reducing or eliminating environmental degradation. I hope that this is clear. There are many major moral issues in technology around privacy, safety and security, and our citizenship in the greater world. I don’t mean to make light of those. Those are important and complicated issues, and I won’t claim that I always have the right answer. Still, I think that those are ancillary to the main issue, which is that technology is not run by positive-sum players. Instead, it’s run by people who hoard social access, damage others’ careers even when there is little to gain, and play political games against each other and against the world.

To make it clear, I don’t wish to identify as a capitalist or a socialist, or even as a liberal or conservative. The enemy is scarcity. We’ve seen that pure capitalism and pure socialism are undesirable and ineffective at eliminating it; but if it were otherwise, I’d welcome the solution that did so. It’s important to remember that scarcity itself is our adversary, and not some collection of ideas called an “ideology” and manufactured into an “other”. I don’t think that one needs to be a liberal or leftist necessarily in order to qualify as a technologist. This is about something different than the next election. This is about humanity and its long-term goals.

All of that said, there are people in society who prosper by creating scarcity. They keep powerful social organizations and groups closed, they deliberately concentrate power, and they excel at playing zero-sum games. And here’s the problem: while such people are possibly rarer than good-faith positive-sum players, they’re the ones who excel at organizational politics. They shift blame, take credit, and when they get into positions of power, they create artificial scarcity. Why? Because scarcity rarely galvanizes the have-nots against the haves; much more often, it creates chaos and distrust and divides the have-nots against each other, or (as in the case of San Francisco’s pointless conflict between the Second and Third Estates) pits the have-a-littles against the have-nothings.

Artificial scarcity is, in truth, all over the place in corporate life. Why do some people “get” good projects and creative freedom while others don’t? Why are many people (regardless of performance and the well-documented benefits of taking time off) limited to two or three weeks of vacation per year? Why is stack ranking, which has the effect of making decent standing in the organization a limited resource, considered morally acceptable? Why do people put emotional investment into silly status currencies like control over other peoples’ time? It’s easy to write these questions off as “complex” and decline to answer them, but I think that the answer’s simple. Right now, in 2015, the people who are most feared and therefore most powerful in organizational life are those who can create and manipulate the machinery of scarcity. Some of that scarcity is intrinsic. It is not an artifact of evil that salary pools and creative freedom must fall under some limit; it is the way things are. However, an alarming quantity of that scarcity is not. How often is it that missing a “deadline” has absolutely no real negative consequence on anything– other than annoyance to a man-struating executive who deserves full blame for inventing an unrealistic timeframe in his own mind? Very often. How many corporations would suffer any ill effect if their stack ranking machinery were abolished? Zero, and many would find immediate cultural improvements. Artificial scarcity is all over the place because there is power to be won by creating it; and, in the corporate world, those who acquire the most power are those who learn how to navigate environments of artificial scarcity, often generating it themselves, since it solidifies their power once gained.

Who runs the technology industry? Venture capitalists. Even though many technology companies are not venture-funded, the VC-funded companies and their bosses (the VCs) set the culture and they fund the companies that set salaries. Most of them, as I’ve discussed, are people who failed in the colonizing culture (the mainstream MBA/business world) and went West to boss nerds around. Having failed in the existing “Establishment” culture, they (somewhat unintentionally) create a new one that amplifies its worst traits, much in the way that people who are ejected from an established and cool-headed (in relative terms) criminal organization will often found a more violent one. So they’ve taken the relationship-driven anti-meritocracy for which the Harvard-MBA world is notorious, and then they went off and made a world (Sand Hill Road) that’s even more oligarchical, juvenile, and chauvinistic than the MBA culture that it splintered off from. Worse than being zero-sum players, these are zero-sum players whose rejection by the MBA culture (not all of whose people are zero-sum players; there are some genuine good-faith positive-sum players in the business world) was often due to their lack of vision. And hence, we end up with stack ranking. Stack ranking would not exist except for the fact that many technology companies are run by “leftover” CEOs and VCs who couldn’t get leadership jobs anywhere else. And because of the long-standing climate of terrible leadership in this industry, we end up with Snapchat and Clinkle but scant funding for clean energy. We end up with a world in which most software engineers work on stupid products that don’t matter.

In 2015, we live in a time of broad-based and pervasive organizational decline. While Silicon Valley champions all that is “startup”, another way to perceive the accelerated birth-and-death cycle of organizations is that they’ve become shorter-lived and more disposable in general. Perhaps our society is reaching an organizational Hayflick limit. Perhaps the “macro-age” of our current mode of life is senescent and, therefore, the organizations that we are able to form undergo rapid “micro” aging. There is individual gain, for a few, to be had in this period of organizational decay. A world in which organizations (whether young startups or old corporate pillars) are dying at such a high rate is one where rapid ascent is more possible, especially for those who already possess inherited connections (because, while organizations themselves are much more volatile and short-lived, the people in charge don’t change very often) and can therefore position themselves as “serial entrepreneurs” or “visionary innovators” in Silicon Valley. What is being missed, far too often, is that this fetishized “startup bloom” is not so much an artifact of good-faith outperformance of the Establishment, but rather an opportunistic reaction to a society’s increasing inability to form and maintain organizations that are worth caring about. Wall Street and Silicon Valley both saw that mainstream Corporate America was becoming inhospitable to people with serious talent. Wall Street decided to beat it on compensation; Silicon Valley amped up the delusional rhetoric about “changing the world”, the exploitation of young, male quixotry, and the willingness to use false promises (executive in 3 years! investor contact!) to scout talent. That’s where we are now. The soul of our industry is not a driving hatred of scarcity, but the impulse to exploit the quixotry of young talent. If we can’t change that, then we shouldn’t be trusted to “change the world” because our changes shall be mostly negative.

Technology must escape its colonial overseers and bring genuine technologists into leading roles. It cannot happen fast enough. In order to do so, it’s going to have to dump Sand Hill Road and the Silicon Valley economy in general. I don’t know what will replace it, but what’s in place right now is so clearly not working that nothing is lost by throwing it out wholesale.

Continuous negotiation

After reading this piece about asking for raises in software engineering, I feel compelled to share something that I’ve learned about negotiation. I can’t claim to be great at it, personally. I know how it works but– I’ll be honest– negotiating can be really difficult for almost all of us, myself included. As humans, we find it hard to ask someone for a favor or consideration when the request might be received badly. We also have an aversion to direct challenge and explicit recognition of social status. It’s awkward as hell to ask, “So, what do you think of me?” Many negotiations are, fundamentally, uncomfortably close to that question, and the social taboos and protocols around asking questions like that can make the process difficult for anyone to navigate. So I’m far from a tactical expert, and I’ll admit as much. There is one thing, however, that I’ve learned about successful workplace negotiators: they do it creatively, continuously, and persistently.

The comic-book depiction of salary negotiation is one in which the underpaid, under-appreciated employee (with arms full of folders and papers representing “work”) goes into her office and asks her pointy-haired boss for a raise: a measly 10 percent increase that she has, no doubt, earned. In a just world, she’d get it; but in the real world, she gets a line about there being “no money in the budget”. This depiction of salary negotiation gets it completely wrong. It sets it up as an episodic all-or-nothing affair in which the request must either be immediately granted (that’s rare) or the negotiator shall slink away, defeated.

Here’s why that scenario plays out so badly. Sure, she deserves a raise; but at that moment in time, the boss is presented with the unattractive proposition of paying more (or, worse yet, justifying higher payment to someone else) for the same work, rejects it, and the employee walks away feeling bitter. If this scenario plays out as described, it’s often a case where she failed to recognize the continually occurring opportunities for micro-negotiations, both before and at that point.

First of all, if someone asks for a raise in good faith and is declined, that’s an opportunity to ask for something else: a better title instead, improved project allocation, a conference budget, and possibly the capacity to delegate undesirable work. Even if “there’s no money in the budget” is a flat-out lie, there’s nowhere to proceed on the money front– you can’t call your boss a liar and say, “I think there is” or “Have you checked?”– so you look for something else that might be non-monetary, like a better working space. Titles are a good place to start. People tend to think that they “don’t matter”, but they actually matter a great deal, as public statements about how much trust the organization has placed in a person’s competence. They’re also given away liberally when managers aren’t able to give salary increases to people who “obviously deserve” them. Don’t get me wrong: I’d take a 75% pay raise over a fancy title; but if a raise is out of the question, then I’d prefer to ask for something else that I might actually get. When cash is tight, titles are cheap. As things improve, who gets first pick of the green-field, new projects that emerge? The people with the strongest reputations, of which titles are an important and formal component. When cash is abundant, it usually flows to the people on or near those high-profile projects.

Many things that we do, as humans, are negotiations, often subtle. Take a status meeting (as in project status, of course). Standup is like Scrabble. Bad players focus on the 100-point words. Good players try not to open up the board. Giving elaborate status updates is to focus on the 100-point words at the expense of strategic play. A terse, effective, update is a much better play. If you open yourself up to a follow-on question about project status (e.g. why something took a certain length of time, or needed to be done in a certain way) then you’ve done it wrong. You put something on the board that should have never gone there. The right status (here meaning social status) stance to take when giving (project) status is: “I will tell you what I am doing, but I decide how much visibility others get into my work, because there are people who are audited and people who are implicitly trusted and the decision has already been made that I’m in the second category… and I’m pretty sure we agree on this already but if you disagree, we’re gonna fucking dance.” When you give a complete but terse status update, you’re saying, “I’m willing to keep you apprised of what I’m up to, because I’m proud of my work, but I refuse to justify time because I work too hard and I’m simply too valuable to be treated as one who has to justify his own working time.”

Timeliness is another area of micronegotiation, and around meetings one sees a fair amount of status lateness (mixed with “good faith” random lateness that happens to everyone). The person who shows up late to a status meeting is saying, “I have the privilege of spending less time giving status (a subordinate activity) than the rest of you”. The boss who makes the uncomfortable joke-that-isn’t about that person being habitually late is saying, “you’re asking for too much; try being the 4th-latest a couple of times”. As it were, I think that status lateness is an extremely ineffective form of micronegotiation– unless you can establish that the lateness is because you’re performing an important task. Some “power moves” enhance status capital by exploiting the human aversion to cognitive dissonance (he’s acting like an alpha, so he must be an alpha) but others spend it, and status lateness tends to be one that spends it, because lateness is just as often a sign of sloppiness as of high value. Any asshole can be late, and the signature behavior of a true high-status person is not habitual minor lateness. In fact, actual high-power people, in general, are punctual and loyal and willing to do the grungiest of the grunge work for an important project or mission, but they magically make themselves unavailable (without it being obvious that that’s what they’re doing) for the unimportant stuff. If you’re looking to ape the signature of a high-power person (and I wouldn’t recommend achieving it via status lateness, because there are better ways) you shouldn’t do it by being 10 minutes late for every standup. That just looks sloppy. You do it by being early or on time, most of the time, and missing a few such meetings completely for an important reason. (“Sorry that I missed the meeting. I was busy with <something that actually matters>.”) Of course, you have to do this in a way that doesn’t offend, humiliate, or annoy the rest of the team, and it’s so hard to pull that off with status lateness that I’d suspect that anyone with the social skills to do it does not need to take negotiation advice from the likes of me.

Most negotiation theory is focused on large, episodic negotiations as if those were the way that progress in business is made. To be sure, those episodic meetings matter quite a bit. There’s probably a good 10-40 percent of swing space (at upper levels, much more!) in terms of the salary available to a person at a specific career juncture. However, what matters just as much is the preparation through micronegotiations. Someone with the track record of a 10-years-and-still-junior engineer isn’t in the running for $250,000/year jobs no matter how good he is at episodic salary negotiations. It’s hard to back up one’s demand for a raise if one is not perceived as a high performer, and that has as much to do with project allocation as with talent and raw exertion, and getting the best projects usually comes down to skilled micronegotiations (“hey, I’d like to help out”). In the workplace, when it comes to higher pay or status, the episodic negotiations usually come far too late– after a series of missed micronegotiation opportunities. One shouldn’t wait until one is underpaid, underappreciated, under-challenged, or overwhelmed with uninteresting work, because “the breaking point” is too late. The micronegotiations have to occur over time, and they must happen so fluently that most people aren’t even aware that the micronegotiations exist.

One upside of micronegotiation over episodic negotiation is that it’s rarely zero-sum. When you ask for a $20,000 raise directly (instead of something that doesn’t cost anything, like an improved title or more autonomy or a special project) you are marking a spot on a zero-sum spectrum, and that’s not a good strategy because you want your negotiating partner to be, well, a partner rather than an adversary. Micronegotiations are usually not zero-sum, because they usually pertain to matters that have unequal value to the parties involved. Let’s say that you work in an open-plan office. For programmers, they’re suboptimal and it’s probably not wise to ask for a private office; but some seats are better than others. Noise can be tuned out; visibility from behind is humiliating, stress-inducing, and depicts a person as having low social status. If you say to your boss, “I think we agree that I have a lot of important stuff on my plate, and I want the next seat in row A that becomes available”, getting a wall at your back, you’re not marking a spot on a zero-sum spectrum, because the people who make the decision as to whether you get a Row-A seat are generally not competing with you for that spot. So it’s no big deal for them to grant it to you. Instead, you’re finding a mutually beneficial solution where everyone wins: you get a better working space, and you’re no longer seen from behind (bringing a subtle improvement to the perception of your status, character, and competency, because the wall at your back depicts you as one who doesn’t fully belong in an open-plan office, but is “taking one for the team” by working in the pit) while your boss gets more output and is being asked for a favor (cf. Ben Franklin) that will demand less from him than a pay raise under a budget freeze.

The problem with software engineers isn’t that they’re bad at episodic salary negotiations. No one is good at those. If you’ve let yourself become undervalued by such a substantial amount that it “comes to a head”, you’re in a position that takes a lot of social acumen to talk your way out of. The problem is that they aren’t aware of the micronegotiations that are constantly happening around them. To be fair, many micronegotiations seem like the opposite: humility. When you hold the elevator for someone, you’re not self-effacingly representing your time as unimportant; instead, you’re showing that you understand the other person’s value and importance, which is a way of encouraging the other person to likewise value you. The best micronegotiators never seem to be out for themselves, but looking out for the group. It’s not “let’s get this shitty task off my plate and throw it into some dark corner of the company” but, “let’s get together and discuss how to remove some recurring commitments from the team”.

What does good negotiation look like? Well, I’m at 1,700 words, and it would take another 17,000 to scratch the surface of that topic, and I’m far from being an expert on it. What it isn’t, most of the time, is formal and episodic. It’s continuous, and long-term-oriented, and often positive-sum. When you ask for something, whether it’s a pay raise or a better seat in the office, it’s OK to walk away without it. What you can’t leave on the table is your own status; you can leave as one who didn’t get X, but you can’t leave as a person who didn’t deserve X. If your boss can’t raise your pay, get a title bump and better projects, and thank him in advance for keeping you in mind when the budget’s more open. If a wall at your back or a private office isn’t in the cards, then get a day per week of WFH and make damn sure that it’s your most productive day. This way, even if you’re not getting exactly the X that you asked for, you’re allowing a public statement to stand that, once an X becomes available, you deserve it.

Underappreciated workers don’t need to read more about episodic negotiations and BATNA and “tactics”. They need to learn how to play the long game. Long-game negotiation advice doesn’t sell as well because, well, it takes years before results are achieved; but, I would surmise, it’s a lot more effective.

Cool vs. powerful

Early this morning, this article crossed my transom: Why I Won’t Run Another Startup. It’s short, and it’s excellent. Go read it.

It brought to mind an interesting social dynamic that, I think, is highly relevant to people trying to position themselves in an economy that is increasingly fluid, but still respects many of the old rules. In my mind, the key quote is here, and it agrees with my own personal experience in and around startups, having been on both sides of the purchasing discussion:

Every office-bound exec wants to love a startup. Like a pet. But no one wants to buy from a startup. Especially big companies. Big companies want to buy from big, stable businesses. They want to trust that you’ll still be around in a few years. And their people need to feel you’re a familiar name.

Startups are cool. Someone who is putting his reputation and years of emotional and financial investment at risk, for gold or glory, conforms to a charismatic archetype. That “cool” status might beget power– but usually not. People like scrappy underdogs, but they don’t trust them. Being “scrappy” or “lean” makes you cute, and it might inspire in others a mild desire to protect you, but you don’t have power until people want you to protect them.


One of the more obvious examples of “cool versus powerful” is in an urban nightclub scene, which has its own intriguing sociology. Nightclub and party scenes are staunchly elitist and hierarchical but, at the same time, eager to flout the mainstream concept of social status. A 47-year-old corporate executive worth $75 million might be turned away at the door, while a 21-year-old male model gets in because he knows the promoter. Casinos have a similar dynamic: by design, pure randomness (except in poker and, to a degree, in blackjack) can make you either a gloating winner or a stupendous loser for the night. The gods of the dice are egalitarian with regard to the “real world”. People are attracted to both of these scenes because they have definitions of cool that are often contrary to those of mainstream, “square”, society.

On Saturday night at the club, old status norms are inverted or ignored. In a reversal of uncool corporate patriarchy, the young outrank the old, women outrank men, and having friends who are club promoters matters more than having friends who are hiring managers or investors. Such is “cool”. Cool may be fickle, but it can make a great deal of money while it lasts. Most cool people will be poor, unable either to bottle their own lightning or to exploit others’ electricity in a useful way, but a few who open the right nightclub in the right spot will make millions. Overtly trying to make money (given that most cool people, though middle- to upper-middle-class in native socioeconomic status, have very little cash on hand due to youth and underemployment) is deliberately uncool. In fact, most of the money made in the cool industry is from uncool people who want in, e.g. investment bankers whose only hope of entry is to drop $750 for a $20 bottle of vodka (“bottle service”).

Cool rarely leads to meaningful social status, and it doesn’t last. I’m writing this at 6:30 on a Wednesday morning in Chicago; at this exact moment and place, knowing the right club promoter in L.A. means nothing. (I’m also a 31-year-old married man. Besides, if I did care to try for cool– I wasn’t so successful when I was the right age for it– I’d tell the U.S. nightlife to fuck itself and head for Budapest’s ruin pubs; but that’s another story.) Cool rarely lasts after the age of 30, an age at which people are just starting to have actual power. And while one of the most powerful things (in terms of having a long-term effect on humanity) one can do is to contribute to posterity either as a parent or a teacher, both roles are decidedly uncool.

Open-plan offices

One of my least favorite office trends is that toward cramped, noisy spaces: the open-plan office. Somehow, the Wolf of Wall Street work environment became the coolest layout in the working world. It’s ridiculously ineffective: it causes people to be sicker and less productive, and while the open-plan layout is sold as being “collaborative”, it actually turns adversarial pretty quickly. It’s a recurring 9-hour economy class plane ride to nowhere, which is not exactly the best theater for human relationships or camaraderie. On an airplane, people just want their pills or booze to kick in so they can forget their physical discomfort for long enough to sleep, and they’re so cranky that even the flight attendant offering free beverages annoys them; in an office, they just want to put their headphones on and get something the fuck done.

Why is an open-plan office “cool”? Those who tend to view management in the worst possible light will say that open-plan is about surveillance, control, and ego-fulfillment for the bosses. Lackeys who trust management implicitly actually believe the nonsense about these spaces being “collaborative”. Neither is correct. The open-plan monster is actually about marketing. “Scrappy” startups have to sell themselves constantly to investors and clients. The actual getting done of work is quite subtle. Show me a quiet workplace where the median age is 45, people have offices with doors, half the staff are women, and there are mathematical scribblings on the whiteboards, and you’ve shown me a place where a lot’s getting done, but it doesn’t look productive from the “pattern matching” viewpoint of a SillyCon Valley venture capitalist. At 10:30 on a random Tuesday, all I’m going to see are people (old people! women with pictures of kids on their desks! people who leave at 5:30!) typing abstract symbols into computers. Are they writing formal verification software that will save lives… or are they playing some complicated text adventure game that happens to run in emacs and just look like Haskell code? If I’m an average VC, I won’t have a clue. Show me a typical open-plan startup office, and it immediately looks frantically busy, even if little’s getting done.

Being in an open-plan office makes you cool but it lowers your social status. There’s no contradiction there, because coolness and power often contradict. It makes you cool because it shows that you’re young and adaptable to the startup’s ravenous appetite for attractiveness– to investors and clients. The company’s not established or trusted yet, so it needs to strike a powerful image, and if you work in a trading-floor environment (for 1/7th of what your trading counterpart is paid in order to compensate for that environment) then you’re doing your part to create that image. You’re pitching in to the startup’s overarching need to market itself; you’re a team-player. (If you want to get actual work done, do it before 10:00 or after 5:00.) By accepting the otherwise degrading work situation of being visible from behind, you’re part of what makes that “scrappy underdog” an emotional favorite: the cool factor.

All of that said, people with status and power avoid visibility into many aspects of their work. Always. This is visible even in physical position. Even in an “egalitarian” open-plan office, the higher status people will, over time as seats shuffle, be less visible from behind than the peons. A newly-hired VP might face undesirable lines of sight in his first six months, but after a couple years, he’ll be in the row with a wall at his back.

One thing that I have learned is that it’s best if no one knows how hard you’re working. I average about 50 hours per week but, occasionally, I slack and put in a 3-hour day. Other times, I throw down and work 14-hour days (much of that at home). I prefer that no one know which is happening at the time. I certainly don’t want to be perceived as the hardest-working person in the office (low status) or the least hard-working (low commitment). Being “the Office X” for any X is undesirable; it’s OK to be liberal (or conservative, or Christian, or atheist, or feminist) and known for it, but you don’t want to be the Office Liberal, or the Office Conservative, or the Office Christian, or the Office Atheist, or the Office Feminist. Likewise, you never want to be the Office Slacker or the Office Workhorse. So on the rare day that I do need to slack, I up-moderate the appearance of working hard and do a couple tasks on my secret backlog of things that look hard but only take a couple minutes; and when I am working harder than anyone else, I down-moderate that appearance so that whatever I achieve seems more effortless than it actually was, because visible sacrifice or extreme effort might make one a “team player” but it’s a low-status move.

That said, even if my work effort were exactly socially optimal (75th percentile in a typical startup, or 50 hours per week) I would still want uncertainty about how much I’m working. Let’s say that 10 hours per day is the socially optimal work effort and I’m working exactly that. Still, if anyone else knows that I’m working exactly that much, then I utterly lose, status-wise, compared to the “wizard” who works the exact same amount but has completely avoided visibility into his work and might be working 3 hours per day and might be working 17. Being “pinpointed”, even if you’re at the exact right spot, makes you a loser. That’s why I hate “Agile” regimes that are designed to pinpoint people. Ask around about the work effort of a high-status person (like a CEO) and, because he’s not pinpointed, people will see what they want to see. Those who value effort will perceive an all-in hard worker, while those who admire talented slackers will see a supremely efficient “10x” sort of person.

This is what young people generally don’t get– and that older people usually understand through experience, making them less of a “culture fit” for the more cultish companies– about “Agile” and open-plan offices and violent transparency. Allowing extreme visibility into your work, as the “Agile” fad that is destroying software engineering demands, makes you cool. It makes you well-liked and affable. However, it disempowers you, even if your work commitment is exactly the socially optimal amount. It makes you a pretty little boy (or girl); not a man or woman. It makes you “a culture fit” but never a culture maker.

When you let people know how hard (or how little) you work, you’re giving away critical information and getting nothing in return. How little or how much you work can always be used against you; if you visibly work hard, people might see your efforts as unsustainable; they might distrust you on the suspicion that you have ulterior motives, like a rogue trader who never takes vacation; they might start tossing you undesirable grunt work assuming you’ll do it with minimal complaint; or they might think that you’re incompetent and have to work long hours to make up for your lack of natural ability. If you’re smart, you keep that information close to your chest. Just as your managers and colleagues should know nothing about your sex life– whether you’re having a lot of sex, or none, or an average amount– they should not know how many hours you’re actually working, whether you’re the biggest slacker or the hardest worker or right in the middle.

The most powerful statements that a person makes are what she gives, expecting nothing in return. It is not always a low-status move to give something and ask for nothing back. Giving people no-strings gifts that help them and don’t hurt you is not just ethically good, but it also improves your status by showing that you have good judgment. Giving people gifts that don’t help them, but that hurt you, either marks you as a supplicant or shows that you have terrible judgment. No one gains anything meaningful when you provide Agile-style micro-visibility into your work– executives don’t make better decisions, the team doesn’t gel any better– but you put yourself at unnecessary political risk. You’re hurting yourself “for the team” but the team doesn’t actually gain anything other than information it didn’t ask for and can’t use (unless someone intends to use it politically, and possibly against you). By doing this, you signify yourself as the over-eager sort of person who unintentionally generates political messes.

The open-plan office is cool but lowers one’s status. That said, cubicles are probably worse: low status and uncool. Still, I’d rather have a private office: uncool and middle-aged, but high in status. Private space means that your work actually matters.

“I don’t care what other people think about me”

One of my favorite examples of the divergence between what is cool and what is powerful is the statement, “I don’t care what other people think about me”. It’s usually a dishonest statement. Why would anyone who means it, say it? It’s also a cool statement. Cool people don’t care (or, more accurately, don’t seem to care) what is thought about them. However, it’s disempowering. Let’s transform it into something equivalent: “I don’t care about my reputation”. That’s not so much a “cool” statement as a reckless one. Reputation has a phenomenal impact on a person’s ability to be effective, and “I don’t care if I’m effective” is a loser’s statement. And yet, reputation is, quite exactly, what others think about a person. So why is one equivalent statement cool, and the other reckless?

Usually, people who say, “I don’t care what you think about me” are saying one of two things. The first is a fuck-you, either to the person or to the masses. Being cool is somewhat democratic; it’s about whether you are popular, seen as attractive, or otherwise beloved by the masses. Appealing to power is not democratic; most peoples’ votes actually don’t count for much. (Of course, if you brazenly flip off the masses, you might offend many people who do matter, so it’s not advisable in most circumstances.) The 24-year-olds in the open-plan office who play foosball from noon till 9:00 can decide if you’re cool, but they have no say in what you’re paid, how your work is evaluated, or whether you’re promoted or fired. It’s better to have them like you than to be disliked by them, but they’re not the ones making decisions that matter. So, a person who says, “I don’t care what you think about me” is often saying, “your vote doesn’t matter”. That’s a bit of a stupid statement, because even other prole-kickers don’t like the brazen prole-kickers.

The second meaning of “I don’t care what you think about me” is “I don’t care if you like me”. That’s fundamentally different. Personally, I do care about what people think of me. Reputation is far more powerful a factor in one’s ability to be effective in anything involving other humans than is individual capability. A reputation for competence is crucial. However, I don’t really care that much about being liked. I don’t want to be hated, but there’s really no difference between being mildly disliked by someone who’d never back me in a tight spot and being mildly liked by a person who’d never back me in a tight spot. It’s all the same, along that stretch: don’t make this person an enemy, don’t trust him as a friend.

Machiavelli was half-right and half-wrong with “It is better to be feared than loved.” It is not very valuable to be vacuously “loved”, as “scrappy startups” often are. His argument was that beloved princes are often betrayed– and we see that, all the time, in Silicon Valley– whereas feared princes are less likely to be maltreated. This may apply to Renaissance politics, that period being just as barbaric (if not more so) as the medieval era before it; but I don’t think that it applies to modern corporate politics. Being loved isn’t valuable, but being feared is detrimental as well. You don’t get what you want through fear unless what you want is to be avoided and friendless.

It is better to be considered competent than to be feared or loved. Competent at what? That, itself, is an interesting question. Take my notes, above, on why it is undesirable to provide visibility into how hard you work. If you’re a known slacker who, coasting on natural ability or acquired expertise, gets the job done and does it well, you’ve proven your competence at the job, but you’ve shown social incompetence, by definition, because people know that you’re working less hard than the rest of the team. Even if no one resents you for it, the fact that people have this information over you is one that lowers your status. Likewise, if you’re known to be a reliable hard worker, you’ve shown competence at self-control and focus; but, yet again, the fact that people know that you work longer hours than everyone else shows a bit of social incompetence. The optimal image, in terms of where you are on the piker-versus-workhorse continuum, is to be high enough in status that others’ image of you is exactly what you want it to be. I would say, then, that one wants to be seen as competent at life. It is not enough to be competent only at the job; that keeps you from getting fired, but it won’t get you promoted.

Of course, the idea that there’s such a thing as “competent at life” is ridiculous. I’m highly competent at most things that I do, but if I somehow got into professional football, I’d be one of the worst incompetents ever seen. “Competent at life” is an image, if not a hard reality. There probably isn’t such a thing, because for anyone there is a context that would humiliate that person (for me, professional football). That said, there are people who have the self-awareness and social acumen to make themselves most visible in contexts where they are the most competent (and have moments of incompetence, such as when learning a new skill, in private) and there are others who don’t. It’s best to be in the former group and therefore create the image (since there is no such reality) of universal competence.

It is better to be thought competent than to be loved or to be feared. If you are beloved but you are not respected and you are not trusted to be competent, you can be tossed aside in favor of someone who is prettier or more charismatic or younger or cooler or “scrappier” and more of an underdog; and, over time, the probability of that happening approaches one. People will feel a mild desire to protect you, but no one will come to you for protection. This is of minimal use, because the amount of emotional energy that powerful people have to expend in the protection of others is usually low; the mentor/protege dynamic of cinema is extremely rare in the real world; most people with actual power were self-mentoring. However, if you are feared, that doesn’t necessarily mean that you’ll be respected or seen as capable. Plenty of people are feared because they’re catastrophically incompetent. You’re much more likely to be avoided, isolated, and miserable, than to get your way through fear. Furthermore, it’s often necessary that blatant adversaries (i.e. someone who might damage your reputation or career to bolster his, or to settle a score) be intimidated, but you never want them to be frightened. An intimidated adversary declines to fight you, which is what you want; a frightened or humiliated one might do any number of things, most of which are bad for all parties.

Cool can disempower

It is not always undesirable to be cool or popular. Depending on one’s aims, it might be useful. Very young people are almost never powerful, and will have more fun if they’re seen as cool than if not. When you’re 17, teachers and parents and admissions officers (all uncool) have the power, so there’s a side game that’s sexier and more accessible. When you’re 23, being “cool” can get you a following and venture funding and turn you from yet-another app developer to a “founder” overnight. There is, however, a point (probably, in the late 20s) at which cool becomes a childish thing to put away. If you work well in a Scrum environment, that might make you “cool” in light of current IT fads, but it ultimately shows that you’ve excelled at subordination, which does not lend you an aura of power. (“Agile: How to be a 10X Junior Developer.”)

I am, of course, not saying that being likeable or cool is, ever, a bad thing. All else equal, it’s better to have both than not. They just aren’t very useful. They aren’t the same thing as status or power, and sometimes one or the other must be chosen. Open-plan culture and the startup world fetishize coolness and distract people from the uncool but important games that they’ll have to play (and get good at) in order to become adults. Ultimately, having a reputation for professional competence and being able to afford nice vacations is just more important than being considered “cool” by people who won’t remember one’s name in 10 years. At some point, you realize that it’s more important to have a career that’ll enable you to send your kids to top schools than to win the approval of people who are just a year and a half out of those schools. The “sexiness” of the startup cult seems to derive from those who haven’t yet learned this.