Why there are so few AI jobs

Something began in the 1970s that has been described as “the AI winter”, but to call it that is to miss the point, because the social illness it represents involves much more than artificial intelligence (AI). AI research was one of many casualties that came about as anti-intellectualism revived itself and society fell into a diseased state.

One might call the “AI winter” (which is still going on) an “interesting work winter”, and it pertains to far more of technology than AI alone, because it represented a sea change in what it meant to be a programmer. Before the disaster, technology jobs had an R&D flavor, like academia but with better pay and less of the vicious politics. After the calamitous 1980s and the replacement of R&D by M&A, work in interesting fields (e.g. machine learning, information retrieval, language design) became scarce and over 90% of software development became mindless, line-of-business makework. At some point, technologists stopped being autonomous researchers and became business subordinates, and everything went to hell. What little interesting work remained was only available in geographic “super-hubs” (such as Silicon Valley) where housing prices are astronomical compared to the rest of the country. Due to the emasculation of technology research in the U.S., economic growth slowed to a crawl, and the focus of the nation’s brightest minds turned to the creation of asset bubbles (seen in 1999, 2007, and 2014) rather than the generation of long-lasting value.

Why did this happen? Why did the entrenched public- and private-sector bureaucrats who run the world (with, even among them, the locus of power increasingly shifting to the private-sector bureaucrats, who can’t be voted out of office) lose faith in the research being done by people much smarter, and much harder-working, than they are? The answer is simple. It’s not even controversial. The end of the Cold War? Nah, it began before that. At fault is the lowly perceptron.

Interlude: a geometric puzzle

This is a simple geometry puzzle. Below are four points at the corners of a square, colored (and numbered) like so:

0 1
1 0

Is it possible to draw a line that separates the red points (0’s) from the green points (1’s)?

The answer is that it’s not possible. Any line that separated the colors would have to put exactly two points on each side. Now draw a circle passing through all four points. Any line can intersect that circle at no more than two points, so each side of the line sees a contiguous arc of that circle. The two points on either side would therefore have to be adjacent corners of the square, and adjacent corners have opposing colors. So any line puts one point of each color on each side: it’s not possible. Another way to say this is that the classes (colors) aren’t linearly separable.
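For the skeptical, here’s a quick brute-force check in Python (a demonstration, not a proof: it only sweeps a coarse grid of candidate lines, and the grid is my own arbitrary choice):

    from itertools import product

    # The four corners, labeled as above: 0's on one diagonal, 1's on the other.
    points = {(0, 0): 0, (1, 1): 0, (0, 1): 1, (1, 0): 1}

    # Sweep a coarse grid of lines a*x + b*y + c = 0 and test whether any of
    # them puts the two 0's strictly on one side and the two 1's on the other.
    grid = [k / 4 for k in range(-8, 9)]
    found = False
    for a, b, c in product(grid, grid, grid):
        if a == 0 and b == 0:
            continue  # not a line
        sides_0 = {a * x + b * y + c > 0 for (x, y), lab in points.items() if lab == 0}
        sides_1 = {a * x + b * y + c > 0 for (x, y), lab in points.items() if lab == 1}
        if len(sides_0) == 1 and len(sides_1) == 1 and sides_0 != sides_1:
            found = True
            break
    print(found)  # False: no line in the sweep separates the colors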

What is a perceptron?

“Perceptron” is a fancy name given to a mathematical function with a simple description. Let w be a known “weight” vector (if that’s an unfamiliar term, a list of numbers) and x be an input “data” vector of the same size, with the caveat that x[0] = 1 (a “bias” term) always. The perceptron, given w, is a virtual “machine” that computes, for any given input x, the following:

  • 1, if w[0]*x[0] + … + w[n]*x[n] > 0,
  • 0, if w[0]*x[0] + … + w[n]*x[n] < 0.

In machine learning terms, it’s a linear classifier. If there’s a linear function that cleanly separates the “Yes” class (the 1 values) from the “No” class (the 0 values), the decision rule can be expressed as a perceptron. There’s an elegant algorithm that, in the linearly separable case, finds a working weight vector, and it always converges.
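To make this concrete, here is a minimal sketch of a perceptron and its learning rule in Python (toy code of mine, not anything canonical; the learning rate is implicitly 1 and the epoch cap is arbitrary):

    def predict(w, x):
        # The perceptron's output: 1 if w.x > 0, else 0.
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

    def train(examples, epochs=100):
        # The classic perceptron rule. `examples` is a list of (x, label)
        # pairs, where x[0] == 1 (the bias input) and label is 0 or 1. On each
        # mistake, nudge the weights toward (or away from) the misclassified
        # input. If the classes are linearly separable this converges after
        # finitely many updates; if not, it thrashes forever, hence the cap.
        w = [0.0] * len(examples[0][0])
        for _ in range(epochs):
            mistakes = 0
            for x, label in examples:
                error = label - predict(w, x)  # -1, 0, or +1
                if error:
                    mistakes += 1
                    w = [wi + error * xi for wi, xi in zip(w, x)]
            if mistakes == 0:
                break  # converged: w separates the classes
        return w

    # AND is linearly separable, so training converges within a few epochs:
    AND = [((1, 0, 0), 0), ((1, 0, 1), 0), ((1, 1, 0), 0), ((1, 1, 1), 1)]
    w = train(AND)
    print([predict(w, x) for x, _ in AND])  # [0, 0, 0, 1]

Trained on XOR instead, the same loop would thrash until the epoch cap, which is the point of the next section.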

A mathematician might say, “What’s so interesting about that? It’s just a dot product being passed through a step function.” That’s true. Perceptrons are very simple. A single perceptron can solve more decision problems than one might initially think, but it can’t solve all of them. It’s too simple a model.

Limitations

Let’s say that you want to model an XOR (“exclusive or”) gate, corresponding to the following function:

+------+------+-----+
| in_1 | in_2 | out |
+------+------+-----+
|   0  |   0  |  0  |
|   0  |   1  |  1  |
|   1  |   0  |  1  |
|   1  |   1  |  0  |
+------+------+-----+

One might recognize that this is identical to the “brainteaser” above, with in_1 and in_2 corresponding to the x- and y-dimensions of the coordinate plane. This is the same problem. This function is nonlinear; it could be expressed as f(x, y) = x + y – 2xy, and that’s arguably the simplest representation that works. A separating “plane” in the 2-dimensional space of the inputs would be a line, and there’s no line separating the two classes. It’s mathematically obvious that the perceptron can’t do it. I showed this, above, using high-school geometry.

To a mathematician, this isn’t surprising. Marvin Minsky pointed out the mathematically evident limitations of a single perceptron. One can model intricate mathematical functions with more complex networks of perceptrons and perceptron-like units, called artificial neural networks. They work well. One can also, using what are called “basis expansions”, generate further dimensions from existing data in order to create a higher-dimensional space in which linear classifiers still work. (That’s what people usually do with support vector machines, which provide the machinery to do so efficiently.) For example, adding xy as a third “derived” input dimension would make the classes (0’s and 1’s) linearly separable. There’s nothing mathematically wrong with doing that; it’s something that statisticians do when they want to build complex models but still have some of the analytic properties of simpler ones, like linear regression or nearest-neighbor modeling.
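Here’s what that looks like for XOR (again a toy Python sketch; the weight vector is hand-picked, though the perceptron algorithm would also find one, now that the expanded classes are separable):

    def predict(w, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

    # XOR in the raw inputs, then the same points with the derived feature
    # x*y appended. In 2 dimensions no separating line exists; in the
    # expanded space (x, y, xy), a separating hyperplane does.
    raw = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    expanded = [((1, x, y, x * y), label) for (x, y), label in raw]

    # Weights encoding x + y - 2xy - 0.5 > 0, i.e. the f(x, y) above,
    # minus a threshold:
    w = (-0.5, 1.0, 1.0, -2.0)
    print(all(predict(w, x) == label for x, label in expanded))  # True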

The limitations of the single perceptron do not invalidate AI. At least, they don’t if you’re a smart person. Everyone in the AI community could see the geometrically obvious limitation of a single perceptron, and not one of them believed that it came close to invalidating their work. It only proved that more complex models were needed for some problems, which surprised no one. Single-perceptron models might still be useful for computational efficiency (in the 1960s, computational power was about a billion times as expensive as now) or because the data don’t support a more complex model; they just couldn’t learn or model every pattern.

In the AI community, there was no scandal or surprise. That some problems aren’t linearly separable is not surprising. However, some nerd-hating non-scientists (especially in business upper management) took this finding to represent more than it actually did.

They fooled us! A brain with one neuron can’t have general intelligence!

The problem is that the world is not run, and most of the wealth in it is not controlled, by intelligent people. It’s run by social-climbing empty-suits who are itching for a fight and would love to take some “eggheads” down a notch. Insofar as an artificial neural network models a brain, a perceptron models a single neuron, which can’t be expected to “think” at all. Yet the fully admitted limitations of a single perceptron were taken, by the mouth-breathing muscleheads who run the world, as an excuse to shit on technology and pull research funding because “AI didn’t deliver”. That produced an academic job market that can only be described as a pogrom, but it didn’t stop there. Private-sector funding dried up as short-term, short-tempered management came into vogue.

To make it clear, no one ever said that a single perceptron can solve every decision problem. It’s a linear model. That means it’s restricted, intentionally, to a small subspace of possible models. Why would people work with a restricted model? Traditionally, it was for lack of data. (We’re in the 1960s and ’70s, when data was contained on physical punch cards, a megabyte weighed something, and a disk drive cost more than a car.) If you don’t have a lot of data, you can’t build complex models. For many decision problems, the humble perceptron (like its cousins, logistic regression and support vector machines) did well and, unlike more computationally intensive linear classification methods (such as logistic regression, which requires gradient descent, or a variant thereof, over the log-likelihood surface; or the support vector machine, which poses a quadratic programming problem that we didn’t know how to solve efficiently until the 1990s), it could be trained with minimal computational expense, in a bounded amount of time. Even today, linear models are surprisingly effective for a large number of problems. For example, the first spam classifiers (Naive Bayes) operated using a linear model, and it worked well. No one was claiming that a single perceptron was the pinnacle of AI. It was something that we could build cheaply on 1970-era hardware and that could build a working model on many important datasets.
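To see why a Naive Bayes spam filter counts as a linear model: its log-odds score is a constant plus one additive weight per word, i.e. a dot product with a weight vector, just like the perceptron’s. A toy sketch (invented words and probabilities; for brevity, it ignores the usual contribution of absent words):

    import math

    # Probability of each word appearing in spam vs. ham (made-up numbers).
    p_spam = {"viagra": 0.60, "meeting": 0.05, "free": 0.40}
    p_ham  = {"viagra": 0.01, "meeting": 0.30, "free": 0.10}

    def spam_score(words):
        # Naive Bayes log-odds: log prior ratio plus a per-word weight.
        # Linear in the word-indicator features, so the decision boundary
        # (score > 0) is a hyperplane.
        score = math.log(0.5 / 0.5)  # even prior
        for word in words:
            if word in p_spam:
                score += math.log(p_spam[word] / p_ham[word])
        return score

    print(spam_score(["free", "viagra"]) > 0)  # True: flagged as spam
    print(spam_score(["meeting"]) > 0)         # False: kept as ham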

Winter war

Personally, I don’t think that the AI Winter was an impersonal, passive event like the changes of seasons. Rather, I think it was part of a deliberate resurgence of anti-intellectualism in a major cultural war– one which the smart people lost. The admitted limitations of one approach to automated decision-making gave the former high school bullies, now corporate fat cats, all the ammo they needed in order to argue that those “eggheads” weren’t as smart as they thought they were. None of them knew exactly what a perceptron or an “XOR gate” was, but the limitation that I’ve described was morphed into “neural networks can’t solve general mathematical problems” (arguably untrue) and that turned into “AI will never deliver”. In the mean-spirited and anti-liberal political climate of the 1980s, this was all that anyone needed as an excuse to cut public funding. The private sector not only followed suit, but amplified the trend. The public cuts were a mix of reasonable fiscal conservatism and mean-spirited anti-research sentiment, but the business elites responded strongly to (and took to a whole new level) the mean-spirited aspect, flexing their muscles as elitism (thought vanquished in the 1930s to ’50s) became “sexy” again in the Reagan Era. Basic research, which gave far too much autonomy and power to “eggheads”, was slashed, marginalized, and denigrated.

The claim that “AI didn’t deliver” was never true. What actually happened is that we solved a number of problems, once thought to require human intelligence, with a variety of advanced statistical means as well as some insights from fields like physics, linguistics, ecology and economics. Solving problems demystified them. Automated mail sorting, once called “artificial intelligence”, became optical character recognition. This, perhaps, was part of the problem. Successes in “AI” were quickly put into a new discipline. Even modern practitioners of statistical methods are quick to say that they do machine learning, not AI. What was actually happening is that, while we were solving specific computational problems once thought to require “intelligence”, we found that our highly specialized solutions did well on the problems they were designed for, and could be adapted to similar problems, but with very slow progress toward general intelligence. As it happens, we’ve learned in recent decades that our brains are even more complicated than we thought, with a multitude of specialized modules. That no single statistical algorithm can replicate all of them, working together in real time, shouldn’t surprise anyone. Is this an issue? Does it invalidate “AI” research? No, because most of those victories, while they fell short of replicating a human brain, still delivered immense economic value. Google, although it eventually succumbed to the sociological fragility and failure that inexorably follow closed allocation, began as an AI company. It’s now worth over $360 billion.

Also mixed in with the anti-AI sentiment is the religious aspect. It’s still an open and subjective question what human intelligence really is. The idea that human cognition could be replicated by a computer offended religious sentiments, even though few would consider automated mail sorting to bear on unanswerable questions about the soul. I’m not going to go deep into this philosophical rabbit hole, because I think it’s a waste of time to debate why people believe AI research (or, for a more popular example, evolution by natural selection) to offend their religious beliefs. We don’t know what qualia are or where they come from. I’ll just leave it at this. If we can use advanced computational techniques to solve problems that were expensive, painful, or impossible given the limitations of human cognition, we should absolutely do it. Those who object to AI on religious grounds fear that advanced computational research will demystify cognition and bring about the end of religion. Ignoring the question of whether an “end of religion” is a bad thing, or what “religion” is, there are two problems with this. First, if there is something to us that is non-material, we won’t be able to replicate it mechanically and there is no harm, to the sacred, in any of this work. Second, computational victories in “AI” tend to demystify themselves, and the subfield is no longer considered “AI”. Instead, it’s “optical character recognition” or “computer game-playing”. Most of what we use on a daily basis (often behind the scenes, such as in databases) comes from research that was originally considered “artificial intelligence”.

Artificial intelligence research has never told us, and will never tell us, whether it is more reasonable to believe in gods and religion or not to believe. Religion is often used by corrupt, anti-intellectual politicians and clerics to rouse sentiment against scientific progress, as if automation of human grunt work were a modern-day Tower of Babel. Yet, to show what I mean by AI victories demystifying themselves, almost no one would hesitate to use Google, a web-search service powered by AI-inspired algorithms.

Why do the anti-intellectuals in politics and business wish to scare the public with threats of AI-fueled irreligion and secularism (as if those were bad things)? Most of them are intelligent enough to realize that they’re making junk arguments. The answer, I think, is about raw political dominance. As they see it, the “nerds” with their “cushy” research jobs can’t be allowed to (gasp!) have good working conditions.

The sad news is that the anti-intellectuals are likely to take the economy and society down with them. In the 1960s, when we were putting billions of dollars into “wasteful” research spending, the economy grew at a record pace. The world economy was growing at 5.7 percent per year, and the U.S. economy was the envy of the world. Now, in our spartan time of anti-intellectualism, anti-science sentiment, and corporate elitism, the economy is sluggish and the society is stagnant– all because the people in charge can’t stand to see “eggheads” win.

Has AI “delivered”?

If you’re looking to rouse religious fear and fury, you might make a certain species of fantastic argument against “artificial intelligence”. The truth of the matter, however, is that while we’ve seen domain-specific superiority of machines over human intelligence in rote processes, we’re still far from creating an artificial general intelligence, i.e. a computational entity that can exhibit the general learning capability of a human. We might never do it. We might not need to; and, I would argue, if it isn’t useful, we shouldn’t.

In a way, “artificial intelligence” is a defined-by-exclusion category of “computational problems we haven’t solved yet”. Once we figure out how to make computers better at something than humans are, it becomes “just computation” and is taken for granted. Few believe they’re using “an AI” when they use Google for web search, because we’re now able to conceive of the computational work it does as mechanical rather than “intelligent”.

If you’re a business guy just looking to bully some nerds, however, you aren’t going to appeal to religion. You’re going to make the claim that all this work on “artificial intelligence” hasn’t “delivered”. (Side note: if someone uses “deliver” intransitively, as business bullies are wont to do, you should punch that person in the face.) Saying someone or something isn’t “delivering” is a way to put false objectivity behind a claim that means nothing other than “I don’t like that person”. As for AI, it’s true that artificial general intelligence has eluded us thus far, and continues to do so. It’s an extremely hard problem: far harder than the optimists among us thought it would be, fifty years ago. However, the CS research community has generated a hell of a lot of value along the way.

The disenchantment might be similar to the question about “flying cars”. We actually have them. They’re called small airplanes. In the developed world, a person of average means can learn how to fly one. They’re not even that much more expensive than cars. The reason so few people use airplanes for commuting is that it just doesn’t make economic sense for them: the savings of time don’t justify the increased fuel and maintenance costs. But a middle-class American or European can, if she wants, have a “flying car” right now. It’s there. It’s just not as cheap or easy to use as we’d like. With artificial intelligence, that research has brought forth a ridiculous number of victories and massive economic growth. It just hasn’t brought forth an artificial general intelligence. That’s fine; it’s not clear that we need to build one in order to get the immense progress that technologists create when given autonomy and support.

Back to the perceptron

One hard truth I’ve learned is that any industrial effort will have builders and politicians. It’s very rare that someone is good at both. In the business world, those unelected private-sector politicians are called “executives”. They tend, for a variety of reasons, to put themselves into pissing contests with the builders (“eggheads”) who are actually making stuff. One time-tested way to show up the builders is to take something that is obviously true (leading the builders to agree with the presentation) but present it out of context in a way that is misleading.

The incapacity of the single perceptron for general mathematical modeling is a prime example of this. Not one AI researcher was surprised that such a simple model couldn’t describe all patterns or equational relationships; the fact can be proven (as I did, above) with high-school geometry. That a single perceptron can’t model a key logical operation is obviously true. The builders knew it, and agreed. Unfortunately, what the builders failed to see was that the anti-intellectual politicians were taking this fact way out of context, using the known limitations of a computational building block to ascribe limitations (that did not exist) to the structures built from it. This led to the general dismantling of public, academic, and private support for technological research, an anti-intellectual and mean-spirited campaign that continues to this day.

That’s why there are so few AI jobs.

Technology’s Loser Problem

I’m angry. The full back story isn’t worth getting into, but there was a company where I applied for a job in the spring of 2013, to build its machine learning infrastructure from scratch. It was a position of technical leadership (Director equivalent, but writing code with no reports) and I would have been able to use Clojure. As it happened, I didn’t get it. They were looking for someone more experienced, who’d built those kinds of systems before, and wouldn’t take 6 months to train up to the job. That, itself, is not worth getting angry about. Being turned down happens, especially at high levels.

I found out, just now, that the position was not filled. Not then. Not 6 months later. Not to this day, more than a year later. It has taken them longer to fill the role than it would have taken for me to grow into it.

When they turned me down, it didn’t faze me. I thought they’d found a better candidate. That happens; the only thing I can do is make myself better. I was, however, a bit irked to learn that the position had gone unfilled for longer than it would have taken me to gain the necessary experience. I lost, and so did they.

That’s not what makes me angry. Rationally, I realize that most companies aren’t going to call back a pretty-good candidate they rejected because they had just opened the position and thought they could do better (if you’re in the first 37% of candidates for a job, it makes sense for them not to choose you and, empirically, first and second applicants for a high-level position rarely get it). That’s the sort of potentially beneficial but extremely awkward social process that just won’t happen. What makes me angry is the realization of how common a certain sort of decision is in the technology world. We make a lot of lose-lose decisions that hurt all of us. Extremely specific hiring requirements (that, in bulk, cost the company more in waiting time than training a 90% match up to the role would) are just the tip of the iceberg.
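That 37% figure is the classical “secretary problem”: an employer who sees candidates one at a time, and can’t call back a reject, does best by passing on roughly the first 1/e (about 37%) of them, then hiring the first candidate who beats everyone seen so far. A quick simulation (my own toy code):

    import math
    import random

    def hires_best(n):
        # One trial: n candidates in random order, quality = rank (higher
        # is better). Pass on the first n/e, then take the first who beats
        # them all. Returns True if the strategy lands the single best one.
        order = list(range(n))
        random.shuffle(order)
        cutoff = int(n / math.e)
        best_seen = max(order[:cutoff], default=-1)
        for candidate in order[cutoff:]:
            if candidate > best_seen:
                return candidate == n - 1
        return False  # never hired anyone

    trials = 100_000
    print(sum(hires_best(50) for _ in range(trials)) / trials)  # ~0.37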

You know those people who complain about the lack of decent <gender of sexual interest> but (a) reject people for the shallowest, stupidest reasons, (b) aren’t much of a prize and don’t work to better themselves, and (c) generally refuse to acknowledge that the problem is rooted in their own inflated perception of their market value? That’s how I feel every time I hear some corporate asswipe complain about a “talent shortage” in technology. No, there isn’t one. You’re either too stingy or too picky or completely inept at recruiting, because there’s a ton of underemployed talent out there.

Few of us, as programmers, call the initial shots. We’ve done a poor job of making The Business listen to us. However, when we do have power, we tend to fuck it up. One of the problems is that we over-comply with what The Business tells us it wants. For example, when a nontechnical CEO says, “I only want you to hire absolute rock stars”, what he actually means is, “Don’t hire an idiot just to have a warm body or plug a hole”. However, because they tend to be literal and over-compliant, programmers will interpret that to mean, “Reject any candidate who isn’t 3 standard deviations above the mean.” This leads to positions not being filled, because The Business is rarely willing to pay what one standard deviation above the mean costs, let alone three.
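To put numbers on the literal reading (a back-of-envelope sketch which assumes, generously and wrongly but well enough to make the point, that “talent” is normally distributed):

    import math

    def fraction_above(z):
        # Fraction of a normal distribution above z standard deviations,
        # computed from the complementary error function.
        return 0.5 * math.erfc(z / math.sqrt(2))

    print(fraction_above(1))  # ~0.159: about 1 candidate in 6
    print(fraction_above(3))  # ~0.00135: about 1 candidate in 740

A company screening for “+3 sigma” is holding out for roughly one person in seven hundred, while typically budgeting for the one in six.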

Both sides now

I’ve been on both sides of the interviewing and hiring process. I’ve seen programmers’ code samples described with the most vicious language over the most trivial mistakes, or even stylistic differences. I’ve seen job candidates rejected for the most god-awful stupid reasons. In one case, the interviewer clearly screwed up (he misstated the problem in a way that made it impossible) but, unwilling to lose face by admitting the problem was on his end, he claimed the candidate failed the question. Another candidate was dinged on a back-channel reference (don’t get me started on that sleazy practice, which ought to be illegal) in which someone claimed, without any evidence, that “he didn’t do much” on a notable project four years ago. I once saw an intern denied a full-time offer because he lived in an unstylish neighborhood. (The justification was that one had to be “hungry”, mandating Manhattan.) Many of us programmers are so butthurt about not being allowed to sit at the cool kids’ table that, when given the petty power associated with interviewing other programmers, the bitch-claws come out in a major way.

Having been involved in interviewing and recruiting, I’ll concur that there are a significant number of untalented applicants. If it’s 99.5 percent, you’re doing a lot of things wrong, but most resumes do come from people way out of their depth. Moreover, as with dating, there’s an adverse weighting in play. Most people aren’t broken, but broken people go on orders of magnitude more dates than everyone else, which is why most people’s dating histories have a disproportionate representation of horror stories, losers, and weirdos. It’s the same with hiring, but phone screening should filter against that. If you’re at all good at it, about half of the people brought in-office will be solid candidates.

Of course, each requirement cuts down the pool. Plenty of companies (in finance, some officially) have a “no job hoppers” or “no unemployeds” rule. Many mandate high levels of experience in new technologies (even though learning new technologies is what we’re good at). Then, there are those who are hung up on reference checking in weird and creepy ways. I know of one person who proudly admits that his reference-checking protocol is to cold-call a random person (again, back-channel) in the candidate’s network and ask the question, without context, “Who is the best person you’ve ever worked with?” If anyone other than the candidate is named, the candidate is rejected. That’s not being selective. That’s being an invasive, narcissistic idiot. Since each requirement shrinks the pool of qualified people, it doesn’t take long before the prejudices winnow an applicant pool down to zero.

Programmers? Let’s be real here, we kinda suck…

As programmers, we’re not very well-respected, and when we’re finally paid moderately well, we let useless business executives (who work 10-to-3 and think HashMap is a pot-finding app) claim that “programmer salaries are ridiculous”. (Not so.) Sometimes (to my horror) you’ll hear a programmer even agree that our salaries are “ridiculous”. Fuck that bullshit; it’s factually untrue. The Business is, in general, pretty horrible to us. We suffer under closed allocation, deal with arbitrary deadlines, and if we don’t answer to an idiot, we usually answer to someone else who does. Where does the low status of programmers come from? Why are we treated as cost centers instead of partners in the business? Honestly… much of the problem is us. We’ve failed to manage The Business, and the result is that it takes ownership of us.

Most of the time, when a group of people is disproportionately successful, the cause isn’t any superiority of the average individual, but a trait of the group: they help each other out. People tend to call these formations “<X> Mafia” where X might be an ethnicity, a school, or a company. Y Combinator is an explicit, pre-planned attempt to create a similar network; time will tell if it succeeds. True professions have it. Doctors look out for the profession. With programmers, we don’t see this. There isn’t a collective spirit: just long email flamewars about tabs versus spaces. We don’t look out for each other. We beat each other down. We sell each other out to non-technical management (outsiders) for a shockingly low bounty, or for no reason at all.

In many investment banks, there’s an established status hierarchy in which traders and soft-skills operators (“true bankers”) are at the top, quants are in the middle, and programmers (non-quant programmers are called “IT”) are even lower. I asked a high-ranking quant why it was this way, and he explained it in terms of the “360 degree” performance reviews. Bankers and traders all gave each other top ratings, and wrote glowing feedback for minor favors. They were savvy enough to figure out that it was best for them to give great reviews up, down, and sideways, regardless of their actual opinions. Quants tended to give above-average ratings and occasionally wrote positive feedback. IT gave average ratings for average work and plenty of negative feedback. The programmers were being the most honest, but hurting each other in the process. The bankers and traders were being political, and that’s a good thing. They were savvy enough to know that it didn’t benefit them to sell each other out to HR and upper management. Instead, they arranged it so they all got good ratings and the business had to, at baseline, appreciate and reward all of them. While it might seem that this hurt top performers, it had the opposite effect. If everyone got a 50 percent bonus and a 20 percent raise, management had to give the top people (and, in trading, it’s pretty obvious who those are) even more.

Management loves to turn high performers against the weak, because this enables management to be stingy on both sides. The low performers are fired (they’re never mentored or reassigned) and the high performers can be paid a pittance and still have a huge bonus in relative terms (not being fired vs. being fired). What the bankers were smart enough to realize (and programmers, in general, are not) is that performance is highly context-driven. Put eight people of exactly equal ability on a team to do a task and there will be one leader, two or three contributors, and the rest will be marginal or stragglers. It’s just more efficient to have the key knowledge in a small number of heads. Open source projects work this way. What this means is that, even if you have excellent people and no bad hires, you’ll probably have some who end up with not much to show for their time (which is why open allocation is superior; they can reassign themselves until they end up in a high-impact role). If management can see who is in what role, it can fire the stragglers and under-reward the key players (who, because they’re already high performers, are probably motivated by things other than money… at least, for now). The bankers and traders (and, to a lesser extent, the quants) had the social savvy and sense to realize that it was best that upper management not know exactly who was doing what. They protected each other, and it worked for them. The programmers, on the other hand, did not, and this hurt top performers as well as those on the bottom.

Let’s say that an investment bank tried to impose tech-company stack ranking on its employees, associate level and higher. (Analyst programs are another matter, not to be discussed here.) Realizing the mutual benefit in protecting each other, the bankers would find a way to sabotage the process by giving everyone top ratings, ranking the worst employees highly, or simply refusing to do the paperwork. And good for them! Far from being unethical, this is what they should do: collectively work The Business to get what they’re actually worth. Only a programmer would be clueless enough to go along with that nonsense.

In my more pessimistic moods, I tend to think that we, as programmers, deserve our low status and subordinacy. As much as we love to hate those “business douchebags” there’s one thing I will say for them. They tend to help each other out a lot more than we do. Why is this? Because they’re more political and, again, that might not be a bad thing. Ask a programmer to rate the performance of a completely average colleague and you’ll get an honest answer: he was mediocre, we could have done without him. These are factual statements about average workers, but devastating when put into words. Ask a product manager or an executive about an average colleague and you’ll hear nothing but praise: he was indispensable, a world-class player, best hire in ten years. They realize that it’s politically better for them, individually and as a group, to keep their real opinions to themselves and never say anything that could remotely endanger another’s career. Even if that person’s performance was only average, why make an enemy when one can make a friend?

“Bad code”

Let’s get to another thing that we do, as programmers, that really keeps us down. We bash the shit out of each other’s code and technical decision-making, often over minutiae.

I hate bad code. I really do. I’ve seen plenty of it. (I’ve written some, but I won’t talk about that.) I understand why programmers complain about each other’s code. Everyone seems to have an independent (and poorly documented) in-head culture that informs how he or she writes code, and reading another person’s induces a certain “culture shock”. Even good code can be difficult to read, especially under time pressure. And yes, most large codebases have a lot of code in them that’s truly shitty, sometimes to the point of being nearly impossible to reason about. Businesses have failed because of code quality problems, although (to tell the whole story) it’s rare that one bad programmer can do that much damage. The worst software out there isn’t the result of one inept author, but the result of code having too many authors, often over years. It doesn’t help that most companies assign maintenance work either to junior programmers or to demoted (and disengaged) senior ones, neither category having the power to do it right.

I’d be the last one to come out and defend bad code. That said, I think we spend too much time complaining about each other’s code– and, worse yet, we tend toward the unforgivable sin of complaining to the wrong people. A technical manager has, at least, the experience and perspective to know that, at some level, every programmer hates other people’s code. But if that programmer snitches to a non-technical manager or executive, well… you’ve just invited a 5-year-old with a gun to the party. Someone might get fired because “tabs versus spaces” went telephone-game into “Tom does shoddy work and is going to destroy the business”. Because executives are politically savvy enough to protect the group, and only sell each other out in extreme circumstances, what started out as a stylistic disagreement sounds (to the executive ear) like Tom (who used his girlfriend’s computer to fix a production problem at 11:45 on a Friday night, the tabs/spaces issue being for want of an .emacs.d) is deliberately destroying the codebase and putting the whole company at risk.

As programmers, we sell each other out all the time. If we want to advance beyond reasonable but merely upper-working class salaries, and be more respected by The Business, we have to be more careful about this kind of shit. I’ve heard a great number of software engineers say things like, “Half of all programmers should just be fired.” Now, I’ll readily agree that there are a lot of badly-trained programmers out there whose lack of skill causes a lot of pain. But I’m old enough to know that people come to a specific point from a multitude of paths and that it’s not useful to personalize this sort of thing. Also, regardless of what we may think as individuals, almost no doctor or banker would ever say, to someone outside his profession, “half of us should be fired”. They’re savvy enough to realize the value of protecting the group, and handling competence and disciplinary matters internally. Whether to fire, censure, mentor or praise is too important a decision to let it happen outside of our walls.

There are two observations about low-quality code, one minor and one major. The minor one is that code has an “all of us is worse than any of us” dynamic. As more hands pass over code, it tends to get worse. People needing specific features hack the code, never tending to the slow growth of complexity, and the program evolves over time into something that nobody understands because too many people were involved in it. Most software systems fall to pieces not because of incompetent individuals, but because of unmanaged growth of complexity. The major point on code quality is: it’s almost always management’s fault.

Bad code comes from a multitude of causes, only one of which is low skill in programmers. Others include unreasonable deadlines, unwillingness to attack technical debt (a poor metaphor, because the interest rate on technical “debt” is both usurious and unpredictable), bad architecture and tooling choices, and poor matching of programmers to projects. Being stingy, management wants to hire the cheapest people it can find and give them the least time possible in which to do the work. That produces a lot of awful code, even if the individual programmers are capable. Most of the things that would improve code quality (and, in the long term, the health and performance of the business) are things that management won’t let the programmers have: more competitive salaries, more autonomy, longer timeframes, time for refactoring. The only thing that management and the engineers can agree on is firing (or demoting, because their work is often still in use and The Business needs someone who understands it) those who wrote bad code in the past.

One thing I’ve noticed is that technology companies do a horrible job of internal promotion. Why is that? Because launching anything will typically involve compromises with the business on timeframe and headcount, resulting in bad code. Any internal candidate for a promotion has left too many angles for attack. Somewhere out there, someone dislikes a line of code he wrote (or, if he’s a manager, something about a project he oversaw). Unsullied external candidates win, because no one can say anything bad about them. Hence, programming has the culture of mandatory (but, still, somewhat stigmatized) job hopping we know and love.

What’s really at the heart of angry programmers and their raging against all that low-quality code? Dishonest attribution. The programmer can’t do shit about the dickhead executive who set the unreasonable deadlines, or the penny-pinching asswipe managers who wouldn’t allow enough salary to hire anyone good. Nor can he do much about the product managers or “architects” who sit above and make his life hell on a daily basis. But he can attack Tom, his same-rank colleague, over that commit that really should have been split into two. Because they’re socially unskilled and will generally gleefully swallow whatever ration of shit is fed to them by management, most programmers can very easily be made to blame each other for “bad code” before blaming the management that required them to use the bad code in the first place.

Losers

As a group, software engineers are losers. In this usage, I’m not using the MacLeod definition (which is more nuanced) and my usage is halfway pejorative. I generally dislike calling someone a loser, because the pejorative, colloquial meaning of that word conflates unfortunate circumstance (one who loses) with deserved failure. Here, however, it applies. Why do we lose? Because we play against each other, instead of working together to beat the outside world. As a group, we create our own source of loss.

Often, we engage in zero- or negative-sum plays just to beat the other guy. It’s stupid. It’s why we can’t have nice things. We slug each other in the office and wonder why external hires get placed over us. We get into flamewars about minutiae of programming languages, spread FUD, and eventually some snot-nosed dipshit gets the “brilliant” idea to invite nontechnical management to weigh in. The end result is that The Business comes in, mushroom stamps all participants, and says, “Everything has to be Java”.

Part of the problem is that we’re too honest, and we impute honesty in others when it isn’t there. We actually believe in the corporate meritocracy. When executives claim that “low performers” are more of a threat to the company than their astronomical, undeserved salaries and their doomed-from-the-start pet projects, programmers are the only people stupid enough to believe them, and will often gleefully implement those “performance-based” witch hunts that bankers would be smart enough to evade (by looking for better jobs, and arranging for axes to fall on people planning exits anyway). Programmers attempt to be apolitical, but that ends up being very political, because the stance of not getting political means that one accepts the status quo. That’s radically conservative, whether one admits it or not.

Of course, the bankers and traders realize the necessity of appearing to speak from a stance of professional apolitical-ness. Every corporation claims itself to be an apolitical meritocracy, and it’s not socially acceptable to admit otherwise. Only a software engineer would believe in that nonsense. Programmers hear “Tom’s not delivering” or “Andrea’s not a team player” and conceive of it as an objective fact, failing to recognize that, 99% of the time, it means absolutely nothing more or less than “I don’t like that person”.

Because we’re so easily swayed, misled, and divided, The Business can very easily take advantage of us. So, of course, it does. It knows that we’ll sell each other out for even a chance at a seat at the table. I know a software engineer who committed felony perjury against his colleagues just to get a middle-management position and the right to sit in on a couple of investor meetings. Given that this is how little we respect each other, ourselves, and our work, is it any wonder that software engineers have such low status?

Our gender issues

I’m going to talk, just briefly, about our issues with women. Whatever the ultimate cause of our lack of gender diversity– possibly sexism, possibly that the career ain’t so great– it’s a major indictment of us. My best guess? I think sexism is a part of it, but I think that most of it is general hostility. Women often enter programming and find their colleagues hostile, arrogant, and condescending. They attribute that to their gender, and I’m sure that it’s a small factor, but men experience all of that nonsense as well. To call it “professional hazing” would be too kind. There’s often nothing professional about it. I’ve dealt with rotten personalities, fanaticism about technical preference or style, and condescension and, honestly, don’t think there’s a programmer out there who hasn’t. When you get into private-sector technology, one of the first things you learn is that it’s full of assholes, especially at higher levels.

Women who are brave enough to get into this unfriendly industry take a look and, I would argue, most decide that it’s not worth it to put up with the bullshit. Law and medicine offer higher pay and status, more job security, fewer obnoxious colleagues, and enough professional structure in place that the guy who cracks rape jokes at work isn’t retained just because he’s a “rockstar ninja”.

“I thought we were the good guys?”

I’ve often written from a perspective that makes me seem pro-tech. Originally, I approached the satirical MacLeod pyramid with the belief that “Technocrat” should be used to distinguish positive high-performers (apart from Sociopaths). I’ve talked about how we are a colonized people, as technologists. It might seem that I’m making businesspeople out to be “the bad guys” and treating programmers as “the good guys”. Often, I’m biased in that very direction. But I also have to be objective. There are good business people out there, obviously. (They’re just rare in Silicon Valley, and I’ll get to that.) Likewise, software engineers aren’t all great people, either. I don’t think either “tribe” has a monopoly on moral superiority. As in Lost, “we’re the good guys” doesn’t mean much.

We do get the worst (in terms of ethics and competence) of the management/business tribe in the startup world. That’s been discussed at length, in the essay linked above. The people who run Silicon Valley aren’t technologists or “nerds” but Machiavellian businessmen who’ve swooped into the Valley to take advantage of said nerds. The appeal of the Valley, for the venture capitalists and non-technical bro executives who run it, isn’t technology or the creation of value, but the unparalleled opportunity to take advantage of too-smart, earnest hard workers (often foreign) who are so competent technically that they often unintentionally generate value, but don’t know the first thing about how to fight for their own interests.

It’s easy to think ourselves morally superior, just because the specific subset of business people who end up in our game tends to be the worst of that crowd. It’s also a trap. We have a lot to learn from the traders and bankers of the world about how to defend ourselves politically, how to stand a chance of capturing some of the value we create, and how to prevent ourselves from being robbed blind by people who may have lower IQs, but have been hacking humans for longer than we could have possibly been using computers. Besides, we’re not all good. Many of us aren’t much better than our non-technical overlords. Plenty of software engineers would gladly join the bad guys if invited to their table. The Valley is full of turncoat software engineers who don’t give a shit about the greater mission of technology (using knowledge to make people’s lives better) and who’d gladly sell their colleagues out to cost-cutting assholes in management.

Then there are the losers. Losers aren’t “the bad guys”. They don’t have the focus or originality that would enable them to pull off anything complicated. Their preferred sin is typically sloth. They’ll fail you when you need them the most, and that’s what makes them infuriating. They just want to put their heads down and work, and the problem is that they can’t be trusted to “get political” when that’s exactly what’s needed. The danger of losers is in their numbers. The problem is that so many software engineers are clueless, willing losers who’ll gladly let political operators take everything from them.

When you’re young and don’t know any better, one of the appeals of software engineering is that it appears, superficially, to tolerate people of low social ability. To people used to artificial competition against their peers, this seems like an attractive trait of the industry; it’s not full of those “smooth assholes” and “alpha jocks”. After several years observing various industries, I’ve come to the conclusion that this attitude is not merely misguided, but counterproductive. You want socially skilled colleagues. Being the biggest fish in a small pond just means that there are no big fish to protect you when the sharks come in. Most of those “alpha jocks” aren’t assholes or idiots (talk to them, nerds; you’ll be surprised) and, when The Business comes in and is looking for a fight, it’s always best to have strong colleagues who’ve got your back.

Here’s an alternate, and quite plausible, hypothesis: maybe The Business isn’t actually full of bad guys. One thing that I’ve realized is that people tend to push blame upward. For example, the reputation of venture capitalists has been harmed by founders blaming “the VCs” for their own greed and mismanagement. It gives the grunt workers an external enemy, and the clueless can be tricked into working harder than they should (“they don’t really like us and haven’t given us much, but if we kill it on this project and prove them wrong, maybe they’ll change their minds!”). It actually often seems that most of the awfulness of the software industry doesn’t come directly from The Business, but from turncoat engineers (and ex-engineers) trying to impress The Business. In the same way that young gang members are more prone to violence than elder dons, the most creative forms of evil seem to come from ex-programmers who’ve changed their colors.

The common enemy

So long as software engineers can easily be divided against each other on trivial matters like tabs versus spaces and scrum versus kanban, we’ll never get the respect (and, more importantly, the compensation) that we’re due. These issues distract us from what we really need to do, which is figure out how to work The Business. Clawing at each other, each trying to become the favored harem queen of the capitalist, is suboptimal compared to the higher goal of getting out of the harem.

I’ve spoken of “The Business” as if it were a faceless, malevolent entity. It might sound like I’m anti-business, and I’m not. Business is just a kind of process. Good people, and bad people, start businesses and some add great value to the world. The enemy isn’t private enterprise itself, but the short-term thinking and harem-queen politics of the established corporation. Business organizations get to a point where they cease having a real reason to exist, and all that’s left is the degenerate social contest for high-ranking positions. We, as programmers, seem to lack the skill to prevent that style of closed-allocation degeneracy from happening. In fact, we seem to unintentionally encourage it.

The evil isn’t that software is a business, but that technical excellence has long since been subordinated entirely to the effectively random emotional ups and downs of non-technical executives who lack the ability to evaluate our work. It’s that our weird ideology of “never get political” is actually intensely political and renders us easy to abuse. Business naturally seems to be at risk of anti-intellectual tendencies and, rather than fight back against this process, we’ve amplified it just to enjoy the illusion of being on the inside, among the “cool kids”, part of The Business. Not only does our lack of will to fight for our own interests leave us at the mercy of more skilled business operators, but it attracts an especially bad kind of them. Most business people, actually, aren’t the sorts of corporate assholes we’re used to seeing run companies. It’s just that our lack of social skill appeals to the worst of that set: people who come in to technology to take advantage of all the clueless, loser nerds who won’t fight for themselves. If we forced ourselves to be more discerning judges of character, and started focusing on ethics and creativity instead of fucking tabs-versus-spaces, we might attract a better sort of business person, and have an industry where stack ranking and bastardized-“Agile” micromanagement aren’t even considered.

If we want to improve our situation, we have to do the “unthinkable” (which is, as I’ve argued, actually quite thinkable). We have to get political.

What’s a mid-career software engineer actually worth? Try $779,000 per year as a lower bound.

Currently, people who either have bad intentions or a lack of knowledge are claiming that software engineer salaries are “ridiculous”. Now, I’ll readily admit that programmers are, relative to the general population, quite well paid. I’m not about to complain about the money I make; I’m doing quite well, in a time and society where many people aren’t. The software industry has many problems, but low pay for engineers (at least, for junior and mid-career engineers; senior engineers are underpaid, but that’s an issue for another time) doesn’t crack the top 5. Software engineers are underpaid relative to the massive amount of value they are capable of delivering (if given proper projects, rather than mismanaged as is typical). In comparison to the rest of the society, they do quite well.

So what should a software engineer be paid? There’s a wide disparity in skill level, so it’s hard to say. I’m going to focus on a competent, mid-career engineer. This is someone with between 5 and 10 years of experience, with continual investment in skill, and probably around 1.6 on this software engineering scale. He’s not a hack or the stereotypical “5:01” programmer who stopped learning new skills at 24, but he’s not a celebrity either. He’s good and persistent and experienced, but probably not an expert. In the late 1990s, that person was just cracking into six-figure territory: $100,000 per year. No one thought that number was “ridiculous”. Adjusted for inflation, that’s $142,300 per year today. That’s probably not far off what an engineer at that level actually makes, at least in New York and the Bay Area.

Software engineers look “ridiculous” to people who haven’t been software engineers in 20 years (or ever) and whose numbers are way out of date. If you’re a Baby Boomer whose last line of code was written in 1985, you probably still think that $60,000 is a princely sum for a programmer to earn. When one factors inflation into the equation, programmer salaries are only “at record highs” because inflation is an exponential process. Taking that out, they’re right about where history says they should be.
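The compounding arithmetic, as a sketch (the average rate below is simply backed out of the $100,000-to-$142,300 figures above; the 3% used for the 1985 example is my round-number assumption, not an official CPI series):

    # Average inflation implied by $100,000 (1999) = $142,300 (2014):
    rate = (142_300 / 100_000) ** (1 / 15) - 1
    print(round(rate, 4))                     # 0.0238: about 2.4% per year
    print(round(100_000 * (1 + rate) ** 15))  # 142300: recovers the figure

    # The same exponential growth is why a 1985 number sounds so small:
    print(round(60_000 * 1.03 ** 29))         # ~141,000 in 2014 dollars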

I would argue, even, that programmer salaries are low when one takes a historical perspective. The trend is flat, adjusting for inflation, but the jobs are worse. Thirty years ago, programming was an R&D job. Programmers had a lot of autonomy: the kind of autonomy that it takes if one is going to invent C or Unix or the Internet or a new neural network architecture. Programmers controlled how they worked and what they worked on, and either answered to other programmers or to well-read scientists, rather than to anti-intellectual businessmen who regard them as cost centers. Historically, companies sincerely committed to their employees’ careers and training. You didn’t have to change jobs every 2 years just to keep getting good projects and stay employable. The nature of the programming job, over the past couple decades, has become more stressful (open-plan offices) and careers have become shorter (ageism). Job volatility has increased (unexpected layoffs and even phony “performance-based” firings in lieu of proper layoffs, to skimp on severance because that’s “the startup way”). Given all the negatives associated with a programming job in 2014 that just didn’t exist in the 1970s and ’80s, flat performance on the salary curve is disappointing. Finally, salaries in the Bay Area and New York have kept abreast of general inflation, but the costs of living have skyrocketed in those “star cities”, while the economies of the still-affordable second-tier cities have declined. In the 1980s and ’90s, there were more locations in which a person could have a proper career, and that kept housing prices down. In 2014, that $142,000 doesn’t even enable one to buy a house in a place where there are jobs.

All of those factors are subjective, however, so I’ll discard them. We have sufficient data to know that $142,000 for a mid-career programmer is not ridiculous. It’s a lower bound for the business value of a software engineer (in 1999); we know that employers did pay that; they might have been willing to pay more. This information already gives us victory over the assclowns claiming that software engineer salaries are “ridiculous” right now.

Now, I’ll take it a step further and introduce Yannis’s Law: programmer productivity doubles every 6 years. Is it true? I would say that the answer is a resounding “yes”. For sure, there are plenty of mediocre programmers writing buggy, slow websites and abusing Javascript in truly awful ways. On the other hand, a good programmer who wants quality has more recourse: rather than committing to commercial software, she can peruse the open-source world. There’s no evidence for a broad-based decline in programmer ability over the years. It’s also easy to claim that the software career “isn’t fun anymore” because so much time is spent gluing existing components together, and accounting for failures of legacy systems; but I don’t think these gripes are new, and I think tools are improving. A 12% per year rate sounds about right. Put another way, one who programs exactly as was done in 1999 is only about 18 percent as productive as one using modern tools. And yet that programmer, only 18% as productive as his counterpart today, was worth $142,000 (2014 dollars) back then!
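Checking the compounding (nothing here but arithmetic on the numbers already stated):

    # Yannis's Law: doubling every 6 years is about 12% per year.
    rate = 2 ** (1 / 6) - 1
    print(round(rate, 3))           # 0.122

    growth = 1.12 ** 15             # 12% per year, 1999 to 2014
    print(round(growth, 2))         # 5.47: about 5.5x the work
    print(round(1 / growth, 2))     # 0.18: the 1999 programmer at ~18%
    print(round(142_300 * growth))  # ~779,000: the figure in this section's title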

Does this mean that we should throw old tools away (and older programmers under the bus)? Absolutely not. On the contrary, it’s the ability to stand on the shoulders of giants that makes us able to grow (as a class) at such a rate. Improved tools and accumulated knowledge deliver exponential value, but there’s a lot of knowledge that is rarely learned except over a decades-long career. Most fresh Stanford PhDs wouldn’t be able to implement a performant, scalable support vector machine from scratch, although they could recite the theory behind one. Your gray-haired badasses would be rusty on the theory but, with a quick refresh, stand a much greater chance of building it right. Moreover, the best old ideas tend to recur, and long-standing familiarity is an advantage. The most exciting new programming language right now is Clojure, a Lisp that runs on the Java Virtual Machine. Lisp, as an idea, is over 50 years old. And Clojure simply couldn’t have been designed by a 25-year-old in Palo Alto. For programmers, the general trend is a 12% increase in productivity per year; but individuals can reliably do 30 percent or more, and sustain it for decades.

If the business value of a mid-level programmer in 1999 was $142,000 in today’s dollars, then one can argue that today, with programmers about 5.5 times as productive, the true value is $779,000 per year at minimum. It might be more. For the highly competent and for more senior programmers, it certainly is higher. And here’s another thing: investors and managers and VPs of marketing didn’t create that surplus. We did. We are more than five times as productive as we were in the 1990s not because they got better at their jobs (they haven’t) but because we built the tools to make ourselves (and our successors) better at what we do. By rights, it’s ours.

Is it reasonable, or realistic, to argue that mid-career software engineers ought to be earning close to a million dollars per year? Probably not. It seems to be inevitable, and also better for society, that productivity gains are shared. We ought to meet in the middle. That we don’t capture all of the value we create is a good thing. It would be awful, for example, if sending an email cost as much as sending a letter by post or, worse yet, as much as using the 19th-century Pony Express, because the producers of progress had captured all of the value for themselves. So, although that $779,000 figure adequately represents the value of a decent mid-career engineer to the business, I wouldn’t go so far as to claim that we “deserve” to be paid that much. Most of us would be ecstatic with real equity (not that 0.05% VC-istan bullshit) and a quarter of that number– and with the autonomy to deliver that kind of value.

What Silicon Valley’s ageism means

Computer programming shouldn’t be ageist. After all, it’s a deep discipline with a lot to learn. Peter Norvig says it takes 10 years, but I’d consider that number a minimum for most people. Ten years of high-quality, dedicated practice to the tune of 5-6 hours per day, 250 days per year, might be enough. For most people, it’s going to take longer, because few people can work only on the interesting problems that constitute dedicated practice. The fundamentals (computer science) alone take a few thousand hours of study, and then there’s the experience of programming itself, which one must do in order to learn how to do it well. Getting code to work is easy. Making it efficient, robust, and legible is hard. Then, there’s a panoply of languages, frameworks, and paradigms to learn and absorb and, for many, to reject. As an obstacle, there’s the day-to-day misery of a typical software day job, in which so much time is wasted on politics and meetings and pointless projects that an average engineer is lucky to have 5 hours per week for learning and growth. Ten years might be the ideal; I’d bet that 20 years is typical for the people who actually become great engineers and, sadly, the vast majority of professional programmers never get anywhere close.
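
To put rough numbers behind that (my own back-of-envelope, using the hours above plus an assumed 48-week working year):

    ideal_hours = 5.5 * 250    # dedicated practice: ~1,375 hours/year
    dayjob_hours = 5 * 48      # 5 h/week of real learning: ~240 hours/year
    target = 10 * ideal_hours  # Norvig's decade, taken literally: 13,750 hours

    print(target)                 # 13750.0 hours of deliberate practice
    print(target / dayjob_hours)  # ~57 years to match that at day-job pace

Most people land somewhere between those two paces, which is why 20 years strikes me as the realistic figure.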

It takes a long time to become actually good at software engineering. The precocious are outliers. More typically, people seem to peak after 40, as in all the other high-skill disciplines. It seems, then, that most of the age-related decline in this field is externally enforced. Age discrimination is not an artifact of declining ability but of changing perceptions.

It doesn’t make sense, but there it is.

Age discrimination has absolutely no place in technology. Yet it exists. After age 40, engineers find it increasingly difficult to get appropriate jobs. Startups are, in theory, supposed to “trade against” the inefficiencies and moral failures of other companies but, on this issue, the venture capital (VC) funded startups are the biggest source of the problem. Youth and inexperience have become virtues, while older people who push back against dysfunction (and, as well, outright exploitation) are written off as “resistant to change”.

There’s another issue that isn’t derived from explicit ageism, but might as well be. Because our colonizers (mainstream business culture) are superficial, they’ve turned programming into a celebrity economy. A programmer has two jobs. In addition to the work itself, which is highly technical and requires continual investment and learning, there’s a full-time reputation-management workload. If a machine learning engineer works at a startup and spends most of his time in operations, he’s at risk of being branded “an ops guy”, and may struggle to get high-quality projects in his specialty from that point on. He hasn’t actually lost anything– in fact, he’s become far more valuable– but the superficial, nontechnical idiots who evaluate us will view him as “rusty” in his specialty and, at the least, exploit his lack of leverage. All because he spent 2 years doing operations, because it needed to be done!

As we get older and more specialized, the employment minefield becomes only more complicated. We are more highly paid at that point, but not by enough of a margin to offset the increasing professional difficulties. Executives cite the complexity of high-end job searches when demanding high salaries and years-long severances. Programmers who are any good face the same, but get none of those protections. I would, in fact, say that any programmer who is at all good needs a private agent, just as actors do. The reputation management component of this career, which is supposed to be about technology and work and making the world better, but is actually about appeasing the nontechnical, drooling patron class, constitutes a full-time job that requires a specialist. Either we need unions, or we need an agent model like Hollywood, or perhaps we need both. That’s another essay, though.

The hypocrisy of the technology employer

Forty years ago, smart people left finance and the mainstream corporate ladder for technology, to move into the emerging R&D-driven guild culture that computing had at the time. Companies like Hewlett-Packard were legitimately progressive in how they treated their talent, and were rewarded for it with their employees’ commitment to making great products. At that time, Silicon Valley represented, for the most technically adept people in the middle class, a genuine middle path. The middle path will require its own essay, but what I’m talking about here is a moderate alternative between the extremes of subordination and revolution. Back then, Silicon Valley offered the middle path that, four decades later, it is eagerly closing off.

Technology is no longer managed by “geeks” who love the work and what it can do, but by the worst kinds of business people who’ve come in to take advantage of said geeks. Upper management in the software industry is, in most cases, far more unethical and brazen than anywhere else. To them, a concentration of talented people who don’t have the inclination or cultural memory that would lead them to fight for themselves (labor unions, agent models, ruthlessness of their own) is an immense resource. Consequently, some of the most disgusting HR practices (e.g. stack ranking, blatant sexism) can be found in the technology industry.

There’s one really bad and technical trait of software employers that, I think, has damaged the industry immensely. Technology employers demand specialties when vetting people for jobs. General intelligence and proven ability to code aren’t enough; one has to have “production experience” in a wide array of technologies invented in the past five years. For all their faults, the previous regime of long-lasting corporations was not so bigoted when it came to past experience, trusting people to learn on the job, as needed. The new regime has no time for training or long-term investment, because all of these companies have been built to be flipped to a greater fool. In spite of their bigoted insistence on pre-existing specialties in hiring, they refuse to respect specialties once people are hired. Individual programmers who attempt to protect their specialties (and, thus, their careers) by refusing assignment to out-of-band or inferior grunt work are quickly fired. This is fundamentally hypocritical. In hiring, software companies refuse to look twice at someone without a yellow brick road of in-specialty accomplishments of increasing scope; yet, once employees are inside and fairly captive (due to the pernicious stigma against changing jobs quickly, even with good reason) they will gladly disregard that specialty, for any reason or no reason. Usually, this is framed as a business need (“we need you to work on this”) but it’s, more often, political and sometimes personal. Moving talent out of its specialty is a great way for insecure middle managers to neutralize overperformance threats. In a way, employers are like the pervert who chases far-too-young sexual partners (if “partner” is the right word here) for their innocence, simply to experience the thrill of destroying it. They want people who are unspoiled by the mediocrity and negativity of the corporate world, because they want to inflict the spoilage. The virginity of a fresh, not-yet-cynical graduate from a prestigious university is something they want all for themselves.

The depression factor

I’m not going to get personal here, but I’m bipolar so when I use words like “depression” and “hypomania” and “anxiety” I do, in fact, know what the fuck I am talking about.

A side effect of corporate capitalism, as I see it, is that it has created a silent epidemic of middle-aged depression. The going assumption in technology that mental ability declines after age 25 is not well supported, and it is in fact contrary to what most cultures believe about intelligence and age. (In truth, various aspects of cognition peak at different ages– from language acquisition at age 5 to writing ability around 65– and there’s also so much individual variation that there’s no clear “peak” age.) For general, holistic intelligence, there’s no evidence of an age-bound peak in healthy people. While this risks sounding like a “No True Scotsman” claim, what I mean to say is that every meaningful age-related decline in cognition can be traced to a physical health problem rather than to aging itself. Cardiovascular problems, physical pain, and side effects of medication can all impair cognition. I’m going to talk about the Big One, though, and that’s depression. Depression can cause cognitive decline. Most of that loss is reversible, but only if the person recovers from the depression and, in many cases, they never do.

In this case, I’m not talking about severe depression, the kind that would have a person considering electroconvulsive therapy or on suicide watch. I’m talking about mild depression that, depending on time of diagnosis, might be considered subclinical. People experiencing it in middle age are, one presumes, liable to attribute it to getting older rather than to a real health problem. Given that middle-aged “invisibility” in youth-obsessed careers is, in truth, unjust and depressing, it seems likely that more than a few people would experience depression and fail to perceive it as a health issue. That’s one danger of depression that those who’ve never experienced it might not realize exists: when you’re depressed, you suffer from the delusion that (a) you’ve always been depressed, and (b) no other outlook or mood makes sense. Depression is delusional, and it is a genuine illness, but it’s also dangerously self-consistent.

Contrary to the stereotype, people with depression aren’t always unhappy. In fact, people with mild depression can be happy quite often. It’s just easier to make them unhappy. Things that don’t faze normal people, like traffic jams or gloomy weather or long queues at the grocery store, are more likely to bother them. For some, there’s a constant but low-grade gloom and a tendency to avoid making decisions. Others might experience 23 hours and 50 minutes per day of normal mood and 10 minutes of intense, debilitating sadness: the kind that would force them to pull over to the side of the road and cry. There isn’t a template and, just as a variety of disparate diseases (some viral, some bacterial, and some behavioral) were once called “fevers”, I feel that “depression” is a cluster of about 20 different diseases that we just don’t have the tools to separate. Some depressions come without external cause. Others are clearly induced by environmental stresses. Some depressions impair cognition and probably constitute a (temporary) 30-IQ-point loss. Others (more commonly seen in artists than in technology workers) seem to induce no intellectual impairment at all; the person is miserable, but as sharp as ever.

Corporate workers do become less sharp, on average, with age. You don’t see that effect, at least not an involuntary one, in most intellectually intense fields. A 45-year-old artist or author or chess master has his best work ahead of him. True entrepreneurs (not dipshits who raise VC based on connections) also seem to peak in their 50s and, for some, even later. Most leaders hit their prime around 60. However, it’s observable that something happens in Corporate America that makes people more bitter, more passive, and slower to act over time, and that it starts around 40. Perhaps it’s an inverse of survivor bias, with the more talented people escaping the corporate racket (by becoming consultants, or entrepreneurs) before middle age. I don’t think so, though. There are plenty of highly talented people in their forties and fifties who’ve been in private-sector programming for a long time and just seem out of gas. I don’t blame them for this. With better jobs, I think they’d recover their power surprisingly quickly. I think they have a situationally-induced case of mild depression that, while it may not be the life-threatening illness we tend to associate with major depression, takes the edge off their abilities. It doesn’t make them unemployable. It makes them slow and bitter but, unlike aging itself, it’s very easily reversible: change the context.

Most of these slowed-down, middle-aged, private-sector programmers wouldn’t qualify for major depressive disorder. They’re not suicidal, don’t have debilitating panic attacks, and can attribute their losses of ability (however incorrectly) to age. Rather, I think that most of them are mildly but chronically depressed. To an individual, this is more of a deflation than a disability; to society, the costs are enormous, just because such a large number of people are affected, and because it disproportionately affects the most experienced people at a time when, in a healthier economic environment, they’d be in their prime.

The tournament of idiots

No one comes out of university wanting to be a private-sector social climber. There’s no “Office Politics” major. People see themselves as poets, economists, mathematicians, or entrepreneurs. They want to make, build, and do things. To their chagrin, most college graduates find that between them and any real work, there’s at least a decade of political positioning, jockeying for permissions and status, and associated nonsense that’s necessary if one intends to navigate the artificial scarcity of the corporate world.

The truth is that most of the nation’s most prized and powerful institutions (private-sector companies) have lost all purpose for existing. Ideals and missions are for slogans, but the organization’s true purpose is to line the pockets of those ranking high within it. There’s also no role or use for real leadership. Corporate executives are the farthest one gets from true leaders. Most are entrenched rent-seekers. With extreme economic inequality and a culture that worships consumption, it should surprise no one that our “leadership” class is a set of self-dealing parasites at a level that hasn’t been seen in an advanced economy since pre-Revolution France.

Leadership and talent have nothing to do with getting to the top. It’s the same game of backstabbing and political positioning that has been played in kings’ courts for millennia. The difference, in the modern corporation, is that there’s a pretense of meritocracy. People, at least, have to pretend to be working and leading to advance further. The work that is most congruent with social advancement, however, isn’t the creative work that begets innovation. Instead, it’s success in superficial reliability. Before you get permission to be creative, you have to show that you can suffer, and once you’ve won the suffering contest, it’s neither necessary nor worth it to take creative risks. Companies, therefore, pick leaders by loading people with unnecessary busywork that often won’t go anywhere, and putting intense but ultimately counterproductive demands on them. They generate superficial reliability contests. One person will outlast the others, who’ll fall along the way due to unexpected health problems, family emergencies, and other varieties of attrition (for the lucky few, getting better jobs elsewhere). One of the more common failure modes by which people lose this tournament of idiots is mild depression: not enough to have them hospitalized, but enough to pull them out of contention.

The corporate worker’s depression, especially in midlife, isn’t an unexpected side effect of economic growth or displacement or some other agent that might allow Silicon Valley’s leadership to sweep it under the rug of “unintended consequences”. Rather, it’s a primary landscape feature of the senseless competition that organizations create for “leadership” (rent-seeking) positions when they’ve run out of reasons to exist. At that level of decay, there is no meaningful definition of “merit” because the organization itself has turned pointless, and the only sensible way to allocate highly-paid positions is to create a tournament of idiots, in which people psychologically abuse each other (often subtly, in the form of microaggressions) until only a few remain healthy enough to function.

Here we arrive at a word I’ve come to dread: corporate culture. Every corporation has a culture, and 99% of those are utterly abortive. Generally, the more that a company’s true believers talk about “our culture”, the more fucked-up the place actually is. See, culture is often ugly. Foot binding, infant genital mutilation (“female circumcision”), war, and animal torture are all aspects of human culture. Previous societies used supernatural appeal to defend inhumane practices, but modern corporations use “our culture” itself as a god. “Culture fit” is often cited to justify the otherwise inconsistent and, sometimes, unjustifiable. Why wasn’t the 55-year-old woman, a better coder than anyone else on the team, hired? “It wouldn’t look right.” Can’t say that! “A seasoned coder who isn’t rich would shatter the illusion that everyone good gets rich.” Less illegal, but far too honest. “She didn’t fit with the culture.” Bingo! Culture can always be used in this way, by an organization, because it’s a black box of blame, diffusing moral culpability across the group. Blaming an adverse decision on “the team” or “the culture” avoids individual risk for the blamer, and the culture itself can never be attacked as bad. Most people in most organizations actually know that the “leadership team” (career office politicians, also known as executives) of their firm is toxic and incompetent. When the executives aren’t around, they are attacked. But it’s rare that anyone ever attacks the culture, because “the culture” is everyone. To indict it is to insult the people. In this way, “the culture” is like an unassailable god.

Full circle

We’ve traveled through some dark territory. The tournament of idiots that organizations construct to select leadership roles, once they’ve ceased to have a real purpose, causes depression. The ubiquity of such office cultures has created, I argue, a silent epidemic of mild, midlife depression that has led venture capitalists (situated at its periphery, their wealth buying them some exit from the vortex of mediocrity in which they must still work, but do not live) and privileged young pseudo-entrepreneurs (terrified of what awaits them when their family connections cool and they must actually work) to conclude that a general cognitive mediocrity awaits in midlife, even though there is no evidence to support this belief, and plenty of evidence from outside of corporate purgatory to contradict it.

What does all of this say? Personally, I think that, to the extent that large groups of individuals and organizations can collectively “know” things, the contemporary corporate world devalues experience because it knows that the experience it provides is of low value. It refuses to eat its own dogfood, knowing that it’s poisoned.

For example, software is sufficiently technical and complex that great engineers are invariably experienced ones. The reverse isn’t true. Much corporate experience is of negative value, at least if one includes the emotional side effects that can lead to depression. Median-case private-sector technology work isn’t sufficiently valuable to overcome the disadvantages associated with age, which is another way of saying that the labor market considers average-case corporate experience to have negative value. I’m not sure that I disagree. Do I think it’s right to write people off because of their age? Absolutely not. Do I agree with the labor market’s assessment that most corporate work rots the brain? Well, the answer is mostly “yes”. The corporate world turns smart people, over the years, into stupid ones. If I’m right that the cause of this is midlife depression, there’s good news. Much of that “brain rot” is contextual and reversible.

How do we fix this?

Biologists and gerontologists seeking insight into longevity have studied the genetics and diet of long-living groups of people, such as the Sardinians and the people of Okinawa. Luckily for us, midlife cognitive decline isn’t a landscape feature of most technical or creative fields. (In fact, it’s probably not present in ours; it’s just perceived that way.) There are plenty of places to look for higher cognitive longevity, because few industries are as toxic as the contemporary software industry. When there is an R&D flavor to the work, and when people have basic autonomy, they tend to peak around 50, and sometimes later. Of course, there’s a lot of individual variation, and some people voluntarily slow down before that age, in order to attend to health, family, spiritual, or personal concerns. The key word there is voluntarily.

Modeling and professional athletics (in which there are physical reasons for decline) aside, a career in which people tend to peak early, or have an early peak forced upon them, is likely to be a toxic one that enervates them. Silicon Valley’s being a young man’s game (and the current incarnation of it, focused on VC-funded light tech, is exactly that) simply indicates that it’s so destructive to the players that only the hard-core psychopaths can survive in it for more than 10 years. It’s not a healthy place to spend a long period of time and develop an expertise. As discussed above, it will happily consume expertise but produces almost none of value (hence, its attraction to those with pre-existing specialties, despite failing to respect specialties once the employee is captive). This means, as we already see, that technical excellence will fall by the wayside, and positive-sum technological ambition will give way to the zero-sum personal ambitions of the major players.

We can’t fix the current system, in which the leading venture capitalists are striving for “exits” (greater fools). That economy has evolved from being a technology industry with some need for marketing into a marketing industry with some need (and that bit declining) for technology. We can’t bring it back, because its entrenched players are too comfortable with it being the way it is. While the current approach provides mediocre returns on investment, hence the underperformance of the VC asset class, the king-making and career-altering power that it affords the venture capitalist allows him to capture all kinds of benefits on the side, ranging from financial upside (“2 and 20”) to executive positions for talentless drinking buddies to, during brief bubbly episodes such as this one, “coolness”. They like it the way it is, and it won’t change. Rather than being incrementally fixed, the current VC-funded Valley must be replaced outright.

The first step, one might say, is to revive R&D within technology companies. That’s a step in the right direction, but it doesn’t go far enough. Technology should be R&D, full stop. To get there, we need to assert ourselves. Rather than answering to non-technical businessmen, we need to learn how to manage our own affairs. We need to begin valuing the experience on which most R&D progress actually relies, rather than shutting the seasoned and cynical out. And, as a first step in this direction, we need to stop selling each other out to nontechnical management for stupid reasons, including but especially “culture fit”.

VC-istan 8: the Damaso Effect

Padre Damaso, one of the villains of the Filipino national novel Noli me Tangere, is among the most detestable characters in literature: a symbol of both colonial arrogance and severe theological incompetence. One of the novel’s observations about colonialism is that it’s worsened by the specific types of people who implement colonial rule: those who failed in their mother country, and who are taking part in a dangerous, isolating, and morally questionable project that is their last hope of acquiring authority. Colonizers tend to be people who have no justification for superior social status left but their national identity. One of the great and probably intractable tensions within the colonization process is that it forces the best (along with the rest) of the conquered society to subordinate itself to the worst of the conquering society. The total incompetence of the corrupt Spanish friars in Noli is just one example of this.

In 2014, the private-sector technology world is in a state of crisis, and it’s easy to see why. For all our purported progressivism and meritocracy, the reality of our industry is that it’s sliding backward into feudalism. Age discrimination, sexism, and classism are returning, undermining our claims of being a merit-based economy. Thanks to the clubby, collusive nature of venture capital, securing financing for a new technology business requires tapping into a feudal reputation economy that funds people like Lucas Duplan, while almost no one backs anything truly ambitious. Finally, there’s the pernicious resurgence of location (thanks to VCs’ lack of interest in funding anything more than 30 miles away from them) as a career-dominating factor, driving housing prices in the few still-viable metropolitan areas into the stratosphere. In so many ways, American society is going back in time, and private-sector technology is a driving force rather than a counterweight. What the fuck, pray tell, is going on? And how does this relate to the Damaso Effect?

Lawyers and doctors did something, purely out of self-interest, to prevent their work from being commoditized as American culture became increasingly commercial in the late 19th century. They professionalized. They invented ethical rules and processes that allowed them to work for businessmen (and the public) without subordinating. How this all works is covered in another essay, but it served a few purposes. First, the profession could maintain standards of education so as to keep membership in the profession as a form of credibility that is independent of managerial or client review. Second, by ensuring a basic credibility (and, much more important, employability) for good-faith members, it enabled professionals to meet ethical obligations (i.e. don’t kill patients) that supersede managerial or corporate authority. Third, it ensured some control over wages, although that was not its entire goal. In fact, the difference between unionization and professionalization seems to be as follows. Unions are employed when the labor is a commodity, and they ensure that the commoditization happens in a fair way (without collective bargaining, and in the absence of a society-wide basic income, fair commoditization never occurs). Unions accept that the labor is a commodity, but demand a fair rate of exchange. Professionalization exists when there is some prevailing reason (usually an ethical one, such as in medicine) to prevent full commoditization. If it seems like I’m whitewashing history here, let me point out that the American Medical Association, to name one example, has done some atrocious things in its history. It originally opposed universal healthcare; it has received some karma, insofar as the inventively mean-spirited U.S. health insurance system has not only commoditized medical services, but done so on terms that are unfavorable to physician and patient both. I don’t mean to say that the professions have always been on the right side of history, because that’s clearly not the case; professionalization is a good idea, often poorly realized.

The ideal behind professionalization is to separate two senses of what it means to “work for” someone: (1) to provide services, versus (2) to subordinate fully. Its goal is to allow a set of highly intelligent, skilled people to deliver services on a fair market without having to subordinate inappropriately (such as providing personal services unrelated to the work, because of the power relationship that exists) as is the norm in mainstream business culture.

As a tribe, software professionals failed in this. We did not professionalize, nor did we unionize. In the Silicon Valley of the 1960s and ’70s, it was probably impossible to see the need for doing so: technologists were fully off the radar of the mainstream business culture, mostly lived on cheap land no one cared about, and had the autonomy to manage themselves and answer to their own. Hewlett-Packard, back in its heyday, was run by engineers, and for the benefit of engineers. Over time, that changed in the Valley. Technologists and mainstream, corporate businessmen were forced to come together. It became a colonial relationship quickly; the technologists, by failing to fight for themselves and their independence, became the conquered tribe.

Now it’s 2014, and the common sentiment is that software engineers are overpaid, entitled crybabies. I demolished this perception here. Mostly, that “software engineers are overpaid” whining is propaganda from those who pay software engineers, and who have a vested interest. It has been joined lately by leftist agitators, angry at the harmful effects of technology wealth in the Bay Area, who have failed thus far to grasp that the housing problem has more to do with $3-million-per-year, 11-to-3 product executives (and their trophy spouses who have nothing to do but fight for the NIMBY regulations that keep housing overpriced) than $120,000-per-year software engineers. There are good software jobs out there (I have one, for now) but, if anything, relative to the negatives of the software industry in general (low autonomy relative to intellectual ability, frequent job changes necessitated by low concern of employers for employee career needs, bad management) the vast majority of software engineers are underpaid. Unless they move into management, their incomes plateau at a level far below the cost of a house in the Bay Area. The truth is that almost none of the economic value created in the recent technology bubble has gone to software engineers or lifelong technologists. Almost all has gone to investors, well-connected do-nothings able to win sinecures from reputable investors and “advisors”, and management. This should surprise no one. Technology professionals and software engineers are, in general, a conquered tribe and the great social resource that is their brains is being mined for someone else’s benefit.

Here’s the Damaso Effect. Where do those Silicon Valley elites come from? I nailed this in this Quora answer. They come from the colonizing power, which is the mainstream business culture. This is the society that favors pedigree over (dangerous, subversive) creativity and true intellect, the one whose narcissism brought back age discrimination and makes sexism so hard to kick, even in software, which should, by rights, be a meritocracy. That mainstream business world is the one where Work isn’t about building things or adding value to the world, but is purely an avenue through which to dominate others. Ok, now I’ll admit that that’s an uncharitable depiction. In fact, corporate capitalism and its massive companies have solved quite a few problems well. And Wall Street, the capital of that world, is morally quite a bit better than its (execrable) reputation might suggest. It may seem very un-me-like to say this, but there are a lot of intelligent, forward-thinking, very good people in the mainstream business culture (“MBA culture”). However, those are not the ones who get sent to Silicon Valley by our colonial masters. The failures are the ones sent into VC firms and TechCrunch-approved startups to manage nerds. Not only are they the ones who failed out of the MBA culture, but they’re bitter as hell about it, too. MBA school told them that they’d be working on $50-billion private-equity deals and buying Manhattan penthouses, and they’re stuck bossing nerds around in Mountain View. They’re pissed.

Let me bring Zed Shaw in on this. His essay on NYC’s startup scene (and the inability thereof to get off the ground) is brilliant and should be read in full (seriously, go read it and come back to me when you’re done) but the basic point is that, compared to the sums of money that real financiers encounter, startups are puny and meaningless. A couple quotes I’ll pull in:

During the course of our meetings I asked him how much his “small” hedge fund was worth.

He told me:

30 BILLION DOLLARS

That’s right. His little hedge fund was worth more money than thousands of Silicon Valley startups combined on a good day. (Emphasis mine.) He wasn’t being modest either. It was “only” worth 30 billion dollars.

Zed has a strong point. The startup scene has the feeling of academic politics: vicious intrigue, because the stakes are so small. The complete lack of ethics seen in current-day technology executives is also a result of this. It’s the False Poverty Effect. When people feel poor, despite objective privilege and power, they’re more inclined to do unethical things because, goddammit, life owes them a break. That startup CEO whose investor buddies allowed him to pay himself $200,000 per year is probably the poorest person in his Harvard Business School class, and feels deeply inferior to the hedge-fund guys and MD-level bankers he drank with in MBA school.

This also gets into why hedge funds get better people (even, in NYC, for pure programming roles) than technology startups. Venture capitalists give you $5 million and manage you; they pay to manage. Hedge fund investors pay you to manage (their money). As long as you’re delivering returns, they stay out of your hair. It seems obvious that this would push the best business people into high finance, not VC-funded technology.

The lack of high-quality businessmen in the VC-funded tech scene hurts all of us. For all my railing against that ecosystem, I’d consider doing a technology startup (as a founder) if I could find a business co-founder who was genuinely at my level. For founders, it’s got to be code (tech co-founder) or contacts (business co-founder), and I bring the code. At my current age and level of development, I’m a Tech 8. A typical graduate from Harvard Business School might be a Biz 5. (I’m a harsh grader; that’s why I gave myself an 8.) Biz 6 means that a person comes with connections to partners at top VC firms and resources (namely, funding) in hand. The Biz 7’s go skiing at Tahoe with the top kingmakers in the Valley, and count a billionaire or two in their social circle. If I were to take a business co-founder (noting that he’d become CEO and my boss) I’d be inclined to hold out for an 8 or 9 but, at least in New York, I never seemed to meet Biz 8’s or 9’s in VC-funded technology, and I think I’ve got a grasp on why. Business 8’s just aren’t interested in asking some 33-year-old California man-child for a piddling few million bucks (that comes along with nasty strings, like counterproductive upper management). They have better options. To the Business 8+ out there, whatever the VCs are doing in Silicon Valley is a miserable sideshow.

It’s actually weird and jarring to see how bad the “dating scene”, in the startup world, is between technical and business people. Lifelong technologists, who are deeply passionate about building great technology, don’t have many other places to go. So a lot of the Tech 9’s and 10’s stick around, while their business counterparts leave, and a Biz 7 is the darling at the ball. I’m not a fan of Peter Shih, but I must thank him for giving us the term “49ers” (4’s who act like 9’s). The “soft” side– the business world of investors and well-connected people who think their modest connections deserve to trade at an exorbitant price against your talent– is full of 49ers, because Business 9’s know to go nowhere near the piddling stakes of the VC-funded world. Like a Midwestern town bussing its criminal element to San Francisco (yes, that actually happened), the mainstream business culture sends its worst and its failures into VC-funded tech. Have an MBA, but not smart enough for statistical arbitrage? Your lack of mathematical intelligence means you must have “soft skills” and be a whiz at evaluating companies; Sand Hill Road is hiring!

The venture-funded startup world, then, has the best of one world (passionate lifelong technologists) answering to the people who failed out of their mother country: mainstream corporate culture.

The question is: what should be done about this? Is there a solution? Since the Tech 8’s and 9’s and 10’s can’t find appropriate matches in the VC-funded world (and, for their part, most Tech 8+ go into hedge funds or large companies– not bad places, but far away from new-business formation– by their mid-30s) where ought they to go? Is there a more natural home for Tech 8+? What might it look like? The answer is surprising, but it’s the mid-risk / mid-growth business that venture capitalists have been decrying for years as “lifestyle businesses”. The natural home of the top-tier technologist is not in the flash-in-the-pan world of VC, but the get-rich-slowly world of steady, 20 to 40 percent per year growth due to technical enhancement (not rapid personnel growth and creepy publicity plays, as the VCs prefer).

Is there a way to reliably institutionalize that mid-risk / mid-growth space, which currently must resort to personal savings (“bootstrapping”)– a scarce resource, given that engineers are systematically underpaid– just as venture capital has institutionalized the high-risk / get-big-or-die region of the risk/growth spectrum? Can it be done with a K-strategic emphasis that forges high-quality businesses in addition to high-value ones? Well, the answer to that one is: I’m not sure. I think so. It’s certainly worth trying out. Doing so would be good for technology, good for the world, and quite possibly very lucrative. The real birth of the future is going to come from a fleet of a few thousand highly autonomous “lifestyle” businesses– and not from VC-managed get-huge-or-die gambits.

Was 2013 a “lost year” for technology? Not necessarily.

The verdict seems to be in. According to the press, 2013 was just a god-awful, embarrassing, downright shameful year for the technology industry, and especially Silicon Valley.

Christopher Mims voices the prevailing sentiment here:

All in, 2013 was an embarrassment for the entire tech industry and the engine that powers it—Silicon Valley. Innovation was replaced by financial engineering, mergers and acquisitions, and evasion of regulations. Not a single breakthrough product was unveiled—and for reasons outlined below, Google Glass doesn’t count.

He continues to point out the poor performance of high-profile product launches, the abysmal behavior of the industry’s “ruling class”– venture capitalists and leading executives– and the fallout from revelations like the NSA’s Prism program. Yes, 2013 brought forth a general miasma of bad faith, shitty ideas, and creepy, neoreactionary bubble zeitgeists: Uber’s exploitative airline-style pricing and BitTulip mania are just two prominent examples.

He didn’t cover everything; presumably for space, he gave no coverage to Sean Parker’s environmental catastrophe of a wedding (and the 10,000-word rant he penned while off his meds) and its continuing environmental effects. Nor did he cover the growing social unrest in California, culminating in the blockades against “Google buses”. Nor did he mention the rash of unqualified founders and mediocre companies like Summly, Snapchat, Knewton, and Clinkle and all the bizarre work (behind the scenes, by the increasingly country-club-like cadre of leading VCs) that went into engineering successes for these otherwise nonviable firms. In Mims’s tear-down of technology for its sins, he didn’t even scratch the surface, and even with the slight coverage given, 2013 in tech looks terrible. 

So, was 2013 just a toilet of a year, utterly devoid of value? Should we be ashamed to have lived through it?

No. Because technology doesn’t fucking work that way. Even when the news is full of pissers, there’s great work being done, much of which won’t come to fruition until 2014, 2015, or even 2030. Technology, done right, is about the long game and getting rich– no, making everyone rich– slowly. (Making everyone rich is, of course, not something that will be achieved in one quarter or even one decade.) “Viral” marketing and “hockey stick” obsessions are embarrassments to us. We don’t have the interest in engineering that sort of thing, and don’t believe we have the talent to reliably make it happen– because we’re pretty sure that no one does. But we’re very good, in technology, at making things 10, or 20, or 50 percent more efficient year-on-year. Those small gains and occasional big wins amount, in the aggregate, to world economic growth at a 5% annual rate– nearly the highest that it has ever achieved.

Sure, tech’s big stories of 2013 were mostly bad. Wearing Google Glass, it turns out, makes a person look like a gigantic fucking douchebag. I don’t think that such a fact damns an entire year, though. Isn’t technology supposed to embrace risk and failure? Good-faith failure is a sign of a good thing– experimentation. (I’m still disgusted by all the bad-faith failure out there, but that should surprise no one.) The good-faith failures that occur are signs of a process that works. What about the bad-faith ones? Let’s just hope they will inspire people to fix a few problems, or just one big problem: our leadership.

On that, late 2013 seems to have been the critical point at which we, as a technical community, lost faith in the leaders of our ecosystem: the venture capitalists and corporate executives who’ve claimed for decades to be an antidote to the centuries-old tension between capitalist and laborer, and who’ve proven no better (and, in so many ways, worse) than the old-style industrialists of yore. Silicon Valley exceptionalism is disappearing as an intellectually defensible position. The Silicon Valley secessionists and Sand Hill Road neofeudalists no longer look like visionaries to us; they look like sad, out-of-touch, privileged men abusing a temporary relevance, and losing it quickly through horrendous public behavior. The sad truth about this is that it will hurt the rest of us– those who are still coming up in technology– far more than it hurts them.

This loss of faith in our gods is, however, a good thing in the long run. Individually, none of us among the top 50,000 or so technologists in the U.S. has substantial power. If one of us objects to the state of things, there are 49,999 others who can replace us. As a group, though, we set the patterns. Who made Silicon Valley? We did, forty years ago, when it was a place where no one else wanted to live. We make and, when we lose faith, we unmake.

Progress is incremental and often silent. The people who do most of the work do least of the shouting. The celebrity culture that grows up around “tech” whenever there is a bubble has, in truth, little to do with whether our society can meet the technology challenges that the 2010s, ’20s, and onward will throw at us.

None of this nonsense will matter, ten years from now. Evan Spiegel, Sean Parker, Greg Gopman, and Adria Richards are names we will not have cause to remember by December 26, 2023. The current crop of VC-funded cool kids will be a bunch of sad, middle-aged wankers drinking to remember their short bursts of relevance. But the people who’ve spent the ten years between now and then continually building will, most likely, be better off then than now. Incremental progress. Hard work. True experimentation and innovation. That’s how technology is supposed to work. Very little of this will be covered by whatever succeeds TechCrunch.

Everything that happened in technology in 2013, and much of it was distasteful, is just part of a longer-running process. It was not a wasted year. It was a hard one for morale, but sometimes a morale problem is necessary to make things better. Perhaps we will wake up to the uselessness of the corporatists who have declared themselves our leaders, and do something about that problem.

So, I say, as always: long live technology.

VC-istan 4: Silicon Valley’s Tea Party

The SATs might have left people sour on analogies, but here’s one to memorize: VC-funded technology is to Corporate America as the Tea Party is to the Republican Party. I cannot think of a more perfect analogy for the relationship between this Sand Hill-funded startup bubble and the “good ol’ boy” corporate network it falsely claims to be displacing. It is, like the Tea Party, a more virulent resurgence of what it claims to be a reaction against.

What was the Tea Party?

Before analyzing VC-istan, let’s talk about the Tea Party. The contemporary American right wing has an intrinsic problem. First of all, it’s embarrassed by its internal contradictions, insofar as it fails to implement its claimed fiscal conservatism, instead getting us more indebted through wars fought to serve corporate interests. More to the point, it’s trying to get people to vote for things that are economically harmful to them– and it’s surprisingly good at that, but that requires keeping people misled about what they are actually supporting, which in turn mandates constant self-reinvention. For this reason, the Republican Party has a well-established pattern of generating a “revolution” every decade and a half or so.

First, there was the “Reagan Revolution” of 1980. Then there was the “Contract with America” midterm election of 1994– the Republican landslide that brought us our first severe government shutdown. Around 2009, the modern Tea Party was born– and that brought us a second severe government shutdown. At first, the Tea Party appeared to be deeply libertarian, presented as a populist tax revolt without the overt corporate or religious affiliations of the Republican Party. It seems ridiculous in retrospect, but there were left-wing Tea Partiers at the beginning of the movement (there aren’t anymore). In time, the Tea Party was steered directly into the Republican tent, fueling the party’s electoral success in 2010. That’s miraculous, when one considers the gigantic image problem that the Bush Era created for the party. In 2008, some commentators believed the Republicans were finished for good, about to go the way of the Whigs; two years later, the party had been reinvigorated by a populist movement that, at its inception, seemed radically different from the fiscally irresponsible GOP.

By promising a reduction in taxes and social complexity, the Tea Party managed to remove Bush and Cheney– old-style authoritarian stooges, big-government war hawks, and objective failures even before the end of their term– from the conversation in record time. Of course, time proved the Tea Partiers to be “useful idiots” for a more typical Republican resurgence– a reinvention of image, not of substance– and the most astute observers were not surprised. When the reputations of established players become sufficiently negative, reinvention (and “disruption”) becomes the marketers’ project of the hour.

Venture capital

Corporate America, too, has a severe image problem. The most talented people don’t want to work in stereotypical corporate environments. They want to be in academia, hedge funds, R&D labs, and cutting-edge startups– not dealing with the stingy pay, political intrigue, slow advancement, low autonomy, and archaic permissions systems that are stereotypical of large institutions. Of course, companies that need top talent can get it, but they must either (a) pay extremely well, (b) offer levels of autonomy that can complicate internal politics, or (c) market themselves extremely well.

Wall Street can simply buy its way out of the corporate image problem. However, this typically means that employee pay must go up by 20 to 30 percent per year, in order to keep abreast of the rapid hedonic adaptation that money exhibits. Few companies can afford to compensate that generously, especially putting that exponential growth in the context of a 20- to 40-year career. Venture capital’s ecosystem is an alternative solution to that image problem: a corporate system that appears to be maverick, anti-authoritarian, and “disruptive”, when it is actually dishonest and muddled. The people who would have been middling project managers in the old system are given the title of CEO/”founder” in one of VC-istan’s disposable companies. Instead of a team getting cut (and its staff reassigned) as would occur in a larger corporate machine, the supposedly independent company (of course, it is not truly independent, in the context of the feudal reputation economy that the VCs have created) is killed and everyone gets fired. This might seem like a worse and more dishonest corporate system, but it gives the impression of providing more individual prominence to highly talented (if somewhat clueless) people.

Not much of substance has improved in the transition from the older corporate system to the VC-funded world, and I think some things have actually been lost, particularly with regard to fairness. Bureaucracies can be dismal and ineffective, but those that work well are efficient and, most importantly, fair. In fact, attempts to achieve fairness (the definition of which seems, inexorably, to accrue complexity) seem to be a driving force behind bureaucratic growth. Obviously, bureaucracy is sometimes used toward unfair ends, or even designed maliciously (for example, over-restrictive policies are often built with the intentional purpose of making those with the power to grant exceptions more powerful) but I would say that those negatives are not supposed to emerge from bureaucracy, and are probably not characteristic of it in general. Bureaucracy is mostly boring, mostly effective, and only maligned because it’s infuriating when it fails (which is, often, the only time most people notice it; bureaucracy that works goes unnamed). Without bureaucracy, however, social processes often devolve into a state where favor trading, influence peddling, and social connections– with those accrued early on (such as in school), and therefore most tied to socioeconomic status rather than merit, being the most powerful– dominate.

VC-istan has reduced corporate bureaucracy (because companies are killed or sold before they can accrue that kind of complexity) but done away with the concern for fairness. It claims to be a meritocracy, and only accepts those who refuse to see (much less speak of) the machinations of power. Those who complain too loudly about VC collusion are ostracized. For just one petty example of the VC-funded world’s cavalier attitude toward injustice, people who voice the “wrong” opinions on Hacker News are silenced, “slowbanned”, or even “hellbanned”. Injustice, accepted for the sake of efficiency, is tolerated as accidental noise that’s expected to average out over time, as the error in a run of independent coin flips smooths out as more are made. The problem with social processes is that the errors (injustices that one hopes will be transient) don’t cancel each other out; they have a long-term autocorrelation.
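
A toy simulation makes the difference concrete (this is my own illustration, using a Pólya urn as the crudest possible model of a favor economy, not anything derived from VC-istan data). Independent coin flips average out; a rich-get-richer process does not– its long-run fraction locks in whatever the early draws happened to be:

    import random

    random.seed(0)  # reproducibility

    def iid_fraction(n=10000):
        """Fraction of heads in n independent fair flips; errors cancel."""
        return sum(random.random() < 0.5 for _ in range(n)) / n

    def polya_fraction(n=10000):
        """Polya urn: draw a ball, return it with another of the same
        color. Early luck is reinforced instead of cancelled."""
        favored, total = 1, 2
        for _ in range(n):
            if random.random() < favored / total:
                favored += 1
            total += 1
        return favored / total

    print([round(iid_fraction(), 2) for _ in range(5)])    # all near 0.50
    print([round(polya_fraction(), 2) for _ in range(5)])  # scattered, for good

The flips’ errors are independent, so they wash out; the urn’s “errors” feed back into the next draw, which is exactly the long-term autocorrelation that keeps social injustices from averaging away.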

In truth, what does a self-assertion of meritocracy mean? It means that one is not even going to strive for additional fairness, under the belief that the balance between fairness and other constraints has already been achieved. Of course, anyone who’s paying attention knows that not to be true.

Am I proposing that more bureaucracy is the solution to VC-istan’s moral failings? No. I’m only arguing that VC-istan’s selling point of “leanness” often comes at a cost: a sloppier and more unfair ecosystem. The old corporate ladder, with less of the ageism and less of the emphasis on native social class and educational “pedigree”, was actually much fairer than VC-istan’s sloppily-built, poorly-thought-out replacement.

More virulent

The Tea Party turned out to be a more brazen and generally worse Republican Party than the one it supplanted. I’m not a fan of Bushite corporate stooges, but they would not have seriously considered the threat of fucking national default to be a valid negotiation tactic.  

Likewise, the VC-funded ecosystem is generally worse than the older and more stable corporate system that it is attempting to replace. To list some of the reasons why it is worse:

  • less intra-corporate mobility, since most VC-funded startups are built around a single project. As VC-funded companies become large enough that internal mobility would be viable, many develop mean-spirited stack-ranking cultures that keep internal mobility low or nonexistent.
  • the old corporate world’s large, announced layoffs, often with severance, have been replaced by dishonest “performance”-based firings designed to protect the company’s reputation (it may claim it is still hiring, and thus prosperous) at the expense of the departing employee’s.
  • increased social distance– investors vs. founders vs. employees is a much larger (and more permanent) social gulf than executives vs. managers vs. reports, the latter having more to do with seniority while the former is largely an artifact of native social class.
  • extreme ageism, classism, and sexism.
  • low rates of internal promotion, due to the company’s increasing need to validate its status with flashier hires (who get leadership roles as opportunities emerge). External promotion is the way to go in VC-istan, but that creates a “job hopping” impression that makes it hard to move back into the mainstream corporate world.
  • in general, meager benefits in meaningful dimensions (health coverage, 401k matching) matched with cosmetic or meaningless perks.
  • defined (if spartan) vacation allowances replaced by undefined (“unlimited”) vacation policies where social policing keeps everyone under two weeks per year.
  • on average, substantially longer work hours.
  • less working autonomy, on average, due to the tight deadlines faced by startups whose investors demand excessive risk-taking and rapid growth.
  • significantly more economic inequality, when the distribution of equity is considered. A hired (non-founder) executive might only earn 20% more, in salary, than an engineer, but typically receives 20-100 times as much equity in the company.

The future

What has actually emerged out of Silicon Valley is a failed social experiment that has generated much noise, little progress, and immense distraction. The good news is that it lacks comprehension of how to conduct itself outside of its own sandbox. For one small example, economics textbooks might argue that Uber’s “surge pricing” is supremely efficient and therefore right, even though the subjective experience for all who encounter it is extremely negative. I don’t intend to opine on whether Uber’s pricing model is morally right (it’s a useless discussion). I do find the observation valuable: the new economic elite of the Valley is shockingly gauche when it comes to self-presentation. It thinks it’s the height of science and culture, and everyone else finds it to be the worst case of uncultured new-money syndrome in over a century. It won’t last. If the gauche overlords of Silicon Valley– no longer engineers or technologists, but lifelong executives (with all the pejoratives appropriate to that word) who came up via private equity and good-ol’-boy networks– make a serious play for cultural prominence, they will be shoved back into their spider holes with overwhelming force.

The old corporate regime was deeply flawed, and that’s not going to come back either, but there was a certain humanity required of it if such organizations were to survive for the long term. The problem with VC-istan is that these companies don’t care about persistence; they’ll either be gigantic and invincible (and able to pay off old sins via meager settlements) or dead in five years. If VC-istan’s pretenses of building the future are taken at face value, then the future’s literally being built by people who give not a damn about it.

Uber can charge what it wants– that’s a private matter– but I’m disgusted when I see Valley darlings trying to shove their mindless, childish arrogance into politics. That’s actually scary. The price of housing and the long commutes for which Silicon Valley is known are solid, incontrovertible proof that their little society is an utter failure. Whatever they say about themselves is undermined entirely by the messes– of their own making– in their own backyards. If they can’t even make San Francisco affordable, how are they equipped to handle the problems of the world? They aren’t. Just as the Tea Party proved itself incapable when it came down to the actual inherent complexity of politics in a nation of 315 million, the Valley darlings aren’t fit to rule more than a postage stamp.

What really built Silicon Valley, and Baltimore’s surprising advantage.

I’m moving to Baltimore in less than a week. There’ll be a lot to say about that in the future, and on the whole, I’m pretty excited about the move. Right now, Baltimore’s not known as a technology hub. Relative to the Valley, New York, and Austin, it’s not even on the map (yet). I think that is likely to change. I’m not going to call Baltimore a hands-down sure winner– it isn’t– but it’s a strong bet for the medium term (~10 years). I’ll get to why, in a little bit.

For my part, I don’t think any city is actually going to become “the next Silicon Valley” because I don’t think, in the near future, we’ll see that lopsided a distribution of high-talent activity. Up-and-coming cities like Austin, Boulder and Baltimore will grow, but no city will enjoy the utter dominance that Silicon Valley once had (and still has, to a lesser degree) in technology. The only people who win when it’s like that, really, are the landlords. The future, I think, is much more likely to be location-agnostic and better spread about. In 2009, “number-8 startup hub” was functionally equivalent to “in the sticks” because there were only three to five such places (in the U.S.) worth mention. I’d bet on that changing; I think the distribution of high-talent activity will be much more even in the next 15 years. I’m not going to prognosticate on the winner because I think there will be several. Austin is pretty much a sure bet; Philly and Baltimore have good odds, and the old contenders (e.g. Seattle, Boston) will still be strong. San Francisco will be formidable and probably a leader for a decade, although the rest of the Valley doesn’t have much going for it among the younger generation. 

Before I talk about why I’m bullish on Baltimore of all places, let me first get into what built Silicon Valley. It’s fairly well-established that the defense industry played a role and, as the folk legend goes, all of that government money fueled a few decades of innovation. That’s sort-of true. However, money isn’t some magic powder that one can sprinkle on people’s heads to make innovation happen. It’s the autonomy that comes with the money that leads to innovation. When smart people call the shots over their own work, they move the world forward at fifty times the rate, and with a thousand times the quality, of what they would achieve if traditionally managed.

Not all of the innovation in Silicon Valley came from defense contractors. In fact, I’d argue that much of it didn’t come directly from that industry at all. Those cushy, well-paid basic research jobs in the public sector certainly contributed some of the Valley’s innovations, but far from all of them. An equally large contribution came from private players in competition (on worker autonomy) with those cushy public jobs. When the average technically-literate college graduate could grab what would now be (adjusted for inflation and localized housing costs) a six-figure job that had total job security and full autonomy over his work, private companies also had to step up their game and create high-autonomy jobs on their end. A company that was private and therefore personally riskier had to make the work interesting and creative. (Capitalism, though it can excel in this regard, isn’t innately innovative. It needs to have the right conditions, and a refusal of top talent to work on bland rent-seeking activity– because it has better options– is one such condition.) Hewlett-Packard, in its heyday, was a primary example of a good private-sector citizen. When business went bad, instead of laying people off, pay was cut across the board but compensated with proportional time off. What’s now a cargo cult of foosball tables and “unlimited” vacation (meaning, no year-end reimbursement for unused days while your boss decides your vacation allotment) was originally born in a time when companies had to compete, on interesting work and autonomy and working culture, with an extremely generous ecosystem composed of the public sector and government contractors.

This arms race doesn’t really exist in Silicon Valley anymore, and that might be why, culturally, that ecosystem is losing wind. Startups compete with each other a bit, but that competition is only meaningful over the small percentage of engineers (and a larger percentage of product executives) who come out on top of the celebrity culture that exists there. The underclass, who don’t know any top-tier venture capitalists and haven’t completed an exit, take what they’re given, and they don’t have any more autonomy than they would at a company like Microsoft or IBM. Sure, it’s a little bit pricey to hire a talented engineer– at 15 years of experience, she’s probably making $175,000 per year– but that has more to do with the local cost of living than anything else, and that’s not a large sum of money for a corporation to spend on someone whose productive capability is worth 10 to 25 times that. It’s expensive to hire her, relatively speaking, but not competitive. If she doesn’t want the job, someone else will take it.

So, in sum, what is it that built Silicon Valley and is no longer there? Cheap real estate was a big part of it, largely because of the freedom and sense of ownership that exist in a place where normal people can buy land (as they once could, in Northern California). But the much bigger factor was an economy that forced technology companies to compete on worker autonomy, career coherency, and interesting work with a publicly funded sphere that, while bureaucratic issues existed and much of the work couldn’t be shared with the public, gave technologists lifelong job security, opportunities for basic research, and extremely high levels of autonomy by private-sector standards. The result of this competition was that startups with shitty ideas never got off the ground (would that were true now!) because no technically capable person would work at one. The West Coast didn’t have all the technology companies, of course; but this competition on worker autonomy and interesting projects forced it to produce the best ones.

The VC-funded startup world is trying to replicate this former decades-long period of success. It will fail. Why? Well, it brings one ingredient, which is money. It has lots of that. It doesn’t deliver on engineer autonomy, though. Venture capitalists can’t judge technical talent at the high end– people who can reliably judge talent at that level are even rarer than the small set of people who have it– so they give money based on social network effects and “track record”, which ultimately means that the people who get funded are those who are good at raising money. Some of those people are highly intelligent, but they’re rarely creators or innovators, and the latter tend to lose when they have to compete on social polish with salesmen. When the money’s granted, it happens in a top-down way, with founders ending up with astronomically more power than the people working with them. The result is the generation of scads of non-technical companies that happen to be involved in technology: something bland and unsatisfying, which is exactly what the VC-funded ecosystem of today is.

One way to think about this is to look at foreign aid to impoverished countries. If it’s given in a raw cash form, it often fails to achieve humanitarian progress, because the corrupt government captures all of it. The few are enriched, while the many (who need the aid) don’t get it. It might be used to buy guns instead of butter. Distribution is a real problem. Now, look at the VC-funded world. It shows the same unhealthy pattern: give lots of money but distribute it poorly, and the wrong people get it and almost none of it’s used for the intended aim. There’s a lot of money being thrown at Northern California, but it’s being distributed in a way that is actually harmful to innovation: it enriches people with an orthogonal if not negatively correlated (to the ability to innovate) skill set, it drives up costs of living, and it creates a fleet of businesses that look innovative but are actually, because of the existential pressures on them, even stingier when it comes to matters of employee autonomy and interesting work/experimentation than the supposedly stodgy corporations of the old system.

Of course, I am not saying that money is unimportant or that venture capital funds have to be harmful. Both claims would be far from the truth. It’s just that the money is beneficial only when it provides the autonomy that begets innovation. This Hollywood-for-ugly-people business model doesn’t have that effect. Those dollars, when they land in housing prices (entropic waste heat), actually do a lot of damage.

Now it’s time to talk about why I think Baltimore is a strong bet for 10-15 years into the future. Don’t get me wrong: it’s probably not even a top-20 technology hub right now. As a whole, the city has some serious issues, but the nice parts are safe and affordable and the place has the “smart city” feel of a place like Boston, Seattle, or Minneapolis, which means that there’s seriously strong potential there. That’s far from enough to justify calling a winner right now, because if you’ve traveled enough (and I’ve done multiple cross-country road trips) you realize that smart people are everywhere and there are probably two dozen cities with “serious potential”. There’s more to it. Baltimore has something that venture capitalists, to be frank, don’t much like; but that’s actually good for everyone. Investors who know the city agree that it has a lot of engineering talent, but that a large proportion of it is tied up in “cushy” government and contractor jobs, and the fear is that the typical VC-funded startup will struggle to compete. Even if those companies paid Bay Area/New York salaries, they’d offer only a little more than the government think tanks– not enough (for most people) to justify the sacrifices in terms of job security, interesting work, career coherency, and overall autonomy.

When you’re building a typical VC startup– a get-giant-tomorrow-or-get-lost red-ocean gambit that needs to execute fast– you’d rather compete on salary with Google (an operational cost) than compete on employee autonomy with well-heeled government agencies (which has greater effects on how the business is run). If you really have to grow 30% per month to survive, then you need “take-charge, strong leadership” (i.e. a top-down autocratic culture). You won’t have long-term creative health, but you’re not even thinking about the long run at that point, because you’re fighting for short-term survival. When you’re at that extreme (very-high risk, very-high growth) you need the vicious but undeniable efficiency of a follow-or-leave dictatorship. Those are the companies that VCs know how to evaluate, fund, and run: get-big-or-die gambits that grow into corporate megaliths, scare existing corporate megaliths enough to get bought at a panic price, or (as most do) fall to pieces, all inside of five years. Competing with Google or Microsoft on salary brings a predictable, manageable cost; competing with a government-funded think-tank on employee autonomy rules out the get-big-or-die business strategies that involve chronic crunch time and mandate autocracy.

On the other hand, competing with “cushy” government jobs in this way is not an issue for the mid-risk / mid-growth space sometimes derided as “lifestyle businesses”– in fact, such companies are well-positioned to compete on autonomy and interesting work– and that was the style of thinking (before the emergence of the VC-powered mess) that built the first Silicon Valley, back in the day when it was something to admire, not crack jokes about.

All of this is far from saying that Baltimore is guaranteed to become a technology startup hub in the next 10 years. There are far too many variables in play to make that call as of now. It has a certain poorly-understood but historically potent local advantage, for sure, and I think it’s a decent bet for that and other reasons.

Technology: we can change our leadership, or we can quit.

Technology has lost its “golden child” image, with piñatas of Google buses being beaten to shit in San Francisco, invasions of privacy making national headlines, and an overall sense in the country that our leadership’s exceptionalist reputation as the “good” nouveau riche is not deserved and must end. To put it bluntly, the rest of the country doesn’t like us– any of us– anymore. We’ve lost that good faith, in technology, that allowed us to be rich (well, a few of us) and not hated, even in the midst of a transformational, generation-eating recession. 2013 will be remembered as the year when popular opinion of “Silicon Valley” imploded. As much as I despise VC-istan, that is not a good thing, because popular opinion will not separate VC-istan’s upper crust from Silicon Valley or technology as a whole.

After decades of the kinds of mismanagement that only prosperity can support, we’ve developed an industry that, despite having the best individual contributors in the world, has the worst leadership out there.

Additionally, even within the Valley morale is challenged. The truth about the VC-funded ecosystem is that it’s no longer an alternative to the traditional corporate ladder, but merely a shitty corporate ladder (the transitions being worker to founder to investor) in which disposable companies allow executives to do things to peoples’ careers that they’d never get away with in larger companies. There’s a satirical song called “The Dream of the ’90s” about a resurgence of unambitious immaturity in Portland’s hipster culture. VC-istan is a similarly nostalgic 1990s-derived culture, except centered around ambitious immaturity. Perhaps it was more real in the 1990s, but the venture-funded world now is a Disney-fied caricature of entrepreneurship dominated by rich kids who take no real risks because their backers have already decided on a range of outcomes, and will provide “entrepreneur-in-residence” soft landings for their well-connected proteges, no matter what happens. It’s not about building things anymore; it’s about using Daddy’s contacts to play startup for a few years and relish telling older and much more talented people what to do.

People are waking up to this. VC-istan is under attack. I just hope it doesn’t take down the rest of technology with it.

The reputation of this ecosystem is falling to pieces. As it is, individual technology companies go to great lengths to defend their reputations, and only relinquish those when there’s enormous benefit (in the billions) to be made through the compromise. As technology firms see it, and they’re not wrong, their ability to execute and to attract talent is strongly determined by the company’s reputation. Why is reputation so much more important to a software firm than to, say, a steel or oil company? A few things come to mind. Obviously, internal reputation (morale) is extremely important in software. The difference between an unmotivated and a motivated steel worker might be a factor of 2 or 3; in software, it’s at least 10. Second, and there are a variety of reasons for this, most talented people don’t care all that much about money, at least not in the classic economic sense. They don’t want to be poor, but they’d rather have smart co-workers, interesting work, and a supportive managerial environment and be comfortable than lose those things and make 25% more. (We also believe that we’ll make more money, in the long term, if we work in quality environments where we can succeed.) Most reflective people know that “rich” is relative and that economic rewards lend themselves quickly to hedonic adaptation. As Don Draper said, this form of happiness is “a moment before you need more happiness”. So you can’t court the best software engineers with a 10- or even 50-percent advantage in salary; you have to convince them that your company will give them interesting work and benefit their careers. That’s hard to do when a company has a damaged reputation. From experience, we know not to trust even most of the companies with good reputations, much less the ones whose images have already been tarnished.

Sadly for them, at probably 80 percent of the Fortune 500, the top 5% of software engineers would simply refuse to work unless they were in desperate need of short-term employment, or given a salary that would put them above even most executives. These companies don’t end up with minimal engineering power; they end up with zero, because they can’t attract talent from outside, they overlook the high-potential people within, and talented people who come in never form a critical mass that might give them any political immunity to the overwhelming mediocrity (which is a threat even in the prestigious companies). On the other hand, Google and Facebook have more top-5% engineers than they know what to do with. Talent is clustering and clumping like never before, both in terms of employer selection and geography. So not only are the stakes of reputation high, but most firms end up as losers, bereft of top talent and doomed to watch their IT organizations slide into inefficiency, if not failure. Sturgeon’s Law is painfully felt everywhere in technology. If you’re a programmer looking for work, you find out quickly that most engagements are low in quality. On the other hand, if you’re a hiring manager, you find most engineering applicants to be incompetent at worst and badly-taught (i.e. betrayed, and sometimes irreparably damaged, by years of shitty work experience) at best.

Despite its problems, there’s money in technology. There’s so much fucking money in it that it has tolerated abysmal leadership for a long time. The Valley is so rich that the points don’t matter. Fired unjustly? Another job awaits you. Moron promoted (or, better yet, externally hired) above you? Job hop. Unfortunately, that won’t last forever and not everyone is positioned to benefit from this fluidity. Besides, some of the volatility injected into technology by bad management is just unnecessary and counterproductive. We’ve set patterns in place that won’t survive the future, in which traditional corporate software’s place diminishes. (Software and technology themselves will live on; that’s another discussion.) There will still be money, but the patterns that earn it will be different, and old processes won’t work.

Now, we’re seeing the backlash. No one gives a shit about Google Glass when the people who’ve lived in The Mission for fifty years are being pushed out by spoiled white kids who would never deign even to learn Spanish because “there’ll be an app for that in 5 years”. It’s no longer cool to have “invites” to some goofy social experiment when everyone knows that their data’s being sold to shady third parties and that full profile access is often a workplace perk. Finally, technology startups have gone full-cycle from being a risky, conventionally denigrated career move (1980s) to a really great opportunity (1990s) to “cool” (2005-12) to somewhat less cool (post-2013). This is happening because we no longer have the carte blanche abundance of opportunity that allows us to be prosperous even with horrendous leadership. There’s still a ton of opportunity out there, but the easy wins are gone, and we can’t let “the business side” run on auto-pilot because the age in which bad leadership is acceptable is ending. We can’t put our heads down and expect the men in suits to do what’s right; they only did that when everyone could get rich because the victories were so facile, but that’s no longer true (if it ever really was) and we need to take more responsibility for our own direction and destiny. We can handle that stuff; trust me.

We’ll need to move our current executives and “hip” investors and tech press aside and let new players come in; but we can keep technology alive. And we owe it to future generations. Technology is just too important for us to let the people currently running this game continue to screw it up.

Tech companies: open allocation is your only real option.

I wrote, about a month ago, about Valve’s policy of allowing employees to transfer freely within the company, symbolized by placing wheels under the desk (thereby creating a physical marker of their superior corporate culture that makes traditional tech perks look like toys) and expecting employees to self-organize. I’ve taken to calling this seemingly radical notion open allocation– employees have free rein to work on projects as they choose, without asking for permission or formal allocation– and I’m convinced that, despite seeming radical, open allocation is the only thing that actually works in software. There’s one exception. Some degree of closed allocation is probably necessary in the financial industry because of information barriers (mandated by regulators) and this might be why getting the best people to stay in finance is so expensive. It costs that much to keep good people in a company where open allocation isn’t the norm, and where the workflow is so explicitly directed and constrained by the “P&L” and by justifiable risk aversion. If you can afford to give engineers 20 to 40 percent raises every year and thereby compete with high-frequency-trading (HFT) hedge funds, you might be able to retain talent under closed allocation. If not, read on.

Closed allocation doesn’t work. What do I mean by “doesn’t work”? I mean that, as things currently go in the software industry, most projects fail. Either they don’t deliver any business value, or they deliver too little, or they deliver some value but exert long-term costs as legacy vampires. Most people also dislike their assigned projects and put minimal or even negative productivity into them. Good software is exceedingly rare, and not because software engineers are incompetent, but because when they’re micromanaged, they stop caring. Closed allocation and micromanagement provide an excuse for failure: I was on a shitty project with no upside. I was set up to fail. Open allocation blows that away: a person who has a low impact because he works on bad projects is making bad choices and has only himself to blame.

Closed allocation is the norm in software, and doesn’t necessarily entail micromanagement, but it creates the possibility for it, because of the extreme advantage it gives managers over engineers. An engineer’s power under closed allocation is minimal: his one bit of leverage is to change jobs, and that almost always entails changing companies. In a closed-allocation shop, project importance is determined prima facie by executives long before the first line of code is written, and formalized in magic numbers called “headcount” (even the word is medieval, so I wonder if people piss at the table, at these meetings, in order to show rank) that represent the hiring authority (read: political strength) of various internal factions. Headcount numbers are supposed to prevent reckless hiring by the company as a whole, and that’s an important purpose, but their actual effect is to make internal mobility difficult, because most teams would rather save their headcount for possible “dream hires” who might apply from outside in the future than risk a spot on an engineer with an average performance review history (which is what most engineers will have). Headcount bullshit makes it nearly impossible to transfer unless (a) someone likes you on a personal basis, or (b) you have a 90th-percentile performance review history (in which case you don’t need a transfer). Macroscopic hiring policies (limits, and sometimes freezes) are necessary to prevent the company from over-hiring, but internal headcount limits are one of the worst ideas ever. If people want to move, and the leads of those projects deem them qualified, there’s no reason not to allow it. It’s good for the engineers and for the projects, which get more motivated people working on them.

When open allocation is in play, projects compete for engineers, and the result is better projects. When closed allocation is in force, engineers compete for projects, and the result is worse engineers. 

When you manage people like children, that’s what they become. Traditional, 20th-century management (so-called “Theory X”) is based on the principle that people are lazy and need to be intimidated into working hard, and that they’re unethical and need to be terrified of the consequences of stealing from the company, with a definition of “stealing” that includes “poaching” clients and talent, education on company time, and putting their career goals over the company’s objectives. In this mentality, the only way to get something decent out of a worker is to scare him by threatening to turn off his income– suddenly and without appeal. Micromanagement and Theory X are what I call the Aztec Syndrome: the belief in many companies that if there isn’t a continual indulgence in sacrifice and suffering, the sun will stop rising.

Psychologists have spent decades trying to answer the question, “Why does work suck?” The answer might be surprising. People aren’t lazy, and they like to work. Most people do not dislike the activity of working, but dislike the subordinate context (and closed allocation is all about subordination). For example, people’s minute-by-minute self-reported happiness tends to drop precipitously when they arrive at the office, and rise when they leave it, but it improves once they start actually working. They’re happier not to be at an office, but if they’re in an office, they’re much happier when working than when idle. (That’s why workplace “goofing off” is such a terrible idea; it does nothing for office stress and it lengthens the day.) People like work. It’s part of who we are. What they don’t like, and what enervates them, is the subordinate context and the culturally ingrained intimidation. This suggests the so-called “Theory Y” school of management, which is that people are intrinsically motivated to work hard and do good things, and that management’s role is to remove obstacles.

Closed allocation is all about intimidation: if you don’t have this project, you don’t have a job. Tight headcount policies and lockout periods make internal mobility extraordinarily difficult– much harder than getting hired at another company. The problem is that intimidation doesn’t produce creativity, and it erodes people’s sense of ethics (when people are under duress, they feel less responsible for what they are doing). It also provides the wrong motivation: the goal becomes to avoid getting fired, rather than to produce excellent work.

Also, if the only way a company can motivate people to do a project is to threaten to turn off a person’s income, that company should really question whether that project’s worth doing at all.

Open allocation is not the same thing as “20% time”, and it isn’t a “free-for-all”. Open allocation does not mean “everyone gets to do what they want”. A better way to represent it is: “Lead, follow, or get out of the way” (and “get out of the way” means “leave the company”). To lead, you have to demonstrate that your product is of value to the business, and convince enough of your colleagues to join your project that it has enough effort behind it to succeed. If your project isn’t interesting and doesn’t have business value, you won’t be able to convince colleagues to bet their careers on it and the project won’t happen. This requires strong interpersonal skills and creativity. Your colleagues decide, voting with their feet, if you’re a leader, not “management”. If you aren’t able to lead, then you follow, until you have the skill and credibility to lead your own project. There should be no shame in following; that’s what most people will have to do, especially when starting out.

“20% time” (or hack days) should exist as well, but that’s not what I’m talking about. Under open allocation, people are still expected to show that they’ve served the needs of the business during their “80% time”. Productivity standards are still set by the projects, but employees choose which projects (and sets of standards) they want to pursue. Employees unable to meet the standards of one project must find another one. 20% time is more open, because it entails permission to fail. If you want to do a small project with potentially high impact, or to prove that you have the ability to lead by starting a skunk-works project, or to volunteer, take courses, or attend conferences on company time, that’s what it’s for. During their “80% time”, people are still expected to lead or follow on a project with some degree of sanction. They can’t just “do whatever they want”.

Four types of projects. The obvious question that open allocation raises is, “Who does the scut work?” The answer is simple: people do it if they will get promoted, formally or informally, for doing it, or if their project directly relies on it. In other words, the important but unpleasant work gets done, by people who volunteer to do it. I want to emphasize “gets done”. Under closed allocation, a lot of the unpleasant stuff never really gets done well, especially if unsexy projects don’t lead to promotions, because people are investing most of their energy into figuring out how to get to better projects. The roaches are swept under the carpet, and people plan their blame strategies months in advance.

If we classify projects into four categories by important vs. unimportant, and interesting vs. unpleasant, we can assess what happens under open allocation. Important and interesting projects are never hard to staff. Unimportant but interesting projects are for 20% time; they might succeed, and become important later, but they aren’t seen as critical until they’re proven to have real business value, so people are allowed to work on them but are strongly encouraged to also find and concentrate on work that’s important to the business. Important but unpleasant projects are rewarded with bonuses, promotions, and the increased credibility accorded to those who do undesirable but critical work. These bonuses should be substantial (six and occasionally even seven figures for critical legacy rescues); if the project is actually important, it’s worth it to actually pay. If it’s not, then don’t spend the money. Unimportant and unpleasant projects, under open allocation, don’t get done. That’s how it should be. This is the class of undesirable, “death march” projects that closed-allocation nurtures (they never go away, because to suggest they aren’t worth doing is an affront to the manager that sponsors them and a career-ending move) but that open allocation eliminates. Under open allocation, people who transfer away from these death marches aren’t “deserters”. It’s management’s fault if, out of a whole company, no one wants to work on the project. Either the project’s not important, or they didn’t provide enough enticement.
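For the programmers in the audience, the taxonomy above is really just a two-bit decision table, and it can be written down as one. Here is a minimal sketch in Python (purely illustrative; the names and the exact mapping are mine, compressing the paragraph above):

    from enum import Enum

    class Fate(Enum):
        """What happens to a project under open allocation."""
        STAFFS_ITSELF = "never hard to staff"
        TWENTY_PCT_TIME = "20% time until proven important"
        NEEDS_BOUNTY = "gets done, for bonuses and promotions"
        NOT_DONE = "doesn't get done (and shouldn't)"

    def open_allocation_fate(important: bool, interesting: bool) -> Fate:
        """Predicted fate of a project when engineers vote with their feet."""
        if important and interesting:
            return Fate.STAFFS_ITSELF    # volunteers show up on their own
        if interesting:
            return Fate.TWENTY_PCT_TIME  # unproven; lives in 20% time
        if important:
            return Fate.NEEDS_BOUNTY     # unpleasant; must be paid for honestly
        return Fate.NOT_DONE             # the death march, eliminated

Closed allocation implements the same table, except that the last case returns “assigned to whoever lacks the leverage to refuse”.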

Closed allocation is irreducibly political. Compare two meanings of the three-word phrase, “I’m on it”. In an open-allocation shop, “I’m on it” is a promise to complete a task, or at least to try to do it. It means, “I’ve got this.” In a closed-allocation shop, “I’m on it” means “political forces outside of my control require me to work only on this project.”

People complain about the politics at their closed-allocation jobs, but they shouldn’t be surprised by it, because it’s inevitable that politics will eclipse the matter of actually getting work done. It happens every time, like clockwork. The metagame becomes a million times more important than actually sharpening pencils or writing code. If you have closed allocation, you’ll have a political rat’s nest. There’s no way to avoid it. In closed allocation, the stakes of project allocation are so high that people are going to calculate every move based on future mobility. Hence, politics. What tends to happen is that a four-class system emerges, resulting from the four categories of work that I developed above. The most established engineers, who have the autonomy and leverage to demand the best projects, end up in the “interesting and important” category. They get good projects the old-fashioned way: proving that they’re valuable to the company, then threatening to leave if they aren’t reassigned. Engineers who are looking for promotions into managerial roles tend to take on the unpleasant but important work, and attempt to coerce new and captive employees into doing the legwork. The upper-middle class of engineers can take the interesting but unimportant work, but it tends to slow their careers if they intend to stay at the same company (they learn a lot, but they don’t build internal credibility). The rest– the majority– have no significant authority over what they work on; they get a mix, but a lot of them get stuck with the uninteresting, unimportant work (and closed-allocation shops generate tons of that stuff) that exists for reasons rooted in managerial politics.

What are the problems with open allocation? The main issue with open allocation is that it seems harder to manage, because it requires managers to actively motivate people to do the important but unpleasant work. In closed allocation, people are told to do work “because I said so”. Either they do it, or they quit, or they get fired. It’s binary, which seems simple. There’s no appeal process when people fail projects or projects fail people– and no one ever knows which happened– and extra-hierarchical collaboration is “trimmed”, and efforts can be tracked by people who think a single spreadsheet can capture everything important about what is happening in the company. Closed-allocation shops have hierarchy and clear chains of command, and single-points-of-failure (because a person can be fired from a whole company for disagreeing with one manager) out the proverbial wazoo. They’re Soviet-style command economies that somehow ended up being implemented within supposedly “capitalist” companies, but they “appear” simple to manage, and that’s why they’re popular. The problem with closed allocation policies is that they lead to enormous project failure rates, inefficient allocation of time, talent bleeds, and unnecessary terminations. In the long term, all of this unplanned and surprising garbage work makes the manager’s job harder, more complex, and worse. When assessing the problems associated with open allocation (such as increased managerial complexity) it’s important to consider that the alternative is much worse.

How do you do it? The challenging part of open allocation is enticing people to do unpleasant projects. There needs to be a reward. Make the bounty too high, and people come in with the wrong motivations (capturing the outsized reward, rather than getting a fair reward while helping the company) and the perverse incentives can even lead to “rat farming” (creating messes in the hopes of being asked to repair them at a premium). Make it too low, and no one will do it, because no wise person likes a company well enough to risk her own career on a loser project (and part of what makes a bad project bad is that, absent recognition, it’s career-negative to do undesirable work). Make the reward too monetary and it looks bad on the balance sheet, and gossip is a risk: people will talk if they find out a 27-year-old was paid $800,000 in stock options (note: there had better be vesting applied) even if it’s justified in light of the legacy dragon being slain. Make it too career-focused and you have people getting promotions they might not deserve, because doing unpleasant work doesn’t necessarily give a person technical authority in all areas. It’s hard to get the carrot right. The appeal of closed allocation is that the stick is a much simpler tool: do this shit or I’ll fire you.

The project has to be “packaged”. It can’t be all unpleasant and menial work, and it needs to be structured to involve some of the leadership and architectural tasks necessary for the person completing it to actually deserve the promised promotion. It’s not “we’ll promote you because you did something grungy” but “if you can get together a team to do this, you’ll all get big bonuses, and you’ll get a promotion for leading it.” Management also needs to have technical insight on hand in order to do this: rather than doing grunt work as a recurring cost, kill it forever with automation.

An important notion in all this is that of a committed project. Effectively, this is what the executives should create if they spot a quantum of work that the business needs but that is difficult and does not seem to be enjoyable in the estimation of the engineers. These shouldn’t be created lightly. Substantial cash and stock bonuses (vested, over the expected duration of the project) and promotions are associated with completing these projects, and if more than 25% of the workload is committed projects, something’s being done wrong. A committed project offers high visibility (it’s damn important; we need this thing) and graduation into a leadership role. No one is “assigned” to a committed project. People “step up” and work on them because of the rewards. If you agree to work on a committed project, you’re expected to make a good-faith effort to see it through for an agreed-upon period of time (typically, a year). You do it no matter how bad it gets (unless you’re incapable) because that’s what leadership is. You should not “flake out” because you get bored. Your reputation is on the line.

Companies often delegate the important but undesirable work in an awkward way. The manager gets a certain credibility for taking on a grungy project, because he’s usually at a level where he has basic autonomy over his work and what kinds of projects he manages. If he can motivate a team to accomplish it, he gets a lot of credit for taking on the gnarly task. The workers, under closed allocation, get zilch. They were just doing their jobs. The consequence of this is that a lot of bodies end up buried by people who are showing just enough presence to remain in good standing, but putting the bulk of their effort into moving to something better. Usually, it’s new hires without leverage who get staffed on these bad projects.

I’d take a different approach to committed projects. Working on one requires (as the name implies) commitment. You shouldn’t flake out because something more attractive comes along. So only people who’ve proven themselves solid and reliable should be working on (much less leading) them. To work on one (beyond a 20%-time basis) you have to have been at the company for at least a year, senior enough for the leadership to believe that you have the ability to deliver, and in strong standing at the company. Unless hired at senior roles, I’d never let a junior hire take on a committed project unless it was absolutely required– too much risk.

How do you fire people? When I was in school, I enjoyed designing and playing with role-playing systems. Modeling a fantasy world is a lot of fun. Once I developed an elaborate health mechanic that differentiated fatigue, injury, pain, blood loss, and “magic fatigue” (which affected magic users) and aggregated them (determining attribute reductions and eventual incapacitation) in what I considered to be a novel way. One small detail I didn’t include was death, so the first question I got was, “How do you die?” Of course, blood loss and injuries could do it. In a no-magic, medieval world, loss of the head is an incapacitating and irreversible injury, and exsanguination is likewise. However, in a high-magic world, “death” is reversible. Getting roasted, eaten and digested by a dragon might be reversible. But there has to be a possibility (though it doesn’t require a dedicated game mechanic) for a character to actually die in the permanent, create-a-new-character sense of the word. Otherwise there’s no sense of risk in the game: it’s just rolling dice to see how fast you level up. My answer was to leave that decision to the GM. In horror campaigns, senseless death (and better yet, senseless insanity) is part of the environment. It’s a world in which everything is trying to kill you and random shit can end your quest. But in high-fantasy campaigns with magic and cinematic storylines, I’m averse to characters being “killed by the dice”. If the character is at the end of his story arc, or does something inane like putting his head in a dragon’s mouth because he’s level 27 and “can’t be killed”, then he dies for real. Not “0 hit points”, but the end of his earthly existence. But he shouldn’t die because the player is hapless enough to roll 4 “1”s in a row on a d10. Shit happens.

The major problem with “rank and yank” (stack-ranking with enforced culling rates) and especially closed allocation is that a lot of potentially great employees are killed by the dice. It becomes part of the rhythm of the company for good people to get inappropriate projects or unfair reviews, blow up mailing lists or otherwise damage morale when it pisses them off, then get fired or quit in a huff. Yawn… another one did that this week. As I alluded to in my Valve essay, this is the Welch Effect: the ones who get fired under rank-and-yank policies are rarely low performers, but junior members of macroscopically underperforming teams (who rarely have anything to do with this underperformance). The only way to enforce closed allocation is to fire people who fail to conform to it, but this also means culling the unlucky whose low impact (for which they may not be at fault) looks like malicious noncompliance.

Make no mistake: closed allocation is as much about firing people as guns are about killing people. If people aren’t getting fired, many will work on what they want to anyway (ignoring their main projects) and closed allocation has no teeth. In closed allocation shops, firings become a way for the company to clean up its messes. “We screwed this guy over by putting him on the wrong project; let’s get rid of him before he pisses all over morale.” Firings and pseudo-firings (“performance improvement plans” and transfer blocks and intentional dead-end allocations) become common enough that they’re hard to ignore. People see them, and that they sometimes happen to good people. And they scare people, especially because the default in non-financial tech companies is to fire quickly (“fail fast”) and without severance. It’s a really bad arrangement.

Do open-allocation shops have to fire people? The answer is an obvious “yes”, but it should be damn rare. The general rule of good firing is: mentor subtracters, fire dividers. Subtracters are good-faith employees who aren’t pulling their weight. They try, but they’re not focused or skilled enough to produce work that would justify keeping them on the payroll. Yet. Most employees start as subtracters, and the amount of time it takes to become an adder varies. Most companies try to set guidelines for how long an employee is allowed to take to become an adder (usually about 6 months). I’d advise against setting a firm timeframe, because what’s important is not how fast a person has learned (she might have had a rocky start) but how fast, and more importantly how well, she can learn.

Subtracters are, except in an acute cash crisis when they must be laid off for business reasons, harmless. They contribute microscopically to the burn rate, but they’re usually producing some useful work, and getting better. They’ll be adders and multipliers soon. Dividers are the people who make whole teams (or possibly the whole company) less productive. Unethical people are dividers, but so are people whose work is of such low quality that it creates messes for others, and people whose outsized egos produce conflicts. Long-term (18+ months) subtracters become “passive” dividers because of their morale effects, and have to be fired for the same reason. Dividers smash morale, and they’re severe culture threats. No matter how rich your company is and how badly you may want not to fire people, you have to get rid of dividers if they don’t reform immediately. Dividers ratchet up their toxicity until they are capable of taking down an entire company. Firing can be difficult, because many dividers shine as individual contributors (“rock stars”) even as they taketh away in their effects on morale, but there’s no other option.

My philosophy of firing is that the decision should be made rarely, swiftly, for objective reasons, and with a severance package sufficient to cover the job search (unless the person did something illegal or formally unethical) that includes non-disclosure, non-litigation, and non-disparagement clauses. This isn’t about “rewarding failure”. It’s about limiting risk. When you draft “performance improvement plans” to justify termination without severance, you’re externalizing the cost to people who have to work with a divider who’s only going to get worse post-PIP. Companies escort fired employees out of the building, which is a harsh but necessary risk-limiting measure; but it’s insane to leave a PIP’d employee with access for two months. Moreover, when you cold-fire someone, you’re inviting disparagement, gossip, and lawsuits. Just pay the guy to go away. It’s the cheapest and lowest-variance option. Three months of severance and you never see the guy again. Good. Six months and he speaks highly of you and your company: he had a rocky time, you took care of him, and he’s (probably) better off now. (If you’re tight on money, which most startups are, stay closer to the 3-month mark. You need to keep expenses low more than you need fired employees to be your evangelists. If you’re really tight, replace the severance with a “gardening leave” package that continues his pay only until he starts his next job.)

If you don’t fire dividers, you end up with something that looks a lot like closed allocation. Dividers can be managers (a manager can only be a multiplier or a divider, and in my experience, at least half are dividers) or subordinates, but dividers tend to intimidate. Subordinate passive dividers intimidate through non-compliance (they won’t get anything done) while active dividers either use interpersonal aggression or sabotage to threaten or upset people (often for no personal gain). Managerial (or proto-managerial) dividers tend to threaten career adversity (including bad reviews, negative gossip, and termination) in order to force people to put the manager’s career goals above their own. They can’t motivate through leadership, so they do it using intimidation and (if available) authority, and they draw people into captivity to get the work they want done, without paying for it on a fair market (i.e. providing an incentive to do the otherwise undesirable work). At this point, what you have is a closed-allocation company. What this means is that open allocation has to be protected: you do it by firing the threats.

If I were running a company, I think I’d have a 70% first-year “firing” rate for titled managers– by which I mean removal from management; I’d allow lateral moves into IC roles for those who desired them. By “titled manager”, I mean someone with the authority and obligation to participate in dispute resolution, terminations and promotions, and the packaging of committed projects. Technical leadership opportunities would be available to anyone who could convince people to follow them, but to be a titled people manager you’d have to pass a high bar. (You’d have to be as good at it as I would be, and for 70 to 80 percent of the managers I’ve observed, I’d do a better job.) This high attrition rate would be offset by a few cultural factors and benefits. First, “failing” in the management course wouldn’t be stigmatized, because it would be well understood that most people either end it voluntarily or aren’t asked to continue. People would be congratulated for trying out, and they’d still be just as eligible to lead projects– if they could convince others to follow. Second, those who aspired specifically to people-management and weren’t selected would be entitled (unless fully terminated for doing something unethical or damaging) to a six-month leave period in which they’d be permitted to represent themselves as employed. That’s what B+ and A- managers would get– the right to remain as individual contributors (at the same rank and pay) and, if they didn’t want that, a severance offer along with a strong reference if they wished to pursue people management at other companies– but not at this one.

Are there benefits to closed allocation? I can answer this with strong confidence. No, not in typical technology companies. None exist. The work that people are “forced” to do is of such low quality that, on balance, I’d say it provides zero expectancy. In commodity labor, poorly motivated employees are about half as productive as average ones, and the best are about twice as productive. Intimidating the degenerate slackers into bringing themselves up to 0.5x from zero makes sense. In white-collar work and especially in technology, those numbers seem to be closer to -5 and +20, not 0.5 and 2.
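To make that arithmetic concrete, here is a back-of-the-envelope sketch (Python; the multipliers are the ones claimed above, while the team compositions are assumptions of mine, chosen only to illustrate the point):

    def team_output(multipliers):
        # Total productivity, in multiples of an average worker's output.
        return sum(multipliers)

    # Commodity labor: a slacker at 0x, two average workers, one star at 2x.
    # Intimidating the slacker up to 0.5x is a modest but real win.
    print(team_output([0.0, 1.0, 1.0, 2.0]))    # 4.0
    print(team_output([0.5, 1.0, 1.0, 2.0]))    # 4.5

    # Technology: one demoralized engineer at -5x wipes out several people's
    # work, and the real leverage is in enabling the +20x outcome, which
    # intimidation never produces.
    print(team_output([-5.0, 1.0, 1.0, 20.0]))  # 17.0
    print(team_output([1.0, 1.0, 1.0, 20.0]))   # 23.0

The stick can move the floor from 0 to 0.5; only autonomy moves anyone toward +20, and a single -5 erases more than the stick will ever recover.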

You need closed (or at least controlled) allocation over engineers if there is material proprietary information where even superficial details would represent, if divulged, an unacceptable breach: millions of dollars lost, the company under existential threat, classified information leaked. You impose a “need-to-know” system over everything sensitive. However, this most often requires keeping untrusted people (or simply too many people) out of certain projects (which would be designated as committed projects under open allocation). It doesn’t require keeping people stuck on specific work. Full-on closed allocation is only necessary when there are regulatory requirements that demand it (in some financial cases) or extremely sensitive proprietary secrets involved in most of the work– and comments in public-domain algorithms don’t count (statistical arbitrage strategies do).

What does this mean? Fundamentally, this issue comes down to a simple rule: treat employees like adults, and that’s what they’ll be. Investment banks and hedge funds can’t implement total open allocation, so they make up the difference through high compensation (often at unambiguously adult levels) and prestige (which enables lateral promotions for those who don’t move up quickly). On the other hand, if you’re a tiny startup with 30-year-old executives, you can’t afford banking bonuses, and you don’t have the revolving door into $400k private equity and hedge fund positions that the top banks do, so employee autonomy (open allocation) is the only way for you to do it. If you want adults to work for you, you have to offer autonomy at a level currently considered (even in startups) to be extreme.

If you’re an engineer, you should keep an eye out for open-allocation companies, which will become more numerous as the Valve model proves itself repeatedly and all over the place (it will, because the alternative is a ridiculous and proven failure). Getting good work will improve your skills and, in the long run, your career. So work for open-allocation shops if you can. Or, you can work in a traditional closed-allocation company and hope you get (and continue to get) handed good projects. That means you work for (effectively, if not actually) a bank or a hedge fund, and that’s fine, but you should expect to be compensated accordingly for the reduction in autonomy. If you work for a closed-allocation ad exchange, you’re a hedge-fund trader and you deserve to be paid like one.

If you’re a technology executive, you need to seriously consider open allocation. You owe it to your employees to treat them like adults, and you’ll be pleasantly surprised to find that that’s what they become. You also owe it to your managers to free them from the administrative shit-work (headcount fights, PIPs and terminations) that closed allocation generates. Finally, you owe it to yourself; treat yourself to a company whose culture is actually worth caring about.