Silicon Valley may not be fixable

I’ve come to an upsetting conclusion about Silicon Valley. I’ve always held out hope that it could be fixed, and the balance of power restored to where it belongs (lifelong technologists and engineers) rather than where it currently resides, and categorically does not belong (managers, investors, non-technologists). Traditional business culture, focused on power relationships, emotions, and influence peddling, has invaded the Valley and created a rash of startups with mediocre products, terrible treatment of talent, and little respect for users, employees, or technology as a whole. The result has underperformed from a return-on-investment perspective, but remains in force because it makes the well-connected exceedingly rich (at the expense of average workers, and of passive investors).

This is disturbing because, while there are many in Silicon Valley who think that they are ready to rule, that notion has been proven false by some humiliating, and public, failures of that ecosystem. For one thing, the real winners in the Bay Area aren’t technology people but landlords. The housing situation in San Francisco alone is sufficient to prove that “nerds”, at least of that stripe, aren’t ready to rule.

We’ve also seen a hard-core Damaso Effect. The highest-status people in the Valley aren’t lifelong technologists, but people who failed out of the corporate mainstream, and are bossing nerds around as a second act. Passed over for MD at Goldman Sachs? Go to California, become a VC partner, and boss nerds around. Bottom 5% of your class in business school, untouchable even by second-rate management consultancies? Congratulations, you’re now the VP of HR at a well-funded, 100-person startup.

If the highest positions in the Valley are reserved for people who failed out of the dominant, old-style, “paper belt” culture, then we’re not going to see much innovation or “nerdiness”. Indeed, most of the winners in this crop are well-connected, full-blooded douchebags like Evan Spiegel and Lucas Duplan, who couldn’t even code themselves a better haircut. The unsettling but faultless distinction of being the last genuine nerd to succeed in the Valley probably goes to Mark Zuckerberg.

The new Valley isn’t one where underdogs can succeed. It’s not a place where land is cheap and ideas flow freely. Instead, it’s where highly productive and often creative, but politically disorganized, people (“nerds”) are treated as a resource from which value is extracted. The dominating culture of the Valley now is that of resource extraction. Economically, the latest incarnation has more in common with Saudi Arabia than with what Silicon Valley used to be. The only difference is that, instead of oil being drilled out of the ground, it’s hours of effort and creative passion being mined from talented but politically clueless (and usually quite young) software professionals, who haven’t figured out yet that their work is worth several times what they’re paid to do it, and that the importance of their skills ought to command some basic respect and equity. 

All this said, I’d like to step away from the name-calling and mudslinging. This isn’t some high-minded rejection of those impulses, because I love name-calling and mudslinging. I just want to take a more technical tack. It is fun to pillory the worst people (and that’s why I’m a faithful reader of Valleywag) but, nonetheless, their mere existence shouldn’t prevent the good guys (“nerds”) from succeeding. And, of course, the tribalism that I sometimes invoke (good nerds versus evil businessmen) is, if taken literally, quite ridiculous. There are bad nerds and there are good business people, and I’ve never wanted to imply otherwise. What’s distressing about the Valley is that it so rarely attracts the good business people.

In fact, the bad reputation of business people in the Valley, I think, stems largely from the fact that the competent ones never go there, instead working on billion-dollar Wall Street deals. This makes it easy for a young programmer to forget (or to never know) that they exist. The competent, ethical business people tend either to stay in New York and climb the finance ladder and end up running hedge funds, or deliberately downshift and run smaller “lifestyle” businesses or elite consultancies. Either they win the corporate game, or they leave it, but they don’t half-ass it by playing a less competitive but equally soulless corporate game on the other coast. It’s the inept and malignant ones who tend to find their way out to the Valley, attracted by the enormous “Kick Me” sign that each generation of young software engineers has on its back. 

The degenerate balance of power in the Valley attracts the bad business people, those who can’t make it on their home turf. Meanwhile, the talented and scrupulous ones tend to avoid it, not wanting the association. If the worst students out of Harvard Business School are becoming startup founders and venture capitalists and building nerd armies to build toilet check-in apps, then the best students from those programs are going to stay far away from that ecosystem. 

So why is the balance of power so broken, in the Valley? I think the answer is its short-term focus. To put numbers to it, let’s note that a typical Valley startup requires a business co-founder (who’ll become CEO) and a technical one (who’ll become CTO). The business co-founder raises money, and the technology co-founder builds the product and team. The general rule with founder inclusion is “Code or Contacts”. If someone’s not packing either, you can’t afford the complexity and cost of making him a founder. Both seem important, so why does the “Contacts” side tend to win? Why can’t “Code” people catch a break?

Let’s look at it as a bilateral matching problem, like dating, and assign an attractiveness level to each. For the purposes of this essay, I’m using a 10-point rating scale. 0-2 are the unqualified, 3-4 are the hangers-on, and 5-6 are average among the set (a fairly small one) of people equipped to found startups at all. 7-8 are the solid players of evident high capability, and 9-10 occur at a rate of a few per generation, and people tend to make “Chuck Norris” jokes about them.

You’d expect 10s to match with 10s, 4s with 4s, and so on. Of course, externalities can skew the market and push one side down a notch. In a typical college environment, for example, women are more attractive to older men (including graduate students and professors) and therefore have more options, and thus more power, than their same-age male counterparts. Mature women simply have no interest in 18-year-old men, while older men do frequently pair off with young women. That power dynamic tends to reverse with age, to the point that women have a justified grievance about the dating scene later in life; but in college, women hold all the cards.

What does it look like in technology? I’m a Technology 8 and, realistically, I couldn’t expect to pair with a Business 8. It’s not that the Biz 8 is rarer or better than the Tech 8. It’s just that he has more options. Technology people are rated based on what they know and can do. Business people are rated based on connections and what they can squeeze out of their pre-existing relationships. A Business 5 would be a typical graduate of a top-10 MBA program with the connections that implies. A Business 6 has enough contacts to raise venture funding, on reasonable terms, with a product. A Business 7 can raise a Series A without a product, is frequently invited to lunch with CEOs of large companies, and could become a partner at Sequoia on the strength of a conversation. What about my at-level counterpart, the Business 8? He’s probably not even in venture-funded technology. He likely has a personal “search fund” and is doing something else.

The Business 8s and 9s have so many options outside of the Valley that they’re almost never seen there. In fact, a genuine Business 8 who entered the Valley would likely become the most powerful person in it. On the other hand, Technology 8s like me aren’t nearly as rare. In fact, and it seems almost tautological that it would be this way, Tech 10s and 9s and 8s are found predominantly in technology. Where else would they go?

In a weird way, the Tech 10s are in a disadvantageous position because they have to stay in the industry to remain Tech 10s. The typical Tech 10 works on R&D in a place like Google X, and there aren’t many jobs that offer that kind of autonomy. If he leaves that world, he’ll slide down to Tech 9 (or even 8) status in a few years. The technical co-founder’s major asset requires continual work to remain sharp, and at the 7+ level, it’s pretty damn competitive. The business co-founder’s asset (connections) is much less perishable. In fact, the weird benefit of connections is that they get more powerful with time. College connections are taken to be much deeper than professional ones, and prep-school connections run deeper still. Why? Because connections and pedigree are all about nostalgia. Blue blood runs deep and must be old; otherwise, it won’t be properly blue. For all of our mouth-honored belief in progress as a technological society, we’re still run by people who are inflexibly backward-looking and neophobic. It’s no wonder, one might remark, that Silicon Valley’s greatest contribution to the world over the past five years has been a slew of toilet check-in apps.

The Tech 8s and 9s and 10s are generally funneled into technology, because few other environments will even let them maintain (much less improve) their skills. On the other hand, the Biz 8s and 9s and 10s are drawn away by other and better options. Alone, this wouldn’t be that devastating. On both scales, there are more people at each level than at the ones above it, so the absence of the Biz 8+ might have the Tech 8-10s pairing with Biz 7s and the Tech 7s pairing with Biz 6s. Fine; they’ll live. A Biz 6 or 7 can still open plenty of doors. Speaking as a Tech 8, I’d be happy to pair with a Biz 7 as an equal partner. 

Unfortunately, the advantage of business co-founders seems to be about 3-4 points, at any level. That’s what you pay for their connections. If you’re a Tech 6 and you pair up with a Biz 2-3 co-founder, you’ll probably split the equity evenly (but raising money will be extremely difficult, if not impossible). Pair with a Biz 4, as a Tech 6, and you’ll probably get a third of what he does. If you pair with a Biz 6 co-founder, you’re likely to get 5%. It’s unfair, and unreasonable, but that’s where the market seems to have settled. Why? Removal of the Business 8+ from the market is not, alone, enough to explain it.

I’ve explained that the business people have up-flight options into other industries. They also have down-flight options when it comes to picking co-founders. If a Tech 8 turns down a Biz 8 to work with a Biz 6, then he’s saying no to someone with millions in VC funding that is essentially already-in-hand, in order to work with someone who might be able to deliver Sequoia after 6 months of work on the product. The Biz 8 is so trusted by the investors that the Tech 8 will probably get a raise to take the founder job; if he works for the Biz 6, he’ll end up working for free for half a year. What, on the other hand, happens to a Biz 8 who turns down his Tech-8 counterpart for a Tech 6? 

The answer is… (crickets) nothing. Investors simply don’t care about the difference. The Biz 8 could pair with a Tech 3 or even another business guy and, while the technical infrastructure of the company would be terrible, it wouldn’t really matter. In the Valley, technology is just made to be sold, not actually used. Why hire a Tech 7+ who’s going to make annoying sounds about “functional programming” and “deep neural nets” when you can just hire another VC-connected salesman? Why worry about building a company “for investors” when, with more-connected people involved, you can always just get more investors to pay back the early ones?

At any chosen level, the business co-founder can choose a tech co-founder 2 to 4 points below him and not lose very much. Strong technical leadership matters if the business is going to last a long time, but these companies are built to be sold or shot dead within four years, so who cares if the technology sucks? To the Biz 8, a Tech 4 willing to take a terrible equity split is a good-enough substitute for the Tech 8. The same doesn’t hold for the Tech 8. Expecting a Biz 4 to handle the VCs is just not an option for him. Even if a Biz 4 or 5 is able to get venture capital, and that itself is unlikely, he’ll probably still be eaten alive by the investors, resulting in terms like “participating preferred” that will emasculate the entire company. 

What all this means is that the business co-founders are more essential not only because they’re rarer, but because these businesses don’t last long enough– and aren’t built to last, either– for poor technical leadership to really matter. In such a climate, the leverage of the Tech 7+ is incredibly weak. The value offered by a genuine Tech 8 in the early stages is crucial to a company if one takes a 5-, 10-, or 20-year view, but no one does that anymore. No one cares beyond the sale. The perverse result of this is that technical talent has become non-essential on its home turf. As a result, more power accrues every month to the vision-less, egotistical business guys running the show in Silicon Valley, and lifelong technologists (“nerds”) are even more of a marginalized class.

I’d like to say that I know how to fix this, how to kill the next-quarter mentality and drive the invaders back out of this wonderful territory (cutting-edge technology) where they are most unwelcome, but the truth is that I don’t know what to do. The short-term mentality seems to be a permanent fixture and, in that light, I don’t think that talented technologists are ever going to get a fair shake against influence peddlers, manipulators, and merchants of connections. I’m sorry, but I can’t fix it, and I don’t know whether it can be fixed. 

Why there are so few AI jobs

Something began in the 1970s that has been described as “the AI winter”, but to call it that is to miss the point, because the social illness it represents involves much more than artificial intelligence (AI). AI research was one of many casualties that came about as anti-intellectualism revived itself and society fell into a diseased state.

One might call the “AI winter” (which is still going on) an “interesting work winter”, and it pertains to much more of technology than AI alone, because it represented a sea change in what it meant to be a programmer. Before the disaster, technology jobs had an R&D flavor, like academia but with better pay and less of the vicious politics. After the calamitous 1980s and the replacement of R&D by M&A, work in interesting fields (e.g. machine learning, information retrieval, language design) became scarce and over 90% of software development became mindless, line-of-business makework. At some point, technologists stopped being autonomous researchers and started being business subordinates, and everything went to hell. What little interesting work remained was only available in geographic “super-hubs” (such as Silicon Valley) where housing prices are astronomical compared to the rest of the country. Due to the emasculation of technology research in the U.S., economic growth slowed to a crawl, and the focus of the nation’s brightest minds turned to the creation of asset bubbles (seen in 1999, 2007, and 2014) rather than the generation of long-lasting value.

Why did this happen? Why did the entrenched public- and private-sector bureaucrats (with, even among them, the locus of power increasingly shifting to private-sector bureaucrats, who can’t be voted out of office) who run the world lose faith in the research being done by people much smarter, and much harder-working, than they are? The answer is simple. It’s not even controversial. End of the Cold War? Nah, it began before that. At fault is the lowly perceptron.

Interlude: a geometric puzzle

This is a simple geometry puzzle. Below are four points at the corners of a square, colored (and numbered) like so:

0 1
1 0

Is it possible to draw a line that separates the red points (0’s) from the green points (1’s)?

The answer is that it’s not possible. Draw a circle passing through all four points, and notice that the colors alternate around it. Any line can intersect that circle at no more than two points, so a line with two of the four points on each side would have to group two circle-adjacent points together on one side; but adjacent points are of opposing colors. Another way to say this is that the classes (colors) aren’t linearly separable.

What is a perceptron?

“Perceptron” is a fancy name given to a mathematical function with a simple description. Let w be a known “weight” vector (if that’s an unfamiliar term, a list of numbers) and x be an input “data” vector of the same size, with the caveat that x[0] = 1 (a “bias” term) always. The perceptron, given w, is a virtual “machine” that computes, for any given input x, the following:

  • 1, if w[0]*x[0] + … + w[n]*x[n] > 0,
  • 0, otherwise (that is, if w[0]*x[0] + … + w[n]*x[n] ≤ 0).

In machine learning terms, it’s a linear classifier. If there’s a linear function that cleanly separates the “Yes” class (the 1 values) from the “No” class (the 0 values), it can be expressed as a perceptron. In that linearly separable case, there’s an elegant algorithm for finding a working weight vector, and it always converges.
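
To make this concrete, here’s a minimal sketch in Python (my own illustration, not anything canonical; the function names and the AND-gate data are mine): the classic perceptron learning rule nudges the weights toward each misclassified example, and for linearly separable data it provably terminates.

    # A tiny perceptron with Rosenblatt's learning rule (illustrative code).
    # x[0] is the constant bias input, always 1.
    def predict(w, x):
        return 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

    def train(examples, n_epochs=100):
        w = [0.0] * len(examples[0][0])
        for _ in range(n_epochs):
            errors = 0
            for x, label in examples:
                delta = label - predict(w, x)   # -1, 0, or +1
                if delta != 0:
                    errors += 1
                    # Nudge the weights toward (or away from) the missed example.
                    w = [wi + delta * xi for wi, xi in zip(w, x)]
            if errors == 0:     # a full clean pass over the data: we're done
                return w
        return w                # epoch cap hit; the data may not be separable

    # AND is linearly separable, so this converges in a handful of epochs.
    AND_GATE = [([1, 0, 0], 0), ([1, 0, 1], 0), ([1, 1, 0], 0), ([1, 1, 1], 1)]
    w = train(AND_GATE)
    print([predict(w, x) for x, _ in AND_GATE])   # prints [0, 0, 0, 1]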

A mathematician might say, “What’s so interesting about that? It’s just a dot product being passed through a step function.” That’s true. Perceptrons are very simple. A single perceptron can solve more decision problems than one might initially think, but it can’t solve all of them. It’s too simple a model.

Limitations

Let’s say that you want to model an XOR (“exclusive or”) gate, corresponding to the following function:

+------+------+-----+
| in_1 | in_2 | out |
+------+------+-----+
|   0  |   0  |  0  |
|   0  |   1  |  1  |
|   1  |   0  |  1  |
|   1  |   1  |  0  |
+------+------+-----+

One might recognize that this is identical to the “brainteaser” above, with in_1 and in_2 corresponding to the x- and y-dimensions in the coordinate plane. This is the same problem. This function is nonlinear; it could be expressed as f(x, y) = x + y – 2xy, and that’s arguably the simplest representation of it that works. A separating “plane” in the 2-dimensional space of the inputs would be a line, and there’s no line separating the two classes. It’s mathematically obvious that the perceptron can’t do it. I showed this, above, using high-school geometry.

To a mathematician, this isn’t surprising. Marvin Minsky (with Seymour Papert, in the 1969 book Perceptrons) pointed out the mathematically evident limitations of a single perceptron. One can model intricate mathematical functions with more complex networks of perceptrons and perceptron-like units, called artificial neural networks. They work well. One can also, using what are called “basis expansions”, generate further dimensions from existing data in order to create a higher-dimensional space in which linear classifiers still work. (That’s what people usually do with support vector machines, which provide the machinery to do so efficiently.) For example, adding xy as a third “derived” input dimension would make the classes (0’s and 1’s) linearly separable. There’s nothing mathematically wrong with doing that; it’s something that statisticians do when they want to build complex models but still have some of the analytic properties of simpler ones, like linear regression or nearest-neighbor modeling.
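
To see both halves of this in running code, here’s the same sketch (reusing the train and predict functions from above) applied to XOR: with the raw two inputs it never settles, exactly as the geometry says it can’t, while appending the single derived feature x*y makes the classes linearly separable, and the same algorithm converges.

    # XOR inputs as [bias, x, y]. No weight vector classifies these correctly,
    # so train() just hits its epoch cap.
    XOR_GATE = [([1, 0, 0], 0), ([1, 0, 1], 1), ([1, 1, 0], 1), ([1, 1, 1], 0)]
    w = train(XOR_GATE)
    print([predict(w, x) for x, _ in XOR_GATE])   # never equals [0, 1, 1, 0]

    # Basis expansion: append the derived feature x*y, giving [bias, x, y, x*y].
    # In this expanded input space, the two classes are linearly separable.
    XOR_EXPANDED = [(x + [x[1] * x[2]], label) for x, label in XOR_GATE]
    w = train(XOR_EXPANDED)
    print([predict(w, x) for x, _ in XOR_EXPANDED])   # prints [0, 1, 1, 0]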

The limitations of the single perceptron do not invalidate AI. At least, they don’t if you’re a smart person. Everyone in the AI community could see the geometrically obvious limitation of a single perceptron, and not one of them believed that it came close to invalidating their work. It only proved that more complex models were needed for some problems, which surprised no one. Single-perceptron models might still be useful for computational efficiency (in the 1960s, computational power was about a billion times as expensive as now) or because the data don’t support a more complex model; they just couldn’t learn or model every pattern.

In the AI community, there was no scandal or surprise. That some problems aren’t linearly separable is not surprising. However, some nerd-hating non-scientists (especially in business upper management) took this finding to represent more than it actually did.

They fooled us! A brain with one neuron can’t have general intelligence!

The problem is that the world is not run, and most of the wealth in it is not controlled, by intelligent people. It’s run by social-climbing empty-suits who are itching for a fight and would love to take some “eggheads” down a notch. Insofar as an artificial neural network models a brain, a perceptron models a single neuron, which can’t be expected to “think” at all. Yet the fully admitted limitations of a single perceptron were taken, by the mouth-breathing muscleheads who run the world, as an excuse to shit on technology and pull research funding because “AI didn’t deliver”. That produced an academic job market that can only be described as a pogrom, but it didn’t stop there. Private-sector funding dried up as short-term, short-tempered management came into vogue.

To make it clear, no one ever said that a single perceptron can solve every decision problem. It’s a linear model. That means it’s restricted, intentionally, to a small subspace of possible models. Why would people work with a restricted model? Traditionally, it was for a lack of data. (We’re in the 1960s and ’70s, when data was contained on physical punch cards, a megabyte weighed something, and a disk drive cost more than a car.) If you don’t have a lot of data, you can’t build complex models. For many decision problems, the humble perceptron (like its cousins, logistic regression and support vector machines) did well and, unlike more computationally intensive linear classification methods (logistic regression, which requires gradient descent, or a variant thereof, over the log-likelihood surface; or the support vector machine, which poses a quadratic programming problem that we didn’t know how to solve efficiently until the 1990s), it could be trained with minimal computational expense, in a bounded amount of time. Even today, linear models are surprisingly effective for a large number of problems. For example, the first spam classifiers (Naive Bayes) operated using a linear model, and it worked well. No one was claiming that a single perceptron was the pinnacle of AI. It was something that we could build cheaply on 1970s-era hardware and that produced a working model on many important datasets.
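
As an aside on why Naive Bayes counts as a linear model, here’s a toy sketch. The word probabilities and the prior are numbers I made up, and a real implementation would smooth the counts and also score the absent words (that term is linear too). The point is the shape of the decision rule: a weighted sum of features, compared against a threshold.

    import math

    # Made-up per-word probabilities, as if estimated by counting labeled mail.
    p_spam = {"viagra": 0.90, "meeting": 0.05}
    p_ham  = {"viagra": 0.01, "meeting": 0.30}
    prior_log_odds = math.log(0.4 / 0.6)   # assuming P(spam) = 0.4

    def spam_score(words_present):
        # Linear in the 0/1 "word present" features: a constant (the prior)
        # plus one additive log-odds weight per word, the same weighted-sum
        # shape that the perceptron thresholds.
        return prior_log_odds + sum(
            math.log(p_spam[word] / p_ham[word]) for word in words_present)

    print(spam_score({"viagra"}) > 0)    # True: flagged as spam
    print(spam_score({"meeting"}) > 0)   # False: kept as ham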

Winter war

Personally, I don’t think that the AI Winter was an impersonal, passive event like the changes of seasons. Rather, I think it was part of a deliberate resurgence of anti-intellectualism in a major cultural war– one which the smart people lost. The admitted limitations of one approach to automated decision-making gave the former high school bullies, now corporate fat cats, all the ammo they needed in order to argue that those “eggheads” weren’t as smart as they thought they were. None of them knew exactly what a perceptron or an “XOR gate” was, but the limitation that I’ve described was morphed into “neural networks can’t solve general mathematical problems” (arguably untrue) and that turned into “AI will never deliver”. In the mean-spirited and anti-liberal political climate of the 1980s, this was all that anyone needed as an excuse to cut public funding. The private sector not only followed suit, but amplified the trend. The public cuts were a mix of reasonable fiscal conservatism and mean-spirited anti-research sentiment, but the business elites responded strongly to (and took to a whole new level) the mean-spirited aspect, flexing their muscles as elitism (thought vanquished in the 1930s to ’50s) became “sexy” again in the Reagan Era. Basic research, which gave far too much autonomy and power to “eggheads”, was slashed, marginalized, and denigrated.

The claim that “AI didn’t deliver” was never true. What actually happened is that we solved a number of problems, once thought to require human intelligence, with a variety of advanced statistical means as well as some insights from fields like physics, linguistics, ecology and economics. Solving problems demystified them. Automated mail sorting, once called “artificial intelligence”, became optical character recognition. This, perhaps, was part of the problem. Successes in “AI” were quickly put into a new discipline. Even modern practitioners of statistical methods are quick to say that they do machine learning, not AI. What was actually happening is that, while we were solving specific computational problems once thought to require “intelligence”, we found that our highly specialized solutions did well on the problems they were designed for, and could be adapted to similar problems, but with very slow progress toward general intelligence. As it were, we’ve learned in recent decades that our brains are even more complicated than we thought, with a multitude of specialized modules. That no specific statistical algorithm can replicate all of them, working together in real time, shouldn’t surprise anyone. Is this an issue? Does it invalidate “AI” research? No, because most of those victories, while they fell short of replicating a human brain, still delivered immense economic value. Google, although it eventually succumbed to the sociological fragility and failure that inexorably follow closed allocation, began as an AI company. It’s now worth over $360 billion.

Also mixed in with the anti-AI sentiment is the religious aspect. It’s still an open and subjective question what human intelligence really is. The idea that human cognition could be replicated by a computer offended religious sentiments, even though few would consider automated mail sorting to bear on unanswerable questions about the soul. I’m not going to go deep into this philosophical rabbit hole, because I think it’s a waste of time to debate why people believe AI research (or, for a more popular example, evolution by natural selection) to offend their religious beliefs. We don’t know what qualia are or where they come from. I’ll just leave it at this. If we can use advanced computational techniques to solve problems that were expensive, painful, or impossible given the limitations of human cognition, we should absolutely do it. Those who object to AI on religious grounds fear that advanced computational research will demystify cognition and bring about the end of religion. Ignoring the question of whether an “end of religion” is a bad thing, or what “religion” is, there are two problems with this. First, if there is something to us that is non-material, we won’t be able to replicate it mechanically and there is no harm, to the sacred, in any of this work. Second, computational victories in “AI” tend to demystify themselves and the subfield is no longer considered “AI”. Instead, it’s “optical character recognition” or “computer game-playing”. Most of what we use on a daily basis (often behind the scenes, such as in databases) comes from research that was originally considered “artificial intelligence”.

Artificial intelligence research has never told us, and will never tell us, whether it is more reasonable to believe in gods and religion or not to believe. Religion is often used by corrupt, anti-intellectual politicians and clerics to rouse sentiment against scientific progress, as if automation of human grunt work were a modern-day Tower of Babel. Yet, to show what I mean by AI victories demystifying themselves, almost no one would hesitate to use Google, a web-search service powered by AI-inspired algorithms.

Why do the anti-intellectuals in politics and business wish to scare the public with threats of AI-fueled irreligion and secularism (as if those were bad things)? Most of them are intelligent enough to realize that they’re making junk arguments. The answer, I think, is about raw political dominance. As they see it, the “nerds” with their “cushy” research jobs can’t be allowed to (gasp!) have good working conditions.

The sad news is that the anti-intellectuals are likely to take the economy and society down with them. In the 1960s, when we were putting billions of dollars into “wasteful” research spending, the economy grew at a record pace. The world economy was growing at 5.7 percent per year, and the U.S. economy was the envy of the world. Now, in our spartan time of anti-intellectualism, anti-science sentiment, and corporate elitism, the economy is sluggish and the society is stagnant– all because the people in charge can’t stand to see “eggheads” win.

Has AI “delivered”?

If you’re looking to rouse religious fear and fury, you might make a certain species of fantastic argument against “artificial intelligence”. The truth of the matter, however, is that while we’ve seen domain-specific superiority of machines over human intelligence in rote processes, we’re still far from creating an artificial general intelligence, i.e. a computational entity that can exhibit the general learning capability of a human. We might never do it. We might not need to and, I would argue, we shouldn’t if it turns out not to be useful.

In a way, “artificial intelligence” is a defined-by-exclusion category of “computational problems we haven’t solved yet”. Once we figure out how to make computers better at something than humans are, it becomes “just computation” and is taken for granted. Few believe they’re using “an AI” when they use Google for web search, because we’re now able to conceive of the computational work it does as mechanical rather than “intelligent”.

If you’re a business guy just looking to bully some nerds, however, you aren’t going to appeal to religion. You’re going to make the claim that all this work on “artificial intelligence” hasn’t “delivered”. (Side note: if someone uses “deliver” intransitively, as business bullies are wont to do, you should punch that person in the face.) Saying someone or something isn’t “delivering” is a way to put false objectivity behind a claim that means nothing other than “I don’t like that person”. As for AI, it’s true that artificial general intelligence has eluded us thus far, and continues to do so. It’s an extremely hard problem: far harder than the optimists among us thought it would be, fifty years ago. However, the CS research community has generated a hell of a lot of value along the way.

The disenchantment might be similar to the question about “flying cars”. We actually have them. They’re called small airplanes. In the developed world, a person of average means can learn how to fly one. They’re not even that much more expensive than cars. The reason so few people use airplanes for commuting is that it just doesn’t make economic sense for them: the savings of time don’t justify increased fuel and maintenance costs. But a middle-class American or European can, if she wants, have a “flying car” right now. It’s there. It’s just not as cheap or easy to use as we’d like. With artificial intelligence, that research has brought forth a ridiculous number of victories and massive economic growth. It just hasn’t brought forth an artificial general intelligence. That’s fine; it’s not clear that we need to build one in order to get the immense progress that technologists create when given the autonomy and support they need.

Back to the perceptron

One hard truth I’ve learned is that any industrial effort will have builders and politicians. It’s very rare that someone is good at both. In the business world, those unelected private-sector politicians are called “executives”. They tend, for a variety of reasons, to put themselves into pissing contests with the builders (“eggheads”) who are actually making stuff. One time-tested way to show up the builders is to take something that is obviously true (leading the builders to agree with the presentation) but present it out of context in a way that is misleading.

The incapacity of the single perceptron at general mathematical modeling is a prime example of this. Not one AI researcher was surprised that such a simple model couldn’t describe all patterns or equational relationships; that fact can be proven, as I did above, with high-school geometry. That a single perceptron can’t model a key logical operation is, as above, obviously true. The builders knew it, and agreed. Unfortunately, what the builders failed to see was that the anti-intellectual politicians were taking this fact way out of context, using the known limitations of a computational building block to ascribe limitations (that did not exist) to general structures. This led to the general dismantling of public, academic, and private support for technological research, an anti-intellectual and mean-spirited campaign that continues to this day.

That’s why there are so few AI jobs.

Technology’s Loser Problem

I’m angry. The full back story isn’t worth getting into, but there was a company where I applied for a job in the spring of 2013: to build its machine learning infrastructure from scratch. It was a position of technical leadership (Director equivalent, but writing code with no reports) and I would have been able to use Clojure. As it were, I didn’t get it. They were looking for someone more experienced, who’d built those kinds of systems before, and wouldn’t take 6 months to train up to the job. That, itself, is not worth getting angry about. Being turned down happens, especially at high levels.

I found out, just now, that the position was not filled. Not then. Not 6 months later. Not to this day, more than a year later. It has taken them longer to fill the role than it would have taken for me to grow into it.

When they turned me down, it didn’t faze me. I thought they’d found a better candidate. That happens; the only thing I can do is make myself better. I was, however, a bit irked to learn that they hadn’t filled the position for longer than it would have taken me to gain the necessary experience. I lost, and so did they.

That’s not what makes me angry. Rationally, I realize that most companies aren’t going to call back a pretty-good candidate they rejected because they had just opened the position and thought they could do better (if you’re in the first 37% of candidates for a job, optimal-stopping theory says it makes sense for them not to choose you; empirically, the first and second applicants for a high-level position rarely get it). That’s the sort of potentially beneficial but extremely awkward social process that just won’t happen. What makes me angry is the realization of how common a certain sort of decision is in the technology world. We make a lot of lose-lose decisions that hurt all of us. Extremely specific hiring requirements (that, in bulk, cost the company more in waiting time than training a 90% match up to the role) are just the tip of the iceberg.

You know those people who complain about the lack of decent <gender of sexual interest> but (a) reject people for the shallowest, stupidest reasons, (b) aren’t much of a prize and don’t work to better themselves, and (c) generally refuse to acknowledge that the problem is rooted in their own inflated perception of their market value? That’s how I feel every time I hear some corporate asswipe complain about a “talent shortage” in technology. No, there isn’t one. You’re either too stingy or too picky or completely inept at recruiting, because there’s a ton of underemployed talent out there.

Few of us, as programmers, call the initial shots. We’ve done a poor job of making The Business listen to us. However, when we do have power, we tend to fuck it up. One of the problems is that we over-comply with what The Business tells us it wants. For example, when a nontechnical CEO says, “I only want you to hire absolute rock stars”, what he actually means is, “Don’t hire an idiot just to have a warm body or plug a hole”. However, because they tend to be literal, over-compliant, and suboptimal, programmers will interpret that to mean, “Reject any candidate who isn’t 3 standard deviations above the mean.” This leads to positions not being filled, because The Business is rarely willing to pay what one standard deviation above the mean costs, let alone three.

Both sides now

I’ve been on both sides of the interviewing and hiring process. I’ve seen programmers’ code samples described with the most vicious language over the most trivial mistakes, or even stylistic differences. I’ve seen job candidates rejected for the most god-awful stupid reasons. In one case, the interviewer clearly screwed up (he misstated the problem in a way that made it impossible) but, unwilling to lose face by admitting the problem was on his end, he claimed the candidate failed the question. Another was dinged on a back-channel reference (don’t get me started on that sleazy practice, which ought to be illegal) claiming, without any evidence, that “he didn’t do much” on a notable project four years ago. I once saw an intern denied a full-time offer because he lived in an unstylish neighborhood. (The justification was that one had to be “hungry”, mandating Manhattan.) Many of us programmers are so butthurt about not being allowed to sit at the cool kids’ table that, when given the petty power associated with interviewing other programmers, the bitch-claws come out in a major way.

Having been involved in interviewing and recruiting, I’ll concur that there are a significant number of untalented applicants. If it’s 99.5 percent, you’re doing a lot of things wrong, but most resumes do come from people way out of their depth. Moreover, as with dating, there’s an adverse weighting in play. Most people aren’t broken, but broken people go on orders of magnitude more dates than everyone else, which is why most peoples’ dating histories have a disproportionate representation of horror stories, losers, and weirdos. It’s the same with hiring, but phone screening should filter against that. If you’re at all good at it, about half of the people brought in-office will be solid candidates.

Of course, each requirement cuts down the pool. Plenty of companies (in finance, some officially) have a “no job hoppers” or “no unemployeds” rule. Many mandate high levels of experience in new technologies (even though learning new technologies is what we’re good at). Then, there are those who are hung up on reference checking in weird and creepy ways. I know of one person who proudly admits that his reference-checking protocol is to cold-call a random person (again, back-channel) in the candidate’s network and ask the question, without context, “Who is the best person you’ve ever worked with?” If anyone other than the candidate is named, the candidate is rejected. That’s not being selective. That’s being an invasive, narcissistic idiot. Since each requirement shrinks the pool of qualified people, it doesn’t take long before the prejudices winnow an applicant pool down to zero.

Programmers? Let’s be real here, we kinda suck…

As programmers, we’re not very well-respected, and when we’re finally paid moderately well, we let useless business executives (who work 10-to-3 and think HashMap is a pot-finding app) claim that “programmer salaries are ridiculous”. (Not so.) Sometimes (to my horror) you’ll hear a programmer even agree that our salaries are “ridiculous”. Fuck that bullshit; it’s factually untrue. The Business is, in general, pretty horrible to us. We suffer under closed allocation, deal with arbitrary deadlines, and if we don’t answer to an idiot, we usually answer to someone else who does. Where does the low status of programmers come from? Why are we treated as cost centers instead of partners in the business? Honestly… much of the problem is us. We’ve failed to manage The Business, and the result is that it takes ownership of us.

Most of the time, when a group of people is disproportionately successful, the cause isn’t any superiority of the average individual, but a trait of the group: they help each other out. People tend to call these formations “<X> Mafia” where X might be an ethnicity, a school, or a company. Y Combinator is an explicit, pre-planned attempt to create a similar network; time will tell if it succeeds. True professions have it. Doctors look out for the profession. With programmers, we don’t see this. There isn’t a collective spirit: just long email flamewars about tabs versus spaces. We don’t look out for each other. We beat each other down. We sell each other out to non-technical management (outsiders) for a shockingly low bounty, or for no reason at all.

In many investment banks, there’s an established status hierarchy in which traders and soft-skills operators (“true bankers”) are at the top, quants are in the middle, and programmers (non-quant programmers are called “IT”) are even lower. I asked a high-ranking quant why it was this way, and he explained it in terms of the “360 degree” performance reviews. Bankers and traders all gave each other top ratings, and wrote glowing feedback for minor favors. They were savvy enough to figure out that it was best for them to give great reviews up, down, and sideways, regardless of their actual opinions. Quants tended to give above-average ratings and occasionally wrote positive feedback. IT gave average ratings for average work and plenty of negative feedback. The programmers were being the most honest, but hurting each other in the process. The bankers and traders were being political, and that’s a good thing. They were savvy enough to know that it didn’t benefit them to sell each other out to HR and upper management. Instead, they arranged it so they all got good ratings and the business had to, at baseline, appreciate and reward all of them. While it might seem that this hurt top performers, it had the opposite effect. If everyone got a 50 percent bonus and a 20 percent raise, management had to give the top people (and, in trading, it’s pretty obvious who those are) even more.

Management loves to turn high performers against the weak, because this enables management to be stingy on both sides. The low performers are fired (they’re never mentored or reassigned) and the high performers can be paid a pittance and still have a huge bonus in relative terms (not being fired vs. being fired). What the bankers were smart enough to realize (and programmers, in general, are not) is that performance is highly context-driven. Put eight people of exactly equal ability on a team to do a task and there will be one leader, two or three contributors, and the rest will be marginal or stragglers. It’s just more efficient to have the key knowledge in a small number of heads. Open source projects work this way. What this means is that, even if you have excellent people and no bad hires, you’ll probably have some who end up with not much to show for their time (which is why open allocation is superior; they can reassign themselves until they end up in a high-impact role). If management can see who is in what role, it can fire the stragglers and under-reward the key players (who, because they’re already high performers, are probably motivated by things other than money… at least, for now). The bankers and traders (and, to a lesser extent, the quants) had the social savvy and sense to realize that it was best that upper management not know exactly who was doing what. They protected each other, and it worked for them. The programmers, on the other hand, did not, and this hurt top performers as well as those on the bottom.

Let’s say that an investment bank tried to impose tech-company stack ranking on its employees, associate level and higher. (Analyst programs are another matter, not to be discussed here.) Realizing the mutual benefit in protecting each other, the bankers would find a way to sabotage the process by giving everyone top ratings, ranking the worst employees highly, or simply refusing to do the paperwork. And good for them! Far from being unethical, this is what they should do: collectively work The Business to get what they’re actually worth. Only a programmer would be clueless enough to go along with that nonsense.

In my more pessimistic moods, I tend to think that we, as programmers, deserve our low status and subordinacy. As much as we love to hate those “business douchebags” there’s one thing I will say for them. They tend to help each other out a lot more than we do. Why is this? Because they’re more political and, again, that might not be a bad thing. Ask a programmer to rate the performance of a completely average colleague and you’ll get an honest answer: he was mediocre, we could have done without him. These are factual statements about average workers, but devastating when put into words. Ask a product manager or an executive about an average colleague and you’ll hear nothing but praise: he was indispensable, a world-class player, best hire in ten years. They realize that it’s politically better for them, individually and as a group, to keep their real opinions to themselves and never say anything that could remotely endanger another’s career. Even if that person’s performance was only average, why make an enemy when one can make a friend?

“Bad code”

Let’s get to another thing that we do, as programmers, that really keeps us down. We bash the shit out of each other’s code and technical decision-making, often over minutiae.

I hate bad code. I really do. I’ve seen plenty of it. (I’ve written some, but I won’t talk about that.) I understand why programmers complain about each other’s code. Everyone seems to have an independent (and poorly documented) in-head culture that informs how he or she writes code, and reading another person’s code induces a certain “culture shock”. Even good code can be difficult to read, especially under time pressure. And yes, most large codebases have a lot of code in them that’s truly shitty, sometimes to the point of being nearly impossible to reason about. Businesses have failed because of code quality problems, although (to tell the whole story) it’s rare that one bad programmer can do that much damage. The worst software out there isn’t the result of one inept author, but the result of code having too many authors, often over years. It doesn’t help that most companies assign maintenance work either to junior programmers or to demoted (and disengaged) senior ones, neither category having the power to do it right.

I’d be the last one to come out and defend bad code. That said, I think we spend too much time complaining about each other’s code– and, worse yet, we tend toward the unforgivable sin of complaining to the wrong people. A technical manager has, at least, the experience and perspective to know that, at some level, every programmer hates other peoples’ code. But if that programmer snitches to a non-technical manager or executive, well… you’ve just invited a 5-year-old with a gun to the party. Someone might get fired because “tabs versus spaces” went telephone-game into “Tom does shoddy work and is going to destroy the business”. Because executives are politically savvy enough to protect the group, and only sell each other out in extreme circumstances, what started out as a stylistic disagreement sounds (to the executive ear) like Tom (who used his girlfriend’s computer to fix a production problem at 11:45 on a Friday night, the tabs/spaces issue being for want of an .emacs.d) is deliberately destroying the codebase and putting the whole company at risk.

As programmers, we sell each other out all the time. If we want to advance beyond reasonable but merely upper-working class salaries, and be more respected by The Business, we have to be more careful about this kind of shit. I’ve heard a great number of software engineers say things like, “Half of all programmers should just be fired.” Now, I’ll readily agree that there are a lot of badly-trained programmers out there whose lack of skill causes a lot of pain. But I’m old enough to know that people come to a specific point from a multitude of paths and that it’s not useful to personalize this sort of thing. Also, regardless of what we may think as individuals, almost no doctor or banker would ever say, to someone outside his profession, “half of us should be fired”. They’re savvy enough to realize the value of protecting the group, and handling competence and disciplinary matters internally. Whether to fire, censure, mentor or praise is too important a decision to let it happen outside of our walls.

There are two observations about low-quality code, one minor and one major. The minor one is that code has an “all of us is worse than any of us” dynamic. As more hands pass over code, it tends to get worse. People hack the code to get the specific features they need, never tending to the slow growth of complexity, and the program evolves over time into something that nobody understands because too many people were involved in it. Most software systems fall to pieces not because of incompetent individuals, but because of unmanaged growth of complexity. The major point on code quality is: it’s almost always management’s fault.

Bad code comes from a multitude of causes, only one of which is low skill in programmers. Others include unreasonable deadlines, unwillingness to attack technical debt (a poor metaphor, because the interest rate on technical “debt” is both usurious and unpredictable), bad architecture and tooling choices, and poor matching of programmers to projects. Being stingy, management wants to hire the cheapest people it can find and give them the least time possible in which to do the work. That produces a lot of awful code, even if the individual programmers are capable. Most of the things that would improve code quality (and, in the long term, the health and performance of the business) are things that management won’t let the programmers have: more competitive salaries, more autonomy, longer timeframes, time for refactoring. The only thing that management and the engineers can agree on is firing (or demoting, because their work is often still in use and The Business needs someone who understands it) those who wrote bad code in the past.

One thing I’ve noticed is that technology companies do a horrible job of internal promotion. Why is that? Because launching anything will typically involve compromises with the business on timeframe and headcount, resulting in bad code. Any internal candidate for a promotion has left too many angles for attack. Somewhere out there, someone dislikes a line of code he wrote (or, if he’s a manager, something about a project he oversaw). Unsullied external candidates win, because no one can say anything bad about them. Hence, programming has the culture of mandatory (but, still, somewhat stigmatized) job hopping we know and love.

What’s really at the heart of angry programmers and their raging against all that low-quality code? Dishonest attribution. The programmer can’t do shit about the dickhead executive who set the unreasonable deadlines, or the penny-pinching asswipe managers who wouldn’t allow enough salary to hire anyone good. Nor can he do much about the product managers or “architects” who sit above and make his life hell on a daily basis. But he can attack Tom, his same-rank colleague, over that commit that really should have been split into two. Because they’re socially unskilled and will generally gleefully swallow whatever ration of shit is fed to them by management, most programmers can very easily be made to blame each other for “bad code” before blaming the management that made the bad code inevitable in the first place.

Losers

As a group, software engineers are losers. In this usage, I’m not using the MacLeod definition (which is more nuanced) and my usage is halfway pejorative. I generally dislike calling someone a loser, because the pejorative, colloquial meaning of that word conflates unfortunate circumstance (one who loses) with deserved failure. Here, however, it applies. Why do we lose? Because we play against each other, instead of working together to beat the outside world. As a group, we create our own source of loss.

Often, we engage in zero- or negative-sum plays just to beat the other guy. It’s stupid. It’s why we can’t have nice things. We slug each other in the office and wonder why external hires get placed over us. We get into flamewars about minutiae of programming languages, spread FUD, and eventually some snot-nosed dipshit gets the “brilliant” idea to invite nontechnical management to weigh in. The end result is that The Business comes in, mushroom stamps all participants, and says, “Everything has to be Java”.

Part of the problem is that we’re too honest, and we impute honesty to others when it isn’t there. We actually believe in the corporate meritocracy. When executives claim that “low performers” are more of a threat to the company than their astronomical, undeserved salaries and their doomed-from-the-start pet projects, programmers are the only people stupid enough to believe them, and will often gleefully implement those “performance-based” witch hunts that bankers would be smart enough to evade (by looking for better jobs, and arranging for axes to fall on people planning exits anyway). Programmers attempt to be apolitical, but that ends up being very political, because the stance of not getting political means that one accepts the status quo. That’s radically conservative, whether one admits it or not.

Of course, the bankers and traders realize the necessity of appearing to speak from a stance of professional apolitical-ness. Every corporation claims itself to be an apolitical meritocracy, and it’s not socially acceptable to admit otherwise. Only a software engineer would believe in that nonsense. Programmers hear “Tom’s not delivering” or “Andrea’s not a team player” and conceive of it as an objective fact, failing to recognize that, 99% of the time, it means absolutely nothing more or less than “I don’t like that person”.

Because we’re so easily swayed, misled, and divided, The Business can very easily take advantage of us. So, of course, it does. It knows that we’ll sell each other out for even a chance at a seat at the table. I know a software engineer who committed felony perjury against his colleagues just to get a middle-management position and the right to sit in on a couple of investor meetings. Given that this is how little we respect each other, ourselves, and our work, is it any wonder that software engineers have such low status?

Our gender issues

I’m going to talk, just briefly, about our issues with women. Whatever the ultimate cause of our lack of gender diversity– possibly sexism, possibly that the career ain’t so great– it’s a major indictment of us. My best guess? I think sexism is a part of it, but I think that most of it is general hostility. Women often enter programming and find their colleagues hostile, arrogant, and condescending. They attribute that to their gender, and I’m sure that it’s a small factor, but men experience all of that nonsense as well. To call it “professional hazing” would be too kind. There’s often nothing professional about it. I’ve dealt with rotten personalities, fanaticism about technical preference or style, and condescension and, honestly, don’t think there’s a programmer out there who hasn’t. When you get into private-sector technology, one of the first things you learn is that it’s full of assholes, especially at higher levels.

Women who are brave enough to get into this unfriendly industry take a look and, I would argue, most decide that it’s not worth it to put up with the bullshit. Law and medicine offer higher pay and status, more job security, fewer obnoxious colleagues, and enough professional structure in place that the guy who cracks rape jokes at work isn’t retained just because he’s a “rockstar ninja”.

“I thought we were the good guys?”

I’ve often written from a perspective that makes me seem pro-tech. Originally, I approached the satirical MacLeod pyramid with the belief that “Technocrat” should be used to distinguish positive high-performers (apart from Sociopaths). I’ve talked about how we are a colonized people, as technologists. It might seem that I’m making businesspeople out to be “the bad guys” and treating programmers as “the good guys”. Often, I’m biased in that very direction. But I also have to be objective. There are good business people out there, obviously. (They’re just rare in Silicon Valley, and I’ll get to that.) Likewise, software engineers aren’t all great people, either. I don’t think either “tribe” has a monopoly on moral superiority. As in Lost, “we’re the good guys” doesn’t mean much.

We do get the worst (in terms of ethics and competence) of the management/business tribe in the startup world. That's been discussed at length in the essay linked above. The people who run Silicon Valley aren't technologists or "nerds" but Machiavellian businessmen who've swooped into the Valley to take advantage of said nerds. The appeal of the Valley, for the venture capitalists and non-technical bro executives who run it, isn't technology or the creation of value, but the unparalleled opportunity to take advantage of too-smart, earnest hard workers (often foreign) who are so competent technically that they often generate value unintentionally, but don't know the first thing about how to fight for their own interests.

It’s easy to think ourselves morally superior, just because the specific subset of business people who end up in our game tends to be the worst of that crowd. It’s also a trap. We have a lot to learn form the traders and bankers of the world about how to defend ourselves politically, how to stand a chance of capturing some of the value we create, and how to prevent ourselves from being robbed blind by people who may have lower IQs, but have been hacking humans for longer than we could have possibly been using computers. Besides, we’re not all good. Many of us aren’t much better than our non-technical overlords. Plenty of software engineers would gladly join the bad guys if invited to their table. The Valley is full of turncoat software engineers who don’t give a shit about the greater mission of technology (using knowledge to make peoples’ lives better) and who’d gladly sell their colleagues out to cost-cutting assholes in management.

Then there are the losers. Losers aren't "the bad guys". They don't have the focus or originality that would enable them to pull off anything complicated. Their preferred sin is typically sloth. They'll fail you when you need them the most, and that's what makes them infuriating. They just want to put their heads down and work, and the problem is that they can't be trusted to "get political" when that's exactly what's needed. The danger of losers is in their numbers. The problem is that so many software engineers are clueless, willing losers who'll gladly let political operators take everything from them.

When you’re young and don’t know any better, one of the appeals of software engineering is that it appears, superficially, to tolerate people of low social ability. To people used to artificial competition against their peers, this seems like an attractive trait of the industry; it’s not full of those “smooth assholes” and “alpha jocks”. After several years observing various industries, I’ve come to the conclusion that this attitude is not merely misguided, but counterproductive. You want socially skilled colleagues. Being the biggest fish in a small pond just means that there are no big fish to protect you when the sharks come in. Most of those “alpha jocks” aren’t assholes or idiots (talk to them, nerds; you’ll be surprised) and, when The Business comes in and is looking for a fight, it’s always best to have strong colleagues who’ve got your back.

Here’s an alternate, and quite possible hypothesis: maybe The Business isn’t actually full of bad guys. One thing that I’ve realized is that people tend to push blame upward. For example, the reputation of venture capitalists has been harmed by founders blaming “the VCs” for their own greed and mismanagement. It gives the grunt workers an external enemy, and the clueless can be tricked into working harder than they should (“they don’t really like us and haven’t given us much, but if we kill it on this project and prove them wrong, maybe they’ll change their minds!”). It actually often seems that most of the awfulness of the software industry doesn’t come directly from The Business, but from turncoat engineers (and ex-engineers) trying to impress The Business. In the same way that young gang members are more prone to violence than elder dons, the most creative forms of evil seem to come from ex-programmers who’ve changed their colors.

The common enemy

So long as software engineers can easily be divided against each other on trivial matters like tabs versus spaces and scrum versus kanban, we'll never get the respect (and, more importantly, the compensation) that we're due. These issues distract us from what we really need to do, which is figure out how to work The Business. Clawing at each other, each trying to become the favored harem queen of the capitalist, is suboptimal compared to the higher goal of getting out of the harem.

I’ve spoken of “The Business” as if it were a faceless, malevolent entity. It might sound like I’m anti-business, and I’m not. Business is just a kind of process. Good people, and bad people, start businesses and some add great value to the world. The enemy isn’t private enterprise itself, but the short-term thinking and harem-queen politics of the established corporation. Business organizations get to a point where they cease having a real reason to exist, and all that’s left is the degenerate social contest for high-ranking positions. We, as programmers, seem to lack the skill to prevent that style of closed-allocation degeneracy from happening. In fact, we seem to unintentionally encourage it.

The evil isn’t that software is a business, but that technical excellence has long since been subordinated entirely to the effectively random emotional ups and downs of non-technical executives who lack the ability to evaluate our work. It’s that our weird ideology of “never get political” is actually intensely political and renders us easy to abuse. Business naturally seems to be at risk of anti-intellectual tendencies and, rather than fight back against this process, we’ve amplified it just to enjoy the illusion of being on the inside, among the “cool kids”, part of The Business. Not only does our lack of will to fight for our own interests leave us at the mercy of more skilled business operators, but it attracts an especially bad kind of them. Most business people, actually, aren’t the sorts of corporate assholes we’re used to seeing run companies. It’s just that our lack of social skill appeals to the worst of that set: people who come in to technology to take advantage of all the clueless, loser nerds who won’t fight for themselves. If we forced ourselves to be more discerning judges of character, and started focusing on ethics and creativity instead of fucking tabs-versus-spaces, we might attract a better sort of business person, and have an industry where stack ranking and bastardized-“Agile” micromanagement aren’t even considered.

If we want to improve our situation, we have to do the “unthinkable” (which is, as I’ve argued, actually quite thinkable). We have to get political.

What’s a mid-career software engineer actually worth? Try $779,000 per year as a lower bound.

Currently, people who have either bad intentions or a lack of knowledge are claiming that software engineer salaries are "ridiculous". Now, I'll readily admit that programmers are, relative to the general population, quite well paid. I'm not about to complain about the money I make; I'm doing quite well, in a time and society where many people aren't. The software industry has many problems, but low pay for engineers (at least, for junior and mid-career engineers; senior engineers are underpaid, but that's an issue for another time) doesn't crack the top 5. Software engineers are underpaid relative to the massive amount of value they're capable of delivering, if given proper projects rather than mismanaged, as is typical. In comparison to the rest of society, they do quite well.

So what should a software engineer be paid? There's a wide disparity in skill level, so it's hard to say. I'm going to focus on a competent, mid-career engineer. This is someone with between 5 and 10 years of experience, with continual investment in skill, and probably around 1.6 on this software engineering scale. He's not a hack or the stereotypical "5:01" programmer who stopped learning new skills at 24, but he's not a celebrity either. He's good and persistent and experienced, but probably not an expert. In the late 1990s, that person was just cracking into six-figure territory: $100,000 per year. No one thought that number was "ridiculous". Adjusted for inflation, that's $142,300 per year today. That's probably not far off what an engineer at that level actually makes, at least in New York and the Bay Area.
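
The arithmetic here is easy to check. Below is a minimal sketch using the essay's own endpoints; the exact CPI factor depends on which months you pick, so treat the 1.423 multiplier as an assumption derived from the figures above rather than official data:

    # Inflation check: $100,000 in 1999, restated in 2014 dollars.
    salary_1999 = 100_000
    cpi_factor = 1.423            # assumed cumulative 1999->2014 inflation
    years = 2014 - 1999           # 15 years

    salary_2014 = salary_1999 * cpi_factor
    avg_rate = cpi_factor ** (1 / years) - 1

    print(f"2014 equivalent: ${salary_2014:,.0f}")       # $142,300
    print(f"implied average inflation: {avg_rate:.1%}")  # ~2.4% per year

In other words, the headline number has grown at roughly 2.4% per year, which is ordinary inflation, not a raise.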

Software engineers look "ridiculous" to people who haven't been software engineers in 20 years (or ever) and whose numbers are way out of date. If you're a Baby Boomer whose last line of code was written in 1985, you probably still think $60,000 is a princely sum for a programmer to earn. Programmer salaries are "at a record high" only in nominal terms, because inflation is an exponential process. Take inflation out, and they're right about where history says they should be.

I would argue, even, that programmer salaries are low when taking a historical perspective. The trend is flat, adjusting for inflation, but the jobs are worse. Thirty years ago, programming was an R&D job. Programmers had a lot of autonomy: the kind of autonomy that it takes if one is going to invent C or Unix or the Internet or a new neural network architecture. Programmers controlled how they worked and what they worked on, and either answered to other programmers or to well-read scientists, rather than to anti-intellectual businessmen who regard them as cost centers. Historically, companies sincerely committed to their employees' careers and training. You didn't have to change jobs every 2 years just to keep getting good projects and stay employable. The nature of the programming job, over the past couple of decades, has become more stressful (open-plan offices) and careers have become shorter (ageism). Job volatility has increased: unexpected layoffs and even phony "performance-based" firings in lieu of proper layoffs, in order to skimp on severance, because that's "the startup way". With all the negatives associated with a programming job in 2014 that just didn't exist in the 1970s and '80s, a flat salary curve is disappointing. Finally, salaries in the Bay Area and New York have kept abreast of general inflation, but the costs of living have skyrocketed in those "star cities", while the economies of the still-affordable second-tier cities have declined. In the 1980s and '90s, there were more locations in which a person could have a proper career, and that kept housing prices down. In 2014, that $142,000 doesn't even enable one to buy a house in a place where there are jobs.

All of those factors are subjective, however, so I'll discard them. We have sufficient data to know that $142,000 for a mid-career programmer is not ridiculous. It's a lower bound for the business value of a software engineer in 1999: we know that employers actually paid that much, and they might have been willing to pay more. This information alone gives us victory over the assclowns claiming that software engineer salaries are "ridiculous" right now.

Now, I’ll take it a step further and introduce Yannis’s Law: programmer productivity doubles every 6 years. Is it true? I would say that the answer is a resounding “yes”. For sure, there are plenty of mediocre programmers writing buggy, slow websites and abusing Javascript in truly awful ways. On the other hand, there is more recourse for a good programmer who find quality; rather than commit to commercial software, she can peruse the open-source world. There’s no evidence for a broad-based decline in programmer ability over the years. It’s also easy to claim that the software career “isn’t fun anymore” because so much time is spent gluing existing components together, and accounting for failures of legacy systems. I don’t think these gripes are new, and I think tools are improving, and a 12% per year rate sounds about right. Put another way, one who programs exactly as was done in 1999 is only about 18 percent as productive as one using modern tools. And yet that programmer, only 18% as productive as his counterpart today, was worth $142,000 (2014 dollars) back then!

Does this mean that we should throw old tools away (and older programmers under the bus)? Absolutely not. On the contrary, it's the ability to stand on the shoulders of giants that makes us able to grow (as a class) at such a rate. Improved tools and accumulated knowledge deliver exponential value, but there's a lot of knowledge that is rarely learned except over a decades-long career. Most fresh Stanford PhDs wouldn't be able to implement a performant, scalable support vector machine from scratch, although they could recite the theory behind one. Your gray-haired badasses would be rusty on the theory but, with a quick refresher, stand a much greater chance of building it right. Moreover, the best old ideas tend to recur, and long-standing familiarity with them is an advantage. The most exciting new programming language right now is Clojure, a Lisp that runs on the Java Virtual Machine. Lisp, as an idea, is over 50 years old. And Clojure simply couldn't have been designed by a 25-year-old in Palo Alto. For programmers as a class, the general trend is a 12% annual increase in productivity; but individuals can reliably do 30 percent or more, for periods spanning decades.

If the business value of a mid-level programmer in 1999 was $142,000 in today's dollars, then one can argue that today, with programmers about 5.5 times as productive, the true value is $779,000 per year at minimum. It might be more. For the highly competent and for more senior programmers, it certainly is higher. And here's another thing: investors and managers and VPs of marketing didn't create that surplus. We did. We are more than five times as productive as we were in the 1990s not because they got better at their jobs (they haven't) but because we built the tools to make ourselves (and our successors) better at what we do. By rights, it's ours.
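
Because the whole estimate hangs on the assumed growth rate, it's worth a quick sensitivity check. A sketch, reusing the $142,300 base from above; the rates other than 12% are hypothetical variations, not claims:

    # Implied 2014 value of a mid-career engineer, by assumed annual
    # productivity growth. 12% is the essay's estimate.
    base_value = 142_300      # 1999 business value, in 2014 dollars
    years = 15

    for rate in (0.10, 0.12, 0.14):
        value = base_value * (1 + rate) ** years
        print(f"{rate:.0%}/yr -> ${value:,.0f}")
    # 10%/yr -> ~$594,000
    # 12%/yr -> ~$779,000
    # 14%/yr -> ~$1,016,000

Even the pessimistic 10% assumption puts the lower bound near $600,000, so the argument doesn't depend on the exact rate.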

Is it reasonable, or realistic, to argue that mid-career software engineers ought to be earning close to a million dollars per year? Probably not. It seems to be inevitable, and also better for society, that productivity gains are shared. We ought to meet in the middle. That we don't capture all of the value we create is a good thing. It would be awful, for example, if sending an email cost as much as sending a letter by post or, worse yet, as much as using the 19th-century Pony Express, because the producers of progress had captured all of the value for themselves. So, although that $779,000 figure adequately represents the value of a decent mid-career engineer to the business, I wouldn't go so far as to claim that we "deserve" to be paid that much. Most of us would be ecstatic with real equity (not that 0.05% VC-istan bullshit), a quarter of that number, and the autonomy to deliver that kind of value.

What Silicon Valley’s ageism means

Computer programming shouldn't be ageist. After all, it's a deep discipline with a lot to learn. Peter Norvig says it takes 10 years, but I'd consider that number a minimum for most people. Ten years of high-quality, dedicated practice, to the tune of 5-6 hours per day, 250 days per year, might be enough. For most people, it's going to take longer, because few people can work only on the interesting problems that constitute dedicated practice. The fundamentals (computer science) alone take a few thousand hours of study, and then there's the experience of programming itself, which one must do in order to learn how to do it well. Getting code to work is easy. Making it efficient, robust, and legible is hard. Then there's a panoply of languages, frameworks, and paradigms to learn, absorb and, for many, reject. As an obstacle, there's the day-to-day misery of a typical software day job, in which so much time is wasted on politics and meetings and pointless projects that an average engineer is lucky to have 5 hours per week for learning and growth. Ten years might be the ideal; I'd bet that 20 years is typical for the people who actually become great engineers and, sadly, the vast majority of professional programmers never get anywhere close.
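
To put the paragraph's own assumptions into one number (the hours-per-day midpoint is mine; everything else is stated above):

    # Practice hours implied by "5-6 hours per day, 250 days per year,
    # for 10 years".
    hours_per_day = 5.5       # assumed midpoint of the 5-6 hour range
    days_per_year = 250
    years = 10

    total_hours = hours_per_day * days_per_year * years
    print(f"{total_hours:,.0f} hours")    # 13,750

That's 13,750 hours of dedicated practice, comfortably past the pop-science "10,000 hours", and that's the optimistic case.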

It takes a long time to become actually good at software engineering. The precocious are outliers. More typically, people seem to peak after 40, as in the other high-skill disciplines. It seems, then, that most of the age-related decline in this field is externally enforced. Age discrimination is an artifact not of declining ability but of changing perceptions.

It doesn’t make sense, but there it is.

Age discrimination has absolutely no place in technology. Yet it exists. After age 40, engineers find it increasingly difficult to get appropriate jobs. Startups are, in theory, supposed to "trade against" the inefficiencies and moral failures of other companies but, on this issue, venture-funded startups are the biggest source of the problem. Youth and inexperience have become virtues, while older people who push back against dysfunction (and, as well, outright exploitation) are cited as "resistant to change".

There’s another issue that isn’t derived from explicit ageism, but might as well be. Because our colonizers (mainstream business culture) are superficial, they’ve turned programming into a celebrity economy. A programmer has two jobs. In addition to the work itself, which is highly technical and requires continual investment and learning, there’s a full-time reputation-management workload. If a machine learning engineer works at a startup and spends most of his time in operations, he’s at risk of being branded “an ops guy”, and may struggle to get high-quality projects in his specialty from that point on. He hasn’t actually lost anything– in fact, he’s become far more valuable– but the superficial, nontechnical idiots who evaluate us will view him as “rusty” in his specialty and, at the least, exploit his lack of leverage. All because he spent 2 years doing operations, because it needed to be done!

As we get older and more specialized, the employment minefield becomes only more complicated. We are more highly paid at that point, but not by enough of a margin to offset the increasing professional difficulties. Executives cite the complexity of high-end job searches when demanding high salaries and years-long severances. Programmers who are any good face the same complexity, but get none of those protections. I would, in fact, say that any programmer who is at all good needs a private agent, just as actors do. This career is supposed to be about technology and work and making the world better; instead, its reputation-management component, which amounts to appeasing the nontechnical, drooling patron class, constitutes a full-time job that requires a specialist. Either we need unions, or we need an agent model like Hollywood's, or perhaps we need both. That's another essay, though.

The hypocrisy of the technology employer

Forty years ago, smart people left finance and the mainstream corporate ladder for technology, to move into the emerging R&D-driven guild culture that computing had at the time. Companies like Hewlett-Packard were legitimately progressive in how they treated their talent, and were rewarded for it with their employees' commitment to making great products. At that time, Silicon Valley represented, for the most technically adept people in the middle class, a genuine middle path. The middle path will require its own essay, but what I'm talking about here is a moderate alternative between the extremes of subordination and revolution. Back then, Silicon Valley was the middle path that, four decades later, it is eagerly closing off.

Technology is no longer managed by “geeks” who love the work and what it can do, but by the worst kinds of business people who’ve come in to take advantage of said geeks. Upper management in the software industry is, in most cases, far more unethical and brazen than anywhere else. To them, a concentration of talented people who don’t have the inclination or cultural memory that would lead them to fight for themselves (labor unions, agent models, ruthlessness of their own) is an immense resource. Consequently, some of the most disgusting HR practices (e.g. stack ranking, blatant sexism) can be found in the technology industry.

There’s one really bad and technical trait of software employers that, I think, has damaged the industry immensely. Technology employers demand specialties when vetting people for jobs. General intelligence and proven ability to code isn’t enough; one has to have “production experience” in a wide array of technologies invented in the past five years. For all their faults, the previous regime of long-lasting corporations was not so bigoted when it came to past experience, trusting people to learn on the job, as needed. The new regime has no time for training or long-term investment, because all of these companies have been built to be flipped to a greater fool. In spite of their bigoted insistence on pre-existing specialties in hiring, they refuse to respect specialties once people are hired. Individual programmers who attempt to protect their specialties (and, thus, their careers) by refusing assignment to out-of-band or inferior grunt work are quickly fired. This is fundamentally hypocritical. In hiring, software companies refuse to look twice at someone without a yellow brick road of in-specialty accomplishments of increasing scope; yet, once employees are inside and fairly captive (due to the pernicious stigma against changing jobs quickly, even with good reason) they will gladly disregard that specialty, for any reason or no reason. Usually, this is framed as a business need (“we need you to work on this”) but it’s, more often, political and sometimes personal. Moving talent out of its specialty is a great way for insecure middle managers to neutralize overperformance threats. In a way, employers are like the pervert who chases far-too-young sexual partners (if “partner” is the right word here) for their innocence, simply to experience the thrill of destroying it. They want people who are unspoiled by the mediocrity and negativity of the corporate world, because they want to inflict the spoilage. The virginity of a fresh, not-yet-cynical graduate from a prestigious university is something they want all for themselves.

The depression factor

I’m not going to get personal here, but I’m bipolar so when I use words like “depression” and “hypomania” and “anxiety” I do, in fact, know what the fuck I am talking about.

One side effect of corporate capitalism, as I see it, is that it has created a silent epidemic of middle-aged depression. The going assumption in technology, that mental ability declines after age 25, is not well supported, and it is in fact contrary to what most cultures believe about intelligence and age. (In truth, various aspects of cognition peak at different ages, from language acquisition around age 5 to writing ability around 65, and there's so much individual variation that there's no clear "peak" age.) For general, holistic intelligence, there's no evidence of an age-bound peak in healthy people. While this risks sounding like a "No True Scotsman" claim, what I mean is that every meaningful age-related decline in cognition can be traced to a physical health problem, not to aging itself. Cardiovascular problems, physical pain, and side effects of medication can all impair cognition. I'm going to talk about the Big One, though, and that's depression. Depression can cause cognitive decline. Most of that loss is reversible, but only if the person recovers, and, in many cases, they never do.

In this case, I'm not talking about severe depression, the kind that would have a person considering electroconvulsive therapy or on suicide watch. I'm talking about mild depression that, depending on time of diagnosis, might be considered subclinical. People experiencing it in middle age are, one presumes, liable to attribute it to getting older rather than to a real health problem. Given that middle-aged "invisibility" in youth-obsessed careers is, in truth, unjust and depressing, it seems likely that more than a few people would experience depression and fail to perceive it as a health issue. That's one danger of depression that those who've never experienced it might not realize exists: when you're depressed, you suffer from the delusion that (a) you've always been depressed, and (b) no other outlook or mood makes sense. Depression is delusional, and it is a genuine illness, but it's also dangerously self-consistent.

Contrary to the stereotype, people with depression aren’t always unhappy. In fact, people with mild depression can be happy quite often. It’s just easier to make them unhappy. Things that don’t faze normal people, like traffic jams or gloomy weather or long queues at the grocery store, are more likely to bother them. For some, there’s a constant but low-grade gloom and tendency to avoid making decisions. Others might experience 23 hours and 50 minutes per day of normal mood and 10 minutes of intense, debilitating, sadness: the kind that would force them to pull over to the side of the road and cry. There isn’t a template and, just as a variety of disparate diseases (some viral, some bacterial, and some behavioral) were once called “fevers”, I feel like “depression” is a cluster of about 20 different diseases that we just don’t have the tools to separate. Some depressions come without external cause. Others are clearly induced by environmental stresses. Some depressions impair cognition and probably constitute a (temporary) 30-IQ-point loss. Others (more commonly seen in artists than in technology workers) seem to induce no intellectual impairment at all; the person is miserable, but as sharp as ever.

Corporate workers do become less sharp, on average, with age. You don't see that effect, at least not involuntarily, in most intellectually intense fields. A 45-year-old artist or author or chess master has his best work ahead of him. True entrepreneurs (not dipshits who raise VC based on connections) also seem to peak in their 50s and, for some, even later. Most leaders hit their prime around 60. However, it's observable that something happens in Corporate America that makes people more bitter, more passive, and slower to act over time, and that it starts around 40. Perhaps it's survivorship bias in reverse, with the more talented people escaping the corporate racket (by becoming consultants, or entrepreneurs) before middle age. I don't think so, though. There are plenty of highly talented people in their forties and fifties who've been in private-sector programming for a long time and just seem out of gas. I don't blame them for this. With better jobs, I think they'd recover their power surprisingly quickly. I think they have a situationally induced case of mild depression that, while it may not be the life-threatening illness we associate with major depression, takes the edge off their abilities. It doesn't make them unemployable. It makes them slow and bitter but, unlike aging itself, it's very easily reversible: change the context.

Most of these slowed-down, middle-aged, private-sector programmers wouldn’t qualify for major depressive disorder. They’re not suicidal, don’t have debilitating panic attacks, and can attribute their losses of ability (however incorrectly) to age. Rather, I think that most of them are mildly but chronically depressed. To an individual, this is more of a deflation than a disability; to society, the costs are enormous, just because such a large number of people are affected, and because it disproportionately affects the most experienced people at a time when, in a healthier economic environment, they’d be in their prime.

The tournament of idiots

No one comes out of university wanting to be a private-sector social climber. There’s no “Office Politics” major. People see themselves as poets, economists, mathematicians, or entrepreneurs. They want to make, build, and do things. To their chagrin, most college graduates find that between them and any real work, there’s at least a decade of political positioning, jockeying for permissions and status, and associated nonsense that’s necessary if one intends to navigate the artificial scarcity of the corporate world.

The truth is that most of the nation’s most prized and powerful institutions (private-sector companies) have lost all purpose for existing. Ideals and missions are for slogans, but the organization’s true purpose is to line the pockets of those ranking high within it. There’s also no role or use for real leadership. Corporate executives are the farthest one gets from true leaders. Most are entrenched rent-seekers. With extreme economic inequality and a culture that worships consumption, it should surprise no one that our “leadership” class is a set of self-dealing parasites at a level that hasn’t been seen in an advanced economy since pre-Revolution France.

Leadership and talent have nothing to do with getting to the top. It’s the same game of backstabbing and political positioning that has been played in kings’ courts for millennia. The difference, in the modern corporation, is that there’s a pretense of meritocracy. People, at least, have to pretend to be working and leading to advance further. The work that is most congruent with social advancement, however, isn’t the creative work that begets innovation. Instead, it’s success in superficial reliability. Before you get permission to be creative, you have to show that you can suffer, and once you’ve won the suffering contest, it’s neither necessary nor worth it to take creative risks. Companies, therefore, pick leaders by loading people with unnecessary busywork that often won’t go anywhere, and putting intense but ultimately counterproductive demands on them. They generate superficial reliability contests. One person will outlast the others, who’ll fall along the way due to unexpected health problems, family emergencies, and other varieties of attrition (for the lucky few, getting better jobs elsewhere). One of the more common failure modes by which people lose this tournament of idiots is mild depression: not enough to have them hospitalized, but enough to pull them out of contention.

The corporate worker’s depression, especially in midlife, isn’t an unexpected side effect of economic growth or displacement or some other agent that might allow Silicon Valley’s leadership to sweep it under the rug of “unintended consequences”. Rather, it’s a primary landscape feature of the senseless competition that organizations create for “leadership” (rent-seeking) positions when they’ve run out of reasons to exist. At that level of decay, there is no meaningful definition of “merit” because the organization itself has turned pointless, and the only sensible way to allocate highly-paid positions is to create a tournament of idiots, in which people psychologically abuse each other (often subtly, in the form of microaggressions) until only a few remain healthy enough to function.

Here we arrive at a word I've come to dread: corporate culture. Every corporation has a culture, and 99% of those are utterly abortive. Generally, the more that a company's true believers talk about "our culture", the more fucked-up the place actually is. See, culture is often ugly. Foot binding, infant genital mutilation ("female circumcision"), war, and animal torture are all aspects of human culture. Previous societies used supernatural appeal to defend inhumane practices, but modern corporations use "our culture" itself as a god. "Culture fit" is often cited to justify the otherwise inconsistent and, sometimes, unjustifiable. Why wasn't the 55-year-old woman, a better coder than anyone else on the team, hired? "It wouldn't look right." Can't say that! "A seasoned coder who isn't rich would shatter the illusion that everyone good gets rich." Less illegal, but far too honest. "She didn't fit with the culture." Bingo! Culture can always be used in this way, by an organization, because it's a black box of blame, diffusing moral culpability across the group. Blaming an adverse decision on "the team" or "the culture" avoids individual risk for the blamer, while the culture itself can never be attacked as bad. Most people in most organizations actually know that the "leadership team" (career office politicians, also known as executives) of their firm is toxic and incompetent. When they aren't around, the executives are attacked. But it's rare that anyone ever attacks the culture, because "the culture" is everyone. To indict it is to insult the people. In this way, "the culture" is like an unassailable god.

Full circle

We’ve traveled through some dark territory. The tournament of idiots that organizations construct to select leadership roles, once they’ve ceased to have a real purpose, causes depression. The ubiquity of such office cultures has created, I argue, a silent epidemic of mild, midlife depression that has led venture capitalists (situated at its periphery, their wealth buying them some exit from the vortex of mediocrity in which they must still work, but do not live) and privileged young psuedo-entrepreneurs (terrified of what awaits them when their family connections cool and they must actually work) to conclude that a general cognitive mediocrity awaits in midlife, even though there is no evidence to support this belief, and plenty of evidence from outside of corporate purgatory to contradict it.

What does all of this say? Personally, I think that, to the extent that large groups of individuals and organizations can collectively “know” things, the contemporary corporate world devalues experience because it knows that the experience it provides is of low value. It refuses to eat its own dogfood, knowing that it’s poisoned.

For example, software is sufficiently technical and complex that great engineers are invariably experienced ones. The reverse isn’t true. Much corporate experience is of negative value, at least if one includes the emotional side effects that can lead to depression. Median-case private-sector technology work isn’t sufficiently valuable to overcome the disadvantages associated with age, which is another way of saying that the labor market considers average-case corporate experience to have negative value. I’m not sure that I disagree. Do I think it’s right to write people off because of their age? Absolutely not. Do I agree with the labor market’s assessment that most corporate work rots the brain? Well, the answer is mostly “yes”. The corporate world turns smart people, over the years, into stupid ones. If I’m right that the cause of this is midlife depression, there’s good news. Much of that “brain rot” is contextual and reversible.

How do we fix this?

Biologists and gerontologists seeking insight into longevity have studied the genetics and diet of long-living groups of people, such as the Sardinians and the people of Okinawa. Luckily for us, midlife cognitive decline isn't a landscape feature of most technical or creative fields. (In fact, it's probably not present in ours; it's just perceived that way.) There are plenty of places to look for higher cognitive longevity, because few industries are as toxic as the contemporary software industry. When there is an R&D flavor to the work, and when people have basic autonomy, they tend to peak around 50, and sometimes later. Of course, there's a lot of individual variation, and some people voluntarily slow down before that age, in order to attend to health, family, spiritual, or personal concerns. The key word there is voluntarily.

Modeling and professional athletics (in which there are physical reasons for decline) aside, a career in which people tend to peak early, or have an early peak forced upon them, is likely to be a toxic one that enervates its players. That Silicon Valley is a young man's game (and the current incarnation of it, focused on VC-funded light tech, is exactly that) simply indicates that it's so destructive to the players that only the hard-core psychopaths can survive in it for more than 10 years. It's not a healthy place to spend a long period of time and develop an expertise. As discussed above, it will happily consume expertise but produces almost none of value (hence its demand for pre-existing specialties in hiring, despite failing to respect those specialties once the employee is captive). This means, as we already see, that technical excellence will fall by the wayside, and positive-sum technological ambition will give way to the zero-sum personal ambitions of the major players.

We can’t fix the current system, in which the leading venture capitalists are striving for “exits” (greater fools). That economy has evolved from being a technology industry with some need for marketing, to a marketing industry with some need (and that bit declining) for technology. We can’t bring it back, because its entrenched players are too comfortable with it being the way it is. While the current approach provides mediocre returns on investment, hence the underperformance of the VC  asset class, the king-making and career-altering power that it affords the venture capitalist allows him to capture all kinds of benefits on the side, ranging from financial upside (“2 and 20″) to executive positions for talentless drinking buddies to, during brief bubbly episodes such as this one, “coolness”. They like it the way it is, and it won’t change. Rather than incrementally fix the current VC-funded Valley, it must be replaced outright.

The first step, one might say, is to revive R&D within technology companies. That’s a step in the right direction, but it doesn’t go far enough. Technology should be R&D, full stop. To get there, we need to assert ourselves. Rather than answering to non-technical businessmen, we need to learn how to manage our own affairs. We need to begin valuing the experience on which most R&D progress actually relies, rather than shutting the seasoned and cynical out. And, as a first step in this direction, we need to stop selling each other out to nontechnical management for stupid reasons, including but especially “culture fit”.

VC-istan 8: the Damaso Effect

Padre Damaso, one of the villains of the Filipino national novel Noli Me Tangere, is among the most detestable characters in literature: a symbol of both colonial arrogance and severe theological incompetence. One of the novel's observations about colonialism is that it's worsened by the specific types of people who implement colonial rule: those who failed in their mother country, and who are taking part in a dangerous, isolating, and morally questionable project that is their last hope of acquiring authority. Colonizers tend to be people who have no justification for superior social status left but their national identity. One of the great and probably intractable tensions within the colonization process is that it forces the best (along with the rest) of the conquered society to subordinate to the worst of the conquering society. The total incompetence of the corrupt Spanish friars in Noli is just one example of this.

In 2014, the private-sector technology world is in a state of crisis, and it's easy to see why. For all our purported progressivism and meritocracy, the reality of our industry is that it's sliding backward into feudalism. Age discrimination, sexism, and classism are returning, undermining our claims of being a merit-based economy. Thanks to the clubby, collusive nature of venture capital, securing financing for a new technology business requires tapping into a feudal reputation economy that funds people like Lucas Duplan, while almost no one backs anything truly ambitious. Finally, there's the pernicious resurgence of location (thanks to VCs' disinterest in funding anything more than 30 miles away from them) as a career-dominating factor, driving housing prices in the few still-viable metropolitan areas into the stratosphere. In so many ways, American society is going back in time, and private-sector technology is a driving force rather than a counterweight. What the fuck, pray tell, is going on? And how does this relate to the Damaso Effect?

Lawyers and doctors did something, purely out of self-interest, to prevent their work from being commoditized as American culture became increasingly commercial in the late 19th century. They professionalized. They invented ethical rules and processes that allowed them to work for businessmen (and the public) without subordinating. How this all works is covered in another essay, but it served a few purposes. First, the profession could maintain standards of education so as to keep membership in the profession as a form of credibility independent of managerial or client review. Second, by ensuring a basic credibility (and, much more important, employability) for good-faith members, it enabled professionals to meet ethical obligations (i.e. don't kill patients) that supersede managerial or corporate authority. Third, it ensured some control over wages, although that was not its entire goal. In fact, the difference between unionization and professionalization seems to be as follows. Unions are employed when the labor is a commodity: they accept the commoditization, but demand that it happen at a fair rate of exchange (which, without collective bargaining, and in the absence of a society-wide basic income, never occurs). Professionalization exists when there is some prevailing reason (usually an ethical one, such as in medicine) to prevent full commoditization. If it seems like I'm whitewashing history here, let me point out that the American Medical Association, to name one example, has done some atrocious things in its history. It originally opposed universal healthcare; it has received some karma, insofar as the inventively mean-spirited U.S. health insurance system has not only commoditized medical services, but done so on terms that are unfavorable to physician and patient both. I don't mean to say that the professions have always been on the right side of history, because that's clearly not the case; professionalization is a good idea, often poorly realized.

The ideal behind professionalization is to separate two senses of what it means to “work for” someone: (1) to provide services, versus (2) to subordinate fully. Its goal is to allow a set of highly intelligent, skilled people to deliver services on a fair market without having to subordinate inappropriately (such as providing personal services unrelated to the work, because of the power relationship that exists) as is the norm in mainstream business culture.

As a tribe, software professionals failed in this. We did not professionalize, nor did we unionize. In the Silicon Valley of the 1960s and ’70s, it was probably impossible to see the need for doing so: technologists were fully off the radar of the mainstream business culture, mostly lived on cheap land no one cared about, and had the autonomy to manage themselves and answer to their own. Hewlett-Packard, back in its heyday, was run by engineers, and for the benefit of engineers. Over time, that changed in the Valley. Technologists and mainstream, corporate businessmen were forced to come together. It became a colonial relationship quickly; the technologists, by failing to fight for themselves and their independence, became the conquered tribe.

Now it’s 2014, and the common sentiment is that software engineers are overpaid, entitled crybabies. I demolished this perception here. Mostly, that “software engineers are overpaid” whining is propaganda from those who pay software engineers, and who have a vested interest. It has been joined lately by leftist agitators, angry at the harmful effects of technology wealth in the Bay Area, who have failed thus far to grasp that the housing problem has more to do with $3-million-per-year, 11-to-3 product executives (and their trophy spouses who have nothing to do but fight for the NIMBY regulations that keep housing overpriced) than $120,000-per-year software engineers. There are good software jobs out there (I have one, for now) but, if anything, relative to the negatives of the software industry in general (low autonomy relative to intellectual ability, frequent job changes necessitated by low concern of employers for employee career needs, bad management) the vast majority of software engineers are underpaid. Unless they move into management, their incomes plateau at a level far below the cost of a house in the Bay Area. The truth is that almost none of the economic value created in the recent technology bubble has gone to software engineers or lifelong technologists. Almost all has gone to investors, well-connected do-nothings able to win sinecures from reputable investors and “advisors”, and management. This should surprise no one. Technology professionals and software engineers are, in general, a conquered tribe and the great social resource that is their brains is being mined for someone else’s benefit.

Here’s the Damaso Effect. Where do those Silicon Valley elites come from? I nailed this in this Quora answer. They come from the colonizing power, which is the mainstream business culture. This is the society that favors pedigree over (dangerous, subversive) creativity and true intellect, the one whose narcissism brought back age discrimination and makes sexism so hard to kick, even in software which should, by rights, be a meritocracy. That mainstream business world is the one where Work isn’t about building things or adding value to the world, but purely an avenue through which to dominate others. Ok, now I’ll admit that that’s an uncharitable depiction. In fact, corporate capitalism and its massive companies have solved quite a few problems well. And Wall Street, the capital of that world, is morally quite a bit better than its (execrable) reputation might suggest. It may seem very un-me-like to say this, but there are a lot of intelligent, forward-thinking, very good people in the mainstream business culture (“MBA culture”). However, those are not the ones who get sent to Silicon Valley by our colonial masters. The failures are the ones sent into VC firms and TechCrunch-approved startups to manage nerds. Not only are they the ones who failed out of the MBA culture, but they’re bitter as hell about it, too. MBA school told them that they’d be working on $50-billion private-equity deals and buying Manhattan penthouses, and they’re stuck bossing nerds around in Mountain View. They’re pissed.

Let me bring Zed Shaw in on this. His essay on NYC's startup scene (and why it can't get off the ground) is brilliant and should be read in full (seriously, go read it and come back to me when you're done), but the basic point is that, compared to the sums of money that real financiers encounter, startups are puny and meaningless. A couple of quotes I'll pull in:

During the course of our meetings I asked him how much his “small” hedge fund was worth.

He told me:

30 BILLION DOLLARS

That’s right. His little hedge fund was worth more money than thousands of Silicon Valley startups combined on a good day. (Emphasis mine.) He wasn’t being modest either. It was “only” worth 30 billion dollars.

Zed has a strong point. The startup scene has the feeling of academic politics: vicious intrigue, because the stakes are so small. The complete lack of ethics seen in current-day technology executives is also a result of this. It’s the False Poverty Effect. When people feel poor, despite objective privilege and power, they’re more inclined to do unethical things because, goddammit, life owes them a break. That startup CEO whose investor buddies allowed him to pay himself $200,000 per year is probably the poorest person in his Harvard Business School class, and feels deeply inferior to the hedge-fund guys and MD-level bankers he drank with in MBA school.

This also gets into why hedge funds get better people (even, in NYC, for pure programming roles) than technology startups. Venture capitalists give you $5 million and manage you; they pay to manage. Hedge fund investors pay you to manage (their money). As long as you’re delivering returns, they stay out of your hair. It seems obvious that this would push the best business people into high finance, not VC-funded technology.

The lack of high-quality businessmen in the VC-funded tech scene hurts all of us. For all my railing against that ecosystem, I'd consider doing a technology startup (as a founder) if I could find a business co-founder who was genuinely at my level. For founders, it's got to be code (tech co-founder) or contacts (business co-founder), and I bring the code. At my current age and level of development, I'm a Tech 8. A typical graduate from Harvard Business School might be a Biz 5. (I'm a harsh grader; that's why I gave myself an 8.) Biz 6 means that a person comes with connections to partners at top VC firms and resources (namely, funding) in hand. The Biz 7's go skiing at Tahoe with the top kingmakers in the Valley, and count a billionaire or two in their social circles. If I were to take a business co-founder (noting that he'd become CEO and my boss), I'd be inclined to hold out for an 8 or 9 but, at least in New York, I never seemed to meet Biz 8's or 9's in VC-funded technology, and I think I've got a grasp on why. Business 8's just aren't interested in asking some 33-year-old California man-child for a piddling few million bucks (which comes along with nasty strings, like counterproductive upper management). They have better options. To the Business 8+ out there, whatever the VCs are doing in Silicon Valley is a miserable sideshow.

It’s actually weird and jarring to see how bad the “dating scene”, in the startup world, is between technical and business people. Lifelong technologists, who are deeply passionate about building great technology, don’t have many places elsewhere to go. So a lot of the Tech 9s and 10s stick around, while their business counterparts leave and a Biz 7 is the darling at the ball. I’m not a fan of Peter Shih, but I must thank him for giving us the term “49ers” (4’s who act like 9’s). The “soft” side, the business world of investors and well-connected people who think their modest connections deserve to trade at an exorbitant price against your talent, is full of 49ers– because Business 9’s know to go nowhere near the piddling stakes of the VC-funded world. Like a Midwestern town bussing its criminal element to San Francisco (yes, that actually happened) the mainstream business culture sends its worst and its failures into the VC-funded tech. Have an MBA, but not smart enough for statistical arbitrage? Your lack of mathematical intelligence means you must have “soft skills” and be a whiz at evaluating companies; Sand Hill Road is hiring!

The venture-funded startup world, then, has the best of one world (passionate lifelong technologists) answering to the people who failed out of their mother country: mainstream corporate culture.

The question is: what should be done about this? Is there a solution? Since the Tech 8’s and 9’s and 10’s can’t find appropriate matches in the VC-funded world (and, for their part, most Tech 8+ go into hedge funds or large companies– not bad places, but far away from new-business formation– by their mid-30s) where ought they to go? Is there a more natural home for Tech 8+? What might it look like? The answer is surprising, but it’s the mid-risk / mid-growth business that venture capitalists have been decrying for years as “lifestyle businesses”. The natural home of the top-tier technologist is not in the flash-in-the-pan world of VC, but the get-rich-slowly world of steady, 20 to 40 percent per year growth due to technical enhancement (not rapid personnel growth and creepy publicity plays, as the VCs prefer).

Is there a way to reliably institutionalize that mid-risk / mid-growth space, which currently must resort to personal savings ("bootstrapping", a scarce resource, given that engineers are systematically underpaid), just as venture capital has institutionalized the high-risk / get-big-or-die region of the risk/growth spectrum? Can it be done with a K-strategic emphasis that forges high-quality businesses in addition to high-value ones? Well, the answer to that one is: I'm not sure. I think so. It's certainly worth trying. Doing so would be good for technology, good for the world, and quite possibly very lucrative. The real birth of the future is going to come from a fleet of a few thousand highly autonomous "lifestyle" businesses, and not from VC-managed get-huge-or-die gambits.

Was 2013 a “lost year” for technology? Not necessarily.

The verdict seems to be in. According to the press, 2013 was just a god-awful, embarrassing, downright shameful year for the technology industry, and especially Silicon Valley.

Christopher Mims voices the prevailing sentiment here:

All in, 2013 was an embarrassment for the entire tech industry and the engine that powers it—Silicon Valley. Innovation was replaced by financial engineering, mergers and acquisitions, and evasion of regulations. Not a single breakthrough product was unveiled—and for reasons outlined below, Google Glass doesn’t count.

He goes on to point out the poor performance of high-profile product launches, the abysmal behavior of the industry's "ruling class" (venture capitalists and leading executives), and the fallout from revelations like the NSA's Prism program. Yes, 2013 brought forth a general miasma of bad faith, shitty ideas, and creepy, neoreactionary bubble zeitgeists: Uber's exploitative airline-style pricing and BitTulip mania are just two prominent examples.

He didn’t cover everything; presumably for space, he gave no coverage to Sean Parker’s environmental catastrophe of a wedding (and the 10,000-word rant he penned while off his meds) and its continuing environmental effects. Nor did he cover the growing social unrest in California, culminating in the blockades against “Google buses”. Nor did he mention the rash of unqualified founders and mediocre companies like Summly, Snapchat, Knewton, and Clinkle and all the bizarre work (behind the scenes, by the increasingly country-club-like cadre of leading VCs) that went into engineering successes for these otherwise nonviable firms. In Mims’s tear-down of technology for its sins, he didn’t even scratch the surface, and even with the slight coverage given, 2013 in tech looks terrible. 

So, was 2013 just a toilet of a year, utterly devoid of value? Should we be ashamed to have lived through it?

No. Because technology doesn’t fucking work that way. Even when the news is full of pissers, there’s great work being done, much of which won’t come to fruition until 2014, 2015, or even 2030. Technology, done right, is about the long game and getting rich– no, making everyone rich– slowly. (Making everyone rich is, of course, not something that will be achieved in one quarter or even one decade.) “Viral” marketing and “hockey stick” obsessions are embarrassments to us. We don’t have the interest in engineering that sort of thing, and don’t believe we have the talent to reliably make it happen– because we’re pretty sure that no one does. But we’re very good, in technology, at making things 10, or 20, or 50 percent more efficient year-on-year. Those small gains and occasional big wins amount, in the aggregate, to world economic growth at a 5% annual rate– nearly the highest that it has ever achieved.

Sure, tech’s big stories of 2013 were mostly bad. Wearing Google Glass, it turns out, makes a person look like a gigantic fucking douchebag. I don’t think that such a fact damns an entire year, though. Isn’t technology supposed to embrace risk and failure? Good-faith failure is a sign of a good thing– experimentation. (I’m still disgusted by all the bad-faith failure out there, but that should surprise no one.) The good-faith failures that occur are signs of a process that works. What about the bad-faith ones? Let’s just hope they will inspire people to fix a few problems, or just one big problem: our leadership.

On that, late 2013 seems to have been the critical point in which we, as a technical community, lost faith in the leaders of our ecosystem: the venture capitalists and corporate executives who’ve claimed for decades to be an antidote to the centuries-old tension between capitalist and laborer, and who’ve proven no better (and, in so many ways, worse) than the old-style industrialists of yore. Silicon Valley exceptionalism is disappearing as an intellectually defensible position. The Silicon Valley secessionists and Sand Hill Road neofeudalists no longer look like visionaries to us; they look like sad, out-of-touch, privileged men abusing a temporary relevance, and losing it quickly through horrendous public behavior. The sad truth about this is that it will hurt the rest of us– those who are still coming up in technology– far more than it hurts them.

This loss of faith in our gods is, however, a good thing in the long run. Individually, none of us among the top 50,000 or so technologists in the U.S. has substantial power. If one of us objects to the state of things, there are 49,999 others who can replace us. As a group, though, we set the patterns. Who made Silicon Valley? We did, forty years ago, when it was a place where no one else wanted to live. We make and, when we lose faith, we unmake.

Progress is incremental and often silent. The people who do most of the work do least of the shouting. The celebrity culture that grows up around “tech” whenever there is a bubble has, in truth, little to do with whether our society can meet the technology challenges that the 2010s, ’20s, and onward will throw at us.

None of this nonsense will matter ten years from now. Evan Spiegel, Sean Parker, Greg Gopman, and Adria Richards are names we will have no cause to remember by December 26, 2023. The current crop of VC-funded cool kids will be a bunch of sad, middle-aged wankers drinking to remember their short bursts of relevance. But the people who spend the ten years between now and then continually building will, most likely, be better off then than they are now. Incremental progress. Hard work. True experimentation and innovation. That's how technology is supposed to work. Very little of it will be covered by whatever succeeds TechCrunch.

Everything that happened in technology in 2013 (and much of it was distasteful) is just part of a longer-running process. It was not a wasted year. It was a hard one for morale, but sometimes a morale problem is necessary to make things better. Perhaps we will wake up to the uselessness of the corporatists who have declared themselves our leaders, and do something about that problem.

So, I say, as always: long live technology.