On the supposed aversion of software engineers to “the business”.

There’s a claim that’s often made about software engineers, which is that we “don’t want anything to do with the business”. To hear the typical story told, we just want to put our heads down and work on engineering problems, and have little respect for the business problems that are of direct importance to the companies where we work. There’s a certain mythology that has grown up around that little concept.

Taking a superficial view, this perception is accurate. The most talented software engineers seem to have minimal interest in the commercial viability of their work, and a rather low level of respect for the flavor-of-the-month power-holders who direct and supervise their work. It’s easy to conclude that software engineers want to live in an ivory tower far away from business concerns. It’s also, in my experience, completely incorrect. Business can be intellectually fascinating. As I’ve learned with age, new product development, microeconomics and game theory, and interpersonal interactions are just as rich in cognitive nutrition as compiler design or random matrix theory. I might prefer to study hard technical topics in my free time, in order to keep up a specialty, but I’m a generalist at heart and I don’t view business problems or interpersonal challenges as inferior or “dirty”. More than this, I think that most software engineers agree with me on that. We’re not ivory tower theoreticians. We’re builders, and as we age, we begin to respect the challenges involved in large projects that present interpersonal as well as technical challenges.

So why are so many talented software engineers seemingly averse to the business? Why do most talented programmers fly away from line-of-business work, leaving it to the less capable and credible? It’s this: we don’t want to deal with the business as subordinates. That, stated so, is the truth of it.

There are a few who protect their specialties with such intensity that any business-related work is viewed as an unwanted distraction, and I’m glad that they exist, because the hardest technical problems require a single-minded focus. I’m not speaking (not here and now, anyway) for them. Instead, I’m talking about a more typical technologist, with an attraction to problem-solving in general. Is she willing to work for “the business”? Of course, but not as a subordinate. If she’s going to be called in to mix business concerns with her work, she’s going to want the authority and autonomy necessary to actually solve the problems put in front of her. It’s when working with the business doesn’t come with these requisite resources and permissions that she’d rather slink away and build interpreters for esoteric languages.

The stereotype is that software engineers and technologists “don’t care” about business problems. The reality is that they avoid working on line-of-business software because the position is inherently subordinate. Give them the authority to set requirements, instead of coding to them, and they’ll care. Make them partners and drivers instead of “resources”, and they’ll actually give a damn. But expect them to interact with the business in a purely subordinate role, as in a typical business-driven “tech” company, and the talented ones (who are invariably smarter than the executives shouting orders, but have chosen not to participate in the political contest necessary to get to that level) will hide from the front lines.

If a company views software engineering as a cost center, and programming as a fundamentally subordinate activity, it will find that talented programmers avoid direct interaction with the business (which will, by design, happen on subordinate terms) and the software it builds will either be of low quality or irrelevant to its business needs– because those who have the ability to write high-quality software won’t even bother to make their work relevant. However, this pattern of degeneracy (although common) should not be taken as a foregone conclusion. There are more similarities than differences between business problems and engineering problems, and it’s quite possible to give people with programming and engineering talent the incentive to learn about the business. While technical talent flies away from “business-driven programming” like a bat out of hell, there’s no intrinsic animosity between programming talent and “the business”. To the contrary, I think that people with experience solving these two classes of problems could have a natural affinity, and have a lot to learn from each other. Any such meeting has to come on terms of equality, however. If working with the business means doing so as a subordinate, then no one with technical talent will do so in earnest.

This comment was censored by Y Combinator’s Hacker News.

The news topic was Alan Eustace’s recent skydiving record.

The Hacker News comment thread is here.

My comment is here. The link may not work.

Here is the text of it.

Maybe this is cynical but I dislike stories like this. I’m glad he got back safely, but it sounds a bit Everest-y. Felix Baumgartner was an experienced jumper. Every time a corporate executive pulls the “throw money at something hard for mere mortals” card I cringe. Again, Everest. The number of rich businessmen who die because Mother Nature does not give a fuck about job titles is immense.

The comment itself isn’t that interesting. What is interesting is that such a vanilla remark (profanity isn’t taken to be an issue on Hacker News) could be censored. I wonder why? What libertarian nerve did I tweak?

I’m not going to speculate. But enjoy the above, an average, ordinary comment rendered unusual by the mere fact of it being censored.

It might be time for software engineers, especially in Silicon Valley, to unionize.

Should software engineers unionize? Two years ago, I would have said “no”. In fact, I did say “no” two years ago. At the time, I was unduly influenced by the negative reputation of unions in this country, and drawing a rather artificial distinction between “unions” (blue-collar) and professional “guilds” (white-collar, often prestigious). I saw the need to draw together and collectively seek our common interest, but I gave it the language of a “profession”.

Two years ago, I argued that we needed structure of a constitutional nature, and I still agree with that. Software, right now, is an every-man-for-himself, “Wild West” industry. There are no unions, talent agents for programmers are rare to nonexistent, talented engineers are fired quickly and without apologies (or severance), and the engineer is wholly responsible for his own career advancement. (Some companies are so backward that they deduct conference attendance from vacation days!) A small number of companies (e.g. Valve, with its open allocation system that allows employees, within reason, to define their own work and pick projects) offer constitutional guarantees regarding internal mobility and social justice, but that is far from the norm. In most companies, the fundamental idea is that the employee lives entirely at the whim of a manager, “and you should be thanking [him] every morning, along with Jesus, for giving you another day.” Constitutional protections of employees would be anathema to most organizations, whose internal models of social justice are akin to Elizabethan England’s “great chain of being” concept, in which the monarchy ruled by divine right and was unaccountable to anyone.

The Wild West employment climate was tolerable to most Silicon Valley software engineers when they shared in the upside (i.e. stood a serious chance of getting rich, or at least comfortable, through hard work). Twenty years ago, programmers from middle-class origins could actually raise venture funding without relying on (upper-class, connected) “advisors” and extraneous business co-founders who’d charge several percent, and want to manage, just for introductions to investors. Twenty years ago, housing was affordable in the Bay Area, and living on a low salary to pursue a dream was legitimately possible. Twenty years ago, startups fired good employees as often as they do now, but they offered genuinely positive references and introductions to investors when they did so. The attitude was, “We need a different skill set than what you have, but we’ll make sure you land on your feet.” It was more like a rock band breaking up over genuine creative differences than a person being singled out and humiliated. Twenty years ago, while it was rare that a startup would explicitly pay for an engineer’s career advancement (2-4 conferences per year was standard, but tuition reimbursement was rare) engineers had the authority to define and self-allocate their own work, so they actually could advance their careers without above-normal assistance. Twenty years ago, there was just as much volatility in day-to-day employment as there is now, but the Valley was still run by lifelong technologists who identified themselves as engineers, not talentless hacks self-identified as future rich people who had ethical license to do whatever they wanted because they were “changing the world”. More often than not, the engineers in that time looked out for each other. It was a different time, and a better one for the Valley.

Second phase

Silicon Valley has devolved in a number of ways. Housing has become inordinately expensive, with California NIMBYists opposing high-density housing at every turn. (“California: where the future is built by people living in the past.”) The result of this is that a typical software engineer, making about $120,000 per year, an amount that would still be considered high in most of the U.S., can end up living paycheck-to-paycheck. What made Northern California great in the 1960s to ’80s has become its downfall: its openness to the new. As a region, it was too trusting. It allowed non-resident third-world despots and corrupt officials to buy real estate that they’d rarely or never use, pricing Americans out of their own housing. Worse, when smooth-talking East Coast financiers took an interest in the region in the 1990s, it welcomed them, unaware that they’d eventually take the place over and out-compete the lifelong technologists for venture capital. The Battle of Palo Alto has been lost.

No one can reverse the arrow of time. We shouldn’t look to restore the Silicon Valley of 1975, because it doesn’t exist anymore, and it never will again. We should be focused on creating something better in 2025. At that, we have a chance. Of course, we need for a substantial number of lifelong technologists to regain money and power. With the U.S. middle-class falling to pieces, this is not going to happen without opposition. There is not a rising tide, and all boats are not being lifted. We, the lifelong technologists and engineers, have to wrest power from the existing elite. We have to do something that engineers (and middle-class Americans, overly steeped in outdated concepts of meritocracy and fair play) generally hate to do: we have to get political. After all, an unreasonable aversion to political activity supports the status quo and is, therefore, political already.

‘Cause the takers gonna take, take, take, take, take, and the makers gonna make, make, make, make, make…

Ayn Rand’s fan club loves to use the rhetoric of “takers” and “makers”. I generally dislike this distinction as it is commonly used, since the “taker” label is usually applied to the poor and uneducated who, through no fault of their own, have little to offer society. Yet it’s illuminating, specifically because it shows Rand’s view of corporate capitalism to be fundamentally incorrect. To Rand, the entrepreneurs were the “makers”, while she assigned the “taker” label to the poor, disenfranchised, and disliked lower classes as well as to government bureaucrats. In reality, the takers are the private-sector social-climbers who, being better at social and political machination than those doing the actual work, capture most of the value generated by the productive but politically disorganized makers. In most companies, the high-status positions are owned by blue-blooded, 100x takers (well-positioned, unaccountable executive bureaucrats) and all of the work is done by underpaid makers who “just want to do good work” and (to their detriment) refuse to “get political”.

As the takers move in to technology, and out-compete makers for attention and resources due to their single-minded focus on political victory above creation, they destroy its innovative capacity, replacing creativity with mean-spirited, zero-sum slagging. They’ve introduced stack ranking, which is the epitome of zero-sum squabbling. They’ve created an age-discrimination culture that values deference to authority over experience. They’ve replaced a mindset of exploration and value creation with the anti-intellectualism of the enterprise Java shop. They’ve done so much damage that anything that reduces or challenges their power deserves serious consideration.

At this point, there is probably nothing that could be lost in bringing the unions into Silicon Valley.

Objections to unionization

There are four main objections to unionization of Silicon Valley engineers. I’ll address each of these.

1. Unions pit management and labor against each other. 

This is the easiest of the four to destroy. With stack ranking in place in companies like Amazon and Google, and with 0.02% slices of 100-person startups qualifying for “ownership” (as in, “you should work 90-hour weeks and carry a pager because you’re an owner”, which is a bald-faced lie) in the Valley, labor and management are against each other. “Class war” is already happening, but it’s one-sided as the working class refuses to defend itself (yet). An engineer is far less likely to advance to the investor ranks in the Valley than an associate in an investment bank or law firm is to “make partner”. Ruses and phony promises, instead of career investment, are deployed to encourage young engineers to work 90-hour weeks on other peoples’ ideas. Management started the fight, and it’s winning hand over fist. Equalizing, in this fight that’s already underway, just makes our position better. Collective bargaining may not be the only tool that might allow us to equalize, but it’s a historically proven one.

Improving software engineer wages will also transfer future income away from socially well-connected takers and back to makers. This will give us the capital to fund whatever happens after the ossification of the VC-funded world in Silicon Valley is complete. Oddly enough, even if unions diminish innovation in the companies where they’re implemented (and I don’t see a good reason to believe that they will) they would enhance innovation in the broader economy by reviving the middle class, and making it possible again for people of average means to capitalize new companies.

2. Wage normalization/mediocrity.

Some unions regulate wages to a degree that most software engineers (including me) find unreasonable. Public school teachers’ unions, for example, make it difficult to fire incompetents and often impossible to pay great teachers what they’re really worth. Though unions improve the aggregate wage, their reputation is for pulling compensation to the middle. This is a genuine problem that we’ll have to deal with. How do we prevent across-the-board mediocrity in compensation? Whatever collective bargaining structure we create for engineers, it shouldn’t prevent one who is genuinely worth millions per year from making that much. I’d like to see a salary floor set, but there shouldn’t be a ceiling.

There is, oddly enough, good news on this item. To tell the truth, wage normalization has already happened in the Valley. An entry-level software engineer at a large tech company will make about $120,000 per year, all-in. If she works her ass off for ten years and becomes 5 to 25 times as valuable, she’ll be lucky to make more than 1.5 times that. With ten more years, she’ll be lucky if she’s not starting to face age discrimination. Employers know that becoming and staying a “10x” engineer requires continuing access to high-quality work, which they make artificially rare (closed allocation). This makes it awkward and difficult for a Staff Engineer to ask for appropriate pay: sure, she’s adding tens of millions per year of value to the company, but that’s because the company is “generously” giving her decent projects!

In other words, we don’t have to worry about unions introducing wage normalization. It’s already there. Most “10x” engineers get mediocre wages, relative to the value of their contribution, already. Sure, there are engineers who make $750,000 per year plus stock options, but (a) that’s extremely rare, and, (b) it often has more to do with managerial favoritism than merit. Software engineers’ salaries aren’t abnormally high, and they are certainly not “inflated”, for 99 percent of us. For most of us, downward wage normalization has already occurred. If collective bargaining can deliver upward normalization, we should take it.

3. Seniority.

Airline pilots’ unions are notorious for the toxic culture existing around seniority. That is certainly a thing we should not replicate. The pilots who’ve been with the airline longest get the best routes and make large sums of money ($200,000 per year and up) while the junior pilots fly the worst routes and make very little, and this is by contract. These sorts of seniority systems are immensely damaging, both to the airline’s ability to sustain itself as well as to the quality of service, and undesirable even for most pilots. First, they make it a disaster for a pilot to be laid off, because it means starting again at the bottom of the queue. Second and relatedly, they make it nearly impossible for pilots to change airlines without damaging their careers. Third, this sort of overvaluation of seniority leads to mediocrity, because it allows the most experienced people to rest on their laurels. Fourth, while it seems to protect old hands, it also discourages people from moving into that career later in life, because they know they’ll never be able to get the good jobs. In truth, these sorts of seniority systems are a form of aggressive age discrimination, because they lock out mid-life career-switchers who might bring in new blood and knowledge from other domains.

Silicon Valley’s startup culture, with its age discrimination culture and worship of youth, seems to be at the opposite extreme. However, I think this is a false dichotomy. This attraction of employers to the young exists because they can be abused. The seniority system and rate-limiting of promotions still exist. It’s just that the employee’s upside has been eliminated, because companies can renege on the benefits that come with seniority. The seniority system itself is still very much in place. It’s just a broken one.

A few years ago, in a job search process, I submitted to a company’s pre-interview code test a solution that, I was told, was one of the three best submissions they’d seen, and this was a pretty prestigious company, so I’d guess that they’d received a lot of code samples. I interviewed and got an offer, which was… for a junior position. Blowing away the code challenge didn’t matter. This ties into a general dislike I’ve developed for code tests and “brainteasers” on interviews. I’m very good at them, but there’s an error rate for anyone: sometimes a candidate is rejected simply because the reviewer dislikes the language he chose. If there’s a chance that performing extremely well can bump a person up a rank or two, then I’d be all for these tests. It’d be to my advantage. If they’re just another hurdle to pass, bringing only downside (and that seems, often, to be the case) then I’d prefer to avoid them. Why would I waste time on a code test just to get a junior position?

In the VC-funded world, we see an amalgam of two systems on the topic of seniority. (This is a common theme of corporate capitalism, which exists to deliver the best of two systems– capitalism and socialism– for a well-connected elite and the worst of both to the rest.) If you don’t have the social connections to get funded and acqui-hired, you still have to get into the queue at the back, pay dues, and watch mediocrities get better projects and more opportunities to succeed. So it shares that in common with the decrepit seniority systems: excluding “the 1%”, the young get shafted. On the other hand, the lack of internal promotion (thus, mandatory job hopping) and aggressive performance appraisal (creating noise in the system, because when stack-ranking comes out, no one is safe; it’s like “The Purge”) make it so that everyone has to be prepared to be on the job market at any time. Thus, later in one’s career, the promises of seniority can be reneged upon. Young programmers (except for well-connected– and, increasingly, parentally connected– ones who can become founders) have to contend with seniority systems that become excuses for why they don’t get good projects, make meaningful decisions, or learn a thing or two. But twenty years later, there is no safety net for them and the “Wild West” rules dominate.

4. Lack of innovation and mediocrity.

Unions, in the interest of advancing their workers’ interests, will sometimes generate regulations that can hamper innovation. Is this a concern? It depends. If you aggressively unionized an open-allocation, engineer-driven software company like Valve, it would probably be a change for the worse. If it’s a closed-allocation software company, you lose… nothing.

With regard to the three issues raised above, I think we’ll be able to engineer a collective-bargaining arrangement that prevents those problems. Some unions cause mediocrity in wages, while others provide general protection and a wage floor but allow market wages; I think it’s obvious that software requires the second type. Sometimes, unions protect incompetent employees; a software union ought to negotiate a guaranteed severance, but not prevent bad engineers from being fired. This issue, the fourth, is the biggest. Sometimes, in the interest of protecting members’ jobs, unions introduce a lot of regulations that slow down work, and we don’t want that.

If the company uses closed allocation, the “good news” is that there’s absolutely nothing to worry about when introducing a union. Companies formalize closed allocation (with internal headcount limits, official performance reviews, and a prevailing distrust of employees) when they’ve reached a stage at which innovation is (a) de-prioritized, and (b) considered to be a job for executives alone. Once a firm is rate-limiting and restricting innovation like that, it has already decided that it doesn’t need most of its people to be creative. Fair enough, one might say, as there’s a lot of important work that doesn’t need to be innovative. That’s the kind of work at which unions unambiguously succeed. At that point, let’s bring in the unions to make sure that the workers are fairly treated and compensated. If they’re actually paid appropriately and can save money (imagine that!) they might be founders in the future.

Closed-allocation management is such an innovation killer, already, that any loss that might be inflicted by collective bargaining is just a rounding error. If a company has already decided to implement closed allocation, it’s shown that it no longer believes that it needs innovation. It’s probably right. So there’s nothing to lose.

In sum, the feared culture of mediocrity and distributive squabbling won’t be introduced by programmers’ unions. It’s already there, thanks to Silicon Valley’s management.

In fact, a properly structured professional guild is the only way that I can come up with for defeating that mediocrity. If we put a floor on how programmers can be treated and compensated, we can drive the unqualified and desperate out of our industry, which is the first step toward proper professionalization, and we can cancel the projects that aren’t worth a properly compensated engineer’s time. The main reason that so many software engineers are assigned bogus projects is that our salaries are too low. If it cost more to waste our time, we wouldn’t be assigned to the useless work that a closed-allocation shop generates.

Other benefits

I can’t predict the effect that labor unions would have on software engineer compensation. There are too many variables. My best guess is that they wouldn’t increase salaries by very much (possibly 10 to 20 percent, with more improvement at the high end) but that salaries would remain at 2014 levels, even after the current tech bubble bursts. Unions might seem unnecessary in a time when mediocre engineers can earn $140,000 per year, but there’s no reason to be sure that salaries will stay at that level, even for the strong engineers who are worth several times that in any economic climate. We ought to start organizing now, rather than waiting until we ostensibly need to.

Moreover, there are other gains that would improve software workplace cultures immensely. The fact is that, since most of us will never experience the one-in-a-thousand upside outcomes of “fuck you money” or direct promotion to partnership ranks at Sequoia, we’re better off abolishing the Wild West employment culture that exists now. It would be tolerable if it delivered real upside and autonomy to us, but it doesn’t.

Here are some specific protections we could get through a union:

  • We could destroy stack ranking and mandate that performance review histories not be part of a company’s internal transfer process, eliminating a large class of professional extortions and bringing companies closer to the open-allocation ideal.
  • We could put an end to exploitative terms in employment contracts such as binding mandatory arbitration, employer ownership of side projects, and one-way non-disparagement clauses that exist only because software engineers are too trusting and many don’t read their offer letters beyond the salary and title. (Yes, I agree that it’s “their fault” when they get shafted because they didn’t read their contracts. But it’s unfair that the wiser among us have to compete with these clueless fuck-ups in a race to the bottom.)
  • We could require employers to allow employees to have representation (legal and career-coaching) in the room when negotiating with management regarding performance appraisal, terminations, references and introduction clauses.
  • We could reduce the incidence of back-channel references, blacklists, and “no poach” agreements by setting up a union tip line, and by providing legal assistance to victimized employees.
  • We could have matters of negotiation that are embarrassing for the individual, such as those surrounding disability accommodation, workplace privacy, severance and performance appraisal, managed ex ante, for all of us, by experienced professional negotiators.
  • We could eliminate (or, at least, curtail) the sharing of HR data, such as salaries, titles and performance reviews, across companies (typically, into third-party “data collection” services), a probably-illegal practice deployed to reduce salaries and to blacklist suspected unionists.
  • For freelancers and entrepreneurs, we could eliminate the “we’ll call other clients/investors and turn off unrelated interest” class of professional extortion that is often used against them.

Only an insane person would see the above protections as undesirable. They’re necessary for economic and cultural reasons. It’s astonishing and barbaric, for example, that a software engineer can be put on a PIP without the right to have a representative (including, if he wishes, an attorney) in the room with him when that notice is given. We ought to fix that. It’s not just an issue of finding the right price point for our labor; it’s a critical moral issue that we ignore at our peril.

Where to look next

Professional athletes have unions, and have not experienced wage normalization. Their work and rewards have not been drawn to mediocrity. They still compete incredibly hard against each other. The same, I would argue, applies to Hollywood. It’s heavily unionized, and yet, the product is far from mediocre. (Some might dislike much of what Hollywood produces, but in terms of success on the global market, the U.S. entertainment industry excels. On its own measurable terms, the product isn’t mediocre.) Rather than producing mediocrity and stifling innovation, these unions serve to protect workers (and their careers, and their reputations) in a complex, hit-driven business where talented individuals can add immense value, but in a way that’s hard to measure. Software is also a complex, hit-driven business of the same kind, and we deserve the same protections. We have a need to protect our reputations and health, and to avoid being “type-cast” and losing our personal brand, and we have the right to representation that enables us to do so.

I don’t know the inner details of Hollywood’s unions and I can’t say, with any real confidence, that their model is perfect or right for us. I’m not sure. I will say this much: that would be a place to start looking. We have several counterexamples to the “unions produce mediocrity and kill innovation” argument that is made every time someone discusses collective bargaining for software engineers. These give us starting points for this exploration.

Is there an alternative?

I’ve established that nothing is lost in unionizing a typical, closed-allocation software company. The failings and corruption risks of unions are minuscule in comparison to the proven failure and corruption of typical corporate management. If your company uses stack ranking (to my knowledge, Google still does and Amazon does) then you should unionize it. Just killing off stack ranking will show the men upstairs enough momentum to properly scare them. That would be a heroic start.

With regard to open-allocation, innovation-friendly technology companies, I’m less convinced that unions are necessary. Some of the protections I’ve described are owed to software engineers in any context, but a company that commits to open allocation is already offering many of those. The few companies that offer constitutional protection against misguided management– and I’m not talking about vague platitudes about not being evil (directed at Microsoft, which abolished stack-ranking, while Google still has it) or “20 percent time” policies with no teeth– may not be in need of unions. The most progressive ~1 percent of technology firms are already providing much of what unions are there to deliver.

If you’re a technology manager or small business owner and you don’t want the need for unions to exist, the best strategy is to adopt a transparent and constitutional style of management. I’ve studied open allocation a fair bit, and for technical innovation, it is the only solution within current knowledge. The fear I have with regard to the concept is that, in the future, it might be bastardized like “agile” or “object-oriented programming”. After all, “open allocation” is, itself, just two words. It’s the spirit behind the concept that is important. A more general, infrastructural ideal with much broader applicability is constitutional management. Some companies have an “Employee Bill of Rights” that can only be modified by a secret-ballot majority vote. That’s the kind of thinking that a technology company needs if it wants to avoid the need for a union.

However, expecting progressive management to take back the Valley is not, sadly, realistic. It’s time to give up the dream of a return to the 1970s-era middle-class, union-free Silicon Valley, because that’s not going to come back; and to disabuse ourselves individually of the notion that an engineering position at a VC-funded startup is 3 years’ distance from being a well-funded CEO, because it doesn’t work that way either. Collective bargaining may be just a starting point, and maybe it’s not the final right answer, but it’s time to explore the concept.

Are Haskell engineers second-rate? (Answer: no.)

Before I risk offending anyone with my provocative title, I’ll give away the answer: it’s “no”. There is, however, an interesting discussion to be had here. Not to pick on Haskell or Erlang or Clojure in particular, but Piaw Na made this comment in this Quora answer:

A fixation on programming languages is the sign of a 2nd rate engineer/computer scientist.


Even when I was hiring to fill an Erlang server position, I found that the Erlang specialists were much worse engineers than hiring a great all-rounder and having him learn Erlang (or whatever) to fill the position.

Na’s argument is similar to the attitude that is in vogue in technology companies founded in the 1990s, such as Google and Amazon. At the time, programming language research was considered to be a dead and probably useless field. Large applications were written in C++ and possibly Java. Python, ten to twenty years ago, was considered cutting-edge and risky, and languages like Lisp or Haskell were hardly used at all. Paul Graham deserves a lot of credit, not just for the decision to use Lisp for his startup, but for his eloquent defense of it and expressive languages (like Python) in general. From his essay, “The Python Paradox” (all emphasis mine):

In a recent talk I said something that upset a lot of people: that you could get smarter programmers to work on a Python project than you could to work on a Java project.

I didn’t mean by this that Java programmers are dumb. I meant that Python programmers are smart. It’s a lot of work to learn a new programming language. And people don’t learn Python because it will get them a job; they learn it because they genuinely like to program and aren’t satisfied with the languages they already know.

Which makes them exactly the kind of programmers companies should want to hire. Hence what, for lack of a better name, I’ll call the Python paradox: if a company chooses to write its software in a comparatively esoteric language, they’ll be able to hire better programmers, because they’ll attract only those who cared enough to learn it. And for programmers the paradox is even more pronounced: the language to learn, if you want to get a good job, is a language that people don’t learn merely to get a job.

These passages don’t necessarily contradict each other, but they suggest different hiring strategies. Piaw Na would suggest that you should be more leery of people who identify strongly as Clojure or Haskell engineers, while Paul Graham suggests that you want programmers who hold strong enough opinions about languages to invest themselves heavily in the more obscure ones.

So who’s right? Obviously, they both are to some degree; otherwise, this wouldn’t be an interesting essay.

Programming languages are important to great software engineers. There’s a “languages don’t matter” attitude among managers and “architects” in many software companies, held by people who haven’t written a line of code since the Macarena came out. People who actually still write code care immensely about their tools. All that said, Piaw Na is correct about the tendency of monoglot programmers toward mediocrity. Great engineers want to use the right tools for each job, and programming languages are an area where there isn’t one right tool for every job (far from it!).

Whether it’s Java or an arguably superior language like Haskell or Erlang, a programmer who refuses to learn things that are outside of his comfort zone is likely to be a mediocre one. While any programming task can technically be achieved in any language (Turing completeness), languages vary widely in practical capability, and there really isn’t a single language that dominates the others, because programming is diverse. Some algorithms are nearly impossible to write efficiently without manual memory control, which necessitates using a language like C. For a typical production server that should never fail, Haskell is probably the strongest choice. For interactive exploration, Clojure’s first-class “repl” (interactive console) is hard to beat.

I’ve bashed Java a fair amount (perhaps unfairly) and that’s because I absolutely loathe the “Java Shop” culture. Giant XML files, AbstractFactoryManagerSingletonFactory patterns, IDE-dependent builds… all that nonsense can go fucking die in a taint fire. The lack of taste in the corporate Java culture is astonishing. It generates ugly code with no personality, code that falls apart rapidly because no decent engineer wants to maintain it. The monoglot nature of the corporate Java community is, quite likely, correlated with these cultural problems. When you have incurious developers, short-sighted and clueless management, and a language that basically works as long as you throw enough engineers at the problem, you’re simply not going to stumble upon a reason to use anything other than Java.

The noticeable trait of mediocre Java developers (and it’s probably similar for .NET, and maybe even C++) is that their conception of programming is limited to what can be done in Java. Their bosses won’t let them use other languages, so why learn anything but Java? Ask one of them how an assembler or an operating system works and you’ll get a blank stare. Ask about machine learning or computer graphics and you’ll hear grumbling about how linear algebra was taught at 8:00 am. They’ve forgotten how to write programs and haven’t run one since college; they just make classes all day, and it becomes the job of “some smart guy” (the same “some smart guy” who gets you productive again if “your Java”, meaning your IDE, breaks) to staple them together and throw something into production. The corporate programmer culture has dumbed software engineering down to this, and the result is that CommodityJavaDrones haven’t been near actual computer science in years, even if they’re up to speed on how to write “user stories” and groom “backlogs” and play politics.

It’s not Java’s fault. The language has its warts, but business-driven stupidity didn’t come into being in 1995 and it would have glommed onto something else if Java hadn’t come along. James Gosling didn’t intend AbstractFactoryVibratorSingletons when he invented the language. He wanted a fairly low-level, C-like language that targeted the JVM, and for the problem he was trying to solve, Java worked well.

Comfort zones are for losers

Piaw Na’s right that not all programmers in the more interesting languages (e.g. Erlang, Haskell) are first-rate. Bad code and sloppy engineering can be found in all languages. Sometimes, a neophyte will have the luck of landing in a first job using a more expressive language, but still fail to grow. It’s rare, but I’ve seen it happen.

This is an industry in which anyone who wants to be decent can’t afford to have comfort zones. If you cop an attitude of, “I’ll never use a statically typed language” or “I just don’t do low-level” or “I stay away from the browser because Javascript is ugly” or “I’ll never understand operating systems” then you’re probably not going to become a first-rate programmer. It’s paradoxical and difficult, because you have to be very selective in what you work on to keep learning and stay sharp, but you can’t rule out whole areas of computer science, either. (You can rule out process bullshit like “story points”; that will just rot your brain.)

If you’re a curious person, you’re not going to ignore a whole area of computer science just because it’s difficult. Things change too often. Obviously, it’s fine (and expected) to have preferences. It’s setting up brick walls that I have an issue with. Of course, we all create brick walls by necessity when we’re starting out and we need to be selective in what to focus on (lest we get a mediocre knowledge of many things). I’d just argue that the better computer scientists are the ones willing to break those walls down when there’s a good reason to do so. If you’re doing computer science right, then every problem should be new, and the problems you take on should tend to grow in reward, complexity, and challenge. The solved, repetitive stuff should be automated when possible.

Paul Graham warns about the danger of using a language as a comfort zone by introducing the concept of “Blub”. Blub is a stereotypical mediocre language that a monoglot, corporate programmer uses as his model for all forms of programming. The Blub programmer sees lower-level languages as braindead, and higher-level languages as abstract and just plain weird. Blub is, of course, not a specific language but an artifact of an attitude. Java, at least of the enterprise variety, is probably the Blubbiest of Blubs, but there’s nothing that stops a person from taking Haskell as his own personal Blub.

So what makes Java such a common Blub? Part of it, I think, is that the language does many things well enough. If you are going to be a one-language programmer (or, worse, a one-language company, a “$LANG Shop”) then Java isn’t such a bad choice. It’s not hard to learn, it’s possible to write performant code, and the library support is strong. Let’s look at the standpoint of a middling engineer, because good engineers are almost certainly going to be polyglot, and crappy ones don’t care about languages or software engineering. This middling vantage point is also useful because in business-driven engineering shops, middling engineers tend to be the ones emerging as leaders. A middling Python engineer is going to realize, quickly, that some routines need to be written in C, because Python generally isn’t fast. (Crappy engineers don’t think or care about performance. It’s just mathy voodoo to them.) He might not be proficient in C or enjoy using it, but he’s not going to write the language off. The middling Java engineer will never need to know what C even is.

As far as Blubs go, one could do worse (on language fundamentals) than Java. It’s not PHP or Javascript or Visual Basic. Culturally, however, it tends to be a toxic Blub. Na points out that someone who refuses to code outside of Erlang is probably not a first-rate developer. Fair. I’m a huge fan of Haskell and would probably pick it for most projects if I called the shots, but I’ve also exposed myself to Clojure, C, Python (for the data science libraries) and Scala. I certainly think that comfort zones are a sign of a second-rate programmer. Even so, the CommodityJavaDrones are often third-rate. What’s more pathetic than holding fast to one’s own comfort zone? Living inside someone else’s.

Business-driven engineering (or: waterfalls of sewage)

As an aside, one of the reasons why first-rate engineers don’t seem as adamant about programming languages is that, often, they have the credibility to make technical decisions. (If they don’t, they’re in a company that doesn’t recognize talent, and should leave.) If you’re junior and powerless, you care a great deal about whether you wind up at a “Java shop” or a “Python shop”, because you’re not going to be allowed to use anything but the house language. If you’re senior and have clout, you tend to work on new projects and get to choose the tools. Great engineers avoid business-driven engineering and closed allocation, at any rate. Of course, no one is born a first-rate engineer, so in our “good, working toward great” years, we do have to pick companies based on those sorts of signals. If a company’s a Java shop, that sends a really bad signal.

Java isn’t my favorite language, and I don’t have much use for the (false) claim that languages don’t matter, but it is the right language for some problems. If you have to be on the JVM, and can’t afford the (small, but nonzero) overhead of Clojure or Scala, then Java’s the right choice. As I’ve said, the ugliness of Java isn’t all intrinsic to the language, but has a lot to do with the anti-intellectual culture of corporate Java.

For those who’ve been lucky enough to avoid the hideousness of most programming jobs, here’s how many software companies work: business comes up with the ideas and defines the work, product managers intermediate and sit in on interminable meetings, and programmers just implement. Most “scrum teams” are just ticket shops. The engineer has no autonomy. This is business-driven engineering. I’d call it “waterfall of sewage engineering”, but decrying a “waterfall” makes it sound like I support much of what is called “agile”, and I don’t. The problem with “agile” is that it’s still closed-allocation, business-driven engineering, meaning that nothing was accomplished. Trying to “fix” business-driven engineering is like putting salt on a turd to make it edible: it just doesn’t work that way.

This may be paradoxical, but when you have an engineer-driven firm, you get better engineering and better business. See, business-driven engineering rots the mind, because it takes what should be a creative and challenging discipline and turns it into “Write me seven classes and 17+i story points by Friday.” It’s also part of why there’s an age-discrimination problem in technology; if you spend your 20s doing that crap, you actually will be a corporate executive (as in premature dementia; not necessarily as in rank and salary, unfortunately) by age 30.

Are there excellent engineers who happen to use Java? Absolutely. Are there not-great programmers in the Haskell community? Sure. If you really want to encounter that underclass of non-programming programmers, though, you have to descend into the shit-lava inferno of business-driven engineering.

As for bets and things…

Hiring is, to a very large extent, about making the bets with the best odds. Does using Haskell guarantee that you’re going to have excellent programmers? No. And Piaw Na is right that talented programmers unexposed to it shouldn’t be written off in hiring, because great engineers can learn new languages quickly. (It takes longer to learn an average corporate Java codebase, those being verbose and generally of low quality, than to learn Haskell or Clojure.) Do the odds favor using languages like Haskell and Clojure, if you want to get mostly wheat in your hiring process? Absolutely. Not only do you get (on average) better developers in those languages, but you’ll also have access to stronger communities.

I’ll say this, too, for most Haskell and Clojure engineers that I know: most of them aren’t monoglots who stick to their comfort zones. At a conference or talk dedicated to these more “elite” languages, there are plenty of presentations that bring in ideas from other languages and paradigms, whether it’s logic programming or assembly hacking. What makes these languages and communities different and somewhat special isn’t that there are no mediocre engineers, but that the average engineer is quite strong. That creates an energy, and a sense of friendly competition, that you just don’t see in an enterprise Java shop.

The Python Paradox doesn’t guarantee great hires all the time. That requires, well, great hiring. But I think it’s pretty obvious, on the “do languages matter?” debate, which side the odds favor.

Tech hiring, poker, and (not) playing to win.

If you’ve been around this industry for long enough, you’ve heard plenty of hiring managers complain about the difficulty of attracting and keeping good people. Often, you hear excuses that bear no resemblance to reality, like “There just aren’t any good people out there” (not true) or “Talented people just don’t want to live in the Midwest” (not true, and goddamn insulting to the Midwest). When you hear those sorts of things, you’re dealing with a losing player who doesn’t know why he’s losing. Sadly, that’s the more common case. It’s downright refreshing to hear a middle manager admit something like, “We can’t get talent because upper management doesn’t want to pay for it, and because we’re solving a stupid problem, and because we’ve fucked up our reputation with the past 5 years’ worth of short-term thinking”, but it’s not terribly common.

Winning pots vs. winning money

I’m going to use poker as a metaphor for hiring. The connection’s probably not obvious yet. The reason it’s interesting to me is that gambling is an activity at which bad players (a) can think that they’re winning if they aren’t keeping track, and (b) usually have no insight into what they’re doing wrong. With poker, a common mistake of inept players is to focus on winning pots instead of money. Over time, skilled and unskilled players get the same share of good and bad hands, and the inept players tend to win slightly more (!) than their fair share of pots. They lose money, however, because not all pots are equal in value. If you play too many hands, you’ll win more of the small pots but lose your shirt on the larger ones.

The dopamine-fueled thrill of bluffing with a bad hand is powerful, but hands that can be won so easily are rarely worth much. The high-impact rounds of poker are the ones where multiple players have strong hands and see it through to showdown, not the ones where you can bluff the rest of the table out of the game. Good players focus on the big pots. (As an aside: good players will bluff on occasion, but it’s not to win the free pots, which are of low value. You bluff so that, when you do get a killer hand, others will play against you. If you never bluff, others will fold when you play aggressively, and you won’t make as much money.) Good players don’t set a goal of winning as often as they can, because that’s a fool’s game. They aim to maximize how much they win on good hands, and minimize their losses on the bad ones.
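The pots-versus-money distinction can be made concrete with a toy expected-value calculation. All of the outcome counts and pot sizes below are invented for illustration; the only point is that win frequency and profit can move in opposite directions.

```python
# Toy illustration of "winning pots" vs. "winning money".
# Outcome counts (per 100 hands) and dollar amounts are made up.

def profit_per_100_hands(outcomes):
    """outcomes: list of (hands_out_of_100, net_result_per_hand)."""
    return sum(n * net for n, net in outcomes)

# Loose player: takes down many small, uncontested pots,
# but bleeds money in the big showdowns.
loose = [(60, +10), (40, -40)]

# Tight player: wins fewer pots, but the ones that matter.
tight = [(30, +80), (70, -10)]

print(profit_per_100_hands(loose))  # -1000: wins 60% of pots, loses money
print(profit_per_100_hands(tight))  # 1700: wins 30% of pots, makes money
```

With these (invented) numbers, the player who wins twice as many pots loses money overall, which is the essay’s point in miniature.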


What does the above have to do with hiring? Well, let’s look at just one of the cultural negatives of Silicon Valley: ageism. Why might such a prejudice be there? The answer is that most hiring managers are playing to win pots, not to win money.

The ageism and the low status of software engineers exist because most tech companies’ managers are running a pot-winning strategy. Let’s say that a 1.2-level programmer has a market salary of $100,000 per year and is worth $150,000 to the business; at the 1.8 level, we have a market salary of $150,000 and a business value of $500,000. (Are those numbers accurate? It depends on the problem. For a typical tech company, they’re about right.) That means there’s seven times as much true profit in hiring the 1.8 programmer. Doesn’t this predict the opposite of age discrimination? From those numbers, you’d expect employers to be chasing the experienced engineers, not shunning them. That’s what would happen if these guys were playing to win money, and not pots. So why doesn’t that happen?
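The arithmetic behind the “seven times” claim, using the rough salary and value figures from the paragraph above (they are estimates, not real market data):

```python
# True profit = value to the business minus salary paid.
# All figures are the rough estimates used in the text.

def true_profit(value_to_business, salary):
    return value_to_business - salary

profit_1_2 = true_profit(150_000, 100_000)  # 1.2-level engineer
profit_1_8 = true_profit(500_000, 150_000)  # 1.8-level engineer

print(profit_1_2)                # 50000
print(profit_1_8)                # 350000
print(profit_1_8 // profit_1_2)  # 7: seven times the true profit
```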

The common explanation given for this is a sociological one. Companies need a few 1.5 engineers, and possibly a 2.0 chief architect, but they don’t need many of them. That’s why, many argue, it’s hard for experienced and capable older engineers to find jobs. (For the incapable older engineers… forget it. Time to become a partner in a VC firm.) This explanation is a cop-out. If you can’t hire talented people, then you need a pyramidal structure and infantilizing policies. However, the success of Valve’s open allocation shows that one can build a company entirely out of high-quality people. You just need progressive management if you want to keep it that way. It’s a vicious cycle. If you structure your workplace as a pyramid, you’ll have a lot of untalented people at the bottom, and therefore need to keep the hierarchy in place.

The real explanation for this phenomenon is that hiring managers are just bad at this poker game. First of all, they’re playing for a high frequency of wins, and not for the big wins. If you tailor your hiring strategy toward mediocrity, you’re going to have a lot of catches, but they’re going to be of the “spend $100,000 to make $150,000” variety. This actually doesn’t scale well at all, because it loads the company up with inexperienced or mediocre engineers. There are plenty of nonlinearities in software engineer hiring, and building a large team is, if it can be avoided, undesirable (see: Brooks’s Law). Second, they mark their successes not based on the true profit (the difference between an engineer’s value-add and the cost of hiring her) but based on the profit relative to the market rate. It’s actually a $360,000 win to hire a 1.8 engineer at $140,000 (at least, if you staff her on 1.8-level work), but HR marks it down as a measly $10,000 gain, because that’s the difference between the accepted salary and the market level of $150,000.
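The accounting gap in that example, spelled out with the same numbers:

```python
# A 1.8-level engineer hired at $140,000, with a market salary of
# $150,000 and a value to the business of $500,000 (figures from
# the text, used for illustration only).

value_to_business = 500_000
market_salary = 150_000
accepted_salary = 140_000

true_profit = value_to_business - accepted_salary  # what the hire is actually worth
hr_scored_gain = market_salary - accepted_salary   # what HR records as the "win"

print(true_profit)     # 360000
print(hr_scored_gain)  # 10000
```

The scorekeeping sees a $10,000 win where the business actually captured $360,000, which is why managers chase salary discounts instead of talent.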

Hiring managers operate as if they’re trying to maximize the bulk number of wins itself. The going strategy, then, is to target the young, because they are the most likely to under-value themselves by large margins. A 30-year-old who is any good knows it. A 23-year-old on an H-1B visa has no idea where he stands relative to other programmers. You can probably get him to agree to a salary that’s $30,000 per year below market. From an HR perspective, that seems like a massive victory. Really, it’s not. Compared to the gain in hiring a more experienced programmer, even at market salary, that $30,000 is a rounding error.

“Toxic wins”

Gambling and business both have a pattern that I call the toxic win. That’s when a bad strategy delivers wins that “feel good”. Subjectively, it feels like it’s working, because of people’s optimistic memory biases. Most gamblers don’t keep good track of their wins and losses and, in business, it’s nearly impossible even to measure (in the short term) wins and losses at all.

To use poker for an example, a toxic win occurs when an inept player aggressively bluffs, wins a small pot uncontested, and congratulates himself. What he learned: bluffing aggressively makes you win more often. What he should have learned: the other players had weak hands. Bluffing aggressively when others have strong hands is a quick way to lose a lot of money in the hands that actually matter.

In hiring, the toxic win is when a talented young programmer sells himself egregiously short. Coming out of college, he values himself relative to his peer group (0.9 to 1.2) when he’s actually a 1.4. It doesn’t happen often, but once a hiring manager gets that kind of win, it becomes expected. It’s like the “short commute bias” that explains chronic lateness. In many cases, it’s not that chronically late people are inconsiderate, but that they suffer from an optimistic memory bias: based on that one time that the road was absolutely clear, that they could drive 80 miles an hour, and they got to work in 20 minutes (in a commute that typically takes 30) they’ve begun planning as if the actual time-cost of the commute is 20 minutes. This means that they’re typically late, and usually have excuses (bad traffic, poor weather, misbehaving pets or kids) for being so because, goddamn it, that commute should only take 20 minutes. In short, a rarity is presumed to be a permanent entitlement, and suboptimal decisions result.

You see the short commute bias all over the place in business. It explains why managers (and engineers) systematically underestimate how long it takes to complete a project, but you also see it in HR. As soon as That One Kid takes a $25,000 drop relative to an appropriate salary (never mind that he left, one year later) hiring managers take that to be the new appropriate salary for that level. When they struggle to fill the position, trying to replicate that rare toxic win rather than building a business, they bitch about “inflated salaries” and a “talent shortage”.

Mixed messages…?

It’s subtle, but I’ve actually argued two cases that seem to contradict each other. My first complaint is that hiring managers over-focus on a high frequency of mediocre wins, to the detriment of experienced software engineers, and fill their companies with inexperienced or mediocre people that they often don’t know what to do with. This accuses them of over-hiring. My second complaint is that they suffer from “short commute bias” and that they therefore miss opportunities to hire, because they’re holding out for that rock star who severely underprices himself. This accuses them of under-hiring. Oddly enough, both are true. (Generally, it’s not the same people who are under-hiring and over-hiring.) But why is this? What would impel people toward two opposing kinds of incompetent play?

Oblique arbitrage and social status

If you’re an economist, you might view wage-setting and talent standards in such terms and expect software companies to staff themselves with the (seriously underpriced) more senior engineers. This assumes that companies are rational actors. But companies don’t “act” at all; people within them do. As it turns out, people are more sensitive to social status than quantifiable economic gain.

We can’t measure social status precisely, but we can estimate it. Every corporation conceives of itself as a meritocracy; there’s no company that cites “Nepotism, Mediocrity, and Chicanery” in its “core values” statement. Job title, salary, autonomy, and reporting structure are all forms of social status that, at least in theory, the company attempts to correlate with a person’s capability. Often, these assessments are way off; but companies tend, at least, to be consistent in how they are inaccurate (overpaid people will also be over-titled and given more difficult projects, for example). In software, I’ve developed a scale for estimating engineer maturity. It’s not perfect, but it’ll work for our purposes, and I think it reasonably models social status at a typical tech company. An engineer considered to be at the 1.5 to 1.7 level will usually be called a senior software engineer. If the person’s estimated at 1.8 to 2.0, you’ll see a title like staff or principal given. At 2.1 to 2.3, you start to hear terms like fellow. However, titles are, in general, the least interesting variety of social status. More interesting are compensation (which we’ve discussed) and project allocation. If you’re perceived as a 1.5-level engineer, you might expect the company to line you up with 1.5-level work. In practice, most people get work below their level of capability, because companies want to avoid the risk involved in 1.5+ level projects whenever possible. They still want to align the people they consider best with the hardest projects, but usually the desire is to have people working half a point below their actual capability, because employers believe that it minimizes risk.

From day to day in the workplace, people don’t assess their wins and losses in terms of dollars, but in social status. If you get a 1.5-level engineer to accept terms of employment (including, but not restricted to, compensation) appropriate to 1.4, then you’ve made a 0.1-point gain. But if you get him to accept a package appropriate for a 1.2 engineer, that’s a 0.3-point gain: three times as much “winning”! The economics aren’t really considered. A huge win, in this calculus, would be to have a CS doctorate from MIT fixing bugs. In terms of social status, it’s a great brag: “I’ve got MIT PhDs sweeping my floors”. In economic terms, it’s pissing away millions in lost opportunity.

Do I mean to suggest that companies, or even managers, are out to deliberately underemploy people? It’s probably not in their interest to do so. After all, if you give a 1.5 engineer the work, autonomy, and conditions appropriate to the 1.2 level, you’re probably not going to collect 1.5-level work. What most companies (and managers) want is excess capacity: a 1.5+ who’s willing to take 1.2-level conditions and do 1.2-level work, but can bring out the bigger guns when needed.

What am I suggesting? Some say that companies are in the business of buying low and selling high. Not so. Often, buying high and selling higher is a better move. It’s the difference that matters. People intuitively understand this, and make good choices when the decisions are quantifiable and purely economic; but when shit gets sociological, and the nonlinear transforms come out, they start making terrible decisions. A 0.3-point difference between quality of programmer and work conditions isn’t what you should be optimizing for; social status “points” don’t really matter when you ought to be focused on delivering value to customers and making money. You should be in the money business, not the social status business. If you systematically under-employ and under-reward people, that’s not an arbitrage; you’re probably pissing away millions of dollars in lost opportunity.

Results… ?

One typical corporate-political tactic is to use phony existential risk to argue for what one wants. “Using <outdated technology X> will annoy me and hurt my career” evolves into “We’ll get no talent if we continue with <X>.” This is how we get into the religious wars for which technology is notorious. Because many software engineers are petty, short-sighted, and socially inept, premature escalation to management is commonplace (and that’s one of the reasons why we get no fucking respect). Managers are so used to hearing “we’ll get no talent if we…” that they pretty much ignore that complaint. The problem, of course, is that many of the things that management does (closed allocation, low compensation, micromanagement) are genuinely talent-hostile; but our tendency to present minor differences of opinion as eschatological matters has stripped us of any credibility when we’re actually right. Closed allocation is talent-hostile and does destroy technology companies, but not with immediacy (and, sadly, its tendency toward decline takes long enough that most executives don’t care).

I’ll say one thing: the “we’ll get no talent…” exaggeration is inaccurate, and a tad bit offensive. I’ve worked in some mind-bogglingly shitty companies, with awful health benefits and stupid policies all over the place, and they all managed to have at least a few talented people within them. Those high-talent people may have been systematically underutilized, but they were there. A company that doesn’t offer relocation, skimps on health benefits, or deducts sick leave from an already tight vacation allotment is clearly not playing to win. Or, in the poker analogy, it’s playing for irrelevant small wins while missing high-impact opportunities. That’s awful, and it leads to mediocrity, but the “we’ll get no talent” argument is just empirically wrong.

If you don’t play to win, you still get some talent. The difference is in degree. Do you want to be MIT, or a second-tier state school? The second-tier state schools still have some talent, and their best students would easily fit in at the Harvards and MITs of the world, but the synergistic “energy” that you get when a large number of talented people come together is usually not there. Since organizations are driven by group dynamics rather than talented individuals, a company needs that “energy”… but usually has no idea how to get it.

Of course, it’s quite possible that one’s company doesn’t need or want to be an MIT. There’s a lot of unglamorous work that needs to be done in this world. Even the most wrongheaded, talent-hostile companies can attract smart individuals. That’s not even hard to do, because even smart people need to eat. If you want your organization to lead, however, you’ve got to play to win the game, and stop optimizing for silly, small-ticket micro-victories. The good news (for the progressive executive) is that the technology industry is so badly run, right now, that good leadership is a welcome and extreme rarity. There’s a lot of untaken reward sitting out there for the few that have the courage to come out, show genuine leadership, build talent-friendly companies as opposed to social media cantrips, and actually play to win this game.

Can “Agile” break the Iron Triangle? Can open allocation?

I’ve written a lot about the MacLeod-Gervais-Rao model of the corporate organization, which I’ve dissected at length starting here. This exploration is going to start with many of the same ideas, refined by time.

In the MacLeod model, people in a company are either sorted or self-select into three tiers, each uncharitably named. The “Losers” at the bottom of the corporation do the grunt work; the “Sociopaths”, typically at the top or on the way up, tell people what to do and take most of the rewards; and the “Clueless” get caught in the middle, working overtime to correct the mistakes of disengaged people below them and the unapologetically self-advancing people above them.

It’s a sound model. It accurately describes many corporations. To be fair, the names are a bit negative, and possibly inaccurate. “Losers” aren’t disliked or contemptible people, but only economic losers who trade their time for a pittance, in exchange for low work demands, low or nonexistent income volatility, and infrequent changes in their job duties and geographic location. They’re selling off their risk because they prefer comfort over upside. “Sociopaths” are the risk-seekers who aren’t afraid to break rules. Organizations need such people in order to survive, but they don’t need many, due to the pyramidal shape of a corporate hierarchy. One rule-breaking maverick is an asset; ten is a problem. Such people get rapidly promoted or fired, with not much middle ground. The “Clueless” in the middle are the “true believers” who think (in spite of all objective evidence) that their organizations are meritocracies. Sociopaths (even the good ones) aren’t afraid to cut deals and play politics and grease palms, but the Clueless tend to believe that all problems can be solved by “working harder”. Their unconditional work ethic (as opposed to the conditional one of the Sociopath) makes them a natural match for unpleasant tasks and dead-end jobs.

Model vs. reality

Is this depiction of corporate life accurate? Can there be “Losers” in high positions and “Sociopaths” at the bottom? Of course there can be; it’s just rare, because organizations work against it. Moreover, these traits are hardly immutable. A person might be a “Clueless” in one context and a “Sociopath” in another. The MacLeod model is a generalization and an attractor state. What it describes best are the organizations that have lost most of their purpose for existing, or that never had a vision beyond the self-enrichment of a few. (That’s most organizations, at least in the corporate world.) At such a point, self-interest is the dominating motivation of most players, and the idea that there’s a higher mission is just a narrative for the Clueless. Since not everyone can be a leader or be made rich, organizations adapt by evolving toward this state of affairs, which is highly stable. I’ll get to its specific flaws, and its relationship to software methodologies like “Agile” and open allocation, later on. Before doing that, it’s most useful to discuss why this tripartite sorting of people occurs.

The Iron Triangle

Organizations tend to want three things from people:

  • subordinacy (S), or “Does this person value the organization’s interest above her own?”
  • dedication (D), or “Will she work as hard as she can?”
  • strategy (T), or “Does she work on the right things?”

The most nuanced of these is the first, subordinacy. Of the three Iron Triangle traits, it’s the one I value least, but in order to discuss it properly, we first have to define it.

I don’t champion the constitutionally insubordinate. In truth, I don’t have much use for such people, the ones who oppose authority simply because it is authority. Sometimes, subordinating is the right thing to do. For example, stopping and waiting at a red light is an act of subordination for which the alternative is, almost always, reckless. It’s illegal, and for good reasons, to run red lights. Moreover, while I don’t value one-sided loyalty (i.e. loyalty to a group or organization that will not return it) I do value integrity. Plenty of actions that are insubordinate are also dishonest, toxic, and wrong. All of this is to say that the question of when (in)subordination is a virtue vs. a vice is a deep and complicated one. At least for now, I prefer to step away from the larger moral questions and focus on the (typically, morally neutral) matter of an organization’s upkeep.

While subordination and insubordination are matters of action, subordinacy (S+) is one of attitude and personal strategy. (Unfortunately, the adjectival forms of both words, “subordinacy” and “subordination”, seem to be the same word: “subordinate”.) When an act of insubordination is ethically neutral, will the person do it? Will he use his organization’s reputation, without prior permission, to bolster his own? Will he focus most on the tasks that further his career, rather than those that are most important to the organization? Will he assist a third party in negotiation with the organization, in order to have a friend (or, at least, an ally or a favor owed) in the future? These are questions where insubordinacy is ethically neutral. It’s not the noble insubordination of the Civil Rights activists; nor is it the unethical, degenerate sort; it’s the neutral insubordinacy of a careerist who’s figured the game out. The person with subordinacy (S+) will err on the side of favoring the organization, preferring stability over the opportunity for personal gain. The person without subordinacy (S-) tends to favor the interests of people (including himself) for multiple reasons. The charitable view is that the insubordinate favor individuals over organizations because individuals are more likely to have coherent vision and the strategic insight necessary to move society forward. The less favorable view is that they favor individuals because people have a memory, and organizations don’t. If you tell a talented young engineer, while your company is courting him, that he could get $20,000 more just by asking for it, there’s a chance that, five years down the road, he’ll pull you into his startup as a co-founder. If you favor the organization by not sharing this information, you’ll get nothing in return. In truth, as an S-, I think that both of these views are correct. We favor individual advancement (of ourselves, and of others whom we care about) over organizational upkeep because individuals are more coherent and strategically competent, but also because we won’t waste loyalty on bureaucracies that would never return the favor.

Without further exploring the rabbit hole of subordinacy, let’s return to a question that I’m sure many readers have asked. Why is this set of three traits (subordinacy, dedication, and strategy) called an Iron Triangle? The answer is that an organization gets at most two from each person. To see why that is, it’s useful to examine the eight combinations formed by the absence or presence of each trait. People with zero or one of the Iron Triangle traits tend to be organizationally inert, so I won’t focus on them. At 2 out of 3, we get the MacLeod archetypes. The strategic and dedicated, but not subordinate (or, S-D+T+) become the high-energy creative (or, in some cases, destructive) employees (Sociopaths) who’ll either be promoted quickly or fired. Their more risk-averse counterparts, on the other hand, will select subordinacy over dedication. They have a good sense of what projects are worth working on and how hard to work (or, conversely, how lazy they can be and get away with it) but, if assigned to doomed projects, they’d rather stay in place and slack than put themselves at personal risk by trying to right the ship. Those people, being subordinate and strategic but not dedicated (S+D-T+) become the MacLeod Losers. They might know how to improve things, but they know that there’s more job security in following orders. Finally, those with the unconditional work ethic, the subordinate and dedicated but not strategic (S+D+T-) tend to end up doing the jobs that no one wants. This can advance them into middle management, but they’ll rarely get a decision-making role. They tend to get the responsibility-without-power kind of “management” that is avoided by Losers (it’s too much work) and Sociopaths (it’s a dead-end job) alike. Those are the MacLeod Clueless.
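As a toy illustration (my own encoding, not part of the MacLeod model itself), the eight absence/presence combinations of the three traits can be enumerated; only the 2-of-3 cases map to the archetypes described above:

```python
from itertools import product

# Toy encoding: each person is a triple (S, D, T) of booleans for
# subordinacy, dedication, and strategy. Only the 2-of-3 combinations
# correspond to MacLeod archetypes.
ARCHETYPES = {
    (False, True,  True):  "MacLeod Sociopath / Self-Executive (S- D+ T+)",
    (True,  False, True):  "MacLeod Loser / Team-Player (S+ D- T+)",
    (True,  True,  False): "MacLeod Clueless / Workhorse (S+ D+ T-)",
}

def classify(s, d, t):
    if (s, d, t) in ARCHETYPES:
        return ARCHETYPES[(s, d, t)]
    if sum((s, d, t)) <= 1:
        return "organizationally inert (0 or 1 traits)"
    return "all three traits: the paradoxical case (see the Protégé)"

for combo in product([True, False], repeat=3):
    print(combo, "->", classify(*combo))
```

The point of the enumeration is that the interesting behavior lives entirely in the three 2-of-3 cells; the rest of the cube is either inert or (in the S+D+T+ corner) paradoxical.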

Why is it rare, if not impossible, that an organization gets all three Iron Triangle traits from one individual? It’s a paradoxical arrangement. A person who is strategic will generally not be subordinate and dedicated. Subordinacy means that a person gives up opportunities for personal advancement in order to avoid conflict and appearances of impropriety, and in the interest of organizational harmony and upkeep. Dedication means that a person works much harder, and takes on more of the nasty tasks, than is needed to maintain employment and social acceptability. It’s not strategic to be both. You can earn your place with one or the other, and if you play both games you’re more likely to make mistakes, because they conflict. If you want to maximize comfort and minimize risk and pain, you prefer subordinacy. You become “a nice guy”, and promotions come slowly, if at all. That’s the MacLeod Loser route. If you want to maximize personal yield and advancement, or some higher moral objective (e.g. social justice) that might require fighting the organization, and you don’t mind pain and risk (even the risk of being expelled or fired from the organization) then you favor dedication. You become a MacLeod “Sociopath”. You shake things up, aren’t afraid to make enemies or have others question your judgment (and, even, integrity) and you work hard not because you have a (MacLeod Clueless) desire to clean up others’ messes, but because you want to prove yourself right about something. You’ll either be promoted, fired, or find external promotion inside of a couple years.

Simply put, to be subordinate and dedicated is not strategic, since there is no value in taking on two kinds of self-sacrifice when only one will suffice.

As an aside, the stereotype of the non-strategic, terminal middle manager is that he’s a milquetoast with no vision. That type of person certainly exists, but a pattern that is at least as common (and dangerous) is the non-strategic person with too many visions. He overcommits, tries to please everyone (subordinacy) and works really hard (dedication) on an array of stuff that never amounts to a coherent whole. He has “vision” but there is no coherence to it. He can’t focus. A person who is subordinate and dedicated is trying to play two conflicting strategies simultaneously (for one example of the conflict, a person who works too hard will lose social polish and tarnish any “nice guy” gains earned through subordinacy) and that is not strategic.

An archetype of a person who seems to have all three traits would be the Protégé. Is that a possible fourth MacLeod category? (Answer: I’d say no.) A person whose career is protected by someone powerful or influential will show all three Iron Triangle traits. The “to be subordinate and dedicated is not strategic” theorem seems broken, in that case. Is it? I would argue that it’s not. This is a case of conditional subordinacy. The protégé has a genuine personal interest in doing right by the mentor, and in keeping long-term, loyalty-based relationships (which do not exist, these days, with corporations) intact. I would argue that, while the protégé will subordinate, it isn’t subordinacy because there is a harmony of interests, and it’s conditional upon the loyalty going both ways. If the upper management’s buy-in to the protégé’s career wanes, he’ll either start to slack (MacLeod Loser) or seek his own interests (MacLeod Sociopath).

What’s wrong with the Iron Triangle?

Is the Iron Triangle a problem? After all, so long as organizations can get each of those desired traits from someone, is it a real loss that no individual will deliver all three? It’s not clear that the Iron Triangle poses a real threat to the organization; in fact, the MacLeod arrangement is highly stable. Moreover, people and organizations adapt somewhat fluidly. A strategic person can consciously choose to favor subordinacy or dedication over the other. A non-strategic person might become strategic with age and experience. No one’s a lifelong MacLeod Loser or Clueless or Sociopath; we evolve as our needs (and the needs of those around us) demand. MacLeod organizations are “iron” because they’re stable on their own terms, like white dwarfs left when a star’s best life is behind it.

So what’s wrong? Why is the MacLeod organization considered to be dysfunctional? Why do, for example, technology companies continually come out with new employee perks and development methodologies in order to create the perception that they’re not MacLeod organizations? Why have the cynical executives in technology removed themselves from official managerial roles and become “investors” at venture capital firms? Why is it desirable that a company move itself away from a MacLeod model, which seems to give most people what they want?

The MacLeod organization’s problem is that its short-term stability hides a long-term trend toward obsolescence. Each of these three archetypes has a fatal flaw. In assessing the fatal flaws of each, we’ll get an understanding of why some technology companies insist on methodologies like “Scrum”, and also on whether a different approach, like open allocation, can solve the Iron Triangle problem.

The flaw of the MacLeod Sociopaths, whom I give the more charitable name of Self-Executive, is that they’re a high-variance set of people. If you want reliable mediocrity, you won’t get it from them. In fact, expecting reliable mediocrity will alienate them. The name of “Sociopath” comes from the fact that some (and, typically, a disproportionate share of those who attain power) are unethical. I don’t actually think that Self-Executive employees are any more (or less) unethical than any other group, but when they are cheaters and liars, they do a lot of damage. The main issue that that set of people has, relative to organizations, is numerical. Organizations don’t need many creative, passionate, decision-making people. In the pyramid-shaped corporation, there are more people with the Self-Executive inclination (probably about 10% of the population) than there are roles involving executive work (with “executive” here including non-managerial technical or creative leadership). Thus, some will be promoted and others will be fired. The stakes are high, so you get in-fighting, politicking, and sometimes even cheating by some highly creative people who aren’t used to being told “No”.

Integrity, in terms of which Self-Executives (or “Sociopaths”) are promoted vs. fired, actually matters. A MacLeod Loser with low integrity might steal office supplies, but a MacLeod Sociopath with low integrity can kill the company. So promoting the “good Sociopaths” (the paradoxical nature of this term is one reason why I prefer Self-Executive) and firing the bad ones is important. Unfortunately, the bad kind tend to be a lot better at office politics.

As for the MacLeod Clueless, whom I’ve renamed to Workhorses, their flaw is that they tend to generate recurring commitments: meetings and processes and rules and additional tasks that seem like good ideas individually but, en masse, make the company incoherent and slow. With that unconditional work ethic, they’re willing to throw their weight behind whatever efforts their superiors consider important. Left to their own devices, they’ll come up with something that will usually provoke a “Huh?” reaction. In the lower ranks, their work ethic endears them to their bosses and they can get promoted, but eventually, that stops. Workhorses never get themselves respected enough to be given high-end work, and their willingness to complete low-end and unimportant work means they get used as garbage disposals by the organization. This isn’t a problem, until they get promoted too high and the Peter Principle kicks in. The issue isn’t just that they generate bad ideas, because everyone who has ideas comes up with some bad ones. (Self-Executives also come up with bad ideas, and quickly abandon them.) It’s that they never flush away bad ideas, so they have a tendency to create the recurring commitments that accumulate and slow down the entire organization. The MacLeod Losers lack the initiative to generate this crap, and Sociopaths avoid it because recurring commitments are career-draining, dead-end pits of suck. It’s the well-meaning incompetents who build such messes up. Eventually, the corporation reaches a level of stagnation at which Sociopaths called “management consultants” are brought in to garbage-collect those recurring commitments.

What’s wrong with the MacLeod Losers? (I would rename them Team-Players.) It seems like a Team-Player should be a model employee. He won’t disobey a direct order. He won’t generate fruitless work. His main flaw is the lack of dedication. He wants to get along with others, not be the hardest-working. If you tell him that his work hours are 9 to 5, he’ll be out the door around 5:10. But what’s wrong with that? MacLeod Losers are actually pretty efficient and productive. That’s part of their being strategic. Efficiently doing what little work they’re directly told to do gives them more time to surf the web, play with Nerf guns, or jockey for a “cool” in-group status that, while it doesn’t correlate to the kinds of status (such as salary and title) that actually matter, gives them a sense of comfort and importance (“I’m the Halloween Party Guy”). Team-Players work efficiently and, while they lack initiative, they’re far from being slackers, since their goal is to maximize comfort. True minimum-effort playing (that is, doing just enough work not to get fired) is actually pretty uncomfortable. You’re just on the bubble, people don’t like you, and every time your manager or expectations change, you have to watch your back. So they don’t go down that far, in effort level. They modulate their work output to the Socially Acceptable Mediocre Effort (SAME).

Companies tend to form effort bands based on people’s level of work. The negatively productive are in the FUGLY (“Fail Up or Get Lost Yesterday”) band; they’re either already on PIPs, or they have personal connections that’ll lift them no matter what they do. Next is MEME (“Minimum Exertion to Maintain Employment”) and then SAME (“Socially Acceptable Mediocre Effort”) and AWT (“Actually Works There”) followed by HSTG (“Holy Shit, This Guy/Gal”), the last of these being the overperforming level at which a person becomes more likely to be fired (because there are more opportunities for political failure or social embarrassment). In most organizations, the MEME is about 3 hours per week of actual work and SAME is around 10 to 15. The Workhorses/Clueless tend toward AWT and HSTG, and the Self-Executives/Sociopaths will play the full spectrum depending on what they are trying to do. But the Team-Players/Losers are optimizing for comfort. They want to be well-liked and not have their job duties change, and to keep management off their backs. Their avoidance of recurring commitments, over-exertion, and embarrassment to others keeps them out of the AWT/HSTG range, but their desire to be well-liked keeps them out of FUGLY/MEME territory. So they aim for the SAME. And what’s wrong with that? The problem is SAME Drift. The SAME starts out at an acceptable level, at which the company makes more off the Losers’ work than it pays them; but over time, the SAME tends to approach (and can reach) zero as the workload fluctuates. When the workload decreases, MacLeod Losers/Team-Players (being strategic) drop their output, because creating non-essential work for themselves is just a waste. When it increases, those who are willing to take additional work for self-advancement become MacLeod Sociopaths/Self-Executives, leaving the Loser tier (and the SAME) behind. Better incentives will bring people out of the Loser tier and therefore reduce the number of underperformers and their bulk effect, but they will generally not improve the SAME. That matters, because in any organization, most people will be exactly at the SAME.
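The ratchet dynamic of SAME Drift can be sketched in a few lines. This is my own toy model, not anything from the MacLeod essays: Team-Players track the workload downward (creating non-essential work is a waste) but never upward (those who take on more leave the Loser tier), so the SAME only falls:

```python
def same_drift(workloads, initial_same):
    """Toy ratchet model of SAME Drift.

    Team-Players match drops in workload (making extra work for
    themselves is a waste) but never match increases (anyone willing
    to take on more leaves the Loser tier), so the SAME only falls.
    """
    same = initial_same
    history = []
    for workload in workloads:
        same = min(same, workload)  # ratchets down, never up
        history.append(same)
    return history

# Hours of real work demanded per week, fluctuating over time.
print(same_drift([12, 15, 9, 14, 7, 16, 11, 5, 13], initial_same=12))
# -> [12, 12, 9, 9, 7, 7, 7, 5, 5]
```

Under this sketch, the SAME is a running minimum of the workload: each slack period lowers it permanently, which is why it drifts toward zero rather than oscillating.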

Culture! (and, uh, firing people)

After all of this, we get to emergent social behavior and game theory, and when humans are involved, we tend to call it culture.

The MacLeod tiers are most pronounced in the lawful-evil rank culture that values subordinacy over the other two Iron Triangle traits. Rank culture is self-consistent and internally stable, because it gives everyone what they want (comfort to Losers/Team-players, mission to Clueless/Workhorses, self-advancement to Sociopaths/Self-Executives). Still, the organization declines over time, due to SAME Drift. If the SAME drops too far, the company will be weighed down by cynical underperformers. By that point, it usually has underperforming and incompetent managers (who protect the incapable below them) as well, and that goes many levels up the chain. It’s ugly, and the most common thing that seems to shock a company out of a low-SAME funk is a serious layoff or restructuring. (In the workplace, promotions and firings are the most important aspects of “the culture”. The rest is distraction.)

Strategically, layoffs are hard to get right. A big layoff provides a shock (that can be desirable or not) but the company can recover, if there’s a well-communicated strategic reason for the change. A series of small layoffs destroys morale; people realize that their jobs are tentative and stop caring. It’s always better to do one big one, but this requires (for the executives deciding whom to lay off) compiling a large amount of knowledge without tipping anyone off to what’s about to happen. The biggest problem with layoffs, as practiced, is that the proper way to do them is to cut complexity as well as people. If you cut people but try to perform all the same operations, you’re just going to burn out an already-shaken team. However, companies usually don’t want to cut complexity because, for every inefficient process or unnecessary recurring commitment they have, there’s usually someone who likes it being there. Also, cutting complexity often means that good people (whose work areas happen to be in unprofitable departments) are let go, and no one likes that. The intended strategy often becomes, instead, to cut people first, in one efficient hack, then move them around in a way that cuts complexity. (The latter part rarely happens, leaving a skeleton crew to do just as much work as the larger set did.) This cut-then-shuffle approach is only remotely feasible if the lowest-performing people are the ones let go. However, it’s pretty much impossible, from the top, to identify low performers. Can a company reliably figure out who its low performers are without political corruption? No, not really. But it can generate a bunch of meaningless numbers (or a ranked ordering) and trick itself into believing in them.

This is where “stack ranking” (or “top grading”), originated by Jack Welch, beloved by McKinsey, and now used at most technology companies, comes into play. Stack ranking is a recurring layoff, dishonestly packaged as being performance-based. Microsoft used to give 7% of employees the dreaded “5” rating, and Google’s “2.8/2.9” (given to 3-5 percent, depending on economic circumstances) had the same effect, but there are many more examples of this. In some regimes, the same percentage on each team gets nicked. More often, it’s based on the macroscopic performance of each division, department and team. This leads to the Welch Effect: in a discretionary termination (i.e. not one forced by business events, such as a plant closing) the people most likely to be let go are junior members of underperforming teams. This is suboptimal because those are usually the people least responsible for that team’s underperformance. I can make 20 different arguments, each in 20 different ways, but the point is that stack-ranking doesn’t actually get rid of low performers. It gets rid of people pretty randomly, pisses a lot of people off, and eventually leads to a rash of political behavior. It certainly doesn’t do the proper job of a layoff, which is to reduce complexity. (It increases complexity, because of all the political behavior and improper favor-trading that goes on.) It does achieve one thing: it shatters the SAME. The MacLeod Losers are shocked either into working harder (Clueless) or getting better at politics (Sociopaths). That sense of equilibrium is gone.

The above is something management consultants love to do, because it creates toxic (and, from above, probably humorous) drama in a company where they don’t have to work. The MacLeod rank culture, in which following orders is enough to keep a person in place, disappears. It’s replaced by the chaotic-evil tough culture (named after Jeffrey Skilling’s proud admission, “we have a tough culture at Enron”). While the rank culture valued subordinacy over the other Iron Triangle traits, the tough culture values dedication. Managers aren’t fully trusted: some percentage of them is usually thrown out each year as well. It doesn’t matter much what you’re doing; you just need a reputation for being a hard worker and not a “piker”. Hours get long: 10- and 12-hour days become the norm. Busywork is tolerated because the deluge of unimportant but time-consuming tasks “weeds out the weak”. When someone seems to favor family or outside hobbies over the job, and stops working long hours, the knives come out, even if that person’s more productive and efficient than anyone else.

Typically, a company will vacillate between the lawful-evil rank and chaotic-evil tough cultures. Rank culture is the internally stable but slowly-declining arrangement. When its failings attract executive or shareholder attention, it moves toward a tough culture. Over time, as some people (the new MacLeod Losers) get tired of working so hard, and others (the new MacLeod Sociopaths) figure the new system out and realize there is personal benefit in offering protection against it, the organization slides back into a rank culture. The ones who can win control over the harsh, high-stakes performance appraisal of the tough culture (presented as an impartial meritocracy, but as prone to manipulation as a rank culture’s system) will typically be the new holders of rank within a couple of years.

Those two cultures are the ugly cultures I’ve called “evil” because they’re unpleasant to work in, inefficient, and generally deprive shareholders (of a performant organization) as well as non-executive employees (of a decent work-life and of career support). There are two good cultures. The chaotic good culture is the self-executive culture, which favors strategy over the other two Iron Triangle traits. I’ll get back to that. The lawful good culture is the guild culture, which favors a balance in the three Iron Triangle traits. Remember how I said that there was no way a person could have all three traits? Well, I lied, sort-of. I mentioned the protégé concept in order to discuss exactly this. In the idealized guild, all of the new entrants are protégés, and a balance of the three Iron Triangle traits (subordinacy being conditioned on the promise of a great career in the future) is possible. Rather than a pyramid, the shape of the professional structure is an obelisk. The grandmasters and masters tutor the journeymen and apprentices. You don’t have “Sociopaths” and “Losers”; you just have the experienced lifting up the neophytes. Because most or all of the apprentices and journeymen will be promoted, they don’t need to play against the organization in order to have careers, as they would in a pyramidal structure. It’s a nice idea, and it actually works. The problem is that it typically can’t grow very fast. The guild culture is brittle against rapid expansion or contraction, and seems not to handle change or chaos very well in general. It also devolves into mean-spirited behavior (see: “big law” and academia) when the guild system’s no longer supported by economic conditions, and when the lack of prospects for the up-and-coming leads to generational conflict. Then it collapses into a tough culture due to internal scarcity, followed by a rank culture. I don’t think we’re going to see new guild cultures in the 21st century. In fact, we might see some more of the existing ones fall apart.

We’re left with one culture that might work: the self-executive culture. Valve’s open allocation is an example of this. Not only does it encourage (rather than punish) self-executive behavior; it seems to mandate it. Just taking orders isn’t a viable career strategy, because it’s not clear who has the authority to give orders. People can’t use “landed on a bad project” to justify mediocrity, because they picked their projects. I’ve written a ton about this already. It really is the only non-imbecilic way to build a software company.

Breaking the Iron Triangle?

We’re now ready to discuss software methodologies.

“Agile” is an attempt to break the “waterfall model”. While there isn’t a guaranteed equivalence, “waterfall” often accompanies a rank culture, because in a rank culture the decisions are made from on high and flow downhill, rarely being opposed or questioned. How do people develop software in a rank culture? By following orders. A few higher-ups make the important decisions, and the halfway-checked-out MacLeod Losers implement. The result is low-quality software and sluggish response to change. In many industries, the creative ossification that follows a rank culture will take decades to slow it down to an unacceptable level. In software, that happens much faster, because programmers tend either to be engaged or useless. So this tendency of rank cultures toward underperformance is more of an emergency. The wrecking ball used to crack a rank culture is often given the name of “Agile”.

There’s good and bad in Agile, whose stated purpose is to remove the impediments and communication breakdowns that rank-culture/”waterfall” development creates (and that become excuses for slacking). Scrum, to take an example, actually forces the “product owner” to prioritize tasks, which does increase coherency. My problem with “Agile” is that it doesn’t go far enough. Typically, it’s still business-driven engineering, which means that the major problem (the passengers flying the plane) hasn’t been fixed. Saying, “We should still do business-driven programming, but let’s launch something every 2 weeks” is like saying “I want to get shot in the head, but by a buxom blonde who wears an eye patch.” Personally, I’d rather just not get shot in the head. If you’re doing business-driven engineering, then Scrum is very likely to be an improvement; but the most talented programmers would much rather work in an engineer-driven firm.

“Engineer-driven” doesn’t necessarily mean open allocation. Google is engineer-driven (and that explains much of its success) but uses closed allocation. It’s a step in the right direction, but not (I would argue) enough. Now, I’ll admit that it’s quite possible that many “Agile” methodologies are compatible with (a) engineer-driven development and even (b) open allocation. I just haven’t seen it play out that way. When “story points” and “iteration planning” and all that other heavyweight stuff comes out, it just becomes a new structure for routing tickets (generated by the business) to engineers who are presumed to be interchangeable. If there are malefactors watching, it can also be abused for performance appraisal and professional extortion, and they will become the new holders of rank.

Well-intentioned engineers and middle managers often like Agile because it shakes up the “waterfall” model of the rank culture. At least in theory, the team “comes together” (in the military sense of the phrase) and becomes a primary driver, increasing (at least) the autonomy (self-executivity?) of the group. The problem is that malefactors (true sociopaths) also like it. They can use Agile’s machinery to create a new tough culture. That’s bad for everyone, because tough cultures settle into worse rank cultures than the ones they emerged from. Of the four workplace cultures, tough cultures are the most dysfunctional, because the political behavior they incentivize creates complexity (often long-lasting) while the organization is losing (firing) people. Rank cultures generate slacking and ambivalent zombie-shuffling, but tough cultures encourage land-mine-setting and in-fighting.

I would not go so far as to say that “Agile” methodologies will create a tough culture. There are good ideas in the “Agile” movement, at least relative to the sclerotic dysfunction that inexorably follows from business-driven engineering. My argument is that those ideas can be used to that end. That doesn’t make Scrum and Kanban and XP necessarily bad, but I would call them dangerous. As soon as fucking “story points” pop up in a performance review, get rid of them or your organization will soon die.


What makes the self-executive culture so much better than the others? After all, might its rarity suggest that it’s not sustainable? Well, I don’t know enough to vouch for open allocation outside of software. I’m obviously not an authority on how to run hospitals or overseas ink factories. I’ll stick to what I know, and that’s software engineering. Software is convex work, meaning that the difference between nonperformance and mediocrity is smaller than that between mediocrity and excellence (“10x” performance). Not all work is like that. For many tasks, mediocrity is just fine and excellence is not much better, but nonperformance is catastrophic. That’s called concave work, because the input-output curve is concave down.

Concave work is easier to manage. Why? Because a manager wants to (a) maximize average return, but (b) is subject to limits on risk. With concave work, the region of lowest risk (lowest first derivative) is at the high-performance plateau. Thus, removing risks will enhance performance. With convex work, the opposite is true. The low-risk plateau is at zero. High performance and risk are, in that case, positively correlated. You want people to take as much creative risk as they can afford. You often need it if you want to remain competitive.
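The risk-return relationship above can be made concrete with a toy simulation (this is my own illustration, not from the original argument): model a worker's outcome as a random draw whose spread grows with the amount of creative risk taken, where 1.0 represents mediocrity, then compare the expected payoff under a concave payoff curve (excellence barely beats mediocrity) and a convex one (excellence is worth far more). The specific payoff functions and parameters are arbitrary choices for the sketch.

```python
import random

def expected_payoff(payoff, risk, trials=100_000):
    """Mean payoff when outcomes are spread around mediocrity (1.0) by `risk`."""
    rng = random.Random(0)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(trials):
        outcome = max(0.0, rng.gauss(1.0, risk))  # can't perform below zero
        total += payoff(outcome)
    return total / trials

concave = lambda x: min(x, 1.2)   # excellence capped: barely beats mediocrity
convex = lambda x: x ** 2         # excellence pays off disproportionately

for risk in (0.1, 0.5, 1.0):
    print(f"risk={risk}: concave={expected_payoff(concave, risk):.3f}, "
          f"convex={expected_payoff(convex, risk):.3f}")
```

Under the concave curve, raising risk lowers the expected payoff (the downside hurts while the upside is capped), so a risk-limiting manager also maximizes return. Under the convex curve, raising risk raises the expected payoff, so risk and performance are positively correlated, just as the paragraph argues.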

In a self-executive workplace, where people are trusted (and, indeed, required) to manage their own direction in the company, is the Iron Triangle broken? Well, let’s look at each of the three Iron Triangle traits. Self-executive cultures excel at strategy because they distribute that responsibility (for deciding what is worth working on) to all workers, rather than reserving important decisions for a sheltered few (who are showered in information, almost all of it false, from the rest of the organization). They certainly inspire dedication, as people throw passion and creativity into the process of proving (or refuting) their ideas. Those two traits are covered. What of the third, subordinacy? Is open allocation, designed around the principle of eliminating needless subordination, going to succeed under that lens?

It doesn’t matter. Subordinacy is about how a person resolves conflicts of interest between herself and the company that emerge when the company plays against her personal or career needs. In a self-executive firm, that doesn’t happen. The conflict never exists.

Doing great work ought to be a “double bottom line”, multilateral victory: it helps the company and the individual to do great things. Self-executive cultures don’t quibble with the question of how some unnecessary conflict of interest is handled by a person, which is what subordinacy is about. They render it irrelevant, by trusting people to manage their own careers and creative energies. This removes the focus on subordinacy and redirects it to something far more important, and something that can only be assessed in a person who’s given some real freedom.

What is that more important “something”? Integrity.

Open allocation doesn’t really “break” the Iron Triangle. It renders it meaningless, and that’s good enough.

Cheap votes: political degradation in government, business, and venture capital.

I’ve written a lot about how people in the mainstream business culture externalize costs in order to improve their personal careers and reputations, and the natural disconnect this creates between them and technologists, who want to get rich by creating new value, and not by finding increasingly clever ways to slough costs to other people. What I haven’t written as much about is how these private-sector social climbers, who present themselves as entrepreneurs but have more in common with Soviet bureaucrats, managed to gain their power. How exactly do these characters establish themselves as leaders? The core concept one needs to understand is one that appears consistently in politics, economics, online gaming, and social relationships: cheap votes.

Why is vote-selling illegal?

First, a question: should it be illegal to buy and sell votes? Some might find it unreasonable that this transaction is illegal; others might be surprised to know that it wasn’t always against the law, even if it seems like the sort of thing that should be. Society generally allows the transfer of one kind of power into another, so why should individual electoral power be considered sacred? On theory alone, it’s hard to make the case that it should be. 

I’ll attempt to answer that. The first thing that must be noted is that vote-buying matters. It increases the statistical power of the bought votes, to the detriment of the rest of the electorate. On paper, one vote is one vote. However, the variance contribution (or proportion of effect) of a voting bloc grows with the square of its size. In that way, the power of a 100-person, perfectly correlated (i.e. no defections) voting bloc is 10,000 times that of an individual. 

Let’s give a concrete example. Let’s say that the payoff of a gamble is based on 101 coins, 100 white and one black. The payoff is determined by the coins that land heads, with each white coin worth $1 and the black coin worth $100. The total range of payoffs is $0 to $200, and the black coin will, obviously, contribute $100 of that. So does the black coin have “half of” the influence over the payoff? Not quite; it has more. The white coins, as a group, will almost always contribute between $30 and $70, and between $40 and $60 ninety-five percent of the time. It’s a bell curve. What this means is that whether a round will have a good payoff depends, in practice, almost entirely on the black coin. If it’s heads, you’ll almost never see less than $130. If it’s tails, you’ll rarely see more than $70. The white coins matter, but not nearly as much, because many of the heads and tails cancel each other out.

Both the white and black coins have the same mean contribution to the payoff: $50. However, the variance of the single black coin is much higher: 2500 (or a standard deviation of $50). The white coins, all together, have a variance of 25, or a standard deviation of $5. Since variance is (under many conditions) the best measure of relative influence, one could argue that the black coin has 100 times the mathematical influence of all the white coins added together, and 10,000 times the influence of an individual white coin.
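As a sanity check on these numbers, a quick Monte Carlo over the 101-coin gamble (my own sketch, not part of the original text) reproduces the variance figures: the 100 independent white coins together have a variance of about 25, while the single $100 black coin has a variance of about 2500.

```python
import random

def coin_gamble_variances(trials=20_000, seed=1):
    """Estimate the payoff variance of the white-coin group vs. the black coin."""
    rng = random.Random(seed)
    white_totals, black_totals = [], []
    for _ in range(trials):
        white = sum(1 for _ in range(100) if rng.random() < 0.5)  # $1 per white head
        black = 100 if rng.random() < 0.5 else 0                  # $100 if black is heads
        white_totals.append(white)
        black_totals.append(black)
    def variance(xs):
        mean = sum(xs) / len(xs)
        return sum((x - mean) ** 2 for x in xs) / len(xs)
    return variance(white_totals), variance(black_totals)

white_var, black_var = coin_gamble_variances()
print(white_var, black_var)  # ≈ 25 and ≈ 2500: a 100x difference in influence
```

The exact theoretical values follow from the binomial: each fair coin contributes p(1-p) = 0.25 of its squared dollar value, so the white group has 100 × 0.25 × $1² = 25 and the black coin has 0.25 × $100² = 2500. A perfectly correlated bloc behaves like one big coin, which is why its variance grows with the square of its size.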

These simplifications break down in some cases, especially around winner-take-all elections. For example, if two factions are inflexibly opposed (because the people in them benefit or suffer personally, or think they do, based on the result of the election) and each has 45% of the vote, then the people in the remaining 10% (“spoilers”) have significantly more power, especially if something can bring them to congeal into a bloc of their own. That is a commonly-cited case in which individual, generally indifferent “swing” voters gain power. Does this contradict my claim about the disproportionate power of voting blocs? Not really. In this scenario, they have disproportionate decisive effect, but their power is over a binary decision that was already set up by the movement of the other 90%. 

Moreover, it’s improbable that the people in that 10 percent would form a bloc of their own. What prevents this? Indifference. Apathy. They often don’t really care either way about the matter being voted on. They’d probably sell their votes for a hundred dollars each. 

In quite a large number of matters, the specific details are too boring for most people to care about, even if those issues are extremely important. They’d much rather defer to the experts, throw their power to someone else, and get back to their arguments about the colors of bikesheds. Their votes are cheap and, if it’s legal, people will gain power or wealth by bundling those cheap votes together and selling the blocs.

So why is vote-selling illegal? It causes democracy to degenerate (enough that, as we’ll see, many organizations eschew democracy altogether). The voters who have the most interest in the outcome, and the most knowledge, will be more inclined to vote as individuals. Though they will correlate and may fall into loose clusters (e.g., “conservatives”, “liberals”) this will tend to be emergent rather than by intent. On the other hand, the blunt power of an inflexible voting bloc will be attained by… the bought votes, the cheapest votes, the “fuck it, whoever pays me” set. The voting process ceases to reflect the general will (in Rousseau’s sense of the concept) of the people, as power is transferred to those who can package and sell cheap votes– and those who buy them.

Real-life examples

Official buying and selling of votes is illegal, but indirect forms of it are both legal and not uncommon. For example, over ninety percent of voters in a typical election will give their vote, automatically, to the candidate of one of two major political parties. These candidates are usually chosen, at this point in history, through legitimate electoral means: the party’s primary. But what about the stages before that, as incumbents in other offices issue endorsements and campaign checks are cut?

Effectively, the purpose of these parties is to assume that cheap-vote congealment (and bloc formation) has already happened, tell the populace that it’s down to two remaining candidates, and make the voters feel they have a choice between two people who are often very similar in economic (in the U.S., right-of-center) and social (moderately authoritarian) policies while differing on superficial cultural grounds (related to religion in a way that is regional and does not generalize uniformly across the whole country). The political parties, in a way, are the most legitimate cheap-vote aggregators. They know that most Americans care more about the bike-shed difference between Democratic corporate crooks and Republican corporate crooks– the spectator-sport conflict between Springfield and Shelbyville– than the nuances of political debate and the merits of the issues.

The vote-buying process is more brazen in the media. While expensive and thorough campaigns can’t turn an unlikeable person into a winner, they can have a large effect in “swing states” or close matches. There are some people who’ll be swayed by the often juvenile political commercials that pop up in the month before an election, and those are some of the cheapest voters. The electioneer need not even buy their vote directly; it has already been sold to the television station or radio show (a highly powerful cheap vote aggregator) to whom they’ve lent their agency. 

This is one of the reasons I don’t find low voter turnout to be distressing or even undesirable, at least not on first principles. If low voter turnout is an artifact of disenfranchisement, then it’s bad. If poorer people can’t get to the polls because their bosses won’t let them have the time off work (and Election Day ought to be a day off from work, but that’s another debate) then that’s quite wrong. On the other hand, if uninformed people don’t show up, that’s fine. I don’t get involved in civic activities unless I know what and who I’m voting for; otherwise, I’d be, at best, adding statistical noise and, at worst, unwittingly giving power to the cheap-vote sellers and buyers who’ve put their preferred brand into my head.

All this said, cheap-vote dynamics aren’t limited to politics. In fact, they’re much more common in economics. Just look at advertising. People vote with their dollars on what products should be made and what businesses should continue. A market, just like an election, is a preference aggregator. The problem? No one knows all of the contenders, or could possibly know. As opposed to a handful of political candidates, there might be twenty or two hundred vendors of a product. A great many buyers will choose not on product quality or personal affinity but on reputation (brand) alone. Advertising has a minimal effect on the most knowledgeable (Gladwell’s “Mavens”) but it’s extremely powerful at bringing in the cheapest votes, the on-the-fence people who’ll go with what seems like the least risky choice.

Venture capital

Maybe it’s predictable that I would relate this to technology, but it’s so applicable here that I can’t leave the obvious facts of the issue unexplored. 

Selection of organizational leadership almost always has a cheap-vote issue, because elections with large numbers of indistinguishable alternatives are where cheap votes have the most power. (A yes/no decision that affects everyone is where cheap votes will have the least power.) Most people see the contests as wholly external, because all the credible candidates are (from the individual’s point of view) just “not me”. Or, more accurately, if no one they know is in contention, they’re not going to be invested in the matter of which bozo gets the tallest stilts. As organizations get large, the effect of this apathy becomes dominant. 

Therefore, it’s rare that the selection of people will be uncorrupted by cheap-vote dynamics, no matter how democratic the election or aggregation process may be. While some people are great leaders and others are terrible, it’s nearly impossible to reliably determine who will be which kind until after they have led (and, sometimes, it’s not clear for some time afterward). If asked to choose leaders among 20 candidates in a group of 10,000, you’ll see nuisance (by “nuisance”, I mean, uncorrelated to policy) variables like physical attractiveness, charisma, and even order of presentation (making the person who designs the ballots a potential cheap-vote vendor) have a disproportionate effect. This is an issue in the public sector, but a much more egregious one in the private sector, given the complete lack of transparency into the “leadership” class, in addition to the managerial power relationships and the general lack of concern about organizational corruption.

Corporations (for better or worse, and I’d argue, for the worse) eliminate this effect by simply depriving employees of the ability to choose leaders at all: supervisors and middle managers and executives are chosen from the top down, based on loyalty to those above, and the workers are assumed to be voting for the pre-selected by continuing to work there. The corporation cheapens the worker’s vote, in effect, by reducing its value to zero. “You were going to sell your vote anyway, so let’s just say that the election happened this way.” Unless they can organize, the workers are complicit in the cheapening of their votes if they continue to work for such companies and, sadly, quite a large number do. 

There are people, of course, who are energetic and creative and naturally anti-authoritarian. Such people dislike an environment where their votes have already been cheapened, bought for a pittance, and sold to the one-party system that calls itself corporate management. The argument often made about them is that they should “just do a startup”, as if the one-party system of Silicon Valley’s venture capital elite would be preferable to the one-party system of a company’s management. By and large, it’s not an improvement.

In fact, the Silicon Valley system is worse in quite a large number of ways. A corporation can fire someone, but generally won’t continue to damage that person’s reputation, for fear of a lawsuit, negative publicity, and plummeting internal morale. This means that a person who rejects, or is rejected by, one company’s one-party system can, at the least, transition over to another company that might have a better one party in charge. There is, although not to the degree that there should be, some competition among corporate managers, and that generally keeps most of them from being truly awful. On the other hand, venture capitalists, with their culture of note-sharing, collusion, and market manipulation (one which, if it were applied to publicly-traded stocks instead of unregulated private equities, would result in stiff prison sentences for all of them; alas, lawmakers don’t much care what happens to the careers of middle-class 22-year-old programmers) frequently do damage the careers of those who oppose the interests of the group. Most of the VC-era “innovations” in corporate structure and culture– stack-ranking, the intentional encouragement of a machismo-driven and exclusionary culture, fast firing, horrendous health benefits because “we’re a startup”– have been for the worse. The Valley hasn’t “disrupted” the corporate establishment. It’s reinvented it in a much more onerous way.

So how do the bastards in charge get away with this? The Silicon Valley elite are, mostly, the discards of Wall Street. They weren’t successful in their original home (the corporate mainstream) and they aren’t nearly as smart as the nerds they manage, so what gives them their power? Who gives up the power that they win? Once again, it’s a cheap vote dynamic in place. 

Venture capitalists are intermediaries between passive capital seeking above-normal returns and top technical talent. There’s a lot of passive capital out there coming from people who want to participate, financially, in new technology development. Likewise, there are a lot of smart people with great ideas but no personal ownership of the resources to implement them. The passive capitalists recognize that they don’t have the ability to judge top talent from pretenders (and neither do the narcissistic careerists on Sand Hill Road to whom they entrust their assets, but that’s another discussion) and so they sell their votes. Venture capitalists are the ones who buy those votes and package them into statistically powerful blocs. Once this is done, the decision of a single venture capitalist (bolstered by others in his industry who’ll follow his lead) determines which contender in a new industry will get the most press coverage, the most expensive programming talent, and sufficient evidence of “virality” to justify the next round of funding.

As programmers, we (sadly) can’t do much to prevent pension funds and municipalities from erroneously trusting these Bay Area investor celebrities who couldn’t tell talent from their own asshole. I’ve said enough, to this point, about that side, and the cheap-vote buying that happens between passive capitalists and the high priests who are supposed to know better. In theory, the poor returns delivered by those agents ought to result in their eventual downfall. After all, shouldn’t people lose faith in the Sand Hill Road elite after more than a decade of mediocre returns? This seems not to be happening, largely because of the long feedback cycle and high variance intrinsic to the venture capital game. Market dynamics work in a more regularized setting, but when there is that much noise and delay in the system, capable direct judgment of talent (before the results come in) is the only reliable way to get decent performance. Unfortunately, the only people with that capability are us, programmers, and we’re near the bottom of the social hierarchy. Isn’t that odd?

So let’s talk about what we can do. Preventing the flow of capital from passive investors into careerist narcissists at VC firms who fund their underqualified friends is probably not within our power at the present time. It’s nearly impossible to prevent someone with a completely different set of interests from cheapening his or her vote. Do so aggressively, and the person is likely to vote poorly (that is, against the common interest and often his own) just to spite the regulator attempting to prevent it, just as a teenage girl might date low-quality men to offend her parents. So let’s talk about our votes.

VC-funded companies (invariably calling themselves “startups”) don’t pay very well, and the equity disbursements typically range from the trivial down to the outright insulting. Yet young engineers flock to them, underestimating the social distance that a subordinate, engineer-level role will give them from the VC king-makers. They work at these companies because they think they’ll be getting personal introductions from the CEO to investors, and join that circuit as equals; in reality, that rarely happens unless contractually specified. They strengthen the feudal reputation economy that the VCs have created by giving their own power away based on brand (e.g., TechCrunch coverage, name-brand investors). 

When young people work for these VC darlings under such rotten terms, they’re devaluing their votes. When they show unreasonable (and historically refuted) trust in corporate management by refusing to organize their labor, they are (likewise) devaluing not only their political pull, but the credibility and leverage of their profession. That’s something we, as a group, can change. We probably can’t fix the way startups are financed in the next year; maybe, if we play our local politics right and enhance our own status and credibility, we’ll have that power in ten. We can start to clean up our own backyards, and we should. 

Sadly, talent does need access to capital, more than capital needs talent. The pressing needs of the day have given capital, for over a century, that basic supremacy over labor: “you need to eat, I can wait.” But does talent need access to a specific pool of capital controlled by narcissists living in a few hundred square miles of California office park? No, it doesn’t. We need money, but we don’t need them. On the other hand, if the passive investors who provide the capital that fuels their careers even begin to pay the slightest bit of attention, the VCs will need us. After all, it’s the immense productive capacity of what we do (not what VCs do) that gives venture capital the “sexiness” that excuses its decade-plus of mediocrity. Their ability to coast, and to fund suboptimal founders, rests on the fact that no one is paying attention to whether they do their jobs well, the assumption being that we (technologists) will stay on their manor, passively keeping our heads down and saying, “politics is someone else’s job; I just want to solve hard problems.” As long as we live on the VCs’ terrain, there is no way for passive investors to get to us except through Sand Hill Road. But there is no reason for that to continue. We have the power to spot, and to vote against, bad companies (and terrible products, and demoralizing corporate cultures) as and before they form. And we ought to be using it. As I’ve said before, we as software engineers and technologists have to break out of our comfort zones and (dare I say it?) get political.