Gervais / MacLeod 21: Why Does Work Suck?

This is the penultimate “breather” post, insofar as it doesn’t present much new material, but summarizes much of what’s in the previous 20 essays. It’s now time to tie everything together and Solve It. This series has reached enough bulk that such an endeavor requires two posts: one to tie it all together (Part 21) and one to discuss solutions (Part 22). Let me try to put the highlights from everything I’ve covered into a coherent whole. That may prove hard to do; I might not succeed. But I will try.

This will be long and will repeat a lot of previous material. There are two reasons for that. First, I intend this essay to be a summary of the highlights of where we’ve been. Second, I want it to stand alone as a “survey course” of the previous 20 essays, so that people can understand the highlights (and, thus, understand what I propose in the conclusion) even if they haven’t read all the prior material.

If I were to restart this series of posts (I never intended, originally, for it to reach 22 essays and 92+ kilowords), I would rename it Why Does Work Suck? In fact, if I turn this stuff into a book, that’s probably what I’ll name it. I never allowed myself to answer, “because it’s work, duh.” We’re biologically programmed to enjoy working. In fact, most of the things people do in their free time (growing produce, unpaid writing, open-source programming) involve more actual work than their paid jobs. Work is a human need.

How Does Work Suck?

There are a few problems with Work that make it almost unbearable, driving it into such a negative state that people do it only for lack of other options.

  • Work Sucks because it is inefficient. This is what makes investors and bosses angry. Getting returns on capital requires either managing it, which is time-consuming, or hiring a manager, which means one has to place a lot of trust in that person. Work is also inefficient for average employees (MacLeod Losers), which is why wages are so low.
  • Work Sucks because bad people end up in charge. Whether most of them are legitimately morally bad is open to debate, but they’re certainly a ruthless and improperly balanced set of people (MacLeod Sociopaths) who can be trusted to enforce corporate statism. Over time, this produces a leadership caste that is great at maintaining power internally but incapable of driving the company to external success.
  • Work Sucks because of a lack of trust. That’s true on all sides. People are spending 8+ hours per day on high-stakes social gambling while surrounded by people they distrust, and who distrust them back.
  • Work Sucks because so much of what’s to be done is unrewarding and pointless. People are glad to do work that’s interesting to them or advances their knowledge, or work that’s essential to the business and carries career benefits, but there’s a lot of Fourth Quadrant work for which neither applies. This nonsensical junk work is generated by strategically blind (MacLeod Clueless) middle managers and executed by rationally disengaged peons (MacLeod Losers) who find it easier to subordinate themselves than to question the obviously bad planning and direction.

All of these, in truth, are the same problem. The lack of trust creates the inefficiencies that require moral flexibility (convex deception) for a person to overcome. In a trust-sparse environment, the people who gain power are the least deserving of trust: the most successful liars. It’s also the lack of trust that generates the unrewarding work. Employees are subjected, in most companies, to a years-long dues-paying period which is mostly evaluative– to see how each handles unpleasant make-work and to pick out the “team players”. The “job” exists to give the employer an out-of-the-money call option on legitimately important work, should it need some done. It’s a devastatingly bad system, so why does it hold up? Because, for two hundred years, it actually worked quite well. Explaining that requires delving into mathematics, so here we go.

Love the Logistic

The most important concept here is the S-shaped logistic function, which looks like this (courtesy of Wolfram Alpha):

The general form of such a function L(x; A, B, C) is:

L(x; A, B, C) = A / (1 + e^(-B * (x - C)))

where A represents the upper asymptote (“maximum potential”), B represents the rapidity of the change, and C is a horizontal offset (“difficulty”) representing the x-coordinate of the inflection point. The graph above is for L(x; 1, 1, 0).
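
For readers who want to play with these curves, here’s a minimal sketch of the function in Python (my own translation of the parametrization above; the original post only shows graphs):

```python
import math

def L(x, A=1.0, B=1.0, C=0.0):
    """Logistic curve: A is the upper asymptote ("maximum potential"),
    B the rapidity of the transition, and C the x-coordinate of the
    inflection point ("difficulty")."""
    return A / (1.0 + math.exp(-B * (x - C)))

print(L(0.0))  # 0.5: the inflection point of L(x; 1, 1, 0) sits at (0, A/2)
```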

Logistic functions are how economists generally model input-output relationships, such as the relationship between wages and productivity. They’re surprisingly useful because they can capture a wide variety of mathematical phenomena, such as:

  • Linear relationships: as B -> 0, the relationship becomes locally linear around the inflection point, (C, A/2).
  • Discrete 0/1 relationships: as B -> infinity, the function approaches a “step function” whose value is A for x > C and 0 for x < C.
  • Exponential (accelerating) growth: If B > 0, L(x; A, B, C) is very close to being exponential at the far left (x << C). (Convexity.)
  • Saturation: If B > 0, L(x; A, B, C) is approaching A with exponential decay at the far right (x >> C). (Concavity.)

Let’s keep inputs abstract but assume that we’re interested in some combination of skill, talent, effort, morale and knowledge, called x, with mean 0 and “typical values” between -1.0 and 1.0, meaning that we’re not especially interested in x = 10 because we don’t know how to get there. If C is large (e.g. C = 6) then we have an exponential function over all the values we care about: convexity over the entire window. Conversely, leftward C values (e.g. C = -6) give us concavity over the whole window.

Industrial work, over the past 200 years, has tended toward commoditization, meaning that (a) a yes/no quality standard exists, increasing B, and (b) it’s relatively easy for most properly set-up producers to meet it most of the time (with occasional error). The result is a curve that looks like this one, L(x; 10, 4.5, -0.7), which I’ll call a(x):

Variation, here, is mainly in incompetence. Another way to look at it is in terms of error rate. The excellent workers make almost no errors, the average ones achieve 95.8% of what is possible (a 4.2% error rate), the mediocre (x = -0.5) make roughly seven times as many mistakes (a 28.9% error rate), and the abysmal (x = -1.0) are unemployable, with an error rate well over 50%. This is what employment has looked like for the past two hundred years. Why? Because an industrial process is better modeled as a complex network of these functions, with outputs from one being inputs to another. The relationships of individual wage into morale, morale into performance, performance into productivity, individual productivity into firm productivity, and firm productivity into profitability can all be modeled as S-shaped curves. Within this convoluted network of “hidden nodes” that exists in the context of a sophisticated industrial operation, it’s generally held to be better to have a consistently high-performing (B high, C negative) node than a higher-performing but more variable one.
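
As a sanity check on those error rates, here’s the same computation in code, reusing the L function sketched earlier. (This parametrization gives 4.1% rather than the quoted 4.2% for the average worker; I take the difference to be rounding.)

```python
def a(x):
    # a(x) = L(x; 10, 4.5, -0.7): commoditized industrial work
    return L(x, A=10.0, B=4.5, C=-0.7)

for label, x in [("excellent", 1.0), ("average", 0.0),
                 ("mediocre", -0.5), ("abysmal", -1.0)]:
    pct = 100.0 * a(x) / 10.0  # percent of the attainable maximum, A = 10
    print(f"{label:9s} x = {x:+.1f}: {pct:5.1f}% yield, {100 - pct:4.1f}% error")
```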

One way to understand the B in the above equation is that it represents how reliably the same result is achieved, noting the convergence to a step function as B goes to infinity. In this light, we can understand mechanization. Middle grades of work rarely exist with machines. In the ideal, they either execute perfectly, or fail perfectly (and visibly, so one can repair them). Further refinements to this process are seen in the changeover from purely mechanical systems to electronic ones. It’s not always this way, even with software. There are nondeterministic computer behaviors that can produce intermittent bugs, but they’re rare and far from the ideal.

As I’ve discussed, if we can define perfect performance (i.e. we know what A, the error-free yield, looks like) then we can program a machine to achieve it. Concave work is being handed over to machines, with the convex tasks remaining available. With convexity, it’s rare that one knows what A and B are. On explored values, the graph just looks like this one, for L(x; 200, 2.0, 1.5), which I’ll call b(x):

It shows no signs of leveling off and, for all intents and purposes, it’s exponential. This is usually observed for creative work where a few major players (the “stars”) get outsized rewards in comparison to the average people.

Convexity Isn’t Fair

Let’s say that you have two employees, one of whom (Alice) is slightly above average (x = 0.1) and the other of whom (Bob) is exactly average (x = 0.0). You have the resources to provide 1.0 full point of training, and you can split it any way you choose (e.g. 0.35 points for Alice, and 0.65 points for Bob). Now, let’s say that you’re managing work modeled by the function L(x; 100, 2.0, -0.3), which is concave over the relevant range.

Let the x-axis represent the amount of training (0.0 to 1.0) given to Alice, with the remainder given to Bob. Here’s a graph of their individual productivity levels, with Alice in blue, Bob in purple, and their sum productivity in green.

If we zoom in to look at the sum curve, we see a maximum at x = 0.45, an interior solution where both get some training.

At x = 0.0 (full investment in Bob) Alice is producing 69.0 points and Bob’s producing 93.1, for a total of 162.1.

At x = 0.5 (even split of training) Alice is producing 85.8 points and Bob’s producing 83.2, for a total of 169.0.

At x = 1.0 (full investment in Alice) Alice is producing 94.3 points and Bob’s producing 64.6, for a total of 158.9.

The maximal point is x = 0.45, which means that Alice gets slightly less training because Bob is further behind and needs it more. Both end up producing 84.55 points, for a total of 169.1. After the training is disbursed, they’re at the same level of competence (0.55). This is a “share the wealth” interior optimum that justifies sharing the training.
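
Here’s a sketch of that search; the grid scan is mine, while the curve and the Alice/Bob setup are as described above.

```python
concave = lambda x: L(x, A=100.0, B=2.0, C=-0.3)

def total_output(s, curve):
    """Total productivity when Alice (x = 0.1) gets s points of training
    and Bob (x = 0.0) gets the remaining 1 - s."""
    return curve(0.1 + s) + curve(1.0 - s)

splits = [i / 100.0 for i in range(101)]
best = max(splits, key=lambda s: total_output(s, concave))
print(best, total_output(best, concave))  # 0.45, ~169.1: an interior optimum
```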

Let’s change to a convex world, with the function L(x; 320, 2.0, 1.1). Then, for the same problem, we get this graph (blue representing Alice’s productivity, purple representing Bob’s, and the green curve representing the sum):

Zooming in on the graph of sum productivity, we find that the “fair” solution (x = 0.45) is the worst!

At x = 0.0 (full investment in Bob) Alice is producing 38.1 points and Bob’s producing 144.1, for a total of 182.2.

At x = 0.5 (even split of training) Alice is producing 86.1 points and Bob’s producing 74.1, for a total of 160.2.

At x = 1.0 (full investment in Alice) Alice is producing 160.0 points and Bob’s producing 31.9, for a total of 191.9.

The maxima are at the edges. The best strategy is to give Alice all of the training, but giving all to Bob is better than splitting it evenly, which is about the worst of the options. This is a “starve the poor” optimum. It favors picking a winner and putting all the investment into one party. This is how celebrity economies work. Slight differences in ability lead to massive differences in investment and, ultimately, create a permanent class of winners. Here, choosing a winner is often more important than getting “the right one” with the most potential.
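
Pointing the same sketch at the convex curve shows the flip (reusing the helpers from the concave example):

```python
convex = lambda x: L(x, A=320.0, B=2.0, C=1.1)

best = max(splits, key=lambda s: total_output(s, convex))
worst = min(splits, key=lambda s: total_output(s, convex))
print(best, total_output(best, convex))    # 1.0, ~191.9: give Alice everything
print(worst, total_output(worst, convex))  # 0.45, ~159.8: the "fair" split loses
```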

Convexity pertains to decisions that don’t admit interior maxima: compromise solutions either don’t exist or make no sense. For example, choosing a business model for a new company is convex, because putting resources into multiple models would result in mediocre performance in all of them, thus failure. The rarity of “co-CEOs” suggests that choosing a leader is also a convex matter.

Convexity is hard to manage

In optimization, convex problems tend to be the easier ones, so the nomenclature here might seem strange. In fact, convexity in optimization is the exact opposite of convexity in labor. Optimization problems are usually framed in terms of minimization of some undesirable quantity like cost, financial risk, statistical error, or defect rate, with zero as the (usually unattainable) perfect state. In business, that corresponds to the assumption that an industrial apparatus has an idealized business model and process, with management’s goal being to drive execution error to zero.

What makes convex minimization easier is that, even in a high-dimensional landscape, one can converge to the optimal point (the global minimum) by starting from anywhere and iteratively stepping in the direction recommended by local features (usually, the first and second derivatives). It’s like finding the bottom point in a bowl. Non-convex optimizations are a lot harder because (a) there can be multiple local optima, which means that starting points matter, and (b) the local optima might be at the edges, which has its own undesirable properties (including, with people, unfairness). The amount of work required to guarantee finding the best solution is exponential in the number of dimensions. That’s why, for example, computers can’t algorithmically find the best business model for a “startup generator”. Even if it were a well-formed problem, the dimensionality would be high and the search problem (probably) intractable.
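
As a toy illustration (mine, not the essay’s): derivative-following on a one-dimensional convex “bowl” reaches the global minimum from any starting point.

```python
def grad_descent(df, x0, lr=0.1, steps=200):
    """Repeatedly step against the derivative df, starting from x0."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

# f(x) = (x - 3)^2 is convex, with derivative 2(x - 3); every start finds x = 3.
for start in (-100.0, 0.0, 50.0):
    print(grad_descent(lambda x: 2.0 * (x - 3.0), start))  # all print ~3.0
```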

Convex labor is analogous to non-convex optimization problems while management of concave labor is analogous to convex optimization. Sorry if this is confusing. There’s an important semantic difference to highlight here, though. With concave labor, there is some definition of perfect completion so that error (departure from that) can be defined and minimized with a known lower bound: 0. With convex labor, no one knows what the maximum value is, because the territory is unexplored and the “leveling off” of the logistic curve hasn’t been found yet. It’s natural, then, to frame that as a maximization problem without a known bound. With convex labor, you don’t know what the “zero-or-max” point is because no one knows how well one can perform.

Concave labor is the easy, nice case from a managerial perspective. While management doesn’t literally implement gradient descent, it tends to be able to self-correct when individual labor is concave (i.e. the optimization problem is convex). If Alice starts to pull ahead while Bob struggles, management will offer more training to Bob.

However, in the convex world, initial conditions matter. Consider the Alice-Bob problem above with the convex productivity curve, and the fact that splitting the training equitably is the worst possible solution. Management would ideally recognize Alice’s slight superiority and give her all the training, thus finding the optimal “edge case”. But what if Bob managed (convex dishonesty) to convince management that he was the slightly superior one, at, say, x = 0.2? Then Bob would get all the training, Alice would get none, and management would converge on a sub-optimal local maximum. That is the essence of corporate backstabbing, is it not? Management’s increasing awareness of convexity in intellectual work means that it will tend to double down on its investment in winners and toss away (fire) the losers. Thus, subordinates put considerable effort into creating the appearance of high potential, for the sake of driving management to a local maximum that, if not necessarily ideal for the company, benefits them. That’s what “multiple local optima” means, in practical terms.
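
To make that concrete, here’s a naive hill-climber (my own illustration) run on the convex Alice/Bob total from above. Started from Bob’s faked advantage, it settles on the inferior edge.

```python
def hill_climb(f, s, step=0.01):
    """Greedy local search on [0, 1]: move to the better neighbor until stuck."""
    while True:
        candidate = max((max(0.0, s - step), min(1.0, s + step)), key=f)
        if f(candidate) <= f(s):
            return s
        s = candidate

f = lambda s: total_output(s, convex)
print(hill_climb(f, 0.2))  # 0.0: everything to Bob, a ~182.2-point local maximum
print(hill_climb(f, 0.6))  # 1.0: everything to Alice, the ~191.9-point optimum
```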

The traditional three-tiered corporation has a firm distinction between executives and managers (the third tier being “workers”, who are treated as a landscape feature) and it pertains to this. Because business problems are never entirely concave and orderly, the local “hill climbing” is left to managers, while the convex problems (which, like choosing initial conditions, require non-local insight) such as selecting leaders and business models are left to executives.

Yet with everything concave being performed, or soon to be performed, by machines, we’re seeing convexity pop up everywhere. The question of which programming languages to learn is a convex decision that non-managerial software engineers have to make in their careers. Picking a specialty is likewise convex; indeed, convexity is why specializing has value. The most talented people today are becoming self-executive, meaning that they take responsibility for non-local matters that would otherwise be left to executives, including the direction of their own careers. This, however, leads to conflicts with authority.

Older managers often complain about Millennial self-executivity and call it an attitude of entitlement. Actually, it’s the opposite: disentitlement. When you’re entitled, you assume social contracts with other people and become angry when (from your perception) they don’t hold up their end. Millennials leave jobs, and furtively use slow periods to invest in their careers (e.g. in MOOCs) rather than asking for more work. That’s not an act of aggression or disillusion; it’s that they don’t believe the social contract ever existed. They’re not going to whine about a boss who doesn’t invest in their careers– that would be entitlement– because whining would do no good. They just leave. They weren’t owed anything, and they don’t owe anything. That’s disentitlement.

Convexity is bad for your job security

Here’s some scary news. When it comes to convex labor, most people shouldn’t be employed. First, let me show a concave input-output graph for worker productivity, assuming even distribution in worker ability from -1.0 to 1.0. Our model also assumes this ability statistic to be inflexible; there’s no training effect.

The blue line, at 82.44, represents the mean worker in the population. Why’s this important? It represents the expected productivity of a new hire off the street. If you’re at the median (x = 0.0), or even a bit below it, you are “above average”: it’s better to retain you than to bring someone in off the street. Let’s say that John is a 40th-percentile (x = -0.2) hire, which means that his productivity is 90. A random person hired off the street will be better than John 60% of the time. However, the upside is limited (10 points at most) and the downside (possibly 70 points) is immense, so, on average, it’s a terrible trade. It’s better to keep John (a known mediocre worker) on board than to replace him.
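
The essay doesn’t say which parameters generated this graph, but the a(x) shape rescaled to A = 100, i.e. L(x; 100, 4.5, -0.7), reproduces both the quoted mean of 82.44 and John’s productivity of 90, so here is the replacement trade computed under that assumption:

```python
concave_hire = lambda x: L(x, A=100.0, B=4.5, C=-0.7)

# Ability of a random street hire is uniform on [-1, 1]; average the curve.
n = 200001
xs = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
mean = sum(concave_hire(x) for x in xs) / n
john = concave_hire(-0.2)
print(round(mean, 2), round(john, 1))  # 82.44 and 90.5
print(round(mean - john, 1))          # -8.0: replacing John loses ~8 points
```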

With a convex example, we find the opposite to be true:
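
The essay doesn’t identify this curve either; reusing b(x) = L(x; 200, 2.0, 1.5) from above as a stand-in (and the grid from the previous sketch) gives the qualitative picture:

```python
convex_hire = lambda x: L(x, A=200.0, B=2.0, C=1.5)

mean = sum(convex_hire(x) for x in xs) / n
below = sum(1 for x in xs if convex_hire(x) < mean) / n
print(round(mean, 1))   # ~15.3: rare stars drag the mean far above the median
print(round(below, 2))  # ~0.63: most workers produce less than a random draw
```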

Here, we have an arrangement in which most people are below the mean, so we’d expect high turnover. Management, one expects, would be inclined to hire people on a “try out” basis with the intention of throwing most of them back on the street. An average or even good (x = 0.5) hire should be thrown out in order to “roll the dice” with a new hire who might be the next star. Is that how managers actually behave? No, because there are frictional and morale reasons not to fire 80% of your people, and because this model’s assumption that people are inflexibly set at a competence level is not entirely true for most jobs, and those where it is true (e.g. fashion modeling) make it easy for management to evaluate someone before a hire is made. In-house experience matters. That is, however, how venture capital, publishing and record labels work. Once you turn out a couple failures, with those being the norm, it might still be that you’re a high performer who’s been unlucky, but you’re judged inferior to a random new entrant (with more upside potential) and flushed out of the system.

In the real world, it’s not so severe. We don’t see 80% of people being fired, and the reason is that, for most jobs, learning matters. The above applies to work in which there’s no learning process, where each worker is inflexibly set at a certain, perfectly measurable, productivity level. That’s not how the world really works. In-born talent is one relevant input, but there are others, like skill, in-house experience, and education, that have defensive properties and protect a person’s job security. People can often get themselves above the mean with hard work.

Secondly, the model above assumes workers are paid equally, which is not the case for most convex work. In the convex model above, the star (x = 1.0) might command several times the salary of the average performer (x = 0.0), and he should. That compensation inequality actually creates job security for the rest of them. If the best people didn’t charge more for their work, then employers would be inclined to fire middling performers in search of a bargain.

This may be one of the reasons why there is such high turnover in the software industry. You can’t get a seasoned options trader for under $250,000 per year, but you can get excellent programmers (who are worth 5-10 times that amount, if given the right kind of work) for less than half of that. This is often individually justified (by the engineer) with an attitude of, “well, I don’t need to be paid millions; I care more about interesting work”. As an individual behavior, that’s fine, but it might be why so many software employers are so quick to toss engineers aside for dubious reasons. Once the manager concludes that the individual doesn’t have “star” potential, it’s worth it to throw out even a good engineer and try again for a shot at a bargain, considering the number of great engineers at mediocre salary levels.

One thing I’ve noticed in software (which is highly convex) is that there’s a cavalier attitude toward firing, and it’s almost certainly related to that “star economy” effect. What’s different is that software convexity has a lot of inputs other than personal ability– project/person fit, tool familiarity, team cohesion, and many factors so hard to detect that they feel like pure luck– so the “toss aside all but the best” strategy is severely defective, at least in a larger organization, which should instead be enabling people to find better-fitting projects; that makes a lot of sense amid convexity. That’s one of the reasons why I am so dogmatic about open allocation, at least in big companies.

Convexity is risky

Job insecurity amid convexity is an obvious problem, but not a damning one. If there’s a fixed demand for widgets, a competitor who can produce 10 times more of them is terrifying, because it will crash prices and put everyone else out of business (and, then, become a monopolist and raise them). Call that “red ocean convexity”, where the winners put the losers out of business because a “10X” performer takes 9X from someone else. However, if demand is limitless, then the presence of superior players isn’t always a bad thing. A movie star making $3 million isn’t ruined by one making $40 million. The arts are an example of “blue ocean convexity”, insofar as successful artists don’t make the others poorer, but increase the aggregate demand for art. It’s not “winner-take-all”, insofar as one doesn’t have to be the top player to add something people value.

Computational problem solving (not “programming”) is a field where there’s very high demand, so the fact that top performers will produce an order of magnitude more value (the “10X effect”) doesn’t put the rest out of business. That’s a very good thing, because most of those top performers were among “the rest” when they started their careers. Not only is there little direct competition, but as software engineers, we tend to admire those “10X” people and take every opportunity we can get to learn from them. If there were more of them, it wouldn’t make us poorer. It would make the world richer.

Is demand for anything limitless, though? For industrial products, no. Demand for televisions, for example, is limited by people’s need for them and space to put them. For making people’s lives better, yes. For improving processes, sure. Generation of true wealth (as Paul Graham defines it: “stuff people want”) is something for which there’s infinite demand, at least as far as we can see. So what’s the limiting factor? Why can’t everyone work on blue-ocean convex work that makes people’s lives better? It comes down to risk. So, let’s look at that. The model I’m going to use is as follows:

  • We only care about the immediate neighborhood of a specific (“typical”) competence level. We’ll call it x = 0.
  • Tasks have a difficulty t between -1.0 and 2.0, which represents the C in the logistic form. B is going to be a constant 4.5; just ignore that. 
  • The harder a task is, the higher the potential payoff. Thus, I’ll set A = 100 * (1 + e^(5*t)). This means that work gets more valuable slightly faster (11% faster) than it gets harder (“risk premium”). The constant term in A is based on the understanding that even very easy (difficulty of -1.0) work has value insofar as it’s time-consuming and therefore people must be paid to do it.
  • We measure risk for a given difficulty t by taking the first derivative of L(x; …), with respect to x, at x = 0. Why? L’(x; …) tells us how sensitive the output (payoff) is to marginal changes in input. We’re modeling unknown input variables and plain luck factors as a random, zero-mean “noise” variable d and assuming that for known competence x the true performance will be L(x + d; …). So this first derivative tells us, at x = 0, how sensitive we are to that unknown noise factor.
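
Before the graphs, here’s the whole model in code. The derivative formula is the standard closed form for a logistic, dL/dx = B·L·(1 - L/A); the grid of t values is mine.

```python
def yield_and_risk(t, B=4.5):
    """Expected yield L(0; A(t), B, t) at known competence x = 0, and the
    risk: the sensitivity dL/dx at x = 0, via B * L * (1 - L / A)."""
    A = 100.0 * (1.0 + math.exp(5.0 * t))
    y = L(0.0, A=A, B=B, C=t)
    return y, B * y * (1.0 - y / A)

for t in (-1.0, -0.5, 0.0, 1.0, 2.0):
    y, r = yield_and_risk(t)
    print(f"t = {t:+.1f}: yield {y:9.1f}, risk {r:9.1f}, ratio {y / r:6.2f}")
```

The ratio column anticipates the Sharpe Ratio discussion below: it decays toward the constant 1/B ≈ 0.22, which the normalized plot rescales to 1.0.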

What we want to do is assess the yield (expected value) and the risk (the sensitivity of that yield to the unknown noise in x) for difficulty levels from -1 to 2, holding known competence at x = 0. Here’s a graph of expected yield:

It’s hard to notice on that graph, but there’s actually a slight “dip” or “uncanny valley” as one goes from the extreme of easiness (t = -1.0) to slightly harder (-1.0 < t < 0.0) work:

Does it actually work that way in the real world? I have no idea. What causes this in the model is that, as we go from the ridiculously easy (t = -1.0) to the merely moderately easy (t = -0.5), the rate of potential failure grows faster than the maximum potential A does, as a function of t. That’s an artifact of how I modeled this, and I don’t know for sure that a real-world market would have this trait. Actually, I doubt it would. It’s a small dip, so I’m not going to worry about it. What we do see is that our yield is approximately constant as a function of difficulty for t from -1.0 to 0.0, where the work is concave for that level of skill; then it grows exponentially as a function of t from 0.0 to 2.0, where the work is convex. That is what we tend to see in markets. The maximal market value of work (1 + e^(5*t) in this model) grows slightly faster than the difficulty of completing it (1 + e^(4.5*t), here).

However, what we’re interested in is risk, so let me show that as well by graphing the first derivative of L with respect to x (not t!) for each t.

What this shows us, pretty clearly, is monotonic risk increase as the tasks become more difficult. That’s probably not too surprising, but it’s nice to see what it looks like on paper. Notice that the easy work has almost no risk involved. Let’s plot these together. I’ve taken the liberty of normalizing the risk formula (in purple) to plot them together, which is reasonable because our units are abstract:

Let’s look at one other statistic, which will be the ratio between yield and risk. In finance, this is called the Sharpe Ratio. Because the units are abstract (i.e. there’s no real meaning to “1 unit” of competence or difficulty) there is no intrinsic meaning to its scale, and therefore I’ve again taken the liberty of normalizing this as well. That ratio, as a function of task difficulty, looks like this…

…which looks exactly like affine exponential decay. In fact, that’s what it is: in this model the ratio works out to (1 + e^(-B*t))/B, an affine function of the decaying exponential e^(-B*t), from which A cancels entirely. The Sharpe Ratio is exponentially favorable for easy work (t < 0.0) and approaches a constant value (1.0 here, because of the normalization) for large t.

What’s the meaning of all this? Well, traditionally, the industrial problem was to maximize yield on capital within a finite “risk budget”. If that’s the case– you’re constrained by some finite amount of risk– then you want to select work according to the Sharpe Ratio. Concave tasks might have less yield, but they’re so low in risk that you can do more of them. For each quantum of risk in your budget, you want to get the most yield (expected value) out of it that you can. This favors extremely concave labor, and it is why industrial labor, for the past 200 years, has been almost all concave. Boring. Reliable. In many ways, the world still is concave, and that’s a desirable thing. Good enough is good enough. However, it just so happens that when we, as humans, master a concave task, we tend to look for the convex challenge of making it run itself. In pre-technological times, this was done by giving instructions to other people, and by making machines as easy as possible for humans to use. In the technological era, it’s done with computers and code. Even the grunt work of coding is given to programs (we call them compilers) so we can focus on the interesting stuff. We’re programming all of that concave work out of human hands. Yes, concave work is still the backbone of the industrial world and always will be. It’s just not going to require humans doing it.

What if, instead, the risk budget weren’t an issue? Let’s say that we have a team of 5 programmers given a year to do whatever they want, and the worst they can do is waste their time, and you’re okay with that maximal-risk outcome (5 annual salaries for a learning experience). They might build something amazing that sells for $100 million, or they might work for a year and have the project still fail on the market. Maybe they do great work, but no one wants it; that’s a risk of creation. In this case, we’re not constrained by risk allocation but by talent. We’ve already accepted the worst possible outcome as acceptable. We want them to be doing convex work, which has the highest yield. Those top-notch people are the limiting resource, not risk allocation.

Convexity requires teamwork

Above, I established that if individual productivity is a convex function of investment in that person, and group performance is a sum of individual productivity, then the optimal solution is to ply one person with resources and starve (and likely fire) the rest. Is that how things actually work? No, not usually. There’s a glaring false assumption, which is the additive model where group performance is a simple sum of individual performances. Real team efforts shouldn’t work that way.

When a team is properly configured, most of its efforts don’t merely add to some pile of assets; they multiply each other’s productivity. Each person works to make the others more successful. I wrote about this advancement of technical maturity (from adder to multiplier) as it pertains to software, but I think it’s more general. Warning: incompetent attempts at multiplier efforts are every bit as toxic as incompetent management, and will have a divider effect.

Team convexity is unusual in that both sides of the logistic “S-curve” are observed. You have synergy (convexity) as the team scales up to a certain size, but congestion (concavity) beyond a certain point. It’s very hard to get team size and configuration right, and typical “Theory Z” management (which attempts to coerce a heterogeneous set of people who didn’t choose each other, and probably didn’t choose the project, into being a team) generally fails at this. It can’t be managed competently from a top-down perspective, despite what many executives say (they are wrong). It has to be grass-roots self-organization. Top-down, closed-allocation management can work well in the Alice/Bob models above, where productivity is the sum of individual performances (i.e. team synergies aren’t important), but it fails catastrophically on projects that require interactive, multiplicative effects in order to be successful.

Convexity has different rules

The technological economy is going to be very different, because of the way business problems are formulated. In the industrial economy, capital was held in some fixed amount by a business, whose goal was to gain as much yield (profit or interest) from it while keeping risk within certain bounds deemed acceptable. That made concavity desirable. It still is; stable income with low variation is always a good thing. It’s just that such work no longer requires humans. Concave work has been so commoditized that it’s hard to get a passive profit from it.

Ultimately, I think a basic income is the only way society will be able to handle widespread convexity of individual labor. What does convexity say about the future? People will either be very highly compensated, or effectively unemployed. There will be an increasing need for unpaid learning while people push themselves from the low, flat region of a convex curve to the high, steep part. Right now, we have a society where people with the means to indulge in that learning can put themselves on a strong career track, but the majority, who have a lifelong need for monthly income, end up getting shafted: they become a permanent class of unskilled labor and, by keeping wages low, they actually hold back technological advancement.

Industrial management was risk-reductive. A manager took ownership of some process and his job was to look for ways it could fail, then tried to reduce the sources of error in that process. The rare convex task (choosing a business strategy) was for a higher order of being, an executive. Technological management has to embrace risk, because all the concave work’s being taken by machines. In the future, it will only be economical for a human to do something when perfect completion is unknown or undefinable, and that’s the convex work.

A couple more graphs deserve attention, because both pertain to managerial goals. There are two ways that a manager can create a profit. One is to improve output. The other is to reduce costs. Which is preferable? It depends. Below is a graph that shows productivity ($/hour) as a function of wages for some task where performance is assumed to be convex in wages. The relationship is assumed here to be inflexible and to go both ways: better people will expect more in wages, and low wages will cause people’s out-of-work distractions to degrade their performance. Plotted in purple is the y = x or “break-even” line.

As one can see, it doesn’t even make sense to hire people for this kind of work at less than $68/hour: they’ll produce less than they cost. That “dip” is an inherent problem for convex work. Who’s going to pay people in the $50/hour range so they can become good and eventually move to the $100/hour range (where they’re producing $200/hour work)? This naturally tends toward a “winners and losers” scenario. The people who can quickly get themselves to the $70/hour productivity level (through the unpaid acquisition of skill) are employable, and will continue to grow; the rest will not be able to justify wages that sustain them. The short version: it’s hard to get into convex work.

Here’s a similar graph for concave work:

… and here’s a graph of the difference between productivity and wage, or per-hour profit, on each worker:

So the optimal profit is achieved at $24.45 per hour, where the worker provides $56.33 worth of work in that time. It doesn’t seem fair, but improvements to wages beyond that point, while they improve productivity, do not improve it by enough to justify the additional cost. That’s not to say that companies will necessarily set wages at that level. (They might raise them higher to attract more workers, increasing total profit.) Also, here is a case where labor unions can be powerful (they aren’t especially helpful with convex work): in the above, the company would still earn a respectable profit on each worker at wages as high as $55 per hour, and wouldn’t be put out of business (despite management’s claim that “you’ll break us” at, say, $40) until almost $80.

The tendency of corporate management toward cost-cutting, “always say no”, and Theory-X practices is an artifact of this property of concavity. So while convexity is unfair insofar as it encourages inequality of investment and resources, enabling small differences in initial conditions to produce a winner-take-all outcome, concavity produces its own variety of unfairness: it encourages wages to settle at a very low level, with employers taking a massive surplus.

The most important problem…?

That was a lot of material on convexity, but I feel that the changeover to convexity in individual labor is the most important economic issue of the 21st century. So if we want to understand why the contemporary, MacLeod-hierarchical organization won’t survive it, we need a deep understanding of what convexity is and how it works. I think we have that, now.

What does this have to do with Work Sucking? Well, there are a few things we get out of it. First, for the concave work that most of the labor force is still doing…

  • Concave (“commodity”) labor leads to grossly unfair wages. This creates a natural adversarial relationship between workers and management on the issue of wage levels.
  • Management has a natural desire to reduce risk and cut costs, on an assumption of concavity. It’s what they’ve been doing for over 200 years. When you manage concave work, that’s the most profitable thing to do.
  • Management will often take a convex endeavor (e.g. computer programming) and try to treat it as concave. That’s what we, in software, call the “commodity developer” culture that clueless software managers try to shove down hapless engineers’ throats.
  • Stable, concave work is disappearing. Machines are taking it over. This isn’t a bad thing (on the contrary, it’s quite good) but it is eroding the semi-skilled labor base that gave the developed world a large middle class.

Now, for the convex:

  • Convex work favors low employment and volatile compensation. It’s not true that there “isn’t a lot of convex work” to go around. In fact, there’s a limitless amount of demand for it. However, one has to be unusually good for a company to justify paying for it at a level one could live on, because of the risk. Without a basic income in place, convexity will generate an economy where income volatility is at a level beyond what people are able to accept. As a firm believer in the need for market economies, I hold that this must be addressed.
  • Convex payoffs produce multiple optima on personnel matters (e.g. training, leadership). This sounds harmless until one realizes that “multiple optima” is a euphemism for “office politics”. It means there isn’t a clear meritocracy, as performance is highly context-sensitive.
  • Convex work often creates a tension between individual competition and teamwork. Managers attempting to grade individuals in isolation will create a competitive focus on individual productivity, because convexity rewards the amplification of small individual differences. This managerial style works for simple additive convexity, but fails in an organization that needs people to have multiplicative or synergistic effects (team convexity), and that’s most of them.

Red and blue ocean convexity

One of the surprising traits of convexity, tied in with the matter of teamwork, is that it’s hard to predict whether it will be structurally cooperative or competitive. This leads me to believe that there are fundamental differences between “red ocean” and “blue ocean” varieties of convexity. For those unfamiliar with the terms, red ocean refers to well-established territory in which competition is fierce. There’s a known, high quantity of resources (“blood in the water”) available, but there’s a frenzy of people (some with considerable competitive advantages) working to get at it. If you aren’t strong, the better predators will crowd you out. Blue ocean refers to unexplored territory where the yields are unknown but the competition’s less fierce (for now).

I don’t know this industry well, but I would think that modeling is an example of red-ocean convexity. Small differences in input (physical attractiveness, and skill at self-marketing) result in massive discrepancies of output, but there’s a small and limited amount of demand for the work. If there’s a new “10X model” on the scene, all the other models are worse off, because the supermodel takes up all of the work. For example, I know that some ridiculous percentage of the world’s hand-modeling is performed by one woman (who cannot live a normal life, due to her need to protect her hands).

What about professional sports, the distilled essence of competition? Blue ocean. Yep. That might seem surprising, given that these people often seem to want to kill each other, but the economic goal of a sports team is not to win games, but to play great games that people will pay money to watch. A “10X” player might revitalize the reputation of the sport, as Tiger Woods did for golf, and expand the audience. Top players actually make a lot of money for the opponents they defeat; the stars get a larger share of the pool, meaning their opponents get a smaller percentage, but they also expand that pool so much that everyone gets richer.

How about the VC-funded startup ecosystem? That’s less clear. Business formation is blue ocean convexity, insofar as there are plenty of untapped opportunities to add immense value, and they exist all over the world. However, fund-raising (at least, in the current investor climate) and press-whoring are red ocean convexity: a few already-established (and complacent) players get the lion’s share of the attention and resources, giving them an enormous head start. Indeed, this is the point of venture capital in the consumer-web space: use the “rocket fuel” (capital infusion) to take a first-entrant advantage before anyone else has a shot.

Red and blue ocean convexity are dramatically different in how they encourage people to think. With red-ocean convexity, it’s truly a ruthless, winner-take-all space, because the superior “10X” player will force the others out of business. You must either beat him or join him. I recommend “join”. With blue-ocean convexity (which is the force that drives economic growth) outsized success doesn’t come at the expense of other people. In fact, the relationship may be symbiotic and cooperative. For example, great programmers build tools that are used all over the world and make everyone better at their jobs. So while there is a lot of inequality in payoffs (Linus Torvalds makes millions per year; I use his tools) because that’s how convexity works, it’s not necessarily a bad thing, because everyone can win.

Convexity and progress

Convexity’s most important property involves progress over time. Real-world convexity curves are often steeper than the ones graphed above and, if there were no role for learning, the vast majority of people would be unable to achieve at a level that supports an income, and would thus be unemployed. For example, while practice is key in (highly convex) professional sports, there aren’t many people who have the natural talent to earn a living at it. Convexity shuts out those without natural talent. Luckily for us and the world, most convex work isn’t so heavily influenced by natural limitations, but by skills, specialization and education. There’s still an elite at the rightward side of the payoff distribution curve that takes the bulk of the reward, but it’s possible for a diligent and motivated person to enter that elite by gaining the requisite skills. In other words, most of the inputs into that convex payoff function are within the individual actor’s control. This is another case of “good inequality”. In blue-ocean convexity, we want the top players to reap very large rewards, because it motivates more people to do the work that gets them there.

Consider software engineering, which is perhaps the Platonic ideal of blue-ocean convexity. What retards us the most as an industry is the lack of highly-skilled people. As an industry, we contend with managerial environments tailored to mediocrity, and suffer from code-quality problems that can reduce a technical asset’s real value to 80, 20, or even minus-300 cents on the dollar compared to its book value. Good software engineers are rare, and that hurts everyone. In fact, perhaps the easiest way to add $1 trillion in value to the economy would be to increase software engineers’ autonomy. Because most software engineers never get the environment of autonomy that would enable them to get any good, the whole economy suffers. What’s the antidote? A lot of training and effort– the so-called “10,000 hours” of deliberate practice– that’s generally unpaid in this era of short-term, disposable jobs.

Convexity’s fundamental problem is that it requires highly-skilled labor, but no employer is willing to pay for people to develop the relevant skills, out of a fear that employees who drive up their market value will leave. In the short term, it’s an effective business strategy to hire mediocre “commodity developers” and staff them on gigantic teams for uninspiring projects, and give them work that requires minimal intellectual ability aside from following orders. In the long term, those developers never improve and produce garbage software that no one knows how to maintain, producing creeping morale decay and, sometimes, “time bombs” that cause huge business losses at unknown times in the future.

That’s why convexity is such a major threat to the full-employment society to which even liberal Americans still cling. Firms almost never invest in their people– empirically, we see that– preferring the short-term “solution”: ignore convexity and try to beat the labor context into concavity. That’s terrible in the long term. Thus, even in convex work, the bulk of people linger at the low-yield, leftward end of the curve. Their employers don’t invest in them, and often they lack the time and resources to invest in themselves. What we have, instead of blue-ocean convexity, is an economy where the privileged (who can afford unpaid time for learning) become superior because they have the capital to invest in themselves, and the rest are ignored and fall into low-yield commodity work. This was socially stable when there was a lot of concave, commodity work for humans to do, but that’s increasingly not the case.

Someone is going to have to invest in the long term, and to pay for progress and training. Right now, privileged individuals do it for themselves and their progeny, but that’s not scalable and will not avert the social instability threatened by systemic, long-term unemployment.

Trust and convexity

As I’ve said, convexity isn’t only a property of the relationship between individual inputs (talent, motivation, effort, skill) and productivity, but also occurs in team endeavors. Teams can be synergistic, with peoples’ efforts interacting multiplicatively instead of additively. That’s a very good thing, when it happens.

So it’s no surprise that large accomplishments often require multiple people. We already knew that! That is less true in 2013 than it was in 1985– now, a single person can build a website serving millions– but it’s still the case. Arguably, it’s more the case now; it’s just that many markets have become so efficient that interpersonal dependencies “just work” and give more leverage to single actors. (The web entrepreneur is using technologies and infrastructure built by millions of other people.) At any rate, it’s only a small space of important projects that will be accomplished well by a single party, acting alone. For most, there’s a need to bring multiple people together while retaining focus, and that requires interior political inequalities (leadership) within the group.

We’re hard-wired to understand this. As humans, we fundamentally get the need for team endeavors with strong leadership. That’s why we enjoy team sports so much.

Historically, there have been three “sources of power” that have enabled people to undertake and lead large projects (team convexity):

  • coercion, which exists when negative consequences are used to motivate someone to do work that she wouldn’t otherwise do. This was the cornerstone of pre-industrial economies (slavery) but is also used, in a softer form, by ineffective managers: do this or lose your income/reputation. Anyway, coercion is how the Egyptian pyramids were built: coercive slave labor.
  • divination, in which leaders are selected based on an abstract principle, which may be the whim of a god, legal precedent, or pure random luck. For example, it has been argued that gambling (a case of “pure random luck”) served a socially positive purpose on the American frontier: although it moved funds “randomly”, it allowed pools of capital to form, financing infrastructural ventures. Something like divination is how the cathedrals were built: voluntary labor, motivated by religious belief, directed by architects who often were connected with the Church. Self-divination, which tends to occur in a pure power vacuum, is called arrogation.
  • aggregation, in which an attempt is made to compute, fairly, the group preference or the true market value of an asset. Political elections and financial markets are aggregations. Aggregation is how the Internet was built: self-directed labor driven by market forces.

When possible, fair aggregations are the most desirable, but it’s non-trivial to define what fair is. Should corporate management be driven by the one-dollar, one-vote system that exists today? Personally, I don’t think so. I think it sucks. I think employees deserve a vote simply because they have an obvious stake in the company. As much as the current, right-wing, state of the American electorate infuriates me, I really like the fact that citizens have the power to fire bad politicians. (They don’t use it enough; incumbent victory rates are so high that a bad politician has more job security than a good programmer.) Working people should have the same power over their management. By accepting a wage that is lower than the value of what they produce, they are paying their bosses. They have a right to dictate how they are managed, and to insist on the mentorship and training that convexity is making essential.

Because it’s so hard to determine a fair aggregation in the general case, there’s always some room for divination and arrogation, or even coercion in extreme cases. For example, our Constitution is a case of (secular, well-informed) divination on the matter of how to build a principled, stable and rational government, but it sets up an aggregation that we use to elect political leaders. Additionally, if a political leader were voted out of office but did not hand over power, he’d be pushed out of it by force (coercion). Trust is what enables self-organizing (or, at least, stable) divination. People will grant power to leaders based on abstract principles if they trust those ideas, and they’ll allow representatives to act on their behalf if they trust those people.

Needless to say, convex payoffs to group efforts generate an important role for trust. That’s what the “stone soup” parable is about: because there’s no trust in the community, people hoard their own produce instead of sharing, and no one has had a decent meal for months. When outside travelers offer a nonexistent delicacy– the stone is a social catalyst with no nutritional value– and convince the villagers to donate their spare produce, they enable them all to work together. So the villagers get a nutritious bowl of soup and, one hopes, can start to trust each other and build at least a barter or gift economy. They all benefit from the “stone soup”, but they were deceived.

Convex dishonesty is the act of “borrowing” trust by lying to people, with the intent to pay them back out of the synergistic profits. It isn’t always bad; sometimes convex dishonesty is exactly what a person needs to do in order to get something accomplished. Nor is it always good. Failed convex frauds are damaging to morale, and therefore they often exacerbate the lack-of-trust problem. Moreover, there are many endeavors (e.g. pyramid schemes) that have the flavor of convex fraud but are, in reality, just fraud.

This, in fact, is why modern finance exists. It’s to replace the self-divinations that pre-financial societies required to get convex projects done with a fairer aggregation system that properly measures, and allows the transfer of, risks.

Credibility

For macroscopic considerations like the fair prices of oil or business equity, financial aggregations seem to work. What about the micro-level concern of what each worker should do on a daily basis? That usually exists in the context of a corporation (closed system) with specific authority structures and needs. Companies often attempt to create internal markets (tough culture) for resources and support, with each team’s footprint measured in internal “funny money” given the name of dollars. I’ve seen how those work, and they often become corrupt. The matter of how people direct the use of their time is based on an internal social currency (including job titles, visibility, etc.) that I’ve taken to calling credibility. It’s supposed to create a meritocracy, insofar as the only way one is supposed to be able to get credibility is through hard work and genuine achievement, but it often has some severely anti-meritocratic effects. 

So why does your job (probably) Suck? Your job will generally suck if you lack credibility, because that means you don’t control your own time, you have little choice over what you do and how you do it, and your job security is poor. Your efforts will be allocated, controlled, and evaluated by an external party (a manager) whose superiority in credibility grants him the right of self-divination. He gets to throw your time into his convex project, but not vice versa. You don’t have a say in it. Remember: he’s got credibility, and you lack it.

Credibility always generates a black market. This principle never fails. Performance reviews are gamed, with various trades being made wherein managers offer review points in exchange for loyalty and non-performance-related favors (such as vocal support for an unrelated project, positive “360-degree reviews”, and various considerations that are just inappropriate and won’t be discussed here). Temporary strongmen/thugs use transient credibility (usually, from managerial favoritism) to intimidate and extort other people into sharing credit for work accomplished, thus enabling the thug to appear to be a high performer and get promoted to a real managerial role (permanent credibility). You win on a credibility market by buying and selling it for a profit, creating various perverted social arbitrages. No organization that has allowed credibility to become a major force has avoided this.

Now I can discuss the hierarchy as immortalized by this cartoon from Hugh MacLeod:

[Hugh MacLeod’s “company hierarchy” cartoon: Sociopaths at the top, Clueless in the middle, Losers at the bottom.]

Losers are not undesirable, unpopular, or useless people. In fact, they’re often the opposite. What makes them “losers” is that, in an economic sense, they’re losing: they contribute more to the organization than they get out of it. Why do they do this? They like the monthly income and social stability. Sociopaths (who are not bad people; they’re just gamblers) take the other side of that risk trade. They bear a disproportionate share of the organization’s risk and work the hardest, but they get the most reward. They have the most to lose. A Loser who gets fired will get another job at the same wage; a Sociopath CEO will have to apply for subordinate positions if the company fails. The Clueless are a layer that forms later on, when this risk transfer becomes degenerate– the Sociopaths are no longer putting in more effort or taking more risk than anyone else, but have become an entitled, complacent, rent-seeking class– and they need a middle-management layer of over-eager “useful idiots” to create the image (the Effort Thermocline) that the top jobs are still demanding.

What’s missing in this analysis? Well, there’s nothing morally wrong, at all, with a financial risk transfer. If I had a resource that had a 50% chance of yielding $10 million, and 50% chance of being worthless, I’d probably sell it to a rich person (whose tolerance of risk is much greater) for $4.9 million to “lock in” that amount. A +5-million-dollar swing in personal wealth is huge to me and minuscule to him. It’d be a good trade for both of us. I’d be paying a (comparably small) $100,000 risk premium to have that volatility out of my financial life. I’m not a Loser in this deal, and he’s not a Sociopath. It’s by-the-book finance, how it’s supposed to work.

What generates the evil, then? Well, it’s the credibility market. I don’t hold the individual firm responsible for prevailing financial scarcity and, thus, for the overwhelmingly large number of people willing to make low-expectancy plays. As long as the firm pays its people reasonably, it has clean hands. So the financial Loser trade is not a sign of malfeasance. The credibility market’s different, because the organization has control over it. It creates the damn thing. Thus, I think the character of the risk transfer has several phases, each deserving its own moral stance:

  1. Financial risk transfer. Entrepreneurs put capital and their reputations at risk to amass the resources necessary to start a project whose returns are (macroscopically, at least) convex. This pool of resources is used to pay bills and wages, therefore allowing workers to get a reliable, recurring monthly wage that is somewhat less than the expected value of their contribution. Again, there’s nothing morally wrong here. Workers are getting a risk-free income (so long as the business continues to exist) while participating in the profits of industrial macro-convexity. 
  2. De-risking, entrenchment, and convex fraud. As the business becomes more established, its people stop viewing it as a risk transfer between entrepreneurs and workers, and start seeing it (after the company’s success is obvious) as a pool of “free” resources to gain control over. Such resources are often economic (“this place has millions of dollars to fund my ideas”) but reputation (“imagine what I could do as a representative of X”) is also a factor. People begin making self-divination (convex fraud) gambits to establish themselves as top performers and vault into the increasingly complacent, rent-seeking, executive tier. This is a red-ocean feeding frenzy for the pile of surplus value that the organization’s success has created.
  3. Credibility emerges, and becomes the internal currency. Successful convex fraudsters are almost always people who weren’t part of the original founding team. They didn’t get their equity when it was cheap, so now they’re in an unstable position. They’re high-ranking managers, but haven’t yet entwined themselves with the business or won a significant share of the rewards/equity. Knowing that their success is a direct output of self-divination (that is, arrogation), they use their purloined social standing to create official credibility in the forms of titles (public statements of credibility), closed allocation (credibility as a project-maker and priority-setter), and performance reviews (periodic credibility recalibrations). This turns the unofficial credibility they’ve stolen into an official, secure kind.
  4. Panic trading, and credibility risk transfer. Newly formed businesses, given their recent memory of existential risk, generally have a cavalier attitude toward firing and a tough culture, which I’ll explain below. This means that a person can be terminated not for doing anything wrong or being incompetent, but just because of an unlucky break in credibility fluctuations (e.g. a sponsor who changes jobs, a performance-review “vitality curve”). In role-playing games, this is the “killed by the dice” question: should the GM (the game master, a neutral party who creates and directs the game world) allow characters, played well, to die– really die, in the “create a new character” sense, not in the “miraculously resurrected by a level-18 healer” sense– because of bad rolls of the dice? In role-playing games, it’s a matter of taste. Some people hate games where they can lose a character by random chance; others like the tension that it creates. At work, though, “killed by the dice” is always bad. Tough-culture credibility markets allow good employees to be killed by the dice. In fact, when stack-ranking and “low performer” witch hunts set in, they encourage it. This creates a lot of panic trading, and there’s a new risk transfer in town. It’s not the morally acceptable and socially positive transfer of financial risk we saw in Stage 1. Rather, it’s the degenerate black-market credibility trading that enables the worst sorts of people (true psychopaths) to rise.
  5. Collapse into feudalistic rank culture. No one wants a job where she can be fired “for performance” because of bad luck, so tough cultures don’t last very long; they turn into rank cultures. People (Losers) panic-trade their credibility, and would rather subordinate to get some credibility (“protection”) from a feudal lord (Sociopath) than risk having none and being flushed out. The people who control the review process become very powerful and, eventually, can manufacture enough of an image of high performance to become official managers. You’re no longer going to be killed by the dice in a rank culture, but you can be killed by a manager, because he can unilaterally reduce your credibility to zero.
  6. Macroscopic underperformance and decline. Full-on rank culture is terribly inefficient, because it generates so much fourth-quadrant work that serves the needs of local extortionists (usually, middle managers and their favorites) but does not help the business. Eventually, this leads to underperformance of the business as a whole. Rank culture fosters so much incompetence that trust breaks down within the organization, and it’s often permanent. Firing bad apples is no longer possible, because flushing them away would require firing a substantial fraction of the organization, and that would become so politicized and disruptive as to break the company outright. Such companies regularly lapse into brief episodes of “tough culture”, when new executives (usually, people who buy it as its market value tanks) decide that it’s time to flush out the low performers, but they usually do it in a heavy-handed, McKinsey-esque way that creates a new and equally toxic credibility market. But… like clockwork, those who control said black markets become the new holders of rank and, soon enough, the official bosses. These mid-level rank-holders start out as the mean-spirited witch-hunters (proto-Sociopaths) who implement the “low performer initiative”, but they eventually rise, leaving behind a residue of strategically unaware, soft, complacent and generally harmless mid-ranking “useful idiots” (new Clueless). Clueless are the middle managers who get some power when the company lurches into a new rank culture, but don’t know how to use it and don’t know the main rule of the game of thrones: you win or you die.
  7. Obsolescence and death. Self-explanatory. Some combination of rank-culture complacency and tough-culture moral decay turns the company into a shell of what it once was. The bad guys have taken out their millions and are driving up house prices in the area, and their wives with too much plastic surgery are on zoning committees keeping those prices high; everyone else who worked at the firm is properly fucked. Sell off the pieces that still have value, close the shop.

That cycle, in the industrial era, used to play out over decades. If you joined a company in Stage 1 in 1945, you might start to see the Stage 4 midlife when you retired in 1975. Now, it happens much more quickly: it goes down over years, and sometimes months for fast-changing startups. It’s much more of an immediate threat to personal job security than it has ever been before. Cultural decay used to be a long-term existential risk that companies didn’t take seriously, because calamity was decades away; now, thanks to the “build to flip” mentality, it’s often ongoing and rapid.

Truth be told, the MacLeod rank culture wasn’t such a bad fit for the industrial era. Industrial enterprises had a minimal amount of convex work (choosing the business model, setting strategies) that could be delegated to a small, elite, executive nerve-center. Clueless middle managers and rationally-disengaged (Loser) wage earners could implement ideas delivered from the top without too much introspection or insight, and that was fine because individual work was concave. Additionally, that small set of executives could be kept close to the owners of the company (if they weren’t the same set of people).

In the technological era, individual labor is convex and we can no longer afford Cluelessness, or Loserism. The most important work– and within a century or so, all work where there’s demand for humans to do it– requires self-executivity. The hierarchical corporation is a brachiosaur sunning itself on the Yucatan, but that bright point of light isn’t the sun.

Your job is a call option

If companies seem to tolerate, at least passively, the inefficiency of full-blown rank culture, doesn’t that mean that there isn’t a lot of real work for them to do? Well, yes, that’s true. I’ve already discussed the existence of low-yield, boring, Fourth Quadrant busywork that serves little purpose to the business. It’s not without any value, but it doesn’t do much for a person’s career. Why does it exist? First, let’s answer this: where does it come from?

Companies have a jealously-guarded core of real work: essential to the business, great for the careers of those who do it. The winners of the credibility market get the First Quadrant (1Q) of interesting and essential work. They put themselves on the “fun stuff” that is also the core of the business– it’s enjoyable, and it makes a lot of money for the firm and therefore leads to high bonuses. There isn’t a lot of work like this, and it’s coveted, so few people can be in this set. Those are akin to feudal lords, and correspond with MacLeod Sociopaths. Those who wish to join their set, but haven’t amassed enough credibility yet, take on the less enjoyable, but still important Second Quadrant (2Q) of work: unpleasant but essential. Those are the vassals attempting to become lords in the future. That’s often a Clueless strategy because it rarely works, but sometimes it does. Then there is a third monastic category of people who have enough credibility (got into the business early, usually) to sustain themselves but have no wish to rise in the organizational hierarchy. They work on fun, R&D projects that aren’t in the direct line of business (but might be, in the future). They do what’s interesting to them, because they have enough credibility to get away with that and not be fired. They work on the Third Quadrant (3Q): interesting but discretionary. How they fit into the MacLeod pyramid is unclear. I’d say they’re a fortunate sub-caste of Losers in the sense that they rationally disengage from the power politics of the essential work; but they’re Clueless if they’re wrong about their job security and get fired. Finally, who gets the Fourth Quadrant (4Q) of unpleasant and discretionary work? The peasants. The Losers without the job security of permanent credibility are the ones who do that stuff, because they have no other choice.
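
Stripped of the feudal imagery, this is just a 2-by-2 over “interesting?” and “essential?”. A trivial Python sketch, with labels taken from the paragraph above (the function itself is mine, purely for illustration):

    def quadrant(interesting: bool, essential: bool) -> str:
        # Map the two axes of work to the four quadrants described above.
        if interesting and essential:
            return "1Q: fun and essential (the lords / Sociopaths)"
        if essential:
            return "2Q: unpleasant but essential (vassals aiming at lordship)"
        if interesting:
            return "3Q: interesting but discretionary (the monastic caste)"
        return "4Q: unpleasant and discretionary (the peasants)"

    print(quadrant(interesting=False, essential=False))  # the junk work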

Where does the Fourth Quadrant work come from? Clueless middle-managers who take undesirable (2Q) or unimportant (3Q) projects, but manage to keep all the career upside (turning 2Q into 4Q for their reports) and all the fun work (turning 3Q into 4Q, likewise) for themselves, leaving their reports utterly hosed. This might seem to violate their Cluelessness; it’s more Sociopathic, right? Well, MacLeod “Clueless” doesn’t mean that they don’t know how to fend for themselves. It means they’re non-strategic: they rarely know what’s good for the business or what will succeed in the long term. They suck at “the big picture”, but they’re perfectly capable of local operations. Additionally, some Clueless are decent people; others are very clearly not. It is perfectly possible to be MacLeod Clueless and also a sociopath.

Why do the Sociopaths in charge allow the blind Clueless to generate so much garbage make-work? The answer is that such work is evaluative. The point of the years-long “dues paying” period is to figure out who the “team players” are so that, when leadership opportunities or chances for legitimate, important work open up, the Sociopaths know which of the Clueless and Losers to pick. In other words, hiring a Loser subordinate and putting him on unimportant work is a call option on a key hire, later.

Workplace cultures

I mentioned rank and tough cultures above, so let me get into more detail about what those are. In general, an organization is going to evaluate its individuals based on three core traits:

  • subordinacy: does this person put the goals of the organization (or, at least, her immediate team and supervisor) above her own?
  • dedication: will she do unpleasant work, or large amounts of work, in order to succeed?
  • strategy: does she know what is worth working on, and direct her efforts toward important things?

People who lack two or all three of these core traits are generally so dysfunctional that all but the most nonselective employers just flush them out. Those types– such as the strategic, not-dedicated, and insubordinate Passive-Aggressive and the dedicated, insubordinate, and not-strategic Loose Cannon– occasionally pop up for comic relief, but they’re so incompetent that they don’t last long in a company and are never in contention for important roles. I call them, as a group, the Lumpenlosers.

MacLeod Losers tend to be strategic and subordinate, but not dedicated. They know what’s worth working on, but they tend to follow orders because they’re optimizing for comfort, social approval, and job security. They don’t see any value in 90-hour weeks (which would compromise their social polish) or radical pursuit of improvement (which would upset authority). They just want to be liked and adjust well to the cozy, boring, middle-bottom. If you make a MacLeod Loser work Saturdays, though, she’ll quit. She knows that she can get a similar or better job elsewhere.

MacLeod Clueless are subordinate and dedicated but not strategic. They have no clue what’s worth working on. They blindly follow orders, but will also put in above-board effort because of an unconditional work ethic. They frequently end up cleaning up messes made by Sociopaths above and Losers below them. They tend to be where the corporate buck actually stops, because Sociopaths can count on them to be loyal fall guys.

MacLeod Sociopaths are dedicated and strategic but insubordinate. They figure out how the system works and what is worth putting effort into, and they optimize for personal yield. They’re risk-takers who don’t mind taking the chance of getting fired if there’s also a decent likelihood of a promotion. They tend to have “up-or-out” career trajectories, and job hopping isn’t uncommon.

Since there are good Sociopaths out there, I’ve taken to calling the socially positive ones the Technocrats, who tend to be insubordinate with respect to immediate organizational authority, but have higher moral principles rooted in convexity: process improvements, teamwork and cooperation, technical and infrastructural excellence. They’re the “positive-sum” radicals.  I’ll get back to them.

Is there a “unicorn” employee who combines all three desired traits– subordinacy, dedication, and strategy? Yes, but only under a particular set of circumstances. In general, it’s not strategic to be subordinate and dedicated. If you’re strategic, you’ll usually optimize for one of two things. You might optimize for comfort: be subordinate but not dedicated, because dedication is uncomfortable, and if you follow orders, it’s pretty easy to coast in most companies. That’s the Loser strategy. Or, you might optimize for personal yield and work a bit harder, becoming dedicated, but you won’t do it for a manager’s benefit: it’s either your own, or some kind of higher purpose. That’s the Sociopath strategy. The exception is a mentor/protege relationship. Strategic and dedicated people will subordinate if they think that the person in authority knows more than they do, and is looking out for their career interests. They’re subordinating to a mentor conditionally, based on the understanding that they will be in authority, or at least able to do more interesting and important work, in the future.
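
The trait algebra fits in a few lines. Here’s a sketch in Python (the archetype names are this series’; the lookup function is mine, purely illustrative):

    def archetype(subordinate: bool, dedicated: bool, strategic: bool) -> str:
        # Two of three traits defines a MacLeod type; one or zero is Lumpenloser.
        count = sum([subordinate, dedicated, strategic])
        if count <= 1:
            return "Lumpenloser: flushed out by all but the least selective firms"
        if count == 3:
            return "unicorn: exists only inside a mentor/protege relationship"
        if not dedicated:
            return "Loser: strategic + subordinate, optimizing for comfort"
        if not strategic:
            return "Clueless: subordinate + dedicated, blind to the big picture"
        return "Sociopath: dedicated + strategic (Technocrat, if good-aligned)"

    print(archetype(subordinate=False, dedicated=True, strategic=True))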

From this understanding, we can derive four common workplace cultures:

  • rank cultures value subordinacy above all. You can coast if you’re in good graces with your manager, and the company ultimately becomes lazy. Rank cultures have the most pronounced MacLeod pyramid: lazy but affable Losers, blind but eager Clueless, and Sociopaths at the top looking for ways to gain from the whole mess. 
  • tough cultures value dedication, and flush out the less dedicated using informal social pressure and formal performance reviews. It’s no longer acceptable to work a standard workweek; 60 hours is the new 40. Tough culture exists to purge the Loser tier, splitting it between the neo-Clueless sector and the still-Loser rejects, whom it will fire if they don’t quit first. So the MacLeod pyramid of a tough culture is more fluid, but every bit as pathological.
  • self-executive cultures value strategy. Employees are individually responsible for directing their own efforts into pursuits that are of the most value. This is the open allocation for which Valve and Github are known. Instead of employees having to compete for projects (tough culture) or managerial support (rank culture) it is the opposite. Projects compete for talent on an open market, and managers (if they exist) must operate in the interests of those being managed. There is no MacLeod hierarchy in a self-executive culture.
  • guild cultures value a balance of the three. Junior employees aren’t treated as terminal subordinates but as proteges who will eventually rise into leadership/mentoring positions. There isn’t a MacLeod pyramid here; to the extent that there may be undesirable structure, it has more to do with inaccurate seniority metrics (e.g. years of experience) than with bad-faith credibility trading.

Rank and guild cultures are both command cultures, insofar as they rely on central planning and global (within the institution) rule-setting. Top management must keep continual awareness of how many people are at each level, and plan out the future accordingly. Tough and self-executive cultures are market cultures, because they require direct engagement with an organic, internal market.

The healthy, “Theory Y” cultures are the guild and self-executive cultures. These confer a basic credibility on all employees, which shuts off the panic trading that generates the MacLeod process. In a guild culture, each employee has credibility for being a student who will grow in the future. In self-executive culture, each employee has power inherent in the right to direct her efforts to the project she considers most worthy. Bosses and projects competing for workers is a Good Thing. 

The pathological, “Theory X” cultures are the rank and tough cultures. It goes without saying that most rank cultures try to present themselves as guild cultures– but management has so much power that it need not take any mentorship commitments seriously. Likewise, most tough cultures present themselves as self-executive ones. How do you tell if your company has a genuinely healthy (Theory Y) culture? Basic credibility. If it’s there, it’s the good kind. If it’s not, it’s the bad kind of culture.

Basic credibility

In a healthy company, employees won’t be “killed by the dice”. Sure, random fluctuations in credibility and performance might delay a promotion for a year or two, but the panicked credibility trading of the Theory-X culture isn’t there. People don’t fear their bosses in a Theory-Y culture; they’re self-motivated and fear not doing enough by their own standards– because they actually care. Basic credibility means that every employee is extended enough credibility to direct his own work and career.

That does not mean people are never fired. If someone punches a colleague in the face or steals from the company, you fire him, but it has nothing to do with credibility. You get rid of him because, well, he did something illegal and harmful. What it does mean is that people aren’t terminated for “performance reasons” that really mean either (a) they were just unlucky and couldn’t get enough support to save them in tough-culture “stack ranking”, or (b) their manager disliked them for some reason (no-fault lack-of-fit, or manager-fault lack-of-fit). It does mean that people are permitted to move around in the company, and that the firm might tolerate a real underperformer for a couple of years. Guess what? In a convex world, underperformance almost doesn’t matter.

With convexity, the difference between excellence and mediocrity matters much more than that between mediocrity and underperformance. In a concave world, yes, you must fire underperformers, because the margin you get on good employees is so low that one slacker can cancel out 4 or 5 good people. In a convex world, the danger isn’t that you have a few underperformers. You will have, at the least, good-faith low performers, just because the nature of convexity is to create risk and inequality of return, and some people’s projects won’t pan out. That’s fine. Instead, the danger is that you don’t have any excellent (“10x”) employees.
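
A toy model makes the asymmetry visible. In the sketch below, concave work pays a thin, capped per-head margin while convex work pays superlinearly for skill; all the numbers are invented for illustration:

    # Concave vs. convex team output (illustrative numbers only).
    def output(skills, convex):
        if convex:
            # Convex: value = skill cubed, a superlinear payoff to excellence.
            return sum(s ** 3 for s in skills)
        # Concave: excellence caps out quickly (at 1.2) and each head carries
        # a fixed 0.8 coordination cost, leaving a thin margin.
        return sum(min(s, 1.2) - 0.8 for s in skills)

    mediocre = [1.0] * 10              # ten solid mid-percentile workers
    one_slacker = [1.0] * 9 + [0.0]    # one zero-output slacker
    one_star = [1.0] * 9 + [3.0]       # one excellent ("10x"-ish) worker

    for label, team in [("mediocre", mediocre), ("one slacker", one_slacker),
                        ("one star", one_star)]:
        print(f"{label:12s} concave: {output(team, False):5.1f}   "
              f"convex: {output(team, True):5.1f}")

Read the output columns: in the concave world, the slacker halves the team’s margin while the star barely moves it; in the convex world, the slacker costs 10 percent and the star more than triples output.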

There’s a managerial myth that cracking down on “low performers” is useful because they demotivate the “10x-ers”. Yes and no. Incompetent management and having to work around bad code are devastating and will chase out your top performers. If 10x-ers have to work with incompetents and have no opportunity to improve them, they get frustrated and quit. There are toxic incompetents (dividers) who make others unproductive and damage morale, and then there are low-impact employees who just need more time (subtracters). Subtracters cost more in salary than they deliver, but they aren’t hurting anyone, and they will usually improve. Fire dividers immediately. Give subtracters a few years (yes, I said years) to find a fit. Sometimes, you’ll hire someone good and still have that person end up as a subtracter at first. That’s common in the face of convexity– and remember that convexity is the defining problem of the 21st-century business world. The right thing to do is to let her keep looking for a fit until she finds one. It will almost never take years if your company runs properly.

“Low performer initiatives” rarely smoke out the truly toxic dividers, as it turns out. Why? Because people who have defective personalities and hurt other peoples’ morale and productivity are used to having their jobs in jeopardy, and have learned to play politics. They will usually survive. It’ll be unlucky subtracters you end up firing. You might save chump change on the balance sheet, but you’re not going to fix the real organizational problems.

Theories X, Y, and Z

I grouped the negative workplace cultures (rank and tough) together and called them Theory X; the positive ones (self-executive and guild) I called Theory Y. This isn’t my terminology; it’s about 50 years old, coming from Douglas McGregor. The 1960s were the height of Theory Y management, so that was the “good” managerial style. Let’s compare them and see what they say.

Recall what I said about the “sources of power”: coercion, divination, and aggregation. Coercion was, by far, the predominant force in aggregate labor before 1800. Slavery, prisons, and militaries (with, in that time, lots of conscription) were the inspirations for the original corporations, and the new class of industrialists was very cruel: criminal by modern standards. Theory X was the norm. Under Theory X, workers are just resources. They have no rights, no important desires, and should be well-treated only if there’s an immediate performance benefit. Today, we recognize that as brutal and psychotic, but for a humanity coming off over 100,000 years of male positional violence and coerced labor, the original-sin model of work shouldn’t seem far off. Theory X held that employees are intrinsically lazy and selfish and will only work hard if threatened.

Around 1920, industrialists began to realize that, even though labor in that time mostly was concave, it was good business to be decent to one’s workers. Henry Ford, a rabid anti-Semite, was hardly a decent human being, much less “a nice guy”, but even he was able to see this. He raised wages, creating a healthy consumer base for his products. He reduced the workday to ten hours, then eight. The long days just weren’t productive. Over the next forty years, employers learned that if workers were treated well, they’d repay the favor by behaving better and working harder. This led to the Theory Y school of management, which held that people were intrinsically altruistic and earnest, and that management’s role was to nurture them. This gave birth to the paternalistic corporation and the bilateral social contracts that created the American middle class.

Theory Y failed. Why? It grew up between the 1940s and ’60s, a time of middle-class prosperity and very low economic inequality. One thing that would amaze most Millennials is that, when our parents grew up, the idea that a person would work for money was socially unacceptable. You just couldn’t say that you wanted to get rich, in 1970, and not be despised for it. And it was very rare for a person to make 10 times more than the average citizen! However, the growth of economic inequality that began in the 1970s, and accelerated since then, raised the stakes. Then the Reagan Era hit.

Most of the buyout/private equity activity that happened in the 1980s had a source immortalized by the movie Wall Street: industrial espionage, mostly driven by younger people eager to sell out their employers’ secrets to get jobs from private equity firms. There was a decade of betrayal that brutalized the older, paternalistic corporations. Given, by a private equity tempter, the option of becoming CEO immediately through chicanery, instead of working toward it for 20 years, many took the former. Knives came out, backs were stabbed, and the most trusting corporations got screwed.

Since the dust settled, around 1995, the predominant managerial attitude has been Theory Z. Theory X isn’t socially acceptable, and Theory Y’s failure is still too recently remembered. What’s Theory Z? Theory X takes a pessimistic view of workers and distrusts everyone. Theory Y takes an optimistic view of human nature and becomes too trusting. Theory Z is the most realistic of the three: it assumes that people are indifferent to large organizations (even their employers) but loyal to those close to them (family, friends, immediate colleagues, distant co-workers; probably in that order). Human nature is neither egoistic nor altruistic, but localistic. This was an improvement insofar as it holds a more realistic view of how people are. It’s still wrong, though.

What’s wrong with Theory Z? It’s teamist. Now, when you have genuine teamwork, that’s a great thing. You get synergy, multiplier effects, team convexity– whatever you want to call it, I think we all agree that it’s powerful. The problem with the Theory-Z company is that it tries to enforce team cohesion. Don’t hire older people; they might like different music! Buy a foosball table, because 9:30pm diversions are how creativity happens! This is more of a cargo cult than anything founded in reasonable business principles, and it’s generally ineffective. Teamism reduces diversity and makes it harder to bring in talent (which is critical, in a convex world). It also tends toward general mediocrity.

Each Theory had a root delusion in it. Theory X’s delusion was that morale didn’t matter; workers were just machines. Theory Y’s delusion was rooted in the tendency of “too good” people to think everyone else is as decent as they are; it fell when the 1980s made vapid elitism “sexy” again, and opportunities emerged to make obscene wealth by betraying one’s employer. Theory Z’s delusion is that a set of people who share nothing other than a common manager constitute a genuine (synergistic) team. See, in an open-allocation world, you’re likely to get team synergies because of the self-organization. People naturally tend to form teams where they make each other more productive (multiplier effects). It happens at the grass-roots level, but can’t be forced on people who are deprived of autonomy. With closed allocation, you don’t get that. People (with diverging interests) are brought together by forces outside of their control and told to be a team. Closed-allocation Theory Z lives in denial of how rare those synergistic effects actually are.

I mentioned, previously, an alternative to these three theories that I’ve called Theory A, which is a more sober and realistic slant on Theory Y: trust employees with their own time and energy; distrust those who want to control others. I’ll return to that in Part 22, the conclusion.

Morality, civility, and social acceptability

The MacLeod Sociopaths that run large organizations are a corrosive force, but what defines them isn’t true psychopathy, although some of them are that. There are also plenty of genuinely good people who fit the MacLeod Sociopath archetype. I am among them. What makes them dangerous is that the organization has no means to audit them. If it’s run by “good Sociopaths” (whom I’ve taken to calling Technocrats) then it will be a good organization. However, if it’s run by the bad kind, it will degenerate. So, for the so-called Sociopaths (it matters less for the Losers and Clueless), it is important to understand the moral composition of that set.

I’ve put a lot of effort into defining good and evil, and that’s a big topic I don’t have much room for, so let me be brief on them. Good is motivated by concerns like compassion, social justice, honesty, and virtue. Evil is militant localism or selfishness. In an organizational context, or from a perspective of individual fitness, both are maladaptive when taken to the extreme. Extreme good is self-sacrifice and martyrdom that tends to take a person out of the gene pool, and certainly isn’t good for the bottom line; extreme evil is perverse sadism that actually gets in a person’s way, as opposed to the moderate psychopathy of corporate criminals.

Law and chaos are the extremes of a civil spectrum, which I cribbed from AD&D. Lawful people have faith in institutions and chaotic people tend to distrust them. Lawful good sees institutions as tending to be more just and fair than individual people; chaotic good finds them to be corrupt. Lawful neutrality sees institutions as being efficient and respectable; chaotic neutrality finds them inefficient and deserving of destruction. Lawful evil sees institutions as a magnifier of strength and admires their power; chaotic evil sees them as obstructions that get in the way of raw, human dominance. 

Morality and civil bias, in people, seem to be orthogonal. In the AD&D system, each spectrum has three levels, producing 9 alignments. I focused on the careers of each here. In reality, though, there’s a continuous spectrum. For now, I’m just going to assume a Gaussian distribution, mean 0 and standard deviation 1, with the two dimensions being uncorrelated.

MacLeod Losers tend to be civilly neutral, and Clueless tend to be lawful; but MacLeod Sociopaths come from all over the map. Why? To understand that, we need to focus on a concept that I call well-adjustment. To start, humans don’t actually value extremes in goodness or in law. Extreme good leads to martyrdom, and most people who are more than 3 standard deviations of good are taken to be neurotic narcissists, rather than being admired. Extremely lawful people tend to be rigid, conformist, and are therefore not much liked either. I contend that there’s a point of maximum well-adjustment that represents what our society says people are supposed to be. I’d put it somewhere in the ballpark of 1 standard deviation of good, and 1 of law, or the point (1, 1). If we use +x to represent law, -x to represent chaos, +y to represent good, and -y to represent evil, we get the well-adjustment formula:

    f(x, y) = (x - 1)^2 + (y - 1)^2

Here, low f means that one is more well-adjusted. It’s better to be good than evil, and to be lawful than chaotic, but it’s best to be at (1, 1) exactly. But wait! Is there really a difference between (1, 1) and (0, 0)? Or between (5, 5) and (5, 6)? Not really, I don’t think. Well-adjustment tends to be a binary relationship, so I’m going to put f through a logistic transform where 0.0 means total ill-adjustment and 1.0 means well-adjustment. Middling values represent a “fringe” of people who will be well-adjusted in some circumstances but fail, socially speaking, in others. Based on my experience, I’d guess that this:

    a(x, y) = 1 / (1 + e^(f(x, y) - 6))

is a good estimate. If your squared distance from the point of maximal well-adjustment is less than 4, you’re good (a is about 0.9 or higher). If it’s more than 8, you’re probably ill-adjusted (a is about 0.1 or lower)– too good, too evil, too lawful, or too chaotic. That gives us, in the 2-D moral/civil space, a well-adjustment function shaped like a high plateau with a steep circular rim:

[The original post showed the function’s surface plot and its contour plot: concentric circles around (1, 1), near 1.0 inside the ring and near 0.0 outside.]

Now, I don’t know whether the actual well-adjustment function that drives human social behavior has such a perfect circular shape. I doubt it does. It’s probably some kind of contiguous oval, though. Inside the ring is a plateau of high (near 1.0) social adjustment. People in this space tend to get along with everyone. Or, if they have social problems, it has little to do with their moral or civil alignments, which are socially acceptable. Outside it is a deep sea (near 0.0) of social maladjustment. It turns out that if you’re 2 standard deviations of evil and of chaos, you have a hard time making friends.

In other words, we have a social adjustment function that’s almost binary, but there’s a really interesting circular fringe that produces well-adjustment values between 0.1 and 0.9. Why would that be important? Because that’s where the MacLeod Sociopaths come from.

Well-adjusted people don’t rise in organizations. Why? Because organizations know exactly how to make it so that well-adjusted, normal people don’t mind being at the bottom, and will slightly prefer it if that’s where the organization thinks they belong. It’s like Brave New World, where the lower castes (e.g. Gammas) are convinced that they are happiest where they are. If you’re on that white plateau of well-adjustment, you’ll probably never be fired. You’ll always have friends wherever you go. You can get comfortable as a MacLeod Loser, or maybe Clueless. You don’t worry. You don’t feel a strong need to rise quickly in an organization.

Of course, the extremely ill-adjusted people in the red don’t rise either. That should not surprise anyone. Unless they become very good at hiding their alignments, they are too dysfunctional to have a shot in social organizations like a modern corporation. To put it bluntly, no one likes them.

However, let’s say that a Technocrat has 1.25 standard deviations each of chaos and of good, making her well-adjustment level about 0.65. She’s clearly in that fringe category. What does this mean? It means that she’ll be socially acceptable in about 65% of all contexts. The MacLeod Loser career isn’t an option for her. She might get along with one set of managers and co-workers, but as they change, things may turn against her. Over time, something will break. This gives her a natural up-or-out impetus. If she doesn’t keep learning new things and advancing her career, she could be hosed. She’s liked by more people than dislike her, but she can’t rely on being well-liked as if it were a given.

It’s people on the fringe who tend to rise to the top of, and run, organizations, because they can never get cozy on the bottom. We can graph “fringeness”, measured as the magnitude of the slope (the gradient) of the well-adjustment function, and we get contours like this:

[The original post plotted the fringeness contours here.]

It’s a ring-shaped fringe. Nothing too surprising. The perfection of the circular ring is, of course, an artifact of the model. I don’t know if it’s this neat in the real world, but the idea there is correct. Now, here’s where things get interesting. What does that picture tell us? Not that much aside from what we already know: the most ambitious (and, eventually, most successful) people in an organization will be those who are not so close to the “point of maximal well-adjustment” to get along in any context, but not so far from it as to be rejected out of hand.

But how does this give us the observed battle royale between chaotic good and lawful evil? Up there, it just looks like a circle. 

Okay, so we see the point (3, 3) in that circular band. How common is it for someone to be 3 standard deviations of lawful and 3 standard deviations of good? Not common at all. 3-sigma events are rare (about 1 in 740) so a person who was 3 deviations from the norm in both would be 1-in-548,000– a true rarity. Let’s multiply this “fringeness” function we’ve graphed by the (Gaussian) population density at each point.

Weighted by population density, the fringe is no longer a full ring. Positions like (3, 3) drop out of it, because there’s almost no one there. What remains is a clear crescent “C” shape, and it contains a disproportionate share of two kinds of people: a lot of lawful evil in the bottom right, and a lot of chaotic good in the top left, in addition to some neutral “swing players” who will tend to side (with unity in their group) with one or the other. How they swing tends to determine the moral character of an organization. If they side with the chaotic good, then they’ll create a company like Valve. If they side with lawful evil, you get the typical MacLeod process.
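
For anyone who wants to poke at this, here’s a minimal numeric sketch of the model in Python. It assumes the quadratic-distance and logistic forms reconstructed above (the constants come from the 4-and-8 thresholds, so treat them as my guesses rather than canon):

    import math

    # A numeric sketch of the model above. The quadratic distance and the
    # logistic constant are my reconstruction from the 4-and-8 thresholds
    # in the text, not canon. x: law (+) / chaos (-); y: good (+) / evil (-).

    def squared_distance(x, y):
        """f: squared distance from the point of maximal well-adjustment, (1, 1)."""
        return (x - 1) ** 2 + (y - 1) ** 2

    def adjustment(x, y):
        """Logistic transform of f: about 0.9 when f < 4, about 0.1 when f > 8."""
        return 1.0 / (1.0 + math.exp(squared_distance(x, y) - 6.0))

    def fringeness(x, y):
        """Magnitude of the gradient of the adjustment surface.
        For a logistic a(f): |grad a| = a * (1 - a) * |grad f|."""
        a = adjustment(x, y)
        return a * (1.0 - a) * 2.0 * math.hypot(x - 1, y - 1)

    def density(x, y):
        """Standard bivariate Gaussian population density (uncorrelated axes)."""
        return math.exp(-(x ** 2 + y ** 2) / 2.0) / (2.0 * math.pi)

    # Sanity check on the rarity arithmetic: P(Z > 3) is about 1 in 740,
    # so 3 sigma on both axes is about 1 in 548,000.
    p3 = 0.5 * math.erfc(3 / math.sqrt(2))
    print(f"3-sigma: 1 in {1 / p3:,.0f}; on both axes: 1 in {1 / p3 ** 2:,.0f}")

    # Density-weighted fringe at a few alignments: the well-adjusted center
    # and the 3-sigma corner both vanish; chaotic good and lawful evil don't.
    points = [
        ("well-adjusted (1, 1)", (1.0, 1.0)),
        ("extreme lawful good (3, 3)", (3.0, 3.0)),
        ("chaotic good (-1.4, 1.4)", (-1.4, 1.4)),
        ("lawful evil (1.4, -1.4)", (1.4, -1.4)),
    ]
    for label, (x, y) in points:
        print(f"{label:28s} weighted fringe = {fringeness(x, y) * density(x, y):.4f}")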

That’s the theoretical reason why organizations come down to an apocalyptic battle between chaotic good (Technocrats) and lawful evil (corrosive Sociopaths, in the MacLeod process). How does this usually play out? Well, we know what lawful evil does. It uses the credibility black market to gain power in the organization. How should chaotic good fight against this? It seems that convexity plays to our advantage, insofar as the MacLeod process can no longer be afforded. In the long term, the firm can only survive if people like us (chaotic good) win. How do we turn that into victory in the short term?

So what’s a Technocrat to do? And how can a company be built to prevent it from undergoing MacLeod corrosion? What’s missing in the self-executive and guild cultures that a 5th “new” type of culture might be able to fix? That’s where I intend to go next.

Take a break, breathe a little. I’ll be back in about a week to Solve It.

Gervais followup: rank, tough, and market cultures.

This is a follow-up to yesterday’s essay in which I confronted the MacLeod Hierarchy of the organization, which affixes unflattering labels to the three typical tiers (workers, managers, executives) of the corporate pyramid. Subordinate workers, MacLeod names Losers, not as a pejorative, but because life at the bottom is a losing proposition. Lifelong middle managers are the Clueless who lack the insight necessary to advance. Executives are the exploitative Sociopaths who win. I looked at this and discovered that each category possessed two of three corporate traits: strategy, dedication, and subordinacy (which I called “team player” in the original essay). I replaced “team player” with subordinacy because I realized that “team player” isn’t well-defined. Here, by subordinacy, I mean that a person is willing to accept subordinate roles, without expectation of personal benefit. People who lack it are not constitutionally insubordinate, but view their work as a contract between themselves and the manager. They take direction from the manager and show loyalty, so long as the manager advances their careers. Since they show no loyalty to a manager or team that doesn’t take an interest in their careers, they get the “not a team player” label.

MacLeod Losers are subordinate and strategic, but not very dedicated. They work efficiently and generally do a good job, but they’re usually out the door at 5:00, and they’re not likely to stand up to improve processes if that will bring social discomfort upon them. Clueless are subordinate and dedicated, but not strategic. They’ll take orders and work hard, but they rarely know what is worth working on and should not advance to upper management. MacLeod Sociopaths are strategic and dedicated, but not subordinate. The next question to ask is, “Is insubordinacy necessarily psychopathy?”, and I would say no. Hence, my decision to split the Sociopath tier between the “good Sociopaths” (Technocrats) and the bad ones, the true Psychopaths.

People who have one or zero of the three traits (subordinacy, dedication, strategy) are too maladaptive to fit into the corporate matrix at all and become Lumpenlosers. People with all three do not exist. A person who is strategic is not going to dedicate herself to subordination. She might subordinate, in the context of a mutually beneficial mentor/protege role, but general subordination is out of the question. Strategic people either decide to minimize discomfort, which means being well-liked team players in the middle of the performance curve– not dedicated over-performers– or to maximize yield, which makes subordinacy unattractive.

What I realized is that, from these three traits, one can understand three common workplace cultures.

Rank Culture

The most common one is rank culture, which values subordinacy. Even mild insubordination can lead to termination, and for a worker to be described as “out for herself” is the kiss of death. Rank cultures make a home for the MacLeod Losers and Clueless, but MacLeod Sociopaths don’t fare well. They have to keep moving, preferably upward.

While the MacLeod Clueless lack the strategic vision to decide what to work on, their high degree of dedication and subordinacy makes them a major tactical asset for the Sociopath. The Clueless middle manager becomes a role model for the Losers. He played by the rules and worked hard, and moved into a position of (slightly) higher pay and respect. 

Rank cultures have, within them, a thermocline. From worker to middle manager, jobs get harder and require more effort to attain and maintain. Below the thermocline, one really does have to exceed expectations to qualify for the next level up. Above the thermocline, in Sociopath territory, the power associated with rank starts paying dividends, and jobs get easier as one ascends, rising into executive ranks where one controls the performance assessment and can either work only on “the fun stuff”, or choose not to work at all. Rank cultures require this thermocline, an opaque veil, to keep the workers motivated and invested in their belief that work will make them free. That is why, in such cultures, the strategically inept Clueless are so damn important.

Tough Culture

Rank culture’s downfall is that, over time, it reduces the quality of employees. The best struggle to rise in an unfair, politicized environment where subordination to local gatekeepers (managers) is more important than merit, and eventually quit or get themselves fired. The worst find a home in the bottom of the Loser tier, the Lumpenlosers. At some point, companies decide that the most important thing to do is clear out the underperformers. Thus is born tough culture. Rank culture values subordinacy above all else; tough culture demands dedication. Sixty hours per week becomes the new 40. Performance reviews now come with “teeth”.

Enron’s executives were proud of their “tough” culture, with high-stakes performance reviews and about 5% of employees fired for performance each year. The firm was berated for its “mean-spirited” and politicized review system that, in reality, is no different from what exists now in many technology companies. This is the “up-or-out” model where if you don’t appear to be “hungry”, you’ll be gone in a year or two. Those who appear to be coasting have targets on their heads.

Over time, tough culture begets a new rank culture, because the effort it demands becomes unsustainable, and because those who control the new and more serious performance review system begin using it to run extortions based on loyalty and tribute rather than objective effort. They become the new rank-holders.

Market Culture

Rank culture demands subordinacy, and tough culture demands dedication. Market culture is one that demands strategy, even of entry-level employees. You don’t have to work 60-hour weeks, nor do you have to be a well-liked, smiling order-follower. You have to work on things that matter.

Rank and tough cultures focus on “performance”, which is an assessment of how well one does as an individual. If you work hard to serve a bad boss, or on an ill-conceived project, you made no mistake. You were following orders. You had no impact, but it wasn’t your fault, and you “performed” well. Market cultures ignore the “performance” moralism and go straight to impact. What did you do, and how did it serve the organization? Low-impact doesn’t mean you’ll be fired, but you are expected to understand why it happened and to take responsibility for moving to higher impact pursuits.

People who serially have low impact, if the company can’t mentor them until they are self-executive, will need to be fired because, if they’re not, they’ll become the next generation’s subordinates and generate a rank culture. Firing them is the hard part, because most of the people who conceive market cultures are well-intended Technocrats (or, in the MacLeod triarchy, “good Sociopaths”) who want to liberate peoples’ creative energies. They don’t like giving up on people. But any market culture is going to have to deal with people who are not self-executive enough to thrive, and if it doesn’t have the runway to mentor them, it has to let them go.

A true market culture is “bossless”. You don’t work for a manager, but for the company. You might find mentors who will guide you toward more important, high-impact pursuits, but that’s your job. In technology, this is the open allocation methodology.

Why Market Culture is best

Market culture seems, of the three, the most explicitly Sociopathic (in the MacLeod sense). Rank cultures are about who you are, and how well you play the role that befits your rank. Tough cultures are about how much you sacrifice, and are democratic in that sense. Market cultures are about what you do. Your rank and social status and “personality” don’t matter. Is that dehumanizing? In my opinion, no. It might look that way, but I see it from a different angle. In a rank culture, you’re expected to submit (or subordinate) yourself. In a tough culture, your contribution is sacrifice. In a market culture, you submit work. Of these three, I prefer the last. I would rather submit work to an “impersonal” market than to a rank-seeking extortionist trying to rise in a dysfunctional rank culture.

What I haven’t yet addressed is the cleavage I’ve drawn within the MacLeod Sociopath category between Technocrats and Psychopaths. The most important thing for an organization is the differential fitness of these two categories. Technocrat executives are, on average, beneficial. Psychopaths are unethical and usually undesirable.

Pure rank cultures do not seem to confer an advantage to either category. Tough cultures, on the other hand, benefit Psychopaths who find an outlet for their socially competitive energies. Ultimately, as the tough culture devolves into an emergent rank culture, Psychopaths thrive amid the ambiguity and political turmoil. Tough cultures unintentionally attract Psychopaths.

What about market cultures? I think that Psychopaths actually have a short term advantage in them, and that might seem damning of the market culture. This is probably why rank cultures are superficially more attractive to the Clueless middle-managers, with their dislike of “job hopping” and overt ambition. On the other hand, and much more importantly, market cultures confer the long-term advantage on Technocrats. They’re “eventually consistent”. To thrive in one for the long game, one must develop a skill base and deliver real value. That’s a Technocrat’s game.

No, idiot. Discomfort Is Bad.

Most corporate organizations have failed to adapt to the convexity of creative and technological work, a result of which is that the difference between excellence and mediocrity is much more meaningful than that between mediocrity and zero. An excellent worker might produce 10 times as much value as a mediocre one, instead of 1.2 times as much, as was the case in the previous industrial era. Companies, trapped in concave-era thinking, still obsess over “underperformers” (through annual witch hunts designed to root out the “slackers”) while ignoring the much greater danger, which is the risk of having no excellence. That’s much more deadly. For example, try to build a team of 50th- to 75th-percentile software engineers to solve a hard problem, and the team will fail. You don’t have any slackers or useless people– all would be perfectly productive people, given decent leadership– but you also don’t have anyone with the capability to lead, or to make architectural decisions. You’re screwed.

The systematic search-and-destroy attitude that many companies take toward “underperformers” exists for a number of reasons, but one is to create pervasive discomfort. Performance evaluation is a subjective, noisy, information-impoverished process, which means that good employees can get screwed just by being unlucky. The idea behind these systems is to make sure that no one feels safe. One in 10 people gets put through the kangaroo court of a “performance improvement plan” (which exists to justify termination without severance) and fired if he doesn’t get the hint. Four in 10 get below-average reviews that damage the relationship with the boss and make internal mobility next to impossible. Four more are tagged with the label of mediocrity, and, finally, one of those 10 gets a good review and a “performance-based” bonus… which is probably less than he feels he deserved, because he had to play mad politics to get it. Everyone’s unhappy, and no one is comfortable. That is, in fact, the point of such systems: to keep people in a state of discomfort.

The root idea here is that Comfort Is Bad: that if people feel comfortable at work, they’ll become complacent, but if they’re intimidated just enough, they’ll become hard workers. In the short term, there’s some evidence that this sort of motivation works. People will stay at work for an additional two hours in order to avoid missing a deadline and having an annoying conversation the next day. In the long term, it fails. For example, open-plan offices, designed to use social discomfort to enhance productivity, actually reduce it by 66 percent. Hammer on someone’s adrenal system, and you get a response for a short while. After a certain point, you get a state of exhaustion and “flatness of affect”. The person doesn’t care anymore.

What’s the reason for this? I think that the phenomenon of learned helplessness is at play. One short-term reliable way to get an animal such as a human to do something is to inflict discomfort, and to have the discomfort go away if the desired work is performed. This is known as negative reinforcement: the removal of unpleasant circumstances in exchange for desired behavior. An example of this known to all programmers is the dreaded impromptu status check: the pointless unscheduled meeting in which a manager drops in, unannounced, and asks for an update on work progress, usually in the absence of an immediate need. Often, this isn’t malicious or intentionally annoying, but comes from a misunderstanding of how engineers work. Managers are used to email clients that can be checked 79 times per day with no degradation of performance, and tend to forget that humans are not this way. That said, the behavior is an extreme productivity-killer, as it costs about 90 minutes per status check. I’ve seen managers do this 2 to 4 times per day. The more shortfall in the schedule, the more grilling there is. The idea is to make the engineer work hard so there is progress to report and the manager goes away quickly. Get something done in the next 24 hours, or else. This might have that effect– for a few weeks. At some point, though, people realize that the discomfort won’t go away in the long term. In fact, it gets worse, because performing well leads to higher expectations, while a decline in productivity (or even a perceived decline) brings on more micromanagement. Then learned helplessness sets in, and the attitude of not giving a shit takes hold. This is why, in the long run, micromanagers can’t motivate shit to stink.
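
Toy arithmetic on that interruption tax, using the essay’s own figures:

    # Status-check tax: ~90 minutes of lost flow per interruption,
    # at 2 to 4 interruptions per day (the figures above).
    cost_per_check_hours = 1.5
    for checks_per_day in (2, 3, 4):
        lost = checks_per_day * cost_per_check_hours
        print(f"{checks_per_day} checks/day: {lost:.1f} of 8 hours lost ({lost / 8:.0%})")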

Software engineers are increasingly inured to environments of discomfort and distraction. One of the worst trends in the software industry is the tendency toward cramped, open-plan offices where an engineer might have less than 50 square feet of personal space. This is sometimes attributed to cost savings, but I don’t buy it. Even in Midtown Manhattan, office space only costs about $100 per square foot per year. That’s not cheap, but not expensive enough (for software engineers) to justify the productivity-killing effect of the open-plan office.
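
Here’s the back-of-envelope version (the rent figure is from above; the fully loaded engineer cost is my assumption, for illustration):

    # Does rent justify cramming engineers? Compare the cost of real space
    # against the cost of the engineer sitting in it.
    rent_per_sqft_year = 100      # Midtown Manhattan, per the figure above
    extra_space_sqft = 100        # e.g., going from 50 to 150 sqft per person
    engineer_cost_year = 200_000  # assumed fully loaded cost (my assumption)

    extra_rent = rent_per_sqft_year * extra_space_sqft
    print(f"extra rent: ${extra_rent:,}/year = "
          f"{extra_rent / engineer_cost_year:.0%} of one engineer's cost")

At those numbers, tripling an engineer’s personal space costs about 5 percent of what the engineer costs; the open-plan productivity hit doesn’t have to be large before the “savings” go negative.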

Discomfort is an especial issue for software engineers, because our job is to solve problems. That’s what we do: we solve other peoples’ problems, and we solve our own. Our job, in large part, is to become better at our job. If a task is menial, we don’t suffer through it, nor do we complain about it or attempt to delegate it to someone else. We automate it away. We’re constantly trying to improve our productivity. Cramped workspaces, managerial status checks, and corrupt project-allocation machinery (as opposed to open allocation) all exist to lower the worker’s social status and create discomfort or, as douchebags prefer to call it, “hunger”. This is an intended effect, and because it’s in place on purpose, it’s also defended by powerful people. When engineers learn this, they realize that they’re confronted with a situation they cannot improve. It becomes a morale issue.

Transient discomfort motivates people to do things. If it’s cold, one puts on a coat. When discomfort recurs without fail, it stops having this effect. At some point, a person’s motivation collapses. What use is it to act to reduce discomfort if the people in charge of the environment will simply recalibrate it to make it uncomfortable again? None. So what motivates people in the long term? See: What Programmers Want. People need a genuine sense of accomplishment that comes from doing something well. That’s the genuine, long-lasting motivation that keeps people working. Typically, the creative and technological accomplishments that revitalize a person and make long-term stamina possible will only occur in an environment of moderate comfort, in which ideas flow freely. I’m not saying that the office should become an opium den, and there are forms of comfort that are best left at home, but people need to feel secure and at ease with the environment– not like they’re in a warzone.

So why does the Discomfort Is Good regime live on? Much of it is just an antiquated managerial ideology that’s poorly suited to convex work. However, I think that another contributing factor is “manager time”. One might think, based on my writing, that I dislike managers. As individuals, many of them are fine. It’s what they have to do that I tend to dislike, but it’s not an enviable job. Managing has higher status but, in reality, is no more fun than being managed. Managers are swamped. With 15 reports, schedules full of meetings, and their own bosses to “manage up”, they are typically overburdened. Consequently, a manager can’t afford to dedicate more than about 1/20 of his working time to any one report. The result of this extreme concurrency (out of accord with how humans think) is that each worker is split into a storyline that only gets 5% of the manager’s time. So when a new hire, at 6 months, is asking for more interesting work or a quieter location, the manager’s perspective is that she “just got here”. Six months times 1/20 is 1.3 weeks. That’s manager time. This explains the insufferably slow progress most people experience in their corporate careers. Typical management expects 3 to 5 years of dues-paying (in manager time, the length of a college semester) before a person is “proven” enough to start asking for things. Most people, of course, aren’t willing to wait 5 years to get a decent working space or autonomy over the projects they take on.
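
“Manager time” is just division, but it’s worth making explicit (the 1/20 attention share is the estimate above):

    # Manager time: with ~15 reports and a meeting-heavy calendar, each
    # report gets about 1/20 of the manager's attention.
    attention_share = 1 / 20
    for label, weeks in [("6 months", 26), ("4 years", 208)]:
        print(f"{label} of employee tenure = {weeks * attention_share:.1f} "
              f"weeks of manager time")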

A typical company sees its job as creating a Prevailing Discomfort so that a manager can play “Good Cop” and grant favors: projects with more career upside, work-from-home arrangements, and more productive working spaces. Immediate managers never fire people; the company does, “after careful review” of performance (in a “peer review” system wherein, for junior people, only managerial assessments are given credence). “Company policy” takes the Bad Cop role. Ten percent of employees must be fired each year because “it’s company policy”. No employee can transfer in the first 18 months because of “company policy”. (“No, your manager didn’t directly fuck you over. We have a policy of fucking over the least fortunate 10%, and your manager simply chose not to protect you.”) Removal of the discomfort is to be doled out (by managers) as a reward for high-quality work. However, for a manager to fight to get these favors for reports is exhausting, and managers understandably don’t want to do this for people “right away”. The result is that these favors are given out very slowly, and often taken back during “belt-tightening” episodes, which means that the promised liberation from these annoying discomforts never really comes.

One of the more amusing things about the Discomfort Is Good regime is that it actually encourages the sorts of behaviors it’s supposed to curtail. Mean-spirited performance review systems don’t improve low performers; they create them by turning the unlucky into an immobile untouchable class with an axe to grind, and open-plan offices allow the morale toxicity of disengaged employees to spread at a rapid rate. Actually, my experience has been that workplace slacking is more common in open-plan offices. Why? After six months in open-plan office environments, people learn the tricks that allow them to appear productive while focusing on things other than work. Because such environments are exhausting, these are necessary survival adaptations, especially for people who want to be productive before or after work. In a decent office environment, a person who needed a 20-minute “power nap” could take one. In the open-plan regime, the alternative is a two-hour “zone out” that’s not half as effective.

The Discomfort Is Good regime is as entrenched in many technology startups as in large corporations, because it emerges from a prevailing, but wrong, attitude among the managerial caste (from which most VC-istan startup founders, on account of the need for certain connections, have come). One of the first things that douchebags learn in Douchebag School is to make their subordinates “hungry”. It’s disgustingly humorous to watch them work to inflict discomfort on others– it’s transparent what they’re trying to do, if one knows the signs– and be repaid with substandard work product. Corporate America, at least in its current incarnation, is clearly in decline. I expected to relish watching that decline more than I do. I expected pyrotechnics and theatrical collapses, and that’s clearly not the way this system is going to go. It won’t go out with an explosive bang, but with the high-pitched hum of irritation and discomfort.

Fourth quadrant work

I’ve written a lot about open allocation, so I think it’s obvious where I stand on the issue. One of the questions that is always brought up in that discussion is: so who answers the phones? The implicit assumption, with which I don’t agree, is that there are certain categories of work that simply will not be performed unless people are coerced into doing it. To counter this, I’m going to answer the question directly. Who does the unpleasant work in an open-allocation company? What characterizes the work that doesn’t get done under open allocation?

First, define “unpleasant”. 

Most people in most jobs dislike going to work, but it’s not clear to me how much of that is an issue of fit as opposed to objectively unpleasant work. The problem comes from two sources. First, companies often determine their project load based on “requirements” whose importance is assessed according to the social status of the people proposing them rather than any reasonable notion of business, aesthetic, or technological value, and that generates a lot of low-yield busywork that people prefer to avoid because it’s not very important. Second, companies and hiring managers tend to be ill-equipped to match people with their specialties, especially in technology. Hence, you have machine learning experts working on payroll systems. It’s not clear to me, however, that there’s a massive battery of objectively undesirable work on which companies rely. There’s probably someone who’d gladly take on a payroll-system project as an excuse to learn Python.

Additionally, most of what makes work unpleasant isn’t the work itself but the subordinate context: nonsensical requirements, lack of choice in one’s tools, and unfair evaluation systems. This is probably the most important insight a manager can have about work: most people genuinely want to work. They don’t need to be coerced, and coercing them will only erode their intrinsic motivation in the long run. In that light, open allocation’s mission is to remove the command system that turns work that would otherwise be fulfilling into drudgery. Thus, even if we accept that any company will generate some quantity of unpleasant work, the amount of it will likely decrease under open allocation, especially as people are freed to find work that fits their interests and specialties. What’s left– the work that no one wants to do– is a much smaller share of the workload. In most companies, there isn’t much of it to go around, and it can almost always be automated.

The Four Quadrants

We define work as interesting if there are people who would enjoy doing it or find it fulfilling– some people like answering phones– and unpleasant if it’s drudgery that no one wants to do. We call work essential if it’s critical to a main function of the business– money is lost in large amounts if it’s not completed, or not done well– and discretionary if it’s less important. Exploratory work and support work tend to fall into the “discretionary” set. These two variables split work into four quadrants:

  • First Quadrant: Interesting and essential. This is work that is intellectually challenging, reputable in the job market, and important to the company’s success. Example: the machine learning “secret sauce” that powers Netflix’s recommendations or Google’s web search.
  • Second Quadrant: Unpleasant but essential. These tasks are often called “hero projects”. Few people enjoy doing them, but they’re critical to the company’s success. Example: maintaining or refactoring a badly-written legacy module on which the firm depends.
  • Third Quadrant: Interesting but discretionary. This type of work might become essential to the company in the future, but for now, it’s not in the company’s critical path. Third Quadrant work is important for the long-term creative health of the company and morale, but the company has not been (and should not be) bet on it.  Example: robotics research in a consumer web company.
  • Fourth Quadrant: Unpleasant and discretionary. This work isn’t especially desirable, nor is it important to the company. This is toxic sludge to be avoided if possible, because in addition to being unpleasant to perform, it doesn’t look good in a person’s promotion packet. This is the slop work that managers delegate out of a false perception of a pet project’s importance. Example: at least 80 percent of what software engineers are assigned at their day jobs.

The mediocrity that besets large companies over time is a direct consequence of the Fourth Quadrant work that closed allocation generates. When employees’ projects are assigned, without appeal, by managers, the most reliable mechanism for project-value discovery– whether capable workers are willing to entwine their careers with it– is shut down. The result, under closed allocation, is that management does not get this information regarding what projects the employees consider important, and therefore won’t even know what the Fourth Quadrant work is. Can they recover this “market information” by asking their reports? I would say no. If the employees have learned (possibly the hard way) how to survive a subordinate role, they won’t voice the opinion that their assigned project is a dead end, even if they know it to be true.

Closed allocation simply lacks the garbage-collection mechanism that companies need in order to clear away useless projects. Perversely, companies are much more comfortable with cutting people than projects. On the latter, they tend to be “write-only”, removing projects only years after they’ve failed. Most of the time, when companies perform layoffs, they do so without reducing the project load, expecting the survivors to put up with an increased workload. This isn’t sustainable, and the result often is that, instead of reducing scope, the company starts to underperform in an unplanned way: you get necrosis instead of apoptosis.

So what happens in each quadrant under open allocation? First Quadrant work gets done, and done well. That’s never an issue in any company, because there’s no shortage of good people who want to do it. Third Quadrant work likewise gets enough attention, because people enjoy doing it. As for Second Quadrant work, that also gets done, but management often finds that it has to pay for it in bonuses, title upgrades, or raises. Structuring such rewards is a delicate art, since promotions should represent respect but not confer power that might undermine open allocation. However, I believe it can be done. I think the best solution is to have promotions and a “ladder”, but for its main purpose to be informing decisions about pay, not serving as an excuse to create power relationships that make no sense.

So, First and Third Quadrant work are not a problem under open allocation. That stuff is desirable and allocates itself. Second Quadrant work is done, and done well, but it’s expensive. Is this so bad, though? The purpose of these rewards is to compensate people for freely choosing work that would otherwise run against their interests and careers. That seems quite fair to me. Isn’t that how we justify CEO compensation? They do risky work, assume responsibilities other people don’t want, and are rewarded for it? At least, that’s the story. Still, a “weakness” of open allocation is that it requires management to pay for work that it could get “for free” in a more coercive system. The counterpoint is that coerced workers are generally not going to perform as well as people with more pleasant motivations. If the work is truly Second Quadrant, it’s worth every damn penny to have it done well.

Thus, I think it’s a fair claim that open allocation wins in the First, Second, and Third Quadrant. What about the Fourth? Well, under open allocation, that stuff doesn’t get done. The company won’t pay for it, and no one is going to volunteer to do it, so it doesn’t happen. The question is: is that a problem?

I won’t argue that Fourth Quadrant work doesn’t have some value, because from the perspective of the business, it does. Fixing bugs in a dying legacy module might make its demise a bit slower. However, I would say that the value of most Fourth Quadrant work is low, and much of it is negative in value on account of the complexity that it imposes, in the same way that half the stuff in a typical apartment is of negative value. Where does it come from, and why does it exist? The source of Fourth Quadrant work is usually a project that begins as a Third Quadrant “pet project”. It’s not critical to the business’s success, but someone influential wants to do it and decides that it’s important. Later on, he manages to get “head count” for it: people who will be assigned to complete the less glamorous work that this pet project generates as it scales; or, in other words, people whose time is being traded, effectively, as a political token. If the project never becomes essential but its owner is active enough in defending it to keep it from ever being killed, it will continue to generate Fourth Quadrant work. That’s where most of this stuff comes from. So what is it used for? Often, companies allocate Third Quadrant work to interns and Fourth Quadrant work to new hires, not wanting to “risk” essential work on new people. The purpose is evaluative: to see if this person is a “team player” by watching his behavior on relatively unimportant, but unattractive, work. It’s the “dues paying” period and it’s horrible, because a bad review can render a year or two of a person’s working life completely wasted.

Under open allocation, the Fourth Quadrant work goes away. No one does any. I think that’s a good thing, because it doesn’t serve much of a purpose. People should be diving into relevant and interesting work as soon as they’re qualified for it. If someone’s not ready to be working on First and Second Quadrant (i.e. essential) work, then have that person in the Third Quadrant until she learns the ropes.

Closed-allocation companies need the Fourth Quadrant work because they hire people but don’t trust them. The ideology of open allocation is: we hired you, so we trust you to do your best to deliver useful work. That doesn’t mean that employees are given unlimited expense accounts on the first day, but it means they’re trusted with their own time. By contrast, the ideology of closed allocation is: just because we’re paying you doesn’t mean we trust, like, or respect you; you’re not a real member of the team until we say you are. This brings us to the real “original sin” at the heart of closed allocation: the duplicitous tendency of growing-too-fast software companies to hire before they trust.

Careerism breeds mediocrity

A common gripe of ambitious people is the oppressive culture of mediocrity that almost everyone experiences at work: boring tasks, low standards, risk aversion, no appetite for excellence, and little chance to advance. The question is often asked: where does all this mediocrity come from? Obviously, there are organizational forces– risk-aversion, subordination, seniority– that give it an advantage, but what might be an individual-level root cause that brings it into existence in the first place? What makes people preternaturally tolerant of mediocrity, to such a degree that large organizations converge to it? Is it just that “most people are mediocre”? Certainly, anyone can become complacent and mediocre, given sufficient reward and comfort, but I don’t think it’s a natural tendency of humans. In fact, I believe it leaves us dissatisfied and, over the long run, disgusted with working life.

Something I’ve learned over the years about the difference between mediocrity and excellence is that the former is focused on “being” and what one is, while the latter is about doing and what one creates or provides. Mediocrity wants to be attractive or important or socially well-connected. Excellence wants to create something attractive or perform an important job. Mediocrity wants to “be smart” and for everyone to know it. Excellence wants to do smart things. Mediocrity wants to be well-liked. Excellence wants to create things worth liking. Mediocrity wants to be one of the great writers. Excellence wants to write great works. People who want to hold positions, acquire esteem, and position their asses in specific comfortable chairs tend to be mediocre, risk-averse, and generally useless. The ones who excel are those who go out with the direct goal of achieving something.

The mediocrity I’ve described above is the essence of careerism: acquiring credibility, grabbing titles, and taking credit. What’s dangerous about this brand of mediocrity is that, in many ways, it looks like excellence. It is ambitious, just toward different ends. Like junk internet use, it produces the feeling that real work is getting done. In fact, this variety of mediocrity is not only socially acceptable but drilled into children from a young age. It’s not “save lives and heal the sick” that many hear growing up, but “become a doctor”.

This leads naturally to an entitlement mentality, for what is a title but a privilege of being? Viscount isn’t something you do. It’s something you are, either by birth or by favor. Upper-tier corporate titles are similar, except with “by favor” being common because it must at least look like a meritocracy when, in truth, the proteges and winners have been picked at birth.

Corporations tend to be risk-averse and pathological, to such a degree that opportunities to excel are rare, and therefore become desirable. Thus, they’re allocated as a political favor. To whom? To people who are well-liked and have the finest titles. To do something great in a corporate context– to even have the permission to use your own time in such a pursuit– one first has to be something: well-titled, senior, “credible”. You can’t just roll up your sleeves and do something useful and important, lest you be chastised for taking time away from your assigned work. It’s ridiculous! Is it any wonder that our society has such a pervasive mentality of entitlement? When being something must occur before doing anything, there is no other way for people to react.

As I get older, I’m increasingly negative on the whole concept of careerism. It makes being reputable (demonstrated through job titles) a prerequisite for doing anything useful, and its priorities lead naturally to a culture of entitled mediocrity. What looks like ambition is actually a thin veneer over degenerate, worthless social climbing. Once people are steeped in this culture for long enough, they’re too far gone, and real ambition has been drained from them forever.

This, I think, is Corporate America’s downfall. In this emasculated society, almost no one wants to do any real work– or to let anyone else do real work– because that’s not what gets rewarded, and to do anything that’s actually useful, one has to be something (in the eyes of others) first. This means that the doers who remain tend to be the ones who are willing to invest years in the soul-sucking social climbing and campaigning required to get there, and the macroscopic result of this is adverse selection in organizational leadership. Over time, this leaves organizations unable to adapt or thrive, but it takes decades for that process to run its course.

What’s the way out? Open allocation. In a closed-allocation dinosaur company, vicious political fights ensue about who gets to be “on” desirable projects. People lose sight of what they can actually do for the company, distracted by the perpetual cold war surrounding who gets to be on what team. You don’t have this nonsense in an open-allocation company. You just have people getting together to get something important done. The way out is to remove the matrix of entitlement, decisively and radically. That, and probably that alone, will evade the otherwise inevitable culture of mediocrity that characterizes most companies.

Programmer autonomy is a $1 trillion issue.

Sometimes, I think it’s useful to focus on trillion-dollar problems. While it’s extremely difficult to capture $1 trillion worth of value, as made evident by the nonexistence of trillion-dollar companies, innovations that create that much are not uncommon, and these trillion-dollar problems often have not one good idea, but a multitude of them to explore. So a thought experiment that I sometimes run is, “What is the easiest way to generate $1 trillion in value?” Where is the lowest-hanging trillion dollars of fruit? What’s the simplest, most obvious thing that can be done to add $1 trillion of value to the economy?

I’ve written at length about the obsolescence of traditional management, due to the convex input-output relationship in modern, creative work, making variance-reductive management counter-productive. I’ve also shared a number of thoughts about Valve’s policy of open allocation, and the need for software engineers to demand the respect accorded to traditional professions: sovereignty, autonomy, and the right to work directly for the profession without risk of the indirection imposed by traditional hierarchical management. So, for this exercise, I’m going to focus on the autonomy of software engineers. How much improvement will it take to add $1 trillion in value to the economy?

This is an open-ended question, so I’ll make it more specific: how many software engineers would we have to give a high degree of autonomy (open allocation) in order to add $1 trillion? Obviously, the answer is going to depend on the assumptions that are made, so I’ll have to answer some basic questions. Because there’s uncertainty in all of these numbers, the conclusion should be treated as an estimate. However, I’ve attempted to make my assumptions conservative wherever possible, so it’s quite plausible that the required number of engineers is actually less than half the number I’ll compute here.

Question 1: How many software engineers are there? I’m going to restrict this analysis to the developed world, because it’s hard to enforce cultural changes globally. Data varies, but most surveys indicate that there are about 1.5 million software engineers in the U.S. If we further assume the U.S. to be one-fifth of “the developed world”, we get 7.5 million full-time software engineers. I think that’s a good working number, although the worldwide total– loosening the definition of “software engineer” or of “developed world”– is probably two to three times as large.

Question 2: What is the distribution of talent among software engineers? First, here’s the scale that I developed for assessing the skill and technical maturity of a software engineer. Defining specific talent percentiles requires specifying a population, but I think the software engineer community can be decomposed into the following clusters, differing according to the managerial and cultural influences on their ability and incentive to gain competence.

Cluster A: Managed full-time (75%). These are full-time software engineers who, at least nominally, spend their working time either coding or on software-related problems (such as site reliability). If they were asked to define their jobs, they’d call themselves programmers or system administrators: not meteorologists or actuaries who happen to program software. Engineers in this cluster typically work on large corporate codebases and are placed in a typical hierarchical corporate structure. They develop an expertise within their defined job scope, but often learn very little outside of it. Excellence and creativity generally aren’t rewarded in their world, and they tend to evolve into the stereotypical “5:01 developers”. They tend to plateau around 1.2-1.3, because the more talented ones are usually tapped for management before they would reach 1.5. Multiplier-level contributions for engineers tend to be impossible to achieve in their world, due to bureaucracy, limited environments, project requirements developed by non-engineers, and an assumption that anyone decent will become a manager. I’ve assigned Cluster A a mean competence of 1.10 and a standard deviation of 0.25, meaning that 95 percent of them are between 0.6 and 1.6.

Cluster B: Novices and part-timers (15%). These are the non-engineers who write scripts occasionally, software engineers in their first few months, interns and students. They program sometimes, but they generally aren’t defined as programmers. This cluster I’ve given a mean competence of 0.80 and a standard deviation of 0.25. I assign them to a separate cluster because (a) they generally aren’t evaluated or managed as programmers, and (b) they rarely spend enough time with software to become highly competent. They’re good at other things.

Cluster B isn’t especially relevant to my analysis, and it’s also the least well-defined. Like the Earth’s atmosphere, its outer perimeter has no well-defined boundary. Uncertainty about its size is also the main reason why it’s hard to answer questions about the “number of programmers” in the world. The percentage would increase (and so would the number of programmers) if the definition of “programmer” were loosened.

Cluster C: Self-managing engineers (10%). These are the engineers who either work in conditions of unusual autonomy (successful freelancers, company owners, or employees of open-allocation companies) or who exert unusual effort to control their careers, education and progress. This cluster has a higher mean competence and higher variance than the others. I’ve assigned it a mean of 1.5 and a standard deviation of 0.35. Consequently, almost all of the engineers who get above 2.0 are in this cluster, and this is far from surprising: 2.0-level (multiplier) work is very rare, and impossible to get under typical management.

If we mix these three distributions together, we get the following profile for the software engineering world:

Skill Percentile
1.0     38.37
1.2     65.38
1.4     85.24
1.6     94.47
1.8     97.87
2.0     99.23
2.2     99.78
2.4     99.95
2.6     99.99
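
For the curious, that table is just the CDF of a three-component Gaussian mixture, and it can be reproduced (to within small rounding discrepancies) in a few lines of Python, assuming scipy is available; the helper name is mine:

    from scipy.stats import norm

    # (weight, mean, standard deviation) for the three clusters estimated above
    CLUSTERS = [(0.75, 1.10, 0.25),   # A: managed full-time
                (0.15, 0.80, 0.25),   # B: novices and part-timers
                (0.10, 1.50, 0.35)]   # C: self-managing

    def percentile(skill):
        # Mixture CDF: the share of engineers at or below a given skill level.
        return 100 * sum(w * norm.cdf(skill, mu, sd) for (w, mu, sd) in CLUSTERS)

    for skill in (1.0, 1.2, 1.4, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6):
        print("{0:.1f}    {1:.2f}".format(skill, percentile(skill)))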

How accurate is this distribution? Looking at it, I think it probably underestimates the percentage of engineers in the tail. I’m around 1.7-1.8 and I don’t think of myself as a 97th-percentile programmer (probably 94-95th). It also says that only 0.77 percent of engineers are 2.0 or higher (I’d estimate it at 1 to 2 percent). I would be inclined to give Cluster C an exponential tail on the right, rather than the quadratic-exponential decay of a Gaussian, but for the purpose of this analysis, I think the Gaussian model is good enough.

Question 3: How much value does a software engineer produce? Marginal contribution isn’t a good measure of “business value”, because it varies too widely with specifics (such as company size), but I wouldn’t say that employment market value is a good estimate either, because there’s a surplus of good people who (a) don’t know what they’re actually worth, and (b) are therefore willing to work for low compensation, especially because a steady salary removes variance from compensation. Companies know that they make more (a lot more) on the most talented engineers than on average ones– if they have high-level work for them. The better engineer might be worth ten times as much as the average one, but only cost twice as much. So I think of it in terms of abstract business value: given appropriate work, how much expected value (ignoring variance) can that person deliver?

This quantity I’ve assumed to be exponential with regard to engineer skill, as described above: an increment of 1.0 is a 10-fold increase in abstract business value (ABV). Is this the right multiplier? It indicates that an increment of 0.3 is a doubling of ABV, or that a 0.1-increment is a 26 percent increase in ABV. In my experience, this is about right. For some skill-intensive projects, such as technology-intensive startups, the gradient might be steeper (a 20-fold difference between 2.0 and 1.0, rather than 10) but I am ignoring that, for simplicity’s sake.

The ABV of a 1.0 software engineer I’ve estimated at $125,000 per year in a typical business environment, meaning that it would be break-even (considering payroll costs, office space, and communication overhead) to hire one at a salary around $75,000. A 95th-percentile engineer (1.63) produces an ABV of $533,000 and a 99th-percentile engineer (1.96) produces an ABV of $1.14 million. These numbers are not unreasonable at all. (In fact, if they’re wrong, I’d bet on them being low.)
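As a sanity check, the ABV model is one line of code; the $125,000 base and the 10x-per-point slope are the assumptions just stated, and the function name is mine:

    def abv(skill, base=125000.0):
        # Abstract business value per year: 10x for each full point of skill above 1.0.
        return base * 10 ** (skill - 1.0)

    print("${0:,.0f}".format(abv(1.63)))  # ~$533,000 per year (95th percentile)
    print("${0:,.0f}".format(abv(1.96)))  # ~$1.14 million per year (99th percentile)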

This doesn’t, I should say, mean that a 99th-percentile engineer will reliably produce $1.14 million per year, every year. She has to be assigned appropriate, at-level work. Additionally, that’s an expected return, and the variance is quite high at the upper end: she might produce $4 million in one year under one set of conditions, and zero under another. Since I’m interested in the macroeconomic effect of increasing engineer autonomy, I can ignore variance and focus on mean expectancy alone. This sort of variance matters to the individual (it’s better to have a good year than a bad one) and to small companies, but the noise cancels itself out on the macroeconomic scale.

Putting it all together: with these estimates of the distribution of engineer competence, and the ABV estimates above, it’s possible to compute an expected value for a randomly-chosen engineer in each cluster:

Cluster A: $185,469 per year.

Cluster B: $89,645 per year.

Cluster C: $546,879 per year.

All software engineers: $207,236 per year.
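
These per-cluster figures follow from the lognormal mean: if skill is normally distributed with mean mu and standard deviation sd, then E[ABV] = base * exp((mu - 1) * ln 10 + (sd * ln 10)^2 / 2). A sketch of the computation, which lands within a few percent of the figures above (the small discrepancies don’t affect the argument):

    import math

    LN10 = math.log(10)

    def expected_abv(mu, sd, base=125000.0):
        # E[base * 10**(X - 1)] for X ~ Normal(mu, sd): a lognormal mean.
        return base * math.exp((mu - 1.0) * LN10 + (sd * LN10) ** 2 / 2.0)

    total = 0.0
    for name, weight, mu, sd in [("A", 0.75, 1.10, 0.25),
                                 ("B", 0.15, 0.80, 0.25),
                                 ("C", 0.10, 1.50, 0.35)]:
        ev = expected_abv(mu, sd)
        total += weight * ev
        print("Cluster {0}: ${1:,.0f} per year".format(name, ev))
    print("All software engineers: ${0:,.0f} per year".format(total))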

Software engineers can evolve in all sorts of ways, and improved education or more longevity might change the distributions of the clusters themselves. That I’m not going to model, because there are so many possibilities. Nor am I going to speculate on macroeconomic business changes that would change the ABV figures. I’m going to focus on only one aspect of economic change, although there are others with additional positive effects, and that change is the evolution of an engineer from the micromanaged world of Cluster A to the autonomous, highly-productive one of Cluster C. (Upward movement within clusters is also relevant, and that strengthens my case, but I’m excluding it for now.) I’m going to assume that it takes 5 years of education and training for that evolution to occur, and assume an engineer’s career (with some underestimation here, justified by time-value of money) lasts 20 years, meaning there are 15 years to reap the benefits in full, plus two years to account for partial improvement during that five-year evolution.

What this means is that each engineer who evolves in this way generates $361,410 per year in additional value for 17 years, or $6.14 million per engineer. That is the objective benefit that accrues to society, in engineer skill growth alone, when a software engineer moves out of a typical subordinate context and into one like Valve’s open allocation regime. Generating $1 trillion in this way requires liberating 163,000 engineers, or a mere 2.2% of the total pool. That holds even if (a) the number of software engineers remains the same (instead of increasing due to improvements to the industry) and (b) other forms of technological growth that would increase the ABV of a good engineer stop, although it’s extremely unlikely that they will. Also, there are the ripple effects (in terms of advancing the state of the art in software engineering) of a world with more autonomous and, thus, more skilled engineers. All of these are substantial, and they improve my case even further, but I’m setting them aside in the interest of establishing a lower bound for value creation. What I can say, with ironclad confidence, is that the movement of 163,000 engineers into an open-allocation regime will, by improving their skills and, over the years, their output, generate $1 trillion at minimum. That it might produce $5 or $20 trillion (or much more, in terms of long-term effects on economic growth) in eventual value, through multiplicative “ripple” effects, is more speculative. My job here was just to get to $1 trillion.
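Spelled out, the bottom-line arithmetic is simple; every input is an estimate from above:

    gain_per_year = 546879 - 185469   # Cluster C mean minus Cluster A mean, per year
    years = 15 + 2                    # 15 full years, plus 2 for the transition period
    per_engineer = gain_per_year * years        # ~$6.14 million per liberated engineer
    needed = 1e12 / per_engineer
    print("{0:,.0f} engineers".format(needed))  # ~163,000: about 2.2% of the 7.5M pool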

In simpler terms, the technological economy into which our society needs to graduate is one that requires software (not to mention mathematical and scientific) talent at a level that would currently be considered elite, and the only conditions that allow such levels of talent to exist are those of extremely high autonomy, as observed at Valve and pre-apocalyptic Google. Letting programmers direct their own work, and therefore take responsibility for their own skill growth is, multiplied over the vast number of people in the world who have to work with software, easily a trillion-dollar problem. It’s worth addressing now.

What Programmers Want

Most people who have been assigned the unfortunate task of managing programmers have no idea how to motivate them. They believe that the small perks (such as foosball tables) and bonuses that work in more relaxed settings will compensate for more severe hindrances like distracting work environments, low autonomy, poor tools, unreasonable deadlines, and pointless projects. They’re wrong. However, this is one of the most important things to get right, for two reasons. The first is that programmer output is multiplicative of a number of factors– fit with tools and project, skill and experience, talent, group cohesion, and motivation. Each of these can have a major effect (plus or minus a factor of 2 at least) on impact, and engineer motivation is one that a manager can actually influence. The second is that measuring individual performance among software engineers is very hard– I would say almost impossible and, in practical terms, economically infeasible. Why do I call it infeasible rather than merely difficult? Because the only people who can reliably measure individual performance in software are so good that it’s almost never worth their time to have them do it. If the best engineers have time to spend with their juniors, it’s more worthwhile to have them mentor the others than measure them– measurement is a task they’d resent, and their interests align with the employees rather than with a company trying to grade them.

Seasoned programmers can tell very quickly which ones are smart, capable, and skilled– the all-day, technical interviews characteristic of the most selective companies achieve that– but individual performance on-site is almost impossible to assess. Software is too complex for management to reliably separate bad environmental factors and projects from bad employees, much less willful underperformance from no-fault lack-of-fit. So measurement-based HR policies add noise and piss people off but achieve very little, because the measurements on which they rely are impossible to make with any accuracy. This means that the only effective strategy is to motivate engineers, because attempting to measure and control performance after-the-fact won’t work.

Traditional, intimidation-based management backfires in technology. To see why, consider the difference between 19th-century commodity labor and 21st-century technological work. For commodity labor, there’s a level of output one can expect from a good-faith, hard-working employee of average capability: the standard. Index that to the number 100. There are some who might run at 150-200, but often they are cutting corners or working in unsafe ways, so the best overall performers might produce 125. (If the standard were so low that people could risklessly achieve 150, the company would raise the standard.) The slackers will go all the way down to zero if they can get away with it. In this world, one slacker cancels out four good employees, and intimidation-based management– which annoys the high performers and reduces their effectiveness, but brings the slackers in line, having a performance-middling effect across the board– can often work. Intimidation can pay off, because more is gained by bringing the slackers up to mediocrity than is lost by irritating the high performers. Technology is different. The best software engineers are not at 125 or even 200, but at 500, 1000, and in some cases, 5000+. Also, their response to a negative environment isn’t mere performance middling. They leave. Engineers don’t object to the firing of genuine problem employees (we end up having to clean up their messes) but typical HR junk science (stack ranking, enforced firing percentages, transfer blocks against no-fault lack-of-fit employees) disgusts us. It’s mean-spirited and it’s not how we like to do things. Intimidation doesn’t work, because we’ll quit. Intrinsic motivation is the only option.

Bonuses rarely motivate engineers either, because the bonuses given to entice engineers to put up with undesirable circumstances are often, quite frankly, two or three orders of magnitude too low. We value interesting work more than a few thousand dollars, and there are economic reasons for doing so. First, we understand that bad projects entail a wide variety of risks. Even when our work isn’t intellectually stimulating, it’s still often difficult, and unrewarding but difficult work can lead to burnout. Undesirable projects often carry a 20%-per-year burnout rate, counting firings, health-based attrition, project failure leading to loss of status, and just plain losing all motivation to continue. A $5,000 bonus doesn’t come close to compensating for a 20% chance of losing one’s job in a year. Additionally, there are the career-related issues associated with taking low-quality work. Engineers who don’t keep current lose ground, and this becomes even more of a problem with age. Software engineers are acutely aware of the need to establish themselves as demonstrably excellent before the age of 40, at which point mediocre engineers (and a great engineer becomes mediocre after too much mediocre experience) start to see prospects fade.

The truth is that typical HR mechanisms don’t work at all in motivating software engineers. Small bonuses won’t convince them to work differently, and firing middling performers (as opposed to the few who are actively toxic) to instill fear will drive out the best, who will flee the cultural fallout of the firings. There is no way around it: the only thing that will bring peak performance out of programmers is to actually make them happy to go to work. So what do software engineers need?

The approach I’m going to take is based on timeframes. Consider, as an aside, people’s needs for rest and time off. People need breaks at work– say, 10 minutes every two hours. They also need 2 to 4 hours of leisure time each day. They need 2 to 3 days per week off entirely. They need (but, sadly, don’t often get) 4 to 6 weeks of vacation per year. And ideally, they’d have sabbaticals– a year off every 7 or so to focus on something different from the assigned work. There’s a fractal, self-similar nature to people’s need for rest and refreshment, and these needs for breaks tap into Maslovian needs: biological ones for the short-timeframe breaks, and higher, holistic needs pertaining to the longer timeframes. I’m going to assert that something similar exists with regard to motivation, and examine six timeframes: minutes, hours, days, weeks, months, and years.

1. O(minutes): Flow

This may be the most important. Flow is a state of consciousness characterized by intense focus on a challenging problem. It’s a large part of what makes, for example, games enjoyable. It impels us toward productive activities, such as writing, cooking, exercise, and programming. It’s something we need for deep-seated psychological reasons, and when people don’t get it, they tend to become bored, bitter, and neurotic. For a word of warning, while flow can be productive, it can also be destructive if directed toward the wrong purposes. Gambling and video game addictions are probably reinforced, in part, by the anxiolytic properties of the flow state. In general, however, flow is a good thing, and the only thing that keeps people able to stand the office existence is the ability to get into flow and work.

Programming is all about flow. That’s a major part of why we do it. Software engineers get their best work done in a state of flow, but unfortunately, flow isn’t common for most. I would say that the median software engineer spends 15 to 120 minutes per week in a state of flow, and some never get into it. The rest of the time is lost to meetings, interruptions, breakages caused by inappropriate tools or bad legacy code, and context switches. Even short interruptions can easily cause a 30-minute loss. I’ve seen managers harass engineers with frequent status pings (2 to 4 per day) resulting in a collapse of productivity (which leads to more aggressive management, creating a death spiral). The typical office environment, in truth, is quite hostile to flow.

To achieve flow, engineers have to trust their environment. They have to believe that (barring an issue of very high priority and emergency) they won’t be interrupted by managers or co-workers. They need to have faith that their tools won’t break and that their environment hasn’t changed in a catastrophic way due to a mistake or an ill-advised change by programmers responsible for another component. If they’re “on alert”, they won’t get into flow and won’t be very productive. Since most office cultures grew up in the early 20th century when the going philosophy was that workers had to be intimidated or they would slack off, the result is not much flow and low productivity.

What tasks encourage or break flow is a complex question. Debugging can break flow, or it can be flowful. I enjoy it (and can maintain flow) when the debugging is teaching me something new about a system I care about (especially if it’s my own). It’s rare, though, that an engineer can achieve flow while maintaining badly written code, which is a major reason why engineers tend to prefer new development over maintenance. Trying to understand bad software (and most in-house corporate software is terrible) creates a lot of “pinging” for the unfortunate person who has to absorb several disparate contexts in order to make sense of what’s going on. Reading good code is like reading a well-written academic paper: an opportunity to see how a problem was solved, with some obvious effort put into the presentation and aesthetics of the solution. It’s actually quite enjoyable. On the other hand, reading bad code is like reading 100 paragraphs, all clipped from different sources. There’s no coherency or aesthetics, and the flow-inducing “click” (or “aha” experience) when a person makes a connection between two concepts almost never occurs. The problem with reading code is that, although good code is educational, very little is learned from reading bad code aside from the parochial idiosyncrasies of a badly-designed system, and there’s a hell of a lot of bad code out there.

Perhaps surprisingly, whether a programmer can achieve “flow”, which will influence her minute-by-minute happiness, has almost nothing to do with the macroscopic interestingness of the project or company’s mission. Programmers, left alone, can achieve flow and be happy writing the sorts of enterprise business apps that they’re “supposed” to hate. And if their environment is so broken that flow is impossible, the most interesting, challenging, or “sexy” work won’t change that. Once, I saw someone leave an otherwise ideal machine learning quant job because of “boredom”, and I’m pretty sure his boredom had nothing to do with the quality of the work (which was enviable) but with the extremely noisy environment of a trading desk.

This also explains why “snobby” elite programmers tend to hate IDEs, the Windows operating system, and anything that forces them to use the mouse when key-combinations would suffice. Using the mouse and fiddling with windows can break flow. Keyboarding doesn’t. Of course, there are times when the mouse and GUI are superior. Web surfing is one example, and writing blog posts (WordPress instead of emacs) is another. Programming, on the other hand, is done using the keyboard, not drag-and-drop menus. The latter are a major distraction.

2. O(hours): Feedback

Flow is the essence here, but what keeps the “flow” going? The environmental needs are discussed above, but some sorts of work are more conducive to flow than others. People need a quick feedback cycle. One of the reasons that “data science” and machine learning projects are so highly desired is that there’s a lot of feedback: it’s objective, and it comes on a daily basis. You can run your algorithms against real data and watch your search strategies unfold in front of you while your residual sum of squares (error) decreases. Compare this to enterprise projects, which are developed in one world over months (with no real-world feedback) and released in another.

Feedback needs to be objective or positive in order to keep people enthusiastic about their work. Positive feedback is always pleasant, so long as it’s meaningful. Objective, negative feedback can be useful as well. For example, debugging can be fun, because it points out one’s own mistakes and enables a person to improve the program. The same holds for problems that turn out to be more difficult than originally expected: it’s painful, but something is learned. What never works well is subjective, negative feedback (such as bad performance reviews, or aggressive micromanagement that shows a lack of trust). That pisses people off.

I think it should go without saying that this style of “feedback” can’t be explicitly provided on an hourly basis, because it’s unreasonable to expect managers to do that much work (or an employee to put up with such a high frequency of managerial interruption). So the feedback has to come organically from the work itself, which means there need to be genuine challenges and achievements involved. Most of this feedback is “passive”, by which I mean there is nothing the company or manager does to inject the feedback into the process. The engineer’s experience completing the work provides the feedback itself.

One source of frustration and negative feedback that I consider subjective (and therefore place in that “negative, subjective feedback that pisses people off” category) is the jarring experience of working with badly-designed software. Good software is easy to use and makes the user feel more intelligent. Bad software is hard to use, often impossible to read at the source level, and makes the user or reader feel absolutely stupid. When you have this experience, it’s hard to tell if you are rejecting the ugly code (because it is distasteful) or if it is rejecting you (because you’re not smart enough to understand it). Well, I would say that it doesn’t matter. If the code “rejects” competent developers, it’s shitty code. Fix it.

The “feedback rate” is at the heart of many language debates. High-productivity languages like Python, Scala, and Clojure allow programmers to implement significant functionality in mere hours. On my best projects, I’ve written 500 lines of good code in a day (by the corporate standard, that’s about two months of an engineer’s time). That provides a lot of feedback very quickly and establishes a virtuous cycle: feedback leads to engagement, which leads to flow, which leads to productivity, which leads to more feedback. With lower-level languages like C and Java– which are sometimes the absolute right tools for one’s problem, especially when tight control of performance is needed– macroscopic progress is usually a lot slower. This isn’t an issue if the performance metric the programmer cares about lives at a lower level (e.g. speed of execution, limited memory use) and the tools available to her give good indications of her success. Then there is enough feedback. There’s nothing innate that makes Clojure more “flow-ful” than C; it’s just more rewarding to use Clojure if one is focused on macroscopic development, while C is more rewarding (in fact, probably the only appropriate language) when one is focused on a certain class of performance concerns that require attention to low-level details. The problem is that when people use inappropriate tools (e.g. C++ for complex, but not latency-sensitive, web applications) they can’t get useful, timely feedback about the performance of their solutions.

Feedback is at the heart of the “gameification” obsession that has grown up of late, but in my opinion, it should be absolutely unnecessary. “Gameification” feels, to me, like an after-the-fact patch, if not an apology, when fundamental changes are necessary. The problem, in the workplace, is that these “game” mechanisms often evolve into high-stakes performance measurements. Then there is too much anxiety for the “gameified” workplace to be fun.

In Java culture, the feedback issue is a severe problem, because development is often slow and the tools and culture tend to sterilize the development process by eliminating that “cosmic horror” (which elite programmers prefer) known as the command line. While IDEs do a great job of reducing flow-breakage that occurs for those unfortunate enough to be maintaining others’ code, they also create a world in which the engineers are alienated from computation and problem-solving. They don’t compile, build, or run code; they tweak pieces of giant systems that run far away in production and are supported by whoever drew the short straw and became “the 3:00 am guy”.

IDEs have some major benefits but some severe drawbacks. They’re good to the extent that they allow people to read code without breaking flow; they’re bad to the extent that they tend to require use patterns that break flow. The best solution, in my opinion, to the IDE problem is to have a read-only IDE served on the web. Engineers write code using a real editor, work at the command line so they are actually using a computer instead of an app, and do almost all of their work in a keyboard-driven environment. However, when they need to navigate others’ code in volume, the surfing (and possibly debugging) capabilities offered by IDEs should be available to them.

3. O(days): Progress

Flow and feedback are nice, but in the long term, programmers need to feel like they’re accomplishing something, or they’ll get bored. The feedback should show continual improvement and mastery. The day scale is the level at which programmers want to see genuine improvements. The same task shouldn’t be repeated more than a couple times: if it’s dull, automate it away. If the work environment is so constrained and slow that a programmer can’t log, on average, one meaningful accomplishment (feature added, bug fixed) per day, something is seriously wrong.  (Of course, most corporate programmers would be thrilled to fix one bug per week.)

The day-by-day level and the need for a sense of progress is where managers and engineers start to understand each other. They both want to see progress on a daily basis. So there’s a meeting point there. Unfortunately, managers have a tendency to pursue this in a counterproductive way, often inadvertently creating a Heisenberg problem (observation corrupts the observed) in their insistence on visibility into progress. I think that the increasing prevalence of Jira, for example, is dangerous, because increasing managerial oversight at a fine-grained level creates anxiety and makes flow impossible. I also think that most “agile” practices do more harm than good, and that much of the “scrum” movement is flat-out stupid. I don’t think it’s good for managers to expect detailed progress reports (a daily standup focused on blockers is probably ok) on a daily basis– that’s too much overhead and flow breakage– but this is the cycle at which engineers tend to audit themselves, and they won’t be happy in an environment where they end the day not feeling that they worked a day.

4. O(weeks): Support

Progress is good, but as programmers, we tend toward a trait that the rest of the world sees only in the moody and blue: “depressive realism”. It’s as strong a tendency in the mentally healthy among us as in the larger-than-baseline percentage of us with mental health issues; for us, it’s not depressive. Managers are told every day how awesome they are by their subordinates, regardless of the fact that more than half of the managers in the world are useless. We, on the other hand, have subordinates (computers) that frequently tell us that we fucked up by giving them nonsensical instructions. “Fix this shit because I can’t compile it.” We tend to have an uncanny (by business standards) sense of our own limitations. We also know (on the topic of progress) that we’ll have good days and bad days. We’ll have weeks where we don’t accomplish anything measurable because (a) we were “blocked”, needing someone else to complete work before we could continue, (b) we had to deal with horrendous legacy code or maintenance work– massive productivity destroyers– or (c) the problem we’re trying to solve is extremely hard, or even impossible, but it took us a long time to fairly evaluate the problem and reach that conclusion.

Programmers want an environment that removes work-stopping issues, or “blockers”, and that gives them the benefit of the doubt. Engineers want the counterproductive among them to be mentored or terminated– the really bad ones just have to be let go– but they won’t show any loyalty to a manager if they perceive that he’d give them grief over a slow month. This is why so-called “performance improvement plans” (PIPs)– a bad idea in any case– are disastrous failures with engineers. Even the language is offensive, because it suggests with certainty that an observed productivity problem (and most corporate engineers have productivity problems because most corporate software environments are utterly broken and hostile to productivity) is a performance problem, and not something else. An engineer will not give one mote of loyalty to a manager that doesn’t give her the benefit of the doubt.

I choose “weeks” as the timeframe order of magnitude for this need because that’s the approximate frequency with which an engineer can be expected to encounter blockers, and removal of these is one thing that engineers often need from their managers: resolution of work-stopping issues that may require additional resources or (in rare cases) managerial intervention. However, that frequency can vary dramatically.

5. O(months): Career Development

This is one that gets a bit sensitive. It becomes crucial on the order of months, which is much sooner than most employers would like to see their subordinates insisting on career advancement. But as programmers, we know we’re worth an order of magnitude more than we’re paid, and we expect to be compensated by our employers investing in our long-term career interests. This is probably the most important of the six items listed here.

Programmers face a job market that’s unusually meritocratic when changing jobs. Within companies, the promotion process is just as political and bizarre as it is for any other profession, but when looking for a new job, programmers are evaluated not on their past job titles and corporate associations, but on what they actually know. This is quite a good thing overall, because it means we can get promotions and raises (often having to change corporate allegiance in order to do so, but that’s a minor cost) just by learning things, but it also makes for an environment that doesn’t allow for intellectual stagnation. Yet most of the work that software engineers have to do is not very educational and, if done for too long, that sort of work leads in the wrong direction.

When programmers say about their jobs, “I’m not learning”, what they often mean is, “The work I am getting hurts my career.” Most employees in most jobs are trained to start asking for career advancement at 18 months, and to speak softly over the first 36. Most people can afford one to three years of dues paying. Programmers can’t. Programmers, if they see a project that can help their career and that is useful to the firm, expect the right to work on it right away. That rubs a lot of managers the wrong way, but it shouldn’t, because it’s a natural reaction to a career environment that requires actual skill and knowledge. In most companies, there really isn’t a required competence for leadership positions, so seniority is the deciding factor. Engineering couldn’t be more different, and the lifetime cost of two years’ dues-paying can be several hundred thousand dollars.

In software, good projects tend to beget good projects, and bad projects beget more crap work. People are quickly typecast to a level of competence based on what they’ve done, and they have a hard time upgrading, even if their level of ability is above what they’ve been assigned. People who do well on grunt work get more of it, people who do poorly get flushed out, and those who manage their performance precisely to the median can get ahead, but only if managers don’t figure out what they’re doing. As engineers, we understand the career dynamic very well, and quickly become resentful of management that isn’t taking this to heart. We’ll do an unpleasant project now and then– we understand that grungy jobs need to be done sometimes– but we expect to be compensated (promotion, visible recognition, better projects) for doing it. Most managers think they can get an undesirable project done just by threatening to fire someone if the work isn’t done, and that results in adverse selection. Good engineers leave, while bad engineers stay, suffer, and do it– but poorly.

Career-wise, the audit frequency for the best engineers is about 2 months. In most careers, people progress by putting in time, being seen, and gradually winning others’ trust, and actual skill growth is tertiary. That’s not true for us, or at least, not in the same way. We can’t afford to spend years paying dues while not learning anything. That will put us one or two full technology stacks behind the curve with respect to the outside world.

There’s a tension employees face between internal (within a company) and external (within their industry) career optimization. Paying dues is an internal optimization– it makes the people near you like you more, and therefore more likely to offer favors in the future– but confers almost no external benefit. It was worthwhile in the era of the paternalistic corporation, lifelong employment, and a huge stigma attached to changing jobs (much less getting fired) more than two or three times in one career. It makes much less sense now, so most people focus on the external game. Engineers who focus on the external objective are said to be “optimizing for learning” (or, sometimes, “stealing an education” from the boss). There are several advantages to focusing on the external career game. First, external career advancement is not zero-sum– while jockeying internally for scarce leadership positions is– and what we do is innately cooperative; it works better with the type of people we are. Second, our average job tenure is about 2 to 3 years. Third, people who suffer and pay dues are usually passed over anyway in favor of more skilled candidates from outside. Our industry has figured out that it needs skilled people more than it needs reliable dues-payers (and it’s probably right). This explains, in my view, why software engineers are so aggressive and insistent in their tendency to optimize for learning.

There is a solution for this, and although it seems radical, I’m convinced that it’s the only thing that actually works: open allocation. If programmers are allowed to choose the projects best suited to their skills and aspirations, the deep conflict of interest that otherwise exists among their work, their careers, and their education disappears.

6. O(years): Macroscopic goals. On the timescale of years, macroscopic goals become important. Money and networking opportunities are major concerns here. So are artistic and business visions. Some engineers want to create the world’s best video game, solve hard mathematical problems, or improve the technological ecosystem. Others want to retire early or build a network that will set them up for life.

Many startups focus on “change the world” macroscopic pitches about how their product will connect people, “disrupt” a hated industry, democratize a utility, or achieve some other world-changing ambition. This makes great marketing copy for recruiters, but it doesn’t motivate people on a day-to-day basis. On a year-by-year basis, none of that marketing matters either, because people will actually know the character of the organization after that much time. That said, the actual macroscopic character of a business, and the meaning of its work, matter a great deal. Over years and decades, they determine whether people will stick around once they develop the credibility, connections, and resources that would give them the ability to move on to something more lucrative, more interesting, or of higher status.

How to win

It’s conventional wisdom in software that hiring the best engineers is an arbitrage, because they’re 10 times as effective but only twice as costly. This is only true if they’re motivated, and only if they’re put to work that unlocks their talent. If you assign a great engineer to mediocre work, you’re going to lose money. Software companies put an enormous amount of effort into “collecting” talent, but do a shoddy job of using and keeping it. Often, this is justified by appeal to a “tough culture”: turnover is taken to reflect failing employees, not a bad culture. In the long term, this is ruinous. The payoff from retaining top talent is often exponential in the time and effort put into attracting, retaining, and improving it.
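The arbitrage arithmetic is worth spelling out, because it cuts both ways. Here’s a minimal sketch using the multipliers above (10x effectiveness, 2x cost) plus one hypothetical assumption of mine: a great engineer on mediocre work produces roughly average output.

```python
# The "10x for 2x" arbitrage, and how misallocation destroys it.
# Dollar figures are hypothetical; the multipliers come from the text.

avg_cost, avg_output = 100_000, 100_000   # assumed average engineer

top_cost = 2 * avg_cost                    # "twice as costly"
top_output_motivated = 10 * avg_output     # "10 times as effective"
top_output_mismatched = avg_output         # assumed: great engineer, mediocre work

print(top_output_motivated - top_cost)     # 800000: the arbitrage
print(top_output_mismatched - top_cost)    # -100000: you're losing money
```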

Now that I’ve discussed what engineers need from their work environments in order to remain motivated, the next question is what a company should do. There isn’t a one-size-fits-all managerial solution, but the best general move is to reduce managerial control and to empower engineers: to set up an open-allocation work environment in which technical choices and project direction are set by engineers, and to direct through leadership rather than mere authority. This may seem “radical” in contrast to the typical corporate environment, but it’s the only thing that works.

Tech companies: open allocation is your only real option.

I wrote, about a month ago, about Valve’s policy of allowing employees to transfer freely within the company, symbolized by placing wheels under the desks (a physical marker of a superior corporate culture that makes traditional tech perks look like toys) and expecting employees to self-organize. I’ve taken to calling this seemingly radical notion open allocation: employees have free rein to work on projects as they choose, without asking for permission or formal allocation. I’m convinced that, despite seeming radical, open allocation is the only thing that actually works in software. There’s one exception. Some degree of closed allocation is probably necessary in the financial industry because of information barriers (mandated by regulators), and this might be why getting the best people to stay in finance is so expensive. It costs that much to keep good people in a company where open allocation isn’t the norm, and where the workflow is so explicitly directed and constrained by the “P&L” and by justifiable risk aversion. If you can afford to give engineers 20 to 40 percent raises every year and thereby compete with high-frequency-trading (HFT) hedge funds, you might be able to retain talent under closed allocation. If not, read on.

Closed allocation doesn’t work. What do I mean by “doesn’t work”? I mean that, as things currently go in the software industry, most projects fail. Either they don’t deliver any business value, or they deliver too little, or they deliver some value but exert long-term costs as legacy vampires. Most people also dislike their assigned projects and put minimal or even negative productivity into them. Good software is exceedingly rare, and not because software engineers are incompetent, but because when they’re micromanaged, they stop caring. Closed allocation and micromanagement provide an excuse for failure: I was on a shitty project with no upside. I was set up to fail. Open allocation blows that away: a person who has a low impact because he works on bad projects is making bad choices and has only himself to blame.

Closed allocation is the norm in software. It doesn’t necessarily entail micromanagement, but it creates the possibility for it, because of the extreme advantage it gives managers over engineers. An engineer’s power under closed allocation is minimal: his one bit of leverage is to change jobs, and that almost always entails changing companies. In a closed-allocation shop, project importance is determined prima facie by executives long before the first line of code is written, and formalized in magic numbers called “headcount” (even the word is medieval, so I wonder if people piss at the table, at these meetings, in order to show rank) that represent the hiring authority (read: political strength) of various internal factions. Headcount numbers are supposed to prevent reckless hiring by the company as a whole, and that’s an important purpose, but their actual effect is to make internal mobility difficult, because most teams would rather save their headcount for possible “dream hires” who might apply from outside than risk a spot on an engineer with an average performance-review history (which is what most engineers will have). Headcount bullshit makes it nearly impossible to transfer unless (a) someone likes you on a personal basis, or (b) you have a 90th-percentile performance-review history (in which case you don’t need a transfer). Macroscopic hiring policies (limits, and sometimes freezes) are necessary to prevent the company from over-hiring, but internal headcount limits are one of the worst ideas ever. If people want to move, and the leads of those projects deem them qualified, there’s no reason not to allow it. It’s good for the engineers and for the projects, which get more motivated people working on them.

When open allocation is in play, projects compete for engineers, and the result is better projects. When closed allocation is in force, engineers compete for projects, and the result is worse engineers. 

When you manage people like children, that’s what they become. Traditional, 20th-century management (so-called “Theory X”) is based on the principle that people are lazy and need to be intimidated into working hard, and that they’re unethical and need to be terrified of the consequences of stealing from the company, with a definition of “stealing” that includes “poaching” clients and talent, education on company time, and putting their career goals over the company’s objectives. In this mentality, the only way to get something decent out of a worker is to scare him by threatening to turn off his income– suddenly and without appeal. Micromanagement and Theory X are what I call the Aztec Syndrome: the belief in many companies that if there isn’t a continual indulgence in sacrifice and suffering, the sun will stop rising.

Psychologists have spent decades trying to answer the question, “Why does work suck?” The answer might be surprising. People aren’t lazy, and they like to work. Most people do not dislike the activity of working; they dislike the subordinate context (and closed allocation is all about subordination). For example, people’s minute-by-minute self-reported happiness tends to drop precipitously when they arrive at the office and rise when they leave it, but it improves once they start actually working. They’re happier not to be at an office at all, but if they’re in one, they’re much happier working than idle. (That’s why workplace “goofing off” is such a terrible idea; it does nothing for office stress and it lengthens the day.) People like work. It’s part of who we are. What they don’t like, and what enervates them, is the subordinate context and the culturally ingrained intimidation. This suggests the so-called “Theory Y” school of management: people are intrinsically motivated to work hard and do good things, and management’s role is to remove obstacles.

Closed allocation is all about intimidation: if you don’t have this project, you don’t have a job. Tight headcount policies and lockout periods make internal mobility extraordinarily difficult, much harder than getting hired at another company. The problem is that intimidation doesn’t produce creativity, and it erodes people’s sense of ethics (when people are under duress, they feel less responsible for what they are doing). It also provides the wrong motivation: the goal becomes to avoid getting fired, rather than to produce excellent work.

Also, if the only way a company can motivate people to do a project is to threaten to turn off a person’s income, that company should really question whether that project’s worth doing at all.

Open allocation is not the same thing as “20% time”, and it isn’t a “free-for-all”. Open allocation does not mean “everyone gets to do what they want”. A better way to represent it is: “Lead, follow, or get out of the way” (and “get out of the way” means “leave the company”). To lead, you have to demonstrate that your product is of value to the business, and convince enough of your colleagues to join your project that it has enough effort behind it to succeed. If your project isn’t interesting and doesn’t have business value, you won’t be able to convince colleagues to bet their careers on it and the project won’t happen. This requires strong interpersonal skills and creativity. Your colleagues decide, voting with their feet, if you’re a leader, not “management”. If you aren’t able to lead, then you follow, until you have the skill and credibility to lead your own project. There should be no shame in following; that’s what most people will have to do, especially when starting out.

“20% time” (or hack days) should exist as well, but that’s not what I’m talking about. Under open allocation, people are still expected to show that they’ve served the needs of the business during their “80% time”. Productivity standards are still set by the projects, but employees choose which projects (and sets of standards) they want to pursue. Employees unable to meet the standards of one project must find another one. 20% time is more open, because it entails permission to fail. If you want to do a small project with potentially high impact, prove that you have the ability to lead by starting a skunk-works project, or volunteer, take courses, or attend conferences on company time, that’s what it’s for. During their “80% time”, people are still expected to lead or follow on a project with some degree of official sanction. They can’t just “do whatever they want”.

Four types of projects. The obvious question that open allocation raises is, “Who does the scut work?” The answer is simple: people do it if they will get promoted, formally or informally, for doing it, or if their project directly relies on it. In other words, the important but unpleasant work gets done, by people who volunteer to do it. I want to emphasize “gets done”. Under closed allocation, a lot of the unpleasant stuff never really gets done well, especially if unsexy projects don’t lead to promotions, because people are investing most of their energy into figuring out how to get to better projects. The roaches are swept under the carpet, and people plan their blame strategies months in advance.

If we classify projects along two axes (important vs. unimportant, and interesting vs. unpleasant), we can assess what happens under open allocation, as sketched below. Important and interesting projects are never hard to staff. Unimportant but interesting projects are for 20% time; they might succeed and become important later, but they aren’t treated as critical until they’re proven to have real business value, so people are allowed to work on them but are strongly encouraged also to find and concentrate on work that’s important to the business. Important but unpleasant projects are rewarded with bonuses, promotions, and the increased credibility accorded to those who do undesirable but critical work. These bonuses should be substantial (six and occasionally even seven figures for critical legacy rescues); if the project is actually important, it’s worth it to actually pay. If it’s not, then don’t spend the money. Unimportant and unpleasant projects, under open allocation, don’t get done. That’s how it should be. This is the class of undesirable “death march” projects that closed allocation nurtures (they never go away, because to suggest they aren’t worth doing is an affront to the manager who sponsors them, and a career-ending move) but that open allocation eliminates. Under open allocation, people who transfer away from these death marches aren’t “deserters”. It’s management’s fault if, out of a whole company, no one wants to work on a project. Either the project’s not important, or they didn’t provide enough enticement.
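As a compact restatement of the quadrants, here’s a sketch. It’s just the preceding paragraph in code form, not a scheduling algorithm; the policy strings paraphrase the text.

```python
# The four project quadrants and their fates under open allocation.
# Illustrative only.

OPEN_ALLOCATION = {
    ("important",   "interesting"): "staffs itself; never hard to fill",
    ("unimportant", "interesting"): "20% time; may prove itself and graduate",
    ("important",   "unpleasant"):  "committed project: bonuses, promotions, credibility",
    ("unimportant", "unpleasant"):  "doesn't get done, and shouldn't",
}

def fate(importance: str, appeal: str) -> str:
    """Look up what happens to a project in a given quadrant."""
    return OPEN_ALLOCATION[(importance, appeal)]

print(fate("important", "unpleasant"))
```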

Closed allocation is irreducibly political. Compare two meanings of the three-word phrase, “I’m on it”. In an open-allocation shop, “I’m on it” is a promise to complete a task, or at least to try to do it. It means, “I’ve got this.” In a closed-allocation shop, “I’m on it” means “political forces outside of my control require me to work only on this project.”

People complain about the politics at their closed-allocation jobs, but they shouldn’t be surprised by it: under closed allocation, it’s inevitable that politics will eclipse the matter of actually getting work done. It happens every time, like clockwork. The metagame becomes a million times more important than actually sharpening pencils or writing code. If you have closed allocation, you’ll have a political rat’s nest. There’s no way to avoid it. In closed allocation, the stakes of project allocation are so high that people will calculate every move based on future mobility. Hence, politics. What tends to happen is that a four-class system emerges, following the four categories of work I developed above. The most established engineers, who have the autonomy and leverage to demand the best projects, end up in the “interesting and important” category. They get good projects the old-fashioned way: proving that they’re valuable to the company, then threatening to leave if they aren’t reassigned. Engineers who are looking for promotions into managerial roles tend to take on the unpleasant but important work, and attempt to coerce new and captive employees into doing the legwork. An upper-middle class of engineers can take the interesting but unimportant work, though it tends to slow their careers if they intend to stay at the same company (they learn a lot, but they don’t build internal credibility). The rest, the majority, who have no significant authority over what they work on, get a mix, but a lot of them get stuck with the uninteresting, unimportant work (and closed-allocation shops generate tons of that stuff) that exists for reasons rooted in managerial politics.

What are the problems with open allocation? The main issue is that it seems harder to manage, because it requires managers to actively motivate people to do the important but unpleasant work. In closed allocation, people are told to do work “because I said so”. Either they do it, or they quit, or they get fired. It’s binary, which seems simple. There’s no appeal process when people fail projects or projects fail people (no one ever knows which happened), extra-hierarchical collaboration is “trimmed”, and efforts can be tracked by people who think a single spreadsheet can capture everything important about what is happening in the company. Closed-allocation shops have hierarchy, clear chains of command, and single points of failure (because a person can be fired from a whole company for disagreeing with one manager) out the proverbial wazoo. They’re Soviet-style command economies that somehow ended up being implemented within supposedly “capitalist” companies, but they “appear” simple to manage, and that’s why they’re popular. The problem is that closed-allocation policies lead to enormous project failure rates, inefficient allocation of time, talent bleeds, and unnecessary terminations. In the long term, all of this unplanned and surprising garbage work makes the manager’s job harder, more complex, and worse. When assessing the problems associated with open allocation (such as increased managerial complexity), it’s important to remember that the alternative is much worse.

How do you do it? The challenging part of open allocation is enticing people to do unpleasant projects. There needs to be a reward. Make the bounty too high, and people come in with the wrong motivations (capturing the outsized reward, rather than getting a fair reward while helping the company), and the perverse incentives can even lead to “rat farming” (creating messes in the hopes of being asked to repair them at a premium). Make it too low, and no one will do it, because no one wise likes a company well enough to risk her own career on a loser project (and part of what makes a bad project bad is that, absent recognition, it’s career-negative to do undesirable work). Make the reward too monetary and it looks bad on the balance sheet, and gossip is a risk: people will talk if they find out a 27-year-old was paid $800,000 in stock options (note: there had better be vesting applied), even if it’s justified in light of the legacy dragon being slain. Make it too career-focused and you have people getting promotions they might not deserve, because doing unpleasant work doesn’t necessarily give a person technical authority in all areas. It’s hard to get the carrot right. The appeal of closed allocation is that the stick is a much simpler tool: do this shit or I’ll fire you.

The project has to be “packaged”. It can’t be all unpleasant and menial work, and it needs to be structured to involve some of the leadership and architectural tasks necessary for the person completing it to actually deserve the promised promotion. It’s not “we’ll promote you because you did something grungy” but “if you can get together a team to do this, you’ll all get big bonuses, and you’ll get a promotion for leading it.” Management also needs technical insight on hand to package well: the goal, rather than paying for grunt work as a recurring cost, is to kill it forever with automation.

An important notion in all this is that of a committed project. Effectively, this is what the executives should create when they spot a quantum of work that the business needs, but that is difficult and that the engineers don’t consider enjoyable. These shouldn’t be created lightly. Substantial cash and stock bonuses (vested over the expected duration of the project) and promotions are attached to completing these projects, and if more than 25% of the workload is committed projects, something’s being done wrong. A committed project offers high visibility (it’s damn important; we need this thing) and graduation into a leadership role. No one is “assigned” to a committed project. People “step up” and work on them because of the rewards. If you agree to work on a committed project, you’re expected to make a good-faith effort to see it through for an agreed-upon period of time (typically, a year). You do it no matter how bad it gets (unless you’re incapable), because that’s what leadership is. You should not “flake out” because you get bored. Your reputation is on the line.

Companies often delegate the important but undesirable work in an awkward way. The manager gets a certain credibility for taking on a grungy project, because he’s usually at a level where he has basic autonomy over his work and what kinds of projects he manages. If he can motivate a team to accomplish it, he gets a lot of credit for taking on the gnarly task. The workers, under closed allocation, get zilch. They were just doing their jobs. The consequence of this is that a lot of bodies end up buried by people who are showing just enough presence to remain in good standing, but putting the bulk of their effort into moving to something better. Usually, it’s new hires without leverage who get staffed on these bad projects.

I’d take a different approach to committed projects. Working on one requires (as the name implies) commitment. You shouldn’t flake out because something more attractive comes along. So only people who’ve proven themselves solid and reliable should be working on (much less leading) them. To work on one (beyond a 20%-time basis), you should have been at the company for at least a year, be senior enough for the leadership to believe that you can deliver, and be in strong standing at the company. Except for people hired into senior roles, I’d never let a junior hire take on a committed project unless it was absolutely required; there’s too much risk.

How do you fire people? When I was in school, I enjoyed designing and playing with role-playing systems. Modeling a fantasy world is a lot of fun. Once I developed an elaborate health mechanic that differentiated fatigue, injury, pain, blood loss, and “magic fatigue” (which affected magic users) and aggregated them (determining attribute reductions and eventual incapacitation) in what I considered to be a novel way. One small detail I didn’t include was death, so the first question I got was, “How do you die?” Of course, blood loss and injuries could do it. In a no-magic, medieval world, loss of the head is an incapacitating and irreversible injury, and so is exsanguination. In a high-magic world, however, “death” is reversible. Getting roasted, eaten, and digested by a dragon might be reversible. But there has to be a possibility (though it doesn’t require a dedicated game mechanic) for a character to actually die in the permanent, create-a-new-character sense of the word. Otherwise there’s no sense of risk in the game: it’s just rolling dice to see how fast you level up. My answer was to leave that decision to the GM. In horror campaigns, senseless death (and better yet, senseless insanity) is part of the environment. It’s a world in which everything is trying to kill you and random shit can end your quest. But in high-fantasy campaigns with magic and cinematic storylines, I’m averse to characters being “killed by the dice”. If the character is at the end of his story arc, or does something inane like putting his head in a dragon’s mouth because he’s level 27 and “can’t be killed”, then he dies for real. Not “0 hit points”, but the end of his earthly existence. But he shouldn’t die because the player is hapless enough to roll four 1s in a row on a d10. Shit happens.

The major problem with “rank and yank” (stack ranking with enforced culling rates), and especially with closed allocation, is that a lot of potentially great employees are killed by the dice. It becomes part of the rhythm of the company for good people to get inappropriate projects or unfair reviews, blow up mailing lists or otherwise damage morale when it pisses them off, then get fired or quit in a huff. Yawn… another one did that this week. As I alluded to in my Valve essay, this is the Welch Effect: the people who get fired under rank-and-yank policies are rarely low performers, but junior members of macroscopically underperforming teams (who rarely have anything to do with the underperformance). The only way to enforce closed allocation is to fire people who fail to conform to it, but this also means culling the unlucky, whose low impact (for which they may not be at fault) looks like malicious noncompliance.

Make no mistake: closed allocation is as much about firing people as guns are about killing people. If people aren’t getting fired, many will work on what they want to anyway (ignoring their main projects) and closed allocation has no teeth. In closed allocation shops, firings become a way for the company to clean up its messes. “We screwed this guy over by putting him on the wrong project; let’s get rid of him before he pisses all over morale.” Firings and pseudo-firings (“performance improvement plans” and transfer blocks and intentional dead-end allocations) become common enough that they’re hard to ignore. People see them, and that they sometimes happen to good people. And they scare people, especially because the default in non-financial tech companies is to fire quickly (“fail fast”) and without severance. It’s a really bad arrangement.

Do open-allocation shops have to fire people? The answer is an obvious “yes”, but it should be damn rare. The general rule of good firing is: mentor subtracters, fire dividers. Subtracters are good-faith employees who aren’t pulling their weight. They try, but they’re not focused or skilled enough to produce work that would justify keeping them on the payroll. Yet. Most employees start as subtracters, and the amount of time it takes to become an adder varies. Most companies try to set guidelines for how long an employee is allowed to take to become an adder (usually about 6 months). I’d advise against setting a firm timeframe, because what’s important is not how fast a person has learned (she might have had a rocky start) but how fast, and more importantly how well, she can learn.

Subtracters are, except in an acute cash crisis when they must be laid off for business reasons, harmless. They contribute microscopically to the burn rate, but they’re usually producing some useful work, and getting better. They’ll be adders and multipliers soon. Dividers are the people who make whole teams (or possibly the whole company) less productive. Unethical people are dividers, but so are people whose work is of such low quality that it creates messes for others, and people whose outsized egos produce conflicts. Long-term (18+ months) subtracters become “passive” dividers because of their effect on morale, and have to be fired for the same reason. Dividers smash morale, and they’re severe culture threats. No matter how rich your company is and how badly you may want not to fire people, you have to get rid of dividers if they don’t reform immediately. Dividers ratchet up their toxicity until they are capable of taking down an entire company. Firing can be difficult, because many dividers shine as individual contributors (“rock stars”) while taking away far more through their effects on morale, but there’s no other option.

My philosophy of firing is that the decision should be made rarely, swiftly, and for objective reasons, with a severance package sufficient to cover the job search (unless the person did something illegal or formally unethical) in exchange for non-disclosure, non-litigation, and non-disparagement. This isn’t about “rewarding failure”. It’s about limiting risk. When you draft “performance improvement plans” to justify termination without severance, you’re externalizing the cost to the people who have to work with a divider who’s only going to get worse post-PIP. Companies escort fired employees out of the building, which is a harsh but necessary risk-limiting measure, but it’s insane to leave a PIP’d employee with access for two months. Moreover, when you cold-fire someone, you’re inviting disparagement, gossip, and lawsuits. Just pay the guy to go away. It’s the cheapest and lowest-variance option. Three months of severance and you never see the guy again. Good. Six months and he speaks highly of you and your company: he had a rocky time, you took care of him, and he’s (probably) better off now. (If you’re tight on money, which most startups are, stay closer to the 3-month mark. You need to keep expenses low more than you need fired employees to be your evangelists. If you’re really tight, replace the severance with a “gardening leave” package that continues his pay only until he starts his next job.)

If you don’t fire dividers, you end up with something that looks a lot like closed allocation. Dividers can be managers (a manager can only be a multiplier or a divider, and in my experience, at least half are dividers) or subordinates, but dividers tend to intimidate. Subordinate passive dividers intimidate through non-compliance (they won’t get anything done), while active dividers use interpersonal aggression or sabotage to threaten or upset people (often for no personal gain). Managerial (or proto-managerial) dividers tend to threaten career adversity (including bad reviews, negative gossip, and termination) in order to force people to put the manager’s career goals above their own. They can’t motivate through leadership, so they do it using intimidation and (if available) authority, and they draw people into captivity to get the work they want done, without paying for it on a fair market (i.e. providing an incentive to do the otherwise undesirable work). At that point, what you have is a closed-allocation company. What this means is that open allocation has to be protected: you do that by firing the threats.

If I were running a company, I think I’d have a 70% first-year “firing” rate for titled managers, where “firing” means removal from management; I’d allow lateral moves into IC roles for those who wanted them. By “titled manager”, I mean someone with the authority and obligation to participate in dispute resolution, terminations and promotions, and the packaging of committed projects. Technical leadership opportunities would be available to anyone who could convince people to follow them, but to be a titled people-manager you’d have to pass a high bar. (You’d have to be as good at it as I would be, and for 70 to 80 percent of the managers I’ve observed, I’d do a better job.) This high attrition rate would be offset by a few cultural factors and benefits. First, “failing” the management course wouldn’t be stigmatized, because it would be well understood that most people either end it voluntarily or aren’t asked to continue. People would be congratulated for trying out, and they’d still be just as eligible to lead projects, if they could convince others to follow. Second, those who aspired specifically to people-management and weren’t selected would be entitled (unless fully terminated for doing something unethical or damaging) to a six-month leave period in which they’d be permitted to represent themselves as employed. That’s what B+ and A- managers would get: the right to remain as individual contributors (at the same rank and pay) and, if they didn’t want that, a severance offer along with a strong reference if they wished to pursue people management at other companies, but not at this one.

Are there benefits to closed allocation? I can answer this with strong confidence: no, not in typical technology companies. None exist. The work that people are “forced” to do is of such low quality that, on balance, I’d say it provides zero expectancy. In commodity labor, poorly motivated employees are about half as productive as average ones, and the best are about twice as productive. Intimidating the degenerate slackers into bringing themselves up to 0.5x from zero makes sense. In white-collar work, and especially in technology, those numbers seem to be closer to -5 and +20, not 0.5 and 2.
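In sketch form, treating the multipliers above as rough assumptions rather than measurements: the stick only pays when the floor is near zero, not when the floor is deeply negative and the ceiling is very high.

```python
# Why intimidation "works" in commodity labor but backfires in technology.
# Multipliers are the rough figures from the text, not measurements.

commodity = {"slacker": 0.0, "intimidated": 0.5, "average": 1.0, "best": 2.0}
tech      = {"demotivated": -5.0, "average": 1.0, "best": 20.0}

# Commodity labor: the stick lifts a slacker from 0.0x to 0.5x. A real gain.
print(commodity["intimidated"] - commodity["slacker"])   # 0.5

# Technology: the same stick can turn a potential 20x engineer into a -5x
# one. The swing is 25x of average output; no stick recovers that.
print(tech["best"] - tech["demotivated"])                # 25.0
```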

You need closed (or at least controlled) allocation over engineers if there is material proprietary information where even superficial details would represent, if divulged, an unacceptable breach: millions of dollars lost, the company under existential threat, classified information leaked. In that case you impose a “need-to-know” system over everything sensitive. But this most often requires keeping untrusted people (or simply too many people) out of certain projects (which would be designated as committed projects under open allocation). It doesn’t require keeping people stuck on specific work. Full-on closed allocation is only necessary when there are regulatory requirements that demand it (in some financial cases) or extremely sensitive proprietary secrets involved in most of the work; comments in public-domain algorithms don’t count (statistical arbitrage strategies do).

What does this mean? Fundamentally, this issue comes down to a simple rule: treat employees like adults, and that’s what they’ll be. Investment banks and hedge funds can’t implement full open allocation, so they make up the difference through high compensation (often at unambiguously adult levels) and prestige (which enables lateral promotions for those who don’t move up quickly). On the other hand, if you’re a tiny startup with 30-year-old executives, you can’t afford banking bonuses, and you don’t have the revolving door into $400k private-equity and hedge-fund positions that the top banks do, so employee autonomy (open allocation) is the only way for you to compete. If you want adults to work for you, you have to offer autonomy at a level currently considered (even in startups) extreme.

If you’re an engineer, keep an eye out for open-allocation companies, which will become more numerous as the Valve model proves itself repeatedly and all over the place (it will, because the alternative is a ridiculous and proven failure). Getting good work will improve your skills and, in the long run, your career. So work for open-allocation shops if you can. Or you can work in a traditional closed-allocation company and hope you get (and continue to get) handed good projects. In that case you should work for (effectively, if not actually) a bank or a hedge fund, and that’s fine, but you should expect to be compensated for the reduction in autonomy. If you work for a closed-allocation ad exchange, you’re a hedge-fund trader, and you deserve to be paid like one.

If you’re a technology executive, you need to seriously consider open allocation. You owe it to your employees to treat them like adults, and you’ll be pleasantly surprised to find that that’s what they become. You also owe it to your managers to free them from the administrative shit-work (headcount fights, PIPs and terminations) that closed allocation generates. Finally, you owe it to yourself; treat yourself to a company whose culture is actually worth caring about.