In defense of defensibility

I won’t say when or where, but at one point in time, a colleague and I were discussing our “red button numbers” for the organization under which we toiled.

What’s that? The concept is this: a genie offers you the option to push a “red button” and, if you do so, your company will go bankrupt and cease to exist. Equity will be worthless, paychecks will bounce, and jobs will end. However, every employee in that company gets the same cash severance. (Let’s say $50,000.) The stipulation that every employee gets paid is important. I’m not interested in what some people might do if they had no moral scruples. Some people would blow up their employers for a $50,000 personal payoff, with everyone else getting nothing, but almost no one would admit this, or do it if it were to become known. If everyone gets paid, pushing the red button becomes ethically acceptable. At $50,000 for a typical company? Hell yeah. Most employees would see their lives improve. The executives would be miserable, getting a pittance compared to their salaries, but… seriously, fuck ’em. A majority of working people, if their company were scrapped and they were given a $50,000 check, would be dealt a huge favor by that circumstance.

The “everyone gets paid” red-button scenario is more interesting because it deals with what people will do in the open and consider ethically acceptable. When I get to more concrete matters of decisions people make that repair or dismantle companies, the interesting fact is that most of those decisions happen in the open. Companies are rarely sabotaged in secret, but disempowered and decomposed by their own people in plain view.

The “red button number” is the point at which a person would press the button, end the company, have every employee paid out that amount, and consider that an ethically acceptable thing to do. It’s safe to assume that almost everyone in the private sector has a red button number. For the idealists, and for the wealthy executives, that number might be very high: maybe $10 million. For most, it’s probably quite low. People who are about to be fired, and don’t expect a severance, might push the button at $1. Let’s assume that we could ask people for their red button numbers, and they’d answer honestly, and that this survey could be completed across a whole company. Take the median red button number and multiply it by the number of employees. That’s the company’s defensibility. We can’t actually measure this number directly, but it has a real-world meaning. If there were a vote on whether to dissolve the company and pay out some sum D, divided among all employees equally, the defensibility is the D* for which, if D > D*, the motion will pass and the company will be disbanded, and if D < D*, the company will persist. It’s the valuation the employees assign to the company (which is, often, a very different number from its market capitalization or private valuation).
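The arithmetic behind this is simple enough to sketch. Here’s a toy model in Python, with an entirely made-up five-person survey; the only claims it encodes are the ones above: defensibility is the median red-button number times headcount, and a dissolution vote at total payout D passes exactly when D exceeds that threshold (each person votes yes if their equal share meets their number).

```python
from statistics import median

def defensibility(red_button_numbers):
    """Median red-button number times headcount."""
    return median(red_button_numbers) * len(red_button_numbers)

def vote_passes(red_button_numbers, payout_total):
    """Would a dissolution vote pass if payout_total were split equally?
    Each employee votes yes iff the per-head payout meets their number."""
    per_head = payout_total / len(red_button_numbers)
    yes_votes = sum(1 for n in red_button_numbers if n <= per_head)
    return yes_votes > len(red_button_numbers) / 2

# Hypothetical survey: one about-to-be-fired $1 voter, a few typical
# employees, and one idealist/executive holding out for $10 million.
survey = [1, 5_000, 50_000, 75_000, 10_000_000]
print(defensibility(survey))         # → 250000 (median 50,000 × 5 people)
print(vote_passes(survey, 300_000))  # → True  (per head 60,000 ≥ median)
print(vote_passes(survey, 200_000))  # → False (per head 40,000 < median)
```

Note that the holdout’s $10 million barely matters: the median, not the mean, sets D*, which is why one devoted executive can’t keep the vote from passing.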

Of course, such a vote would never actually happen. Companies don’t give employees that kind of power, and there’s an obvious reason why. Most companies, at least according to the stock market or private valuations, are assigned values much greater than their defensibility. (This is not unreasonable or surprising, if a bit sad.) I can’t measure this to be sure, and I’d rather not pick on specific companies, so let me give three models and leave it to the reader to judge whether my assessments make sense.

Model Company A: a publicly-traded retail outlet with 100,000 employees, many earning less than $10/hour. I estimate the median “red button” number at $5,000, putting the defensibility at $500 million. A healthy valuation for such a company would be $125,000 per employee, or $12.5 billion. Defensibility is 4 cents on the dollar.

Model Company B: an established technology company with 20,000 employees. Software engineers earn six figures, and engineers-in-test and the like earn high-five-figure salaries. There’s a “cool” factor to working there. I’d estimate the median “red button” number at 6 to 9 months of salary (for some of the most enthusiastic employees, it might be as much as five years, but the median is much lower)– call it $75,000– putting the defensibility at $1.5 billion. A typical valuation for such a company would be $5 million per head, or $100 billion. Even though this is a company whose employees wouldn’t leave lightly, its defensibility is still only 1.5 cents on the dollar.

Model Company C: a funded startup, with 100 employees and a lot of investor and “tech press” attention. Many “true believers” among the employee ranks. Let’s assume that, to get a typical employee to push the “red button”, we’d have to guarantee 6 months of pay ($50,000) and 250 percent of the fully-vested equity (0.04%), because so many employees really expect the stock to grow. The valuation of the company is $200 million (or $2 million per employee). We reach a defensibility of $250,000 per employee, or $25 million. That’s a lot, but it’s still only 12.5% of the valuation of the business.
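To check my own arithmetic on the three models (every number below is my estimate from above, not data):

```python
models = {
    # name: (employees, median red-button $ per head, valuation $ per head)
    # Model C's per-head number bundles 6 months' pay plus the equity demand.
    "A (retail)":  (100_000,   5_000,   125_000),
    "B (tech)":    ( 20_000,  75_000, 5_000_000),
    "C (startup)": (    100, 250_000, 2_000_000),
}

for name, (n, button, val_per_head) in models.items():
    defensibility = n * button
    valuation = n * val_per_head
    print(f"{name}: defensibility ${defensibility:,} "
          f"= {defensibility / valuation:.1%} of ${valuation:,} valuation")
```

Run it and the three ratios come out to 4.0%, 1.5%, and 12.5%– the “cents on the dollar” figures quoted above.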

None of these companies are, numerically speaking, very defensible. That is, if the company could be dissolved to anarchy with its value (as assessed by the market) distributed among the employees, they’d prefer it that way. Of course, a company need not be defensible at 100 cents on the dollar for its employees to wish it to remain in existence. If a $10 billion company were dissolved in such a way, there wouldn’t actually be $10 billion worth of cash to dish out. To the extent that companies can be synergistic (i.e. worth more than the sum of the component parts) it’s a reasonable assumption that a company whose defensibility was even 50 percent of its market capitalization would never experience voluntary dissolution, even if it were put to a vote.

In real life, of course, these “red button” scenarios don’t exist. Employees don’t get to vote on whether their companies continue existing, and, in practice, they’re usually the biggest losers in full-out corporate dissolution because they have far less time to prepare than the executives. The “red button number” and defensibility computations are an intellectual exercise, that’s all. Defensibility is what the company is worth to the employees. Given that defensibility numbers seem (under assumptions I consider reasonable) to consistently come in below the valuation of the company, we understand that companies would prefer that their persistence not come down to an employee vote.

That a typical company might have a defensibility of 5 cents on the dollar, to me, underscores the extreme imbalance of power between capital and labor. If the employees value the thing at X, and capital values it at 20*X, that seems to indicate that capital has 20 times as much power as the employees do. It signifies that companies aren’t really partnerships between capital and labor, but exist almost entirely for capital’s benefit.

Does defensibility matter? In this numerical sense, it’s a stretch to say that it does, because such votes can’t be called. Fifty-one percent of a company’s workers realizing that they’d let the damn thing die for six months’ severance has no effect, because they don’t have that power. If defensibility numbers were within a factor of 2, or even 4, of companies’ market capitalizations, I’d say that these numbers (educated guesses only) tell us very little. It’s the sheer magnitude of the discrepancy under even the most liberal assumptions that is important.

People in organizations do “vote” on the value of the organization with what they do, day to day, ethically (and unethically) as they trade off between self-interest and the upkeep of the organization. How much immediate gain would a person forgo, in order to keep up the organization? The answer is: very little. I’d have no ethical qualms, with regard to most of the companies that I’ve worked at, in pressing the red button at $100,000. Most employees would be ecstatic, a few executives would be miserable; fair trade.

Thus far, I’ve only addressed public, ethical, fair behavior. Secretive and unethical behaviors also affect a company, obviously. However, I don’t see the latter as being as much of a threat. Organizations can defend themselves against the unethical, the bad actors, if the ethical people care enough to participate in the upkeep. It’s when ethical people (publicly and for just reasons) tune out that the organization is without hope.

The self-healing organization

Organizations usually rot as they age. It’s practically taken as a given, in the U.S. private sector, that companies will change for the worse as time progresses. Most of the startup fetishism of the SillyCon Valley is derived from this understanding: organizations will invariably degrade with age (a false assumption, and I’ll get to that) and the best way for a person (or for capital) to avoid toxicity is to hop from one upstart company to another, leaving as soon as the current habitat gets old.

It is true that most companies in the private sector degrade, and quite rapidly. Why is it so? What is it about organizations that turns them into ineffective, uninspiring messes over time? Are they innately pathological? Is this just a consequence of corporate entropy? Are people who make organizations better so much rarer than those who make them worse? I don’t think so. I don’t think it’s innate that organizations rot over time. I think that it’s common, but avoidable.

The root entropy-increasing cause of corporate decay is “selfishness”. I have to be careful with this word, because selfishness can be a virtue. I certainly don’t intend to impose any value, good or bad, on that concept here. Nor do I imply secrecy or subterfuge. Shared selfishness can be a group experience. Disaffected employees can slack together and protect each other. Often this happens. One might argue that it becomes “groupishness” or localism. I don’t care to debate that point right now.

Organizations decay because people and groups within them, incrementally, prefer their own interests over those of the institution. If offered promotions they don’t deserve, into management roles that will harm morale, people usually take them. If they can get away with slacking– either to take a break, or to put energy into efforts more coherent with their career goals– they will. (The natural hard workers will continue putting forth effort, but only on the projects in line with their own career objectives.) If failures of process advantage them, they usually let those continue. When the organization acts against their interests, they usually make it hurt, causing the company to recoil and develop scar tissue over time. They protect themselves and those who have protected them, regardless of whether their allies possess “merit” as defined by the organization. These behaviors aren’t exactly crimes. Most of this “selfishness” is stuff that morally average people (and, I would argue, even many morally good ones) will do. Most people, if they found a wallet with $1,000 and a driver’s license in it, would take it to the police station for return to its owner. However, if they were promoted based on someone else’s work, and that “someone else” had left the company so there was no harm in keeping the promotion, they’d keep the arrangement as-is. I’m not different from the average person on this; I’ll just admit to it, in the open.

People in the moral mid-range will generally try to do the right thing. On the “red button” issue, most wouldn’t tank their own companies for a personal payout, leaving all their colleagues screwed. Most would press the button in the “every employee gets paid” scenario, because it’s neither ethically indefensible nor socially unacceptable to do so. Such people are in the majority and not inherently corrosive to institutions– they uphold those that are good to them and their colleagues, and turn against those that harm them. However, they hasten the decay of those organizations that clearly don’t deserve concern.

Let’s talk about entropy, or the increasing tendency toward disorder in a closed system. Life can only persist because certain genetic configurations enable an organism, taking in external energy, to preserve local low-entropy conditions. A lifeless human body, left on the beach to be beaten by the waves, will be unrecognizable within a couple of days. That’s entropy. A living human can sit in the same conditions with minimal damage. In truth, what we seem to recognize as life is “that which can preserve its own local order”. Living organisms are constantly undergoing self-repair. Cells are destroyed by the millions every hour, but new ones are created. Dead organisms are those that have lost the ability to self-repair. The resources in them will be recycled by self-interested and usually “lower” (less complex) organisms that feed on them as they decay.

Organizations, I think, can be characterized as “living” or “dead”, to some degree, based on whether their capacity for self-repair exceeds the inevitable “wear and tear” that will be inflicted by the morally acceptable but still entropy-increasing favoritism that its people show for their own interests. The life/death metaphor is strained, a bit, by the difficulty in ascertaining which is which. In biology, it’s usually quite clear whether an organism is alive. We know that when the human heart stops, the death process is likely to begin (and, absent medical intervention, invariably will begin) within 5 minutes and, after 10 minutes, the brain will typically be unrecoverable and that person will no longer exist in the material world. Life versus death isn’t completely binary in biology, but it’s close enough. With organizations, it’s far less clear whether the thing is winning or losing its ongoing fight against entropy. To answer that question involves debate and research that, because the questions asked are socially unacceptable, can rarely be performed properly.

“Red button” scenarios don’t happen, but every day, people make small decisions that influence the life/death direction of the company. Individually, most of these decisions don’t matter for much. A company isn’t going to fail because a disaffected low-level employee spent his whole morning searching for other work. If everyone’s looking for another job, that’s a problem.

In the VC-funded world, self-repair has been written off as a lost cause. It’s not even attempted. The mythology is that everything old (“legacy”) is of low value (and should be sold to some even older, more defective company) and that only the new is worth attention. Don’t try to repair old organizations; just create new shit and make old mistakes. It’s all about new programmers (under 30 only, please) and new languages (often recycling ideas from Lisp and Haskell, implementing them poorly) and new companies. This leads to massive waste as nothing is learned from history. It becomes r-selective in the extreme, with the hope that, despite frequent death of the individual organisms, there can be a long-lived clonal colony for… someone. For whom? To whose benefit is this clonal organism? It’s for the well-connected scumbags who can peddle influence and power no matter which companies beat the odds and thrive for a few days, and which ones die.

In the long run, I don’t think this is going to work. Building indefensible companies in large numbers is not going to create a defensible meta-organism. To do so is to create a con (admittedly, a somewhat brilliant and unprecedented one, a truly postmodern corporate organism) in which enthusiastic young people trade their energy and ardor for equity in mostly-worthless companies, and whose macroscopic portfolio performance is mediocre (as seen in the pathetic returns VC offers for passive investors) but which affords great riches for those with the social connections (and lack of moral compass) necessary to navigate it. It works for a while, and then people figure out what’s going on, and it doesn’t. We call this a “bubble/crash” cycle, but what it really is, at least in this case, is an artifact of limitations on human stupidity. People will only fall for a con for so long. The naive (like most 22-year-olds when they’re just starting the Valley game) get wiser, and the perpetual suckers have short attention spans and will be drawn to something shinier.

Open allocation

What might a defensible company look like? Here I come to one of my pet issues: open allocation. The drawback of open allocation, terrifying to the hardened manageosaur, is that it requires the organization be defensible in order to work, because it gives employees a real vote on what the company does. The good news is that open allocation tends naturally toward defensibility. If the organization is fair to its employees and its people are of average or better moral quality, then the majority of them can be trusted to work within the intersection between their career interests and the needs of the company.

Why does open allocation work so well? Processes, projects, patterns, protections, personality interactions, policies and power relationships (henceforth, “the Ps”) are all subjected to a decentralized immune system, rather than a centralized one that doesn’t have the bandwidth to do the job properly. Organizations of all kinds produce the Ps on a rather constant basis. When one person declines to use the bathroom because his manager just went in, that microdeference is P-generation. The same applies to the microaggression (or, to be flat about it, poorly veiled aggression) of a manager asking for an estimate. Requesting an estimate generates several Ps at once: a power relationship (to be asked for an estimate is to be made someone’s bitch), a trait of a project (it’s expected at a certain time, quality be damned), and a general pattern (managerial aggression, in the superficial interest of timeliness, is acceptable in that organization). Good actions also generate Ps. To empower people generates power relationships of a good kind, and affords protection. Even the most superficial interactions generate Ps, good and bad.

Under open allocation, the bad Ps are just swept away. When one person tries to dominate the other through unreasonable requests, or to socially isolate a person in order to gain power, the other has the ability to say, “fuck that shit” and walk away. The doomed, career-ending projects and the useless processes and the toxic power relationships just disappear. People will try to make such things, even in the best of companies, and sometimes without ill intent. Under open allocation, however, bad Ps just don’t stick around. People have the power to nonexist them. What this means is that, over time, the long-lived Ps are the beneficial ones. You have a self-healing organization. This generates good will, and people begin to develop a genuine civic pride in the organization, and they’ll participate in its upkeep. Open allocation may not be the only requisite ingredient for success on the external market, but it does insure an organization against internal decay.

Then there’s the morale issue, and the plain question of employee incentives. Employees of open allocation companies know that if their firms dissolve tomorrow, they’re going to end up in crappy closed-allocation companies afterward. They actually care– beyond “will I get a severance?”– whether their organizations live or die. In closed-allocation companies, the only way to get a person to care in this way is either (a) to give him a promotion that another organization would not (which risks elevating an incompetent) or (b) to pay him a sizable wage differential over the prevailing market wage– this leads to exponential wage growth, typically at a rate of 20 to 30 percent per year, and can be good for the employee but isn’t ideal for the employer. Because closed-allocation companies are typically also stingy, (a) is preferred, which means that loyalists are promoted regardless of competence. One can guess where that leads: straight to idiocy.

Under closed allocation, bad Ps tend to be longer-lived than good ones. Why? Something I’ve realized in business is that good ideas fly away from their originators and become universal, which means they can be debated on their actual merits and appropriateness to the situation (a good idea is not necessarily right for all situations). Two hundred years from now, if open allocation is the norm, it’s quite likely that no one will remember my name in connection to it. (To be fair, I only named it, I didn’t invent the concept.) Who invented the alphabet? We don’t know, because it was a genuinely good idea. Who invented sex? No one. On the other hand, bad ideas become intensely personal– loyalty tests, even. Stack ranking becomes “a Tom thing” (“Tom” is a made-up name for a PHP CEO) because it’s so toxic that it can only be defended by an appeal to authority or charisma, and yet to publicly oppose it is to question Tom’s leadership (and face immediate reprisal, if not termination). In a closed-allocation company, bad ideas don’t get flushed out of the system. People double down on them. (“You just can’t see that I’m right, but you will.”) They become personal pet projects of executives and “quirky” processes that no one can question. Closed allocation companies simply have no way to rid themselves of bad Ps– at least, not without generating new ones. Even firing the most toxic executive (and flushing his Ps with him) is going to upset some people, and the hand-over of power is going to result in P-generation that is usually in-kind. Most companies, when they fire someone, opt for the cheapest and most humiliating kind of exit– crappy or nonexistent severance, no right to represent oneself as employed during the search, negative (and lawsuit-worthy) things said about the departing employee– and that usually makes the morale situation worse. 
(If you disparage a fired and disliked executive, you still undermine faith in your judgment; why’d you hire him in the first place?) No matter how toxic the person is, you can’t fire someone in that way without generating more toxicity. People talk, and even the disaffected and rejected have power, when morale is factored in. The end result of all this is that bad Ps can’t really be removed from the system without generating a new set of bad Ps.

I can’t speak outside of technology because to do so is to stretch beyond my expertise. However, a technology company cannot have closed allocation and retain its capacity for self-repair. It will generate bad Ps faster than good ones.

What about… ?

There’s a certain recent, well-publicized HR fuckup that occurred at a well-known company using open allocation. I don’t want to comment at length about this. It wouldn’t have been newsworthy were it not for the high moral standard that company set for itself in its public commitment to open allocation. (If the same had happened at a closed-allocation oil company or bank, no one would have ever heard a word about it.) Yes, it proved that open allocation is not a panacea. It proved that open allocation companies develop political problems. This is not damning of open allocation, because closed allocation creates much worse problems. More damningly, the closed allocation company can’t heal.

Self-healing and defensibility are of key importance. All organizations experience stress, no exceptions. Some heal, and most don’t. The important matter is not whether political errors and HR fuckups happen– because they will– but whether the company is defensible enough that self-repair is possible.

The complexity factor

The increase of entropy in an organization is a hard-to-measure process. We don’t see most of those “Ps” as they are generated, and moral decay is a subjective notion. What seems to be agreed-upon is that, objectively, complexity grows as the organization does, and that this becomes undesirable after a certain point. Some call it “bureaucratic red tape” and note it for its inefficiency. Others complain about the political corruption that emerges from relationship-based complexity. For a variety of reasons, an organization gets into a state where it is too complicated and self-hogtied to function well. It becomes pathological. Not only that, but the complexity that exists becomes so onerous that the only way to navigate it is to create more complexity. There are committees to oversee the committees.

Why does this happen? No one sets out “to generate complexity”. Instead, people in organizations use what power they have (and even low-level grunts can have major effects on morale) to create conditional complexity. They make decisions that favor their interests simple and those that oppose them complicated enough that no one wants to think them through; instead, those branches of the game tree are just pruned. That’s how savvy people play politics. If a seasoned office politicker wants X and not Y, he’s not going to say, “If you vote for Y, I’ll cut you.” He can’t do it that way. In fact, it’s best if no one knows what his true preference is (so he doesn’t owe any favors to X voters). Nor can he obviously make Y unpleasant or bribe people into X. What he can do is create a situation (preferably over a cause seemingly unrelated to the X/Y debate) that makes X simple and obvious, but Y complex and unpredictable. That’s the nature of conditional complexity. It generates ugliness and mess– if the person’s interests are opposed.

In the long run, this goes bad because an organization will inevitably get to a point where it can’t do anything without opposing some interest, and then all of those conditional complexities (which might be “legacy” policies, set in place long ago and whose original stakeholders may have moved on) are triggered. Things become complex and “bureaucratic” and inefficient quickly, and no one really knows why.

The long-term solution to this complexity-burden problem is selective abandonment. If there isn’t a good reason to maintain a certain bit of complexity, that bit is abandoned. For example, it’s illegal, in some towns, to sing in the shower. Are those laws enforced? Never, because there’s no point in doing so. The question of what is the best way to “garbage collect” junk complexity is one I won’t answer in a single essay, but in technology companies, open allocation provides an excellent solution. Projects and power relationships and policies (again, the Ps) that no longer make sense are thrown out entirely.

The best defense is to be defensible

The low defensibility of the typical private-sector organization is, to me, quite alarming. Rational and morally average (or even morally above-average) people don’t value their employers very much, because most companies don’t deserve to be valued. They’re not defensible, which means they’re not defended, which is another way of saying self-repair doesn’t happen and that organizational decay is inexorable.

On programmers, deadlines, and “Agile”

One thing programmers are notorious for is their hatred of deadlines. They don’t like making estimates either, and that makes sense, because so many companies use the demand for an estimate as a “keep ’em on their toes” managerial microaggression rather than out of any real scheduling need. “It’ll be done when it’s done” is the programmer’s gruff, self-protecting, and honest reply when asked to estimate a project. Whether this is an extreme of integrity (those estimates are bullshit, we all know it) or a lack of professionalism is hotly debated by some. I know what the right answer is, and it’s the first, most of the time.

Contrary to stereotype, good programmers will work to deadlines (by which, I mean, work extremely hard to complete that project by a certain time) under specific circumstances. First, those deadlines need to exist in some external sense, i.e. a rocket launch whose date has been set in advance and can’t be changed. They can’t be arbitrary milestones set for emotional rather than practical reasons. Second, programmers need to be compensated for the pain and risk. Most deadlines, in business, are completely arbitrary and have more to do with power relationships and anxiety than anything meaningful. Will a good programmer accept business “deadline culture”, and the attendant risks, at a typical programmer’s salary? No, not for long. Even with good programmers and competent project management, there’s always a risk of a deadline miss, and the great programmers tend to be, at least, half-decent negotiators (without negotiation skills, you don’t get good projects and don’t improve). My point is only that a good programmer can take on personal deadline responsibility (either in financial or career terms) and not find it unreasonable. That usually involves a consulting rate starting around $250 per hour, though.

Worth noting is that there are two types of deadlines in software: there are “this is important” deadlines (henceforth, “Type I”) and “this is not important” deadlines (“Type II”). Paradoxically, deadlines are assessed to the most urgent (mission critical) projects but also to the least important ones (to limit use of resources) while the projects of middling criticality tend to have relatively open timeframes.

A Type I deadline is one with substantial penalties to the client or the business if the deadline’s missed. Lawyers see a lot of this type of deadline; they tend to come from judges. You can’t miss those. They’re rarer in software, especially because software is no longer sold on disks that come in boxes, and because good software engineers prefer to avoid the structural risk inherent in setting hard deadlines, preferring continual releases. But genuine deadlines exist in some cases, such as in devices that will be sent into space. In those scenarios, however, because the deadlines are so critical, you need professional project-management muscle. When you have that kind of a hard deadline, you can’t trust 24-year-old Ivy grads turned “product managers” or “scrotum masters” to pull it out of a hat. It’s also very expensive. You’ll need, at least, the partial time of some highly competent consultants who will charge upwards of $400 per hour. Remember, we’re not talking about “Eric would like to see a demo by the 6th”; we’re talking about “our CEO loses his job or ends up in jail or the company has to delay a satellite launch by a month if this isn’t done”. True urgency. This is something best avoided in software (because even if everything is done right, deadlines are still a point of risk) but sometimes unavoidable and, yes, competent software engineers will work under such high-pressure conditions. They’re typically consultants starting around $4,000 per day, but they exist. So I can’t say something so simple as “good programmers won’t work to deadlines”, even if it applies to 99 percent of commercial software. They absolutely will– if you pay them 5 to 10 times the normal salary, and can guarantee them the resources they need to do the job. That’s another important note: don’t set a deadline unless you’re going to give the people expected to meet it the power, support, and resources to achieve it if at all possible. 
Deadlines should be extremely rare in software, so that when true, hard deadlines exist for external reasons, they’re respected.

Most software “deadlines” are Type II. “QZX needs to be done by Friday” doesn’t mean there’s a real, external deadline. It means, usually, that QZX is not important enough to justify more than a week of a programmer’s time. It’s not an actual deadline but a resource limit. That’s different. Some people enjoy the stress of a legitimate deadline, but no one enjoys an artificial deadline, which exists more to reinforce power relationships and squeeze free extra work out of people than to meet any pressing business need. More savvy people use the latter kind as an excuse to slack: QZX clearly isn’t important enough for them to care if it’s done right, because they won’t budget the time, so slacking will probably be tolerated as long as it’s not egregious. If QZX is so low a concern that the programmer’s only allowed to spend a week of his time on it, then why do it at all? Managers of all stripes seem to think that denying resources and time to a project will encourage those tasked with it to “prove themselves” against adversity (“let’s prove those jerks in management wrong… by working weekends, exceeding expectations and making them a bunch of money”) and work hard to overcome the gap in support and resources between what is given and what is needed. That never happens; not with anyone good, at least. (Clueless 22-year-olds will do it; I did, when I was one. The quality of the code is… suboptimal.) The signal sent by a lack of support and time to do QZX right is: QZX really doesn’t matter. Projects that are genuinely worth doing don’t have artificial deadlines thrown on them. They only have deadlines if there are real, external deadlines imposed by the outside world, and those are usually objectively legible. They aren’t deadlines that come from managerial opinion “somewhere” but real-world events. It’s marginal, crappy pet projects that no one has faith in that have to be delivered quickly in order to stay alive. 
For those, it’s best to not deliver them and save energy for things that matter– as far as one can get away with it. Why work hard on something the business doesn’t really care about? What is proved, in doing so?

Those artificial deadlines may be necessary for the laziest half of the programming workforce. I’m not really sure. Such people, after all, are only well-suited to unimportant projects, and perhaps they need prodding in order to get anything done. (I’d argue that it’s better not to hire them in the first place, but managers’ CVs and high acquisition prices demand headcount, so it looks like we’re stuck with those.) You’d be able to get a good programmer to work to such deadlines at around $400 per hour– but because the project’s not important, management will (rightly) never pay that amount for it. But the good salaried programmers (who are bargains, by the way, if properly assigned or, better yet, allowed to choose their projects) are likely to leave. No one wants to sacrifice weekends and nights on something that isn’t important enough for management to budget a legitimate amount of calendar time.

Am I so bold as to suggest that most work people do, even if it seems urgent, isn’t at all important? Yes, I am. I think a big part of it is that headcount is such an important signaling mechanism in the mainstream business world. Managers want more reports because it makes their CVs look better. “I had a team of 3 star engineers and we were critical to the business” is a subjective and unverifiable claim. “I had 35 direct reports” is objective and, therefore, more valued. When people get into upper-middle management, they start using terms like “my organization” to describe the little empires they’ve built inside their employers. Companies also like to bulk up in size and I think the reason is signaling. No one can really tell, from the outside, whether a company has hired good people. Hiring lots of people is an objective and aggressive bet on the future, however. The end result is that there are a lot of people hired without real work to do, put on the bench and given projects that are mostly evaluative: make-work that isn’t especially useful, but allows management to see how the workers handle artificial pressure. Savvy programmers hate being assigned unimportant/evaluative projects, because they have all the personal downsides that important ones do (potential for embarrassment, loss of job) but no career upside. Succeeding on such a project gets one nothing more than a grade of “pass”. By contrast, genuinely important projects can carry some of the same downside penalties if they are failed, obviously, but they also come with legitimate upside for the people doing them: promotions, bonuses, and improved prospects in future employment. The difference between an ‘A’ and a ‘B’ performance on an important project (as opposed to evaluative make-work) actually matters. That distinction is really important to the best people, who equate mediocrity with failure and strive for excellence alone.

All that said, while companies generate loads of unimportant work, for sociological reasons it’s usually very difficult for management to figure out which projects are waste. The people who know that have incentives to hide it. But the executives can’t let those unimportant projects take forever. They have to rein them in and impose deadlines with more scrutiny than is given important work. If an important project overruns its timeframe by 50 percent, it’s still going to deliver something massively useful, so that’s tolerated. What tends to happen is that the important projects are, at least over time, given the resources (and extensions) they need to succeed. Unimportant projects have artificial deadlines imposed to prevent waste of time. That being the case, why do them at all? Obviously, no organization intentionally sets out to generate unimportant projects. The problem, I think, is that when management loses faith in a project, resources and time budget are either reduced, or just not expanded even when necessary. That would be fine, if workers had the same mobility and could also vote with their feet. The unimportant work would just dissipate. It’s political forces that hold the loser project together. The people staffed on it can’t move without risking an angry (and, often, prone to retaliation) manager, and the manager of it isn’t likely to shut it down because he wants to keep headcount, even if nothing is getting done. The result is a project that isn’t important enough to confer the status that would allow the people doing it to say, “It’ll be done when it’s done”. The unimportant project is in a perennial race against loss of faith in it from the business, and it truly doesn’t matter to the company as a whole whether it’s delivered or not, but there’s serious personal embarrassment if it isn’t delivered.

It’s probably obvious that I’m not anti-deadline. The real world has ’em. They’re best avoided, as points of risk, but they can’t be removed from all kinds of work. As I get older, I’m increasingly anti-“one size fits all”. This, by the way, is why I hate “Agile” so vehemently. It’s all about estimates and deadlines, simply couched in nicer terms (“story points”, “commitments”) and using the psychological manipulation of mandatory consensus. Ultimately, the Scrum process is a well-oiled machine for generating short-term deadlines on atomized microprojects. It also allows management to ask detailed questions about the work, reinforcing the sense of low social status that “conventional wisdom” says will keep the workers on their toes and most willing to work hard, but that actually has the opposite effect: depression and disengagement. (Open-back visibility in one’s office arrangement, which likewise projects low status, is utilized to the same effect and, empirically, it does not work.) It might be great for squeezing some extra productivity out of the bottom-half– how to handle them is just not my expertise– but it demotivates and drains the top half. If you’re on a SCRUM team, you’re probably not doing anything important. Important work is given to trusted individuals, not to SCRUM teams.

Is time estimation on programming projects difficult and error-prone? Yes. Do programming projects have more overtime risk than other endeavors? I don’t have the expertise to answer that, but my guess would be: probably. Of course, no one wants to take on deadline risk personally, which is why savvy programmers (and almost all great programmers are decent negotiators, as discussed) demand compensation and scope assurance. (Seasoned programmers only take personal deadline risk with scope clearly defined and fixed.) However, the major reason for the programmers’ hatred of deadlines and estimates isn’t the complexity and difficulty-of-prediction in this field (although that’s a real issue) but the fact that artificial deadlines are an extremely strong signal of a low-status, unimportant, career-wasting project. Anyone good runs the other way. And that’s why SCRUM shops can’t have nice things.

Why programmers can’t make any money: dimensionality and the Eternal Haskell Tax

To start this discussion, I’m going to pull down a rather dismal tweet from Chris Allen (@bitemyapp):

For those who don’t know, Haskell is a highly productive, powerful language that enables programmers (at least, the talented ones) to write correct code quickly: at 2 to 5 times the development speed of Java, with similar performance characteristics, and fewer bugs. Chris is also right on the observation that, in general, Haskell jobs don’t pay as well. If you insist on doing functional programming, you’ll make less money than people who sling C++ at banks with 30-year-old codebases. This is perverse. Why would programmers be economically penalized for using more powerful tools? Programmers are unusual, compared to rent-seeking executives, in actually wanting to do their best work. Why is this impulse penalized?

One might call this penalty “the Haskell Tax” and, for now, that’s what I’ll call it. I don’t think it exists because companies that use Haskell are necessarily cheaper or greedier than others. That’s not the case. I think the issue is endemic in the industry. Junior programmer salaries are quite high in times like 2014, but the increases for mid-level and senior programmers fall short of matching their increased value to the business, or even the costs (e.g. housing, education) associated with getting older. The only way a programmer can make money is to develop enough of a national reputation that he or she can create a bidding war. That’s harder to do for one who is strongly invested in a particular language. It’s not Haskell’s fault. There’s almost certainly a Clojure Tax and an Erlang Tax and a Scala Tax.

Beyond languages, this applies to any career-positive factor of a job. Most software jobs are career-killing, talent-wasting graveyards and employers know this, so when there’s a position that involves something interesting like machine learning, green-field projects, and the latest tools, they pay less. This might elicit a “well, duh” response, insofar as it shouldn’t be surprising that unpleasant jobs pay well. The reason this is such a disaster is because of its long-term effect, both on programmers’ careers and on the industry. Market signals are supposed to steer people toward profitable investment, but in software, it seems to fall the other way. Work that helps a programmer’s career is usually underpaid and, under the typical awfulness of closed allocation, jealously guarded, politically allocated, and usually won through unreasonable sacrifice.

Why is the Haskell Tax so damning?

As I said, the Haskell Tax doesn’t apply only to Haskell. It applies to almost all software work that isn’t fish-frying. It demolishes upper-tier salaries. One doesn’t, after all, get to be an expert in one’s field by drifting. It takes focus, determination, and hard work. It requires specialization, almost invariably. With five years of solid experience, a person can add 3 to 50 times more value than the entry-level grunt. Is she paid for that? Almost never. Her need to defend her specialty (and refuse work that is too far away from it) weakens her position. If she wants to continue in her field, there are a very small number of available jobs, so she won’t have leverage, and she won’t make any money. On the other hand, if she changes specialty, she’ll lose a great deal of her seniority and leverage, she’ll be competing with junior grunts, and so she won’t make any money either. It’s a Catch-22.

This puts an economic weight behind the brutality and incivility of closed allocation. It deprives businesses of a great deal of value that their employees would otherwise freely add. However, it also makes people less mobile, because they can’t move on to another job unless a pre-defined role exists matching their specialties. In the long run, the effect of this is to provide an incentive against expertise, to cause the skills of talented programmers to rot, and to bring the industry as a whole into mediocrity.

Code for the classes and live with the masses. Code for the masses and live… with the masses.

Artists and writers have a saying: sell to the masses, and live with the classes; sell to the classes, and live with the masses. That’s not so much a statement about social class as about the low economic returns of high-end work. Poets don’t make as much money as people writing trashy romance novels. We might see the Haskell Tax as an extension of this principle. Programmers who insist on doing only the high-end work (“coding for the classes”) are likely to find themselves either often out of work, or selling themselves at a discount.

Does this mean that every programmer should just learn what is learned in 2 years at a typical Java job, and be done with it? Is that the economically optimal path? The “sell to the masses” strategy is to do boring, line-of-business, grunt work. Programmers who take that tack still live with the masses. That kind of programming (parochial business logic) doesn’t scale. There’s as much work, for the author, in writing a novel for 10 people as 10 million; but programmers don’t have that kind of scalability, and the projects where there are opportunities for scaling, growth, and multiplier-type contributions are the “for the classes” projects that every programmer wants to do (we already discussed why those don’t pay). So, programming for the masses is just as much of a dead end, unless they can scale up politically— that is, become a manager. At that point, they can sell code, but they don’t get to create it. They become ex-technical, and ex-technical management (with strongly held opinions, once right but now out of date) can be just as suboptimal as non-technical management.

In other words, the “masses” versus “classes” problem looks like this, for the programmer: one can do high-end work and be at the mercy of employers because there’s so little of it to go around, or do low-end commodity work that doesn’t really scale. Neither path is going to enable her to buy a house in San Francisco.


One of the exciting things about being a programmer is that the job always changes. New technologies emerge, and programmers are expected to keep abreast of them even when their employers (operating under risk aversion and anti-intellectualism) won’t budget the time. What does it mean to be a good programmer? Thirty years ago, it was enough to know C and how to structure a program logically. Five years ago, a software engineer was expected to know a bit about Linux, MySQL, a few languages (Python, Java, C++, Shell) and the tradeoffs among them. In 2014, the definition of “full stack” has grown to the point that almost no one can know all of it. Andy Shora (author of the afore-linked essay) puts it beautifully, on the obnoxiousness of the macho know-it-all programmer:

I feel the problem for companies desperate to hire these guys and girls, is that the real multi-skilled developers are often lost in a sea of douchebags, claiming they know it all.

Thirty years ago, there was a reasonable approximation of a linear ordering on programmer skill. If you could write a C compiler, understood numerical stability, and could figure out how to program in a new language or for a new platform by reading the manuals, you were a great programmer. If you needed some assistance and often wrote inefficient algorithms, you were either a junior or mediocre. In 2014, it’s not like that at all; there’s just too much to learn and know! I don’t know the first thing, for example, about how to build a visually appealing casual game. I don’t expect that I’d struggle as much with graphics as many do, because I’m comfortable with linear algebra, and I would probably kill it when it comes to AI and game logic, but the final polish– the difference between Candy Crush and an equivalent but less “tasty” game– would require someone with years of UI/UX experience.

The question of, “What is a good programmer?”, has lost any sense of linear ordering. The field is just too vast. It’s now an N-dimensional space. This is one of the things that makes programming especially hostile to newcomers, to women, and non-bullshitters of all stripes. The question of which of those dimensions matter and which don’t is political, subjective, and under constant change. One year, you’re a loser if you don’t know a scripting language. The next, you’re a total fuckup if you can’t explain what’s going on inside the JVM. The standards change at every company and frequently, leaving most people not only at a loss regarding whether they are good programmers, but completely without guidance about how to get there. This also explains the horrific politics for which software engineering is (or, at least, ought to be) notorious. Most of the “work” in a software company is effort spent trying to change the in-house definition of a good programmer (and, to that end, fighting incessantly over tool choices).

I don’t think that dimensionality is a bad thing. On the contrary, it’s a testament to the maturity and diversity of the field. The problem is that we’ve let anti-intellectual, non-technical businessmen walk in and take ownership of our industry. They demand a linear ordering of competence (mostly, for their own exploitative purposes). It’s the interaction between crass commercialism and dimensionality that causes so much pain.

Related to this is the Fundamental Hypocrisy of Employers, a factor that makes it damn hard for a programmer to navigate this career landscape. Technology employers demand specialization in hiring. If you don’t have a well-defined specialty and unbroken career progress toward expertise in that field, they don’t want to talk to you. At the same time, they refuse to respect specialties once they’ve hired people, and people who insist on protecting their specialties (which they had to do to get where they are) are downgraded as “not a team player”. Ten years of machine learning experience? Doesn’t matter, we need you to fix this legacy Rails codebase. It’s ridiculous, but most companies demand an astronomically higher quality of work experience than they give out. The result of this is that the game is won by political favorites and self-selling douchebags, and most people in either of those categories can’t really code.

The Eternal Haskell Tax

The Haskell Tax really isn’t about Haskell. Any programmer who wishes to defend a specialty has a smaller pool of possible jobs and will generally squeeze less money out of the industry. As programming becomes more specialized and dimensional, the Haskell Tax problem affects more people. The Business is now defining silos like “DevOps” and “data science” which, although those movements began with good intentions, effectively represent the intentions of our anti-intellectual colonizers to divide us against each other into separate camps. The idea (which is fully correct, by the way) that a good Haskell programmer can also be a good data scientist or operations engineer is threatening to them. They don’t want a fluid labor market. Our enemies in The Business dislike specialization when we protect our specialties (they want to make us interchangeable, “full stack” generalists) but, nonetheless, want to keep intact the confusion and siloization that dimensionality creates. If the assholes in charge can artificially disqualify 90 percent of senior programmers from 90 percent of senior programming jobs based on superficial differences in technologies, it means they can control us– especially if they control the assignment of projects, under the pogrom that is closed allocation– and (more importantly) pay us less.

The result of this is that we live under an Eternal Haskell Tax. When the market favors it, junior engineers can be well-paid. But the artificial scarcities of closed allocation and employer hypocrisy force us into unreasonable specialization and division, making it difficult for senior engineers to advance. Engineers who add 10 times as much business value as their juniors are lucky to earn 25 percent more; they, as The Business argues, should consider themselves fortunate in that they “were given” real projects!

If we want to fix this, we need to step up and manage our own affairs. We need to call “bullshit” on the hypocrisy of The Business, which demands specialization in hiring but refuses to respect it internally. We need to inflict hard-core Nordic Indignation on closed allocation and, in general, artificial scarcity. Dimensionality and specialization are not bad things at all (on the contrary, they’re great) but we need to make sure that they’re properly managed. We can’t trust this to the anti-intellectual colonial authorities who currently run the software industry, who’ve played against us at every opportunity. We have to do it ourselves.

Gervais / MacLeod 21: Why Does Work Suck?

This is a penultimate “breather” post, insofar as it doesn’t present much new material, but summarizes much of what’s in the previous 20 essays. It’s now time to tie everything together and Solve It. This series has reached enough bulk that such an endeavor requires two posts: one to tie it all together (Part 21) and one to discuss solutions (Part 22). Let me try to put the highlights from everything I’ve covered into a coherent whole. That may prove hard to do; I might not succeed. But I will try.

This will be long and repeat a lot of previous material. There are two reasons for that. First, I intend this essay to be a summarization of some highlights from where we’ve been. Second, I want it to stand alone as a “survey course” of the previous 20 essays, so that people can understand the highlights (and, thus, understand what I propose in the conclusion) even if they haven’t read all the prior material.

If I were to restart this series of posts (which I never intended, originally, to grow to 22 essays and 92+ kilowords) I would rename it Why Does Work Suck? In fact, if I turn this stuff into a book, that’s probably what I’ll name it. I never allowed myself to answer, “because it’s work, duh.” We’re biologically programmed to enjoy working. In fact, most of the things people do in their free time (growing produce, unpaid writing, open-source programming) involve more actual work than their paid jobs. Work is a human need.

How Does Work Suck?

There are a few problems with Work that make it almost unbearable, driving it into such a negative state that people only do it for the lack of other options.

  • Work Sucks because it is inefficient. This is what makes investors and bosses angry. Getting returns on capital either requires managing it, which is time-consuming, or hiring a manager, which means one has to put a lot of trust in this person. Work is also inefficient for average employees (MacLeod Losers), which is why wages are so low.
  • Work Sucks because bad people end up in charge. Whether most of them are legitimately morally bad is open to debate, but they’re certainly a ruthless and improperly balanced set of people (MacLeod Sociopath) who can be trusted to enforce corporate statism. Over time, this produces a leadership caste that is great at maintaining power internally but incapable of driving the company to external success.
  • Work Sucks because of a lack of trust. That’s true on all sides. People are spending 8+ hours per day on high-stakes social gambling while surrounded by people they distrust, and who distrust them back.
  • Work Sucks because so much of what’s to be done is unrewarding and pointless. People are glad to do work that’s interesting to them or advances their knowledge, or work that’s essential to the business because of career benefits, but there’s a lot of Fourth Quadrant work for which neither applies. This nonsensical junk work is generated by strategically blind (MacLeod Clueless) middle managers and executed by rationally disengaged peons (MacLeod Losers) who find it easier to subordinate than to question the obviously bad planning and direction.

All of these, in truth, are the same problem. The lack of trust creates the inefficiencies that require moral flexibility (convex deception) for a person to overcome. In a trust-sparse environment, the people who gain trust are the least deserving of it: the most successful liars. It’s also the lack of trust that generates the unrewarding work. Employees are subjected, in most companies, to a years-long dues-paying period which is mostly evaluative– to see how each handles unpleasant make-work and pick out the “team players”. The “job” exists to give the employer an out-of-the-money call option on legitimately important work, should it need some done. It’s a devastatingly bad system, so why does it hold up? Because, for two hundred years, it actually worked quite well. Explaining that requires delving into mathematics, so here we go.

Love the Logistic

The most important concept here is the S-shaped logistic function, which looks like this (courtesy of Wolfram Alpha):

The general form of such a function L(x; A, B, C) is:

L(x; A, B, C) = A / (1 + e^(-B(x - C)))

where A represents the upper asymptote (“maximum potential”), B represents the rapidity of the change, and C is a horizontal offset (“difficulty”) representing the x-coordinate of the inflection point. The graph above is for L(x; 1, 1, 0).
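Translated directly into code, this is a one-liner. A quick Python sketch (my own illustration, using the standard logistic form consistent with the parameter descriptions above):

```python
import math

def logistic(x, A=1.0, B=1.0, C=0.0):
    """The S-shaped logistic L(x; A, B, C) = A / (1 + e^(-B(x - C))).

    A is the upper asymptote, B the rapidity of the change, and C the
    x-coordinate of the inflection point."""
    return A / (1.0 + math.exp(-B * (x - C)))

# The curve pictured above, L(x; 1, 1, 0): it passes through (0, 0.5)
# at its inflection point and approaches 0 and 1 at the extremes.
print(logistic(0.0))    # 0.5
print(logistic(-6.0))   # ~0.0025
print(logistic(6.0))    # ~0.9975
```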

Logistic functions are how economists generally model input-output relationships, such as the relationship between wages and productivity. They’re surprisingly useful because they can capture a wide variety of mathematical phenomena, such as:

  • Linear relationships: as B -> 0, the relationship becomes locally linear around the inflection point, (C, A/2).
  • Discrete 0/1 relationships: as B -> infinity, the function approaches a “step function” whose value is A for x > C and 0 for x < C.
  • Exponential (accelerating) growth: If B > 0, L(x; A, B, C) is very close to being exponential at the far left (x << C). (Convexity.)
  • Saturation: If B > 0, L(x; A, B, C) is approaching A with exponential decay at the far right (x >> C). (Concavity.)
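For the skeptical, these limiting behaviors are easy to check numerically. A small sketch (mine, not from the original graphs; the specific B values are arbitrary):

```python
import math

def logistic(x, A, B, C):
    return A / (1.0 + math.exp(-B * (x - C)))

# As B grows, the curve approaches a step function: ~0 below C, ~A above C.
print(logistic(-0.1, 1.0, 1000.0, 0.0))  # ~0.0
print(logistic(+0.1, 1.0, 1000.0, 0.0))  # ~1.0

# As B shrinks, the curve is locally linear around the inflection point
# (C, A/2); the slope there is A*B/4.
eps = 1e-3
slope = (logistic(eps, 1.0, 0.01, 0.0) - logistic(-eps, 1.0, 0.01, 0.0)) / (2 * eps)
print(abs(slope - 1.0 * 0.01 / 4) < 1e-9)  # True
```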

Let’s keep inputs abstract but assume that we’re interested in some combination of skill, talent, effort, morale and knowledge called x with mean 0 and “typical values” between -1.0 and 1.0, meaning that we’re not especially interested in x = 10 because we don’t know how to get there. If C is large (e.g. C = 6) then we have an exponential function for all the values we care about: convexity over the entire window. Likewise, leftward C values (e.g. C = -6) give us concavity over the whole window.
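The claim about rightward C values producing convexity over the whole window can be made concrete: far left of the inflection point, equal steps in x multiply the output by a near-constant factor, which is the signature of exponential growth. A quick check (illustrative values of my own choosing, with B = 1 so the factor is about e):

```python
import math

def logistic(x, A, B, C):
    return A / (1.0 + math.exp(-B * (x - C)))

# With C = 6, the window [-1, 1] sits far left of the inflection point.
# Successive unit steps in x multiply the output by nearly e^B (here, e).
values = [logistic(x, 1.0, 1.0, 6.0) for x in (-1.0, 0.0, 1.0)]
ratios = [values[i + 1] / values[i] for i in range(2)]
print([round(r, 3) for r in ratios])  # both within 1% of e ~= 2.718
```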

Industrial work, over the past 200 years, has tended toward commoditization, meaning that (a) a yes/no quality standard exists, increasing B, and (b) it’s relatively easy for most properly set-up producers to meet it most of the time (with occasional error). The result is a curve that looks like this one, L(x; 10, 4.5, -0.7), which I’ll call a(x):

Variation, here, is mainly in incompetence. Another way to look at it is in terms of error rate. The excellent workers make almost no errors, the average ones achieve about 95.9% of what is possible (a 4.1% error rate), the mediocre (x = -0.5) make roughly seven times as many mistakes (28.9% error rate), and the abysmal are unemployable, with an error rate well over 50%. This is what employment has looked like for the past two hundred years. Why? Because an industrial process is better modeled as a complex network of these functions, with outputs from one being inputs to another. The relationships of individual wage to morale, morale to performance, performance to productivity, individual productivity to firm productivity, and firm productivity to profitability can all be modeled as S-shaped curves. With this convoluted network of “hidden nodes” that exists in the context of a sophisticated industrial operation, it’s generally held to be better to have a consistently high-performing (B high, C negative) node than a higher-performing but variable one.
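Those error rates fall directly out of the curve a(x) = L(x; 10, 4.5, -0.7). A sketch in Python (the sample x values are my own choices for the worker archetypes):

```python
import math

def a(x):
    """The commoditized-work curve L(x; 10, 4.5, -0.7)."""
    return 10.0 / (1.0 + math.exp(-4.5 * (x + 0.7)))

def error_rate(x):
    """Shortfall from the maximum possible yield, A = 10."""
    return 1.0 - a(x) / 10.0

# Excellent (x = 1.0), average (x = 0.0), mediocre (x = -0.5),
# and abysmal (x = -1.0) workers.
for x in (1.0, 0.0, -0.5, -1.0):
    print(f"x = {x:+.1f}: error rate {error_rate(x):.1%}")
```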

One way to understand the B in the above equation is that it represents how reliably the same result is achieved, noting the convergence to a step function as B goes to infinity. In this light, we can understand mechanization. Middle grades of work rarely exist with machines. In the ideal, they either execute perfectly, or fail perfectly (and visibly, so one can repair them). Further refinements to this process are seen in the changeover from purely mechanical systems to electronic ones. It’s not always this way, even with software. There are nondeterministic computer behaviors that can produce intermittent bugs, but they’re rare and far from the ideal.

As I’ve discussed, if we can define perfect performance (i.e. we know what A, the error-free yield, looks like) then we can program a machine to achieve it. Concave work is being handed over to machines, with the convex tasks remaining available. With convexity, it’s rare that one knows what A and B are. On explored values, the graph just looks like this one, for L(x; 200, 2.0, 1.5), which I’ll call b(x):

It shows no signs of leveling off and, for all intents and purposes, it’s exponential. This is usually observed for creative work where a few major players (the “stars”) get outsized rewards in comparison to the average people.

Convexity Isn’t Fair

Let’s say that you have two employees, one of whom (Alice) is slightly above average (x = 0.1) and the other of whom (Bob) is just average (x = 0.0). You have the resources to provide 1.0 full point of training, and you can split it any way you choose (e.g. 0.35 points for Alice, and 0.65 points for Bob). Now, let’s say that you’re managing concave work, modeled by the function L(x; 100, 2.0, -0.3).

Let the x-axis represent the amount of training (0.0 to 1.0) given to Alice, with the remainder given to Bob. Here’s a graph of their individual productivity levels, with Alice in blue, Bob in purple, and their sum productivity in green.

If we zoom in to look at the sum curve, we see a maximum at x = 0.45, an interior solution where both get some training.

At x = 0.0 (full investment in Bob) Alice is producing 69.0 points and Bob’s producing 93.1, for a total of 162.1.

At x = 0.5 (even split of training) Alice is producing 85.8 points and Bob’s producing 83.2, for a total of 169.0.

At x = 1.0 (full investment in Alice) Alice is producing 94.3 points and Bob’s producing 64.6, for a total of 158.9.

The maximal point is x = 0.45, which means that Alice gets slightly less training because Bob is further behind and needs it more. Both end up producing 84.55 points, for a total of 169.1. After the training is disbursed, they’re at the same level of competence (0.55). This is a “share the wealth” interior optimum that justifies sharing the training.
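The interior optimum is easy to confirm with a brute-force scan (a sketch of mine; the grid granularity and variable names are arbitrary):

```python
import math

def logistic(x, A, B, C):
    return A / (1.0 + math.exp(-B * (x - C)))

def total(t):
    """Sum productivity when Alice (x = 0.1) gets t points of training
    and Bob (x = 0.0) gets the remaining 1 - t, on concave work
    L(x; 100, 2.0, -0.3)."""
    alice = logistic(0.1 + t, 100.0, 2.0, -0.3)
    bob = logistic(0.0 + (1.0 - t), 100.0, 2.0, -0.3)
    return alice + bob

# Scan all splits in steps of 0.001; the best one is interior.
splits = [i / 1000 for i in range(1001)]
best = max(splits, key=total)
print(best, round(total(best), 1))  # 0.45 169.1
```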

Let’s change to a convex world, with the function L(x; 320, 2.0, 1.1). Then, for the same problem, we get this graph (blue representing Alice’s productivity, purple representing Bob’s, and the green curve representing the sum):

Zooming in on the graph of sum productivity, we find that the “fair” solution (x = 0.45) is the worst!

At x = 0.0 (full investment in Bob) Alice is producing 38.1 points and Bob’s producing 144.1, for a total of 182.2.

At x = 0.5 (even split of training) Alice is producing 86.1 points and Bob’s producing 74.1, for a total of 160.2.

At x = 1.0 (full investment in Alice) Alice is producing 160.0 points and Bob’s producing 31.9, for a total of 191.9.

The maxima are at the edges. The best strategy is to give Alice all of the training, but giving all to Bob is better than splitting it evenly, which is about the worst of the options. This is a “starve the poor” optimum. It favors picking a winner and putting all the investment into one party. This is how celebrity economies work. Slight differences in ability lead to massive differences in investment and, ultimately, create a permanent class of winners. Here, choosing a winner is often more important than getting “the right one” with the most potential.
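The same brute-force scan on the convex curve shows the optima flipping to the edges (again, a sketch with variable names of my own):

```python
import math

def logistic(x, A, B, C):
    return A / (1.0 + math.exp(-B * (x - C)))

def total(t):
    """Sum productivity on convex work L(x; 320, 2.0, 1.1) when Alice
    (x = 0.1) gets t points of training and Bob (x = 0.0) gets 1 - t."""
    alice = logistic(0.1 + t, 320.0, 2.0, 1.1)
    bob = logistic(1.0 - t, 320.0, 2.0, 1.1)
    return alice + bob

splits = [i / 1000 for i in range(1001)]
best = max(splits, key=total)
worst = min(splits, key=total)
print(best, round(total(best), 1))    # 1.0 191.9: give Alice everything
print(worst, round(total(worst), 1))  # 0.45: the "fair" split loses
```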

Convexity pertains to decisions that don’t admit interior maxima, or for which such solutions don’t exist or make sense. For example, choosing a business model for a new company is convex, because putting resources into multiple models would result in mediocre performance in all of them, thus failure. The rarity of “co-CEOs” seems to indicate that choosing a leader is also a convex matter.

Convexity is hard to manage

In optimization, convex problems tend to be the easier ones, so the nomenclature here might be strange. In fact, this variety of convexity is the exact opposite of convexity in labor. Optimization problems are usually framed in terms of minimization of some undesirable quantity like cost, financial risk, statistical error, or defect rate. Zero is the (usually unattainable) perfect state. In business, that would correspond to the assumption that an industrial apparatus has an idealized business model and process, with the management’s goal to drive execution error to zero.

What makes convex minimization methods easier is that, even in a high-dimensional landscape, one can converge to the optimal point (global minimum) by starting from anywhere and iteratively stepping in the direction recommended by local features (usually, first and second derivative). It’s like finding the bottom point in a bowl. Non-convex optimizations are a lot harder because (a) there can be multiple local optima, which means that starting points matter, and (b) the local optima might be at the edges, which has its own undesirable properties (including, with people, unfairness). The amount of work required to find the best solutions is exponential in the number of dimensions. That’s why, for example, computers can’t algorithmically find the best business model for a “startup generator”. Even if it were a well-formed problem, the dimensionality would be high and the search problem intractable (probably).
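
To see the “bowl” intuition concretely, here’s a minimal sketch on toy functions of my own (nothing from the graphs above): gradient descent on a convex function finds the single minimum from any starting point, while on a non-convex “double well” the starting point decides which local minimum you reach.

```python
def descend(grad, x, lr=0.05, steps=500):
    """Plain gradient descent: repeatedly step against the local gradient."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Convex: f(x) = x^2. One minimum (x = 0); every start converges to it.
quad_grad = lambda x: 2.0 * x
print([round(descend(quad_grad, s), 6) for s in (-3.0, 0.7, 5.0)])

# Non-convex: g(x) = (x^2 - 1)^2. Two minima (x = -1 and x = +1);
# which one you find depends entirely on where you start.
well_grad = lambda x: 4.0 * x * (x * x - 1.0)
print(descend(well_grad, 0.5), descend(well_grad, -0.5))
```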

Convex labor is analogous to non-convex optimization problems while management of concave labor is analogous to convex optimization. Sorry if this is confusing. There’s an important semantic difference to highlight here, though. With concave labor, there is some definition of perfect completion so that error (departure from that) can be defined and minimized with a known lower bound: 0. With convex labor, no one knows what the maximum value is, because the territory is unexplored and the “leveling off” of the logistic curve hasn’t been found yet. It’s natural, then, to frame that as a maximization problem without a known bound. With convex labor, you don’t know what the “zero-or-max” point is because no one knows how well one can perform.

Concave labor is the easy, nice case from a managerial perspective. While management doesn’t literally implement gradient descent, it tends to be able to self-correct when individual labor is concave (i.e. the optimization problem is convex). If Alice starts to pull ahead while Bob struggles, management will offer more training to Bob.

However, in the convex world, initial conditions matter. Consider the Alice-Bob problem above with the convex productivity curve, and the fact that splitting the training equitably is the worst possible solution. Management would ideally recognize Alice’s slight superiority and give her all the training, thus finding the optimal “edge case”. But what if Bob managed (convex dishonesty) to convince management that he was slightly superior to Alice, so that the search started at, say, x = 0.2? Then Bob would get all the training, and Alice would get none, and management would converge on a sub-optimal local maximum. That is the essence of corporate backstabbing, is it not? Management’s increasing awareness of convexity in intellectual work means that it will tend to double down its investment in winners and toss away (fire) the losers. Thus, subordinates put considerable effort into creating the appearance of high potential for the sake of driving management to a local maximum that, if not necessarily ideal for the company, benefits them. That’s what “multiple local optima” means, in practical terms.
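
A toy model (illustrative logistic parameters, my own) makes the point: local “hill climbing” over the training split lands on different edges depending on where it starts, and only one of those edges is the global optimum.

```python
import math

def logistic(x, A=100.0, B=4.5, C=1.5):
    return A / (1.0 + math.exp(-B * (x - C)))

def total_output(s):
    # Share s of the training to Alice (base 0.1), rest to Bob (base 0.0).
    return logistic(0.1 + s) + logistic(1.0 - s)

def hill_climb(s, step=0.01):
    """Greedy local search over the split s, clamped to [0, 1]."""
    while True:
        candidates = [max(0.0, s - step), s, min(1.0, s + step)]
        best = max(candidates, key=total_output)
        if best == s:
            return s
        s = best

low = hill_climb(0.2)   # a mostly-Bob starting belief ends at the Bob edge
high = hill_climb(0.8)  # a mostly-Alice start finds the better Alice edge
print(low, high, total_output(low) < total_output(high))
```

Both runs terminate at an edge; the Bob edge is a genuine local maximum, just an inferior one.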

The traditional three-tiered corporation has a firm distinction between executives and managers (the third tier being “workers”, who are treated as a landscape feature), and this structure pertains to convexity. Because business problems are never entirely concave and orderly, the local “hill climbing” is left to managers, while the convex problems (which, like choosing initial conditions, require non-local insight) such as selecting leaders and business models are left to executives.

Yet with everything concave being performed, or soon to be performed, by machines, we’re seeing convexity pop up everywhere. The question of which programming languages to learn is a convex decision that non-managerial software engineers have to make in their careers. Picking a specialty is likewise convex; convexity is why it’s valuable to specialize. The most talented people today are becoming self-executive, which means that they take responsibility for non-local matters that would otherwise be left to executives, including the direction of their own careers. This, however, leads to conflicts with authority.

Older managers often complain about Millennial self-executivity and call it an attitude of entitlement. Actually, it’s the opposite. It’s disentitlement. When you’re entitled, you assume social contracts with other people and become angry when (from your perception) they don’t hold up their end. Millennials leave jobs, and furtively use slow periods to invest in their careers (e.g. in MOOCs) rather than asking for more work. That’s not an act of aggression or disillusion; it’s because they don’t believe the social contract ever existed. It’s not that they’re going to whine about a boss who doesn’t invest in their career– that would be entitlement– because that would do no good. They just leave. They weren’t owed anything, and they don’t owe anything. That’s disentitlement.

Convexity is bad for your job security

Here’s some scary news. When it comes to convex labor, most people shouldn’t be employed. First, let me show a concave input-output graph for worker productivity, assuming a uniform distribution of worker ability from -1.0 to 1.0. Our model also assumes this ability statistic to be inflexible; there’s no training effect.

The blue line, at 82.44, represents the mean worker in the population. Why’s this important? It represents the expected productivity of a new hire off the street. If you’re at the median (x = 0.0) or even a bit below it, you are “above average”. It’s better to retain you than to bring someone in off the street. Let’s say that John is a 40th-percentile (x = -0.2) hire, which means that his productivity is 90. A random person hired off the street will be better than John 60% of the time. However, the upside is limited (10 points at most) and the downside (possibly 70 points) is immense, so, on average, replacing him is a terrible trade. It’s better to keep John (a known mediocre worker) on board than to replace him.
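
The “keep the known mediocre worker” logic is just Jensen’s inequality: for a concave curve, the average of the outputs is below the output at the average ability. Here’s a sketch with an illustrative concave curve of my own (so the numbers won’t match the 82.44 from the graph above):

```python
import math

def productivity(x, A=100.0, B=4.5, C=-1.0):
    """Logistic curve on its concave (right) side: abilities in [-1, 1]
    all sit at or above the inflection point C = -1."""
    return A / (1.0 + math.exp(-B * (x - C)))

# Expected productivity of a random hire, ability uniform on [-1, 1].
n = 10001
abilities = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
mean_hire = sum(productivity(a) for a in abilities) / n

john = productivity(-0.2)  # a below-median (40th percentile) worker
print(round(mean_hire, 1), round(john, 1))  # John beats the average hire
```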

With a convex example, we find the opposite to be true:

Here, we have an arrangement in which most people are below the mean, so we’d expect high turnover. Management, one expects, would be inclined to hire people on a “try out” basis with the intention of throwing most of them back on the street. An average or even good (x = 0.5) hire should be thrown out in order to “roll the dice” with a new hire who might be the next star. Is that how managers actually behave? No, because there are frictional and morale reasons not to fire 80% of your people, and because this model’s assumption that people are inflexibly set at a competence level is not entirely true for most jobs, and those where it is true (e.g. fashion modeling) make it easy for management to evaluate someone before a hire is made. In-house experience matters. That is, however, how venture capital, publishing and record labels work. Once you turn out a couple failures, with those being the norm, it might still be that you’re a high performer who’s been unlucky, but you’re judged inferior to a random new entrant (with more upside potential) and flushed out of the system.
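
This is Jensen’s inequality in the convex direction: the average of the outputs now exceeds the output at the average ability, so a random draw off the street beats the typical incumbent in expectation, and most workers sit below the mean. Again with illustrative parameters of my own:

```python
import math

def productivity(x, A=100.0, B=4.5, C=1.0):
    """Same logistic, but abilities in [-1, 1] now sit at or below the
    inflection point C = 1, i.e. on the convex side."""
    return A / (1.0 + math.exp(-B * (x - C)))

n = 10001
abilities = [-1.0 + 2.0 * i / (n - 1) for i in range(n)]
mean_hire = sum(productivity(a) for a in abilities) / n
median_worker = productivity(0.0)
share_below_mean = sum(1 for a in abilities if productivity(a) < mean_hire) / n
print(round(mean_hire, 2), round(median_worker, 2), round(share_below_mean, 2))
```

In this toy setup, well over half the population produces less than a random new hire would in expectation– which is the mechanism behind the “try out and throw back” incentive.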

In the real world, it’s not so severe. We don’t see 80% of people being fired, and the reason is that, for most jobs, learning matters. The above applies to work in which there’s no learning process and each worker is inflexibly set at a certain, perfectly measurable productivity level. That’s not how the world really works. In-born talent is one relevant input, but there are others, like skill, in-house experience, and education, that have defensive properties and preserve a person’s job security. People can often get themselves above the mean with hard work.

Second: the model above assumes workers are paid equally, which is not the case for most convex work. In the convex model above, the star (x = 1.0) might command several times the salary of the average performer (x = 0.0)– and he should. That compensation inequality actually creates job security for the rest of them. If the best people didn’t charge more for their work, then employers would be inclined to fire middling performers in search of a bargain.

This may be one of the reasons why there is such high turnover in the software industry. You can’t get a seasoned options trader for under $250,000 per year, but you can get excellent programmers (who are worth 5-10 times that amount, if given the right kind of work) for less than half of that. This is often individually justified (by the engineer) with an attitude of, “well, I don’t need to be paid millions; I care more about interesting work”. As an individual behavior, that’s fine, but it might be why so many software employers are so quick to toss engineers aside for dubious reasons. Once the manager concludes that the individual doesn’t have “star” potential, it’s worth it to throw out even a good engineer and try again for a shot at a bargain, considering the number of great engineers at mediocre salary levels.

One thing I’ve noticed in software (which is highly convex) is that there’s a cavalier attitude toward firing, and it’s almost certainly related to that “star economy” effect. What’s different is that software convexity has a lot of inputs other than personal ability– project/person fit, tool familiarity, team cohesion, and a lot of factors so hard to detect that they feel like pure luck– in the mix, so the “toss aside all but the best” strategy is severely defective, at least for a larger organization, which should instead be enabling people to find better-fitting projects; amid convexity, that makes a lot of sense. That’s one of the reasons why I am so dogmatic about open allocation, at least in big companies.

Convexity is risky

Job insecurity amid convexity is an obvious problem, but not damning. If there’s a fixed demand for widgets, a competitor who can produce 10 times more of them is terrifying, because it will crash prices and put everyone else out of business (and, then, become a monopolist and raise them). Call that “red ocean convexity”, where the winners put the losers out of business because a “10X” performer takes 9X from someone else. However, if demand is limitless, then the presence of superior players isn’t always a bad thing. A movie star making $3 million isn’t ruined by one making $40 million. The arts are an example of “blue ocean convexity”, insofar as successful artists don’t make the others poorer, but increase the aggregate demand for art. It’s not “winner-take-all” insofar as one doesn’t have to be the top player to add something people value.

Computational problem solving (not “programming”) is a field where there’s very high demand, so the fact that top performers will produce an order of magnitude more value (the “10X effect”) doesn’t put the rest out of business. That’s a very good thing, because most of those top performers were among “the rest” when they started their career. Not only is there little direct competition, but as software engineers, we tend to admire those “10X” people and take every opportunity we can get to learn from them. If there were more of them, it wouldn’t make us poorer. It would make the world richer.

Is demand for anything limitless, though? For industrial products, no. Demand for televisions, for example, is limited by people’s need for them and space to put them. For making people’s lives better, yes. For improving processes, sure. Generation of true wealth (as Paul Graham defines it: “stuff people want”) is something for which there’s infinite demand, at least as far as we can see. So what’s the limiting factor? Why can’t everyone work on blue-ocean convex work that makes people’s lives better? It comes down to risk. So, let’s look at that. The model I’m going to use is as follows:

  • We only care about the immediate neighborhood of a specific (“typical”) competence level. We’ll call it x = 0.
  • Tasks have a difficulty t between -1.0 and 2.0, which represents the C in the logistic form. B is going to be a constant 4.5; just ignore that. 
  • The harder a task is, the higher the potential payoff. Thus, I’ll set A = 100 * (1 + e^(5*t)). This means that work gets more valuable slightly faster (11% faster) than it gets harder (“risk premium”). The constant term in A is based on the understanding that even very easy (difficulty of -1.0) work has value insofar as it’s time-consuming and therefore people must be paid to do it.
  • We measure risk for a given difficulty t by taking the first derivative of L(x; …), with respect to x, at x = 0. Why? L’(x; …) tells us how sensitive the output (payoff) is to marginal changes in input. We’re modeling unknown input variables and plain luck factors as a random, zero-mean “noise” variable d and assuming that for known competence x the true performance will be L(x + d; …). So this first derivative tells us, at x = 0, how sensitive we are to that unknown noise factor.
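
Under those assumptions, yield and risk at x = 0 can be computed directly; the only extra ingredient is the standard logistic identity L′(x) = B·L(x)·(1 − L(x)/A). A sketch:

```python
import math

B = 4.5

def payoff(t):
    """Maximum payoff A for a task of difficulty t."""
    return 100.0 * (1.0 + math.exp(5.0 * t))

def yield_at(t):
    """L(0; A, B, C=t): expected payoff at known competence x = 0."""
    A = payoff(t)
    return A / (1.0 + math.exp(-B * (0.0 - t)))

def risk_at(t):
    """dL/dx at x = 0, using the identity L' = B * L * (1 - L/A)."""
    l = yield_at(t)
    return B * l * (1.0 - l / payoff(t))

ts = [-1.0 + 3.0 * i / 300 for i in range(301)]     # difficulty in [-1, 2]
risks = [risk_at(t) for t in ts]
sharpes = [yield_at(t) / risk_at(t) for t in ts]    # yield-to-risk ratio
print(round(risks[0], 2), round(risks[-1], 2))      # risk rises with difficulty
print(round(sharpes[0], 3), round(sharpes[-1], 3))  # yield-per-risk falls
```

Running this reproduces the qualitative pictures below: risk grows monotonically with difficulty, and the yield-to-risk ratio decays toward a constant.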

What we want to do is assess the yield (expected value) and risk (first derivative of yield) for difficulty levels from -1 to 2 when known x = 0. Here’s a graph of expected yield:

It’s hard to notice on that graph, but there’s actually a slight “dip” or “uncanny valley” as one goes from the extreme of easiness (t = -1.0) to slightly harder (-1.0 < t < 0.0) work:

Does it actually work that way in the real world? I have no idea. What causes this in the model is that, as we go from the ridiculously easy (t = -1.0) to the merely moderately easy (t = -0.5), the rate of potential failure grows faster than the maximum potential A does, as a function of t. That’s an artifact of how I modeled this, and I don’t know for sure that a real-world market would have this trait. Actually, I doubt it would. It’s a small dip, so I’m not going to worry about it. What we do see is that our yield is approximately constant as a function of difficulty for t from -1.0 to 0.0, where the work is concave for that level of skill; then it grows exponentially as a function of t from 0.0 to 2.0, where the work is convex. That is what we tend to see on markets. The maximal market value of work (1 + e^(5*t) in this model) grows slightly faster than the difficulty of completing it (1 + e^(4.5*t), here).

However, what we’re interested in is risk, so let me show that as well by graphing the first derivative of L with respect to x (not t!) for each t.

What this shows us, pretty clearly, is monotonic risk increase as the tasks become more difficult. That’s probably not too surprising, but it’s nice to see what it looks like on paper. Notice that the easy work has almost no risk involved. Let’s plot these together. I’ve taken the liberty of normalizing the risk formula (in purple) to plot them together, which is reasonable because our units are abstract:

Let’s look at one other statistic, which will be the ratio between yield and risk. In finance, this is called the Sharpe Ratio. Because the units are abstract (i.e. there’s no real meaning to “1 unit” of competence or difficulty) there is no intrinsic meaning to its scale, and therefore I’ve again taken the liberty of normalizing this as well. That ratio, as a function of task difficulty, looks like this…

…which looks exactly like affine exponential decay. In fact, that’s what it is. The Sharpe Ratio is exponentially favorable for easy work (t < 0.0) and approaches a constant value (1.0 here, because of the normalization) for large t.
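
That closed form falls out of the model directly. Using the logistic identity L′ = B·L·(1 − L/A) at x = 0 (where L(0) = A/(1 + e^(B·t)), since C = t), the yield-to-risk ratio is:

```latex
\frac{L(0)}{L'(0)}
  \;=\; \frac{1}{B\,\bigl(1 - L(0)/A\bigr)}
  \;=\; \frac{1 + e^{Bt}}{B\,e^{Bt}}
  \;=\; \frac{1}{B}\left(1 + e^{-Bt}\right)
  \;=\; \frac{1}{4.5}\left(1 + e^{-4.5\,t}\right)
```

That’s a constant (1/4.5, normalized to 1 above) plus a decaying exponential– hence the affine exponential decay. Note that A cancels entirely, so the task’s payoff scale doesn’t affect the ratio at all.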

What’s the meaning of all this? Well, traditionally, the industrial problem was to maximize yield on capital within a finite “risk budget”. If that’s the case– you’re constrained by some finite amount of risk– then you want to select work according to the Sharpe Ratio. Concave tasks might have less yield, but they’re so low in risk that you can do more of them. For each quantum of risk in your budget, you want to get the most yield (expected value) out of it that you can. This favors extremely concave labor, and it’s why industrial labor, for the past 200 years, has been almost all concave. Boring. Reliable. In many ways, the world still is concave, and that’s a desirable thing. Good enough is good enough. However, it just so happens that when we, as humans, master a concave task, we tend to look for the convex challenge of making it run itself. In pre-technological times, this was done by giving instructions to other people, and by making machines as easy as possible for humans to use. In the technological era, it’s done with computers and code. Even the grunt work of coding is given to programs (we call them compilers) so we can focus on the interesting stuff. We’re programming all of that concave work out of human hands. Yes, concave work is still the backbone of the industrial world and always will be. It’s just not going to require humans doing it.

What if, instead, the risk budget weren’t an issue? Let’s say that we have a team of 5 programmers given a year to do whatever they want, and the worst they can do is waste their time, and you’re okay with that maximal-risk outcome (5 annual salaries for a learning experience). They might build something amazing that sells for $100 million, or they might work for a year and have the project still fail on the market. Maybe they do great work, but no one wants it; that’s a risk of creation. In this case, we’re not constrained by risk allocation but by talent. We’ve already accepted the worst possible outcome as acceptable. We want them to be doing convex work, which has the highest yield. Those top-notch people are the limiting resource, not risk allocation.

Convexity requires teamwork

Above, I established that if individual productivity is a convex function of investment in that person, and group performance is a sum of individual productivity, then the optimal solution is to ply one person with resources and starve (and likely fire) the rest. Is that how things actually work? No, not usually. There’s a glaring false assumption: the additive model in which group performance is a simple sum of individual performances. Real team efforts don’t– or at least shouldn’t– work that way.

When a team is properly configured, most of their efforts don’t merely add to some pile of assets; they multiply each other’s productivity. Each works to make the others more successful. I wrote about this advancement of technical maturity (from adder to multiplier) as it pertains to software, but I think it’s more general. Warning: incompetent attempts at multiplier efforts are every bit as toxic as incompetent management and will have a divider effect.
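
To make that concrete, take the same kind of convex individual curves as in the Alice/Bob example (illustrative parameters, my own) and compare an additive group model with a multiplicative one. The additive model pushes all investment to an edge; the multiplicative model makes a near-even split optimal, because the logarithm of a logistic curve is concave even where the curve itself is convex.

```python
import math

def logistic(x, A=100.0, B=4.5, C=1.5):
    # Individual inputs in [0, 1.1] lie on the convex side of the curve.
    return A / (1.0 + math.exp(-B * (x - C)))

def additive(s):
    """Group output as a simple sum of individual outputs."""
    return logistic(0.1 + s) + logistic(1.0 - s)

def multiplicative(s):
    """Group output when members multiply each other's effectiveness."""
    return logistic(0.1 + s) * logistic(1.0 - s)

splits = [i / 100.0 for i in range(101)]
print(max(splits, key=additive))        # an edge: starve one member
print(max(splits, key=multiplicative))  # interior: invest in everyone
```

So the “starve the poor” optimum is an artifact of assuming away synergy: once efforts multiply, broad investment wins.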

Team convexity is unusual in that both sides of the logistic “S-curve” are observed. You have synergy (convexity) as the team scales up to a certain size, but congestion (concavity) beyond a certain point. It’s very hard to get team size and configuration right, and typical “Theory Z” management (which attempts to coerce a heterogeneous set of people who didn’t choose each other, and probably didn’t choose the project, into being a team) generally fails at this. It can’t be managed competently from a top-down perspective, despite what many executives say (they are wrong). It has to be grass-roots self-organization. Top-down, closed-allocation management can work well in the Alice/Bob models above, where productivity is the sum of individual performances (i.e. team synergies aren’t important), but it fails catastrophically on projects that require interactive, multiplicative effects in order to be successful.

Convexity has different rules

The technological economy is going to be very different, because of the way business problems are formulated. In the industrial economy, capital was held in some fixed amount by a business, whose goal was to gain as much yield (profit or interest) from it while keeping risk within certain bounds deemed acceptable. That made concavity desirable. It still is; stable income with low variation is always a good thing. It’s just that such work no longer requires humans. Concave work has been so commoditized that it’s hard to get a passive profit from it.

Ultimately, I think a basic income is the only way society will be able to handle widespread convexity of individual labor. What does convexity say about the future? People will either be very highly compensated, or effectively unemployed. There will be an increasing need for unpaid learning while people push themselves from the low, flat region of a convex curve to the high, steep part. Right now, we have a society where people with the means to indulge in that can put themselves on a strong career track, but the majority, who have a lifelong need for monthly income, end up getting shafted: they become a permanent class of unskilled labor and, by keeping wages low, they actually hold back technological advancement.

Industrial management was risk-reductive. A manager took ownership of some process and his job was to look for ways it could fail, then tried to reduce the sources of error in that process. The rare convex task (choosing a business strategy) was for a higher order of being, an executive. Technological management has to embrace risk, because all the concave work’s being taken by machines. In the future, it will only be economical for a human to do something when perfect completion is unknown or undefinable, and that’s the convex work.

A couple more graphs deserve attention, because both pertain to managerial goals. There are two ways that a manager can create a profit. One is to improve output. The other is to reduce costs. Which is favorable? It depends. Below is a graph that shows productivity ($/hour) as a function of wages for some task where performance is assumed to be convex in wages. The relationship is assumed here to be inflexible and to go both ways: better people will expect more in wages, and low wages will cause people’s out-of-work distractions to degrade their performance. Plotted in purple is the y = x or “break-even” line.

As one can see, it doesn’t even make sense to hire people for this kind of work at less than $68/hour: they’ll produce less than they cost. That “dip” is an inherent problem for convex work. Who’s going to pay people in the $50/hour range so they can become good and eventually move to the $100/hour range (where they’re producing $200/hour work)? This naturally tends toward a “winners and losers” scenario. The people who can quickly get themselves to the $70/hour productivity level (through the unpaid acquisition of skill) are employable, and will continue to grow; the rest will not be able to justify wages that sustain them. The short version: it’s hard to get into convex work.
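
Here’s a sketch of that break-even structure with an invented convex wage-to-productivity curve. The parameters are mine, not the graphed curve’s, so the crossing won’t land exactly at $68:

```python
import math

def productivity(w):
    """Hourly output ($) as a function of hourly wage; convex over the
    wage range of interest. Illustrative parameters only."""
    return 200.0 / (1.0 + math.exp(-0.08 * (w - 80.0)))

# Scan upward from $20/hr (skipping the trivial near-zero region) for the
# first wage at which a worker produces at least what he or she costs.
breakeven = next(w / 10.0 for w in range(200, 2000)
                 if productivity(w / 10.0) >= w / 10.0)
print(breakeven)   # below this wage, hiring for this work loses money
```

The qualitative feature survives any choice of parameters with this shape: there’s a wage band in which workers cost more than they produce, and that band is exactly where people would need to sit while learning.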

Here’s a similar graph for concave work:

… and here’s a graph of the difference between productivity and wage, or per-hour profit, on each worker:

So the optimal profit is achieved at $24.45 per hour, where the worker provides $56.33 worth of work in that time. It doesn’t seem fair, but improvements to wages beyond that, while they improve productivity, do not improve it by enough to justify the additional cost. That’s not to say that companies will necessarily set wages to that level. (They might raise them higher to attract more workers, increasing total profit.) Also, here is a case where labor unions can be powerful (they aren’t especially helpful with convex work): in the above, the company would still earn a respectable profit on each worker with wages as high as $55 per hour, and wouldn’t be put out of business (despite management’s claim that “you’ll break us” at, say, $40) until almost $80.
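
A sketch of this concave case with invented parameters of my own (so the dollar figures differ from the graph’s $24.45 and $56.33): find the profit-maximizing wage and the highest wage at which the employer still profits.

```python
import math

def productivity(w):
    """Hourly output ($) as a concave function of hourly wage over
    the range of interest. Illustrative parameters only."""
    return 90.0 / (1.0 + math.exp(-0.15 * (w - 10.0)))

wages = [w / 10.0 for w in range(100, 901)]        # $10 to $90 per hour
profit = {w: productivity(w) - w for w in wages}   # employer's per-hour take

w_optimal = max(wages, key=lambda w: profit[w])
w_highest_profitable = max(w for w in wages if profit[w] > 0)
print(w_optimal, w_highest_profitable)
```

The gap between those two numbers is the union’s bargaining room in this toy model: wages can rise far above the employer’s preferred point before the profit actually disappears.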

The tendency of corporate management toward cost-cutting, “always say no”, and Theory-X practices is an artifact of this concave result. So while I can argue that “convexity is unfair” insofar as it encourages inequality of investment and resources, enabling small differences in initial conditions to produce a winner-take-all outcome, concavity produces its own variety of unfairness, since it often encourages wages to settle at a very low level, where employers take a massive surplus.

The most important problem…?

That was a lot about convexity, but I believe the changeover to convexity in individual labor is the most important economic issue of the 21st century. If we want to understand why the contemporary, MacLeod-hierarchical organization won’t survive it, we need a deep understanding of what convexity is and how it works. I think we have that, now.

What does this have to do with Work Sucking? Well, there are a few things we get out of it. First, for the concave work that most of the labor force is still doing…

  • Concave (“commodity”) labor leads to grossly unfair wages. This creates a natural antagonism between workers and management on the issue of wage levels. 
  • Management has a natural desire to reduce risk and cut costs, on an assumption of concavity. It’s what they’ve been doing for over 200 years. When you manage concave work, that’s the most profitable thing to do.
  • Management will often take a convex endeavor (e.g. computer programming) and try to treat it as concave. That’s what we, in software, call the “commodity developer” culture that clueless software managers try to shove down hapless engineers’ throats.
  • Stable, concave work is disappearing. Machines are taking it over. This isn’t a bad thing (on the contrary, it’s quite good) but it is eroding the semi-skilled labor base that gave the developed world a large middle class.

Now, for the convex:

  • Convex work favors low employment and volatile compensation. It’s not true that there “isn’t a lot of convex work” to go around. In fact, there’s a limitless amount of demand for it. However, one has to be unusually good for a company to justify paying for it at a level one could live on, because of the risk. Without a basic income in place, convexity will generate an economy where income volatility is at a level beyond what people are able to accept. As a firm believer in the need for market economies, I think this must be addressed.
  • Convex payoffs produce multiple optima on personnel matters (e.g. training, leadership). This sounds harmless until one realizes that “multiple optima” is a euphemism for “office politics”. It means there isn’t a clear meritocracy, as performance is highly context-sensitive.
  • Convex work often creates a tension between individual competition and teamwork. Managers attempting to grade individuals in isolation will create a competitive focus on individual productivity, because convexity rewards acceleration of small individual differences. This managerial style works for simple additive convexity, but fails in an organization that needs people to have multiplicative or synergistic effects (team convexity) and that’s most of them.

Red and blue ocean convexity

One of the surprising traits of convexity, tied in with the matter of teamwork, is that it’s hard to predict whether it will be structurally cooperative or competitive. This leads me to believe that there are fundamental differences between “red ocean” and “blue ocean” varieties of convexity. For those unfamiliar with the terms, red ocean refers to well-established territory in which competition is fierce. There’s a known high quantity of resources (“blood in the water”) available, but there’s a frenzy of people (some with considerable competitive advantages) working to get at it. It’s fierce, and if you aren’t strong, the better predators will crowd you out. Blue ocean refers to unexplored territory where the yields are unknown but the competition’s less fierce (for now).

I don’t know this industry well, but I would think that modeling is an example of red-ocean convexity. Small differences in input (physical attractiveness and skill at self-marketing) result in massive discrepancies of output, but there’s a small and limited amount of demand for the work. If there’s a new “10X model” on the scene, all the other models are worse off, because the supermodel takes up all of the work. For example, I know that some ridiculous percentage of the world’s hand-modeling is performed by one woman (who cannot live a normal life, due to her need to protect her hands).

What about professional sports, the distilled essence of competition? Blue ocean. Yep. That might seem surprising, given that these people often seem to want to kill each other, but the economic goal of a sports team is not to win games, but to play great games that people will pay money to watch. A “10X” player might revitalize the reputation of the sport, as Tiger Woods did for golf, and expand the audience. Top players actually make a lot of money for the opponents they defeat; the stars get a larger share of the pool, meaning their opponents get a smaller percentage, but they also expand that pool so much that everyone gets richer.

How about the VC-funded startup ecosystem? That’s less clear. Business formation is blue ocean convexity, insofar as there are plenty of untapped opportunities to add immense value, and they exist all over the world. However, fund-raising (at least, in the current investor climate) and press-whoring are red ocean convexity: a few already-established (and complacent) players get the lion’s share of the attention and resources, giving them an enormous head start. Indeed, this is the point of venture capital in the consumer-web space: use the “rocket fuel” (capital infusion) to take a first-entrant advantage before anyone else has a shot.

Red and blue ocean convexity are dramatically different in how they encourage people to think. With red-ocean convexity, it’s truly a ruthless, winner-take-all space, because the superior, 10X player will force the others out of business. You must either beat him or join him. I recommend “join”. With blue-ocean convexity (which is the force that drives economic growth) outsized success doesn’t come at the expense of other people. In fact, the relationship may be symbiotic and cooperative. For example, great programmers build tools that are used all over the world and make everyone better at their jobs. So while there is a lot of inequality in payoffs– Linus Torvalds makes millions per year; I use his tools– that’s simply how convexity works, and it’s not necessarily a bad thing, because everyone can win.

Convexity and progress

Convexity’s most important property concerns progress over time. Real-world convexity curves are often steeper than the ones graphed above and, if there isn’t a role for learning, the vast majority of people will be unable to achieve at a level that supports an income, and will thus be unemployed. For example, while practice is key in (highly convex) professional sports, there aren’t many people who have the natural talent to earn a living at it. Convexity shuts out those without natural talent. Luckily for us and the world, most convex work isn’t so heavily influenced by natural limitations, but by skills, specialization and education. There’s still an elite at the rightward side of the payoff distribution curve that takes the bulk of the reward, but it’s possible for a diligent and motivated person to enter that elite by gaining the requisite skills. In other words, most of the inputs into that convex payoff function are within the individual actor’s control. This is another case of “good inequality”. In blue-ocean convexity, we want the top players to reap very large rewards, because it motivates more people to do the work that gets them there.

Consider software engineering, which is perhaps the platonic ideal of blue-ocean convexity. What retards us the most as an industry is the lack of highly-skilled people. As an industry, we contend with managerial environments tailored to mediocrity, and suffer from code-quality problems that can reduce a technical asset’s real value to 80, 20, or even minus-300 cents on the dollar compared to its book value. Good software engineers are rare, and that hurts everyone. In fact, perhaps the easiest way to add $1 trillion in value to the economy would be to increase software engineer autonomy. Because most software engineers never get the environment of autonomy that would enable them to get any good, the whole economy suffers. What’s the antidote? A lot of training and effort– the so-called “10000 hours” of deliberate practice– that’s generally unpaid in this era of short-term, disposable jobs.

Convexity’s fundamental problem is that it requires highly-skilled labor, but no employer is willing to pay for people to develop the relevant skills, out of a fear that employees who drive up their market value will leave. In the short term, it’s an effective business strategy to hire mediocre “commodity developers” and staff them on gigantic teams for uninspiring projects, and give them work that requires minimal intellectual ability aside from following orders. In the long term, those developers never improve and produce garbage software that no one knows how to maintain, producing creeping morale decay and, sometimes, “time bombs” that cause huge business losses at unknown times in the future.

That’s why convexity is such a major threat to the full-employment society to which even liberal Americans still cling. Firms almost never invest in their people– empirically, we see that– favoring instead the short-term “solution” of ignoring convexity and trying to beat the labor context into concavity, which is terrible in the long term. Thus, even in convex work, the bulk of people linger at the low-yield leftward end of the curve. Their employers don’t invest in them, and often they lack the time and resources to invest in themselves. What we have, instead of blue-ocean convexity, is an economy where the privileged (who can afford unpaid time for learning) become superior because they have the capital to invest in themselves, and the rest are ignored and fall into low-yield commodity work. This was socially stable when there was a lot of concave, commodity work for humans to do, but that’s increasingly not the case.

Someone is going to have to invest in the long term, and to pay for progress and training. Right now, privileged individuals do it for themselves and their progeny, but that’s not scalable and will not avert the social instability threatened by systemic, long-term unemployment.

Trust and convexity

As I’ve said, convexity isn’t only a property of the relationship between individual inputs (talent, motivation, effort, skill) and productivity, but also occurs in team endeavors. Teams can be synergistic, with peoples’ efforts interacting multiplicatively instead of additively. That’s a very good thing, when it happens.

So it’s no surprise that large accomplishments often require multiple people. We already knew that! That is less true in 2013 than it was in 1985– now, a single person can build a website serving millions– but it’s still the case. Arguably, it’s more the case now; it’s only that many markets have become so efficient that interpersonal dependencies “just work” and give more leverage to single actors. (The web entrepreneur is using technologies and infrastructure built by millions of other people.) At any rate, it’s only a small space of important projects that will be accomplished well by a single party, acting alone. For most, there’s a need to bring multiple people together while retaining focus, and that requires interior political inequalities (leadership) within the group.

We’re hard-wired to understand this. As humans, we fundamentally get the need for team endeavors with strong leadership. That’s why we enjoy team sports so much.

Historically, there have been three “sources of power” that have enabled people to undertake and lead large projects (team convexity):

  • coercion, which exists when negative consequences are used to motivate someone to do work that she wouldn’t otherwise do. This was the cornerstone of pre-industrial economies (slavery) but is also used, in a softer form, by ineffective managers: do this or lose your income/reputation. Coercion is how the Egyptian pyramids were built: slave labor.
  • divination, in which leaders are elected based on an abstract principle, which may be the whim of a god, legal precedent, or pure random luck. For example, it has been argued that gambling (a case of “pure random luck”) served a socially positive purpose on the American frontier. Although it moved funds “randomly”, it allowed pools of capital to form, financing infrastructural ventures. Something like divination is how the cathedrals were built: voluntary labor, motivated by religious belief, directed by architects who often were connected with the Church. Self-divination, which tends to occur in a pure power vacuum, is called arrogation.
  • aggregation, in which one attempts to compute, fairly, the group preference or the true market value of an asset. Political elections and financial markets are aggregations. Aggregation is how the Internet was built: self-directed labor driven by market forces.

When possible, fair aggregations are the most desirable, but it’s non-trivial to define what fair is. Should corporate management be driven by the one-dollar, one-vote system that exists today? Personally, I don’t think so. I think it sucks. I think employees deserve a vote simply because they have an obvious stake in the company. As much as the current, right-wing, state of the American electorate infuriates me, I really like the fact that citizens have the power to fire bad politicians. (They don’t use it enough; incumbent victory rates are so high that a bad politician has more job security than a good programmer.) Working people should have the same power over their management. By accepting a wage that is lower than the value of what they produce, they are paying their bosses. They have a right to dictate how they are managed, and to insist on the mentorship and training that convexity is making essential.

Because it’s so hard to determine a fair aggregation in the general case, there’s always some room for divination and arrogation, or even coercion in extreme cases. For example, our Constitution is a case of (secular, well-informed) divination on the matter of how to build a principled, stable and rational government, but it sets up an aggregation that we use to elect political leaders. Additionally, if a political leader were voted out of office but did not hand over power, he’d be pushed out of it by force (coercion). Trust is what enables self-organizing (or, at least, stable) divination. People will grant power to leaders based on abstract principles if they trust those ideas, and they’ll allow representatives to act on their behalf if they trust those people.

Needless to say, convex payoffs to group efforts generate an important role for trust. That’s what the “stone soup” parable is about; because there’s no trust in the community, people hoard their own produce instead of sharing, and no one has had a decent meal for months. When outside travelers offer a nonexistent delicacy– the stone is a social catalyst with no nutritional value– and convince the other villagers to donate their spare produce, they enable them all to work together. So they get a nutritious bowl of soup and, one hopes, they can start to trust each other and build at least a barter or gift economy. They all benefit from the “stone soup”, but they were deceived.

Convex dishonesty is the act of “borrowing” trust by lying to people, with the intent to pay them back out of the synergistic profits. It isn’t always bad; sometimes convex dishonesty is exactly what a person needs to do in order to get something accomplished. Nor is it always good. Failed convex frauds are damaging to morale, and therefore they often exacerbate the lack-of-trust problem. Moreover, there are many endeavors (e.g. pyramid schemes) that have the flavor of convex fraud but are, in reality, just fraud.

This, in fact, is why modern finance exists. It’s to replace the self-divinations that pre-financial societies required to get convex projects done with a fairer aggregation system that properly measures, and allows the transfer of, risks.


For macroscopic considerations like the fair prices of oil or business equity, financial aggregations seem to work. What about the micro-level concern of what each worker should do on a daily basis? That usually exists in the context of a corporation (closed system) with specific authority structures and needs. Companies often attempt to create internal markets (tough culture) for resources and support, with each team’s footprint measured in internal “funny money” given the name of dollars. I’ve seen how those work, and they often become corrupt. The matter of how people direct the use of their time is based on an internal social currency (including job titles, visibility, etc.) that I’ve taken to calling credibility. It’s supposed to create a meritocracy, insofar as the only way one is supposed to be able to get credibility is through hard work and genuine achievement, but it often has some severely anti-meritocratic effects. 

So why does your job (probably) suck? Your job will generally suck if you lack credibility, because it means that you don’t control your own time, have little choice over what you do and how you do it, and that your job security is poor. Your efforts will be allocated, controlled, and evaluated by an external party (a manager) whose superiority in credibility grants him the right of self-divination. He gets to throw your time into his convex project, but not vice versa. You don’t have a say in it. Remember: he’s got credibility, and you lack it.

Credibility always generates a black market; this principle never fails. Performance reviews are gamed, with various trades being made wherein managers offer review points in exchange for loyalty and non-performance-related favors (such as vocal support for an unrelated project, positive “360-degree reviews”, and various considerations that are just inappropriate and won’t be discussed here). Temporary strongmen/thugs use transient credibility (usually, from managerial favoritism) to intimidate and extort other people into sharing credit for work accomplished, thus enabling the thug to appear like a high performer and get promoted to a real managerial role (permanent credibility). You win on a credibility market by buying and selling it for a profit, creating various perverted social arbitrages. No organization that has allowed credibility to become a major force has avoided this.

Now I can discuss the hierarchy as immortalized by this cartoon from Hugh MacLeod:


Losers are not undesirable, unpopular, or useless people. In fact, they’re often the opposite. What makes them “losers” is that, in an economic sense, they’re losing insofar as they contribute more to the organization than they get out of it. Why do they do this? They like the monthly income and social stability. Sociopaths (who are not bad people; they’re just gamblers) take the other side of that risk trade. They bear a disproportionate share of the organization’s risk and work the hardest, but they get the most reward. They have the most to lose. A Loser who gets fired will get another job at the same wage; a Sociopath CEO will have to apply for subordinate positions if the company fails. The Clueless are a layer that forms later on, when this risk transfer becomes degenerate– the Sociopaths are no longer putting in more effort or taking more risk than anyone else, but have become an entitled, complacent rent-seeking class– and they need a middle-management layer of over-eager “useful idiots” to create the image (the Effort Thermocline) that the top jobs are still demanding.

What’s missing in this analysis? Well, there’s nothing morally wrong, at all, with a financial risk transfer. If I had a resource that had a 50% chance of yielding $10 million, and 50% chance of being worthless, I’d probably sell it to a rich person (whose tolerance of risk is much greater) for $4.9 million to “lock in” that amount. A ±5-million-dollar swing in personal wealth is huge to me and minuscule to him. It’d be a good trade for both of us. I’d be paying a (comparatively small) $100,000 risk premium to have that volatility out of my financial life. I’m not a Loser in this deal, and he’s not a Sociopath. It’s by-the-book finance, how it’s supposed to work.
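The arithmetic of that trade, using the numbers above, is simple enough to check:

```python
# The risk transfer from the example: a 50/50 shot at $10 million,
# sold to a risk-tolerant buyer for a sure $4.9 million.
p_win = 0.5
payoff = 10_000_000
sale_price = 4_900_000

expected_value = p_win * payoff              # $5,000,000
risk_premium = expected_value - sale_price   # $100,000: the cost of certainty

print(f"expected value: ${expected_value:,.0f}")
print(f"risk premium:   ${risk_premium:,.0f}")
```

The seller gives up $100,000 of expectancy to remove a $5 million swing from her life; the buyer, for whom the swing is noise, collects that premium over many such trades. Both sides win, which is why this transfer is morally clean.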

What generates the evil, then? Well, it’s the credibility market. I don’t hold the individual firm responsible for prevailing financial scarcity and, thus, the overwhelmingly large number of people willing to make low-expectancy plays. As long as the firm pays its people reasonably, it has clean hands. So the financial Loser trade is not a sign of malfeasance. The credibility market’s different, because the organization has control over it. It creates the damn thing. Thus, I think the character of the risk transfer has several phases, each deserving its own moral stance:

  1. Financial risk transfer. Entrepreneurs put capital and their reputations at risk to amass the resources necessary to start a project whose returns are (macroscopically, at least) convex. This pool of resources is used to pay bills and wages, therefore allowing workers to get a reliable, recurring monthly wage that is somewhat less than the expected value of their contribution. Again, there’s nothing morally wrong here. Workers are getting a risk-free income (so long as the business continues to exist) while participating in the profits of industrial macro-convexity. 
  2. De-risking, entrenchment, and convex fraud. As the business becomes more established, its people stop viewing it as a risk transfer between entrepreneurs and workers, and start seeing it (after the company’s success is obvious) as a pool of “free” resources to gain control over. Such resources are often economic (“this place has millions of dollars to fund my ideas”) but reputation (“imagine what I could do as a representative of X”) is also a factor. People begin making self-divination (convex fraud) gambits to establish themselves as top performers and vault into the increasingly complacent, rent-seeking, executive tier. This is a red-ocean feeding frenzy for the pile of surplus value that the organization’s success has created.
  3. Credibility emerges, and becomes the internal currency. Successful convex fraudsters are almost always people who weren’t part of the original founding team. They didn’t get their equity when it was cheap, so now they’re in an unstable position. They’re high-ranking managers, but haven’t yet entwined themselves with the business or won a significant share of the rewards/equity. Knowing that their success is a direct output of self-divination (that is, arrogation), they use their purloined social standing to create official credibility in the forms of titles (public statements of credibility), closed allocation (credibility as a project-maker and priority-setter), and performance reviews (periodic credibility recalibrations). This turns the unofficial credibility they’ve stolen into an official, secure kind.
  4. Panic trading, and credibility risk transfer. Newly formed businesses, given their recent memory of existential risk, generally have a cavalier attitude toward firing and a tough culture, which I’ll explain below. This means that a person can be terminated not because of doing anything wrong or being incompetent, but just because of an unlucky break in credibility fluctuations (e.g. a sponsor who changes jobs, a performance-review “vitality curve”). In role-playing games, this is the “killed by the dice” question: should the GM (game coordinator who functions as a neutral party, creating and directing the game world) allow characters, played well, to die– really die, in the “create a new character” sense, not in the “miraculously resurrected by a level-18 healer” sense– because of bad rolls of the dice? In role-playing games, it’s a matter of taste. Some people hate games where they can lose a character by random chance; others like the tension that it creates. At work, though, “killed by the dice” is always bad. Tough-culture credibility markets allow good employees to be killed by the dice. In fact, when stack-ranking and “low performer” witch hunts set in, they encourage it. This creates a lot of panic trading and there’s a new risk transfer in town. It’s not the morally acceptable and socially-positive transfer of financial risk we saw in Stage 1. Rather, it’s the degenerate black-market credibility trading that enables the worst sorts of people (true psychopaths) to rise.
  5. Collapse into feudalistic rank culture. No one wants a job where she can be fired “for performance” because of bad luck, so tough cultures don’t last very long; they turn into rank cultures. People (Losers) panic-trade their credibility, and would rather subordinate to get some credibility (“protection”) from a feudal lord (Sociopath) than risk having none and being flushed out. The people who control the review process become very powerful and, eventually, can manufacture enough of an image of high performance to become official managers. You’re no longer going to be killed by the dice in a rank culture, but you can be killed by a manager, because he can unilaterally reduce your credibility to zero.
  6. Macroscopic underperformance and decline. Full-on rank culture is terribly inefficient, because it generates so much fourth-quadrant work that serves the need of local extortionists (usually, middle managers and their favorites) but does not help the business. Eventually, this leads to underperformance of the business as a whole. Rank culture fosters so much incompetence that trust breaks down within the organization, and it’s often permanent. Firing bad apples is no longer possible, because the process of flushing them away would require firing a substantial fraction of the organization, and that would become so politicized and disruptive as to break the company outright. Such companies regularly lapse into brief episodes of “tough culture”, when new executives (usually, people who buy it as its market value tanks) decide that it’s time to flush out the low performers, but they usually do it in a heavy-handed, McKinsey-esque way that creates a new and equally toxic credibility market. But… like clockwork, those who control said black markets become the new holders of rank and, soon enough, the official bosses. These mid-level rank-holders start out as the mean-spirited witch-hunters (proto-Sociopaths) who implement the “low performer initiative” but they eventually rise and leave a residue of strategically-unaware, soft, complacent and generally harmless mid-ranking “useful idiots” (new Clueless). Clueless are the middle managers who get some power when the company lurches into a new rank culture, but don’t know how to use it and don’t know the main rule of the game of thrones: you win or you die.
  7. Obsolescence and death. Self-explanatory. Some combination of rank-culture complacency and tough-culture moral decay turn the company into a shell of what it once was. The bad guys have taken out their millions and are driving up house prices in the area and their wives with too much plastic surgery are on zoning committees keeping those prices high; everyone else who worked at the firm is properly fucked. Sell off the pieces that still have value, close the shop.

That cycle, in the industrial era, used to play out over decades. If you joined a company in Stage 1 in 1945, you might start to see the Stage 4 midlife when you retired in 1975. Now, it happens much more quickly: it goes down over years, and sometimes months for fast-changing startups. It’s much more of an immediate threat to personal job security than it has ever been before. Cultural decay used to be a long-term existential risk to companies, not taken seriously because calamity was decades away; now, it’s often ongoing and rapid thanks to the “build to flip” mentality.

To tell the truth about it, the MacLeod rank culture wasn’t such a bad fit for the industrial era. Industrial enterprises had a minimal amount of convex work (choosing the business model, setting strategies) that could be delegated to a small, elite, executive nerve-center. Clueless middle managers and rationally-disengaged (Loser) wage earners could implement ideas delivered from the top without too much introspection or insight, and that was fine because individual work was concave. Additionally, that small set of executives could be kept close to the owners of the company (if they weren’t the same set of people).

In the technological era, individual labor is convex and we can no longer afford Cluelessness, or Loserism. The most important work– and within a century or so, all work where there’s demand for humans to do it– requires self-executivity. The hierarchical corporation is a brachiosaur sunning itself on the Yucatan, but that bright point of light isn’t the sun.

Your job is a call option

If companies seem to tolerate, at least passively, the inefficiency of full-blown rank culture, doesn’t that mean that there isn’t a lot of real work for them to do? Well, yes, that’s true. I’ve already discussed the existence of low-yield, boring, Fourth Quadrant busywork that serves little purpose to the business. It’s not without any value, but it doesn’t do much for a person’s career. Why does it exist? First, let’s answer this: where does it come from?

Companies have a jealously-guarded core of real work: essential to the business, great for the careers of those who do it. The winners of the credibility market get the First Quadrant (1Q) of interesting and essential work. They put themselves on the “fun stuff” that is also the core of the business– it’s enjoyable, and it makes a lot of money for the firm and therefore leads to high bonuses. There isn’t a lot of work like this, and it’s coveted, so few people can be in this set. Those are akin to feudal lords, and correspond with MacLeod Sociopaths. Those who wish to join their set, but haven’t amassed enough credibility yet, take on the less enjoyable, but still important Second Quadrant (2Q) of work: unpleasant but essential. Those are the vassals attempting to become lords in the future. That’s often a Clueless strategy because it rarely works, but sometimes it does. Then there is a third monastic category of people who have enough credibility (got into the business early, usually) to sustain themselves but have no wish to rise in the organizational hierarchy. They work on fun, R&D projects that aren’t in the direct line of business (but might be, in the future). They do what’s interesting to them, because they have enough credibility to get away with that and not be fired. They work on the Third Quadrant (3Q): interesting but discretionary. How they fit into the MacLeod pyramid is unclear. I’d say they’re a fortunate sub-caste of Losers in the sense that they rationally disengage from the power politics of the essential work; but they’re Clueless if they’re wrong about their job security and get fired. Finally, who gets the Fourth Quadrant (4Q) of unpleasant and discretionary work? The peasants. The Losers without the job security of permanent credibility are the ones who do that stuff, because they have no other choice.
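The two axes of that taxonomy– interesting or unpleasant, essential or discretionary– define the four quadrants, and can be restated as a small lookup table. The labels below just summarize the paragraph above:

```python
# The four quadrants of work, keyed by (interesting, essential),
# restating the taxonomy described in the text.
QUADRANTS = {
    (True,  True):  "1Q: interesting and essential -- claimed by the credibility winners",
    (False, True):  "2Q: unpleasant but essential -- vassals angling to become lords",
    (True,  False): "3Q: interesting but discretionary -- the monastic R&D sub-caste",
    (False, False): "4Q: unpleasant and discretionary -- dumped on those without credibility",
}

def classify(interesting: bool, essential: bool) -> str:
    return QUADRANTS[(interesting, essential)]

print(classify(False, False))
```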

Where does the Fourth Quadrant work come from? Clueless middle-managers who take undesirable (2Q) or unimportant (3Q) projects, but manage to take all the career upside (turning 2Q into 4Q for their reports) and fun work (turning 3Q into 4Q) for themselves, leaving their reports utterly hosed. This might seem to violate their Cluelessness; it’s more Sociopathic, right? Well, MacLeod “Clueless” doesn’t mean that they don’t know how to fend for themselves. It means they’re non-strategic, or that they rarely know what’s good for the business or what will succeed in the long-term. They suck at “the big picture” but they’re perfectly capable of local operations. Additionally, some Clueless are decent people; others are very clearly not. It is perfectly possible to be MacLeod Clueless and also a sociopath.

Why do the Sociopaths in charge allow the blind Clueless to generate so much garbage make-work? The answer is that such work is evaluative. The point of the years-long “dues paying” period is to figure out who the “team players” are so that, when leadership opportunities or chances for legitimate, important work open up, the Sociopaths know which of the Clueless and Losers to pick. In other words, hiring a Loser subordinate and putting him on unimportant work is a call option on a key hire, later.
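The option analogy can be made concrete with a toy calculation. All of the numbers below are invented for illustration– the point is only the structure: a small, known carrying cost buys the right, but not the obligation, to make a key hire later.

```python
# Toy "call option" view of a dues-paying hire. All figures are invented.
carry_cost_per_year = 10_000    # extra cost of evaluative busywork vs. real work
years_of_dues_paying = 3
p_exercise = 0.25               # chance a key role opens and this person fits it
value_of_proven_hire = 300_000  # value of filling that role with a known quantity

option_value = (p_exercise * value_of_proven_hire
                - carry_cost_per_year * years_of_dues_paying)
print(option_value)  # 45000.0
```

As with a financial call, the downside is capped at the carrying cost while the upside (a vetted key hire) is large– which is why the Sociopaths tolerate years of Clueless-generated busywork.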

Workplace cultures

I mentioned rank and tough cultures above, so let me get into more detail of what those are. In general, an organization is going to evaluate its individuals based on three core traits:

  • subordinacy: does this person put the goals of the organization (or, at least, her immediate team and supervisor) above her own?
  • dedication: will she do unpleasant work, or large amounts of work, in order to succeed?
  • strategy: does she know what is worth working on, and direct her efforts toward important things?

People who lack two or all three of these core traits are generally so dysfunctional that all but the most nonselective employers just flush them out. Those types– such as the strategic, not-dedicated, and insubordinate Passive-Aggressive and the dedicated, insubordinate, and not-strategic Loose Cannon– occasionally pop up for comic relief, but they’re so incompetent that they don’t last long in a company and are never in contention for important roles. I call them, as a group, the Lumpenlosers.

MacLeod Losers tend to be strategic and subordinate, but not dedicated. They know what’s worth working on, but they tend to follow orders because they’re optimizing for comfort, social approval, and job security. They don’t see any value in 90-hour weeks (which would compromise their social polish) or radical pursuit of improvement (which would upset authority). They just want to be liked and adjust well to the cozy, boring, middle-bottom. If you make a MacLeod Loser work Saturdays, though, she’ll quit. She knows that she can get a similar or better job elsewhere.

MacLeod Clueless are subordinate and dedicated but not strategic. They have no clue what’s worth working on. They blindly follow orders, but will also put in above-board effort because of an unconditional work ethic. They frequently end up cleaning up messes made by Sociopaths above and Losers below them. They tend to be where the corporate buck actually stops, because Sociopaths can count on them to be loyal fall guys.

MacLeod Sociopaths are dedicated and strategic but insubordinate. They figure out how the system works and what is worth putting effort into, and they optimize for personal yield. They’re risk-takers who don’t mind taking the chance of getting fired if there’s also a decent likelihood of a promotion. They tend to have “up-or-out” career trajectories, and job hopping isn’t uncommon.

Since there are good Sociopaths out there, I’ve taken to calling the socially positive ones the Technocrats, who tend to be insubordinate with respect to immediate organizational authority, but have higher moral principles rooted in convexity: process improvements, teamwork and cooperation, technical and infrastructural excellence. They’re the “positive-sum” radicals.  I’ll get back to them.

Is there a “unicorn” employee who combines all three desired traits– subordinacy, dedication, and strategy? Yes, but it’s strictly conditional upon a particular set of circumstances. In general, it’s not strategic to be subordinate and dedicated. If you’re strategic, you’ll usually either optimize for comfort and be subordinate but not dedicated– dedication is uncomfortable. If you follow orders, it’s pretty easy to coast in most companies. That’s the Loser strategy. Or, you might optimize for personal yield and work a bit harder, becoming dedicated, but you won’t do it for a manager’s benefit: it’s either your own, or some kind of higher purpose. That’s the Sociopath strategy. The exception is a mentor/protege relationship. Strategic and dedicated people will subordinate if they think that the person in authority knows more than they do, and is looking out for their career interests. They’re subordinating to a mentor conditionally, based on the understanding that they will be in authority, or at least able to do more interesting and important work, in the future.
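Treating the three traits as booleans gives a truth table that recovers every archetype named above. Two notes on assumptions: grouping the unnamed combinations under “Lumpenloser” follows the “lacking two or all three” rule, and “Protege” is my shorthand for the conditional unicorn of the mentor/protege case:

```python
# Archetypes as a truth table over (subordinate, dedicated, strategic).
def archetype(subordinate: bool, dedicated: bool, strategic: bool) -> str:
    table = {
        (True,  False, True):  "Loser",       # coasts comfortably, knows the game
        (True,  True,  False): "Clueless",    # loyal effort, no big picture
        (False, True,  True):  "Sociopath",   # optimizes for personal yield
        (True,  True,  True):  "Protege",     # the conditional "unicorn"
        (False, False, True):  "Passive-Aggressive (Lumpenloser)",
        (False, True,  False): "Loose Cannon (Lumpenloser)",
    }
    # Anything lacking two or all three traits without a named type above:
    return table.get((subordinate, dedicated, strategic), "Lumpenloser")

print(archetype(False, True, True))  # Sociopath
```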

From this understanding, we can derive four common workplace cultures:

  • rank cultures value subordinacy above all. You can coast if you’re in good graces with your manager, and the company ultimately becomes lazy. Rank cultures have the most pronounced MacLeod pyramid: lazy but affable Losers, blind but eager Clueless, and Sociopaths at the top looking for ways to gain from the whole mess. 
  • tough cultures value dedication, and flush out the less dedicated using informal social pressure and formal performance reviews. It’s no longer acceptable to work a standard workweek; 60 hours is the new 40. Tough culture exists to purge the Loser tier, splitting it between the neo-Clueless sector and the still-Loser rejects, whom it will fire if they don’t quit first. So the MacLeod pyramid of a tough culture is more fluid, but every bit as pathological.
  • self-executive cultures value strategy. Employees are individually responsible for directing their own efforts into pursuits that are of the most value. This is the open allocation for which Valve and Github are known. Instead of employees having to compete for projects (tough culture) or managerial support (rank culture) it is the opposite. Projects compete for talent on an open market, and managers (if they exist) must operate in the interests of those being managed. There is no MacLeod hierarchy in a self-executive culture.
  • guild cultures value a balance of the three. Junior employees aren’t treated as terminal subordinates but as proteges who will eventually rise into leadership/mentoring positions. There isn’t a MacLeod pyramid here; to the extent that there may be undesirable structure, it has more to do with inaccurate seniority metrics (e.g. years of experience) than with bad-faith credibility trading.

Rank and guild cultures are both command cultures, insofar as they rely on central planning and global (within the institution) rule-setting. Top management must keep continual awareness of how many people are at each level, and plan out the future accordingly. Tough and self-executive cultures are market cultures, because they require direct engagement with an organic, internal market.

The healthy, “Theory Y” cultures are the guild and self-executive cultures. These confer a basic credibility on all employees, which shuts off the panic trading that generates the MacLeod process. In a guild culture, each employee has credibility for being a student who will grow in the future. In self-executive culture, each employee has power inherent in the right to direct her efforts to the project she considers most worthy. Bosses and projects competing for workers is a Good Thing. 

The pathological, “Theory X” cultures are the rank and tough cultures. It goes without saying that most rank cultures try to present themselves as guild cultures– but management has so much power that it need not take any mentorship commitments seriously. Likewise, most tough cultures present themselves as self-executive ones. How do you tell if your company has a genuinely healthy (Theory Y) culture? Basic credibility. If it’s there, it’s the good kind. If it’s not, it’s the bad kind of culture.

Basic credibility

In a healthy company, employees won’t be “killed by the dice”. Sure, random fluctuations in credibility and performance might delay a promotion for a year or two, but the panicked credibility trading of the Theory-X culture isn’t there. People don’t fear their bosses in a Theory-Y culture; they’re self-motivated and fear not doing enough by their own standards– because they actually care. Basic credibility means that every employee is extended enough credibility to direct his own work and career.

That does not mean people are never fired. If someone punches a colleague in the face or steals from the company, you fire him, but it has nothing to do with credibility. You get rid of him because, well, he did something illegal and harmful. What it does mean is that people aren’t terminated for “performance reasons” that really mean either (a) they were just unlucky and couldn’t get enough support to save them in tough-culture “stack ranking”, or (b) their manager disliked them for some reason (no-fault lack-of-fit, or manager-fault lack-of-fit). It does mean that people are permitted to move around in the company, and that the firm might tolerate a real underperformer for a couple of years. Guess what? In a convex world, underperformance almost doesn’t matter.

With convexity, the difference between excellence and mediocrity matters much more than that between mediocrity and underperformance. In a concave world, yes, you must fire underperformers because the margin you get on good employees is so low that one slacker can cancel out 4 or 5 good people. In a convex world, the danger isn’t that you have a few underperformers. You will have, at the least, good-faith low performers, just because the nature of convexity is to create risk and inequality of return, and some people’s projects won’t pan out. That’s fine. Instead, the danger is that you don’t have any excellent (“10x”) employees.

There’s a managerial myth that cracking down on “low performers” is useful because they demotivate the “10x-ers”. Yes and no. Incompetent management and having to work around bad code are devastating and will chase out your top performers. If 10x-ers have to work with incompetents and have no opportunity to improve them, they get frustrated and quit. There are toxic incompetents (dividers) who make others unproductive and damage morale, and then there are low-impact employees who just need more time (subtracters). Subtracters cost more in salary than they deliver, but they aren’t hurting anyone and they will usually improve. Fire dividers immediately. Give subtracters a few years (yes, I said years) to find a fit. Sometimes, you’ll hire someone good and still have that person end up as a subtracter at first. That’s common in the face of convexity– and remember that convexity is the defining problem of the 21st-century business world. The right thing to do is to let her keep looking for a fit until she finds one. Almost never will it take years if your company runs properly.

“Low performer initiatives” rarely smoke out the truly toxic dividers, as it turns out. Why? Because people who have defective personalities and hurt other people’s morale and productivity are used to having their jobs in jeopardy, and have learned to play politics. They will usually survive. It’ll be unlucky subtracters you end up firing. You might save chump change on the balance sheet, but you’re not going to fix the real organizational problems.

Theories X, Y, and Z

I grouped the negative workplace cultures (rank and tough) together and called them Theory X; the positive ones (self-executive and guild) I called Theory Y. This isn’t my terminology; it’s about 50 years old, coming from Douglas McGregor. The 1960s were the height of Theory Y management, so that was the “good” managerial style. Let’s compare the theories and see what they say.

Recall what I said about the “sources of power”: coercion, divination, and aggregation. Coercion was, by far, the predominant force in aggregate labor before 1800. Slavery, prisons, and militaries (with, in that time, lots of conscription) were the inspirations for the original corporations, and the new class of industrialists was very cruel: criminal by modern standards. Theory X was the norm. Under Theory X, workers are just resources. They have no rights, no important desires, and should be well-treated only if there’s an immediate performance benefit. Today, we recognize that as brutal and psychotic, but for a humanity coming off over 100,000 years of male positional violence and coerced labor, the original-sin model of work shouldn’t seem far off. Theory X held that employees are intrinsically lazy and selfish and will only work hard if threatened.

Around 1920, industrialists began to realize that, even though labor in that time mostly was concave, it was good business to be decent to one’s workers. Henry Ford, a rabid anti-Semite, was hardly a decent human being, much less “a nice guy”, but even he was able to see this. He raised wages, creating a healthy consumer base for his products. He reduced the workday to ten hours, then eight. The long days just weren’t productive. Over the next forty years, employers learned that if workers were treated well, they’d repay the favor by behaving better and working harder. This led to the Theory Y school of management, which held that people were intrinsically altruistic and earnest, and that management’s role was to nurture them. This gave birth to the paternalistic corporation and the bilateral social contracts that created the American middle class.

Theory Y failed. Why? It grew up in the 1940s to ’60s, when there was a prosperous middle class, but in a time of very low economic inequality. One thing that would amaze most Millennials is that, when our parents grew up, the idea that a person would work for money was socially unacceptable. You just couldn’t say that you wanted to get rich, in 1970, and not be despised for it. And it was very rare for a person to make 10 times more than the average citizen! However, the growth of economic inequality that began in the 1970s, and accelerated since then, raised the stakes. Then the Reagan Era hit.

Most of the buyout/private equity activity that happened in the 1980s had a source immortalized by the movie Wall Street: industrial espionage, mostly driven by younger people eager to sell out their employers’ secrets to get jobs from private equity firms. There was a decade of betrayal that brutalized the older, paternalistic corporations. Given, by a private equity tempter, the option of becoming CEO immediately through chicanery instead of working toward it for 20 years, many took the shortcut. Knives came out, backs were stabbed, and the most trusting corporations got screwed.

Since the dust settled, around 1995, the predominant managerial attitude has been Theory Z. Theory X isn’t socially acceptable, and Theory Y’s failure is still too recently remembered. What’s Theory Z? Theory X takes a pessimistic view of workers and distrusts everyone. Theory Y takes an optimistic view of human nature and becomes too trusting. Theory Z is the most realistic of the three: it assumes that people are indifferent to large organizations (even their employers) but loyal to those close to them (family, friends, immediate colleagues, distant co-workers; probably in that order). Human nature is neither egoistic nor altruistic, but localistic. This was an improvement insofar as it holds a more realistic view of how people are. It’s still wrong, though.

What’s wrong with Theory Z? It’s teamist. Now, when you have genuine teamwork, that’s a great thing. You get synergy, multiplier effects, team convexity– whatever you want to call it, I think we all agree that it’s powerful. The problem with the Theory-Z company is that it tries to enforce team cohesion. Don’t hire older people; they might like different music! Buy a foosball table, because 9:30pm diversions are how creativity happens! This is more of a cargo cult than anything founded in reasonable business principles, and it’s generally ineffective. Teamism reduces diversity and makes it harder to bring in talent (which is critical, in a convex world). It also tends toward general mediocrity.

Each Theory had a root delusion in it. Theory X’s delusion was that morale didn’t matter; workers were just machines. Theory Y’s delusion is rooted in the tendency for “too good” people to think everyone else is as decent as they are; it fell when the 1980s made vapid elitism “sexy” again, and opportunities to make obscene wealth in betraying one’s employer emerged. Theory Z’s delusion is that a set of people who share nothing other than a common manager constitute a genuine (synergistic) team. See, in an open-allocation world, you’re likely to get team synergies because of the self-organization. People would naturally tend to form teams where they make each other more productive (multiplier effects). It happens at the grass-roots level, but can’t be forced in people who are deprived of autonomy. With closed-allocation, you don’t get that. People (with diverging interests) are brought together by force outside of their control and told to be a team. Closed-allocation Theory Z lives in denial of how rare those synergistic effects actually are.

I mentioned, previously, an alternative to these three theories that I’ve called Theory A, which is a more sober and realistic slant on Theory Y: trust employees with their own time and energy; distrust those who want to control others. I’ll return to that in Part 22, the conclusion.

Morality, civility, and social acceptability

The MacLeod Sociopaths that run large organizations are a corrosive force, but what defines them isn’t true psychopathy, although some of them are that. There are also plenty of genuinely good people who fit the MacLeod Sociopath archetype. I am among them. What makes them dangerous is that the organization has no means to audit them. If it’s run by “good Sociopaths” (whom I’ve taken to calling Technocrats) then it will be a good organization. However, if it’s run by the bad kind, it will degenerate. So, with the so-called Sociopaths (less so for the Losers and Clueless), it is important to understand the moral composition of that set.

I’ve put a lot of effort into defining good and evil, and that’s a big topic I don’t have much room for, so let me be brief on them. Good is motivated by concerns like compassion, social justice, honesty, and virtue. Evil is militant localism or selfishness. In an organizational context, or from a perspective of individual fitness, both are maladaptive when taken to the extreme. Extreme good is self-sacrifice and martyrdom that tends to take a person out of the gene pool, and certainly isn’t good for the bottom line; extreme evil is perverse sadism that actually gets in a person’s way, as opposed to the moderate psychopathy of corporate criminals.

Law and chaos are the extremes of a civil spectrum, which I cribbed from AD&D. Lawful people have faith in institutions and chaotic people tend to distrust them. Lawful good sees institutions as tending to be more just and fair than individual people; chaotic good finds them to be corrupt. Lawful neutrality sees institutions as being efficient and respectable; chaotic neutrality finds them inefficient and deserving of destruction. Lawful evil sees institutions as a magnifier of strength and admires their power; chaotic evil sees them as obstructions that get in the way of raw, human dominance. 

Morality and civil bias, in people, seem to be orthogonal. In the AD&D system, each spectrum has three levels, producing 9 alignments. I focused on the careers of each here. In reality, though, there’s a continuous spectrum. For now, I’m just going to assume a Gaussian distribution, mean 0 and standard deviation 1, with the two dimensions being uncorrelated.

MacLeod Losers tend to be civilly neutral, and Clueless tend to be lawful; but MacLeod Sociopaths come from all over the map. Why? To understand that, we need to focus on a concept that I call well-adjustment. To start, humans don’t actually value extremes in goodness or in law. Extreme good leads to martyrdom, and most people who are more than 3 standard deviations of good are taken to be neurotic narcissists, rather than being admired. Extremely lawful people tend to be rigid, conformist, and are therefore not much liked either. I contend that there’s a point of maximum well-adjustment that represents what our society says people are supposed to be. I’d put it somewhere in the ballpark of 1 standard deviation of good, and 1 of law, or the point (1, 1). If we use +x to represent law, –x to represent chaos, +y to represent good, and –y to represent evil, we get the well-adjustment formula:

f(x, y) = (x - 1)^2 + (y - 1)^2

Here, low f means that one is more well-adjusted. It’s better to be good than evil, and to be lawful than chaotic, but it’s best to be at (1, 1) exactly. But wait! Is there really a difference between (1, 1) and (0, 0)? Or between (5, 5) and (5, 6)? Not really, I don’t think. Well-adjustment tends to be a binary relationship, so I’m going to put f through a logistic transform where 0.0 means total ill-adjustment and 1.0 means well-adjustment. Middling values represent a “fringe” of people who will be well-adjusted in some circumstances but fail, socially speaking, in others. Based on my experience, I’d guess that this:

w(f) = 1 / (1 + e^(f - 6))

is a good estimate. If your squared distance from the point of maximal well-adjustment is less than 4, you’re good. If it’s more than 8, you’re probably ill-adjusted– too good, too evil, too lawful, or too chaotic. That gives us, in the 2-D moral/civil space, a well-adjustment function looking like this:
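As a sanity check on the shape, the model can be sketched in a few lines of Python. The squared-distance formula comes from the text; the logistic midpoint of 6 (halfway between the “safe” squared distance of 4 and the “ill-adjusted” 8), the unit steepness, and the function name are my assumptions:

```python
import math

def well_adjustment(x, y):
    # x: civil axis (+law, -chaos); y: moral axis (+good, -evil),
    # both measured in standard deviations from the population mean.
    f = (x - 1) ** 2 + (y - 1) ** 2   # squared distance from (1, 1)
    return 1 / (1 + math.exp(f - 6))  # assumed logistic transform
```

With these assumptions, (1, 1) scores about 0.998, the corner (3, 3) about 0.12, and a point like (-2, -2) essentially zero, matching the plateau-and-deep-sea picture.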

whose contours look like this:

Now, I don’t know whether the actual well-adjustment function that drives human social behavior has such a perfect circular shape. I doubt it does. It’s probably some kind of contiguous oval, though. The white part is a plateau of high (near 1.0) social adjustment. People in this space tend to get along with everyone. Or, if they have social problems, it has little to do with their moral or civil alignments, which are socially acceptable. The red outside is a deep sea (near 0.0) of social maladjustment. It turns out that if you’re 2 standard deviations of evil and of chaos, you have a hard time making friends.

In other words, we have a social adjustment function that’s almost binary, but there’s a really interesting circular fringe that produces well-adjustment values between 0.1 and 0.9. Why would that be important? Because that’s where the MacLeod Sociopaths come from.

Well-adjusted people don’t rise in organizations. Why? Because organizations know exactly how to make it so that well-adjusted, normal people don’t mind being at the bottom, and will slightly prefer it if that’s where the organization thinks they belong. It’s like Brave New World, where the lower castes (e.g. Gammas) are convinced that they are happiest where they are. If you’re on that white plateau of well-adjustment, you’ll probably never be fired. You’ll always have friends wherever you go. You can get comfortable as a MacLeod Loser, or maybe Clueless. You don’t worry. You don’t feel a strong need to rise quickly in an organization.

Of course, the extremely ill-adjusted people in the red don’t rise either. That should not surprise anyone. Unless they become very good at hiding their alignments, they are too dysfunctional to have a shot in social organizations like a modern corporation. To put it bluntly, no one likes them.

However, let’s say that a Technocrat has 1.25 standard deviations of chaos and of good each, making her well-adjustment level 0.65. She’s clearly in that fringe category. What does this mean? It means that she’ll be socially acceptable in about 65% of all contexts. The MacLeod Loser career isn’t an option for her. She might get along with one set of managers and co-workers, but as they change, things may turn against her. Over time, something will break. This gives her a natural up-or-out impetus. If she doesn’t keep learning new things and advancing her career, she could be hosed. She’s liked by more people than dislike her, but she can’t rely on being well-liked as if it were a given.

It’s people on the fringe who tend to rise to the top of, and run, organizations, because they can never get cozy on the bottom. If we graph “fringeness”, measured as the magnitude of the slope (gradient) of the well-adjustment function, we get contours like this:

It’s a ring-shaped fringe. Nothing too surprising. The perfection of the circular ring is, of course, an artifact of the model. I don’t know if it’s this neat in the real world, but the idea there is correct. Now, here’s where things get interesting. What does that picture tell us? Not that much aside from what we already know: the most ambitious (and, eventually, most successful) people in an organization will be those who are not so close to the “point of maximal well-adjustment” that they get along in any context, but not so far from it as to be rejected out of hand.
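Continuing the sketch (same assumed logistic, not the original model’s exact function), fringeness follows from the chain rule: the gradient magnitude of w is w(1 - w) times the gradient magnitude of the squared distance, which is what produces the ring:

```python
import math

def fringeness(x, y):
    # Gradient magnitude of the assumed well-adjustment function
    # w = 1 / (1 + exp(f - 6)), with f the squared distance from (1, 1).
    f = (x - 1) ** 2 + (y - 1) ** 2
    w = 1 / (1 + math.exp(f - 6))
    grad_f = 2 * math.sqrt(f)      # |grad f| for a squared distance
    return w * (1 - w) * grad_f    # chain rule: |grad w| = w(1 - w)|grad f|
```

Under these assumptions, fringeness is zero at (1, 1), nearly zero both on the plateau and in the deep sea, and peaks on the ring where w is near 0.5 (squared distance around 6).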

But how does this give us the observed battle royale between chaotic good and lawful evil? Up there, it just looks like a circle. 

Okay, so we see the point (3, 3) in that circular band. How common is it for someone to be 3 standard deviations of lawful and 3 standard deviations of good? Not common at all. 3-sigma events are rare (about 1 in 740), so a person who was 3 deviations from the norm in both would be a 1-in-548,000 rarity. Let’s multiply this “fringeness” function we’ve graphed by the (Gaussian) population density at each point.

That’s what the fringe, weighted by population density, looks like. Positions like (3, 3) fade out because there’s almost no one there. There’s a clear crescent “C” shape and it contains a disproportionate share of two kinds of people. It has a lot of lawful evil in the bottom right, and a lot of chaotic good in the top left, in addition to some neutral “swing players” who will tend to side (with unity in their group) with one or the other. How they swing tends to determine the moral character of an organization. If they side with the chaotic good, then they’ll create a company like Valve. If they side with lawful evil, you get the typical MacLeod process.
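The population-weighted version of the sketch multiplies that fringeness by the assumed standard bivariate normal density (mean 0, standard deviation 1, uncorrelated axes, per the model above); the function name and the sample points are mine:

```python
import math

def weighted_fringe(x, y):
    # Fringeness under the assumed logistic well-adjustment function,
    # weighted by the standard bivariate normal population density.
    f = (x - 1) ** 2 + (y - 1) ** 2
    w = 1 / (1 + math.exp(f - 6))
    fringe = w * (1 - w) * 2 * math.sqrt(f)
    density = math.exp(-(x ** 2 + y ** 2) / 2) / (2 * math.pi)
    return fringe * density
```

Because both the ring and the density are symmetric under swapping the axes, a chaotic-good point like (-1.4, 1.4) and its lawful-evil mirror (1.4, -1.4) get identical weight, while a corner like (3, 3) is orders of magnitude lighter: the crescent with its two heavy lobes.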

That’s the theoretical reason why organizations come down to an apocalyptic battle between chaotic good (Technocrats) and lawful evil (corrosive Sociopaths, in the MacLeod process). How does this usually play out? Well, we know what lawful evil does. It uses the credibility black market to gain power in the organization. How should chaotic good fight against this? It seems that convexity plays to our advantage, insofar as the MacLeod process can no longer be afforded. In the long term, the firm can only survive if people like us (chaotic good) win. How do we turn that into victory in the short term?

So what’s a Technocrat to do? And how can a company be built to prevent it from undergoing MacLeod corrosion? What’s missing in the self-executive and guild cultures that a 5th “new” type of culture might be able to fix? That’s where I intend to go next.

Take a break, breathe a little. I’ll be back in about a week to Solve It.

Gervais followup: rank, tough, and market cultures.

This is a follow-up to yesterday’s essay in which I confronted the MacLeod Hierarchy of the organization, which affixes unflattering labels to the three typical tiers (workers, managers, executives) of the corporate pyramid. Subordinate workers, MacLeod names Losers, not as a pejorative, but because life at the bottom is a losing proposition. Lifelong middle managers are the Clueless who lack the insight necessary to advance. Executives are the exploitative Sociopaths who win. I looked at this and discovered that each category possessed two of three corporate traits: strategy, dedication, and subordinacy (which I called “team player” in the original essay). I replaced team player with subordinacy because I realized that “team player” isn’t well-defined. Here, by subordinacy, I mean that a person is willing to accept subordinate roles, without expectation of personal benefit. People who lack it are not constitutionally insubordinate, but view their work as a contract between themselves and their manager. They take direction from the manager and show loyalty, so long as the manager advances their careers. Since they show no loyalty to a manager or team that doesn’t take an interest in their careers, they get the “not a team player” label.

MacLeod Losers are subordinate and strategic, but not very dedicated. They work efficiently and generally do a good job, but they’re usually out the door at 5:00, and they’re not likely to stand up to improve processes if that will bring social discomfort upon them. Clueless are subordinate and dedicated, but not strategic. They’ll take orders and work hard, but they rarely know what is worth working on and should not advance to upper management. MacLeod Sociopaths are strategic and dedicated, but not subordinate. The next question to ask is, “Is insubordinacy necessarily psychopathy?”, and I would say no. Hence, my decision to split the Sociopath tier between the “good Sociopaths” (Technocrats) and the bad ones, the true Psychopaths.

People who have one or zero of the three traits (subordinacy, dedication, strategy) are too maladaptive to fit into the corporate matrix at all and become Lumpenlosers. People with all three do not exist. A person who is strategic is not going to dedicate herself to subordination. She might subordinate herself in the context of a mutually beneficial mentor/protege role, but general subordination is out of the question. Strategic people either decide to minimize discomfort, which means being well-liked team players in the middle of the performance curve– not dedicated over-performers– or to maximize yield, which makes subordinacy unattractive.

What I realized is that, from these three traits, one can understand three common workplace cultures.

Rank Culture

The most common one is rank culture, which values subordinacy. Even mild insubordination can lead to termination, and for a worker to be described as “out for herself” is the kiss of death. Rank cultures make a home for the MacLeod Losers and Clueless, but MacLeod Sociopaths don’t fare well. They have to keep moving, preferably upward.

While the MacLeod Clueless lack the strategic vision to decide what to work on, their high degree of dedication and subordinacy makes them a major tactical asset for the Sociopath. The Clueless middle manager becomes a role model for the Losers. He played by the rules and worked hard, and moved into a position of (slightly) higher pay and respect. 

Rank cultures have, within them, a thermocline. From worker to middle manager, jobs get harder and require more effort to maintain and achieve. Below the thermocline, one really does have to exceed expectations to qualify for the next level up. Above the thermocline, in Sociopath territory, the power associated with rank starts paying dividends and jobs get easier as one ascends, rising into executive ranks where one controls the performance assessment and can either work only on “the fun stuff”, or choose not to work at all. Rank cultures require this thermocline, an opaque veil, to keep the workers motivated and invested in their belief that work will make them free. That is why, in such cultures, the strategically inept Clueless are so damn important.

Tough Culture

Rank culture’s downfall is that, over time, it reduces the quality of employees. The best struggle to rise in an unfair, politicized environment where subordination to local gatekeepers (managers) is more important than merit, and eventually quit or get themselves fired. The worst find a home in the bottom of the Loser tier, the Lumpenlosers. At some point, companies decide that the most important thing to do is clear out the underperformers. Thus is born tough culture. Rank culture values subordinacy above all else; tough culture demands dedication. Sixty hours per week becomes the new 40. Performance reviews now come with “teeth”.

Enron’s executives were proud of their “tough” culture, with high-stakes performance reviews and about 5% of employees fired for performance each year. The firm was berated for its “mean-spirited” and politicized review system, which, in reality, is no different from what exists now in many technology companies. This is the “up-or-out” model where, if you don’t appear to be “hungry”, you’ll be gone in a year or two. Those who appear to be coasting have targets on their heads.

Over time, tough culture begets a new rank culture, because the effort it demands becomes unsustainable, and because those who control the new and more serious performance review system begin using it to run extortions based on loyalty and tribute rather than objective effort. They become the new rank-holders.

Market Culture

Rank culture demands subordinacy, and tough culture demands dedication. Market culture is one that demands strategy, even of entry-level employees. You don’t have to work 60-hour weeks, nor do you have to be a well-liked, smiling order-follower. You have to work on things that matter.

Rank and tough cultures focus on “performance”, which is an assessment of how well one does as an individual. If you work hard to serve a bad boss, or on an ill-conceived project, you made no mistake. You were following orders. You had no impact, but it wasn’t your fault, and you “performed” well. Market cultures ignore the “performance” moralism and go straight to impact. What did you do, and how did it serve the organization? Low impact doesn’t mean you’ll be fired, but you are expected to understand why it happened and to take responsibility for moving to higher-impact pursuits.

People who serially have low impact, if the company can’t mentor them until they are self-executive, will need to be fired because, if they’re not, they’ll become the next generation’s subordinates and generate a rank culture. Firing them is the hard part, because most of the people who conceive market cultures are well-intended Technocrats (or, in the MacLeod triarchy, “good Sociopaths”) who want to liberate peoples’ creative energies. They don’t like giving up on people. But any market culture is going to have to deal with people who are not self-executive enough to thrive, and if it doesn’t have the runway to mentor them, it has to let them go.

A true market culture is “bossless”. You don’t work for a manager, but for the company. You might find mentors who will guide you toward more important, high-impact pursuits, but that’s your job. In technology, this is the open allocation methodology.

Why Market Culture is best

Market culture seems, of the three, the most explicitly Sociopathic (in the MacLeod sense). Rank cultures are about who you are, and how well you play the role that befits your rank. Tough cultures are about how much you sacrifice, and are democratic in that sense. Market cultures are about what you do. Your rank and social status and “personality” don’t matter. Is that dehumanizing? In my opinion, no. It might look that way, but I see it from a different angle. In a rank culture, you’re expected to submit (or subordinate) yourself. In a tough culture, your contribution is sacrifice. In a market culture, you submit work. Of these three, I prefer the last. I would rather submit work to an “impersonal” market than to a rank-seeking extortionist trying to rise in a dysfunctional rank culture.

What I haven’t yet addressed is the cleavage I’ve drawn within the MacLeod Sociopath category between Technocrats and Psychopaths. The most important thing for an organization is the differential fitness of these two categories. Technocrat executives are, on average, beneficial. Psychopaths are unethical and usually undesirable.

Pure rank cultures do not seem to confer an advantage on either category. Tough cultures, on the other hand, benefit Psychopaths, who find an outlet for their socially competitive energies. Ultimately, as the tough culture devolves into an emergent rank culture, Psychopaths thrive amid the ambiguity and political turmoil. Tough cultures unintentionally attract Psychopaths.

What about market cultures? I think that Psychopaths actually have a short term advantage in them, and that might seem damning of the market culture. This is probably why rank cultures are superficially more attractive to the Clueless middle-managers, with their dislike of “job hopping” and overt ambition. On the other hand, and much more importantly, market cultures confer the long-term advantage on Technocrats. They’re “eventually consistent”. To thrive in one for the long game, one must develop a skill base and deliver real value. That’s a Technocrat’s game.

No, idiot. Discomfort Is Bad.

Most corporate organizations have failed to adapt to the convexity of creative and technological work, a result of which is that the difference between excellence and mediocrity is much more meaningful than that between mediocrity and zero. An excellent worker might produce 10 times as much value as a mediocre one, instead of 1.2 times as much, as was the case in the previous industrial era. Companies, trapped in concave-era thinking, still obsess over “underperformers” (through annual witch hunts designed to root out the “slackers”) while ignoring the much greater danger, which is the risk of having no excellence. That’s much more deadly. For example, try to build a team of 50th- to 75th-percentile software engineers to solve a hard problem, and the team will fail. You don’t have any slackers or useless people– all would be perfectly productive people, given decent leadership– but you also don’t have anyone with the capability to lead, or to make architectural decisions. You’re screwed.

The systematic search-and-destroy attitude that many companies take toward “underperformers” exists for a number of reasons, but one is to create pervasive discomfort. Performance evaluation is a subjective, noisy, information-impoverished process, which means that good employees can get screwed just by being unlucky. The idea behind these systems is to make sure that no one feels safe. One in 10 people gets put through the kangaroo court of a “performance improvement plan” (which exists to justify termination without severance) and fired if he doesn’t get the hint. Four in 10 get below-average reviews that damage the relationship with the boss and make internal mobility next to impossible. Four more are tagged with the label of mediocrity, and, finally, one of those 10 gets a good review and a “performance-based” bonus… which is probably less than he feels he deserved, because he had to play mad politics to get it. Everyone’s unhappy, and no one is comfortable. That is, in fact, the point of such systems: to keep people in a state of discomfort.

The root idea here is that Comfort Is Bad. The idea is that if people feel comfortable at work, they’ll become complacent, but that if they’re intimidated just enough, they’ll become hard workers. In the short term, there’s some evidence that this sort of motivation works. People will stay at work for an additional two hours in order to avoid missing a deadline and having an annoying conversation the next day. In the long term, it fails. For example, open-plan offices, designed to use social discomfort to enhance productivity, actually reduce it by 66 percent. Hammer on someone’s adrenal system, and you get response for a short while. After a certain point, you get a state of exhaustion and “flatness of affect”. The person doesn’t care anymore.

What’s the reason for this? I think that the phenomenon of learned helplessness is at play. One short-term reliable way to get an animal such as a human to do something is to inflict discomfort, and to have the discomfort go away if the desired work is performed. This is known as negative reinforcement: the removal of unpleasant circumstances in exchange for desired behavior. An example of this known to all programmers is the dreaded impromptu status check: the pointless unscheduled meeting in which a manager drops in, unannounced, and asks for an update on work progress, usually in the absence of an immediate need. Often, this isn’t malicious or intentionally annoying, but comes from a misunderstanding of how engineers work. Managers are used to email clients that can be checked 79 times per day with no degradation of performance, and tend to forget that humans are not this way. That said, the behavior is an extreme productivity-killer, as it costs about 90 minutes per status check. I’ve seen managers do this 2 to 4 times per day. The more shortfall in the schedule, the more grilling there is. The idea is to make the engineer work hard so there is progress to report and the manager goes away quickly. Get something done in the next 24 hours, or else. This might have that effect– for a few weeks. At some point, though, people realize that the discomfort won’t go away in the long term. In fact, it gets worse, because performing well leads to higher expectations, while a decline in productivity (or even a perceived decline) brings on more micromanagement. Then learned helplessness sets in, and the attitude of not giving a shit takes hold. This is why, in the long run, micromanagers can’t motivate shit to stink.

Software engineers are increasingly inured to environments of discomfort and distraction. One of the worst trends in the software industry is the tendency toward cramped, open-plan offices where an engineer might have less than 50 square feet of personal space. This is sometimes attributed to cost savings, but I don’t buy it. Even in Midtown Manhattan, office space only costs about $100 per square foot per year. That’s not cheap, but not expensive enough (for software engineers) to justify the productivity-killing effect of the open-plan office.

Discomfort is a particular problem for software engineers, because our job is to solve problems. That’s what we do: we solve other people’s problems, and we solve our own. Our job, in large part, is to become better at our job. If a task is menial, we don’t suffer through it, nor do we complain about it or attempt to delegate it to someone else. We automate it away. We’re constantly trying to improve our productivity. Cramped workspaces, managerial status checks, and corrupt project-allocation machinery (as opposed to open allocation) all exist to lower the worker’s social status and create discomfort or, as douchebags prefer to call it, “hunger”. This is an intended effect, and because it’s in place on purpose, it’s also defended by powerful people. When engineers learn this, they realize that they’re confronted with a situation they cannot improve. It becomes a morale issue.

Transient discomfort motivates people to do things. If it’s cold, one puts on a coat. When discomfort recurs without fail, it stops having this effect. At some point, a person’s motivation collapses. What use is it to act to reduce discomfort if the people in charge of the environment will simply recalibrate it to make it uncomfortable again? None. So what motivates people in the long term? See: What Programmers Want. People need a genuine sense of accomplishment that comes from doing something well. That’s the genuine, long-lasting motivation that keeps people working. Typically, the creative and technological accomplishments that revitalize a person and make long-term stamina possible will only occur in an environment of moderate comfort, in which ideas flow freely. I’m not saying that the office should become an opium den, and there are forms of comfort that are best left at home, but people need to feel secure and at ease with the environment– not like they’re in a warzone.

So why does the Discomfort Is Good regime live on? Much of it is just an antiquated managerial ideology that’s poorly suited to convex work. However, I think that another contributing factor is “manager time”. One might think, based on my writing, that I dislike managers. As individuals, many of them are fine. It’s what they have to do that I tend to dislike, but it’s not an enviable job. Managing has higher status but, in reality, is no more fun than being managed. Managers are swamped. With 15 reports, schedules full of meetings, and their own bosses to “manage up”, they are typically overburdened. Consequently, a manager can’t afford to dedicate more than about 1/20 of his working time to any one report. The result of this extreme concurrency (out of accord with how humans think) is that each worker is split into a storyline that only gets 5% of the manager’s time. So when a new hire, at 6 months, is asking for more interesting work or a quieter location, the manager’s perspective is that she “just got here”. Six months times 1/20 is 1.3 weeks. That’s manager time. This explains the insufferably slow progress most people experience in their corporate careers. Typical management expects 3 to 5 years of dues-paying (in manager time, the length of a college semester) before a person is “proven” enough to start asking for things. Most people, of course, aren’t willing to wait 5 years to get a decent working space or autonomy over the projects they take on.
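
The “manager time” arithmetic above can be made concrete. This is my own toy sketch, using the essay’s illustrative numbers (a manager giving each report roughly 1/20 of his attention), not a model the essay itself proposes:

```python
# Toy sketch of "manager time": a report's tenure as experienced by a
# manager who can give that report only ~1/20 of his working attention.
ATTENTION_FRACTION = 1 / 20  # one report's share of the manager's time

def manager_time_weeks(calendar_weeks: float) -> float:
    """Convert an employee's calendar tenure into perceived manager time."""
    return calendar_weeks * ATTENTION_FRACTION

# Six months (~26 weeks) of tenure feels like 1.3 weeks to the manager.
print(round(manager_time_weeks(26), 1))   # 1.3
# Five years (~260 weeks) of dues-paying feels like 13 weeks: a semester.
print(round(manager_time_weeks(260)))     # 13
```

The point of the sketch is only that the conversion factor is brutal: what the employee experiences as half a year of patience registers, on the other side of the desk, as barely more than a week.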

A typical company, as it sees its job, is to create a Prevailing Discomfort so that a manager can play “Good Cop” and grant favors: projects with more career upside, work-from-home arrangements, and more productive working spaces. Immediate managers never fire people; the company does “after careful review” of performance (in a “peer review” system wherein, for junior people, only managerial assessments are given credence). “Company policy” takes the Bad Cop role. Ten percent of employees must be fired each year because “it’s company policy”. No employee can transfer in the first 18 months because of “company policy”. (“No, your manager didn’t directly fuck you over. We have a policy of fucking over the least fortunate 10% and your manager simply chose not to protect you.”) Removal of the discomfort is to be doled out (by managers) as a reward for high-quality work. However, for a manager to fight to get these favors for reports is exhausting, and managers understandably don’t want to do this for people “right away”. The result is that these favors are given out very slowly, and often taken back during “belt-tightening” episodes, which means that the promised liberation from these annoying discomforts never really comes.

One of the more amusing things about the Discomfort Is Good regime is that it actually encourages the sorts of behaviors it’s supposed to curtail. Mean-spirited performance review systems don’t improve low performers; they create them by turning the unlucky into an immobile untouchable class with an axe to grind, and open-plan offices allow the morale toxicity of disengaged employees to spread at a rapid rate. Actually, my experience has been that workplace slacking is more common in open-plan offices. Why? After six months in open-plan office environments, people learn the tricks that allow them to appear productive while focusing on things other than work. Because such environments are exhausting, these are necessary survival adaptations, especially for people who want to be productive before or after work. In a decent office environment, a person who needed a 20-minute “power nap” could take one. In the open-plan regime, the alternative is a two-hour “zone out” that’s not half as effective.

The Discomfort Is Good regime is as entrenched in many technology startups as in large corporations, because it emerges out of a prevailing, but wrong, attitude among the managerial caste (from which most VC-istan startup founders, on account of the need for certain connections, have come). One of the first things that douchebags learn in Douchebag School is to make their subordinates “hungry”. It’s disgustingly humorous to watch them work to inflict discomfort on others– it’s transparent what they are trying to do, if one knows the signs– and be repaid by the delivery of substandard work product. Corporate America, at least in its current incarnation, is clearly in decline. While it sometimes raises a chuckle to see decay, I thought I would relish this more as I watched it happen. I expected pyrotechnics and theatrical collapses, and that’s clearly not the way this system is going to go. This one won’t go out with an explosive bang, but with the high-pitched hum of irritation and discomfort.

Fourth quadrant work

I’ve written a lot about open allocation, so I think it’s obvious where I stand on the issue. One of the questions that is always brought up in that discussion is: so who answers the phones? The implicit assumption, with which I don’t agree, is that there are certain categories of work that simply will not be performed unless people are coerced into doing it. To counter this, I’m going to answer the question directly. Who does the unpleasant work in an open-allocation company? What characterizes the work that doesn’t get done under open allocation?

First, define “unpleasant”. 

Most people in most jobs dislike going to work, but it’s not clear to me how much of that is an issue of fit as opposed to objectively unpleasant work. The problem comes from two sources. First, companies often determine their project load based on “requirements” whose importance is assessed according to the social status of the person proposing it rather than any reasonable notion of business, aesthetic, or technological value, and that generates a lot of low-yield busywork that people prefer to avoid because it’s not very important. Second, companies and hiring managers tend to be ill-equipped to match people to their specialties, especially in technology. Hence, you have machine learning experts working on payroll systems. It’s not clear to me, however, that there’s this massive battery of objectively undesirable work on which companies rely. There’s probably someone who’d gladly take on a payroll-system project as an excuse to learn Python.

Additionally, most of what makes work unpleasant isn’t the work itself but the subordinate context: nonsensical requirements, lack of choice in one’s tools, and unfair evaluation systems. This is probably the most important insight that a manager should have about work: most people genuinely want to work. They don’t need to be coerced, and coercion will only erode their intrinsic motivation in the long run. In that light, open allocation’s mission is to remove the command system that turns work that would otherwise be fulfilling into drudgery. Thus, even if we accept that there’s some quantity of unpleasant work that any company will generate, it’s likely that the amount of it will decrease under open allocation, especially as people are freed to find work that fits their interests and specialty. What’s left is work that no one wants to do: a smaller set of the workload. In most companies, there isn’t much of that work to go around, and it can almost always be automated.

The Four Quadrants

We define work as interesting if there are people who would enjoy doing it or find it fulfilling– some people like answering phones– and unpleasant if it’s drudgery that no one wants to do. We call work essential if it’s critical to a main function of the business– money is lost in large amounts if it’s not completed, or not done well– and discretionary if it’s less important. Exploratory work and support work tend to fall into the “discretionary” set. These two variables split work into four quadrants:

  • First Quadrant: Interesting and essential. This is work that is intellectually challenging, reputable in the job market, and important to the company’s success. Example: the machine learning “secret sauce” that powers Netflix’s recommendations or Google’s web search.
  • Second Quadrant: Unpleasant but essential. These tasks are often called “hero projects”. Few people enjoy doing them, but they’re critical to the company’s success. Example: maintaining or refactoring a badly-written legacy module on which the firm depends.
  • Third Quadrant: Interesting but discretionary. This type of work might become essential to the company in the future, but for now, it’s not in the company’s critical path. Third Quadrant work is important for the long-term creative health of the company and morale, but the company has not been (and should not be) bet on it. Example: robotics research in a consumer web company.
  • Fourth Quadrant: Unpleasant and discretionary. This work isn’t especially desirable, nor is it important to the company. This is toxic sludge to be avoided if possible, because in addition to being unpleasant to perform, it doesn’t look good in a person’s promotion packet. This is the slop work that managers delegate out of a false perception of a pet project’s importance. Example: at least 80 percent of what software engineers are assigned at their day jobs.
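
Since the two variables are independent, the quadrants are just the four combinations of two booleans. Here is a minimal sketch of that classification (my own illustration; the quadrant names and outcomes are from the essay, the function itself is hypothetical):

```python
# Toy classifier for the essay's four quadrants of work, keyed on the
# two axes defined above: interesting vs. unpleasant, essential vs.
# discretionary.
def quadrant(interesting: bool, essential: bool) -> str:
    if interesting and essential:
        return "First: gets done, and done well"
    if not interesting and essential:
        return "Second: gets done, but management must pay for it"
    if interesting and not essential:
        return "Third: gets done because people enjoy it"
    return "Fourth: toxic sludge; under open allocation, no one does it"

# Example: a badly-written but business-critical legacy module.
print(quadrant(interesting=False, essential=True))
```

The one-liner nature of the sketch is the point: the only quadrant whose fate differs between open and closed allocation is the fourth.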

The mediocrity that besets large companies over time is a direct consequence of the Fourth Quadrant work that closed allocation generates. When employees’ projects are assigned, without appeal, by managers, the most reliable mechanism for project-value discovery– whether capable workers are willing to entwine their careers with it– is shut down. The result, under closed allocation, is that management does not get this information regarding what projects the employees consider important, and therefore won’t even know what the Fourth Quadrant work is. Can they recover this “market information” by asking their reports? I would say no. If the employees have learned (possibly the hard way) how to survive a subordinate role, they won’t voice the opinion that their assigned project is a dead end, even if they know it to be true.

Closed allocation simply lacks the garbage-collection mechanism that companies need in order to clear away useless projects. Perversely, companies are much more comfortable with cutting people than projects. On the latter, they tend to be “write-only”, removing projects only years after they’ve failed. Most of the time, when companies perform layoffs, they do so without reducing the project load, expecting the survivors to put up with an increased workload. This isn’t sustainable, and the result often is that, instead of reducing scope, the company starts to underperform in an unplanned way: you get necrosis instead of apoptosis.

So what happens in each quadrant under open allocation? First Quadrant work gets done, and done well. That’s never an issue in any company, because there’s no shortage of good people who want to do it. Third Quadrant work also gets enough attention, because people enjoy doing it. As for Second Quadrant work, that also gets done, but management often finds that it has to pay for it, in bonuses, title upgrades, or pay raises. Structuring such rewards is a delicate art, since promotions should represent respect but not confer power that might undermine open allocation. However, I believe it can be done. I think the best solution is to have promotions and a “ladder”, but for its main purpose to be informing decisions about pay, and not an excuse to create power relationships that make no sense.

So, First and Third Quadrant work are not a problem under open allocation. That stuff is desirable and allocates itself. Second Quadrant work is done, and well, but expensive. Is this so bad, though? The purpose of these rewards is to compensate people for freely choosing work that would otherwise be adverse to their interests and careers. That seems quite fair to me. Isn’t that how we justify CEO compensation? They do risky work, assume lots of responsibilities other people don’t want, and are rewarded for it? At least, that’s the story. Still, a “weakness” of open allocation is that it requires management to pay for work that they could get “for free” in a more coercive system. The counterpoint is that coerced workers are generally not going to perform as well as people with more pleasant motivations. If the work is truly Second Quadrant, it’s worth every damn penny to have it done well.

Thus, I think it’s a fair claim that open allocation wins in the First, Second, and Third Quadrant. What about the Fourth? Well, under open allocation, that stuff doesn’t get done. The company won’t pay for it, and no one is going to volunteer to do it, so it doesn’t happen. The question is: is that a problem?

I won’t argue that Fourth Quadrant work doesn’t have some value, because from the perspective of the business, it does. Fixing bugs in a dying legacy module might make its demise a bit slower. However, I would say that the value of most Fourth Quadrant work is low, and much of it is negative in value on account of the complexity that it imposes, in the same way that half the stuff in a typical apartment is of negative value. Where does it come from, and why does it exist? The source of Fourth Quadrant work is usually a project that begins as a Third Quadrant “pet project”. It’s not critical to the business’s success, but someone influential wants to do it and decides that it’s important. Later on, he manages to get “head count” for it: people who will be assigned to complete the less glamorous work that this pet project generates as it scales; or, in other words, people whose time is being traded, effectively, as a political token. If the project never becomes essential but its owner is active enough in defending it to keep it from ever being killed, it will continue to generate Fourth Quadrant work. That’s where most of this stuff comes from. So what is it used for? Often, companies allocate Third Quadrant work to interns and Fourth Quadrant work to new hires, not wanting to “risk” essential work on new people. The purpose is evaluative: to see if this person is a “team player” by watching his behavior on relatively unimportant, but unattractive, work. It’s the “dues paying” period and it’s horrible, because a bad review can render a year or two of a person’s working life completely wasted.

Under open allocation, the Fourth Quadrant work goes away. No one does any. I think that’s a good thing, because it doesn’t serve much of a purpose. People should be diving into relevant and interesting work as soon as they’re qualified for it. If someone’s not ready to be working on First and Second Quadrant (i.e., essential) work, then have that person in the Third Quadrant until she learns the ropes.

Closed-allocation companies need the Fourth Quadrant work because they hire people but don’t trust them. The ideology of open allocation is: we hired you, so we trust you to do your best to deliver useful work. That doesn’t mean that employees are given unlimited expense accounts on the first day, but it means that they’re trusted with their own time. By contrast, the ideology of closed allocation is: just because we’re paying you doesn’t mean we trust, like, or respect you; you’re not a real member of the team until we say you are. This brings us to the real “original sin” at the heart of closed allocation: the duplicitous tendency of growing-too-fast software companies to hire before they trust.