Programmer autonomy is a $1 trillion issue.

Sometimes, I think it’s useful to focus on trillion-dollar problems. While it’s extremely difficult to capture $1 trillion worth of value, as made evident by the nonexistence of trillion-dollar companies, innovations that create that much are not uncommon, and these trillion-dollar problems often have not one good idea, but a multitude of them to explore. So a thought experiment that I sometimes run is, “What is the easiest way to generate $1 trillion in value?” Where is the lowest-hanging trillion dollars of fruit? What’s the simplest, most obvious thing that can be done to add $1 trillion of value to the economy?

I’ve written at length about the obsolescence of traditional management: the convex input-output relationship in modern, creative work makes variance-reductive management counter-productive. I’ve also shared a number of thoughts about Valve’s policy of open allocation, and the need for software engineers to demand the respect accorded to traditional professions: sovereignty, autonomy, and the right to serve the profession directly, without the indirection imposed by traditional hierarchical management. So, for this exercise, I’m going to focus on the autonomy of software engineers. How much improvement will it take to add $1 trillion in value to the economy?

This is an open-ended question, so I’ll make it more specific: how many software engineers would we have to give a high degree of autonomy (open allocation) in order to add $1 trillion? Obviously, the answer is going to depend on the assumptions that are made, so I’ll have to answer some basic questions. Because there’s uncertainty in all of these numbers, the conclusion should be treated as an estimate. However, when possible, I’ve attempted to make my assumptions as conservative as possible. Therefore, it’s quite plausible that the required number of engineers is actually less than half the number that I’ll compute here.

Question 1: How many software engineers are there? I’m going to restrict this analysis to the developed world, because it’s hard to enforce cultural changes globally. Data varies, but most surveys indicate that there are about 1.5 million software engineers in the U.S. If we further assume the U.S. to be one-fifth of “the developed world”, we get 7.5 million full-time software engineers. Loosening the definition of “software engineer” or “developed world” would probably put the true total at two to three times that, but I think 7.5 million is a good working number.

Question 2: What is the distribution of talent among software engineers? First, here’s the scale that I developed for assessing the skill and technical maturity of a software engineer. Defining specific talent percentiles requires specifying a population, but I think the software engineer community can be decomposed into the following clusters, differing according to the managerial and cultural influences on their ability and incentive to gain competence.

Cluster A: Managed full-time (75%). These are full-time software engineers who, at least nominally, spend their working time either coding or on software-related problems (such as site reliability). If they were asked to define their jobs, they’d call themselves programmers or system administrators: not meteorologists or actuaries who happen to program software. Engineers in this cluster typically work on large corporate codebases and are placed in a typical hierarchical corporate structure. They develop an expertise within their defined job scope, but often learn very little outside of it. Excellence and creativity generally aren’t rewarded in their world, and they tend to evolve into the stereotypical “5:01 developers”. They tend to plateau around 1.2-1.3, because the more talented ones are usually tapped for management before they would reach 1.5. Multiplier-level contributions for engineers tend to be impossible to achieve in their world, due to bureaucracy, limited environments, project requirements developed by non-engineers, and an assumption that anyone decent will become a manager. I’ve assigned Cluster A a mean competence of 1.10 and a standard deviation of 0.25, meaning that 95 percent of them are between 0.6 and 1.6.

Cluster B: Novices and part-timers (15%). These are the non-engineers who write scripts occasionally, software engineers in their first few months, interns and students. They program sometimes, but they generally aren’t defined as programmers. This cluster I’ve given a mean competence of 0.80 and a standard deviation of 0.25. I assign them to a separate cluster because (a) they generally aren’t evaluated or managed as programmers, and (b) they rarely spend enough time with software to become highly competent. They’re good at other things.

Cluster B isn’t especially relevant to my analysis, and it’s also the least well-defined. Like the Earth’s atmosphere, its outer perimeter has no well-defined boundary. Uncertainty about its size is also the main reason why it’s hard to answer questions about the “number of programmers” in the world. The percentage would increase (and so would the number of programmers) if the definition of “programmer” were loosened.

Cluster C: Self-managing engineers (10%). These are the engineers who either work in conditions of unusual autonomy (being successful freelancers, company owners, or employees of open-allocation companies) or who exert unusual efforts to control their careers, education and progress. This cluster has a larger mean competence and variance than the others. I’ve assigned it a mean of 1.5 and a standard deviation of 0.35. Consequently, almost all of the engineers who get above 2.0 are in this cluster, and this is far from surprising: 2.0-level (multiplier) work is very rare, and impossible to get under typical management.

If we mix these three distributions together, we get the following profile for the software engineering world:

Skill   Percentile
1.0     38.37
1.2     65.38
1.4     85.24
1.6     94.47
1.8     97.87
2.0     99.23
2.2     99.78
2.4     99.95
2.6     99.99

How accurate is this distribution? Looking at it, I think it probably underestimates the percentage of engineers in the tail. I’m around 1.7-1.8 and I don’t think of myself as a 97th-percentile programmer (probably 94-95th). It also says that only 0.77 percent of engineers are 2.0 or higher (I’d estimate it at 1 to 2 percent). I would be inclined to give Cluster C an exponential tail on the right, rather than the quadratic-exponential decay of a Gaussian, but for the purpose of this analysis, I think the Gaussian model is good enough.

Question 3: How much value does a software engineer produce? Marginal contribution isn’t a good measure of “business value”, because it varies too widely with specifics (such as company size), but I wouldn’t say that employment market value is a good estimate either, because there’s a surplus of good people who (a) don’t know what they’re actually worth, and (b) are therefore willing to work for low compensation, especially because a steady salary removes variance from compensation. Companies know that they make more (a lot more) on the most talented engineers than on average ones, if they have high-level work for them. The better engineer might be worth ten times as much as the average, but only cost twice as much. So I think of it in terms of abstract business value: given appropriate work, how much expected value (ignoring variance) can that person deliver?

This quantity I’ve assumed to be exponential with regard to engineer skill, as described above: an increment of 1.0 is a 10-fold increase in abstract business value (ABV). Is this the right multiplier? It indicates that an increment of 0.3 is a doubling of ABV, or that a 0.1-increment is a 26 percent increase in ABV. In my experience, this is about right. For some skill-intensive projects, such as technology-intensive startups, the gradient might be steeper (a 20-fold difference between 2.0 and 1.0, rather than 10) but I am ignoring that, for simplicity’s sake.

The ABV of a 1.0 software engineer I’ve estimated at $125,000 per year in a typical business environment, meaning that it would be break-even (considering payroll costs, office space, and communication overhead) to hire one at a salary around $75,000. A 95th-percentile engineer (1.63) produces an ABV of $533,000 and a 99th-percentile engineer (1.96) produces an ABV of $1.14 million. These numbers are not unreasonable at all. (In fact, if they’re wrong, I’d bet on them being low.)
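In other words, with my assumed $125,000 baseline, ABV is $125,000 × 10^(skill − 1). A short sketch confirms the figures above, and the doubling claim from the previous paragraph:

```python
def abv(skill, baseline=125_000):
    """Abstract business value per year: 10x per 1.0-point skill increment."""
    return baseline * 10 ** (skill - 1.0)

print(f"${abv(1.63):,.0f}")  # 95th percentile: ~$533,000
print(f"${abv(1.96):,.0f}")  # 99th percentile: ~$1.14 million
print(f"{abv(1.3) / abv(1.0):.2f}x")  # a 0.3 increment roughly doubles ABV
```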

This doesn’t, I should say, mean that a 99th-percentile engineer will reliably produce $1.14 million per year, every year. She has to be assigned appropriate, at-level work. Additionally, that’s an expected return, and the variance is quite high at the upper end. She might produce $4 million in one year under one set of conditions, and zero in another. Since I’m interested in the macroeconomic effect of increasing engineer autonomy, I can ignore variance and focus on mean expectancy alone. This sort of variance is meaningful to the individual (it’s better to have a good year than a bad one) and to small companies, but the noise cancels itself out at the macroeconomic scale.

Putting it all together: with these estimates of the distribution of engineer competence, and the ABV estimates above, it’s possible to compute an expected value for a randomly-chosen engineer in each cluster:

Cluster A: $185,469 per year.

Cluster B: $89,645 per year.

Cluster C: $546,879 per year.

All software engineers: $207,236 per year.
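These per-cluster figures can be reproduced, to within a few percent, from the closed form for the mean of a lognormal variable: if skill X ~ N(μ, σ²), then E[10^X] = 10^μ · exp((σ ln 10)² / 2). A sketch, assuming the $125,000 baseline from above (the small discrepancy on Cluster B suggests my original numbers came from simulation rather than this formula):

```python
import math

BASELINE = 125_000  # yearly ABV of a 1.0 engineer, as estimated above

def expected_abv(mu, sd):
    """E[BASELINE * 10^(X - 1)] for X ~ N(mu, sd^2), via the lognormal mean."""
    return BASELINE * 10 ** (mu - 1) * math.exp((sd * math.log(10)) ** 2 / 2)

for name, mu, sd in [("A", 1.10, 0.25), ("B", 0.80, 0.25), ("C", 1.50, 0.35)]:
    print(f"Cluster {name}: ${expected_abv(mu, sd):,.0f} per year")
```

Note that each cluster’s expected ABV sits well above the ABV of its mean engineer: the exponential value model means the right tail dominates the average.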

Software engineers can evolve in all sorts of ways, and improved education or more longevity might change the distributions of the clusters themselves. I’m not going to model that, because there are so many possibilities. Nor am I going to speculate on macroeconomic business changes that would alter the ABV figures. I’m going to focus on one aspect of economic change only, although there are others with additional positive effects: the evolution of an engineer from the micromanaged world of Cluster A to the autonomous, highly productive one of Cluster C. (Upward movement within clusters is also relevant, and it strengthens my case, but I’m excluding it for now.) I’m going to assume that this evolution takes 5 years of education and training, and that an engineer’s career lasts 20 years (an underestimate, justified by the time value of money). That leaves 15 years to reap the benefits in full, plus two years’ worth of credit for partial improvement during the five-year evolution: 17 effective years.

What this means is that each engineer who evolves in this way generates $361,410 per year in value for 17 years, or $6.14 million per engineer. That is the objective benefit that accrues to society, in engineer skill growth alone, when a software engineer moves out of a typical subordinate context and into one like Valve’s open-allocation regime. Generating $1 trillion in this way requires liberating 163,000 engineers, or a mere 2.2% of the total pool. That holds even if (a) the number of software engineers remains the same (instead of increasing due to improvements to the industry) and (b) other forms of technological growth, which would increase the ABV of a good engineer, stop, although it’s extremely unlikely that they will. Also, there are the ripple effects (in terms of advancing the state of the art in software engineering) of a world with more autonomous and, thus, more skilled engineers. All of these are substantial, and they improve my case even further, but I’m setting them aside in the interest of pursuing a lower bound for value creation. What I can say, with ironclad confidence, is that the movement of 163,000 engineers into an open-allocation regime will, by improving their skills and, over the years, their output, generate $1 trillion at minimum. That it might produce $5 or $20 trillion (or much more, in terms of long-term effects on economic growth) in eventual value, through multiplicative “ripple” effects, is more speculative. My job here was just to get to $1 trillion.
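The arithmetic behind those headline numbers, using the cluster expectations computed earlier:

```python
gain_per_year = 546_879 - 185_469   # Cluster C expectation minus Cluster A
effective_years = 17                # 15 full years + 2 for partial improvement
value_per_engineer = gain_per_year * effective_years

engineers_needed = 1_000_000_000_000 / value_per_engineer
print(f"${gain_per_year:,}/year -> ${value_per_engineer:,} per engineer")
print(f"{engineers_needed:,.0f} engineers, or "
      f"{100 * engineers_needed / 7_500_000:.1f}% of the 7.5 million pool")
```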

In simpler terms, the technological economy into which our society needs to graduate is one that requires software (not to mention mathematical and scientific) talent at a level that would currently be considered elite, and the only conditions that allow such levels of talent to exist are those of extremely high autonomy, as observed at Valve and pre-apocalyptic Google. Letting programmers direct their own work, and therefore take responsibility for their own skill growth is, multiplied over the vast number of people in the world who have to work with software, easily a trillion-dollar problem. It’s worth addressing now.

Programmers don’t need a union. We need a profession.

Every so often, I read a blog post or essay about the undesirable state of the software industry, and occasionally someone will suggest that we ought to unionize, in order to put an end to long hours and low pay compared to the value we produce. The argument is that, because software engineers are potentially worth millions per year to their employers, collective bargaining is the best way to capture more of this value. I disagree that a labor union is the way to go, because of the highly variable output of a software engineer, and the need for continuing education in this line of work. What we actually need is a profession.

Unions work best for commodity labor, and I use that term non-pejoratively. Commodity work is easily measurable, and people can often be individually evaluated for performance. For example, a fishing boat operator is measured according to the quantity of fish she procures. A lot of very important work is commodity labor, so I don’t intend to disparage anyone by using that term. Commodity work can be unionized because there aren’t large and often intangible discrepancies in quality of output, and collective bargaining is often the best way to ensure that the workers are fairly compensated for the value they produce. Software is not commodity work, however. It’s difficult to measure quality, and the field is so specialized that engineers are not remotely interchangeable. When the work is difficult to measure and large disparities of quality exist, a different structure is required: one that is less egalitarian (in the sense that top performers can receive high compensation, because it’s essential to encourage people to improve themselves) and more self-regulatory. That structure is a profession.

The term professional is one of the most overused words in the English language, conflated often with white-collar work in general. If you work in an air-conditioned environment, someone will call you a professional. Most white-collar workers are not professionals, even in industries like investment banking, consulting, and management. Professionalism has nothing to do with social status, job category, or behavior (in the sense of an “unprofessional comment”). Rather, it’s all about a certain style of ethical structure, and the structure of a profession is nearly nonexistent in most white-collar industries (and under attack within the true professions).

What is a profession?

A profession is an attempt to impose global structure over a category of specialized, cognitively intensive work where the quality of output has substantial ramifications, but is difficult (if not, in the short term, impossible) to measure, giving ethics and competence primary importance. A profession is needed when it’s clear that not everyone can perform the work well, especially without specialized training. Here are some traits that signify the existence of a profession.

  1. Ethical obligations that supersede managerial authority. Professions have global ethical standards that the professional may not break, even under pressure from an employer. “Following orders” is not a defense. Therefore, the professional is both allowed and required to be autonomous in ethical judgment. It is rare for people to be fired, in true professions, because of disagreements with managers or political misfortune, but unethical behaviors are punished severely (in the worst cases, leading not only to termination but expulsion from the profession). This said, ethical demands on a profession may not match the common sense we have of ethics; they must be industry-specific. For example, attorneys’ stringent requirement to keep client information secret (attorney-client privilege) supersedes concerns regarding whether the client’s behavior itself is ethical; it’s not for the attorney to make that call.
  2. Weak power relationships. This is directly related to the above. In order to prevent the ethical lapses that are disturbingly common in non-professional work, and the abuses of power that are often behind those, professions deliberately weaken supervisory power relationships, with the intention of making it difficult for a manager to unilaterally fire an employee or damage his reputation. The result of this is that professionals answer to their companies or the profession directly. Professions attempt deliberately to be “bossless” so people will do what’s right rather than what’s politically expedient.
  3. Continuing improvement and self-direction as requirements. The professions assume change both in terms of what kinds of work will be valuable, and of what tools will be available to do it, and expect people to dedicate time to adapt to it without managerial permission or direction.
  4. Allowance for self-direction. Professionals are expected to place their career objectives at a high priority, since they serve the profession and their employers by becoming better at their work. Traditionally, metered work (that is, work directly relevant to the line of business) expectations in professions have been about 800 to 1200 hours per year, with the expectation that an equal amount will be spent on “off-meter” work such as attending conferences, reading journals, pursuing exploratory projects, mentoring others, and (at senior levels) looking for business or supporting the profession.
  5. Except in egregious cases, an agreement between employee and firm to serve each other’s interests, even after employment ends. Professionals are almost never fired, in the cinematic sense of the word. If a firm wants to get rid of someone, that person is encouraged to look for new employment on working time, and is therefore allowed to represent himself as employed and retain an income during the job search. When people actually are fired, they’re typically offered a severance. In the rare case that a professional firm must lay people off or close an office for economic reasons, it gives as much notice as it can of the change, and announces the layoff publicly to protect departing employees’ images. That employee will almost always be given a positive reference (not the neutral “name and dates” reference that is often taken as negative) in exchange for his time. Efforts are made to protect the reputations of departing employees, who are expected to return the favor by speaking well of previous companies and colleagues. Additionally, employees who leave the firm are treated as alumni rather than deserters.

If I were to sum it up, I’d say that professionalism is about liberal service, and my use of the word “liberal” has nothing to do with political ideology. Rather, it’s the same etymology as “liberal arts”, where “liberal” means “pertaining to free people”. Professional service is that which is judged to be intellectually and ethically demanding enough that it should only be done by free people, not serfs or “wage slaves” who lack the autonomy for trust to be placed in them. The efforts that professions exert to curtail managerial authority are in place to prevent situations of ethical compromise, as well as political influences that might result in inferior work.

This is not to say that the professions are workplace utopias. I doubt that any workplace is perfectly clean, politically speaking, and the imperatives of a profession may improve behavior, but can’t change human nature wholesale. What I would argue, however, is that the global structure and protection of a profession protects the individual worker, to some degree, from local adversity. A doctor can be fired by his supervisor, but he’s still a doctor. Regular workers lose their credibility when their employment ends. Professionals don’t.

It’s all about the “Bozo Bit”.

The difference between a profession and regular work is the default assumption about an unknown person’s character, intellect, and competence. In typical industrial work, the “Bozo Bit” starts out in the “On” position, meaning that a typical worker is assumed to be stupid, treacherous, and useless. Since barriers to entry are low, the only thing that defines a worker is wanting money, and being willing to do unpleasant tasks in order to get it. There’s no respect for the average, individual member in such an industry; the default assumption is incompetence, ethical depravity, and childlike stupidity. One has to prove non-bozoism, which is usually established with a managerial role or other formal sanction: a “manager-equivalent” job title. (Non-professional companies typically make the work environment bad enough for non-managers that the default assumption is that anyone competent will become a manager.) Proving non-bozoism is often very difficult to do, because workers who attempt to do so are often fired for attempting to make their case instead of doing their assigned work. If you’re assumed to be a bozo, you probably won’t get the autonomy that would enable you to prove otherwise.

In this style of working environment, people without official managerial sanction (such as coveted job titles) have no credibility and therefore can be discarded on a whim. People with such decorations have only minimal power, because they can be deprived of these assets immediately, and often unilaterally. The result is an environment where all the power is held by the people who disburse funds, job titles, performance reviews, and authority, and in which the workers have none. Except at the top, there is no autonomy, because a manager can reduce the worker’s credibility to zero for any reason or no reason at all.

Professions clear the Bozo Bit (i.e. default it to “Off”) by making it difficult for incompetent people to enter, and punishing objectively unethical people severely. This gives people the assurance that a professional is highly likely to be both competent and ethical. In doing so, they create a high-trust zone. A person who completes medical school is a doctor and, unless the license is revoked because of a serious ethical lapse, has valuable expertise. No manager or employer can take that credibility away for political reasons.

Is software engineering a profession?

No. It’s not.

Professionals have the right to disobey managerial authority if they believe they are ethically right in doing so. On the other hand, a software engineer who refuses to cut corners or hide defects under managerial instruction has no recourse and will probably be fired.

Professionals take great pains to avoid disparaging others’ work in public, preferring to keep feedback private. Many software engineers work in companies where performance reviews are part of an employee’s packet any time she wishes to transfer, and where a negative review can leave a person internally blacklisted.

Professionals are allowed and expected to dedicate about half their working time to career development and continuing education. Many software engineers would be fired if they were caught doing so.

Professionals have an external credibility that is independent of reporting to a specific manager or being employed at a specific firm, which enables them to serve the profession and the abstract ethical principles it values. The rest of workers don’t. They have managers who can unilaterally fire them from their current jobs (or, at least, damage their reputations through the aggressive performance review systems that are now the norm). Therefore, they have no option other than to serve their direct manager’s career goals, even if those diverge from the company’s needs or the employee’s long-term career objectives.

Most software engineers fall into the “rest of workers” category. They don’t have the right to buck managerial authority, and companies rarely allot time for their continuing education and career needs. Some have to fucking use vacation days to attend conferences!

Are the professions still relevant?

In Corporate America, the professions have been losing ground for at least half a century. Attorneys’ metered work demands have grown substantially, so that associates can’t participate in the off-meter work on which their careers depend, resulting in slowed development. It’s now typical that a “biglaw” associate is expected to log 2000 billable hours (metered work) and it’s therefore nearly impossible for her to build the business contacts necessary for transition into partnership. Academia and basic research have lost most of their funding and prominence. Professionalism remains in medicine for doctors, but the aggressively anti-professional insurance companies are working just as hard to deny care as doctors do to deliver it. Journalism is becoming more about entertainment, as objectivity is thrown to the wind by many participants. In sum, it seems that all of the professions have seen an erosion of status and, for lack of a better word, professionalism over the past few decades.

Why is the corporate world killing the professions? There’s an irreducible disparity between the professions and corporations. Professions are republics that exist to serve global, ethical objectives, and professionals happen to make money because the service they provide is valuable to others. Corporations are autocratic, amoral machines that exist to make money regardless of whether they provide any useful service, and the scope of service expected is local: take direct orders, don’t think for yourself. Workers serve their managers, who serve executives, and executives serve the short-term vacillations of the company’s stock price. A worker doesn’t have the right to prioritize work that she considers more beneficial (for the company, or the industry) than what her manager has told her to do. In the non-professional world, she doesn’t get to make that call.

Contemporary corporate leaders consider the professions to be archaic (I consider them to be archaic, but I won’t go there just yet) relics of a pre-capitalist era, unable to compete in a more ruthless business environment. This explains their scorn for government officials, journalists, and most especially professors who are “insulated” from the market. From a business person’s perspective, what the professions value is, if not supported by the market, then not worth preserving. (One exception: attorney-client privilege is a very strong ethical demand for lawyers, and business leaders are very glad for that one!) For this reason, they’re hostile toward the core ideas coming out of professions: the right to resist authority, the expectation of doing what is right rather than what is expedient, the diversion of half one’s working time into continuing growth and development, and the encouragement to seek external visibility that gives a person credibility independent of employment status. Finally, business leaders view professions, which intercede against managerial authority and thereby confer benefits on the professionals, as extortive institutions like guilds, or perhaps even command economies.

Why are the business leaders wrong? They are right about the superiority of market economies over command economies, but I don’t think that point is in contention anymore. They are wrong because they believe the professions can’t thrive in the market. That they can has been proven by a Seattle video game company called Valve, one of the only companies to truly professionalize software development, and one of the most successful software companies on Earth.

I’ve written a lot about Valve and its open allocation policy, under which engineers are trusted to move their desks to join another team as they wish. There’s no transfer process, and teams don’t have “headcount” constraints. Projects that no one wants to do don’t get done, unless they are genuinely important enough for the executives to create an incentive to do them. Project supervision is driven by leadership and consensus rather than executive authority. What is open allocation about, philosophically? (For more about Valve’s open allocation, read this excellent blog post by Yanis Varoufakis.) It’s about professionalizing game and software development. Engineers are actually trusted to work for the company, rather than for a manager who might use authority to divert their efforts toward the manager’s own goals (including preservation of the power relationship). Valve’s employees have an ethical commitment they are expected to hold (make Valve better) but they have autonomy in how they fulfill it. They live in the sort of high-trust zone that professions exist to create.

Not only do I believe that the professions can thrive in the market, but I think that professionalized software development will be superior to the industrial-age managed framework. Most of the progress in software comes from creative successes, which can be thought of as black swans. It is impossible to manage them into existence, and subordination will kill them before they are born. The best you can do, if you have managerial authority, is to use it as rarely as possible. At scale, you can be confident that people will come up with great ideas, and that enough of them will be profitable to justify the time and resources committed.

Ethics of a Technologist

Should software become a profession? I would say that the answer is a resounding “yes”. This emerging profession should not require expensive schooling. One of the things that’s great about programming is that no educational credential is required to enter, and I think we should keep it that way. However, I think that the idea of a global ethical structure is a sound one, and here are some thoughts.

I am using the term “technologist” for two reasons. The first is that I don’t want to limit my scope to software engineers, but to include a larger set of people who work on technology, such as designers and software testers and startup founders. The second is that the most important thing for a technologist to do, at a given time, may not be to write code. Sometimes, writing more code is not the answer: cleaning the existing stuff is better. As professionals, we should expect ourselves to do what is right for the problem we are trying to solve, not what allows us to write the most code.

Here are some basic, bedrock ethical principles that I think should be part of the technologist profession.

  1. Technologists do not create an inferior product for personal or single-party gain. We do not create “logic bombs” to extort clients or “back doors” that allow us to exploit systems we create. We do not create bugs for the purpose of “finding” them later. While we cannot ensure perfect software quality (it’s mathematically impossible to do so) we will not compromise on quality unless we believe it to be in the interest of all parties involved. We deliver the best product or service we can with the resources given to us, and if the resources given are not enough, we voice that concern.
  2. Technologists collaborate. We do not compete by harming another’s performance. We believe the world to be positive-sum and our industry is structurally cooperative. Therefore, to the extent that we compete, it’s in the direction of making ourselves better at our jobs, not making others worse. We also do not make technical decisions for the purpose of reducing another technologist’s ability to perform.
  3. Technologists do not disparage another’s performance to a non-technologist. Ever. All people with managerial authority are considered “non-technologists”, in the context of this item. Put simply: we don’t sell each other to outsiders. People who break this policy are fired for life from the profession. We handle our own affairs, period. If we need to remove a technologist from our team for reasons of incompetence or non-performance, we have the right to tell a non-technologist that this person cannot continue as a member of the team, and we are neither obligated nor allowed to give further reasons. We handle matters among ourselves, and do not attempt to use managerial warlords for personal gain. It’s up to us to form and disband teams, and to expel problematic members.
  4. Technologists choose their own leaders. It is not for non-technologist meddlers to decide who are the leaders of our groups. We shall not answer to stooges selected by executives. We choose our leaders, typically through democratic processes, and leaders who fail to serve the groups they are supposed to lead shall lose that distinction.
  5. Technologists work for the greater good of the company that employs them, the Technologist profession, and the greater good of the world. We serve the world first; our job is to make it better by improving technical processes, solving difficult but important problems, and advancing the state of science and rationality. Secondarily, we serve the Technologist profession and its values. As a tertiary concern, we act in the interests of our employers. We solve their technical problems and work to improve their infrastructure. Technologists are trusted to serve their employers directly, and any management that exists must be purely for providing guidance, not an irresistible authority. As a quaternary concern, technologists are expected and allowed to prioritize their own career growth, making themselves more valuable in the future.
  6. Technologists are trusted to work on any project that will have them. It is the right of the project owner to allow (or expel) members. Projects shall not be constrained by “head count” limitations set by non-technologists, who are to be deemed unqualified to make such determinations.
  7. Technologists deliver on commitments they make. If a technologist cannot meet a commitment, he or she explains the cause as soon as possible, and attempts in earnest to address the shortfall. Additionally, it is not for managers to pressure technologists into making commitments that they would otherwise consider unreasonable. This “always deliver” policy only applies to freely made commitments, not any made under managerial duress.
  8. Technologists have the right to refuse work, unless it is of existential importance to the firm, or work that they have freely committed to doing. The only time a technologist can be required to complete a project he or she did not choose is when the company is at credible risk of failure or catastrophic loss if the work is not completed. Even then, it is best for the company to try to provide incentives (such as bonuses or promotions) before resorting to authority.
  9. Technologists have the right to inquire about other specialties without facing professional repercussions. Technologists are not fired or dismissed as “not a team player” when they voice concerns about a discrepancy between the work expected of them and the direction they want to take in their career, or the specialties into which they want to grow. Rather, technologists are encouraged to be direct and forthright about the specialties that interest them, and companies shall make reasonable efforts to allow them to find appropriate work.
  10. Making mistakes is tolerable; honesty about mistakes is required. The employment arena for a technologist must be one where people are not punished for making mistakes, or for discovering and revealing mistakes made by others. However, knowingly hiding a mistake, when it risks harm to others, is a grave ethical fault.
  11. Terminated employees receive proper career support. Companies will sometimes need to fire a technologist, if he is unable to lead (as determined by other technologists) or follow within the context of the firm. When companies terminate, they always grant the right to represent oneself as employed during the job search, and provide a positive reference unless the employee was fired for a grave ethical breach. Companies that would be considered (reasonable person standard) able to afford severance pay it, in a large enough amount to cover the expected time for a job search. This does not apply to pre-funded startups (the risk is well-known) or companies in financial distress (that can’t afford it).
  12. Companies and investors do not create “no poach” or “do not hire” lists. Anyone who breaks these rules is censured severely and turned over to law enforcement. Investors are also disallowed from communicating any information about a technologist that might prevent him from getting further investment, unless a formal breach of ethics was involved.
  13. 1200 hours of metered work, per year, as a maximal obligation under normal circumstances. The general expectation is that a full-time technologist will deliver 400 to 1200 hours per year of metered work (work that is relevant to the direct line of business, and non-discretionary). Companies have the right to increase metered work expectations in unusual circumstances. It is up to the company to decide what those circumstances are, but if they exist, they should be disclosed before an employment agreement begins (e.g. these are situations we consider abnormal, and these are the expectations we’ll have if they occur). Otherwise, the default assumption is that the company should expect no more than this level of direct dedication. Expecting anything more is to expect the employee to take on unreasonable risk of career stagnation. This 1200-hour standard shall be pro-rated for people who have part-time employment agreements.
  14. 800 hours of off-meter work as a minimal expectation of the technologist. “Full time” (2000 hours per year) technologists are usually expected to dedicate the remaining (800 or more) hours to off-meter work: continuing education, attending conferences, exploratory work and research, career-directed open source work, and pursuit of other specialties. This is not an obligation per se, but technologists should expect to spend this time if they wish to be maximally relevant, and companies shall be expected to allow time for it. This eliminates the excuse that many programmers have currently for stagnation and mediocrity, which is that their bosses won’t allocate time for growth. If we become a technologist profession, we will make allowance for such time an inflexible pillar.

This is just a starting set of principles. I’m sure I will remember more, but my intent here is to indicate what a technologist profession will look like.

One final thought: selectivity

My word count’s getting high, indicating that I should wrap this up, but I’d like to address one other concern: Who do we let in? What’s the barrier to entry? Here are my initial thoughts.

No formal education is required to become a technologist. Technologists are expected to have the breadth of knowledge of an average college graduate from a top-50 university, but how they get it is up to them. Age, socioeconomic status, educational matriculations, and national origin are irrelevant. Anyone with the competence can be a technologist, but we shall set the bar for competence very high. I think the best model for this is actuarial science in the United States, where progress is exam-based. Classes and study guides are available, but not required. Additionally, the profession requires and expects that associates will dedicate a significant fraction of their working time to studying for the exams.

There are a few tweaks I’d make to this system. The first is that, instead of a linear series of eight or ten exams, there shall be a larger number, with some being elective. The courses on technological ethics, basic mathematics, code quality and scientific thought would generally be required, for example. Machine learning, startup economics, and compilers courses would be optional. The second is that some courses would require code, and most courses would have a non-exam option whereby, for example, a high-quality open-source contribution could be substituted for a typical exam. (This is because, while most people who are “not a good test taker” are just lazy, the condition does exist and alternative evaluation is appropriate for making the profession maximally inclusive of talent.)

The purpose of these exams would be to provide an alternative path to success and true independent credibility for technologists, and to deprive parochial managers of the ability to reduce a technologist’s credibility to zero. They would not necessarily be required for a technologist to have employment, but they would be designed to be difficult and relevant enough to increase a professional’s employability dramatically– so dramatically as to give the technologist true independence of managerial authority.

That is all I have to say on this matter tonight, so I yield the floor.

The world sucks at finding the right work for engineers.

This is directly in response to Matt Aimonetti’s “Engineers Suck at Finding the Right Jobs”, because I disagree that the problem resides solely with engineers. Rather, I think the problem is bilateral. An equally strong argument could be made that there’s an immense wealth of engineering talent (or, at least, potential) out there, but that our contemporary business leadership lacks the vision, creativity, and intelligence to do anything with it.

Don’t get me wrong: I basically agree with what he is saying. Most software engineers do a poor job at career management. A major part of the problem is that the old-style implicit contract between employers and employees has fallen to pieces, and people who try to follow the old rules will shortchange themselves and fail in their careers. In the old world, the best thing for a young person to do was to take a job– any job– at a reputable company and just be seen around the place, and eventually graduate into higher quality of work. Three to five years of dues-paying grunt work (that had little intrinsic career-building value, but fulfilled a certain social expectation) was the norm, but this cost only had to be paid once. The modern world is utterly different. Doing grunt work does nothing for your career. There are people who get great projects and advance quickly, and others who get bad projects and never get out of the mud. Our parents lived in a world where “90 percent is showing up”, whereas we live in one where frequent job changes are not only essential to a good career, but often involuntary.

Software engineering is full of “unknown unknowns”, which means that most of us have a very meager understanding of what we don’t know, and what our shortcomings are. We often don’t know what we’re missing. It’s also an arena in which the discrepancy between the excellent and the mediocre isn’t a 20 to 40 percent difference, but a factor of 2 to 100. Yet to become excellent, an engineer needs excellent work, and there isn’t much of that to go around, because most managers and executives have no vision. In fact, there’s so little excellent work in software that what little there is tends to be allocated as a political favor, not given to those with the most talent. The most important skill for a software engineer to learn, in the real world, is how to get allocated to the best projects. What I would say distinguishes the most successful engineers is that they develop, at an early age, the social skills to say “no” to career-negative grunt work without getting fired in the process. That’s how they take the slack out of their careers and advance quickly.

That’s not the same as picking “the right jobs”, because engineers don’t actually apply to specific work sets when they seek employment, but to companies and managers. Bait-and-switch hiring practices are fiendishly common, and many companies are all-too-willing to allocate undesirable and even pointless work to people in the “captivity interval”, which tends to span from the 3rd to the 18th month of employment, at which point leaving will involve a damaging short job tenure on the resume. (At less than 3 months, the person has the option of omitting that job, medium-sized gaps being less damaging than short-term jobs, which suggest poor performance.) I actually don’t think there’s any evidence to indicate that software engineers do poorly at selecting companies. Where I think they are abysmal is at the skill of placing themselves on high-quality work once they get into these companies.

All of this said, this matter raises an interesting question: why is there so much low-quality work in software? I know this business well enough to know that there aren’t strong business reasons for it to be that way. High-quality work is, although more variable, much more profitable in general. Companies are shortchanging themselves as much as their engineers by having a low-quality workload. So why is there so little good work to go around?

I’ve come to the conclusion that most companies’ workloads can be divided into four quadrants based on two variables. The first is whether the work is interesting or unpleasant. Obviously, “interestingness” is subjective, so I tend to assume that work should be called interesting if there is someone out there who would happily do it for no more (and possibly less) than a typical market salary. Some people don’t have the patience for robotics, but others love that kind of work, so I classify it in the “interesting” category, because I’m fairly confident that I could find someone who would love to do that kind of work. For many people, it’s “want-to” work. On the other hand, the general consensus is that there’s a lot of work that very few people would do, unless paid handsomely for it. That’s the unpleasant, “have-to” work.

The second variable is whether the work is essential or discretionary. Essential work involves a critical (and often existential) issue for the company. If it’s not done, and not done well, the company stands to lose a lot of money: millions to billions of dollars. Discretionary work, on the other hand, isn’t in the company’s critical path. It tends to be exploratory work, or support work that the firm could do without. For example, unproven research projects are discretionary, although they might become essential later on.

From these two variables, work can be divided into four quadrants:

Interesting and Essential (1st Quadrant): an example would be Search at Google. This work is highly coveted. It’s both relevant and rewarding, so it benefits an employee’s internal and external career goals. Sadly, there’s not a lot of this in most companies, and closed-allocation companies make it ridiculously hard to get it.

Unpleasant and Essential (2nd Quadrant): grungy tasks like maintenance of important legacy code. This is true “have-to” work: it’s not pleasant, but the company relies on it getting done. Boring or painful work generally doesn’t benefit an employee’s external career, so well-run companies compensate by putting bonuses and internal career benefits (visibility, credibility, promotions) on it: a market solution. These are “hero projects”.

Interesting and Discretionary (3rd Quadrant): often, this takes the form of self-directed research projects and is the domain of “20% time” and “hack days”. This tends to be useful for companies in the long term, but it’s not of immediate existential importance. Unless the project were so successful as to become essential, few people would get promoted based on their contributions in this quadrant. That said, a lot of this work has external career benefits, because it looks good to have done interesting stuff in the past, and engineers learn a lot by doing it.

Unpleasant and Discretionary (4th Quadrant): this work doesn’t look good in a promotion packet, and it’s unpleasant to perform. This is the slop work that most software engineers get because, in traditional managed companies, they don’t have the right to say “no” to their managers. The business value of this work is minimal and the total value (factoring in morale costs and legacy) is negative. 4th-Quadrant work is toxic sludge that should be avoided.

One of the reasons that I think open allocation is the only real option is that it eliminates the 4th-Quadrant work that otherwise dominates a corporate atmosphere. Under open allocation, engineers vote with their feet and tend to avoid the 4th-Quadrant death marches.

The downside of open allocation, from a managerial perspective, is that the non-coercive nature of such a regime means that managers have to incent people to take on 2nd-Quadrant work, often with promotions and large (six- or seven-figure) bonuses. It seems expensive. Closed allocation enables managers to get the “have-to” work done cheaply, but there’s a problem with that. Under closed allocation, people who are put on these unpleasant projects often get no real career compensation, because management doesn’t have to give them any. So the workers put on such projects feel put-upon and do a bad job of it. If the work is truly 2nd-Quadrant (i.e. essential) the company cannot afford to have it done poorly. It’s better to pay for it and get high quality than to coerce people into it and get garbage.

The other problem with closed allocation is that it eliminates the market mechanic (workers voting with their feet) that allows this quadrant structure to become visible at all, which means that management in closed-allocation companies won’t even know when it has a 4th-Quadrant project. The major reason why closed-allocation companies load up on the toxic 4th-Quadrant work is because they have no idea that it’s even there, nor how to get rid of it.

There’s no corporate benefit to 4th-Quadrant work. So what incentive is there to generate it? Middle management is to blame. Managers don’t care whether the work is essential or discretionary, because they just want the experience of “leading teams”. They’re willing to work on something less essential, where there’s less competition to lead the project (and also a higher chance of keeping one’s managerial role) because their careers benefit either way. They can still say they “led a team of 20 people”, regardless of what kind of work they actually oversaw. Middle managers tend to take what little interesting work these discretionary projects contain for themselves, placing themselves in the 3rd Quadrant, while leaving the 4th-Quadrant work to their reports.

This is the essence of what’s wrong with corporate America. Closed allocation generates pointless work that (a) no one wants to do, and (b) provides no real business value to the company. It’s a bilateral lose-lose for the company and workers, existing only because it suits the needs of middlemen.

It’s common wisdom in software that 90 to 95 percent of software engineers are depressingly mediocre. I don’t know what the percentage is, but I find that to be fairly accurate, at least in concept. The bulk of software engineers are bad at their jobs. I disagree that this mediocrity is intrinsic. I think it’s a product of bad work environments, and the compounding effect of bad projects and discouragement over time. The reason there are so many incompetent software engineers out there is that the work they get is horrible. It’s not only that they lack the career-management skills to get better work; it’s also that good work isn’t available to them when they start out, and it becomes even less available over time as their skills decline and their motivation and energy levels head toward the crapper.

I don’t see any intrinsic reason why the world can’t have ten, or even a hundred, times as many competent software engineers as it has now, but the dumbed-down corporate environment that most engineers face will block that from coming to fruition.

There’s an incredible amount of potential engineering talent out there, and for the first time in human history, we have the technology to turn it into gold. Given this, why is so much of it being squandered?

The end of management

I come with good news. If I’m correct about the future of the world economy, the Era of Management is beginning to close, and will wind down over the next few decades. I’ve spent a lot of time thinking about these issues, and I’ve come to a few conclusions:

  1. The quality gap between the products of managed work and unmanaged work has reversed, with unmanaged work being superior by an increasing– at this point, impossible to ignore– amount. For one notable example, open-source software is now superior to gold-plated commercial tools. Creativity and motivation matter more than uniformity and control. This was not always the case, but it has become true and this trend is accelerating.
  2. This change is intrinsic and permanent. It is unnatural for people to manage or be managed, and the end of the managerial era is a return to a more natural motivational framework.
  3. Approaches to business that once seemed radical, such as Valve’s open allocation policy, will soon enough be established as the only reasonable option. Starting with top technical companies, and with the trend later moving into a wide variety of industries, firms will discard traditional management in favor of intrinsic motivation as a means of getting the best quality of work from their people.

What’s going on? I believe that there’s a simple explanation for all of this.

“We will kill them with math”

Consider payoff curves for two model tasks, each as a function of the performance of the person completing it.

Performance | A Payoff | B Payoff |

5 (Superb)  |      150 |      500 |
4 (Great)   |      148 |      300 |
3 (Good)    |      145 |      120 |
2 (Fair)    |      135 |       40 |
1 (Poor)    |      100 |       10 |
0 (Awful)   |       50 |        0 |

What might this model? Task A represents easy work for which an average (“fair”) player can achieve 90 percent of the highest potential output: 135 points out of a possible 150. An employee achieving only 50 percent of that maximum is clearly failing, and will probably be replaced, and there won’t be much variation between the people who make the cut. Task B represents difficult work for which there’s much more upside, but for which the probability of success is low. Average performers contribute very little, while the difference between “good” and “superb” is large. Task B’s curve might be more applicable to high-yield R&D work, in which a person would be considered highly successful if she had success in even 30 percent of the projects she set out to do, but it increasingly applies to disciplines like computer programming, where insight, taste, and vision are worth far more than commodity code. What matters, mathematically, is that Task A’s input-output relationship flattens as performance improves, while Task B’s accelerates. Task A’s curve is concave and Task B’s is convex. For Task A, the difference in return between an excellent and an average performer is minimal, but for Task B, it’s immense.
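The two shapes can be read directly off the table’s marginal returns. A minimal Python sketch (the payoff numbers are the ones from the table above) computes the payoff gained by each one-step improvement in performance: Task A’s gains shrink as performance rises (concave), while Task B’s grow (convex).

```python
# Payoffs by performance level (0 = Awful .. 5 = Superb), from the table above.
task_a = [50, 100, 135, 145, 148, 150]   # easy work: returns flatten out
task_b = [0, 10, 40, 120, 300, 500]      # hard work: returns accelerate

def marginal_gains(payoffs):
    """Payoff gained by each one-step improvement in performance."""
    return [b - a for a, b in zip(payoffs, payoffs[1:])]

print(marginal_gains(task_a))  # [50, 35, 10, 3, 2] -- shrinking: concave
print(marginal_gains(task_b))  # [10, 30, 80, 180, 200] -- growing: convex
```

For Task A, going from “good” to “superb” is worth 5 points; for Task B, the same improvement is worth 380.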

Does excellence matter? At most jobs, the answer has traditionally been “no”. At least, it has mattered far less than uniformity, reliability, and cost reduction. The concave behavior of Task A is more appropriate to most jobs than a convex one, and that’s largely by design. The problem with creative excellence is that it’s intermittent. Creativity can’t be managed into existence, while reliable mediocrity can be. As much as we might want managers to “nurture creativity”, the fact is that they work for companies, not subordinate employees, and their job is largely to limit risk. If we expect managers to do anything different, we’re being unreasonable. For Task A, performance-middling behaviors like micromanagement are highly appropriate, because bringing the slackers into line provides much more benefit than is lost by irritating high performers, and most industrial work that humans have performed, over our history, has been more like A than B. Getting the work done has mattered more than doing it well.

One of the interesting differences between concave and convex work is the relationship between expectancy (average performance) and variance. For traditional concave work, there’s a lot of variation at the low-performing end of the curve, but very little among high performers. To consider variance uniformly bad, therefore, will not be detrimental, the upside of variation being so minimal. Managerial activities that reduce variance are generally beneficial under such a regime. Even if high performers are constrained, this is offset by the improved productivity of the slackers. For convex work, the opposite is true. In a convex world, variation and expectancy are positively correlated. It turns out to be much easier, for a manager, to control variance than it is to improve expectancy. For this reason, almost everything in the discipline of “management” that has formed over the past hundred years has been focused on risk reduction. In a concave world, that worked. Reducing variance, while it might regress individual performances into mediocrity, would nonetheless bring the aggregate team performance up to a level where no one could reliably do better with comparable inputs. For most of industrial humanity’s history, that was enough.
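The expectancy–variance relationship is just Jensen’s inequality in action, and it can be checked numerically: hold mean performance fixed and vary only the spread. A small sketch, using the payoff table above and two illustrative (assumed) performance distributions with the same mean:

```python
# Payoffs by performance level (0 = Awful .. 5 = Superb), from the table above.
task_a = [50, 100, 135, 145, 148, 150]
task_b = [0, 10, 40, 120, 300, 500]

# Two performance distributions with the same mean (2.5) but different variance.
# These distributions are illustrative assumptions, not data.
low_var  = {2: 0.5, 3: 0.5}   # tightly managed team: everyone near the middle
high_var = {0: 0.5, 5: 0.5}   # risk-friendly team: some duds, some stars

def expected_payoff(payoffs, dist):
    """Expected payoff under a probability distribution over performance levels."""
    return sum(p * payoffs[level] for level, p in dist.items())

print(expected_payoff(task_a, low_var), expected_payoff(task_a, high_var))
# 140.0 100.0 -> on concave work, squeezing out variance wins
print(expected_payoff(task_b, low_var), expected_payoff(task_b, high_var))
# 80.0 250.0 -> on convex work, squeezing out variance loses badly
```

Same average performance in every case; only the payoff curve decides whether variance reduction helps or hurts.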

Variance reduction falls flat in the convex world. Managerial pressures that bring individual performance to the middle don’t guarantee that a company has an “average” number of high-performing people, but make it likely that the firm has zero such people, and the result of such mediocrity is an end to innovation. In the short term, this damage is invisible, but in the long term, it renders the company unable to compete. Its prominence and market share will be snapped up in small pieces by smaller, more agile, companies until nothing is left for it but dominance over low-margin “commodity” work. Contrary to the typical depiction of large corporate behemoths being sunk wholesale by a startup “<X> killer”, what actually tends to happen is a gradual erosion of that company’s dominance as new entrants compete against it for something more important, in the long run, than market share: talent. Talent is naturally attracted to convex, risk-friendly work environments.

For a digression into applied mathematics– specifically, optimization– I would like to point out that since maximizing a concave function (such as bulk productivity) is equivalent to minimizing a convex one, we can think of management in the concave world as somewhat akin to a convex optimization problem. This is more of a metaphor than a true isomorphism, with one being abstract mathematics and the other rooted in human psychology, but I think the metaphor’s quite useful. I’ll gloss over a lot of detail and just say this: convex optimizations (again, akin to management of concave work) are easier. A convex minimization problem is like finding the bottom of a bowl (follow gravity, or the gradient). However, if the problem is non-convex, the surface might be more convoluted, with local valleys, and one might end up in a suboptimal place (local minimum) from which no incremental improvement is possible. The first category of problem can be solved using an algorithm called gradient descent: start somewhere, and iterate by stepping in the direction that, locally, appears best. The second category of problem can’t be solved by simple gradient descent. One can fall into a local optimum from which some sort of non-local insight (I’ll return to this later) is required if one wants to improve.
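The bowl-versus-valleys distinction is easy to demonstrate in a few lines. The functions below are illustrative assumptions, not anything from the text: a convex bowl, where gradient descent reaches the global minimum from any starting point, and a double-well function, where descent from the wrong side gets trapped in the shallower valley and no local step can escape it.

```python
def descend(grad, x, lr=0.01, steps=5000):
    """Plain gradient descent: repeatedly step downhill along the gradient."""
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Convex "bowl": f(x) = x**2. Descent finds the global minimum at x = 0.
convex_min = descend(lambda x: 2 * x, x=5.0)

# Non-convex double well: g(x) = (x**2 - 1)**2 + 0.3*x.
# Its gradient is 4*x*(x**2 - 1) + 0.3. The global minimum sits in the
# left well (near x = -1.04), but descent started at x = 2 settles into
# the shallower right well (near x = 0.96) and stays there.
g = lambda x: (x * x - 1) ** 2 + 0.3 * x
g_grad = lambda x: 4 * x * (x * x - 1) + 0.3

trapped = descend(g_grad, x=2.0)      # local minimum, g ~ 0.29
global_opt = descend(g_grad, x=-2.0)  # global minimum, g ~ -0.31

print(round(convex_min, 3), round(trapped, 2), round(global_opt, 2))
```

Both runs on the double well are locally optimal; only a non-local move (a different starting point, here) reaches the better valley, which is the management analogue the next paragraphs develop.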

Concave and convex work are, in kind, also sociologically different. When the work is concave, the optimization problem is (loosely speaking) convex, and the one stable equilibrium (or local optimum) is, roughly speaking, “fairness”. On average, you’ll get more if you focus your efforts on improving low performers (who will improve more quickly) than by making the best even better. A policy that often works is to standardize performance: figure out how many widgets people can produce, and develop a strategy for bringing as many people as possible to that level (and firing the few who can’t make it). Slackers are intimidated into acceptable mediocrity, incorrigible incompetents are fired, and the bulk of workers get exactly the amount of support they need to reliably hit their widget quota. It’s a “one-size-fits-all” approach that, while imperfect, has worked well for a wide variety of industrial work.

Management of convex work is, as it were, a distinctly non-convex optimization problem. It’s sociologically much more complicated, because while the concave world has a “fairness” equilibrium, convex work has multiple equilibria that are usually “unfair”. You end up with winners and losers, and the winners need to be paired with the best projects, roles and mentors, although one might argue that the winners “don’t need them” from a fairness perspective. For convex work, you don’t manage to the middle. The stars who get more support and better opportunities will improve faster, and the schlubs’ mediocrity (whether a result of inability or poor political position) will persist. The best strategy, for a managed company, would be to figure out who has “star” potential and invest heavily in them from the start, but the measurements involved (especially because people have such strong incentives to game them) are effectively impossible for most people to make, both for intrinsic and political reasons.

For convex work, excellence and creativity matter, and they can’t be forced into existence by giving orders. Additionally, the value produced in convex work is almost impossible to measure on a piece-rate basis. Achievements in concave work tend to be separable: one can determine exactly how much was accomplished in each hour, day, and week, so it’s easy to see when people are slacking off. Work that is separable by time is usually also separable by person: visible low performers can be removed, because the group’s performance is strictly additive of individual productivity. For convex work, this is nearly impossible. A person can seem nonproductive while generating ideas that lead to the next breakthrough– the archetypical “overnight success” that takes years– and a colleague who might not be publishing notable papers may still contribute to the group in an important, but invisible, way.

If your tools are traditional management tactics, then convex work is intractable, and management is often counterproductive. I think the best metaphor that I can come up with for managers and executives is “trading boredom”. There are many traders out there who could turn a profit if they stuck to what they knew well, but get bored with “grinding” and start to depart from their domains of competence, adding noise and room for mistakes, and burning up their winnings in the process. Poker players have the same problem: the game gets so boring (at 2000-3000 hours per year) that they start taking unwise risks. The 40-hour work week is so ingrained in modern people that they often face a powerful guilt of feeling useless when there is no work for them to do (even if they achieve enough within 10 of those hours to “earn their keep”), and this guilt often leads to counterproductive effort. This, I believe, explains 90 percent of managerial activity: messing with something that already works well, because watching the gauges gets boring. Whenever an executive comes up with a hare-brained HR policy that the company doesn’t need, trading boredom, and the need to still feel useful when there is no appropriate work to do, is the cause.

At concave work, this managerial “trading boredom” is a hassle that veteran workers, who have been doing the job for decades, learn to ignore. They already know how to do their jobs better than their bosses do, so they show enough compliance to keep management off their backs, but change little about what they’re actually doing unless there’s a legitimate reason for the change. They keep on working, and the function of the team or factory remains intact. For convex work, on the other hand, managerial meddling is utterly destabilizing. The pointless meetings and disruptions inflicted by overmanagement take an enormous toll. In a convex world where small differences in performance lead to large discrepancies in returns, spending 2 hours each week in pointless meetings isn’t going to reduce output by a mere 5 percent, as one might expect from a linear model (2 hours lost out of 40). It’s probably closer to 25 percent.
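The 25-percent claim can be made concrete with a toy model, entirely my own and purely illustrative: suppose weekly output scales convexly with uninterrupted hours, as `output ~ hours ** k`. The exponent `k` is a free parameter I chose; `k ≈ 5.6` happens to reproduce the 25 percent figure.

```python
# Toy model (my assumption, not from the essay): weekly output scales
# convexly with uninterrupted hours, output ~ hours ** k.
def output(hours, k=5.6):
    return hours ** k

full = output(40.0)
meetings = output(38.0)             # 2 hours/week lost to meetings
linear_loss = 2.0 / 40.0            # 5% under a linear model
convex_loss = 1.0 - meetings / full # ~25% under the convex model
print(f"linear: {linear_loss:.0%}, convex: {convex_loss:.0%}")
```

The exact exponent is unknowable, but the qualitative point holds for any convex curve: under convexity, small time losses are amplified into disproportionate output losses.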

The corporate hierarchy: an analytical perspective.

The optimization metaphor above, I believe, explains certain functional reasons for the typical three-tiered corporate hierarchy, with executives, managers, and workers. The workers are just “inputs”– machines made of meat, with varying degrees of reliability and quality, and for which there exist well-studied psychological strategies for reducing variance in performance in order to impose as much uniformity as possible. A manager’s job is to focus on a small region over which the optimization problem is convex (which implies that the work is concave) and perform the above-mentioned gradient descent, or to iterate step-wise toward a local optimum. The strategy is given to the manager from above, and his job is to drive execution error as close to zero as possible. As variance will, all else being equal, contribute to execution error, variance must be limited as well. The job of an executive is to have the non-local insight and knowledge required to find a global optimum rather than being stuck at a local one. Executives ask non-local “vision” questions like, “Should we make toothpaste or video games?” Managers figure out what it will take to get a group of people to produce 2 percent more toothpaste.
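The division of labor above can be sketched in a few lines. This is my own toy example, not from the essay: gradient ascent (the “managerial” step-wise iteration) climbs reliably to the nearest peak, and only the choice of starting point– the “executive” decision– determines which peak gets climbed.

```python
# Toy "profit landscape" (my example) with two peaks; the right one is higher.
def g(x):
    return -(x**2 - 1)**2 + 0.5 * x

def grad(x):
    return -4 * x * (x**2 - 1) + 0.5

def ascend(x, lr=0.05, steps=500):
    # The "manager": iterate locally until improvement stops.
    for _ in range(steps):
        x += lr * grad(x)
    return x

left = ascend(-1.5)   # the "executive" chose the wrong basin
right = ascend(0.5)   # the "executive" chose the right basin
print(round(g(left), 2), round(g(right), 2))  # lower peak vs. higher peak
```

Both runs execute the same local procedure flawlessly; the run that started in the wrong basin still ends at a strictly worse optimum. No amount of managerial execution quality compensates for a bad non-local choice.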

This hierarchy is becoming obsolete. Machines are now outperforming us at the mechanical work that defined the bottom of the traditional, three-tier hierarchy. They are far more reliable and uniform than we could ever hope to be, and they excel at menial, concave work. We can’t hope to compete with them on this; we’ll need to let them have this one. So the bottom of the three tiers is being replaced outright. In addition, specialization has created a world where there is no place for mediocrity, and, therefore, in which the individual “worker” is now responsible for finding a direction of development (a non-local, executive objective) and planning her own path to excellence (a managerial task). The most effective people have figured this out by now and become “self-executive”, which means that they take responsibility for their own advancement, and prioritize it over stated job objectives. As far as they’re concerned, their real job is to become excellent at something, and they will focus on their career rather than their immediate job responsibilities, which they perform only because it helps their career to do so. As far as self-executive people are concerned, their employers are just along for the ride– funding them, and capturing some of the byproducts they generate along the way, while they work mostly on their real job: becoming really good at something, and advancing their career.

Self-executive employees are a nightmare for traditional managers. They favor their career growth over their in-the-moment responsibilities, and have no respect for the transient managerial authority that might be used to compel them to depart from their interests, yet they tend at the same time to be visibly highly competent, which means that firing them is a political mess. They’re the easiest to fire from an HR “cover your ass” perspective (they won’t sue if you fire them, because they’ll quickly get an external promotion) but the damage to morale in losing them is substantial. In concave work, a team could lose its most productive member with minimal disruption; in convex work, such a loss is crippling. Managers who want to peaceably remove such people have to make it look like something the group wanted, and so they create divisions between the self-executive and colleagues– perhaps by setting unrealistic deadlines and then citing the self-executive person’s extracurricular education as a cause for slippage– but these campaigns are disastrous for group performance in the long run.

From a corporate perspective, a self-executive employee is the opposite of a “team player” and possibly even a sociopath, but I prefer to call the self-executive attitude adaptive. What point is there in being a “team player” when that “team” will be a different set of people in 36 months, and where one can be discarded from the team at any time, often unilaterally by a non-productive player who’s not even a real part of it? None that I see. The “team player” ethic is for chumps who haven’t figured it out yet. Additionally, because the working world is increasingly convex, self-executive people are increasingly good (if chaotic good, to use a role-playing analogy) for society. They sometimes annoy their bosses, but they become extremely competent in the process and, in the long term, they will advance the world far more than anyone can do by following orders. Self-executives tend to “steal an education” from their bosses and companies, but twenty years later, they’re building superior companies.

Self-executive employees want to take risks. They want to tackle hard problems, so they get better at what they do. While managers want to reduce variance, almost obsessively, self-executives want to increase it. Also relevant is the fact that the intrinsic “A”, “B”, and “C” players of managerial fiction don’t exist. Stack-ranking– the annual “low performer” witch hunt that companies engage in to scare their middling crowd– doesn’t actually do much good in personnel selection. (It excels at intimidation, which is performance-middling and thus reduces variance, but the desirability of this effect is rapidly declining.) What does exist, and seems to be intrinsic, is that there are low- and high-variance people. Low-variance workers can kick out an acceptable performance under almost any circumstances– long hours, poor health, boring work, difficult or even aggressive clients and managers. They’re reliable, not especially creative, and tend to do well at the war of attrition known as the “corporate ladder”. They make lousy executives, but are most likely to be selected for those sorts of roles. High-variance people, on the other hand, are much more creative, and tend to be self-executive, but are much less reliable in the context of managed work. Their level of output is very high if measured over a long enough timeframe, but impossible to read at the level of a single day, or even a quarter. This distinction of variance, much more than the A- and B-player junk science, seems to be intrinsic or, at the least, very slow to change for specific individuals. Unfortunately, traditional managers and high-variance individuals are natural enemies. Low-variance people tend to be selected for management positions, are easiest to manage, and (most importantly) are less likely to make their bosses insecure.
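Why a convex world should prefer high-variance people is just Jensen’s inequality, and it can be shown deterministically. The numbers below are my own illustration: two workers with identical average output, one steady and one streaky.

```python
# Deterministic sketch of Jensen's inequality (my own numbers): with equal
# mean output, the high-variance worker wins whenever the payoff is convex.
def payoff(x):
    return x ** 2   # convex: a hit is worth far more than a miss costs

low  = [1.0, 1.0]   # steady output every period
high = [0.0, 2.0]   # same mean output, much higher variance

def mean(xs):
    return sum(xs) / len(xs)

print(mean(low), mean(high))   # identical mean output: 1.0 1.0
print(mean([payoff(x) for x in low]),
      mean([payoff(x) for x in high]))   # payoffs diverge: 1.0 2.0
```

Under a linear (concave-boundary) payoff, the two workers are interchangeable and the low-variance one is preferable for predictability; under a convex payoff, the streaky worker delivers twice the expected value, which is the essay’s point in miniature.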

What is changing

Why is work moving from concavity to convexity in output? There are a few answers, all tightly connected. The first of these is that concave work tends to have a defined maximum value: there’s one right way to perform the task. If we can define a target, we can structure the task as a computation, and it can be automated. Machines win, no contest, at reliability as well as cost-reduction. They’re ideal workers. They never complain, work 168-hour weeks, and don’t have hidden career objectives that they place at a higher priority than what they’re asked to do. As we get better at programming machines to perform the concave work, it leaves us with the convex stuff.

Second, the technological economy enhances potential individual productivity. The best programmers deliver tens of millions of dollars per year in business value, while the worst should probably not be employed at all. The capacity to have multiplier effects across a company, rather than the additive impact of a mere workhorse, is no longer the domain of managers only. The best software architects and engineers are also multipliers, because their contributions become infrastructural in nature. I don’t think that this potential for multiplicative impact is limited to software, either. As software becomes more capable of eliminating menial tasks from people’s days, there’s more time available for the high-yield, high-risk endeavors at which machines do poorly. What this enables is the potential for rapid growth.

When studying finance, one often learns that high rates of growth (8% per year) in a portfolio are “unsustainable”, because anything that grows so fast will eventually “outgrow” the entire world economy, which grows at only 3 to 5 percent, as if that latter rate were an immutable maximum. This might also apply to the 10-15% per year salary growth that young people expect in their careers– also unsustainable. Wall Street (in terms of compensation) has been called “a bubble” for this reason: even average bankers experience this kind of exponential compensation growth well into middle age, and it seems that this is unreasonable, because even small-scale economies or subsectors “can’t” grow that fast, so how can a person? Can someone actually increase the business value of his knowledge and capability by 15% per year, for 40 years? It seems that there “must be” some limiting factor. I no longer believe this to be necessarily true at our current scale. (There are physical, information-theoretic upper limits to economic prosperity, but we’re so far from those that we can ignore them in the context of our natural lifespans.) Certainly, rapid growth becomes harder to maintain at scale; that is empirically true. But who says that world economic growth can’t some day reach 10% (or 50%) per year and continue at such a rate until we reach a physical maximum (far past a “post-scarcity” level at which we stop caring)? Before 1750, growth at a rate higher than 0.5% per year would have been considered impossible: 0.1 to 0.2 percent, in agrarian times, was typical. If we view our entire history in stages– evolutionary, early human, agricultural, industrial– we observe that growth rates improve at each stage. It’s faster-than-exponential. I don’t believe in a single point of nearly-infinite growth– a “Singularity”– but I think that human development is more likely than not to accelerate for the foreseeable future. 
In the technological era, rapid improvements are increasingly possible. Whether this will result in rapid (30% per year) macroscopic economic growth I am not qualified to say, and I don’t think anyone has the long-term answer on that one, but we are certainly in a time when local improvements on that order are commonplace. Many startups consider user growth of 10% per month to be slow.
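The “unsustainability” intuition comes straight from compounding; the rates below are the ones cited in the text, and the arithmetic is mine:

```python
# Compounding check on the growth rates mentioned above. At 15%/year for
# 40 years, value multiplies roughly 268x -- hence "unsustainable" by
# conventional wisdom. A startup's 10%/month compounds to ~3.1x per year.
for rate in (0.03, 0.15, 0.50):
    factor = (1 + rate) ** 40
    print(f"{rate:.0%}/yr for 40 years -> {factor:,.0f}x")

print(f"10%/month -> {1.10 ** 12:.1f}x per year")
```

Nothing in this arithmetic says where the ceiling is; it only shows how quickly exponential rates part ways, which is why a workforce growing at technological rates and institutions built for industrial rates cannot coexist for long.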

Rapid growth and process improvements require investment into convex work, which often lacks a short-term payoff but often provides an immense upside. It’s this kind of thinking that companies need if they wish to grow at technological rather than industrial rates, and traditional variance-reduction management is at odds with that. That said, traditional management is quite a strong force in corporate America. Most companies cannot even imagine how they would run their affairs without it. For sure, the managerial and executive elites won’t go gently into that good night. The private-sector career politicians who’ve spent decades mastering this inefficient, archaic, and often stupid system are not going to give up the position they’ve worked so hard to acquire. The macroscopic economic, social, and cultural benefits to a less-managed work world are extreme, but also irrelevant to the current masters, who have a personal desire to keep their dominance over other humans. The people in charge of the current system would rather reign in hell than serve in heaven. So what will give?

There won’t be an “extinction event” for managerial dinosaurs and the numerous corporations that have adopted their mentality, so much as an inability to compete. Consider the superior quality of open-source software over commercial alternatives across an expanding range of domains. That’s indicative. Open-source projects grow organically because people value them and willingly contribute, with no managers (in the industrial-era sense) needed. Commercial products die unless their owners continue to throw money at them (and sometimes even then). Open-source contributors are intrinsically motivated to be invested in the quality of their software. They’re often users of the product, and they can also improve their careers by gaining visibility in the wider software world. They have real, technological-era, self-executive motivations for wanting to do good work. By contrast, most commercial software products are completed at a standard of “just good enough” to appease a boss and remain in good standing. It’s software written for managers, but from a product-quality standpoint, bosses themselves rarely matter. Users do. The quality gap between non-managed work and managed work is becoming so severe that the value of managed work is (albeit slowly) declining, out-competed by superior alternatives. This is bringing us to a state where “radical” cultures such as Valve’s purportedly manager-free open allocation policy become the only acceptable option. I would be shocked, in 30 years, if open allocation weren’t the norm at leading technology companies.

The truth is that managed work and variance reduction, which made the Industrial Revolution possible, are capable of producing growth at industrial (1 to 5 percent per year) rates, but not at technological rates (and a venture-funded startup must grow at technological rates or it will die). Compared to the baseline agrarian growth rate (0.05 to 0.3% per year) of antiquity, the industrial rate was rapid. Traditional management still works just fine, if your job is to turn $1.00 into $1.03 in twelve months. If you’re already rich and looking to generate some income from your massive supply of capital, this might continue to work for you indefinitely. If you’re poor, or looking to compete in the most exciting industries, and you need to unlock the energies that turn $1.00 into $2.00, you need something different.

Does this mean that there will no longer be a role for managers? It depends on how “manager” is defined. Leadership, communication, and mentorship skills will always be in high demand. In fact, the increasing complexity of technology will put education at a premium, and the few people who can lead groups of self-executive workers are becoming immensely valuable. Although the most talented workers will evolve into self-executive free agents, they will need some way of learning what efforts are worth their time, and they’ll be learning this from other people. Some aspects of “management” will always be important, but to the extent that management lives on, it will have to be about genuine leadership rather than authority.

Fossil fools

What has a one-way ticket to the tarpit (and almost no one will miss it) is the contemporary institution of corporate management: the Bill Lumbergh, who uses authority by executive endowment to compensate for his complete lack of leadership skills.

Leaders are chosen by a group that decides to be led, whereas corporate managers are puppet governors selected by external forces (or “from above”) as a means of exerting control. They don’t have to have any leadership skill, because the people being led have no choice. They’re hand-picked by their bosses: higher-level managers and executives. Leadership often requires handling trade-offs of people’s interests in a fair way, but that’s impossible for a corporate manager to do. Executives will never select a manager who would support the workers’ interests at an equal level to their own. Managers play a variety of roles, but their main one is to be a buffer between two groups of people (executives and workers) who would otherwise be at opposition because, even if for no other reason, they get dramatically different proportions of the reward. Managers legitimize executives by creating the impression that the separation between the company’s real members and its workers is a continuous spectrum rather than a discrete chasm, thereby supporting the company’s efforts to mislead people about their true chances of upward mobility. At the same time, their own increasingly divergent interests make them a weak link that is increasingly problematic.

The corporate structure is effectively feudal. Just as medieval kings never cared how the dukes and earls treated their peasants, as long as tributes were paid, managers generally have unilateral power over the managed (as long as they don’t get the company sued). Managers are trusted to execute the corporate interest. It seems like this should create a weak link, giving managers the power to divert workers toward the manager’s own career goals rather than the corporate objective. Perhaps surprisingly, though, in a concave world this doesn’t cause a major problem for the company. Managerial and corporate interests, at concave work, are aligned for reasons of sociological coincidence.

Managers have high social status and status is positively autocorrelated in time (that is, high status tends to reinforce itself) so a manager will “drift” into higher position as the group evolves, so long as nothing embarrasses him. Workers have to prove themselves in order to stand above the masses, but managers can coast and acquire seniority. In other words, a manager’s career is optimized not when he maximizes group productivity (which may be impossible to measure) but when he minimizes risk of embarrassment. A subordinate who breaks rules, no matter how trivial, embarrasses the manager– even if he’s highly productive. It’s better, from a manager’s career perspective, to have ten thoroughly average reports than nine good ones and one visible problem employee. Great employees make themselves look good, while bad ones are taken to reflect on the manager who failed to keep them in line. The consequence of this is that managers are encouraged toward risk-reduction. In a concave world, this is exactly what the company needs. So what happens in a convex world?
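Before answering, the incentive gap in the ten-reports example is worth spelling out numerically. The figures are mine and purely illustrative:

```python
# Toy numbers (my own): why a manager prefers ten average reports over
# nine good ones plus one visible problem employee.
average_team = [1.0] * 10
mixed_team   = [1.5] * 9 + [0.2]   # more total output, one "problem"

# The company cares about total output; the embarrassment-minimizing
# manager is judged by the worst visible case.
print(sum(average_team), sum(mixed_team))   # 10.0 vs 13.7
print(min(average_team), min(mixed_team))   # 1.0 vs 0.2
```

The mixed team produces 37 percent more, yet the manager optimizing against embarrassment prefers the uniform one. In a concave world the gap between these two objectives is small; in a convex world, where the best people account for most of the value, it becomes enormous.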

The convex world is different. If a company “gets” convexity (which is rare) it will begin to make allowances for individual contributors to allocate time to high-variance, high-reward activities which are often self-directed. This gives workers the opportunity to achieve visible high performance, and it’s good for the (expected) corporate profit, but managers lose out, because the worker who hits a home run will get most of the credit, not the manager. They find their subordinates increasingly interested in “side projects” and extra-hierarchical pursuits that “distract” them from their assigned work. There’s a conflict of interest that emerges between what the worker perceives as managerial mediocrity and the quest for the larger-scale excellence that can exist in convex pursuits. Because self-executive workers think non-locally (extra-hierarchical collaboration, self-directed career advancement) the appearance is that they’re jumping rank.

In the concave world, managers were the tactical muscle of the company. They drove the workers toward the local optimum in the neighborhood that the globally-oriented executives chose. In the convex world, managers are pesky middlemen. If they operate according to self-interest (and it’s unreasonable to expect otherwise of anyone) then their best bet is to use their authority to coerce their reports to prioritize the manager’s own career goals, which have now diverged from the larger objective. In other words, they become extortionists. That’s not a business that will live for much longer.

What seems clear is that middle management will decline in power and importance over the next 50 years. Increasingly convex work landscapes will decrease the use for it, and people will have less desire to fill such positions (which, in concave work, are coveted). What’s less clear is what will replace it. If this corporate middle class disappears within the current framework, what’s left is a two-tier system with workers and executives. That’s a problem. Two-class societies are extremely unstable, so I don’t think that arrangement will thrive. What’s more likely, in my (perhaps overly optimistic) opinion, is that the functions of workers, managers, and executives will all blend together as individuals become increasingly self-executive. In many ways, this is a desirable outcome. However, it dramatically changes the styles of business that people will be able to form, making companies fairer and more creative, but also more chaotic and probably smaller. If the result of this is a macroscopic increase in creativity, progress, and individual freedom over one’s work, then a truly technological era might begin.