How the Other Half Works: an Adventure in the Low Status of Software Engineers

Bill (not his real name; I’ve fuzzed some details to protect his identity) is a software engineer on the East Coast who, at the time of this story (between 2011 and 2014), had recently turned 30 and wanted to see if he could enter a higher weight class on the job market. To assess this, he applied for two different levels of position at roughly equivalent companies: same size, same level of prestige, same U.S. city on the West Coast. To one company, he applied as a Senior Software Engineer. To the other, he applied for VP of Data Science.

Bill had been a Wall Street quant and had “Vice President” in his title, though VP is a mid-level, and often non-managerial, position in an investment bank. His current title was Staff Software Engineer, which was roughly Director-equivalent. He’d taught a couple of courses and mentored a few interns, but he’d never been an official manager. So he came to me for advice on how to appear more “managerial” for the VP-level application.

The Experiment

His first question was what it would take to get “managerial experience” in his next job. I was at a loss when it came to direct experience, so my first thought was, “Fake it till you make it”. Looking at his résumé, the “experiment” formed in my mind. Could I make Bill, a strong but not exceptional data-scientist-slash-software-engineer, over into a manager? The first bit of good news was that we didn’t have to change much. Bill’s Vice President title (from the bank) could be kept as-is, and changing Staff Software Engineer to Director didn’t feel dishonest, because it was a lateral tweak. If anything, it was a demotion: in dual-track technology companies, engineering ladders are much harder to climb than management ladders.

Everything in Bill’s “management résumé” was close enough to true that few would consider it unethical. We upgraded his social status and management-culture credibility– as one must, and is expected to, do in that world– but not his technical credentials. We turned technical leadership into “real”, power-to-fire leadership, but that was the only material change. We spent hours making sure we weren’t really lying, as neither Bill nor I was keen on damaging Bill’s career to carry out this experiment, and because the integrity of the experiment required it.

In fact, we kept the management résumé quite technical. Bill’s experience was mostly as an implementor, and we wanted to stay truthful about that. I’ll get to the results of the experiment later on, but there were two positive side effects of his self-rebranding as a “manager who implemented”. The first is that, because a manager isn’t expected to get his hands dirty, he got a lot of praise for doing things that would simply have been his job as a managed person. Second, and related to the first but far more powerful, is that he no longer had to excuse himself for menial projects or periods of low technical activity. Instead of “I was put on a crappy project”, which projects low status, his story became “No one else could do it, so I had to get my hands dirty”: a high-status, managerial excuse for spending 6 months on an otherwise career-killing project. Instead of having to explain why he didn’t manage to get top-quality project allocation, as one would ask of an engineer, he was able to give a truthful account of what he did; and because that gritty work wasn’t expected of him, the same account made him look like a hero rather than a zero.

What was that project? It’s actually relevant to this story. Bill was maintaining an old legacy module that took 40,000 lines of code to perform what is essentially a logistic regression. The reason this custom module existed, rather than modern statistical software being used, was that a variety of requirements had come in from the business over the years; almost none of these custom tweaks were mathematically relevant, but they all had to live in the source code, and the program was on the brink of collapsing under the weight of its own complexity. These projects are career death for engineers, because one doesn’t learn transferable skills by doing them, and because maintenance slogs don’t have a well-defined end or “point of victory”. For Bill’s technical résumé, we had to make this crappy maintenance project seem like real machine learning. (Do we call it a “single-layer neural network”? Do we call the nonsensical requirements “cutting-edge feature engineering”?) For his management résumé, the truth sufficed: “oversaw maintenance of a business-critical legacy module”.
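
To give a sense of how absurd 40,000 lines is: the mathematical core of such a module, minus all the accreted business tweaks, fits in a few lines of modern tooling. The sketch below uses scikit-learn on synthetic data purely as an illustrative stand-in; I know nothing about the stack Bill’s employer actually used.

    # A minimal sketch of what the module essentially computed; the data
    # and the library choice (scikit-learn) are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 10))              # 1,000 rows, 10 features
    true_w = rng.normal(size=10)
    y = (X @ true_w + rng.normal(size=1000) > 0).astype(int)

    model = LogisticRegression().fit(X, y)       # the entire "algorithm"
    print(model.predict_proba(X[:5]))            # predicted class probabilities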

In fact, one could argue that Bill’s management résumé, while less truthful on paper, was more honest and ethical. Yes, we inflated his social status and gave him managerial titles. However, we didn’t have to inflate his technical accomplishments, or pad his “Skills” section with technologies he’d barely touched, to make a case for him. After a certain age, selling yourself as an engineer tends to require (excluding those in top-notch R&D departments or open-allocation shops) that you (a) only work on the fun stuff, rather than the career-killing dreck, and play the political games that that requires, (b) mislead future employers about the quality of your work experience, or (c) spend a large portion of your time on side projects, which usually turns into a combination of (a) and (b).

Was this experiment ethical? I would say that it was. When people ask me if they should fudge their career histories or résumés, I always say this: it’s OK to fix prior social status, because one’s present state (abilities, talents) is fully consistent with the altered past. It’s like formally changing a house’s address from 13 to 11 before selling it to a superstitious buyer: the fact being erased, that it was once numbered “13”, will never matter for any purpose or cause material harm to anyone. On the other hand, lying about skills is ethically wrong (it’s job fraud, because another person is deceived into making decisions that are inconsistent with the actual present state, and that are possibly harmful in that context) and detrimental, in the long term, to the person doing it. While I think it’s usually a bad idea, I don’t really have a moral problem with people fudging dates or improving titles on their résumés, insofar as they’re lying about prior social status (a deception as old as humanity itself) rather than hard currencies like skills and abilities.

Now, let’s talk about how the experiment turned out.

Interview A: as Software Engineer

Bill faced five hour-long technical interviews. Three went well. One was so-so, because it focused on implementation details of the JVM, and Bill’s experience was almost entirely in C++, with a bit of hobbyist OCaml. The last interview sounded pretty hellish. It was with the VP of Data Science, Bill’s prospective boss, who showed up 20 minutes late and presented him with one of those interview questions where there’s “one right answer” that took months, if not years, of in-house trial and error to discover. It was one of those “I’m going to prove that I’m smarter than you” interviews.

In the post-mortem, I told Bill not to sweat that last interview. Companies will often present a candidate with an unsolved or hard-to-solve problem and not expect a full solution within the hour. I was wrong on that count.

I know people at Company A, so I was able to get a sense of how things went down. Bill’s feedback was: 3 positive, 1 neutral, and 1 negative, exactly as might have been expected from his own account. Most damning were the VP’s comments: “good for another role, but not on my team”. Apparently the VP was incensed that he had to spend 39 and a half minutes talking to someone without a PhD; because Bill lacked the advanced degree, the only way that VP would have considered him good enough to join was if he could reverse-engineer the firm’s “secret sauce” in 40 minutes, which I don’t think anyone could.

Let’s recap. Bill passed three of his five interviews with flying colors. One of the interviewers, a few months later, tried to recruit Bill to his own startup. The fourth interview was so-so, because he wasn’t a Java expert, but came out neutral. The fifth he failed, because he didn’t know the in-house Golden Algorithm that took years of work to discover. When I asked that VP of Data Science directly why he didn’t hire Bill (he did not know that I knew Bill, nor about this experiment), the response I got was “We need people who can hit the ground running.” Apparently, there’s only a “talent shortage” when startup people are trying to scam the government into changing immigration policy. The undertone of this is “we don’t invest in people”.

Or, for a point that I’ll come back to, software engineers lack the social status necessary to make others invest in them.

Interview B: as Data Science Manager

A couple weeks later, Bill interviewed at a roughly equivalent company for the VP-level position, reporting directly to the CTO. 

Worth noting is that we did nothing to make Bill more technically impressive than we had for Company A. If anything, we made his technical story more honest: we modestly inflated his social status while telling a “straight shooter” story about his technical experience. We didn’t have to cover up periods of low technical activity; that he was a manager sufficed, on its own, to explain those away.

Bill faced four interviews, and while the questions were behavioral and would be “hard” for many technical people, he found them rather easy to answer with composure. I gave him the Golden Answer, which is to fall back on “There’s always a trade-off between wanting to do the work yourself, and knowing when to delegate.” It presents one as having managerial social status (the ability to delegate) but also a diligent interest in, and respect for, the work. It can be adapted to pretty much any “behavioral” interview question.

As a 6-foot-1 white male of better-than-average looks, Bill looked like an executive, and the work we did appears to have paid off. In each of those interviews, it took only 10 minutes before Bill was effectively the interviewer. By presenting himself as a manager, and looking the part, he got an easier playing field than a lifelong engineer would ever get. Instead of being a programmer auditioning to sling code, he was already “part of the club” (management), simply engaging in a two-way discussion, as an equal, about whether he would join that particular section of the club.

Bill passed. Unlike for a typical engineering position, there were no reference checks. The CEO said, “We know you’re a good guy, and we want to move fast on you”. As opposed to the 7-day exploding offers typically served to engineers, Bill had 2 months in which to make his decision. He got a fourth week of vacation without even having to ask for it, and genuine equity (about 75% of a year’s salary vesting each year).

I sat in when Bill called to ask about relocation and, honestly, this is where I expected the deal to fall apart. Relocation is where so many offers fall to pieces; it’s a true test of whether a company actually sees someone as a key player, or is just trying to plug a hole with a warm body. The CEO began by saying, “Before getting into details, we are a startup…”

This was a company with over 100 employees, so not really a startup, but I’m going to set that aside for now. I was bracing for the “oh, shit” moment, because “we’re a startup” is usually a precursor to very bad news.

“… so we’ll cover the moving costs and two months of temporary housing, and a $10,000 airfare budget to see any family out East, but we can’t do loss-on-sale for the house, and we can’t cover realtor fees.”

Bill was getting an apology because the CEO couldn’t afford a full executive relocation workup. (“We’re just not there yet.”) For a software engineer, “relocation” is usually some shitty $3,000 lump-sum package, because “software engineer”, to executives, means “22-year-old clueless male with few possessions, and with free storage at his parents’ place”. On the other hand, if you’re a manager, you might be seen as a real human being with actual concerns about relocating to another part of the country.

It was really interesting, as I listened in, to see how different things are once you’re “in the club”. The CEO talked to Bill as an equal, not as a paternalistic, bullshitting, “this is good for your career” authority figure– a tone of equality that a software engineer would never get from the CEO of a 100-person tech company.

Analysis

Bill has a superhuman memory and took a lot of notes after each interview, so there was plenty to analyze about this sociological experiment. It taught me a lot. At Company A, Bill was applying for a Senior Engineer position and his perceived “fit” seemed to start at 90. (Only 90, for his lack of PhD and Stanford pedigree.) But everything he didn’t know was points off. No experience with Spring and Struts? Minus 5. Not familiar with the firm’s Golden Algorithm? Not a real “data scientist”; minus 8. No Hadoop experience? Minus 6. Bill was judged on what he didn’t know– on how much work it would take to get him up to speed and have him serving as a reliable corporate subordinate.

Company B was a different experience entirely. Bill started at 70, but everything he knew was a bonus. He could speak intelligently about logistic regression and maximum likelihood methods? Plus 5. He’s actually implemented them? Plus 6. He knows OCaml? Plus 5. Everything he knew counted in his favor. I’d argue that he probably scored some of these “points” for irrelevant “interesting person” details, like his travel.

When a programmer gets to a certain age, she knows a lot of stuff. But there’s a ton of stuff she doesn’t know, as well, because no one can know even a fraction of everything that’s going on in this industry. It’s far better, unless you’re applying for a junior position, to start at 70 and get credit for everything you do know, than to start at 90 (or even 100) and get debited for the things you don’t know.
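
To make the asymmetry concrete, here’s a deliberately toy model of the two interview postures. The starting scores and point values come from my guesses above, not from any real rubric; this is illustration, not science.

    # Toy model of the two scoring postures; all numbers are invented.
    def engineer_score(gaps):
        """Start at 90; every gap (Spring, the Golden Algorithm, Hadoop...)
        is debited."""
        return 90 - sum(gaps)

    def manager_score(bonuses):
        """Start at 70; everything the candidate does know is credited."""
        return 70 + sum(bonuses)

    print(engineer_score([5, 8, 6]))   # Company A's posture: 71
    print(manager_score([5, 6, 5]))    # Company B's posture: 86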

This whole issue is about more than what one knows and doesn’t know about technology. As programmers, we’re used to picking up new skills. It’s something we’re good at (even if penny-shaving businessmen hate the idea of training us). This is all about social status, and why status is so fucking important when one is playing the work game– far more important than being loyal or competent or dedicated. 

Low and high status aren’t about being liked or disliked. Some people are liked but have low status, and some people are disliked but retain high status. In general, it’s more useful and important to have high status at work than to be well-liked. It’s obviously best to have both, but well-liked low-status people get crap projects and never advance. Disliked high-status people, at worst, get severance. As Machiavelli said, “it is far safer to be feared than loved if you cannot be both.” People’s likes and dislikes change with the seasons, but a high-status person is unlikely to have others act against his interests.

Moreover, if you have low social status, people will eventually find reasons to dislike you unless you continually sacrifice yourself in order to be liked, and even that strategy eventually runs out. At high social status, they’ll find reasons to like you. At low status, your flaws are given prime focus, and your assets, while acknowledged, are dismissed as unimportant or countered with “yes, buts” that turn any positive trait into a negative. (“Yes, he’s good in Clojure, but he might be one of those dynamic-typing cowboy coders!” “Yes, he’s good in Haskell, but that means he’s one of those static-typing hard-asses.” “Yes, he’s a good programmer, but he doesn’t seem like a team player.”) When you have low status, your best strategy is to be invisible and unremarkable, because even good distinctions will hurt you. You want to keep your slate ultra-clean and wait for mean-reversion to drift you into middling status, at which point being well-liked can assist you and, over time– and it happens glacially– bring you upper-middle or high status.

When you have high status, it’s the reverse. Instead of fighting to keep your slate blank, it’s actually to your benefit to have things (good things) written about you on it. People will exaggerate your good traits and ignore the bad ones (unless they are egregious or dangerous). You start at 70 and people start looking for ways to give you the other 30 points.

The Passion of the Programmer

I’ve always felt that programmers had an undeserved low social status, and the experiment above supports that claim. Obviously, these are anecdotes rather than data, but I think that we can start to give a technical definition to the low social status of “software engineers”.

Whether programmers are over- or underpaid usually devolves into debates about economics and market conditions; because those variables fluctuate and can’t be measured precisely enough, the “are programmers (under|over)-paid?” debate ends up coming down to subjective feelings rather than anything technical. Using this technical notion of status– whether a person’s flaws or positive traits are given focus– we have the tools to assess the social status of programmers without comparing their salaries and work conditions to what we feel they “deserve”. If you are in a position where people emphasize your flaws and overlook your achievements, you have low social status (even if you make $200,000 per year, which only means the efforts to cut your job will come faster). If the opposite is true, you have high social status.

Using this lens, the case for the low social status of the programmer could not be any clearer. We’ll never agree on a “platonically correct” “fair value” for an engineer’s salary. What we can see is that technologists’ achievements are usually under-reported by the businesses in which they work, while their mistakes are highlighted. I’ve worked in a company where the first thing said to me about a person was the production outage he caused 4 years ago, when he was an intern. (Why was nothing said about the manager who let an intern cause an outage? Because that manager was a high-status person.) A big part of the problem is that programmers are constantly trying to one-up each other (see: feigned surprise) and prove their superior knowledge, drive, and intelligence. From the outside (that is, from the vantage point of the business operators we work for), these pissing contests make all sides look stupid and deficient. By lowering each other’s status so reliably, and when little to nothing is at stake, programmers lower their status as a group.

There was a time, perhaps 20 years gone by now, when the Valley was different. Engineers ran the show. Technologists helped each other. Programmers worked in R&D environments with high levels of autonomy and encouragement. To paraphrase one R&D shop’s internal slogan, bad ideas were good and good ideas were great. Silicon Valley was an underdog, a sideshow, an Ellis Island for misfits, led by “sheepdogs” intent on keeping mainstream MBA culture (which would destroy the creative capacity of that industry, for good) away. That period ended. San Francisco joined the “paper belt” (to use Balaji Srinivasan’s term) cities of Boston, New York, Washington and Los Angeles. Venture capital became Hollywood for Ugly People. The Valley became a victim of its own success. Bay Area landlords made it big. Fail-outs from MBA-culture strongholds like McKinsey and Goldman Sachs found a less competitive arena in which they could boss nerds around with impunity; if you weren’t good enough to make MD at the bank, you went West to become a VC-funded Founder. The one group that didn’t win out in this new Valley order was software engineers. Housing costs rose far faster than their salaries, and they were gradually moved from being partners in innovation to being implementors of well-connected MBA-culture fail-outs’ shitty ideas. That’s where we are now.

So what happened? Was it inevitable that the Valley’s new wealth would attract malefactors, or could this have been prevented? I actually think that it could have been stopped, knowing what we know now. Would it be possible to replicate the Valley’s success in another geographical area (or, perhaps, in a fully distributed technical subculture) without losing our status and autonomy once the money spotted it and came in? I think so, but it’ll take another article to explain both the theoretical reasons why we can hold advantage, and the practical strategies for keeping the game fair, and on our terms. That’s a large topic, and it goes far beyond what I intend to do in this article. 

The loss of status is a sad thing, because technology is our home turf. We understand computers and software and the mathematical underpinnings of those, and our MBA-culture colonizers don’t. We ought to have the advantage and retain high status, but fail at doing so. Why? There are two reasons, and they’re related to each other.

The first is that we lack “sheepdogs”. A sheepdog, in this sense, is a pugnacious and potentially vicious person who protects the good. A sheepdog drives away predators and protects the herd. Sheepdogs don’t start fights, but they end many– on their terms. Programmers don’t like to “get political”, and they dislike it even when their own kind get involved in office politics; the result is that we don’t have many sheepdogs guarding us from the MBA-culture wolves. People who learn the skills necessary to protect the good end up, far too often, on the other side.

The second is that we allow “passion” to be used against us. When we like our work, we let it be known, and we work extremely hard. That has two negative side effects. The first is that when we don’t like our work and put in a half-assed effort like everyone else, it shows. Executives generally have the political aplomb not to show whether they enjoy what they’re doing, except to people they trust with that bit of information. Programmers, on the other hand, make it too obvious how they feel about their work. This means the happy ones don’t get the raises and promotions they deserve, because management sees no need to reward people who are already working so hard, and the unhappy ones stand out to aggressive management as potential “performance issues”. The second is that “passion” becomes mandatory. Not to be passionate is almost a crime, especially in startups. We’re not allowed to treat it as “just a job” and put forward above-normal effort only when given above-normal consideration. We’re not allowed to “get political” and protect ourselves, or protect others, because we’re supposed to be so damn “passionate” that we’d do this work for free.

What most of us don’t realize is that this culture of mandatory “passion” lowers our social status, because it encourages us to work unreasonably hard, irrespective of conditions. The fastest way to lose social status is to show acceptance of low social status. For example, programmers often make the mistake of overworking when understaffed, and this is a terrible idea. (“Those execs don’t believe in us, so let’s show them up by… working overtime on something they own!”) Doing so validates the low status of the group that allowed it to be understaffed.

Executives, a more savvy sort, lose passion when denied the advancement or consideration they feel they deserve. They’re not obnoxious about this attitude, but they don’t try to cover it up, either. They’re not going to give a real effort to a project or company that acts against their own interests or lowers their own social status. They won’t negotiate against themselves by being “passionate”, either. They want to be seen as supremely competent, but not sacrificial. That’s the difference between them and us. Executives are out for themselves and relatively open about the fact. Programmers, on the other hand, heroize some of the stupidest forms of self-sacrifice: the person who delivers a project (sacrificing weekends) anyway, after it was cancelled; or the person who moves to San Francisco without relocation because he “really believes in” a product that he can’t even describe coherently, and that he’ll end up owning 0.05% of. 

What executives understand, almost intuitively, is reciprocity. They give favors to earn favors, but avoid self-sacrifice. They won’t fall into “love of the craft” delusions when “the craft” doesn’t love them back. They’re not afraid to “get political”, because they realize that work is mostly politics. The only people who can afford to be apolitical or “above the fray”, after all, are the solid political winners. But until one is in that camp, one simply cannot afford to take that delusion on. 

If programmers want to be taken seriously– and we should be taken seriously, and we certainly should want this– we’re going to have to take stock of our compromised position and fix it, even if that means “getting political”. We’re going to have to stop glorifying pointless self-sacrifice for what is ultimately someone else’s business transaction, and start asserting ourselves and our values.

Never relocate unpaid

Someone asked me, a few months ago, if he should take a Silicon Valley (actually, San Francisco) job offer where the relocation was a “generous” $4,000. I told him to negotiate for more and, if the company wouldn’t budge, to decline. For an adult, that’s not a relocation package. That’s half-assery. 

I won’t get too particular on the details, but this person lived on the East Coast and had children. For an adult of any kind facing a cross-country move, $4,000 is not a generous relocation package. It’s downright pathetic. Movers don’t work for equity. A full-service move for an adult, with a family, costs at least twice that number. When you move for a job, your employer is still going to expect you to start as soon as possible and be 100% on-the-ball in your first weeks. You can’t take on a self-move and start a new job properly. If you do have that kind of energy, it means you’re the alpha-trader type who should be on Wall Street, not taking employee positions at startups. For the 99.99% of us who are mere mortals, taking on a self-move means you’ll be slacking at your job in the first weeks and, while slacking might be OK when you’re 3 years in and just waiting for something better to come along, it’s not a way to start a job– especially not a job you moved across the country to take.

In addition to the small size of that package, I think lump-sum relocations are a bad deal in general. They fail both sides. They’re bad for the employee because, in addition to losing a chunk to taxes, the lump-sum arrangement leaves it to the employee to make arrangements and haggle. The employer, meanwhile, risks getting the amount severely wrong while losing the major benefit of a relocation, which is a 100%-focused employee with high morale, because the employee still has to haggle with movers and manage minutiae. A good employer, knowing how toxic moving stress can be, will solve every niggling, stupid little problem that comes up in the process of relocation. They’ll have a trusted moving company, rather than expecting the employee to make the calls and compare costs. (If the moving company’s client is the company, rather than you, they’ll do a good job because they want repeat business– a concern that isn’t there with “man with a van” outfits.) They’ll manage the flight and, with a high-quality placement agency involved, have an interview per day lined up for the spouse until he or she gets a good job. If you’re moving internationally, you’ll get an extra week or two of vacation each year (and you won’t have to ask for it), and they’ll have tax equalization already worked out. Why? Because genuinely good employers (who are rare these days) want their best people on, 100 percent, rather than phoning it in at their jobs because they’re fighting little piss fires in their personal lives.

I know that relocation is derided as an “old-style perk” in the Valley. If you broach the subject, you risk seeming entitled, high-maintenance, and worst of all, old. Most companies in the Valley don’t qualify as “good employers”. Most of these Valley “startups” are just long-shot gambles running on a shoestring budget, and their real purpose isn’t to build businesses but to try out middling product managers (called “CEOs” of semi-existent, non-autonomous companies) for promotion into the investor ranks (as EIRs, associates, or partners at VC firms). The reason they can’t pay or relocate properly is that their investors are handing these half-companies pennies and saying “Humor me”; even their own backers don’t trust them enough to take a real risk and provide the resources that would enable real salaries and benefits. This raises the question, for the employee: if the investors aren’t willing to bet on that business, then why should you?

Anyway, this person didn’t heed my advice. He took the job, left a few months later and, judging from the titles and prestige of the following company, appears to have taken a demotion in his next move. I can’t speak to what happened, or to whether my perception (based on LinkedIn) of a demotion is correct. I generally cut ties with people who make bad decisions, so I haven’t heard his side.

California, here we come

So, there’s an epidemic of Shitty Relo (or even nonexistent relo) in Silicon Valley, and it’s California arrogance at its finest. I need to address it, for the good of society. The mythology justifying it is that California is such a wonderful place to live, with its traffic and absurd real estate prices and brogrammers, that a $2,500 relocation package for an adult (meaning, here, 25+ and no longer allowed to live without furniture, because it’s just not socially acceptable to sleep on a hand-me-down mattress with no bed) is to be taken as a perk rather than a fuck-you.

This culture of cheapness is disgusting. One time a couple of years ago, I spoke to a startup about a supposedly “VP-level” role and, when it came time to discuss an on-site interview, they asked if I could apply to another company (that would cover travel costs) and “piggyback” their interview process on the same day– that is, go to them in the evening after an all-day interview with another company– sparing them the cost of flights and a hotel. At that point, I stopped returning their calls, because I knew that if they couldn’t spring for airfare, they wouldn’t be willing or able to do the right thing when it came to relocation, so there was no point in continuing. 

Frankly, I think that the “we don’t do relo” policy is short-sighted if not idiotic. I checked my almanac to be sure, but the world is very big. No metropolitan area has anywhere close to a dominating share of the global talent pool. The Bay Area has some impressive people (and a much larger number of not-impressive people who still command impressive salaries) but if you’re serious about competing for talent, you just can’t restrict yourself to one geographic area. Not in 2014. Either set yourself up to be distributed (which is hard to do) or man the fuck up and pay for talent and pay for it to move. Otherwise, you aren’t serious about winning and you deserve to lose.

Oh, and another thing…

Then there is the matter of relocation clawbacks. Many companies put a stipulation on a relocation package that it must be repaid if the employee leaves within a certain timeframe. On what moral grounds is this OK? Did that employee not suffer the costs of relocation either way? Do movers reimburse a person who relocated for a job that turned out to be a stinker? Of course they don’t, because they still did the work, and expecting otherwise would be ludicrous. Almost no one joins a job expecting to leave in a year, which means people will only do so if things get really shitty. Why make it worse, by imposing a ridiculous penalty, if things go bad? Chances are that if an employee is willing to leave a company in the first year, the company is partially at fault. The purely opportunistic “job hopper” is a straw man that employers love to hate; the reality is that at least 75 percent of employers are just shitty (and bad at hiding it, so savvy people leave quickly). I don’t know why it’s considered a virtue to waste one’s life investing in an employer that isn’t invested in one’s own career. It’s not.

Clawbacks aren’t usually a big deal (it’s not hard to stay at a company for a year), but they send a strong signal: we employ at least one person who enjoys shitting in places where fecal matter doesn’t belong. That clawback clause didn’t appear by accident. It’s not likely that cosmic rays hit the hard drive where the contract template was stored and just happened to put additional words there. Someone in that company, at some time, made the decision to include a clawback clause. And I’m sorry, but people who try to earn their keep by shitting on things should be fired. Companies are complex beasts and need a wide variety of services to maintain and grow them. What they don’t need are people who indiscriminately shit on things because it amuses them to leave festering piles of dookie all over the company. If I wanted to see creatures shitting indiscriminately, I’d go to the zoo. The issue with a relocation clawback isn’t the economic risk to the employee, but the fact that the company actually retains someone who took the time to put that garbage in the contract.

The bigger problem

In this essay, I’ve Solved It with regard to relocation, because it actually is a simple issue: if you’re a company, play or go home. If you’re a prospective employee and you’re offered an inadequate relocation package, turn the job down. If the company were serious about winning, it would give you what you need to pull off the move properly. These “oh yeah, we offer $2,000” fly-by-night startups aren’t serious about winning. They don’t need top talent. What they need are clueless, naive true believers who’ll throw enough hours and youthful enthusiasm behind a bad idea to continue the founders’ campaign to defraud investors. It’s best if those true believers don’t have families, because their spouses might “disrupt” (I had to use that word) them with questions like, “Why are you working more than 40 hours per week at a company that only gave you 0.02% in equity?” It’s best if they’re under 27 and don’t mind taking on the insane risk of moving into an extremely expensive city without an offer of temporary housing.

Of course, there’s one exception. If you will have at least 20 percent equity in the company, you might consider taking on the risk of an unpaid relocation. That’s right, 20%. You have to be partner-level for it to make sense, and prior to external investment, anything less than 20% is not partner level. (After external investment, full relocation should be a given for all key players.) Don’t take partner-level risk for an employee-level role. It doesn’t engender respect; it shows weakness. 

So what’s the bigger issue? What I’ve noticed about such a large number of startups, in the Valley and elsewhere, is that they’re disgustingly cheap. Cheap things are usually of low quality. Exceptions exist, but they’re rare, and if your business strategy is based on cheapness, you’ll fail, because opportunities to buy something of high quality at a low price are far too intermittent to build a company on. Also, one thing you learn in technology is that low-quality components often corrupt the whole system. If you pour a cup of wine into a barrel of sewage, you have a barrel of sewage. But if you pour a cup of sewage into a barrel of wine, you also have a barrel of sewage. Most VC-funded companies are launched with absolutely no understanding of this principle. They’re designed to be as stingy as possible in the hope that one “home run” company will emerge from the chaos and pay for the hundred failures. Occasionally, the idea is so good that even terrible health benefits and sloppy HR and closed allocation won’t block a success. These opportunities emerge only a few times in each generation, but they do exist. In general, though, a company designed poorly and with low-quality components (management, policies, mission and founding principles) will just be a loser. VCs can afford to generate hundreds of losers because they’re diversified. Rank-and-file employees can’t.

In reality, a company succeeds not by being as cheap as possible, but by focusing on the delta: what are you paying, and what are you getting? Cost-minimization usually leads to low quality and a minuscule if not negative delta. A new Volkswagen GTI for $20,000 is a great deal. A beat-up clunker unlikely to last a year isn’t worth $1,000, and is likely to cost more in headaches and maintenance than buying a quality car. In general, both extremes of the price spectrum are where the worst deals are found. The high end is populated by Veblen goods and dominated by the winner’s curse, while the low end is protected by psychological and structural advantages to the supplier (some purchasers will always buy the cheapest option, sometimes by regulation), often allowing quality to fall to (or below) zero. The same holds in hiring as with commodity goods. Overpaid executives are often the most damaging people in an organization, and paying them more doesn’t improve the quality of people hired; on the other hand, the desperate employees willing to take the worst deals are usually of low or negative value to the business.

The actual “golden rule” of success at business isn’t “buy low, sell high”, because opportunities to do so are extremely rare, and “carrying costs” associated with the wait for perfect deals are unaffordable. Instead, it’s “maximize the spread between your ‘sell’ and ‘buy’ price” or, for a more practical depiction that sounds less manipulative, “sell something that is more than the sum of its parts”. With some components, the right move is to buy high and sell higher. For others, it’s to buy low and sell less-low. For example, in software, you probably want to take the “buy high and sell higher” strategy with employees. A competent software engineer is worth $1 million per year to the business, so it’s better to hire her at $150,000 (which, in current market conditions, is more than enough to attract talent, if you have interesting problems to solve) than to hire an incompetent at any salary. Talent matters, which means that to spend on salary and benefits and relocation is worth it. The way you win in software is to buy high and sell higher. That said, “buy high” also means you have to buy (in this case, to hire) selectively, because you can’t buy many things if you’re buying high. Unfortunately, people who have sufficient talent to make those calls are rare (and most of us are irrelevant bloggers, not investors). MBA-toting VCs like cheapness because, without the ability to assess talent and make those calls, the scatter-shot approach is all they have. 

I think it’s important, before I go further, to focus on the difference between genuine constraint and stupid cheapness. Before venture capital got involved in small-business formation to the extent that it has (largely because bank loans now require personal liability, making them a non-starter for anything with a non-zero chance of failure), most emerging firms were legitimately limited in their ability to hire more people. “Lean” wasn’t some excuse for being mean-spirited or miserly; it was just the reality of getting a company off the ground. If the constraints really exist, then play within them. If you can barely afford salaries for your three founders because you’re bootstrapping on scant consulting revenue, then maybe you can’t afford to pay their relocation costs. The problem with VC-funded cheapness is that those constraints don’t really exist. “We can’t afford $X” is predicated on “We only have $100X and need to hire 100 people”, which is predicated on “We’re going to hire such mediocre people that we need 100 of them to get the job done”. That mediocrity wouldn’t be an issue if $2X were offered instead; the job might then be feasible with 15 people. But the VCs aren’t able to assess talent at that level, nor to pick founders who can, so it’s easier for that class of people to pretend that talent differentials among engineers don’t exist and implement a closed-allocation MBA culture.

Mindless, stupid cost-cutting isn’t limited to startups. Large corporations show this behavior as well, and it’s easy to explain why it’s there. Those who can, do. Those who can’t, evaluate and squeeze. This evaluation process can take many forms, such as micromanagement; a complementary form is the senseless and mean-spirited penny-shaving of the modern corporate bureaucrat (“I don’t think you need that!”). It takes no vision to “cut costs” in a way that, in 99% of cases, merely externalizes costs elsewhere (low-quality technology, morale problems, environmental impact). Penny-shaving is what stupid overpaid fuckheads do to justify their executive positions when they don’t have the vision to actually lead or innovate. They cut, and they cut, and they cut. They get good at it, too. Then they start asking questions like, “Why are we paying full relocation for these experienced programmers, when there are a bunch of starry-eyed 22-year-olds with California dreams?” Six months later, that’s answered by the cataclysmic drop in code quality, which is starting to bring major pain to the business, but the cost-cutting idiot who had the idea has already gotten his promotion and moved on to fuck something else up.

When companies balk at the concept of offering a proper relocation, the message it sends to me is that they’re in the business of squeezing zero-sum petty wins out of their employees, rather than vying for actual wins on the market. 

Conclusion

Most software engineers don’t know what they’re getting into when they enter this industry, and spend more time than is reasonable being losers. I can’t claim innocence on this one. There were jobs where I worked for shitty companies or inept managers or on pointless projects, for reasons that seemed to make sense at the time but were almost certainly mistakes. Don’t make that mistake. Don’t be a loser. The good news is that this is rather easy to control. One can’t change one’s level of talent or influence the large component that is pure luck, but avoiding loserism has some straightforward rules. Don’t work with losers. Don’t work for losers. (Don’t fight against losers either. That’s a mistake I’ve made as well; they can be vicious and powerful when in groups.) Don’t make excuses for people who can’t seem to get it together and play to win. Stick to the people who actually want to achieve something and will bet big on the things that matter, not the 95% of VC-funded founders and executives just trying to draw a salary on a venture capitalist’s dime (despite all that paper-thin bullshit rhetoric about “changing the world”) while squeezing everyone else at every opportunity.

Unless you’re a founder and the resources simply don’t exist yet, never relocate unpaid. If the company actually sees you as a key player, and it cares about winning more than zero-sum cost-cutting, it will solve your moving problems for you, so you can get to work in earnest on your first day. 

Greed versus sadism

I’ve spent a fair amount of time reading Advocatus Diaboli, and his view of human nature is interesting: he argues that sadism is a prevailing human trait. In one essay, he states:

They all clearly a demonstrate a deep-seated and widespread human tendency to be deceitful, cruel, abusive and murderous for reasons that have almost nothing to with material or monetary gain. It is as if most human beings are actively driven a unscratchable itch to hurt, abuse, enslave and kill others even if they stand to gain very little from it. Human beings as a species will spend their own time, effort and resources to hurt other living creatures just for the joy of doing so.

This is a harsh statement, and far from socially acceptable. Sadism as a defining human characteristic, rather than a perversion? To be upfront: I don’t agree that sadism is nearly as prevalent as AD suggests. However, it’s an order of magnitude more prevalent than most people want to admit. Economists ignore it and focus on self-interest: the economic agent may be greedy (that is, focused on narrow self-interest) but he’s not trying to hurt anyone. Psychology treats sadism as pathological, limited to a small set of broken people called psychopaths, and then tries to figure out what material cause created such a monster. The liberal, scientific, philosophically charitable view is that sadistic people are an aberration. People want sex and material comfort and esteem, it holds, but not to inflict pain on others. Humans can be ruthless in their greed, but are not held to be sadistic. What if that isn’t true? We should certainly entertain the notion.

The Marquis de Sade– more of a pervert than a philosopher, and a writer of insufferably boring, yet disturbing, material– earned his place in history by this exact argument. In the Enlightenment, the prevailing view was that human nature was not evil, but neutral-leaning-good. Corrupt states and wayward religion and unjust aristocracies perverted human nature, but the fundamental human drive was not perverse. De Sade was one of the few to challenge this notion. To de Sade, inflicting harm on others for sexual pleasure was the defining human trait. This makes the human problem fundamentally insoluble. If self-interest and greed are the problem, society can align people’s self-interests by prohibiting harmful behaviors and rewarding mutually beneficial ones. If, however, inflicting pain on others is a fundamental human desire, then it is impossible for any desirable state of human affairs to be remotely stable; people will destroy it, just to watch others suffer.

For my part, I do not consider sadism to be the defining human trait. It exists. It’s real. It’s a motivation behind actions that are otherwise inexplicable. Psychology asserts it to be a pathological trait of about 1 to 2 percent of the population. I think it’s closer to 20 percent. The sadistic impulse can overrun a society, for sure. Look at World War II: Hitler invaded other countries to eradicate an ethnic group for no rational reason. Or, the sadists can be swept to the side and their desires ignored. Refusing to acknowledge that it exists, however, is not a solution, and I’ll get to why that is the case.

Paul Graham writes about the zero-sum mentality that emerges in imprisoned or institutionalized populations. He argues that the malicious and pointless cruelty seen in U.S. high schools, in prisons, and among high-society wives is of a kind that emerges from boredom. When people don’t have something to do, and are institutionalized or constrained by others’ low regard for them (teenagers are seen as economically useless, high-society wives are made subservient, prisoners are seen as moral scum), they create senseless and degrading societies. He’s right about all of this. Where he is wrong is in his assertion that “the adult world” (work) is better. For him, working on his own startup in the mid-1990s Valley, it was. For the 99%, it’s not. Office politics is the same damn thing. Confine and restrain people, and reinforce their low status with attendance policies and arbitrary orders, and you get some horrendous behavior. Humans are mostly context. Almost all of us will become cruel and violent if circumstances demand it. Okay, but is that the norm? Is there an innate sadism in humans, or is it rare except when induced by poor institutional design? The prevailing liberal mentality is that most human cruelty is the fault of either uncommon biological aberration (mental illness) or incompetent (but not malicious) design in social systems. The socially unacceptable (but not entirely false) counterargument is that sadism is a fundamental attribute of us (or, at least, many of us) as humans.

What is greed?

The prevailing liberal attitude is that greed is the source of much human evil. But the thing about greed is that it’s not all that bad. In computer science, we call an optimization algorithm “greedy” if it makes the locally best choice at each step, without considering the whole search space– and these greedy algorithms often work. Sometimes they’re the only option, because anything else requires too much in the way of computational resources. “Greed” can simplify. Greedy people want to eat well, to travel, and for their children to be well-educated. Since that’s what most people want, they’re relatable. They aren’t malignant. They’re ruthless and short-sighted and often arrogant, but they (just like anyone else) are just trying to have good lives. What’s wrong with that? Nothing, most would argue. Most importantly, they’re reasonable. If society can be restructured and regulated so that doing the right thing is rewarded, and doing the wrong thing is punished or forbidden, greedy people can be used for good. Unlike the case with sadism, the problem can be solved with design.
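
For the record, here’s the canonical kind of example I mean: interval scheduling, a textbook case (my illustration, nothing from the corporate world) where a greedy algorithm that looks only one step ahead is nonetheless provably optimal.

    # Greedy interval scheduling: repeatedly take the meeting that ends
    # earliest among those compatible with what's already chosen.
    # Short-sighted at each step, yet provably optimal for this problem.
    def max_nonoverlapping(intervals):
        chosen, last_end = [], float("-inf")
        for start, end in sorted(intervals, key=lambda iv: iv[1]):
            if start >= last_end:            # compatible with prior picks
                chosen.append((start, end))
                last_end = end
        return chosen

    meetings = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (6, 10), (8, 11)]
    print(max_nonoverlapping(meetings))      # [(1, 4), (5, 7), (8, 11)]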

Is greed good? It depends on how the word is defined. We use the word ambition positively and greed negatively, but if we compare the words as they are, I’m not sure this makes a lot of sense. Generally, I view people who want power more negatively than those who want wealth (in absolute, rather than relative terms) alone. As a society, we admire ambition because the ambitious person has a long-term strategy– the word comes from the Latin ambire, which means to walk around gathering support– whereas greed has connotations of being short-sighted and petty. We conflate long-range thinking with virtue, ignoring the fact that vicious and sadistic people are capable of long-term thought as well. At any rate, I don’t think greed is good. However, greed might be, in certain contexts, the best thing left.

To explain this, note the rather obvious fact that corporate boardrooms aren’t representative samples of humanity. For each person in a decision-making role in a large business organization, there’s a reason why he’s there, and if you think it comes down to “hard work” or “merit”, you’re either an idiot or painfully naive. Society is not run by entrepreneurs, visionaries, or creators. It’s run by private-sector social climbers. Who succeeds in such a world? What types of people can push themselves to the top? Two kinds: the greedy and the sadistic. No one else can make it up there, and I’ll explain why later in this essay.

This fact is what, in relative terms, makes greed good. It’s a lot better than sadism.

The greedy person may not value other concerns (say, human rights or environmental conservation) enough, but he’s not out to actively destroy good things either. The sadist is actively malicious and must be rooted out and destroyed. It is better, from the point of view of a violence-averse liberal, that the people in charge be merely greedy. Then it is possible to reason with them, especially because technology makes rapid economic growth (5 to 20+ percent per year) possible. What prevents that from happening now is poor leadership, not malignant obstruction, and if we can share the wealth with them while pushing them aside, that might work well for everyone. If the leaders are sadistic, the only way forward is over their dead bodies.

“The vision thing”

Corporate executives do not like to acknowledge that the vast majority of them are motivated by either greed or sadism. Instead, they talk a great game about vision. They concoct elaborate narratives about the past, the future, and their organization’s place in the world. It makes greed more socially acceptable. Yes, I want power and wealth; and here is what I plan to do with them. In the corporate world, however, vision is almost entirely a lie, and there’s a solid technical reason why that is the case.

We have a term in software engineering called “bikeshedding”, which refers to the narcissism of petty differences. Forget all that complicated stuff; what color are we going to paint the bike shed? The issue quickly becomes one that has nothing to do with aesthetics. It’s a referendum on the status of the people in the group. You see these sorts of things often in mergers. In one company, software teams are named after James Bond villains; in the other, they’re named after 1980s hair bands. If the merger isn’t going well, you’ll see one team try to obliterate the memetic cultural marks of the other. “If you refer to Mötley Crüe in another commit message, or put umlauts where they don’t belong for any reason, I will fucking cut you.”

Bikeshedding gets ugly because it’s a fundamental human impulse (and one that is especially strong in males) to lash out against unskilled creativity– or the perception of it, because the perceiver may be the defective one. You see this in software flamewars, or in stand-up comedy (hecklers pestering comics, and the swifter comics brutally insulting their adversaries). This impulse toward denial is not sadistic, or even a bad thing at its root. It’s fundamentally conservative: inflicting brutal social punishment on incompetent wannabe chieftains is part of what kept early humans from walking into lions’ dens.

As a result of this very strong anti-bikeshedding impulse, creativity and vision are punished, because (a) even those with talent and vision come under brutal attack and are drawn into lose-lose ego wars, and (b) there are almost never creatively competent adults in charge who can consistently resolve conflicts on the right side. The end result is that these aspects of humanity are driven out of organizations. If you stand for something– anything, even something obviously good for the organization– the probability that you’ll take a career-ending punch approaches one as you climb the ladder. If you want to be a visionary, Corporate America is not the place for it. If you want to be seen as a visionary in Corporate America, the best strategy is to discern what the group wants before a consensus has been reached, and espouse the viewpoint that is going to win– before anyone else has figured that out. What this means is that corporate decisions are actually made “by committee”, and that the committee is usually made up of clever but creatively weak individuals. In the same way that mixing too many pigments produces an uninspiring blah-brown color, an end result of increasing entropy, the decisions that come from such committees are usually depressing ones. They can’t agree on a long-term vision, and to propose one is to leave oneself politically exposed and be termed a “bikeshedder”. The only thing they can agree upon is short-term profit improvement. However, increasing revenue is itself a problem that requires some creativity; if the money were easy to make, it’d already be had. Cutting costs is easier; any dumbass can do that. Most often, those costs are merely externalized. Cutting health benefits, for one example, means work time is lost to arguments with health insurance companies, reducing productivity in the long run and being a net negative on the whole. But because those with vision are so easily called out as bikeshedding, impractical narcissists, the only thing left is McKinsey-style cost externalization and looting.

Hence, two kinds of people remain in the boardroom, after the rest have been denied entry or demoted out of the way: the ruthlessly greedy, and the sadistic.

Greedy people will do what it takes to win, but they don’t enjoy hurting people. On the contrary, they’re probably deeply conflicted about what they have to do to get the kind of life they want. The dumber ones probably believe that success in business requires ruthless harm to others. The smarter ones see deference to the mean-spirited cost-cutting culture as a necessary, politically expedient evil. If you oppose it, you risk appearing “soft” and effeminate and impractical and “too nice to succeed”. So you go along with the reduction of health benefits, the imposition of stack ranking, the artificial scarcities inherent in systems like closed allocation, just to avoid being seen that way. That’s how greed works. Greedy people figure out what the group wants and don’t fight it; they front-run that preference as it emerges. So what influences go into that group preference? Even without sadism, the result of the entropy-increasing committee effect seems to be “cost cutting” (because no one will ever agree on how to increase revenue). With sadism in the mix, convergence on that sort of idea happens faster, and the willingness to ignore externalized costs grows.

The sadist has an advantage in the corporate game that is unmatched. The more typical greedy-but-decent person will make decisions that harm others, but is drained by doing so. Telling people that they don’t have jobs anymore, that they won’t get a decent severance because that would have been a losing fight against HR, and that they have to be sent out by security “by policy” is miserable work for such people. They’ll play office politics, and they play to win, but they don’t enjoy it. Sadists, on the other hand, are energized by harm. Sadists love office politics. They can play malicious games forever. One trait that gives them an advantage over the merely greedy is that, not only are they energized by their wins, but they don’t lose force in their losses. Greedy people hate discomfort, low status, and loss of opportunity. Sadists don’t care what happens to them, as long as someone else is burning.

This is why, while sadists are probably a minority of the general population, they make up a sizeable fraction of the upper ranks in Corporate America. Their power is bolstered by the fact that most business organizations have ceased to stand for anything. They’re patterns of behavior that have literally no purpose. This is because the decision-making derives from a committee of greedy people with no long-term plans, and sadistic people with harmful long-term plans (that, in time, destroy the organization).

Sadists are not a majority contingent in the human population, but we generally refuse to admit that sadism exists at all. We treat it as the province of criminals and perverts; surely these upstanding businessmen have their reasons (short-sighted ones, perhaps, but that is chalked up to a failure of regulation) for bad behavior. I would argue that, by refusing to admit to sadism’s prevalence and commonality, we actually give it more power. When people confront frank sadism, whether in the workplace or in public, they’re generally shocked. Against an assailant, whether we’re talking about a mugger or a manager presenting a “performance improvement plan”, most people freeze. It’s easy to say, “I would knee him in the nuts, gouge out his eyeballs, and break his fingers in order to get away.” Very few people, when battle visits them unprepared, do so. Mostly, the reaction is, I can’t believe this is happening to me. It’s catatonic panic. By refusing to admit that sadism is real and must be fought, we allow it to ambush us. In a street fight, this is observed in the few seconds of paralytic shock that can mean losing the fight and being killed. In HR/corporate matters, it’s the tendency of the PIP’d employee to feel intense personal shame and terror, instead of righteous anger, when blindsided by managerial adversity.

The bigger problem

Why do I write? I write because I want people in my generation to learn how to fight. The average 25-year-old software engineer has no idea what to do when office politics turn against him (and that, my friends, can happen to anyone; overperformance is more dangerous than underperformance, but that’s a topic for another essay). I also want them to learn “Work Game”. It’s bizarre to me that learning a set of canned social skills to exploit 20-year-old women with self-esteem problems (pickup artistry) is borderline socially acceptable, while career advice is always of the nice-guy “never lie on your resume, no exceptions” variety. (Actually, that advice is technically correct: everyone who succeeds in the corporate game has lied to advance his career, but never put an objectively refutable claim in writing.) Few people have the courage to discuss how the game is actually played. If men can participate in a “pickup artist” culture designed to exploit women with low self-respect, be considered “baller” for it, and raise millions in venture funding… then why is it career-damaging to be honest about what one has to do in the workplace just to maintain, much less advance, one’s position? Why do we have to pretend to uphold this “nice guy”/AFC belief in office meritocracy?

I write because I want the good to learn how to fight. We need to be more ruthless, more aggressive, and sometimes even more political. If we want anything remotely resembling a “meritocracy”, we’re going to have to fight for it and it’s going to get fucking nasty.

However, helping people hack broken organizations isn’t that noble of a goal. Don’t get me wrong. I’d love to see the current owners of Corporate America get a shock to the system. I’d enjoy taking them down (that’s not sadism, but a strong– perhaps pathologically strong, but that’s another debate– sense of justice). Nonetheless, we as a society can do better. This isn’t a movie or video game in which beating the bad guys “saves the world”. What’s important, if less theatrical and more humbling, is the step after that: building a new and better world after killing off the old one.

Here we address a cultural problem. Why do companies get to a point where the ultimate power is held by sadists, who can dress up their malignant desires as hard-nosed cost-cutting? What causes the organization to reach the high-entropy state in which the only self-interested decision it can make is to externalize a cost, when there are plenty of overlooked self-interested decisions that are beneficial to the world as a whole? The answer is the “tallest nail” phenomenon. The tallest nail gets hammered down. As a society, that’s how we work. Abstractly, we admire people who “put themselves out there” and propose ideas that might make their organizations and the world much better. Concretely, those people are torn down as “bikeshedders” by (a) their ideological opponents, who usually have no malicious intent but don’t want their adversaries to succeed– at least, not on that issue; (b) sadists relishing the opportunity to deny someone a good thing; (c) personal political rivals, which any creative person will acquire over time; and (d) greedy self-interested people who perceive the whim of the group as it is emerging and issue the final “No”. We have a society that rewards deference to authority and punishes creativity, brutally. And capitalism’s private sector, which is supposed to be an antidote to that, and which is supposed to innovate in spite of itself, is where we see that tendency at its worst.

Greed (meaning self-interest) can be good, if directed properly by those with a bit of long-term vision and an ironclad dedication to fairness. Sadism is not. The combination of the two, which is the norm in corporate boardrooms, is toxic. Ultimately, we need something else. We need true creativity. That’s not Silicon Valley’s “make the world a better place” bullshit either, but a genuine creative drive that comes from a humble acknowledgement of just how fucking hard it is to make the world a tolerable, much less “better”, place. It isn’t easy to make genuine improvements to the world. (Mean-spirited cost-cutting, sadistic game-playing, and cost externalization are much easier ways to make money. Ask any management consultant.) It’s brutally fucking difficult. Yet millions of people every day, just like me, go out and try. I don’t know why I do it, given the harm that even my mild public cynicism has brought to my career, but I keep on fighting. Maybe I’ll win something, some day.

As a culture, we need to start to value that creative courage again, instead of tearing people down over petty differences.


In defense of defensibility

I won’t say when or where, but at one point in time, a colleague and I were discussing our “red button numbers” for the organization under which we toiled.

What’s that? The concept is this: a genie offers you the option to push a “red button” and, if you do so, your company will go bankrupt and cease to exist. Equity will be worthless, paychecks will bounce, and jobs will end. However, every employee in that company gets the same cash severance. (Let’s say $50,000.) The stipulation that every employee gets paid is important. I’m not interested in what some people might do if they had no moral scruples. Some people would blow up their employers for a $50,000 personal payoff, with everyone else getting nothing, but almost no one would admit this, or do it if it were to become known. If everyone gets paid, pushing the red button becomes ethically acceptable. At $50,000 for a typical company? Hell yeah. Most employees would see their lives improve. The executives would be miserable, getting a pittance compared to their salaries, but… seriously, fuck ‘em. A majority of working people, if their company were scrapped and they were given a $50,000 check, would be dealt a huge favor by that circumstance.

The “everyone gets paid” red-button scenario is more interesting because it deals with what people will do in the open and consider ethically acceptable. When I get to more concrete matters of decisions people make that repair or dismantle companies, the interesting fact is that most of those decisions happen in the open. Companies are rarely sabotaged in secret, but disempowered and decomposed by their own people in plain view.

The “red button number” is the point at which a person would press the button, end the company, have every employee paid out that amount, and consider that an ethically acceptable thing to do. It’s safe to assume that almost everyone in the private sector has a red button number. For the idealists, and for the wealthy executives, that number might be very high: maybe $10 million. For most, it’s probably quite low. People who are about to be fired, and don’t expect a severance, might push the button at $1. Let’s assume that we could ask people for their red button numbers, and they’d answer honestly, and that this survey could be completed across a whole company. Take the median red button number and multiply it by the number of employees. That’s the company’s defensibility. We can’t actually measure this number directly, but it has a real-world meaning. If there were a vote on whether to dissolve the company and pay out some sum D, divided among all employees equally, the defensibility is the D* for which, if D > D*, the motion will pass and the company will be disbanded, and if D < D*, the company will persist. It’s the valuation the employees assign to the company (which is, often, a very different number from its market capitalization or private valuation).
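To make that concrete, here’s a minimal sketch of the computation in Python. The survey numbers are invented for illustration; nothing below comes from a real company.

    from statistics import median

    # Hypothetical survey: each entry is the lowest per-employee payout at
    # which that employee would press the red button. All numbers invented.
    red_button_numbers = [5_000, 20_000, 50_000, 75_000, 150_000]

    def defensibility(survey, headcount):
        """D* = median red-button number, multiplied by headcount."""
        return median(survey) * headcount

    def dissolution_passes(total_payout, survey, headcount):
        """A vote to dissolve, paying out D split equally, passes iff D > D*."""
        return total_payout > defensibility(survey, headcount)

    d_star = defensibility(red_button_numbers, headcount=5)
    print(d_star)                                              # 250000
    print(dissolution_passes(300_000, red_button_numbers, 5))  # True: D > D*

The per-employee share exceeds the median red-button number exactly when D > D*, which is why the median, and not the mean, is the statistic that decides a majority vote.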

Of course, such a vote would never actually happen. Companies don’t give employees that kind of power, and there’s an obvious reason why. Most companies are, at least according to the stock market or private valuations, assigned values much greater than their defensibility. (This is not unreasonable or surprising, if a bit sad.) I can’t measure this to be sure, and I’d rather not pick on specific companies, so let me give three models and leave it to the reader to judge whether my assessments make sense.

Model Company A: a publicly-traded retail outlet with 100,000 employees, many earning less than $10/hour. I estimate the median “red button” number at $5,000, putting the defensibility at $500 million. A healthy valuation for such a company would be $125,000 per employee, or $12.5 billion. Defensibility is 4 cents on the dollar.

Model Company B: an established technology company with 20,000 employees. Software engineers earn six figures, and engineers-in-test and the like earn high-five-figure salaries. There’s a “cool” factor to working there. I’d estimate the median “red button” number at about 6-9 months of salary (for some of the most enthusiastic employees, it might be as much as five years), or $75,000, putting the defensibility at $1.5 billion. A typical valuation for such a company would be $5 million per head, or $100 billion. Even though this is a company whose employees wouldn’t leave lightly, its defensibility is still only 1.5 cents on the dollar.

Model Company C: a funded startup, with 100 employees and a lot of investor and “tech press” attention. Many “true believers” among the employee ranks. Let’s assume that, to get a typical employee to push the “red button”, we’d have to guarantee 6 months of pay ($50,000) and 250 percent of the fully-vested equity (0.04%), because so many employees really expect the stock to grow. The valuation of the company is $200 million (or $2 million per employee). We reach a defensibility of $250,000 per employee, or $25 million. That’s a lot, but it’s still only 12.5% of the valuation of the business.
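As a sanity check on the arithmetic, here is the same computation run over all three model companies. The inputs are only my estimates from above, not measured data.

    # (median red-button number, headcount, valuation per employee),
    # all taken from the three model-company estimates above.
    models = {
        "A: retail":      (5_000,   100_000, 125_000),
        "B: established": (75_000,  20_000,  5_000_000),
        "C: startup":     (250_000, 100,     2_000_000),
    }

    for name, (red_button, headcount, value_per_head) in models.items():
        d_star = red_button * headcount          # defensibility
        valuation = value_per_head * headcount   # market/private valuation
        print(f"{name}: D* ${d_star:,}, valuation ${valuation:,}, "
              f"{100 * d_star / valuation:.1f} cents on the dollar")

This prints 4.0, 1.5, and 12.5 cents on the dollar, matching the figures above.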

None of these companies are, numerically speaking, very defensible. That is, if the company could be dissolved to anarchy, with its value (as assessed by the market) distributed among the employees, the employees would prefer it that way. Of course, a company need not be defensible at 100 cents on the dollar for its employees to wish it to remain in existence. If a $10 billion company were dissolved in such a way, there wouldn’t actually be $10 billion worth of cash to dish out. To the extent that companies can be synergistic (i.e. worth more than the sum of their component parts), it’s a reasonable assumption that a company whose defensibility was even 50 percent of its market capitalization would never experience voluntary dissolution, even if it were put to a vote.

In real life, of course, these “red button” scenarios don’t exist. Employees don’t get to vote on whether their companies continue existing, and, in practice, they’re usually the biggest losers in full-out corporate dissolution because they have far less time to prepare than the executives. The “red button number” and defensibility computations are an intellectual exercise, that’s all. Defensibility is what the company is worth to the employees. Given that defensibility numbers seem (under assumptions I consider reasonable) to consistently come in below the valuation of the company, we understand that companies would prefer that their persistence not come down to an employee vote.

That a typical company might have a defensibility of 5 cents on the dollar, to me, underscores the extreme imbalance of power between capital and labor. If the employees value the thing at X, and capital values it at 20*X, that seems to indicate that capital has 20 times as much power as the employees do. It signifies that companies aren’t really partnerships between capital and labor, but exist almost entirely for capital’s benefit.

Does defensibility matter? In this numerical sense, it’s a stretch to say that it does, because such votes can’t be called. Fifty-one percent of a company’s workers realizing that they’d let the damn thing die for six months’ severance has no effect, because they don’t have that power. If defensibility numbers were within a factor of 2, or even 4, of companies’ market capitalizations, I’d say that these numbers (educated guesses only) tell us very little. It’s the sheer magnitude of the discrepancy, under even the most liberal assumptions, that is important.

People in organizations do “vote” on the value of the organization with what they do, day to day, ethically (and unethically), as they trade off between self-interest and the upkeep of the organization. How much immediate gain would a person forgo, in order to keep up the organization? The answer is: very little. I’d have no ethical qualms, with regard to most of the companies that I’ve worked at, in pressing the red button at $100,000. Most employees would be ecstatic, a few executives would be miserable; fair trade.

Thus far, I’ve only addressed public, ethical, fair behavior. Secretive and unethical behaviors also affect a company, obviously. However, I don’t see the latter as being as much of a threat. Organizations can defend themselves against the unethical, the bad actors, if the ethical people care enough to participate in the upkeep. It’s when ethical people (publicly and for just reasons) tune out that the organization is without hope.

The self-healing organization

Organizations usually rot as they age. It’s practically taken as a given, in the U.S. private sector, that companies will change for the worse as time progresses. Most of the startup fetishism of the SillyCon Valley is derived from this understanding: organizations will inexorably degrade with age (a false assumption, and I’ll get to that) and the best way for a person (or for capital) to avoid toxicity is to hop from one upstart company to another, leaving as soon as the current habitat gets old.

It is true that most companies in the private sector degrade, and quite rapidly. Why is it so? What is it about organizations that turns them into ineffective, uninspiring messes over time? Are they innately pathological? Is this just a consequence of corporate entropy? Are people who make organizations better so much rarer than those who make them worse? I don’t think so. I don’t think it’s innate that organizations rot over time. I think that it’s common, but avoidable.

The root entropy-increasing cause of corporate decay is “selfishness”. I have to be careful with this word, because selfishness can be a virtue. I certainly don’t intend to impose any value, good or bad, on that concept here. Nor do I imply secrecy or subterfuge. Shared selfishness can be a group experience. Disaffected employees can slack together and protect each other. Often this happens. One might argue that it becomes “groupishness” or localism. I don’t care to debate that point right now.

Organizations decay because people and groups within them, incrementally, prefer their interests over those of the institution. If offered promotions they don’t deserve, into management roles that will harm morale, people usually take them. If they can get away with slacking– either to take a break, or to put energy into efforts more coherent with their career goals– they will. (The natural hard workers will continue putting forth effort, but only on the projects in line with their own career objectives.) If failures of process advantage them, they usually let those continue. When the organization acts against their interests, they usually make it hurt, causing the company to recoil and develop scar tissue over time. They protect themselves and those who have protected them, regardless of whether their allies possess “merit” as defined by the organization. These behaviors aren’t exactly crimes. Most of this “selfishness” is stuff that morally average people (and, I would argue, even many morally good ones) will do. Most people, if they found a wallet with $1,000 and a driver’s license in it, would take it to the police station for return to its owner. However, if they were promoted based on someone else’s work, and that “someone else” had left the company so there was no harm in keeping the promotion, they’d keep the arrangement as-is. I’m not different from the average person on this; I’ll just admit to it, in the open.

People in the moral mid-range will generally try to do the right thing. On the “red button” issue, most wouldn’t tank their own companies for a personal payout, leaving all their colleagues screwed. Most would press the button in the “every employee gets paid” scenario, because it’s neither ethically indefensible nor socially unacceptable to do so. Such people are in the majority and not inherently corrosive to institutions– they uphold those that are good to them and their colleagues, and act against those that harm them. However, they hasten the decay of those organizations that clearly don’t deserve concern.

Let’s talk about entropy, or the increasing tendency toward disorder in a closed system. Life can only persist because certain genetic configurations enable an organism, taking in external energy, to preserve local low-entropy conditions. A lifeless human body, left on the beach to be beaten by the waves, will be unrecognizable within a couple of days. That’s entropy. A living human can sit in the same conditions with minimal damage. In truth, what we seem to recognize as life is “that which can preserve its own local order”. Living organisms are constantly undergoing self-repair. Cells are destroyed by the millions every hour, but new ones are created. Dead organisms are those that have lost the ability to self-repair. The resources in them will be recycled by self-interested and usually “lower” (less complex) organisms that feed on them as they decay.

Organizations, I think, can be characterized as “living” or “dead”, to some degree, based on whether their capacity for self-repair exceeds the inevitable “wear and tear” that will be inflicted by the morally acceptable but still entropy-increasing favoritism that its people show for their own interests. The life/death metaphor is strained, a bit, by the difficulty in ascertaining which is which. In biology, it’s usually quite clear whether an organism is alive. We know that when the human heart stops, the death process is likely to begin (and, absent medical intervention, invariably will begin) within 5 minutes and, after 10 minutes, the brain will typically be unrecoverable and that person will no longer exist in the material world. Life versus death isn’t completely binary in biology, but it’s close enough. With organizations, it’s far less clear whether the thing is winning or losing its ongoing fight against entropy. To answer that question involves debate and research that, because the questions asked are socially unacceptable, can rarely be performed properly.

“Red button” scenarios don’t happen, but every day, people make small decisions that influence the life/death direction of the company. Individually, most of these decisions don’t matter much. A company isn’t going to fail because a disaffected low-level employee spent his whole morning searching for other work. If everyone’s looking for another job, though, that’s a problem.

In the VC-funded world, self-repair has been written off as a lost cause. It’s not even attempted. The mythology is that everything old (“legacy”) is of low value (and should be sold to some even older, more defective company) and that only the new is worth attention. Don’t try to repair old organizations; just create new shit and make old mistakes. It’s all about new programmers (under 30 only, please) and new languages (often recycling ideas from Lisp and Haskell, implementing them poorly) and new companies. This leads to massive waste as nothing is learned from history. It becomes r-selective in the extreme, with the hope that, despite frequent death of the individual organisms, there can be a long-lived clonal colony for… someone. For whom? To whose benefit is this clonal organism? It’s for the well-connected scumbags who can peddle influence and power no matter which companies beat the odds and thrive for a few days, and which ones die.

In the long run, I don’t think this is going to work. Building indefensible companies in large numbers is not going to create a defensible meta-organism. To do so is to create a con (admittedly, a somewhat brilliant and unprecedented one, a truly postmodern corporate organism) in which enthusiastic young people trade their energy and ardor for equity in mostly-worthless companies, and whose macroscopic portfolio performance is mediocre (as seen in the pathetic returns VC offers its passive investors) but which affords great riches for those with the social connections (and lack of moral compass) necessary to navigate it. It works for a while, and then people figure out what’s going on, and it doesn’t. We call this a “bubble/crash” cycle, but what it really is, at least in this case, is an artifact of limitations on human stupidity. People will only fall for a con for so long. The naive (like most 22-year-olds when they’re just starting the Valley game) get wiser, and the perpetual suckers have short attention spans and will be drawn to something shinier.

Open allocation

What might a defensible company look like? Here I come to one of my pet issues: open allocation. The drawback of open allocation, terrifying to the hardened manageosaur, is that it requires that the organization be defensible in order to work, because it gives employees a real vote on what the company does. The good news is that open allocation tends naturally toward defensibility. If the organization is fair to its employees and its people are of average or better moral quality, then the majority of them can be trusted to work within the intersection between their career interests and the needs of the company.

Why does open allocation work so well? Processes, projects, patterns, protections, personality interactions, policies and power relationships (henceforth, “the Ps”) are all subjected to a decentralized immune system, rather than a centralized one that doesn’t have the bandwidth to do the job properly. Organizations of all kinds produce the Ps on a rather constant basis. When one person declines to use the bathroom because his manager just went in, that microdeference is P-generation. The same applies to the microaggression (or, to be flat about it, poorly veiled aggression) of a manager asking for an estimate. Requesting an estimate generates several Ps at once: a power relationship (to be asked for an estimate is to be made someone’s bitch), a trait of a project (it’s expected at a certain time, quality be damned), and a general pattern (managerial aggression, in the superficial interest of timeliness, is acceptable in that organization). Good actions also generate Ps. To empower people generates power relationships of a good kind, and affords protection. Even the most superficial interactions generate Ps, good and bad.

Under open allocation, the bad Ps are just swept away. When one person tries to dominate the other through unreasonable requests, or to socially isolate a person in order to gain power, the other has the ability to say, “fuck that shit” and walk away. The doomed, career-ending projects and the useless processes and the toxic power relationships just disappear. People will try to make such things, even in the best of companies, and sometimes without ill intent. Under open allocation, however, bad Ps just don’t stick around. People have the power to nonexist them. What this means is that, over time, the long-lived Ps are the beneficial ones. You have a self-healing organization. This generates good will, and people begin to develop a genuine civic pride in the organization, and they’ll participate in its upkeep. Open allocation may not be the only requisite ingredient for success on the external market, but it does insure an organization against internal decay.

Then there’s the morale issue, and the plain question of employee incentives. Employees of open allocation companies know that if their firms dissolve tomorrow, they’re going to end up in crappy closed-allocation companies afterward. They actually care– beyond “will I get a severance?”– whether their organizations live or die. In closed-allocation companies, the only way to get a person to care in this way is either (a) to give him a promotion that another organization would not (which risks elevating an incompetent) or (b) to pay him a sizable wage differential over the prevailing market wage– this leads to exponential wage growth, typically at a rate of 20 to 30 percent per year, and can be good for the employee but isn’t ideal for the employer. Because closed-allocation companies are typically also stingy, (a) is preferred, which means that loyalists are promoted regardless of competence. One can guess where that leads: straight to idiocy.

Under closed allocation, bad Ps tend to be longer-lived than good ones. Why? Something I’ve realized in business is that good ideas fly away from their originators and become universal, which means they can be debated on their actual merits and appropriateness to the situation (a good idea is not necessarily right for all situations). Two hundred years from now, if open allocation is the norm, it’s quite likely that no one will remember my name in connection to it. (To be fair, I only named it, I didn’t invent the concept.) Who invented the alphabet? We don’t know, because it was a genuinely good idea. Who invented sex? No one. On the other hand, bad ideas become intensely personal– loyalty tests, even. Stack ranking becomes “a Tom thing” (“Tom” is a made-up name for a PHP CEO) because it’s so toxic that it can only be defended by an appeal to authority or charisma, and yet to publicly oppose it is to question Tom’s leadership (and face immediate reprisal, if not termination). In a closed-allocation company, bad ideas don’t get flushed out of the system. People double down on them. (“You just can’t see that I’m right, but you will.”) They become personal pet projects of executives and “quirky” processes that no one can question. Closed allocation companies simply have no way to rid themselves of bad Ps– at least, not without generating new ones. Even firing the most toxic executive (and flushing his Ps with him) is going to upset some people, and the hand-over of power is going to result in P-generation that is usually in-kind. Most companies, when they fire someone, opt for the cheapest and most humiliating kind of exit– crappy or nonexistent severance, no right to represent oneself as employed during the search, negative (and lawsuit-worthy) things said about the departing employee– and that usually makes the morale situation worse. (If you disparage a fired and disliked executive, you still undermine faith in your judgment; why’d you hire him in the first place?) No matter how toxic the person is, you can’t fire someone in that way without generating more toxicity. People talk, and even the disaffected and rejected have power, when morale is factored in. The end result of all this is that bad Ps can’t really be removed from the system without generating a new set of bad Ps.

I can’t speak outside of technology because to do so is to stretch beyond my expertise. However, a technology company cannot have closed allocation and retain its capacity for self-repair. It will generate bad Ps faster than good ones.

What about… ?

There’s a certain recent, well-publicized HR fuckup that occurred at a well-known company using open allocation. I don’t want to comment at length about this. It wouldn’t have been newsworthy were it not for the high moral standard that company set for itself in its public commitment to open allocation. (If the same had happened at a closed-allocation oil company or bank, no one would have ever heard a word about it.) Yes, it proved that open allocation is not a panacea. It proved that open allocation companies develop political problems. This is not damning of open allocation, because closed allocation creates much worse problems. More damningly, the closed allocation company can’t heal.

Self-healing and defensibility are of key importance. All organizations experience stress, no exceptions. Some heal, and most don’t. The important matter is not whether political errors and HR fuckups happen– because they will– but whether the company is defensible enough that self-repair is possible.

The complexity factor

The increase of entropy in an organization is a hard-to-measure process. We don’t see most of those “Ps” as they are generated, and moral decay is a subjective notion. What seems to be agreed-upon is that, objectively, complexity grows as the organization does, and that this becomes undesirable after a certain point. Some call it “bureaucratic red tape” and note it for its inefficiency. Others complain about the political corruption that emerges from relationship-based complexity. For a variety of reasons, an organization gets into a state where it is too complicated and self-hogtied to function well. It becomes pathological. Not only that, but the complexity that exists becomes so onerous that the only way to navigate it is to create more complexity. There are committees to oversee the committees.

Why does this happen? No one sets out “to generate complexity”. Instead, people in organizations use what power they have (and even low-level grunts can have major effects on morale) to create conditional complexity. They make the decisions that favor their interests simple, and the ones that oppose them complicated enough that no one wants to think them through; instead, those branches of the game tree are just pruned. That’s how savvy people play politics. If a seasoned office politicker wants X and not Y, he’s not going to say, “If you vote for Y, I’ll cut you.” He can’t do it that way. In fact, it’s best if no one knows what his true preference is (so he doesn’t owe any favors to X voters). Nor can he obviously make Y unpleasant or bribe people into X. What he can do is create a situation (preferably over a cause seemingly unrelated to the X/Y debate) that makes X simple and obvious, but Y complex and unpredictable. That’s the nature of conditional complexity. It generates ugliness and mess– if the person’s interests are opposed.

In the long run, this goes bad because an organization will inevitably get to a point where it can’t do anything without opposing some interest, and then all of those conditional complexities (which might be “legacy” policies, set in place long ago and whose original stakeholders may have moved on) are triggered. Things become complex and “bureaucratic” and inefficient quickly, and no one really knows why.

The long-term solution to this complexity-burden problem is selective abandonment. If there isn’t a good reason to maintain a certain bit of complexity, that bit is abandoned. For example, it’s illegal, in some towns, to sing in the shower. Are those laws enforced? Never, because there’s no point in doing so. The question of what is the best way to “garbage collect” junk complexity is one I won’t answer in a single essay, but in technology companies, open allocation provides an excellent solution. Projects and power relationships and policies (again, the Ps) that no longer make sense are thrown out entirely.

The best defense is to be defensible

The low defensibility of the typical private-sector organization is, to me, quite alarming. Rational, morally average (or even morally above-average) people don’t value their employers very much, because most companies don’t deserve to be valued. They’re not defensible, which means they’re not defended, which is another way of saying that self-repair doesn’t happen and that organizational decay is inexorable.

On programmers, deadlines, and “Agile”

One thing programmers are notorious for is their hatred of deadlines. They don’t like making estimates either, and that makes sense, because so many companies demand estimates as a “keep ‘em on their toes” managerial microaggression rather than out of any real scheduling need. “It’ll be done when it’s done” is the programmer’s gruff, self-protecting, and honest reply when asked to estimate a project. Whether this is an extreme of integrity (those estimates are bullshit, and we all know it) or a lack of professionalism is hotly debated by some. I know what the right answer is, and it’s the first, most of the time.

Contrary to stereotype, good programmers will work to deadlines (by which I mean: work extremely hard to complete a project by a certain time) under specific circumstances. First, those deadlines need to exist in some external sense, e.g. a rocket launch whose date has been set in advance and can’t be changed. They can’t be arbitrary milestones set for emotional rather than practical reasons. Second, programmers need to be compensated for the pain and risk. Most deadlines, in business, are completely arbitrary and have more to do with power relationships and anxiety than anything meaningful. Will a good programmer accept business “deadline culture”, and the attendant risks, at a typical programmer’s salary? No, not for long. Even with good programmers and competent project management, there’s always a risk of a deadline miss, and the great programmers tend to be, at least, half-decent negotiators (without negotiation skills, you don’t get good projects and don’t improve). My point is only that a good programmer can take on personal deadline responsibility (in financial or career terms) and not find it unreasonable. That usually involves a consulting rate starting around $250 per hour, though.

Worth noting is that there are two types of deadlines in software: there are “this is important” deadlines (henceforth, “Type I”) and “this is not important” deadlines (“Type II”). Paradoxically, deadlines are attached to the most urgent (mission-critical) projects but also to the least important ones (to limit use of resources), while the projects of middling criticality tend to have relatively open timeframes.

A Type I deadline is one with substantial penalties to the client or the business if the deadline’s missed. Lawyers see a lot of this type of deadline; they tend to come from judges. You can’t miss those. They’re rarer in software, especially because software is no longer sold on disks that come in boxes, and because good software engineers prefer to avoid the structural risk inherent in setting hard deadlines, preferring continual releases. But genuine deadlines exist in some cases, such as in devices that will be sent into space. In those scenarios, however, because the deadlines are so critical, you need professional project-management muscle. When you have that kind of a hard deadline, you can’t trust 24-year-old Ivy grads turned “product managers” or “scrotum masters” to pull it out of a hat. It’s also very expensive. You’ll need, at least, the partial time of some highly competent consultants who will charge upwards of $400 per hour. Remember, we’re not talking about “Eric would like to see a demo by the 6th”; we’re talking about “our CEO loses his job or ends up in jail or the company has to delay a satellite launch by a month if this isn’t done”. True urgency. This is something best avoided in software (because even if everything is done right, deadlines are still a point of risk) but sometimes unavoidable and, yes, competent software engineers will work under such high-pressure conditions. They’re typically consultants starting around $4,000 per day, but they exist. So I can’t say something so simple as “good programmers won’t work to deadlines”, even if it applies to 99 percent of commercial software. They absolutely will– if you pay them 5 to 10 times the normal salary, and can guarantee them the resources they need to do the job. That’s another important note: don’t set a deadline unless you’re going to give the people expected to meet it the power, support, and resources to achieve it if at all possible. Deadlines should be extremely rare in software, so that when true hard deadlines exist, for external reasons, they’re respected.

Most software “deadlines” are Type II. “QZX needs to be done by Friday” doesn’t mean there’s a real, external deadline. It means, usually, that QZX is not important enough to justify more than a week of a programmer’s time. It’s not an actual deadline but a resource limit. That’s different. Some people enjoy the stress of a legitimate deadline, but no one enjoys an artificial deadline, which exists more to reinforce power relationships and squeeze free extra work out of people than to meet any pressing business need. More savvy people use the latter kind as an excuse to slack: QZX clearly isn’t important enough for them to care if it’s done right, because they won’t budget the time, so slacking will probably be tolerated as long as it’s not egregious. If QZX is so low a concern that the programmer’s only allowed to spend a week of his time on it, then why do it at all? Managers of all stripes seem to think that denying resources and time to a project will encourage those tasked with it to “prove themselves” against adversity (“let’s prove those jerks in management wrong… by working weekends, exceeding expectations and making them a bunch of money”) and work hard to overcome the gap in support and resources between what is given and what is needed. That never happens; not with anyone good, at least. (Clueless 22-year-olds will do it; I did, when I was one. The quality of the code is… suboptimal.) The signal sent by a lack of support and time to do QZX right is: QZX really doesn’t matter. Projects that are genuinely worth doing don’t have artificial deadlines thrown on them. They only have deadlines if there are real, external deadlines imposed by the outside world, and those are usually objectively legible. They aren’t deadlines that come from managerial opinion “somewhere” but real-world events. It’s marginal, crappy pet projects that no one has faith in that have to be delivered quickly in order to stay alive. For those, it’s best to not deliver them and save energy for things that matter– as far as one can get away with it. Why work hard on something the business doesn’t really care about? What is proved, in doing so?

Those artificial deadlines may be necessary for the laziest half of the programming workforce. I’m not really sure. Such people, after all, are only well-suited to unimportant projects, and perhaps they need prodding in order to get anything done. (I’d argue that it’s better not to hire them in the first place, but managers’ CVs and high acquisition prices demand headcount, so it looks like we’re stuck with them.) You’d be able to get a good programmer to work to such deadlines at around $400 per hour– but because the project’s not important, management will (rightly) never pay that amount for it. But the good salaried programmers (who are bargains, by the way, if properly assigned or, better yet, allowed to choose their projects) are likely to leave. No one wants to sacrifice weekends and nights on something that isn’t important enough for management to budget a legitimate amount of calendar time.

Am I so bold as to suggest that most work people do, even if it seems urgent, isn’t at all important? Yes, I am. I think a big part of it is that headcount is such an important signaling mechanism in the mainstream business world. Managers want more reports because it makes their CVs look better. “I had a team of 3 star engineers and we were critical to the business” is a subjective and unverifiable claim. “I had 35 direct reports” is objective and, therefore, more valued. When people get into upper-middle management, they start using terms like “my organization” to describe the little empires they’ve built inside their employers. Companies also like to bulk up in size, and I think the reason is signaling. No one can really tell, from the outside, whether a company has hired good people. Hiring lots of people is an objective and aggressive bet on the future, however. The end result is that there are a lot of people hired without real work to do, put on the bench and given projects that are mostly evaluative: make-work that isn’t especially useful, but allows management to see how the workers handle artificial pressure. Savvy programmers hate being assigned unimportant/evaluative projects, because they have all the personal downsides that important ones do (potential for embarrassment, loss of job) but no career upside. Succeeding on such a project gets one nothing more than a grade of “pass”. By contrast, genuinely important projects can carry some of the same downside penalties if they fail, obviously, but they also come with legitimate upside for the people doing them: promotions, bonuses, and improved prospects in future employment. The difference between an ‘A’ and a ‘B’ performance on an important project (as opposed to evaluative make-work) actually matters. That distinction is really important to the best people, who equate mediocrity with failure and strive for excellence alone.

All that said, while companies generate loads of unimportant work, for sociological reasons it’s usually very difficult for management to figure out which projects are waste. The people who know that have incentives to hide it. But the executives can’t let those unimportant projects take forever. They have to rein them in and impose deadlines with more scrutiny than is given important work. If an important project overruns its timeframe by 50 percent, it’s still going to deliver something massively useful, so that’s tolerated. What tends to happen is that the important projects are, at least over time, given the resources (and extensions) they need to succeed. Unimportant projects have artificial deadlines imposed to prevent waste of time. That being the case, why do them at all? Obviously, no organization intentionally sets out to generate unimportant projects. The problem, I think, is that when management loses faith in a project, resources and time budget are either reduced, or just not expanded even when necessary. That would be fine if workers had the same mobility and could also vote with their feet. The unimportant work would just dissipate. It’s political forces that hold the loser project together. The people staffed on it can’t move without risking the wrath of an angry (and, often, retaliation-prone) manager, and the manager of it isn’t likely to shut it down, because he wants to keep headcount, even if nothing is getting done. The result is a project that isn’t important enough to confer the status that would allow the people doing it to say, “It’ll be done when it’s done”. The unimportant project is in a perennial race against the business’s loss of faith in it; it truly doesn’t matter to the company as a whole whether it’s delivered or not, but there’s serious personal embarrassment if it isn’t delivered.

It’s probably obvious that I’m not anti-deadline. The real world has ‘em. They’re best avoided, as points of risk, but they can’t be removed from all kinds of work. As I get older, I’m increasingly anti-”one size fits all”. This, by the way, is why I hate “Agile” so vehemently. It’s all about estimates and deadlines, simply couched in nicer terms (“story points”, “commitments”) and using the psychological manipulation of mandatory consensus. Ultimately, the Scrum process is a well-oiled machine for generating short-term deadlines on atomized microprojects. It also allows management to ask detailed questions about the work, reinforcing the sense of low social status that “conventional wisdom” says will keep the workers on their toes and most willing to work hard, but that actually has the opposite effect: depression and disengagement. (Open-back visibility in one’s office arrangement, which likewise projects low status, is utilized to the same effect and, empirically, it does not work.) It might be great for squeezing some extra productivity out of the bottom half– how to handle them is just not my expertise– but it demotivates and drains the top half. If you’re on a Scrum team, you’re probably not doing anything important. Important work is given to trusted individuals, not to Scrum teams.

Is time estimation on programming projects difficult and error-prone? Yes. Do programming projects have more overtime risk than other endeavors? I don’t have the expertise to answer that, but my guess would be: probably. Of course, no one wants to take on deadline risk personally, which is why savvy programmers (and almost all great programmers are decent negotiators, as discussed) demand compensation and scope assurance. (Seasoned programmers only take personal deadline risk with scope clearly defined and fixed.) However, the major reason for programmers’ hatred of deadlines and estimates isn’t the complexity and difficulty-of-prediction in this field (although that’s a real issue) but the fact that artificial deadlines are an extremely strong signal of a low-status, unimportant, career-wasting project. Anyone good runs the other way. And that’s why Scrum shops can’t have nice things.

Why corporate penny-shaving backfires. (Also, how to do a layoff right.)

One of the clearest signs of corporate decline (2010s Corporate America is like 1980s Soviet Russia, in terms of its low morale and lethal overextension) is the number of “innovations” that are just mean-spirited, and seem like prudent cost-cutting but actually do minimal good (and, often, much harm) to the business.

One of these is the practice of pooling vacation and sick leave in a single bucket, “PTO”. Ideally, companies shouldn’t limit vacation or sick time at all– but my experience has shown “unlimited vacation” to correlate with a negative culture. (If I ran a company, it would institute a mandatory vacation policy: four weeks minimum, at least two of those contiguous.) Vacation guidelines need to be set for the same reason that speed limits (even if intentionally under-posted, with moderate violation in mind) need to be there; without them, speed variance would be higher on both ends. So, I’ve accepted the need for vacation “limits”, at least as soft policies; but employers who expect their people either to use a vacation day for sick leave or to come into the office while sick are just being fucking assholes.

These PTO policies are, in my view, reckless and irresponsible. They represent a gamble with employee health that I (as a person with a manageable but irritating disability) find morally repugnant. It’s bad enough to deny rest to someone just because a useless bean-counter wants to save the few hundred dollars paid out for unused vacation when someone leaves the company. But by encouraging the entire workforce to show up while sick and contagious, these policies subject the otherwise healthy to an unnecessary germ load. Companies with pooled-leave (“PTO”) policies end up with an incredibly sickly workforce. One cold just rolls right into another, and the entire month of February is a haze of snot, coughing, and bad code being committed, because half the people at any given time are hopped up on cold meds and really ought to be in bed. It’s not supposed to be this way. This will shock those who suffer in open-plan offices, but an average adult is only supposed to get 2-3 colds per year, not the 4-5 that are normal in an open-plan office (another mean-spirited tech-company “innovation”) or the 7-10 per year that is typical in pooled-leave companies.

The math shows that PTO policies are a raw deal even for the employer. In a decently-run company with an honor-system sick leave policy, an average healthy adult might have to take 5 days off due to illness per year. (I miss, despite my health problems, fewer than that.) Under PTO, people push themselves to come in, and only stay home if they’re really sick. Let’s say that they’re now getting 8 colds per year instead of the average 2. (That’s not an unreasonable assumption, for a PTO shop.) Only 2 or 3 days are called off, but there are a good 24-32 days in which the employee is functioning below 50 percent efficiency. Then there are the morale issues, the general perception that employees will form of the company as a sickly, lethargic place, and the (mostly unintentional) collective discovery of how low a level of performance will be tolerated. January’s no longer about skiing on the weekends and making big plans and enjoying the long golden hour… while working hard, because one is refreshed. It’s the new August; fucking nothing gets done, because even though everyone’s in the office, they’re all fucking sick with that one-rolls-into-another months-long cold. That’s what PTO policies bring: a polar vortex of sick.
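Here’s the back-of-envelope version of that claim, with every input being a guess from the paragraph above rather than a measurement:

    def effective_days_lost(days_absent, impaired_days, impaired_efficiency=0.5):
        """Full sick days, plus the output lost on days worked while ill."""
        return days_absent + impaired_days * (1 - impaired_efficiency)

    honor_system = effective_days_lost(days_absent=5, impaired_days=0)
    pto_shop = effective_days_lost(days_absent=2.5, impaired_days=28)  # midpoints

    print(honor_system)  # 5.0 effective days lost per employee-year
    print(pto_shop)      # 16.5 -- more than triple the honor-system loss

Even granting HR every saved vacation payout, the PTO shop loses over three times as many effective working days per employee-year, before counting morale, under these assumptions.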

Why, if they’re so awful, do companies use them? Because HR departments often justify their existence by externalizing costs elsewhere in the company, and then claiming they saved money. So-called “performance improvement plans” (PIPs) are a prime example of this. The purpose of the PIP is not to improve the employee. Saving the employee would require humiliating the manager, and very few people have the courage to break rank like that. Once the PIP is written, the employee’s reputation is ruined, making mobility or promotion impossible. The employee is stuck in a war with his manager (and, possibly, team) that he will almost certainly lose, but he can make others lose along the way. To the company, a four-month severance package is far cheaper than the risk that comes along with having a “walking dead” employee, pissing all over morale and possibly sabotaging the business, in the office for a month. So why do PIPs, which don’t even work for their designed intention (legal risk mitigation) unless designed and implemented by extremely astute legal counsel, remain common? Well, PIPs are a loss to the company, even compared to “gold-plated” severance plans. We’ve established that. But they allow the HR department to claim that it “saved money” on severance payments (a relatively small operational cost, except when top executives are involved) while the costs are externalized to the manager and team that must deal with a now-toxic (and, if already toxic before the PIP, now overtly destructive) employee. PTO policies work the same way. The office becomes lethargic, miserable, and sickly, but HR can point to the few hundred dollars saved on vacation payouts and call it a win.

On that, it’s worth noting that these pooled-leave policies aren’t actually about sick employees. People between the ages of 25 and 50 don’t get sick that often, and companies don’t care about that small loss. However, their children, and their parents, are more likely to get sick. PTO policies aren’t put in place to punish young people for getting colds. They’re there to deter people with kids, people with chronic health problems, and people with sick parents from taking the job. Like open-plan offices and the anxiety-inducing micromanagement often given the name of “Agile”, it’s back-door age and disability discrimination. The company that institutes a PTO policy doesn’t care about a stray cold; but it doesn’t want to hire someone with a special-needs child. Even if the latter is an absolute rock star, the HR department can justify itself by saying it helped the company dodge a bullet.

Let’s talk about cost cutting more generally, because I’m smarter than 99.99% of the fuckers who run companies in this world and I have something important to say.

Companies don’t fail because they spend too much money. “It ran out of money” is the proximate cause, not the ultimate one. Some fail when they cease to excel and inspire (but others continue beyond that point). Some fail, when they are small, because of bad luck. Mostly, though, they fail because of complexity: rules that don’t make sense and block useful work from being done, power relationships that turn toxic and, yes, recurring commitments and expenses that can’t be afforded (and must be cut). Cutting complexity rather than cost should be the end goal, however. I like to live with few possessions not because I can’t afford to spend the money (I can) but because I don’t want to deal with the complexity that they will inject into my life. It’s the same with business. Uncontrolled complexity will cause uncontrolled costs and ultimately bring about a company’s demise. What does this mean about cutting costs, which MBAs love to do? Sometimes it’s great to cut costs. Who doesn’t like cutting “waste”? The problem there is that there actually isn’t much obvious waste to be cut, so after that, one has to focus and decide on which elements of complexity are unneeded, with the understanding that, yes, some people will be hurt and upset. Do we need to compete in 25 businesses, when we’re only viable in two? This will also cut costs (and, sadly, often jobs).

The problem, see, is that most of the corporate penny-shaving increases complexity. A few dollars are saved, but at the cost of irritation and lethargy and confusion. People waste time working around new rules intended to save trivial amounts of money. The worst is when a company cuts staff but refuses to reduce its internal complexity. This requires a smaller team to do more work– often, unfamiliar work that they’re not especially good at or keen on doing; people were well-matched to tasks before the shuffle, but that balance has gone away. The career incoherencies and personality conflicts that emerge are… one form of complexity.

The problem is that most corporate executives are “seagull bosses” (swoop, poop, and fly away) who see their companies and jobs in a simple way: cut costs. (Increasing revenue is also a strategy, but that’s really hard in comparison.) A year later, the company is still failing not because it failed to cut enough costs or people, but because it never did anything about the junk complexity that was destroying it in the first place.

Let’s talk about layoffs. The growth of complexity is often exponential, and firms inevitably reach a point where they are too complex (a symptom of which is that operations become too expensive) to survive. The result is that they need to lay people off. Now, layoffs suck. They really fucking do. But there’s a right way and a wrong way to execute one. To do a layoff right, the company needs to cut complexity and cut people. (Otherwise, it will have more complexity per capita, the best people will get fed up and leave, and the death spiral begins.) It also needs to cut the right complexity: all the stuff that isn’t useful.

Ideally, the cutting of people and cutting of complexity would be tied together. Unnecessary business units being cut usually means that people staffed on them are the ones let go. The problem is that that’s not very fair, because it means that good people, who just happened to be in the wrong place, will lose their jobs. (I’d argue that one should solve this by offering generous severance, but we already know why that isn’t a popular option, though it should be.) The result is that when people see their business area coming into question, they get political. Of course this software company needs a basket-weaving division! In-fighting begins. Tempers flare. From the top, the water gets very muddy and it’s impossible to see what the company really looks like, because everyone’s feeding biased information to the executives. (I’m assuming that the executive who must implement the cuts is acting in good faith, which is not always true.) What this means is that the crucial decision– what business complexity are we going to do without?– can’t be subject to a discussion. Debate won’t work. It will just get word out that job cuts are coming, and political behavior will result. The horrible, iron fact is that this calls for temporary autocracy. The leader must make that call in one fell swoop. No second guessing, no looking back. This is the change we need to make in order to survive. Good people will be let go, and it really sucks. However, seeing as it’s impossible to execute a large-scale layoff without getting rid of some good people, I think the adult thing to do is write generous severance packages.

Cutting complexity is hard. It requires a lot of thought. Given that the information must be gathered by the chief executive without tipping anyone off, and that complex organisms are (by definition) hard to factor, it’s really hard to get the cuts right. Since the decision must be made on imperfect information, it’s a given that it usually won’t be the optimal cut. It just has to be good enough (that is, removing enough complexity with minimal harm to revenue or operations) that the company is in better health.

Cutting people, on the other hand, is much easier. You just tell them that they don’t have jobs anymore. Some don’t deserve it, some cry, some sue, and some blog about it but, on the whole, it’s not actually the hard part of the job. This provides, as an appealing but destructive option, the lazy layoff. In a lazy layoff, the business cuts people but doesn’t cut complexity. It just expects more work from everyone. All departments lose a few people! All “survivors” now have to do the work of their fallen brethren! The too-much-complexity problem, the issue that got us to the layoff in the first place… will figure itself out. (It never does.)

Stack ranking is a magical, horrible solution to the problem. What if one could do a lazy layoff but always cull the “worst” people? After all, some people are of negative value, especially considering the complexity load (in personality conflicts, shoddy work) they induce. The miracle of stack ranking is that it turns a layoff– otherwise, a hard decision guaranteed to put some good people out of work– into an SQL query. SELECT name FROM Employee WHERE perf <= 3.2. Since the soothsaying of stack ranking has already declared the people let go to be bottom-X-percent performers, there’s no remorse in culling them. “They were dead weight.” Over time, stack ranking evolves into a rolling lazy layoff that recurs on a fixed schedule (“rank-and-yank”).
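
To make the mechanics concrete, here is a minimal sketch in Python of what that “decision” reduces to; the names, the perf field, and the cutoff fraction are all hypothetical:

    # A layoff reduced to a sort and a slice. "perf" is whatever number the
    # stack-ranking ritual produced; no human judgment occurs below this line.
    def rank_and_yank(employees, cull_fraction=0.10):
        ranked = sorted(employees, key=lambda e: e["perf"])
        cutoff = int(len(employees) * cull_fraction)
        return ranked[:cutoff]  # "dead weight", by decree of the query

    staff = [{"name": "A", "perf": 3.1}, {"name": "B", "perf": 3.6},
             {"name": "C", "perf": 2.9}, {"name": "D", "perf": 4.2}]
    print([e["name"] for e in rank_and_yank(staff, 0.25)])  # ['C']

The point isn’t the code; it’s that a decision which should be agonizing has been made mechanical, so no one has to feel responsible for it.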

It’s also dishonest. There is an ungodly number of large technology companies (1,000+ employees) that claim to have “never had a layoff”. That just isn’t fucking true. Even if the CEO were Jesus Christ himself, he’d have to lay people off, because that’s just how business works. Tech-company sleazes simply refuse to use the word “layoff”, for fear of losing their “always expanding, always looking for the best talent!” image. So they call it a “low performer initiative” (stack ranking, PIPs, eventual firings). What a “low-performer initiative” (or stack ranking, which is a chronic LPI) inevitably devolves into is a witch hunt that turns the organization into pure House of Cards politics. Yes, most companies have about 10 percent who are incompetent or toxic or terminally mediocre and should be sent out the door. Figuring out which 10 percent those people are is not easy. People who are truly toxic generally have several years’ worth of experience drawing a salary without doing anything, and that’s a skill that improves with time. They’re really good at sucking (and not getting caught). They’re adept political players. They’ve had to be; the alternative would have been to grow a work ethic. Most of what we as humans define as social acceptability is our ethical immune system, which can catch and punish the small-fry offenders but can’t do a thing about the cancer cells (psychopaths, parasites) that have evolved to the point of being able to evade or even redirect that rejection impulse. The question of how to get that toxic 10 percent out is an unsolved one, and I don’t have space to tackle it now, but the answer is definitely not stack ranking, which will always clobber several unlucky good-faith employees for every genuine problem employee it roots out.

Moreover, stack ranking has negative permanent effects. Even when not tied to a hard firing percentage, its major business purpose is still to identify the bottom X percent, should a lazy layoff be needed. It’s a reasonable bet that unless things really go to shit, X will be 5 or 10 or maybe 20– but not 50. So stack ranking is really about the bottom. The difference between the 25th percentile and 95th percentile, in stack ranking, really shouldn’t matter. Don’t get me wrong: a 95th-percentile worker is often highly valuable and should be rewarded. I just don’t have any faith in the ability of stack ranking to detect her, just as I know some incredibly smart people who got mediocre SAT scores. Stack ranking is all about putting people at the bottom, not the top. (Top performers don’t need it and don’t get anything from it.)

The danger of garbage data (and, #YesAllData generated by stack ranking is garbage) is that people tend to use it as if it were truth. The 25th-percentile employee isn’t bad enough to get fired… but no one will take him for a transfer, because the “objective” record says he’s a slacker. The result of this– in conjunction with closed allocation, which is already a bad starting point– is permanent internal immobility. People with mediocre reviews can’t transfer because the manager of the target team would prefer a new hire (with no political strings attached) over a sub-50th-percentile internal. People with great reviews don’t transfer for fear of upsetting the gravy train of bonuses, promotions, and managerial favoritism. Team assignments become permanent, and people divide into warring tribes instead of collaborating. This total immobility also makes it impossible to do a layoff the right way (cutting complexity) because people develop extreme attachments to projects and policies that, if they were mobile and therefore disinterested, they’d realize ought to be cut. It becomes politically intractable to do the right thing, or even for the CEO to figure out what the right thing is. I’d argue, in fact, that performance reviews shouldn’t be part of a transfer packet at all. The added use of questionable, politically-laced “information” is just not worth the toxicity of putting that into policy.

A company with a warring-departments dynamic might seem streamlined, efficient, and (most importantly) less complex. It doesn’t have the promiscuous social graph you might expect to see in an open-allocation company. People know where they are, who they report to, and who their friends and enemies are. The problem with this insight is that there’s hot complexity and cold complexity. Cold complexity is passive and occasionally annoying, like a law from 1890 that doesn’t make sense and is effectively never enforced. When people collaborate “too much” and the social graph of the company seems to have “too many” edges, there’s some cold complexity there. It’s generally not harmful. Open allocation tends to generate some cold complexity; rather than metastasize into an existential threat to the company, it fades out of existence over time. Hot complexity, which usually occurs in an adversarial context, is the kind that generates more complexity. Its high temperature means there will be more entropy in the system. Example: a conflict (heat) emerges. That alone makes the social graph more complex, because there are more edges of negativity. Systems and rules are put in place to try to resolve it, but those tend to have two effects. First, they bring more people (those who had no role in the initial conflict, but are affected by the rules) into the fights. Second, the conflicting needs or desires of the adversarial parties are rarely addressed, so both sides just game the new system, which creates more complexity (and more rules). Negativity and internal competition create the hot complexity that can ruin a company more quickly than an executive (even one acting with the best intentions) can address it.

Finally, one thing worth noting is the Welch Effect (named for Jack Welch, the inventor of stack ranking). It’s one of my favorite topics because it has actually affected me. The Welch Effect refers to the fact that when a broad-based layoff occurs, the people most likely to be let go aren’t the worst (or best) performers, but the newest members of macroscopically underperforming teams. Layoffs (and stack ranking) generally propagate down the hierarchy. Upper management disburses bonuses, raises, and layoff quotas based on the macroscopic performance of the departments under it, and at each level, the node operators (managers) slice up the numbers based on how well they think each suborganization did (plus or minus various political modifiers). At the middle-management layer, one level separated from the non-managerial “leaves”, it’s the worst-performing teams that have to vote the most people off the island, and it tends to be those most recently hired who get the axe. For that middle manager, this isn’t especially unfair or wrong; there’s often no better option than to cut the least-embedded, least-invested junior hire.
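
A toy simulation shows why this selection is blind to merit. Everything here is hypothetical– ten teams of ten, a 10-percent cut, team and individual performance drawn independently at random– but it captures the mechanism: quotas follow team numbers, and managers cut their newest hires.

    import random

    random.seed(0)

    # Ten teams of ten. Team performance and individual performance are drawn
    # independently: the (realistic) assumption that a team's macroscopic
    # numbers say little about any one member.
    teams = []
    for t in range(10):
        team_perf = random.gauss(0, 1)
        members = [{"tenure_years": random.uniform(0, 10),
                    "individual_perf": random.gauss(0, 1)}
                   for _ in range(10)]
        teams.append((team_perf, members))

    # Layoff quotas flow down by team performance: worst teams cut the most.
    teams.sort(key=lambda pair: pair[0])
    quotas = [3, 3, 2, 1, 1, 0, 0, 0, 0, 0]  # a 10% cut overall

    laid_off = []
    for (team_perf, members), quota in zip(teams, quotas):
        members.sort(key=lambda m: m["tenure_years"])  # newest hires first
        laid_off.extend(members[:quota])

    # Individual performance of those cut is indistinguishable from chance,
    # but their tenure is sharply below average.
    n = len(laid_off)
    print(sum(m["individual_perf"] for m in laid_off) / n)  # ~0: merit-blind
    print(sum(m["tenure_years"] for m in laid_off) / n)     # well under 5 years

Run it with different seeds and the first number wanders around zero while the second stays low: the cut tracks tenure and team membership, never individual merit.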

The end result of the Welch Effect, however, is that the people let go are often those who had the least to do with their team’s underperformance. (It may be a weak team, it may be a good team with a bad manager, or it may be an unlucky team.) They weren’t even there for very long! The Welch Effect doesn’t cause the firm to lay off good people, but it doesn’t help it lay off bad people either; for the company as a whole, it has roughly the same effect as a purely seniority-based layoff. Random new joiners are the ones shown out the door. It’s bad to lose them, but it rarely costs the company critical personnel. The effect on the team itself is more visibly negative: teams that lose a lot of people during layoffs get a public stink about them, and people lose interest in joining or even helping them– who wants to work for, or even assist, a manager who can’t protect his people?– so the underperforming team becomes even more underperforming. There are also morale issues with the Welch Effect. When people who recently joined lose their jobs (especially if they’re fired “for performance”, without severance) it makes the company seem unfair, random, and capricious. The ones let go were the ones who never had a chance to prove themselves. In a one-off layoff, this isn’t so destructive; the Welch-Effected usually move on to better jobs anyway. However, when a company lays off in many small cuts, or disguises a layoff as a “low-performer initiative”, the Welch Effect firings demolish belief in meritocracy.

That, right there, explains why I get so much flak over how I left Google. Technically, I wasn’t fired. But I had a disliked, underdelivering manager who couldn’t get calibration points for his people (a macroscopic issue that I had nothing to do with) and I was the newest on the team, so I got a bad score (despite being promised a reasonable one– a respectable 3.4, if it matters– by that manager). Classic Welch Effect. I left. After I was gone, I “leaked” the existence of stack ranking within Google. I wasn’t the first to mention that it existed there, but I publicized it enough to become the (unintentional) slayer of Google Exceptionalism and, to a number of people I’ve never met and to whom I’ve never done any wrong, Public Enemy #1. I was a prominent (and, after things went bad, fairly obnoxious) Welch Effectee, and my willingness to share my experience changed Google’s image forever. It’s not a disliked company (nor should it be), but its exceptionalism is gone. Should I have done all that? Probably not. Is Google a horrible company? No. It’s above average for the software industry (which is not an endorsement, but not damnation either). Also, my experiences are three years old at this point, so don’t take them too seriously. As of November 2011, Google had stack ranking and closed allocation. It may have abolished those practices since and, if it has, then I’d strongly recommend it as a place to work. It has some brilliant people, and I respect them immensely.

In an ideal world, there would be no layoffs or contractions. In the real world, layoffs have to happen, and it’s best to do them honestly (i.e. don’t shit on departing employees’ reputations by calling it a “low performer initiative”). As with more minor forms of cost-cutting (e.g. new policies encouraging frugality), a layoff can only succeed if complexity (that being the cause of the organization’s underperformance) is reduced as well. That is the only kind of corporate change that can reverse underperformance: complexity reduction.

If complexity reduction is the only way out, then why is it so rare? Why do companies so willingly create personnel and regulatory complexity just to shave pennies off their expenses? I’m going to draw from my (very novice) Buddhist understanding to answer this one. When the clutter is cleared away… what is left? Phrases used to define it (“sky-like nature of the mind”) only explain it well to people who’ve experienced it. Just trust me that there is a state of consciousness that can be attained when gross thoughts are swept away, leaving something more pure and primal. Its clarity can be terrifying, especially the first time it is experienced. I really exist. I’m not just a cloud of emotions and thoughts and meat. (I won’t get into death and reincarnation and nirvana here. That goes farther than I need, for now. Qualia, or existence itself, as opposed to my body hosting some sort of philosophical zombie, is both miraculous and the only miracle I actually believe in.) Clarity. Essence. Those are the things you risk encountering with simplicity. That’s a good thing, but it’s scary. There is a weird, paradoxical thing called “relaxation-induced anxiety” that can pop up here. I’ve fought it (and had some nasty motherfuckers of panic attacks) and won, and I’m better for my battles, but none of this is easy.

So much of what keeps people mired in their obsessions and addictions and petty contests is an aversion to confronting what they really are, a journey that might harrow them into excellence. I am actually going to age and die. Death can happen at any time, and almost certainly it will feel “too soon”. I have to do something, now, that really fucking matters. This minute counts, because I may not get another in this life. People are actually addicted to their petty anxieties that distract them from the deeper but simpler questions. If you remove all the clutter on the worktable, you have to actually look at the table itself, and you have to confront the ambitions that impelled you to buy it, the projects you imagined yourself using it for (but that you never got around to). This, for many people, is really fucking hard. It’s emotionally difficult to look at the table and confront what one didn’t achieve, and it’s so much easier to just leave the clutter around (and to blame it).

Successful simplicity leads to, “What now?” The workbench is clear; what are we going to do with it? For an organization, such simplicity risks forcing it to contend with the matter of its purpose, and the question of whether it is excelling (and, relatedly, whether it should). That’s a hard thing to do for one person. It’s astronomically more difficult for a group of people with opposing interests, among whom excellence is sure to be a dirty word (there are always powerful people who prefer rent-seeking complacency). It’s not surprising, then, that most corporate executives say “fuck it” on the excellence question and decide, instead, that squeezing employees with mindless cost-cutting policies– pooled sick leave and vacation, “employee contributions” on health plans, and other hot messes that just ruin everything– suffices to earn their keep. It feels like something is getting done, though. Useless complexity is, in that way, existentially anxiolytic and addictive. That’s why it’s so hard to kill. But, if allowed to live, it will kill. It can enervate a person into decoherence and inaction, and it will reduce a company to a pile of legacy complexity generated by self-serving agents (mostly, executives). Then it falls under the MacLeod-Gervais-Rao-Church theory of the nihilistic corporation: the political whirlpool that remains once an organization has lost its purpose for existing.

At 4528 words, I’ve said enough.

Silicon Valley and the Rise of the Disneypreneur

Someone once described the Las Vegas gambling complex as “Disneyland for adults”, and the metaphor makes a fair amount of sense. The place sells a fantasy– expensive shows, garish hotels (often cheap or free if “comped”) and general luxury– and this suspension of reality enables people to take financial risks they’d usually avoid, giving the casino an edge. Comparing Silicon Valley to Vegas also makes a lot of sense. Even more than a Wall Street trading floor, it’s casino capitalism. Shall we search for some kind of transitivity? Yes, indeed. Is it possible that Silicon Valley is a sort of “Disneyland”? I think so.

It starts with Stanford and Palo Alto. The roads are lined with palm trees that do not grow there naturally and cost tens of thousands of dollars apiece to plant. The whole landscape is designed and fake. Stanford’s nickname, “the Farm”, is a clumsy attempt to lift terminology from Southern aristocrats. At Harvard or Princeton, there’s a certain sense of noblesse oblige that students are expected to carry with them; a number of Ivy Leaguers eschew investment banking in favor of a program like Teach for America. Not so much at Stanford, which has never tempered itself with Edwardian gravity (by, for example, encouraging students to read literature from civilizations that have since died out) in the way that East Coast and Midwestern colleges have. The rallying cry is, “Go raise VC.” From there, the anointed enter a net of pipelines: Stanford undergrad to startup, startup to EIR gig, EIR to founder, founder to venture capitalist. The miraculous thing about it is that progress on this “entrepreneurial” path is assured. One never needs to take any risk to travel it! Start in the right place, don’t offend the bosses-I-mean-investors, and there are three options: succeed, fail up, or fail diagonal-up. Since they live in an artificial world in which real loss isn’t possible for them, but one that also bars them from true innovation, they perform a sort of Disney-fied entrepreneurship. They’re the Disneypreneurs.

Just as private-sector bureaucrats (corporate executives) who love to call themselves “job creators” (and who only seem to agree on anything when they’re doing the opposite) are anything but entrepreneurs, I tend to think of these kids as not real entrepreneurs. Well, because I’m right, I should say it more forcefully. They aren’t entrepreneurs. They take no risk. They don’t even have to leave their suburban, no-winter environment. They don’t put up capital. They don’t risk sullying their reputations by investing their time in industries the future might despise; instead, they focus on boring consumer-web plays. They don’t go to foreign countries where they might not have all the creature comforts of the California suburbs. They don’t do the nuts-and-bolts operational grunt work that real entrepreneurs have to face (e.g. payroll, taxes) when they start new businesses, because their backers arrange it all for them. Even failure won’t disrupt their careers. If they fail, instead of making their $50-million paydays in this bubble cycle, they’ll have to settle for a piddling $750,000 personal take in an “acqui-hire”, a year in an upper-middle-management position, and an EIR gig. VC-backed “founders” take no real risk, but get rewarded immensely when things go their way. Heads, they win. Tails, they don’t lose.

Any time someone sets up a “heads I win, tails I-don’t-lose” arrangement, it’s a good bet that someone else is losing. Who? To some extent, it’s the passive capitalists whose funds are disbursed by VCs. Between careerist agents (VC partners) seeking social connection and status, and fresh-faced Disneypreneurs looking to justify their otherwise unreasonable career progress (unreasonable given their young age, questionable experience, and mediocrity of talent), what is left for the passive capitalist is a return inferior to that offered by a vanilla index fund. However, there’s another set of losers for whom I often prefer to speak, their plight being less well understood: the engineers. Venture capitalists risk other people’s money. Founders risk losing access to the VCs if they do something really unethical. Engineers risk their careers. They’ve got more skin in the game, and yet their rewards are dismal.

If it’s such a raw deal to be a lowly engineer in a VC-funded startup (and it is) then why do so many people willingly take that offer? They might overestimate their upside potential, because they don’t know what questions to ask (such as, “If my 0.02% is really guaranteed to be worth $1 million in two years, then why do venture capitalists value the whole business at only $40 million?”). They might underestimate the passage of time and the need to establish a career before ageism starts hitting them. Most 22-year-olds don’t know what a huge loss it is not to get out of entry-level drudgery by 30. However, I think a big part of why it is so easy to swindle so many highly talented young people is the Disneyfication. The “cool” technology company, the Hooli, provides a halfway house for people just out of college. At Hooli, no one will make you show up for work at 9:00, or tell you not to wear sexist T-shirts, or expect you to interact decently with people too unlike you. You don’t even have to leave the suburbs of California. You won’t have to give up your car for Manhattan, your dryer for Budapest, your need to wear sandals in December for Chicago, or your drug habit for Singapore. It’s comfortable. There is no obvious social risk. Even the mean-spirited, psychotic policy of “stack ranking” is dressed-up as a successor to academic grading. (Differences glossed over are (a) that there’s no semblance of “meritocracy” in stack ranking; it’s pure politics, and a professor who graded as unfairly as the median corporate manager would be fired; (b) academic grading is mostly for the student’s benefit while stack-ranking scores are invariably to the worker’s detriment; and (c) universities genuinely try to support failing students while corporations use dishonest paperwork designed to limit lawsuit risk.) The comfort offered to the engineer by the Disney-fied tech world, which is actually more ruthlessly corporate (and far more undignified) than the worst of Wall Street, is completely superficial.
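
Since I brought up that first question, it’s worth actually doing the arithmetic, because almost no one does. A back-of-envelope check in Python, using the hypothetical figures above (and charitably ignoring dilution, taxes, and vesting cliffs):

    # If 0.02% of the company is "guaranteed" to be worth $1 million,
    # the whole company must be headed for $5 billion...
    equity_fraction = 0.0002           # the engineer's 0.02%
    promised_payout = 1_000_000        # the recruiter's "guaranteed" outcome
    implied_value = promised_payout / equity_fraction
    print(f"implied company value: ${implied_value:,.0f}")  # $5,000,000,000

    # ...yet the people pricing the company with their own money say $40 million.
    vc_valuation = 40_000_000
    print(f"required growth: {implied_value / vc_valuation:.0f}x")  # 125x

Either the recruiter’s “guarantee” assumes a certain 125x outcome that no one who actually priced the round believes, or the 0.02% is worth about $8,000 at the valuation the investors themselves set.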

That doesn’t, of course, mean that it’s not real. Occasionally I’m asked whether I believe in God. Well, God exists. Supernatural beings may not, and the fictional characters featured in religious texts are almost certainly (if taken literally) pure nonsense, but the idea of God has had a huge effect on the world. It cannot be ignored. It’s real. The same is true of Silicon Valley’s style of “entrepreneurship”. Silicon Valley breathes and grows because, every year, an upper class of founders and proto-founders is given a safe, painless path to “entrepreneurial glory” and a much larger working class of delusional engineers is convinced to follow them. It really looks like entrepreneurship.

I should say one thing off the bat: Disneypreneurs are not the same thing as wantrapreneurs. You see more of the second type, especially on the East Coast, and it’s easy to conflate the two, but the socioeconomic distance is vast. The wantrapreneur can talk a big game, but lacks the drive, vision, and focus to ever amount to anything. He’s the sort of person who’s too arrogant to work for someone else, but can’t come up with a convincing reason why anyone should work for him, and doesn’t have the socioeconomic advantages that’d enable him to get away with bullshit. Except in the most egregious bubble times, he wouldn’t successfully raise venture capital, not because VCs are discerning but because the wantrapreneur usually lacks sufficient vision to learn how to do even that. Quite sadly, wantrapreneurs sometimes do find acolytes among the desperate and the clueless. They “network” a lot, sometimes find friends or relatives clueless enough to bankroll them, and produce little. Almost everyone has met at least one. There’s no barrier to entry in becoming a wantrapreneur.

Like wantrapreneurs, Disneypreneurs lack drive, talent, and willingness to sacrifice. The difference is that they still win. All the fucking time. Even when they lose, they win. Evan Spiegel (Snapchat) and Lucas Duplan (Clinkle) are just two examples, but Sean Parker is probably the most impressive. If you peek behind the curtain, he’s never actually succeeded at anything, but he’s a billionaire. They float from one manufactured success to another and build impressive reputations despite adding very little value to anything. They’re given the resources to take big risks and, when they fail, their backers make sure they fail up. Being dropped into a $250,000/year VP role at a more successful portfolio company? That’s the worst-case outcome. Losers get executive positions and EIR gigs, break-evens get acqui-hired into upper-six-figure roles, and winners get made.

One might ask: how does one become a Disneypreneur? I think the answer is clear: if you’re asking, you probably can’t. If you’re under 18, your best bet is to get into Stanford and hope your parents have the cardiac fortitude to see the tuition bill and not keel over. If you’re older, you might try the (admirably straightforward, and more open to middle-class outsiders than traditional VC) Y Combinator. However, I think it’s obvious that most people are never going to have the option of Disneypreneurship, and there’s a clear reason for that. Disneypreneurship exists to launder money (and connections, and prestige, and power; but those are highly correlated and usually mutually transferable) for the upper classes, frank parasitism from inherited wealth still being socially unacceptable. The children of the elites must seem to work under the same rules as everyone else. The undeserving, mean-reverting progeny of the elite must be made to appear as if they’ve earned the status and wealth their parents will bequeath upon them.

Elite schools once served this end. They were a prestige (multiple meanings intended) that appeared, from the outside, to be a meritocracy. However, this capacity was demolished by an often-disparaged instrument, the S.A.T. Sometimes I’ll hear a knee-jerk leftist complain about the exam’s role in educational inequality, citing (correctly) the ability of professional tutoring (“test prep”, a socially useless service) to improve scores. In reality, the S.A.T. isn’t creating or increasing socioeconomic injustices in access to education; it merely measures some of them. The S.A.T. was invented with liberal intentions and, in fact, succeeded: after its adoption in the 1920s, “too many” Jews were admitted to Ivy League colleges, and much of the “extracurricular” nonsense involved in U.S. college admissions was invented in reaction to that. Over the following ninety years, there’s been a not-quite-monotonic movement toward meritocracy in college admissions. If I had to guess, college admissions are a lot more meritocratic than they were 90 years ago (and, if I’m wrong, it’s not because the admissions process is classist but because it’s so noise-ridden, thanks to technology enabling a student to apply to 15-30 colleges; 15 years ago, five applications was considered high). The ability-to-pay factor keeps this meritocracy from being fully realized, but ties are, observably, broken on merit, and there is enough meritocracy in the process to threaten the existing elite. The age in which a shared country-club membership between parent and admissions officer ensured a favorable decision is over. Now that assurance requires a building, which even the elite cannot always afford.

These changes, and the internationalization of the college process, and those pesky leftists who insist on meritocracy and diversity, have left the ruling classes unwilling to trust elite colleges to launder their money. They’ve shifted their focus to the first few years after college: first jobs. However, most of these well-connected parasites don’t know how to work and certainly can’t bear the thought of their children suffering the indignity of actually having to earn anything, so they have to bump their progeny automatically to unaccountable upper-management ranks. The problem is that very few people are going to respect a talentless 22-year-old who pulls family connections to get what he wants, and who gets his own company out of some family-level favor. Only a California software engineer would be clueless enough to follow someone like that– if that person calls himself “a founder”.