Some ways to be bad at hiring

There are some assumptions in corporate hiring processes that sound like “tough love conservatism” but are just wrong. One of them is that false positives (bad hires) are worse than false negatives (passing on good people). While it appears true on paper, this assumption is often used as an excuse to reject good applicants for the slightest of reasons. Currently unemployed? Rejected. Lost social polish at the end of a 7-hour interview session? “Not a culture fit.” Capable of doing the work, but not yet operating two levels beyond it? B player, so pass.

People are mostly context

Over-hiring (that is, hiring people who are more skilled than the role requires) is actually more dangerous than under-hiring. Overqualified, bored people can easily become toxic to an organization and, when they go into attack mode, they have sufficient credibility to make it really hurt. Perhaps unexpectedly, it’s often the mechanisms companies use to catch and remove low performers that turn the over-hires into aggressive morale problems, rather than harmless ghosts who’ll find their way out without noise.

Let’s get into the first misconception about hiring. We’ve all heard that phrase, “A players hire A players, but B players hire C players.” The idea is sound. If you hire mediocrities (B players), you’ll find that they hire truly awful people. The solution? Don’t hire mediocrities. The problem? People are mostly context. (For evidence behind “people are mostly context”, see: the Stanford Prison Experiment.) An organization will always have B players. Some people will gain influence within the group and others will be ignored. The output from people who fall into low status will be mediocre. If you have 100 people who would be “A players” in most other contexts, you’re still going to have a few people who, for a variety of reasons, check out and start coasting.

If people are mostly context, it stands to reason that some work environments will turn B players into A players, and vice versa. Which way an environment pushes people is the more important question. A company that turns B players into A players will kick ass, and a company that turns A players into B players (as most do) will slowly decline. Companies think they can outguess the labor market and “hire only A players” while offering B-level salaries and work conditions. For the most part, they’re wrong. Believing oneself to be smarter than a market, on a hunch, is a dangerous assumption. Companies can do very little (other than offer more) to improve the quality of the people who come in the door. What they can control is how people evolve once they’re inside. Some environments are empowering and improve the people they take in. Others (most) are stifling and artificially limit people.

While they won’t admit it, because it’s not socially acceptable, most corporate executives want high levels of internal competition, whether for promotions or the best projects, and create false scarcities to make it happen. The problem is that this is exactly the sort of thing that creates the checked-out, “B player” attitude among former or potential high-performers. They’re smart enough to see the false scarcity that is restricting their autonomy and career advancement, so when they lose a political fight (there were two A players, so one had to take a dive) they aren’t likely to say, “Man, I really need to step up my game around here.” Either they leave, or they use it as “a recovery job”, which means that they’re essentially coasting.

Is it true that A players hire A players (or, as some say, A+ players) while B players hire C players? It is, but it’s not about innate ability. Rather, it’s about political security. People who expect to be in good standing at the same company 24 months later will tend to hire people better than themselves, because they view their association with that team as a semi-permanent residence. Those who are insecure will tend to hire people they can control or, worse, “insurance incompetents” who come in handy in the event of “low performer” witch hunts or stack ranking. Increasingly, companies strive to keep everyone a bit insecure, under the delusion that aggressive performance reviews and micromanagement bring people to their best. It doesn’t work that way. Insecurity begets not only mediocrity, but also those sorts of blatantly political operations.

This gets worse, because “don’t hire B players” becomes code for “don’t hire people who are different, have unusual career histories, or might have anything wrong with them”. Forty years old and only a Manager II? Bzzzt! B player, no hire. “A player” becomes code for “people who think, act, and look exactly like me.” This sort of homogeneity imposes artificial limitations on a company’s ability to grow, and tends to foster arrogance rather than improvement.

False negatives are false positives

One irksome claim about hiring that appears true on paper is that “false negatives are better than false positives”. It’s actually blatantly false, and I’ll get to that, but let me first make an observation. When a company passes on a good person (a false negative), what happens? It has to keep hiring, taking on the risk of a false positive. If it has a fixed number of positions that must be filled, then the cost of a false negative can be measured in false positives. Let’s avoid the complexity of the truth (people are mostly context) and pretend that there are specific numbers of “good” and “bad” people. Say there are 100 good candidates and 900 bad ones out there. Assume the existing hiring process has a 50% false-negative rate and a 4.4% false-positive rate, and that job applicants are drawn uniformly from that population. (That’s an incredibly wrong assumption. In reality, the perception among employers that most job seekers are idiots comes from the disproportionate over-representation of low-skilled perennial candidates, and from resume spam by unscrupulous recruiters.) Then, out of every 100 applicants (10 good, 90 bad), about 5 good people and 4 bad people will be offered jobs. Any single offer extended is therefore a bad hire with probability of about 4 in 9, so turning away a good candidate, and having to extend one more offer to fill the seat, costs the company about 0.44 bad hires. A false negative is, in essence, 0.44 false positives! It gets worse. If we assume that companies hire not to fill seats but to meet specific needs, and use the approximation that only good hires count toward those needs, then replacing a rejected good candidate takes 9/5 offers on average, and the cost of a false negative rises to 0.8 false positives. This is a made-up example; in practice, that number can exceed 1.0.
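
For those who want to check that arithmetic, here is a minimal sketch in Python. Every number in it is one of the made-up figures from the paragraph above, not empirical data, and the variable names are mine, chosen only for illustration.

# Illustrative figures from the example above (made-up, not empirical).
good, bad = 100, 900        # candidate pool
fn_rate = 0.50              # false-negative rate: good candidates wrongly rejected
fp_rate = 0.044             # false-positive rate: bad candidates wrongly hired

# Per 100 applicants drawn uniformly from the pool:
applicants = 100
good_applicants = applicants * good / (good + bad)   # 10
bad_applicants = applicants * bad / (good + bad)     # 90
good_offers = good_applicants * (1 - fn_rate)        # 5.0
bad_offers = bad_applicants * fp_rate                # ~3.96

# Any single offer extended is a bad hire with this probability:
p_bad_offer = bad_offers / (good_offers + bad_offers)    # ~0.44

# Cost of one false negative, measured in expected false positives:
cost_to_fill_a_seat = p_bad_offer                 # ~0.44 (one replacement offer)
cost_to_meet_a_need = bad_offers / good_offers    # ~0.79 (offers until a good hire)

print(round(cost_to_fill_a_seat, 2), round(cost_to_meet_a_need, 2))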

What about a much tougher hiring process? It’s easy to increase the false negative rate, but hard to improve on that 4.4% false positive rate. Some people have the social and political skills to acquire credentials, make allies, succeed at interviews, and get promoted despite being utterly useless. Increasing scrutiny doesn’t keep them out. In fact, it can give them an advantage. An example of this is seen in reference checks. The “classic 3” (three references, basic verification) can flush out candidates who lie about their backgrounds or who did really bad things at work, and it’s about as thorough a reference check as one can get without becoming counterproductive. Why? Let’s say that a company does 10 reference checks, some back-channel. This isn’t a basic background verification that 95% of the population will pass; it’s a competitive cavity search. What type of people win? Two types. The first are people who never pissed anyone off. That means they’re B players. There isn’t anything wrong with that. In fact, I’d argue that companies’ prejudice against “B players” leaves them prone to dangerous over-hiring. But that intrusive a reference check is hardly worth it if you just want to hire B players. The second are the wheeler-dealers who charm the powerful and frighten the powerless. Psychopaths. (False positives.) On the other hand, most normal people, if they accomplished anything, will have made enemies and will fail an intrusive, 10-ply reference check. (False negatives.) The reason the “classic 3” is limited to three checks is that any more scrutiny is dangerously counterproductive. If someone passes a 10-ply back-channel reference check, the odds are thousands to one that he’s had people directly intimidated (either by legal professionals, or by illegal professionals) in order to clean his story.

How expensive are false positives?

The claim that false positives are worse than false negatives is, in fact, dead wrong. This is backed by the claim that it’s somehow hard to fire false positives. I should mention that there are two classes of false positives. The first are those who are unethical, but acquire the trust of the group and don’t seem like bad hires until, often years later, they do something harmful to the organization. Those people are truly toxic, but they’re not caught early (that’s why they can do so much damage) and, given their ability to evade detection in a year of employment, no amount of pre-hire scrutiny is going to catch them. The best way to handle them is not to attract them, but that’s a topic for another essay. The second are the obvious bad hires. They’re brought on and, within three months, it’s clear that they’re not up to the demands of the job. Some can be trained or re-assigned, others can’t.

The “problem”, as often stated, is that it’s “too hard” to fire such people. They’re rarely malicious or blatantly incompetent. They’re just mediocre, and a bit harmful to morale. Cold-firing one might result in a lawsuit or public disparagement by a well-liked (if below-average) person. Putting one on a PIP risks turning a harmless low performer into a morale killer. The PIP’s purpose is not to improve performance but to document low performance, and people on PIPs either bring their performance up just high enough to “beat” the PIP, only to slack off again once attention is off of them, or (more often) spend the time they have left sabotaging their manager. Neither is a desired outcome. So companies end up letting such people underperform for several years. This is the source of the “false positives are more expensive than false negatives” claim, but it’s the company’s fault.

It’s actually easy and cheap to fire someone like that. Write a severance covering 1.5 to 2 times the expected length of a job search at that level, and include non-litigation and non-disparagement clauses. Quick, easy, and everyone’s happy. The cost of a false positive is bounded, if the company’s willing to do the right thing, cut its losses quickly, and write a fair severance rather than kicking the shit out of morale with a PIP. (Most companies aren’t. The preference for PIPs over severances is a way for HR departments to claim they “saved money” on exit payments, while externalizing the costs to the manager, who has to conduct the PIP’s kangaroo court, and to the team.) What about the cost of a false negative? That cost is the foregone value of whatever that person would have produced, and some people turn out to be extraordinarily valuable, not only in their own contributions but in the people they’ll bring on in the future. Practically speaking, the cost of a false negative is unbounded.

Explaining what is

The silly prejudices that companies acquire, toward the young or the old, toward people with too few or too many jobs, and so on… those seem counterproductive, and they are. So why do companies develop these (evidently untrue) pretenses of never hiring “B players” and keeping a hawk’s watch over “false positive” hires (at the admitted cost of numerous false negatives)? Mostly, it’s to create a narrative. It’s to expand executive arrogance into something that the lower-level players can participate in. I’d argue that organizations actually want to reject quite a few good people, not for intrinsic reasons, but to boost the morale of the grunts on the team. It makes the peons feel like they’re part of an elite squadron and, in truth, this makes it a lot easier to take advantage of them. One might also argue that vicious hiring practices (such as back-channel reference checking) go a step further, by encouraging the slightly disaffected (but still profitable, for the organization) peasants to be terrified of the job search process, in order to keep them where they are.

Erring on the side of exclusivity, and even elitism, has its purposes for the organization’s executives. They want the disaffected to feel that it’s better to be on the bottom of their current organization than to be anywhere else. The “we never hire B players” narrative helps offset the cognitive dissonance of inhabiting an organization that won’t do the slightest thing to help them, and it justifies overt cultural toxicity as “tough love”. That sort of thing can hold an organization together, for a while, when it faces a state of decline. However, is it a sensible way to grow? Does it make the organization better and, much more importantly, does it make for an organization that makes its people better? Probably not.

Street fighting, HR, and the importance of collective action.

Street fights

Street fighting is a topic on which a large number of men hold strong opinions, although very few have been in a true fight. I haven’t, and I hope never to be in one. Playground fisticuffs are one thing, but for adults, street fighting is far too dangerous to be considered anything but a terrible idea. It’s not the likely outcome, but a punch from an adult can kill. The exact mechanics by which punches to the face incapacitate aren’t fully understood, but rotation of the neck (constricting blood vessels) seems to be a culprit, and the forces involved in a proper face punch are enough, under the right conditions, to rupture blood vessels in the neck and head and end a life. A person falling and hitting the ground can do it, too.

Anything before age 16 doesn’t count. Martial arts sparring doesn’t count. Boxing doesn’t count. Fight clubs don’t count. Those are a lot safer. In a real fight, you typically don’t know your opponent. If he wins, he might kill you (intentionally or otherwise) once you are on the ground. He may have a weapon and, even if he doesn’t, he can kill you with a few kicks or punches. Losing means you’re likely to end up in the hospital (and possibly dead). Winning means you have to appear in court, possibly for murder. Either way, it’s not a good place to be. If there are bystanders, the loser faces humiliation but probably won’t die. If there are none, the loser’s fate is up to whoever wins, and unintentional (or intentional) deaths from punches and kicks happen all the time.

The moral bikeshedding around fistfighting (and the actual physical risks of it) is discussed in this analysis of the Trayvon Martin case. Is George Zimmerman a huge asshole and a probable racist? Absolutely. Should he have followed and harassed a 17-year-old? No, not at all. Is George Zimmerman a good or admirable person? Clearly not. Was Zimmerman in mortal danger during that fistfight? Yes. He was on the ground, being punched, which alone puts his life in danger. (“Ground and pound” constitutes attempted murder in most jurisdictions.) He had a lethal weapon and, if he were incapacitated, he’d also lose control of the weapon. Zimmerman’s wrongdoing was following a stranger in the first place, not firing a gun in “just a fistfight”. When the opponent is a stranger and there is no one to break up the fight, there is no such thing as “just a fistfight”.

Fistfights (and armed confrontations) aren’t like what people see in movies or on television. Some notable differences are:

  • in most, the loser is incapacitated within a few seconds. One punch can do it, and the element of surprise (that is, being the one to start the fight) confers a huge advantage.
  • fighting is exhausting and most people will be winded (and vulnerable) inside of 15 seconds. Fights become remarkably more dangerous at this point, a common cause of unexpected death being falls after a decisive punch.
  • it’s very difficult to get up when grounded, especially if one has taken several blows to the face or head.
  • an incapacitated opponent is easy to kill, either with continual punching, kicks to the head, or a stomp to the head or neck. Most fights are broken up before that can happen, but unexpected deaths (even from a single punch) occur, sometimes hours after the fight.
  • if there are two or more opponents, fighting skill and strength typically won’t prevent a very bad outcome. Two-on-one is dangerous.
  • most untrained people, when attacked, will panic. Between the short duration of typical fights, the incapacitating nature of blows to the face or head, and the effect of surprise, that initial second or two of shock is enough to cause them to lose the fight.

Knowing all this, I also know to avoid getting into fistfights. I might know more than most people about them (because I’ve done my research) but I won’t pretend to have any idea how I’d perform. It’s not worth finding out.

Enough about fistfighting, though. It’s not that interesting. I just wanted to bring it up, because it’s something that men think they know a lot about, and they tend to underestimate the risks. There are a lot of people who think they know what they’d do in George Zimmerman’s position (panicked, mounted, having sustained several punches). Most of them don’t. They haven’t been in a fight since junior high, and those don’t count.

“I would…” Stop right there. You don’t know.

Let’s talk about organizational life, on which I’ve shared a lot of opinions.

Ask a software engineer in Silicon Valley for his opinion on collective bargaining, and you’re likely to hear a negative response. Unions have been targeted for negative publicity for decades and have a reputation for mediocrity and corruption. “I don’t need no stinkin’ union!” Your typical 25-year-old engineer probably hasn’t been fired from a job he wanted or needed to keep (a summer job doesn’t count). He’s probably never faced financial pain or stress on account of a bad job outcome. If he needs to negotiate a severance (or, a more delicate task, an agreed-upon reference) from an employer unwilling to budge, he probably has no idea what to do.

As with fistfighting, most people have an alarmingly precise image of how they’d get out of a bad situation and, for the most part, they don’t know.

“If I were in that grapple, I’d grab his wrist and bite down, then drive my elbow into his balls, then knee him in the head until he gave.” (No, you wouldn’t. You’d be panicked or confused, having taken several punches, and struggling to stay conscious.)

“If I were in a two-on-one fight like that, I would…” (Sorry, but in a two-on-one fight against capable, adult opponents, you’re fucked.)

“If I were put on a PIP, I would…” Same principle. If you’ve never been in a real fight, you don’t know.

Companies have trained fighters on staff, between their attorneys and HR. They have hard fighters who’ll insult and belittle you, and soft fighters who’ll pretend to take your side, while taking notes that can be used against you. If the company wants you in a two-on-one, you’ll face that: your manager and HR, or your manager and his manager. You basically can’t say anything in a two-on-one meeting. They will plan the meeting beforehand together, decide on “the story” afterward together, and (most likely) you’ll have no credibility in court or internally. If the meeting is adversarial and two-on-one, say as little as you possibly can.

In a police interrogation, you’d have the right to have your attorney present, and the right to remain silent, and you really want to exercise those rights. Watch this incredibly convincing video, backed by decades of experience, if you have any doubt on that matter. What about a PIP meeting, which comes unannounced? On Tuesday, your boss was congratulating you on your work. You were an employee in good standing. On Wednesday, you’re told in the middle of a 1-on-1 that you’re “not performing at your level”. You were ambushed; there was no sign that this was coming. That corner you cut because your boss told you to cut it is now being cited as an example of the low quality of your work. Most people think they would immediately fight intelligently, with the same confidence they apply to fistfights. Not so. Between the initial panic (especially if the person has financial needs) and the intense emotions that come with betrayal, it’s going to be really hard not to shit the bed under a PIP ambush. And, as with a fistfight, it’s really easy to lose quickly. Raise your voice? You’ll probably be fired “for cause” because of “threatening behavior”. Attempt to amend the PIP’s factual inaccuracies? (There will always be inaccuracies in a PIP, and they’re intentional. Those documents are supposed to piss you off, because about 1 in 5 recipients will lose his cool, thereby firing himself on the spot.) That might be construed as insubordination.

HR is probably present in meetings with management from that point forward. The reason given to the PIPee is that HR is there to protect the employee against managerial retaliation, and that’s complete bullshit. HR is there to corroborate management’s version of the story (even if management lies). Nothing that the employee says can improve his case. As with a fistfight, two-on-one means you’re probably done. Avoid it if you can.

In one way, it’s more perilous to be under PIP ambush than in a police interrogation, though. In the latter, one has the right to be silent and to call in a lawyer. In a PIP meeting, the rules aren’t clearly defined. Is an employee who refuses to answer a manager’s question guilty of “insubordination”, allowing termination on the spot? Can the employee be fired for refusing to sign the PIP “until I talk to a lawyer”? (That will be noticed.) Can the employee turn down an “independent” HR party’s meeting request “to hear your side of the story”? (Don’t fall for this.) Is the employee guilty of insubordination if he shares, with other employees, that he’s under a PIP, and voices his supposition that the decision to place him on it was political? At-will employment is complicated and undefined in many cases, and the answer to these questions is, “no one knows”. The company will terminate (once a PIP is shown, the decision has been made, and only a change in management can reverse it) when it feels it can do so with minimal risk and cost. Some companies are aggressive and will use anything that is said in a PIP or HR meeting as cause to fire without severance. Now, at-will employment has a lot of serviceable corner cases (despite what employers claim), but is it worth it to launch a lawsuit against a deep-pocketed adversary? For most people, probably not. And lawsuits (again, like fistfights) are another form of combat in which an untrained person has no idea what to expect, and is at a disadvantage. Say the wrong thing, and it will be used against you. Even if you “should” win because you have the size or superior ability, you can lose in seconds.

With PIP meetings, management has had time to prepare, and the employee is ambushed. That unfairness is intentional. It’s part of the design. Same with the two-on-one intimidation. In a PIP meeting, silence will be presented as a non-option. Even though this is illegal, most employers will claim that not signing the PIP will lead to termination (record this conversation on a cell phone if you can; but if you’re not unusually calm, there’s a low chance that you’ll remember to do this). They can bring anyone they want into the room, but you almost certainly won’t be allowed to have an attorney or representative present. It’s unfair, it’s shitty, and you need to have a plan before it ever gets to that. Your best option is to say, “I need to go to the bathroom”. Take time, calm down, and possibly call a lawyer. If you return to the PIP meeting 20 minutes later, so be it. They can wait.

Macho self-reliance

No amount of self-defense training will make it safe to get into a street fight. The odds may improve, but it’s still an inherently dangerous thing to do. Knives aren’t good protection: they’re much harder to control in a fight than one might think, and they invite escalation to other weapons, such as guns. Speaking of guns, most people wouldn’t know what to do with a gun in a situation of actual danger. Many would think that, with a gun against four unarmed assailants, they’d be able to emerge the winner. I wouldn’t count on that.

All that said, there are plenty of people (mostly men) who believe that, because of their martial arts training or because they possess a weapon, they can safely go to places, and engage in behaviors, that make fistfights common. They’re wrong. The issue is that people wish to identify with the winner in any conflict. “I would never end up on the ground like that.” They overestimate how they’d perform in the face of fatigue, panic, confusion, or time pressure. Until something bad happens to them, many people assume it never will. “I’d never be put on a PIP, because I’m good at my job.” That’s wrong, too. True low performers are usually eased out through other means. PIPs are generally used to sucker-punch a politically targeted high performer, and they work. Plenty of people shit the bed the first time they see one.

In the world of work, this macho self-reliance is seen in the resistance of many workers to any sort of protection. You hear this one a lot from software engineers: “I wouldn’t want to work somewhere where it’s hard to fire people.” I’m sorry, but that’s a bit ludicrous. It should be hard enough to fire people that the politically unlucky still end up with a decent severance, and hard enough that people who fit poorly with their first project are given at least one more chance.

Let’s talk about the “10x engineer”. That effect is real. It’s not only about natural ability, so it’s not the same people who’ll be “10x” in every single context. Motivation, project/person fit, political position (those who write code and make decisions will always be more productive than those who read code and are downwind of decisions), and domain-specific expertise all apply. To understand why, consider the Gaussian bell curve, which emerges when random variables compound additively. In most human affairs, though, they compound multiplicatively, producing the “fat-tailed” lognormal distribution. Sometimes, there are knock-on effects that produce an even fatter-tailed power law distribution. (If this all sounds like jargon, the takeaway is, “outliers are more common than one would think.”) Consider a world in which a new employee’s productivity is a function of the outcome of a 6-sided die, like so:


| Die |  A  |  B  |
-------------------
|  1  |   0 |   4 |
|  2  |   9 |   6 |
|  3  |  10 |  10 |
|  4  |  10 |  20 |
|  5  |  11 |  40 |
|  6  |  11 | 100 |

In scenario A, average employees (10 points) have job security, because even at that point, they’re above the mean productivity (8.5 points) of a new employee. Even the noticeably below-average employees are OK. On the other hand, in scenario B, the mean is 30 points, which means that the mediocre 3s and 4s (10 and 20 points, respectively) should be fired and will be, even if it’s not their fault. In practice, it doesn’t work out quite that badly. Firing average employees is bad for morale, and the “10x” effect is as much about motivation and purpose as it is about the individual. But we can see, from a theoretical basis and considering the convex nature of software development, why tech companies tend to have such mean-spirited management practices. From their perspective, they’re in the business of finding and exploiting 10x-ers (who won’t be paid anything remotely close to their value, because they’re lucky to still be employed) and anyone who seems not to be a 10x-er should be tossed back into the sea.
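
As a quick sanity check on those means, and on the additive-versus-multiplicative point above, here is a small sketch in Python. The die values come from the table; the uniform “luck factors” and the choice of 20 of them are arbitrary assumptions of mine, there only to illustrate the shape of the distributions.

import random
import statistics

# Productivity tables from the example above (made-up numbers).
scenario_a = {1: 0, 2: 9, 3: 10, 4: 10, 5: 11, 6: 11}
scenario_b = {1: 4, 2: 6, 3: 10, 4: 20, 5: 40, 6: 100}
print(statistics.mean(scenario_a.values()))   # 8.5
print(statistics.mean(scenario_b.values()))   # 30

def additive(n=20):
    # Sum of independent "luck factors": outcomes cluster near the mean (bell curve).
    return sum(random.uniform(0.5, 1.5) for _ in range(n))

def multiplicative(n=20):
    # Product of the same factors: a few lucky runs compound into large outliers.
    result = 1.0
    for _ in range(n):
        result *= random.uniform(0.5, 1.5)
    return result

def p99_over_median(samples):
    # How far the 99th percentile sits above the median: a rough gauge of tail fatness.
    xs = sorted(samples)
    return xs[int(0.99 * len(xs))] / xs[len(xs) // 2]

add_samples = [additive() for _ in range(100_000)]
mul_samples = [multiplicative() for _ in range(100_000)]
print(p99_over_median(add_samples))   # close to 1: thin tail
print(p99_over_median(mul_samples))   # many times larger: fat tail

The exact ratios don’t matter; the point is that multiplicative compounding, not raw talent alone, is what makes “outliers are more common than one would think” true of programmer productivity.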

Closed-allocation tech companies, one notes, are notoriously awful when it comes to internal mobility. This is a severe ethical issue, because internal mobility is almost inherently promised by big companies. It’s the one advantage large companies have over small ones. Yet most technology companies are set up so that median performers are practically immobile (unless their projects are canceled and they must be put somewhere). Headcount limitations and political intricacies (one manager not wanting to upset another) arrange it so that middling and even above-average performers can’t really move anywhere. The political externalities alone make them less desirable to the target team’s hiring manager than a fresh recruit from outside the company. Stars are generally worth fighting a broken transfer system to obtain, but they rarely want to move, because they have a gravy train they’d rather not stop.

Most software engineers think too logically and will take basically reasonable premises (“low performers should be fired”) to unreasonable conclusions (“performance reviews should be part of the transfer packet”). This allows technology companies to abuse them, taking advantage of their just-world delusion and rugged individualism. When very real abuses are targeted at those around them, they fail to perceive that such things could ever happen to them. “I’m a rock star coder. I’ll never be put on a PIP! If I were, I’d just work weekends and show the boss he’s wrong!” This is like claiming that knowing a couple of jiujitsu throws makes it safe to get into bar brawls. It’s just not true.

Do we need unions?

Originally, I said no. When I said that, to be frank about it, I didn’t know what the fuck I was talking about. I was right about professions, but I had a defective understanding of what labor unions actually do and why they are important.

I’m not ready to come down on either side of that question. We don’t want the seniority-bound nightmare of, say, the airline pilots’ unions, and uniformity in compensation begets mediocrity on both sides; we don’t want that either. The good news is that writers’ and actors’ unions seem not to destroy the upside in pay for those people. Hollywood’s labor pool, including celebrity actors, is unionized and hasn’t suffered in quality or compensation for it. Professional structure doesn’t necessarily lead to mediocrity.

However, we need collective bargaining. We need the right of a software engineer to have an independent representative, as well as legal counsel, in the room when in hot water with management (to prevent those “shit the bed” PIP ambush scenarios). We need protection against political terminations, such as for revealing salary information (the general prohibition offices have against that isn’t for anyone’s benefit but their own) or committing the overperformance offenses (doing a job too well, and embarrassing others) typical of engineers. We need to be part of a political body that corporate management actually fears. When we act in good faith against the interests of our employers (say, by revealing abuses of power) we shouldn’t have to do it alone. We also need to have the right to have those contractual issues that are embarrassing to negotiate for oneself (severance, health accommodations, privacy) negotiated for us by more experienced parties. We need automatic legal assistance in questionable terminations, and automatic work-stopping shutdowns (strikes) against any employer who resorts to a negative reference. We need sleazy and unreliable “back channel” reference checking to be abolished, with employers who engage in or allow it being sued or struck into oblivion.

Right now, we get none of this, and it’s a total fucking crime.

That delusional programmer self-reliance (“I know how to handle myself in a fight”) allows employers to isolate “troublemakers” and pick them off, one by one. It allows them to negotiate salaries and conditions down, brutally, due to asymmetric information and broken, reputation-driven power structures in which even six months out of traditional employment can ruin a person’s life. It allows stack ranking and closed allocation, which any labor body with self-respect would fight back against.

Because of our nonsensical self-reliance, when we do fight, it is usually an individual acting alone. Employers can easily ruin the reputations of isolated individuals, and few will protect a person thus disparaged. Consequently, those battles are typically ineffective and humiliating to the fighter. The conclusion of the workers is that management simply can’t be fought. Yet those same people believe that, through hard work or some notion of personal “merit”, they’ll never face political adversity, unfair terminations, or humiliating, destabilizing processes such as PIPs. Like armchair street fighters, they don’t know what the fuck they are talking about.

Collective strength

Street and bar fights are depressing, illegal, stupid, and dangerous. Your average person would have no idea how to respond to a sudden, unexpected assault, and would be completely at the mercy of the attacker. Others who could assess the situation would have a chance to interpose, but the victim would have lost the fight by the time the initial shock wore off. The good news is that such things are uncommon. Most people haven’t been in a fight since high school, if ever. Why? In public spaces, attacking another person is dangerous and generally a terrible idea. One might get beaten up oneself, especially if others interpose, and jail time is likely. When it comes to physical violence, there’s an understanding that, while rare, bad actors do exist and that it’s worth it to protect a stranger against an attack, if one can. People will often defend each other, which makes the costs and risks of assault sufficiently high that it’s a rarity.

In the workplace, we don’t see this. When a good employee is served with a PIP or fired, nothing really happens. People live in such fear of management that word of the unjust termination is unlikely to travel far, and even more unlikely to have any effect on the company. Anyone who opposes a company’s management in any way, no matter how trivially and even if by accident, is isolated and embarrassed by the organization. If it really cannot afford the image problems associated with a clearly political demotion or termination, it will simply assign him untenable work, interfere with his performance, and render it impossible for him to do a good job, making the eventual outcome appear deserved.

The result is that sudden, unpublicized assaults are common in the corporate world, sometimes because a person falls into an unseen political trap and winds up on the wrong side of a management-level grudge, sometimes because the organization must protect unrelated interests, and sometimes for no reason at all. They are rarely talked about. The person is isolated and either goes silently or is humiliated. Organized opposition over his treatment never materializes. In light of the severe (and unreasonable) stigma placed on employees who “bad-mouth” ex-employers, he can’t sue or publicly disparage that company without risking far more than he should ever have to put on the line. No one has the individual employee’s back, and that’s a shame.

Software engineers might be the worst in this regard, because not needing support is taken as a badge of pride, as seen in tech’s celebration of reckless firing, punishing work conditions, and psychotic micromanagement in the name of “agile”. “I don’t need no stinkin’ rights. Two fists are all I need.” That just makes no sense. It renders the individual programmer (a person with skills for which demand is so high that, if we acted collectively, we could really improve things for ourselves) easy to isolate, exploit, and humiliate if he even attempts to get a fairer deal.

The enemy, and this is especially a problem for software engineers, is delusional self-confidence. People feel a lot safer than they actually are, and to communicate the truth of one’s lack of safety is taken as admitting weakness, which allows the bad actors to isolate their targets easily. This prevents non-managerial professional groups (such as software engineers) from acting collectively and overcoming their current disadvantage.

Meritocracy is the software engineer’s Prince Charming (and why that’s harmful).

One of the more harmful ideas peddled to women by romance novels and the older Disney movies is the concept of “Prince Charming”, the man who finds a young girl, sweeps her off her feet, and takes care of her for the rest of her days. It’s not a healthy concept, insofar as it encourages passivity as well as perfectionism in mates. But it also encourages women to make excuses for bad (and often abusive) men. Because the archetype is so unreasonable, the men who can make themselves seem to fulfill it are the manipulative and sometimes abusive ones, not genuinely good (but flawed) men. I’d argue that software engineers have a similar Prince Charming.

It might begin as a search for “a mentor”. Savvy software engineers take knowledge and favor from multiple people, but every Wall Street or Silicon Valley movie showcases a mentor/protege relationship as the path to success. Meet this magical person, and he’ll take care of your career from there on out. That doesn’t exist for most people, either, and most software engineers learn as much around age 25. Their counterreaction is to develop a bizarre self-reliance in which they start refusing help, wanting to work alone, and denigrating those who advance their careers through “politics” or “connections”. Having too much dignity to wait for a magical mentor to rescue them from mediocrity, they insist on their new Prince Charming, an impersonal force that will recognize and elevate talent: meritocracy.

The problem with meritocracy is that every organization claims to be one, yet almost all organizations are deeply political. Software engineers are not a subtle breed, so I have to imagine they assume that most non-meritocracies perceive themselves as such, and would admit as much. That’s clearly not true. Oil companies, banks, startups, and dysfunctional academic bureaucracies all have this in common: they believe in their own meritocracy. Otherwise, they wouldn’t be self-consistent and stable. “We’re a meritocracy” means nothing. And what is “merit”? Organizations make promotion decisions not to recognize some abstract principle of “merit”, but based on what is perceived to be in the short-term, narrow interest of the organization. It’s not what software engineers mean when they use the term merit, but one could argue that political acumen is organizational merit. The people who are promoted in, and end up dominating, organizations are… those most able to convince organizations to promote them, whether by delivering objective value or by trickery and intimidation. It’s a self-referential, Darwinian sense of “merit”, akin to “fitness”. Darwinian fitness is not a matter of good or bad, or of anything other than the ability to self-replicate.

Of course, I know what software engineers mean when they say they want to live in a “meritocracy”. They want important decisions that affect their working lives to be made by the right people. The problem is that the ability to make good executive decisions is almost impossible to measure, reliably, especially on a timeframe that businesses would consider acceptable. Political machinations can happen, on the other hand, in split seconds. Saying something stupid in a meeting can end someone’s career, even if that person is, in general, a good decision-maker. It takes too long to select leaders based on the quality of their decisions, so organizations develop political side games that end up consuming more energy, time and attention (especially at high levels) than the actual work or purpose of the organization. Generally, this side game takes on the feeling of a war of attrition. Nonsensical pressures and busywork are added until people embarrass themselves out of contention, or their health fails, or they leave to pursue better options, leaving one person standing. Software isn’t different from that, with the long hours and posturing machismo and general disregard for health.

By believing in meritocracy, software engineers trick themselves into making excuses for awful companies and bad bosses that hurt their careers, destroy their confidence, and unapologetically exploit them. When they enter organizations, they tend (at least, when young) to want to believe in the self-professed “meritocracy”, and it’s hard to let such an idea go even in the face of adverse evidence. When these engineers are betrayed, it’s practically an ambush.

Older, savvier engineers know that few workplaces are meritocracies. In general, the claim of “meritocracy” is nothing more than a referendum on the leadership of the company. For this reason, it’s only in the midst of an open morale crisis (in which firing the obviously unhappy people isn’t viable because almost everyone is obviously unhappy) that one can admit to the organization’s non-meritocracy.

The expensiveness of it all

Software engineers’ belief in meritocracy costs them money and career advancement. Because they conflate their organizational position (low, usually) with personal merit, their confidence falls to zero. Computer programming, if marketed properly, ought to be “the golden skill” that allows a person unlimited mobility within industry. However, we’ve allowed the businessmen who’ve colonized us to siloize us with terms like DBA, operations, data scientist, etc., and to use those labels to deny opportunities, e.g. “you can’t take on that project, you’re not a real NLP programmer”. As a class, we’ve let these assholes whittle our confidence down to such a low level that our professional aura is one of either clueless youth or depressive resignation. When they beat us down, we tend to blame ourselves.

Our belief in meritocracy hurts us in another way, in that we justify things being unduly hard on us. We hate the idea of political promotion. Perhaps, on first principles, we should. What this means is that engineers are promoted “confirmationally” rather than “aspirationally”. In HR-speak, confirmational promotion means that people are given formal recognition (and the organizational permission to operate at the level they have been) only once they’re already working at the level signified by the title. Aspirational promotion means that people are promoted based on potential, but this opens the door for a host of clearly political promotions. On paper, confirmational promotion is superior, if infuriatingly slow. (To earn it, people have to blow off their assigned duties and take unrequested risks just to show that they can already work at the next level.) Engineers, of course, prefer confirmational regimes. And what’s wrong with that?

Engineers don’t like to negotiate, they don’t like politics, and they’re against favoritism. Most have a proud self-reliance that would leave them uncomfortable even if personally favored. They’re also, in general, irreverent toward title as long as they believe they’re fairly paid. To them, confirmational promotion is right. The problem? Everyone but engineers is promoted aspirationally. Engineers need long, completed, successful projects to get bumped to the next level. What, pray tell, does it take to become VP of Product or Senior Manager as opposed to Manager, or to rise on just about any of the nontechnical tracks, in most tech companies? Absolutely nothing. There is no fucking magic there. You have to convince someone to “see something” in you. That is, you have to play politics.

To the engineer’s chagrin, playing politics comes easily for most ambitious people. It sure isn’t rocket science. Getting over one’s own moral objections is, for most people, the hardest part. The result of this is that nontechnical tracks, including management tracks that often cross over engineers, are characterized by easy aspirational promotion driven by favoritism and politics. The “meritocratic” engineering track is clearly much more difficult. There are people over 35, with IQs over 140, who haven’t made “senior engineer”, for fuck’s sake. (At a “mere” 125 IQ, you’re smarter than 97% of the nontechnical VPs at most tech companies.) It’s characterized by confirmational promotion, instead. And this is a point of pride for software engineers: it’s really hard to climb the ladder, because one is competing with the smartest people in the organization, and because while favoritism exists, political promotions are much rarer on the engineering track than on non-technical tracks (largely because promotions in general are rarer).

This is something that software engineers don’t really get. What do job titles actually mean in organizations? Companies will say that “Director” means one thing and “VP” means another, with some jargon about “the big picture” and a person’s responsibilities within the organization. The truth is that they mean very little, other than serving as political tokens that prove the person was able to get them. “Director” means, “he was able to negotiate a salary between $X and $Y from HR”. Not more.

Where it leads

If you ask an engineer whether he thinks he’s ready to be VP of Engineering or CTO, you’ll get a half-hearted, self-deprecating answer. “You know, I might be ready to lead a small team, but I’m not sure I’m at the VP/Eng level yet.” Cluelessly, he believes that “the VP/Eng level” exists objectively rather than politically. On the other hand, if you ask a nontech the same question, he’ll take it without hesitation. Even if he’s terrible at the job, he gets a generous severance (he’s a VP) and will fail up into a better job. The relevant concept here is the effort thermocline: the level in an organization where jobs stop getting harder with increasing rank and start getting easier (although more political). It can be politically difficult to get a job above the effort thermocline, but it’s ridiculously easy to keep it. At that point, one has enough power and credibility within the organization that one cannot, personally, fail due to a lack of effort.

Nontechs, except for clueless people in their 20s who haven’t figured out what they want to do, go into work with one purpose: to get promoted beyond the effort thermocline. That’s not to say that they’re all unambitious or lazy. They’re just realistic about how the game works. Even if you want to work hard, you don’t want hard work to be expected of you. If you’re an SVP and you show up for work every day and put in an honest effort, you get credit for it. If you’re a worker bee, you get nothing for your 8-or-more hours per day. It’s just what you’re expected to do.

Above the effort thermocline, promotion is political, and people stop pretending otherwise. When you get “into the club”, you’re permitted to speak frankly (and hear frank speech) about how the organization actually works. The issue with the engineer’s mind is that it clings to a belief in right and wrong. It’s moralistic. It struggles to accept what people really are. Engineers don’t want to contend with the basic fact of most organizations, which is that they’re politically corrupt and dysfunctional, because most people are lazy, greedy, and weak. I’d likewise argue that this is connected to the low levels of acquired social skills in people like software engineers. It’s not a neurological disability for most. They never learn to read cues beyond a subconscious and juvenile level, because they hate what they see, which is that humans are mostly defective and that many are horrible.

Engineers don’t like the concept of the effort thermocline, or of political promotion in general. As much as they can, they refuse to have it within their ranks. I’d tend to side with the engineers. Who wouldn’t, on first principles, prefer a meritocracy over a political rat’s nest? The business responds by turning off political promotions for most engineers, while the rest of the organization continues to get them. The result is that, while they start off well in terms of pay and occupational dignity, engineers are surpassed by the nontechs (who gleefully accept political promotions and feel none the worse for it) by age 30 and, by 40, are undervalued and badly underpaid relative to their worth to their companies.

Engineering tracks in organizations are notoriously title-deflating, in comparison to the rest of the business world. Most software engineers would be appalled by how little talent and work ethic are required to become a non-technical VP at even the most esteemed tech companies. Many of these people are lazy (11-to-3 with 90-minute lunches) and just plain dumb. And by dumb, I don’t mean programmer dumb (understands the theory behind neural networks, but has never put one in production) but actual shame-to-the-family, village-idiot stupid. You know how towns in the Midwest used to bus their “defectives” to San Francisco in the mid-20th century? Well, so does the corporate world, and they end up as nontechs and upper management in tech companies.

Conclusion?

Meritocracy is the Prince Charming of the software engineer. It doesn’t exist. It never has, and it never will. Some have asked me to comment on recent HR issues occurring at open-allocation technology companies. The only thing I can say is that, yes, open-allocation companies have serious political issues; but closed-allocation companies have those same issues and more. Open allocation is strictly superior, but not a panacea. When there are people, there is politics. The best an organization can do is to be fair and open about what is going on, and hope to achieve eventual consistency.

Every organization defines itself as a meritocracy, and most engineers (at first, until they are disillusioned with a company) will tend to believe it. They aren’t stupid, so they don’t believe their companies to be perfect in that regard, but they (cluelessly) tend to believe that meritocracy is a core value of the leadership. Almost never is that the case. “We’re a meritocracy” is code for, “don’t question promotions around here”.

The Prince Charming delusion of meritocracy is dangerous because it leads people to make excuses for bad actors. Every company has to lay off or fire people, and frequently these choices are made with imperfect information and under time pressure (one large layoff is less damaging to morale than several small, measured layoffs), so the wrong people are often let go. A self-aware organization understands this and lets them go gracefully: with severance, outplacement assistance, and a positive reference. A delusional “meritocracy” has to cook the books, create psychotic policies that impede internal mobility for everyone, and generate useless process in order to build phony performance cases. In practice, just as many people are let go as in established (and less delusional) companies, but their reputations have to be demolished first, with bad project assignments and hilariously disingenuous “performance improvement plans”. Personally, I’d rather see the honest, times-are-tough layoff than the tech company’s dishonest “low performer initiatives”, much less the permanent (and destructive) rolling layoff of stack ranking.

The biggest casualty, however, of the typical engineer’s head-in-sand attitude toward political promotion is that political promotions never stop happening for everyone else. Engineers just make themselves ineligible. Engineers want promotion to be confirmational (that is, resulting from demonstrated merit) rather than aspirational (that is, based on potential and, therefore, subjective, personal, and political). The problem with this is that, after 10 to 20 years, most engineers haven’t been able to demonstrate even 20% of what they’re capable of. They kept getting crappy projects, were never allowed to finish anything, were rushed to produce work that broke under strain, and their lack of finished accomplishment (due to political forces often not their fault) left them ineligible for promotion to more senior roles, but too old to even pretend in the junior roles (hence, the age discrimination problem). After that gauntlet of false starts and misery, they’re still answering to nontechnical people and executives who had the benefit of aspirational, political promotion. By refusing to play politics and believing in the false god of meritocracy, they deprived themselves of the full spectrum of paths to advancement. Politics, however, went on regardless of whether they believed in it.

This false meritocracy is very clever when it comes to reinventing itself. Few expect a large company like, say, Alcoa or Exxon-Mobil to be a meritocracy. Engineers have figured out, as a group, that “big companies” become political. The response? Startups! Venture capital! The actual result of this has been to replace well-oiled and stable (if inefficient) corporate non-meritocracies with the mean-spirited and psychotic non-meritocracy of the VC-funded ecosystem and the feudalistic reputation economy that the leading investors, through collusion, self-dealing, and note-sharing, have created. The cheerleading of intellectually charismatic figures like Paul Graham and Marc Andreessen has managed to create a sense of meritocracy in that world, but belief in those idols also seems to be waning, and I’m proud to say that I contributed to that loss of faith.

If meritocracy is impossible, what should we do? As individuals, we need to learn to fight for ourselves. It’s not undignified or desperate or “pushy” to look out for our own interests. It’s what everyone else is doing, and we should get on board. As a collective, we need to have honest introspection on what we value and how best to achieve it. Perfect meritocracy within any organization is impossible. It is good to strive for that, but bad to believe it has been achieved anywhere. Eventual consistency and technical excellence are achievable, and we should aim for those.

Before we do anything, though, we need to learn how to fight for ourselves. Bringing frank knowledge to the good side in that fight is what I’ve been striving to do all along.

The right and wrong way to lie in business, Part 2

In the previous essay, I opened an honest discussion of the ethics and practice of lying in business. I argued that it is better to tell one large-enough lie than a hundred small lies, and that the best lies are those that establish social equality in spite of an existing trust-sparse environment. That is, you lie to flip the other person’s bozo bit to the “off” position, but not to go any further, and certainly not in an attempt to establish superiority over the other party. I also argued that one should aim to lie harmlessly. People who spread malicious gossip ruin themselves as much as their targets. They become, figuratively and literally, bad news. Now I’ll cover a third principle of lying in business: own the lie.

3. Owning the lie.

Lies, even ethical ones, can be corrosive to relationships. People have a visceral aversion to being lied to. When you lie to someone, you’re making the statement that you don’t believe the person can be trusted with the truth. In reality, most people can’t be trusted with the truth. The truth is too complex; they’ll understand it only partially, and they’ll often conclude against you based not only on the truth but on their own superficial, limiting prejudices. However, it’s not socially acceptable to tell people that you can’t trust them with the truth. You can’t just say, “I’ve inflated my job titles because we barely know each other, and I’m afraid that you’ll write me off if you know that I was only a Director, not a VP”. This means that you never want to be caught lying to someone. It’s less damaging if you tell the lie to a group that the person happens to be part of.

Lying to someone is corrosive because people take it personally. This means that, when you decide to lie, you have to be comfortable telling the lie to everyone. Moreover, you have to act, going forward, as if the lie were true. There’s a saying in creative writing, most often applied to poets, that the bad ones borrow and the good ones steal. Incompetent people lack the originality to deviate from a found gem, so they replicate it in a way that is overly literal and clumsy. The adept creators, on the other hand, know that very little in the world is entirely original, so they willingly take ideas from disparate sources and merge them in a way that is uniquely theirs. A similar rule applies to lying in business: steal, don’t borrow. If you’re going to lie about something, you must be prepared to continue lying about it until the end of time. People are unable to detect truth in others, so they fixate on consistency (which is hypocritical, because human beings are deeply inconsistent) instead. The result of this is that, for any lie you intend to tell, you must make sure it is consistent with existing written facts, and live in a way that continues to be consistent with it.

Things that are inconsistent put people ill at ease. For example, Wall Street has a negative reputation among the public and, while some of that’s earned, much of it’s not. People hold a negative view of markets in general, and a large part of that is that they appear inconsistent. The “fair value” of a corporation can rise or drop by a billion dollars in a day for no apparent reason. (More accurately, there is a trade-off between availability, efficiency, consistency at a given time, and consistency across time. Markets favor the first three and abandon the fourth.) To people who want everything to be “fairly” priced, it seems like something shady is going on. How can it be that the “fair value” of something changes so erratically? In reality, nothing shady is going on (at most times); it’s just that the platonic “fair value” doesn’t exist. To the general public, however, this feels like inconsistency, and hence there are complaints about “price gouging” when market forces drive the price of gas to $4 per gallon.

Of course, there are bad things that happen on markets. There are manipulators, dishonest schemes, and moral hazards pertaining to risk (especially when derivatives are used) all over the place. Markets also do a great job of local optimization (what is the best price for butter?) but fail at achieving equally important global targets, such as avoidance of poverty and social breakdown, enforcement of human rights, protection of the environment, and universal healthcare. Much that happens on Wall Street deserves to be despised, and I don’t intend to claim otherwise, and much of the American corporate system is far from a free market anyway. I simply contend that the visceral dislike humans have (and have had, for centuries) for market mechanics has little to do with the actual abuses, and is more a result of an overreaction to their obvious (but not morally objectionable, since supply and demand do change, sometimes rapidly) inconsistency. Something that is constantly changing (such as a claimed “fair price”) is distrusted. That applies to human beings as well. An intelligent person changes his views when he discovers new information, or just because he thinks about a problem in a different way, but a politician whose ideology evolves is called a “flip-flopper”.

So what does it mean to own a lie? Many people, when they tell a lie for the first time, get petty enjoyment out of it. “I got away with it!” Don’t fall for that petty “rush”. It’s actually really easy to get away with most lies. That’s why the consequences for being caught are often so disproportionate to the offense. Many companies, at least by their outward claims, hold a zero-tolerance policy toward lies on resumes because, in fact, 95 percent of those lies are never caught, requiring the penalty for those who are caught to be at least 20 times the possible gain. (Moreover, those in the other 5 percent are usually people who were going to be fired anyway, put under re-investigation because the company wishes to renege on a contractual executive severance or to somehow extort the caught liar.) Getting away with a lie is actually the typical, default outcome. It shouldn’t be celebrated. One definitely shouldn’t savor the petty thrill of “getting one over” on someone. Some people get addicted to that thrill and become the purveyors of small or harmful lies: the blowhards and gossips. That’s useless, because it’s so easy to get away with lies.
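
To make the deterrence arithmetic above concrete, here is a minimal sketch in Python (the numbers are the illustrative ones above; the function name is my own, not anything from a real policy): if a lie is caught with probability p, the penalty has to be at least 1/p times the gain before lying becomes a losing bet in expectation.

    # Minimal sketch of the deterrence arithmetic, with illustrative numbers.
    # If a lie yields a gain G and is caught with probability p, the expected payoff
    # of lying is G - p * penalty, so deterrence requires penalty >= G / p.
    def breakeven_penalty(gain, catch_rate):
        """Smallest penalty that makes lying a losing bet in expectation."""
        return gain / catch_rate

    # With a 5 percent catch rate, the penalty must be at least 20 times the gain:
    print(breakeven_penalty(1.0, 0.05))  # 20.0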

When you decide to lie, to improve your career or reputation, you’re not “getting one over” on anyone. You’re taking a non-truth and making it true. You will forever act as if it were true. The people you’ve lied to haven’t been defeated or bested. You’ve had no win against them. You changed the truth independent of them. You, then, told that truth to them. You related to them as equals, not as if they were gullible inferiors.

If the above sounds vaguely psychotic, it’s not my intent. Yes, if you’re going to use a lie, you must own it. For example, if your lie is that a previous employer placed you in its high-potential program, you have to stop complaining about that company and how unfair the place was. It’s best to convince yourself that the lie was true, because it has become the new truth. Now, if one began owning delusions, one might tend toward (if not psychosis) pathological narcissism. That’s clearly not good. Instead, I would argue that one should only tell ownable lies. That restricts the scope of what one can lie about, which is generally a good thing. Lying is somewhat of a surgical art. You have to change one aspect of “the working truth” without creating inconsistencies or having too many side effects on unrelated “working truths”. It’s far from easy to do it right.

I initially said to “lie big”, and I stand by that. By “big” I mean effectively and tactically. The social costs of lying (even if not caught) are severe enough that one should not lie without an agenda. A hundred small lies become impossible to keep up. The requirements that one usually lie harmlessly, and restrict oneself to ownable lies, push downward on the scope of reasonable lies and keep a person from lying too big. Some people lie about the college they attended. That’s a terrible idea. Faking a four-year experience, to someone who has actually had it, is pretty much impossible. If the lie can’t be owned, then don’t tell it.

What is the truth? How can it be “modified”?

The concept of truth is much more complicated than people like to admit. “It is 39 degrees in New York City.” Is that a true statement? As of 8:16 am on April 6, 2014, it is objectively true (at least, according to The Weather Channel). At other times, it will be false. There is nothing about New York that makes it inherently 39 degrees. That’s a property of it at a given time, and the world on April 7 will be nearly indistinguishable (apart from trusted written record) from one in which New York’s temperature was a different number. “In Inception, Cobb dreamt the entire thing.” Is that true? People debate it, and while I’d rather not spoil the movie, I think the evidence points a certain way on that. But is it a fact? No, it’s a conclusion one draws from presented fiction. There’s a book (I won’t name it, because that would spoil it) where the titular character discovers that she’s a character in a book, written for a woman (call her Helen) who’s “real”. Of course, Helen’s also a character in a book, so “Helen is real” is not a truthful statement about reality, but about intent: the author intends her to be real within a universe that is fictional. Ah! But can anyone other than the author speak to intent? And couldn’t that author truthfully (or, at least, consistently) represent his intent in myriad ways?

You should never lie in a way that contradicts an objective fact. This should be obvious. Don’t cook the books; you will probably get caught. The good news, for those who need to lie, is that most human behavior and judgment (especially in business) is based on pseudofacts, which are much more manipulable. “Erica is good at her job.” “Stanley was formally recognized as a high-potential hire.” “Andrea does not get along well with the team.” “Jason was only promoted because he’s the boss’s favorite; Jason must have something on him.” All of those sound like factual statements, but are completely subjective.

Great minds discuss ideas, middling minds discuss events, and base minds discuss people. (Most minds are base, and great minds are base some of the time.) Ideas stand alone on their own merit and can be debated from first principles. The extreme of this is mathematics, where things are objectively true (within a specific formal system) or not. Even “controversies” within mathematics (e.g. the Axiom of Choice) pertain not to whether the axiom is valid mathematics (there is a valid mathematics with Choice, and a valid one without it) but to the subjective question of which of these equally valid mathematical frameworks is more useful and deserves more study. As for events, those stand in for facts in a more parochial but also more applicable way. “2 plus 2 is 4” is a factual theorem that is always true, everywhere. “The temperature in New York at 8:16 am was 39 degrees” is an event, or a piece of data, relevant to one place and point in time. We can reason about why the seasons exist, in the realm of ideas, and such explorations have informed humanity’s understanding of the solar system and, eventually, the cosmos; but if we want to know what to wear for a trip, we’re better off with empirical data (events) pertaining to the weather we can actually expect. Further down the line, we have the business world, still driven by emotion far more than by data. There, judgment of people (the ultimate bike shed) outweighs anything else, and one should do whatever is necessary (lie, intimidate, cajole, bribe) to make sure the relevant parties come down on the right side.

In the realm of ideas, a lie is an objective falsehood like “2 plus 2 is 5”. One should assume that those will always blow up, making the liar appear foolish. However, in the realm of events, lies can be inserted relatively easily. “At 8:16 am on April 6, 2014, the temperature in New York was 60 degrees.” That’s false, but it could conceivably be true. New York, in April, has cold days and warm days. Nothing contradicts it (although there are other, more reliable, readings that would call it into doubt, since New York isn’t known for microclimates). Even with that written record, it’s theoretically possible (if unlikely) that one observation point recorded a valid 60-degree reading while the rest were around 39. All that said, the realm of events has got to remain mostly truthful. Let’s say that the world of mathematics is 100% truthful. Events that are recorded as true in various databases (climate, physics experiments, business) are probably true 99.9999% of the time. Occasionally, there’ll be a ridiculous reading.

When it comes to people, especially in business, it’s mostly non-truth that we call “reputation”: social-status-biased judgments that, while full of exaggeration and rumor, are used because they’re the best proxy we have. Not quite lies, not quite truth. Bullshit would be a good technical term. Most of the information we use to judge other people in the business world is bullshit: non-verifiable, non-truthful non-lies. “Sam left because he couldn’t hack it.” “Teresa wasn’t a team player.” “Bill was the obvious leader of the group.” “Mark only wanted to work on the fun stuff.” These statements are utterly subjective, but it’s their subjectivity (and their bullshittiness) that makes them so powerful. People are viscerally drawn to those of high status. Merit is something we invent because we want to believe we’re more than animals, and that our decisions are made from more of a high-minded place than they actually (for most people) are. Status is what humans judge each other by, and it’s almost all bullshit. Sam (above) left because his lack of pedigree had his superiors dropping low-end grunt work on him. Teresa’s high intelligence intimidated those around her and she was saddled with the “not a team player” epithet. Bill claims he was the leader of the group, and the rest were too meek to oppose him. Mark was an objective high performer but disliked for his political views, and “only wanted to work on the fun stuff” was the only charge that could stick, in the effort to damage his reputation. All of that stuff, above, is judgment of people (complex organisms, simplified with labels like “not a team player” or “high performer”) given false objectivity. One lies because one needs to fight it, and to “correct” unfavorable judgments of oneself.

False events and the new truth

To lie effectively, one has to operate in the realm of events, which is the world of middling (and practical) minds. In the realm of ideas, it is hard to lie, because bad ideas usually end in some sort of contradiction or failure. The judgment of people can’t be addressed directly, because it’s not socially acceptable to discuss, directly, what people are actually trying to figure out. Let’s say that you’re under attack. I’ll use, again, the case of a negative reference when seeking a job. Your ex-boss is saying that you were a poor performer and that he wouldn’t hire you again. What is the best counterattack?

  • A. “Well, he’s a jackass.”
  • B. “That’s because he’s a child molester.” (Assuming that’s a lie.)
  • C. “You know, that’s funny, because three months after I left, he called me begging to re-join his team. He even offered me a 20% raise.” (Assuming that’s a lie.)
  • D. “He never liked me, because I’m a Red Sox fan and he’s a Yankees fan.” (Assuming that’s a lie.)

The answer is, of course, C. But why is it C? Let’s eliminate the other three. Answer D violates “lie big”. It’s a small lie and it’s just not believable. There might be a few people who are so vested in sports fandom that they’d ruin someone’s life over it, but they’re rare and typically don’t acquire organizational credibility such as a management position. It’s a small lie. When you use a small lie as “the explanation” for something negative in your past, you lose credibility, because even if you may not have deserved the hit against you, you’re showing a lack of insight into what happened. So strike D; that’s a no-go. Answer B violates “lie harmlessly”. It’s understandable that one would be angry when facing a negative reference, and wish to ruin that person’s reputation and life. That might be a worthy project in its own right. However, the practical need at the moment is to recover from the accusation of poor work performance, not to damage the one making the claim. “He’s a child molester” doesn’t answer the question. It raises more questions. (How do you know? Why haven’t you reported him?) It’s too big of a lie, way too harmful, and doesn’t help you. His being a pervert doesn’t invalidate his accusation against you.

We’ve struck the two obviously bad answers. What about Answer A, which is arguably the only truthful answer (non-jackasses don’t ruin their ex-subordinates’ careers, so I can call A “truthful” from first principles) among the four? Aside from the general prohibition against “bad-mouthing” ex-employers– that’s a prohibition that might not apply here (negative references put you squarely into atypical territory, and when you have to choose between your reputation and an ex-employer’s, you always favor yours)– why does “he’s a jackass” fail you? You’re addressing judgment of people directly– too directly, in fact. That’s not socially acceptable, because gossip is so low-minded. You have to make it look like you’re discussing objective events. If you don’t have any, make some up.

Most of the judgment-of-people bullshit (to use the technical meaning of bullshit, which is a non-truthful non-lie, neither true nor false because the claim is of such low value that its truthfulness is undefined) can be subverted with the insertion of non-truthful events. Bad reference from an ex-boss? Repeat again: “three months later, he begged me to rejoin his team.” Bad performance review interfering with internal mobility? “I accidentally upstaged him in a meeting with his manager by having an idea that his boss really liked. He tried to take credit and, being loyal, I actually let him, but his boss still attributed it to me.” Fired from a previous job? “I left on amicable terms, and my manager has repeatedly said that the door’s still open.” Ex-boss said you’re “not a team player”? “Man, he told me not to work so hard because it was making the rest of the team insecure. I thought we were past that, but I guess not.” You can almost always recover from a smear, even when cloaked in false objectivity, by inserting non-truthful events (verbal conversations, with no record, are the best) into the stream. When you do so, you’re not “telling a lie”. You’re changing the truth. Those conversations, even if they never happened physically, now did happen. You make it a fact that your ex-boss begged you to rejoin his team, and your choice to remain with your new job (your professionalism) is actually why he’s smearing you. This makes the explanation for his smear against your performance much simpler than the complex array of things (typically, a months-long story that caused bad things to happen to a good person) that actually happened. It’s a mind-fuck, I won’t deny it. That’s why one shouldn’t lie often.

Ethics, past and future

I’ve put forward that there are good and bad liars. A “bad liar” could be the ineffective kind or the unethical kind. The ineffective ones are the blowhards. They may or may not get caught in specific lies, but they fail to achieve their desired effects. Seeking to elevate their social status through non-truth, they undermine their own credibility and become laughingstocks. Those who strive to achieve social superiority through lies usually end up that way. It’s much better to lie just enough to establish equality and basic credibility– that is, to overcome the prejudices that emerge in a trust-sparse system. Doing this requires that one’s lies simplify. The problem with blowhards is that they’re so in love with their own (exaggerated or outright made-up) stories that they litter the “claimed event stream” with complexity and lose credibility.

Let’s step away from the blowhards (really, they aren’t that interesting) and ask a higher-minded question. What differentiates the ethical liars from the unethical ones? This is a subjective matter (I’m sure not everyone will agree with my definition of ethical) but I think the crux of it is that ethical liars focus on fixing the past: making it simpler and cleaner so it goes down easier. They’re manicuring their own reputations and removing some hard-to-explain bad luck, but not trying to mislead anyone. On the other hand, fraudsters intend to deceive about the future. Con artists want their targets to believe in high-impact future events (specifically, financial returns) that simply aren’t going to happen. Ethical liars are making it easier for counterparties to make the right decision for both parties, and using non-truth to overcome the pernicious, lose-lose, inefficiency of a trust-sparse world. Often, simplifying non-truths about the past are necessary to overcome embarrassments that, trivial as they are, might disrupt the trust needed to build a future that is properly coherent and (paradoxically) more truthful than what would emerge if those non-truths weren’t there. Unethical liars, on the other hand, want their targets to make what are, for the target, wrong decisions. That is, I think, the fundamental difference. Ethical liars simplify the past to make the future truthful. Unethical ones want the future to contain even more untruth (specifically, untruth that benefits them).

It is bizarre that, in the judgment-of-people theatre of business, the best way to achieve truth is (sometimes) with a strategic lie. I don’t know how to resolve that dissonance. It’s probably connected to quite a few of the deeper philosophical questions of general human politics. That’d take at least another essay to explore.

Until then, go forth, beat the bad guys, and lie carefully.

The right and wrong way to lie in business, Part 1

There’s a New York Times article entitled “The Surprisingly Large Cost of Telling Small Lies”. According to it, the best strategy for success in business is never to lie. Not surprisingly, few people can get through even a short conversation without telling a lie. I don’t disagree with the premise that honesty is often the best approach to forming a genuine, long-term relationship. However, it wouldn’t be honest for me to give the advice that one should never lie in business. In fact, there are times when it’s the optimal approach, and even cases when it’s ethically the right thing to do.

It’s rare that someone will say, under his or her real name, that people should lie on their CVs or have peers pose as ex-managers on reference calls or otherwise misrepresent their prior social status. I won’t exactly go that far. If the purpose of your lie is to turn a truthful 90th-percentile CV into a pants-on-fire 99th-percentile one, you should usually spare yourself the headache and not lie. If you have an ex-boss who hates you and has cost you jobs with negative references, you probably should have a peer fill in for him. These are judgment calls that have to be made case by case, and I don’t think the general problem has a simple solution.

What I can offer are three principles for lying in business: lie big, lie harmlessly, and own the lie.

1. Lying big.

By “lie big”, what I really mean is, “lie effectively”. The lie has to be big enough to matter, because unskilled and small lies will drag a person down with unanticipated complexity. You can easily paint yourself into a corner. We just don’t have, as humans, the cognitive bandwidth to keep up ten unrelated lies at the same time without becoming utterly exhausted.

The cost of a lie isn’t, usually, being caught. Perhaps 99% of the lies people tell to inflate their social status are never explicitly caught, but they can do damage on a subconscious level. Social interaction is a real-time problem, and the psychological overhead involved in keeping up a network of lies is often detectable. The lies themselves aren’t detected, and the conscious thought, “he is lying”, probably never occurs. However, the liar appears less genuine. Astute people will find him “fishy” or “sketchy”. He might seem like he’s trying too hard to impress people, or that he’s a “politician”. Nothing formally sticks to him, but he doesn’t warm hearts or earn trust. That is the most common case of the person who lies too much: never caught, but never really trusted. He’s a blowhard, full of himself, and probably doesn’t have the best motivations.

That, above, is the logical endpoint of too many lies. How do people get to blowhard status? Small lies. Pointless, minuscule lies that inflate the liar’s status but in a meaningless way, like cheating at a golf game. In sitcoms, these “white lies” blow up in some hilarious way and are resolved inside of 20 minutes (“oh, that Jack!”). In real life, they tend to just accumulate. People don’t like confrontation, and it’s more fun to egg the liar on anyway, so they prefer not to call the blowhard out on his shit. They’d rather watch him make a fool of himself. The only people who’ll call him out, as a favor, are his best friends; but some people are so addicted to the petty thrill of tiny lies that they alienate everyone and have no friends. Then, they’re past the point of no return.

One big lie that achieves a strategic effect is infinitely superior to the cognitive load and social upkeep of a hundred little lies toward the same effect. Make the lie count, or don’t lie.

So, what are some good reasons to lie? This is going to sound completely fucked-up, but the best reason to lie is to earn basic trust (a technical term that I’ll define later). Now that I said something that sounds obnoxious, let me explain why it’s not. People lie, most of the time, for one basic reason: social status modulation. This establishes a taxonomy of lies that gives us four categories: (I) up-modulating status to equality, (II) up-modulating status past equality, or to superiority, (III) down-modulating status to equality, and (IV) down-modulating status to inferiority. I’m not going to focus on down-regulation here. Type III lies are usually harmless omissions, justifiable as social humility, and Type IV is sycophantic and rarely useful. So, let’s focus on the upward lies: types I and II. Type I lies are to establish equality, and those I recommend. If you’re an entrepreneur dealing with an investor who’s never been fired, then you’ve never been fired (even if you have). Type II lies are the lies of the blowhard. Avoid those. Once you are lying just to seem smarter, better, or more connected than the people you are lying to, you’re going to come under a hundred times more scrutiny. If you only lie to establish equality, the level of scrutiny is much lower, because to scrutinize your claims is to assert superiority, and people (even in positions of power) generally aren’t comfortable doing that.

So, the best reason to lie is to up-regulate one’s status to equality, and not beyond it. People who fixate on superiority become small liars. Rather than lying strategically, they’re so focused on being dominant at everything that they lie even when the stakes are petty, and eventually make fools of themselves.

That said, when is it right to lie? Examine your past, your job history, and reputation. Are you a social equal with the other party, and would you expect him to feel that way? You need not be more accomplished; he might be older or just luckier. You need not be richer. You do need to be a social equal. Figure out what that means in your given context. Next, are you looking to form a long-term friendship or a “weak tie”? If the former, try to avoid generating new lies. If you’re interested in forging genuine friendships, honesty really is the best policy. Weak ties have different rules. A weak tie is a tacit understanding of social equality and credibility. It’s only about what I’ve termed “basic trust”. You don’t get a lot of time in which to form (or not form) it. You’re going to be judged superficially. It’s not enough to be a person of merit; you have to look like one.

A theme that continually recurs around the question of honesty is complexity. Small lies generate a nasty complexity load. Even though you won’t be caught on specific lies– because if you’re a known blowhard, no one cares– you’ll start to lose your general credibility (basic trust). Astute people can practically smell the smoke of an overheated mind. Complexity is the devil. In software, it’s a source of bugs. In data science, it’s a source of “over-fitted” models with no predictive power. In politics, it’s a source of exploits and loopholes. In social interactions, it’s a source of general enervation. People throw their hands up and say, “I can’t figure this shit out”, and it’s just lose-lose. When you want them siding with you, they back away slowly.

So when is it the absolute right strategy to lie? Sometimes, the truth is too complex for people to handle. The lie might be simpler, and this might favor it, especially under the superficial judgments that form (or break) weak ties in business.

Take the biggest disappointment of your career, dear reader. Chances are, there were multiple contributing factors. Some were your fault, some weren’t. There were probably months of warning signs along the way. This setback or disappointment will be different for everyone, so let’s come up with a model example: a two-year-long “hero story” that still leads to a negative outcome, such as being fired. On the social market where weak ties are formed, are most people willing to hear a story that complex, and expend the cognitive energy necessary to come down on the right side? Nope. They hear the words “I was fired” and the “bozo bit” goes into the on position. Everything else becomes a story of a weak or unlucky person trying to justify himself. In these cases, it’s better to present a simple lie that goes down easy than the complex truth. It might even be more socially acceptable. For example, “bad-mouthing” an ex-employer is usually more disliked than telling a bilaterally face-saving story (that is, a lie).

This importance of weak ties and simplicity gets to the heart of it, which is the (above-mentioned) notion of basic trust. Basic trust doesn’t mean that a person is trusted in all things. Would you, as a reader, trust me with a million dollars in cash? Probably not. However, you’re reading what I am writing, which means you trust that what I have to say is worth your time. Basic trust is the belief that someone is essentially competent and has integrity. The person is worth hearing out, and treating as a social equal. This is more bluntly termed the “bozo bit”, or “flipping the switch”. If the bozo bit is “on”, that person’s input is ignored. If it’s “off”, that person will usually be treated as a social equal, regardless of differences in rank or wealth.

Organizations can be trust-sparse or trust-dense, and tend to “flip the switch”, collectively, at once. Elite colleges are trust-dense, insofar as students generally trust each other to have valid intellectual input. Some people may lose that trust (because there are idiots everywhere, even at top schools) but new people start out with the bozo bit in the “off” position. There’s a basic trust in them. Most companies become trust-sparse at around 50 people. The way one can tell is to examine its attitude toward internal mobility. Formal performance reviews are already a sign of trust-sparsity, but when those become part of the transfer packet, the organization is stating that it only considers managerial input in personnel decisions. Trust sparsity is the rule, and non-managerial employees (i.e. those who haven’t been vetted and placed on a trusted white-list) have their bozo bit “on”. At a later point, organizations become trust-sparse even within the managerial subset, and begin requiring “VP-level approval” for even minor actions. This means that the organization has reached such size that even the managerial set exhibits trust sparsity, and only a smaller subset (those with VP-level titles) are trusted by the organization.

Trust sparsity is unpleasant, but something one must contend with. If you cold-call a company or send a resume without a personal introduction, you have to prove that you’re not a loser. One might find high-status arrogant people with shitty prejudices (“I don’t hire unlucky people”) abhorrent. Abstractly, I might agree with that dislike of them. That doesn’t mean they’re never useful. I wouldn’t want to have a meaningful relationship with a hiring manager who thinks anyone with a less-than-perfect career history is a loser, but he is a gatekeeper, and I might lie for the purpose of using him.

When should you lie in business? There is one good reason. You lie to “flip the switch” on your bozo bit. It’s that simple. In a trust-sparse organization, or the world at large, it often takes a reasonably big lie to achieve that. Lying by saying that you earned “Employee of the Month” in July 2007 won’t do it, because that’s one of those small lies that really doesn’t mean anything; you need affirmation that your previous company considered you a genuine high-potential employee. (You were placed in the semi-secret “high-potential program” and had lunch once a month with the CEO.) So lie big. Make the lies count, so you can make them few, and keep that complexity load down. Massage your past and reputation, if needed. Change a termination to a voluntary departure. If it suits your story, back-recognize yourself as a leader or a high-potential employee by the organization where you last were. Flip that bozo bit into the “off” position, establishing social equality with the other party. And lie no more than that.

2. Lying harmlessly.

It is my reckless honesty that has me speaking on the rectitude of certain classes of lies in business. Good lies are those that get past peoples’ prejudices to establish basic social equality and form useful “weak ties”. I do not advocate being unethical. If you make a promise you can’t possibly deliver, you’re doing the wrong thing and deserve the punishments that fall upon you. If you claim to be a licensed doctor and you’ve never set foot in a medical school, that’s job fraud and you deserve to go to jail. That’s not what I’m talking about.

If you massage dates on your resume to cover a gap (remember that a simple lie can be better, socially, than a complex truth) then that’s ethically OK; you’re not doing anything wrong. (Still, don’t get caught on that one. Many in business have Category 5 man-periods over even the smallest resume lies. Best to keep lies out of writing.) If you falsely claim to have been in the top bonus bucket during your analyst program, because the private equity firm to which you’re applying won’t interview you otherwise, you’re doing no wrong. They deserve to be lied to, for having such a shitty prejudice.

Lies that hurt people are more likely to be caught than those that don’t, and most lies that hurt people are flat-out unethical. Avoid that kind. Your goal in lying should be to make yourself win, not to have others lose.

In a trust-dense setting, one should never have to lie, and one generally shouldn’t lie, at all. It’s lies that bring the organization or subculture toward trust-sparsity in the first place! On the other hand, trust-sparsity admits opportunities in which one can lie while causing no harm to anyone. In trust-sparse settings, people are assumed to be low-status idiots (“bozos”) unless formally recognized otherwise, with accolades such as job titles and managerial authority, and they’re almost never given the opportunity to prove otherwise. If a person of essentially good character and ability can use strategic non-truths to establish credibility, and lies no more than is necessary to do that, then no harm was done. In fact, it can be ethically the right thing to do. The person simply took ownership of his own reputation by inserting a harmless non-truth. This “flipping one’s own switch” is subversive of the general trust-sparsity, but trust-sparsity is goddamn inefficient at any rate, and society needs this sort of lubrication or else it will simply cease to function. This is why, in the MacLeod analysis of the organization, so-called Sociopaths (who are not all bad people, but generally political and willing to employ the forms of dishonesty I uphold) are so necessary. Without lies, nothing gets done in a trust-sparse world.

The problem is that people often do lie harmfully. There are two major kinds of harmful lies. The first is a false promise. This ranges from outright job fraud (claiming a capacity one does not have) to the sympathetic and reckless, but not consciously dishonest, optimism of the typical entrepreneur. I am in no way advocating promises that one cannot keep. Rather, I’m advising people to bring their reputation and status to where they belong, but not past that point. Don’t claim to be a surgeon if you’re not. The second (very common) kind is the lie to hurt others: rumors invented to disparage and humiliate. In addition to being generally unethical and toxic, they’re almost always counter-productive. No one likes a rumormonger or a bearer of bad news, even when that news is believed to be truthful.

Occasionally, one is in an adversarial situation where lying about another person is required. An example would be a bad reference. It’s best to avoid bad references by having peers substitute as ex-managers, but one might get caught out of the blue, betrayed unexpectedly or nabbed by a “back channel” reference check. (Note: subvert back-channel reference checks by faking a competing offer and imposing time pressure. If you ever face a back-channel reference check, you failed to get the offer fast enough.) In that case, my advice is: discredit, don’t humiliate.

You might be very angry when you find out about a negative reference. You have the right to be angry. You’ve been sucker punched. You might be tempted to say, “That’s because I caught him sleeping with his secretary.” Don’t do that! (At least, don’t sabotage his personal life while looking for a new job; keep your projects separate.) You’re better off with a lie to the effect of, “That’s funny, because he asked me to come back three months after I left. I declined respectfully, but he must be bitter.” That discredits him, but it doesn’t embarrass him any more than is necessary to do the job. You can’t appear to enjoy delivering news that makes someone look bad. With the former (the affair with the secretary), you’re reveling in your ex-boss’s (made-up) demise. With the latter, you’re painting yourself as a top performer (even your ex-boss recognized it) and leaving the other party to connect the dots (that the bad reference is an artifact of the ex’s bitterness).

Also, one must always assume that, when lying about another person, that person will learn of the lie. So “discredit, don’t humiliate” is an aspect of a more general principle, “intimidate, don’t frighten”. You want your adversaries to be intimidated. Timid people shrink from action. They’ll shut the fuck up about you and let you focus on better things (like selling yourself, not justifying yourself in light of rumors). Frightened people, on the other hand, are humiliated, angry, and unpredictable. Even though fright is more of a psychic punishment than timidity, having severely punished people on the stage is not good for you.

Lies (or truths) that destroy people tend to have enough kinetic energy to boomerang. Even the people who had the news first, unless they’re investigative journalists and the news is truthful, will be hit hard. Negative rumors are best avoided in all contexts: don’t start them, don’t spread them, and don’t even hear them in public. That is the general rule. There are (very rare) times when it is best to break it, and those involve frank combat. In frank combat, you don’t seek to humiliate or frighten your enemies. You have to destroy them, before they destroy you.

Competition is not enough to justify lying harmfully. If the only way to win among multiple candidates for a promotion is to lie harmfully, it’s probably worth passing on that round. (Maybe the other candidate actually is a better fit for the role.) If someone’s legitimately outperforming you and you lie harmfully to bring her down, you’re committing a grave wrong. It’s a much better use of the energy to befriend and learn from her. Jobs are short, but careers are long, and a rival in one bardo is often a great friend in the next. Good-faith competition is not frank combat, and the rule of “lie harmlessly” (or, better yet, not lying at all) still applies. Frank combat exists not when you are being outperformed in good faith, but when your reputation is being attacked. You didn’t choose war, but it chose you.

In frank combat, the best policy is still to lie with minimal harm, but not to shrink away from force if you need to use it. If a stun gun will work, use it instead of the revolver. Only use lethal force if the assailant won’t respond to anything else. The guideline of “discredit, don’t humiliate” applies when it can, but some people just won’t accept that they’ve been discredited (i.e. shut the fuck up) until they’re down for the count. That is a rare case, but it’s the one in which nasty, negative rumors might be the best way to go. Even then, there’s a subtlety to it. Not only must the rumor be believable, but you have to deny it in public. Negative rumors, most of the time, aren’t so devastating because people actually believe the non-truths. Rather, it’s because they lower the target’s status, generate complexity (leading to people, as discussed above, just giving up rather than rendering judgments) and paint the person as one who “fits the mold” for the rumor, even though you, personally, haven’t taken a stand and won’t call it true.

All this said, frank combat is quite rare and always best avoided. Like a bar fight, no one wins. There’s pain, there’s losing, and there’s losing big. Winning at frank combat is like winning an earthquake. Go out of your way to avoid it.

Most ineffective liars don’t intentionally put themselves into frank combat. The problem with harmful liars is that, like the small liars, they enjoy the petty win over the other person and lose sight of the one valid purpose of lying in business: to flip one’s own “bozo bit”. Unless someone is calling you a bozo, you gain nothing by setting his “bozo bit” back into the “on” position, and you make the world worse (trust sparsity). People who lie harmfully contribute to trust sparsity, also known as discord, and Dante has them in the Eighth Circle of Hell for a reason.

(Part 2 will come out later this week.)

Ambition tournament (more like a large game) in San Francisco on March 23

This post pertains to the Ambition tournament planned for the eve of Clojure/West.

What? We’ll be playing the card game Ambition (follow the link for rules), using a modified format capable of handling any number of players (from 4 to ∞).

When? Sunday, March 23. Start time will be 5:17pm (17:17) Pacific time. I’ll be going over the rules in person at 5:00pm. You should plan to show up at 5:00 if you have any questions or need clarification, since I’d like to start play on time. 

How long? About 2.5 to 3.5 hours at expected size (6-8 people). Less with fewer people, not much more with a larger group (because play can occur in parallel).

Where? At or near the Palace Hotel in San Francisco. I’ll tweet a specific location that afternoon, around 4:30 and probably no later.

Who? Based on expressed interest, I’d guess somewhere between 5 and 10 people. If only 4, we’ll play a regular game. If fewer, we may cancel and reschedule.

Is there food? We’ll break around 7:00pm and figure something out.

Do I need prior experience with Ambition? Absolutely not. There will be more new players than experienced people. Ambition is more skill than luck, but I’ve also seen brand new players win 8+ person tournaments.

Do I need to be attending Clojure/West to join? No. This isn’t officially affiliated with Clojure/West and you need not be attending the conference to play. It just happens that a lot of the people playing in this tournament will also be attending Clojure/West.

How should I RSVP? “Purchase” a free ticket on the EventBrite page.

Can I come if I don’t RSVP? Can I invite friends? Yes and yes.

Are there any prizes? Very possibly…

Will you (Michael O. Church) be playing? That depends on how many people there are (if this is a home run and gets 12+ people, I’ll be too busy with administrative stuff). I’ll probably sit in the final rounds but I won’t be eligible to win any prizes.

Corporate atheism

Legally and formally, a corporation is a person, with the same rights (life, liberty, and property) that a human is accorded. Whether this is good is hotly debated.

In theory, I like the corporate veil (protection of personal property, reputation, and liberty in cases of good-faith business failure and bankruptcy) but I don’t see it playing well in practice. If you need $400,000 in bank loans to start your restaurant, you’ll still be expected to take on personal liability, or you won’t get the loan. I don’t see corporate personhood doing what it’s supposed to for the little guys. It seems to work only for those with the most powerful lobbyists. (Prepare for rage.) Health insurance companies cannot be sued, not even for the amount of the claim, if their denial of coverage causes financial hardship, injury, or death. (If a health-insurance executive is sitting next to you, I give you permission to beat the shit out of him.) On the other hand, a restaurant proprietor or software freelancer who makes mistakes on his taxes can get seriously fucked up by the IRS. I’m a huge fan of protecting genuine entrepreneurs from the consequences of good-faith failure. As for cases of bad-faith failure among corrupt, social-climbing, private-sector bureaucrats populating Corporate America’s upper ranks, well… not as much. Unfortunately, the corporate veil in practice seems to protect the rich and well-connected from the consequences of some enormous crimes (environmental degradation, human rights violations abroad, etc.). I can’t stand for that.

As for the corporation itself, it’s clearly not a person like you or me. It can’t be imprisoned. It can be fined heavily (loss of status and belief) but not executed. It has immense power, if for no other reason than its reputation among “real” physical people, but no empirically discernible will, so we must trust its representatives (“executives”) to know it. It tends to operate in ways that are outside of mortal humans’ moral limitations, because it is nearly immune from punishment, and a fair deal of bad behavior gets justified in its name. The worst that can happen to it is gradual erosion of status and reputation. A mere mortal who behaved as it does would be called a psychopath, but it somehow enjoys a high degree of moral credibility in spite of its actions. (Might makes right.) Is that a person, a flesh-and-blood human? Nope. That’s a god. Corporations don’t die because they “run out of money”. They die because people stop believing in them. (Financial capitalism accelerates the disbelief process, one that used to require centuries.) Their products and services are no longer valued on divine reputation, and their ability to finance operations fails. It takes a lot of bad behavior for most humans to dare disbelieve in a trusted god. Zeus was a rapist, and the literal Yahweh genocidal, and they still enjoyed belief for thousands of years.

“God” is a loaded word, because some people will think I’m talking about their concept of a god. (This diversion isn’t useful here, but I’m actually not an atheist.) I have no issue with the philosophical concept of a supreme being. I’m actually talking about the historical artifacts, such as Zeus or Ra or Odin or (I won’t pick sides) the ones believed in today. I do have an issue with those, because their political effects on real, physical humans can be devastating. It’s not controversial in 2014 that most of these gods don’t exist– and it probably won’t be controversial in 5014 that the literal Jehovah/Allah doesn’t exist– but people believed in them at one time, and no longer do. When they were believed to be real, they (really, their human mouthpieces) could be more powerful than kings.

The MacLeod model of the organization separates it into three tiers. The Losers (checked-out grunts) are the half-hearted believers who might suspect that their chosen god doesn’t exist, but would never say it at the dinner table. The Clueless (unconditional overperformers who lack strategic vision and are destined for middle-management) are the zealots destined for the low priesthood, who clean the temple bathrooms. Not only do they believe, but they’re the ones who work to make blind faith look like a virtue. At the top are the Sociopaths (executives) who often aren’t believers themselves, but who enjoy the prosperity and comfort of being at the highest levels of the priesthood and, unless their corruption becomes obnoxiously obvious, being trusted to speak for the gods. The fact that this nonexistent being never punishes them for acting badly means there is virtually no check on their increasing abuse of “its” power.

Ever since humans began inventing gods, others have not believed in them. Atheism isn’t a new belief we can pin on (non-atheistic scientist) Charles Darwin. Many of the great Greek philosophers were atheists, to start. Buddha was, arguably, an atheist and Buddhism is theologically agnostic to this day. Socrates may not have thought himself an atheist, but one of his major “crimes” was disbelief in the literal Greek gods. In truth, I would bet that the second human ever to speak on anthropomorphic, supernatural beings said, “You’re full of shit, Asshole”. (Those may, however, have been his last words.) There’s nothing to be ashamed of in disbelief. Many of the high priests (MacLeod Sociopaths) are, themselves, non-believers!

I’m a corporate atheist and a humanist. My stance is radical. To hear most people tell it, these gods (and not the people doing all the damn work) are the engines of progress and innovation. People who are not baptized and blessed by them (employment, promotion, good references) are judged to be filthy, and “natural” humans deserve shame (original sin). If you don’t have their titles and accolades, your reputation is shit and you are disenfranchised from the economy. This enables them to act as extortionists, just as religious authorities could extract tribute not because those supernatural beings existed (they never did) but because they possessed the political and social ability to banish and to justify violence.

I’m sorry, but I don’t fucking agree with any of that.

Some astonishing truths about “job hopping”, and why the stigma is evil.

A company where I know at least 5 people just went through a massive, and mostly botched, reorganization. Details are useless here, but I’m struck by the number of high-functioning people who developed physical and mental health ailments (at least four that I know of, and probably tens that I don’t) in the chaos. It got me thinking: why? Now, I understand why an individual would fear job change or loss, because it’s a big deal to be jobless when the rent is too damn high. What I mean is: why do we, as a society, care enough about work to make ourselves sick? We wouldn’t even have to do less work to avoid sickness, because most of the stuff that makes people ill is socioeconomic chatter and anxiety that’s not productive; we could have just as much stuff (or much more) just by working efficiently. That’s a big economic problem and a bit beyond the scope I can tackle here. I’m going to focus on our biggest grunt-level issue (the “job hopper” stigma), one that’s holding us back morally, industrially, and technically.

I’m completely on board with people wanting to do their jobs well (eustress) and be recognized, because ambition and a work ethic are admirable and I don’t have much in common with those who lack them, but how on earth is anything we do in white-collar work remotely important enough to deserve a sacrifice of health? I can’t come up with a good answer, because I don’t think there is one. Except when lives are at stake (e.g. medicine, war) there’s simply no job that’s worth giving up one’s mental or physical well-being. Does the typical private-sector project ever merit the risk of developing a long-term condition like panic disorder or depression? “Fuck no, sir” is the only right answer.

To be fair to the executives involved, it’s probably impossible to carry out any corporate action, good or bad, without the stress having some effect on people’s health. You can’t expect a CEO of a large company not to do something he considers necessary because it might stress someone out. That’d be a ludicrous demand. Some people get stressed out by the tiniest things. Most of the blame (for people’s health issues, at least) doesn’t go to the top. An executive’s perspective is that jobs change or end and that people should just deal with it. I don’t disagree with that at all! I believe fully that society should implement a basic income and I think it’s utterly criminal how this country has tied health insurance to employment for so long– I think people should suffer far less from job volatility than they do, in other words– but I think that this volatility is essential to economic progress. Sometimes, the cuts are going to be initiated by the employer (preferably with a generous severance and continuing career support). Other times, the employee is quick enough to see the dead end ahead and leaves on her own. There is, unfortunately, a problem. There’s something that makes job changes a million times more stressful, and it’s the creepy realization that, after a certain number of employment changes, a person is branded as a hot mess and, at least for the top positions, untouchable. That’s the dreaded job hopper stigma, and it needs to die. I’m here to slaughter it, because God works through people. It won’t be pretty.

In the case that inspired this essay, I don’t even know if those four people (who became sick due to reorg-induced stress) wanted to leave their company. One of them seemed quite happy there. What made them ill, in my mind, is not that they work for a bad company (they don’t) but the feeling of being trapped: the knowledge that, for the first 18 months on a job, one pretty much cannot leave it no matter what happens. One could be severely demoted in the first month of a new job and it would still be suicidal to leave.

Why is there a job hopper stigma? In part, it’s because HR offices need some objective way to “stack rank” external candidates, and dates of employment are the hardest to fudge. Titles get tweaked or bought (“exit promotions” for the employee to go gently with a meager or no severance, or inflated titles in lieu of raises, and then there are the self-assigned ones) and accomplishments get embellished (or just plain made up) and people can claim whatever they want in salary and performance bonus (and sue an employer who contradicts them, for misappropriating confidential information). Dates of employment, however, are objective and readily furnished by even the most risk-averse HR offices. So the duration of the job becomes the measure of how successful it was. Under 6 months? That person was probably completely useless and fired for performance, and certainly didn’t accomplish anything. (Some of us are “10x” players who can achieve a lot in 6 months, but HR judgments are based on the mediocre, and not on us.) 6 to 17 months? Needs explanation, and the candidate will have to sell herself hard just to get back to par. 18 to 47 months? Probably a middle-of-the-pack performer, passed-over for promotion but still worth talking to… if there aren’t other candidates. (I’ve heard that long job tenures, beyond 6 years without rapid promotion, can also be damaging, but that has never been a problem for me. I’m not one to stick around and stagnate.)
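
For concreteness, here’s a minimal sketch in Python of the duration-based screen described above (the thresholds are the ones named in this paragraph; the labels and the function name are my own paraphrase, not any real HR system’s):

    # Rough, illustrative model of the HR duration screen described above.
    def hr_verdict(months_at_job):
        """Return the snap judgment a screener attaches to one job's tenure."""
        if months_at_job < 6:
            return "presumed useless, probably fired for performance"
        elif months_at_job < 18:
            return "needs explanation; candidate starts below par"
        elif months_at_job < 48:
            return "middle-of-the-pack; worth talking to if there are no other candidates"
        else:
            return "long stay; fine, unless 6+ years passed without a promotion"

    print(hr_verdict(14))  # "needs explanation; candidate starts below par"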

One short-term job (under 15 months) is seen as forgivable, but two becomes “a pattern” (note: “a pattern”, in HR-speak, means “lather my frenulum, bitch”). At three, you’ll spend 30% of your time on job interviews explaining away your past, leaving you at 70% capacity to convince them of your fit and potential in the future. (In other words, you’ll spend so much energy proving that you’re not bad that you’re enervated when it comes to what should actually matter: showing that you’re good.) At four, you’re branded a total fuckup and many HR departments won’t even return your calls, the smug cockbags. The “job hopper” stigma is real and it needs to die, now. Any dinosaurs who cling to it can, for all I care, go extinct along with it. The white-collar employment world’s obsession with shame and embarrassment and social position, instead of excellence and progress and positive-sum collective efforts, is the single thing most responsible for holding us back as a society.

Here we go…

I’m 30 years old and I’ve had a ridiculous number of short-term jobs. I’ve stopped being ashamed of my past, because I’ve done little wrong and what wrong I have done has already been over-punished. I’ve paid my fucking debts. I’ll go over my past, just to show how common it is for a good person to simply be unlucky. Nothing I’m about to talk about is atypical in volatile industries like finance and tech. Jobs end, people leave, turnover happens, and often it’s no one’s fault.

Job hop #1 was a small company that just didn’t have much need for high-power work, and I wasn’t interested in the regular, day-to-day stuff once I’d met their R&D needs, so I left (a few R&D projects completed, stellar quality) when my work was done. #2 was health-related; the real-time nature of the work mandated an open-plan office– a rare case where that layout was beneficial and necessary– and my panic issues weren’t nearly as manageable then as they are now. (These days, 20 minutes after a panic attack I am fine and back to work.) No one’s at fault, and I still recommend that firm highly when asked about it, but I chose to leave. #3 was Google, a good company I’ve made my peace with, but where I had a manager so awful that the company formally apologized. #4 (which I’ve taken off my resume) was an absolutely shitty startup that hired three people (of whom I was one) for the same “informal VP/Eng” role, which was bad enough, but then asked me to commit felony perjury (at 3.5 months) against 10 colleagues with “too much” equity, and fired me when I refused. #5 was a large-scale layoff in a terrible year for that company, and sad because I know management liked me (I got top performance reviews, but my project was cut) and I liked them. They treated me well while I was there and afterward. That’s 5 jobs that ended before the 18-month mark, and while three (#1, 2, 5) involved no managerial or corporate malfeasance, only one of them (#1, the most innocuous) could be legitimately called my fault.

Needless to say, that stuff gets asked about during job interviews. It’s annoying, because it has me starting from a disadvantaged position. After all, I broke that corporate commandment: thou shalt not leave a job before being in it for 18 months. Even in 2014, the same senile Tea Partiers who want “government hands off my Medicare!” insist on upholding their archaic “job hopper” stigma and write uninformed, syphilitic blog rants about how one should never hire a “job hopper”.

The image of a job hopper is of a mercenary young executive born in the Millennial generation (1982-2004) and hustling his way up the corporate ladder by exploiting the winner’s curse. Because the Boomers did such a good job of raising this generation (ha!) these suburban-bred “trophy kids” have never known adversity (ha!) and they have a keen ability to exploit favorable (again, ha!) market conditions. Instead of “paying dues” and suffering like they’re goddamn s’posed to, “kids these days” jump for a better opportunity at another firm. (In reality, job hops reset the clock and mean that one will have to pay dues and prove oneself again, but most Boomers don’t realize that, because they never had to live the way we do.) On a side note, in no way did the Boomers steal the future from the Xers and Millennials. That stuff about housing prices and college tuition costs and non-dischargeable student debt and health insurance premiums and adjustable-rate mortgages and the death of private-sector basic research pushing PhDs into formerly BA-level jobs is Soviet propaganda designed to make capitalism look bad. It’s all lies, I tell you, lies!

Boredom

Job volatility is more of an issue for young people now than it was 30 years ago. Layoffs happen, positions change, redundancies emerge, and projects get cancelled. It happens, and it’s not even a bad thing. Economic progress virtually requires change that ends jobs. The failure is more in the fact that our society mismanages this volatility, trading up (or, thanks to outsourcing, trading cheap) instead of training up. This has all been discussed ad nauseam, so let me get into a (possibly generational) subjective factor that hasn’t been discussed: increasing boredom. If I’m right, this would make it especially difficult for those who should be high-performers.

I suspect that, today, low-level white-collar jobs are more boring. We’ve achieved a level of boredom in white-collar work so impressive that many people prefer fucking Farmville. Now, that must be some boring work! People will disagree and point quickly to new technology that is supposed to automate all the grunt work away. I agree that the technology makes this possible. I don’t know if it’s actually being used that way. The social expectation of 40 hours’ work per week, minimum, to stay in the organizations that employ most of the middle class is actually generating a lot of work that is pointless and boring. Work expanding to fill the allotted time (which may be 50 to 70 hours per week in organizations claiming to have “performance-oriented” cultures) seems to be having some pretty serious consequences for people’s mental health.

I’m a technologist, so I love what computers can do, but much of what they are actually used to do is, to be frank, pretty fucking dismal. Computers make excellence possible in new ways, but most executives just want to make mediocrity cheaper (often externalizing costs, which is fancy speak for “robbing people”) to get a quick bonus. Information technology has been used as a centrifuge of work, enabling organizations to separate the labor management thinks it needs (work without executive sanction, no matter how important, gets ignored in a closed-allocation company) into strata and specialties and keep people, as much as possible, working on the same goddamn thing every day. The monoculture and the restraining permissions systems are supposed to limit operational risk, but they actually introduce a long-term, subtle, but existential risk: mediocrity, which sets in as people get bored (on the aggregate scale) and check the fuck out.

Most people I know want heterogeneity in their work: a mix of collaboration and isolation, a portfolio with high-risk creative projects but also low-risk (but still fulfilling) “grunt work” that is reliably useful, and enough variety in projects to build a unique personal understanding of what they’re doing. Modern project management doesn’t respect that. The work gets chewed down into boxy little quanta (Jira tickets! user stories!) and the individual worker ends up mired in a psychological monoculture. This happened with physical labor about 200 years ago, but the combination of information technology and upper-management psychopathy has been, over the past 30 years, doing the same to much mental work. We’re seeing an epidemic of a mental-health issue that, until recently, was pretty rare: extreme, soul-crushing boredom. I’m not talking about spending two hours in a traffic jam. That sucks but it’s a one-off. Nor am I talking about mundane, “unsexy” subtasks in a more interesting long-term project, because even the best jobs are 90% mundane, hour by hour– even the best software projects involve lots of debugging– and we’re fine with that. Nor am I referring to the nagging “I might be wasting my life” sensation that people get sometimes but can ignore when work needs to be done. That’s not what I’m talking about when I talk about boredom. I’m talking about a psychiatric “brick wall” that is probably neurological (connected to a miserable disengagement response that, when it fires without a known context or too often, is called clinical depression) in nature.

If you’ve never experienced it, here’s a task that might bring on the “brick wall”. You must draw (by hand) a 56-by-56 grid of squares, 0.5 inches (1.25 cm) on each side, with a 1% side-length tolerance and a 2-degree angle tolerance (if you fail either, scrap the drawing and start over). Each box must be drawn individually (i.e. it’s not legal to draw 57 horizontal lines and 57 vertical lines), but edges may not be duplicated. Each segment connecting grid points must have no more arc length than 1.005 times that of a perfectly straight line (“straightness” requirement). You may use a ruler for measurement but not as a straight-edge: the segments must be free-drawn. Boxes must also be drawn in order from the upper left, left to right, then top to bottom. As they are completed, the boxes must then be filled with the numbers 1, 2, …, 3136 in that order. (If any work is done out of order, you must redo the entire task.) The digits must be legible, they must fit inside their box, they must be aesthetically pleasing (vague requirement) and reasonably close in size as well as centered within the box. As a last criterion, the large 56-by-56 square must meet a 0.02% side-length tolerance (on the 28-inch total size) and a 0.5-degree angle tolerance. (If this is not met, discard it and start over.) You will be in a noisy environment, you will be yelled at from time to time– and you are expected to smile and say “Hello, sir”, make eye contact for at least 1 but no more than 4 seconds– and you will be periodically interrupted (context switch) and asked to solve simple math and spelling questions. (If you get any wrong, discard your progress on the grid and start again.) People will be eating, drinking, playing sports, and possibly having sex in front of you, and you are not to interact with or even look at them. (If you do, discard your progress on the grid and start again.) Finally, if you give up or fail to complete the task after 6 hours, your punishment (humiliation) is to call ten random people on your contact list and make “oink” sounds for 15 seconds, then say, “I am an incompetent fuckup and I failed at the most basic task”, and then you must ask them for money (simulating being fired on bad terms). You can never explain, to them, the context in which you humiliated yourself thus.

I would guess that the percentage of people who could bring themselves to do this for a reward of, say, $10,000 is very small. I’m not sure that I’d do it for $100,000. The task sounds easy. It is, physically and mentally, within the ability of almost anyone, but it’s psychiatric torture. For me, it was an anxious experience just to write this paragraph.

Where I believe surprisingly many people would fail (or, at least, be tempted to do so) is on the “in-order” requirement for completing the boxes and digits. Ever notice how, when performing a tedious task, there’s a tendency to inject some creativity into the process by, say, filling the boxes in, or completing the form, out of order? That requirement may seem stringent, but that kind of conformity is not uncommon in corporate environments where any display of individuality is taken as self-indulgent and an arrogation of rank. (That’s why I put it there.) Few employers would force a worker to redo the whole project over such a small departure from expectations, but the “why did you do it that way?” wolf-snap (microaggression) that some people direct at any expression of individuality is more than enough punishment to cancel out any psychological reward from doing the task.

If you’re a Boomer executive who thinks boredom is an attitude problem and not an involuntary, neurological brick wall (and one that especially affects the most capable) then get out a pen and draw me a fucking 56-by-56 grid.

So, on boredom, I’m talking about a vicious cycle of involuntary and escalating anxiety that emerges from the cognitive dissonance of a mind forced to contend with an unending slew of work it finds pointless. This is something that most white-collar Boomers haven’t experienced, because the IT-fueled work centrifuge hadn’t been perfected yet, and because their corporate ladder game was (to be frank) just a lot less competitive, but that most Millennials will. That cycle starts with the low-level social anxiety that everyone experiences at work. (People would be “on edge” to find two strangers in their car, much less a hundred in their career.) Even in great companies, that low-level anxiety is there; it’s just that the work offsets it. Novelty can offset it as well, but under psychological monoculture, the reward and novelty stop and the only place to go from the anxious state is into boredom. Anxiety and boredom reduce performance, which causes further anxiety, and so forth, until frank depressive symptoms set in, it’s blamed on the employee, and the employee is let go, usually with a dishonestly-named “performance improvement plan” (PIP) because most companies, these days, are too cheap to pay severance. (Side note: even when performance problems do exist, prefer severance over PIPs. Three months’ pay is astronomically cheaper than having a “walking dead” employee ruining morale for 30 days.) Even if that person has normal mental health in general (“neurotypical”) an employee who’s been through the stress and humiliation of a PIP is probably suffering from diagnosable clinical depression and, if evaluated based on his state at that point, will have enough of a health story to sue (it’s not a good idea; he may not win and it will be terrible for him regardless, but it fucks the company) or push for severance. That’s a whole lot of ugliness that just shouldn’t have to happen.

That system– the typical corporate war of attrition based on social polish and boredom tolerance– doesn’t even work on its own terms, because this malignant nonsense generates no profit. Just as obesity is (in my opinion) only 10% personal fault and 90% the result of systemic issues with our food supply and culture, I tend to think the same of work boredom. Ninety percent of it, at least, is the fault of employers. They could structure themselves in a way that gets more value delivered, keeps employees happier, and doesn’t induce boredom. And don’t get me started on defective workspaces such as open-plan offices. People subjected to intermittent distraction and unreasonable anxiety, at less-than-liminal levels, will often experience work as “boring” even if the material itself is not the problem. Plenty of studies have shown that when people are subjected to the chronic, low-level stresses of a typically defective work environment (ringing phones, people shifting in their seats, personal space under 200 square feet, intense conversations nearby) they are aware of their underperformance but often attribute it to “boring” material, even when groups in better settings describe the same work or reading as interesting.

Boomers see boredom as an attitude problem. “If you’re bored, read a book or go outside!” (“But stay off my damn lawn!”) The stereotype of a bored, “entitled” Millennial is of a mid-20s weakling who just refuses to give up on his adolescent fantasy of an easy job where high pay and recognition, “just for being awesome”, come without sacrifice, and whose rejection of even the slightest compromise with “the real world” leads to parasitism or underperformance. Certainly, those weaklings exist. They’re great anecdotes for shitty human-interest news stories using parentally-funded fuckups with impractical advanced degrees as exemplars of youth unemployment. They’re not common. They’re certainly not the norm. The boredom I’m talking about isn’t, “They won’t give me a corner office so, fuck ‘em, I’ll slack” boredom. It is an involuntary neurological response that occurs because the mind rebels against being used in a way that it finds perniciously inefficient and insultingly pointless. And here’s why it’s such an issue for business: the smartest and most creative people are the ones who will fail first. The deeply unethical people who will actually kill your company are psychopaths, and they, because they can allay their boredom by manipulating others, shifting blame, and causing destructive drama, get the least bored and fail last (i.e. win the corporate war of attrition).

The reveal

Here’s the truth about job hopping, as a person who’s had a lot of bad luck and therefore plenty of “hops”. It fucking sucks. It’s not a fun life. I’ve done it enough and I’m a goddamn authority on this topic. Don’t get me wrong. There are plenty of cases where it’s better than the alternative (stagnation) to “hop”. But I don’t think anyone goes into a job with an attitude of, “I’m going to work here for 291 days, waste half a year learning some terrible closed-source codebase, accomplish very little of note, burn bridges by leaving in the middle of an important project, get a pathetic lateral move for my next gig, and do the same thing all over again.” People go into jobs wanting 5-year fits. Often, employers don’t realize how much people would prefer not to move around. If they just gave a little bit of a shit about their people, they’d be paid handsomely in loyalty. Take Valve’s culture of open allocation. Why do people work so hard in an open-allocation regime when they could “more easily” (I’m not actually sure this is true) get away with slacking? The organization is defensible. The workers give a shit, because they know they’ll be at risk of ending up in crappy closed-allocation jobs if the company fails or has to let them go. Most organizations rot because there’s a bilateral not-giving-a-shit between management and labor; defensible organizations like Valve, however, are capable of organic self-repair. The people don’t just want to succeed “at” the organization; they have a genuine commitment to keeping the principles it represents alive. Organizations like that, alas, are very uncommon. Most businesses exist only for one reason (shareholder enrichment) that barely includes the employees, and are worth leaving when they cease to suit the individual’s ambition.

As I said before, job hoppers get stuck paying more dues (even at executive levels, everyone has to pay dues to gain credibility in a new firm, because explicit authority doesn’t go very far) and, in general, see less advancement than they would have if they’d been able to stay put. Trust me that every sane person would rather do a little grunt work (appreciated grunt work with mentorship, at least) than be unemployed, interview for jobs at various (mostly shitty) companies, or spend 8 hours per day in the incoherent, tactical socialization process called “networking”. People want to work, to be challenged, and (in general) to stay put for a while and recover their energies. Work, if appreciated and meaningful, is energizing. Moving around is draining. Watching others accomplish more because they get the elusive “5-year fit” (not just a 5-year job, because it’s easy not to get fired, but a genuine 5-year stream of better projects) and not having that, is demoralizing. People don’t want to have to prove themselves to a new set of people every 14 damn months: that’s just hell.

Why’s there so much hopping? Because most jobs are terrible, that’s why. Let’s just get that out there. There are managers who take their reports’ career goals and advancement seriously, but most don’t. There are companies dedicated to excellence rather than executive ego-stroking, but most aren’t. There are open allocation companies where people have a real say in what they work on and are trusted to “vote with their feet”, but most are closed-allocation shops where they’re just “assigned” to tasks. Most jobs are dead-end endeavors that become pointless after 12 months, and most organizations are not (in the meaning of the word used above) defensible. Sturgeon’s Law (“90 percent of everything is crap”) is utterly true in the corporate world. I’ll spare the reader any pompousness I could put here about Poisson distributions and get to the fucking point: statistics back me up that good people can just be unlucky and have a string of 3 or 4 or many more short-term jobs and have it not be their fault.
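
To make the Poisson point concrete without the pompousness, here’s a rough back-of-the-envelope sketch. Every number in it is an assumption I made up for illustration, not data: suppose job endings that aren’t the worker’s fault– layoffs, cancelled projects, Welch-style cuts– arrive at some average rate, and count how many blameless people end up with a “pattern”.

    from math import exp, factorial

    def poisson_tail(lam, k):
        """P(X >= k) when X ~ Poisson(lam)."""
        return 1.0 - sum(exp(-lam) * lam**i / factorial(i) for i in range(k))

    # Assumed rate (illustration only, not a measured figure):
    # 1.5 no-fault job endings per decade.
    lam = 1.5
    for k in (2, 3, 4):
        print(f"P(at least {k} short-term jobs in a decade) = {poisson_tail(lam, k):.1%}")

At that made-up rate, roughly one worker in five racks up three or more short-term jobs in a decade through no fault of her own– which is all the “pattern” HR needs.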

This role of bad luck in the “job hopper” death spiral is enhanced by autocorrelation. (Failures that are perceived as independent and indicative of personal shortfall might be subtly connected.) First, short job tenures tend to erode an applicant’s desirability and beget low-quality jobs (that are more likely to be short-term) in the future, a case of that job-hopping dynamic perpetuating itself. Second, there’s the Welch Effect, named for Jack Welch’s popularization of “rank-and-yank” (in technology, stack ranking). The Welch Effect is that, in a large layoff or in stack-ranking (which is just a dishonest, continual layoff dressed as being “performance-related” to save on severance) the first people to be fired are new or junior members of underperforming teams (who had the least to do with the team’s underperformance). Since stack-ranking (and, less so, layoffs) tends to have a “no infanticide” rule, people in their first 6 months are usually fine, but months 7 to 18 are extremely dangerous, and it’s not good to spend much time in them. The Welch Effect also enhances the “job hopper” stigma, because people with damaged resumes are more likely to end up on teams and projects that can’t be filled internally, which carry a high probability of their being Welch Effected.
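
The autocorrelation is easy to see in a toy simulation, too. This is a sketch under made-up assumptions (the base risk, the penalty, and the number of jobs are all invented for illustration): each early ending pushes the worker toward the hard-to-staff teams and stack-ranked shops described above, which raises the odds that the next job is short as well.

    import random

    def career(jobs=6, base_risk=0.2, penalty=0.15, rng=None):
        """Count early-ended jobs when each early ending worsens the next draw."""
        rng = rng or random.Random()
        risk, early = base_risk, 0
        for _ in range(jobs):
            if rng.random() < risk:
                early += 1
                risk = min(0.9, risk + penalty)  # damaged resume -> weaker job pool
            else:
                risk = base_risk                 # a solid tenure resets the odds
        return early

    rng = random.Random(42)
    runs = [career(rng=rng) for _ in range(100_000)]
    for k in (2, 3, 4):
        frac = sum(r >= k for r in runs) / len(runs)
        print(f"fraction with at least {k} short-term jobs: {frac:.1%}")

The exact numbers mean nothing; the point is that once the penalty exists, strings of short jobs pile up on the same unlucky people far more than independent bad luck alone would produce.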

The “job hopper” stigma doesn’t always keep people from being able to find new gigs. What it does is shuttle them into second-tier jobs, long-shot companies, and teams that companies have trouble staffing internally. The only way out that seems to work is to accept that job searches will be very long (6+ months, as most first-rate jobs aren’t available to a “job hopper”) and “reverse interview” companies aggressively. Then, you’ve got a good chance of eventually (how are your finances? can you hold out for a few months?) getting a job and a company that are good, at least at the time you take the offer. Whether it stays good for 18 months and frees you, that’s another matter.

Background, but I’ll spare you the shitty teenaged poetry.

Now, I didn’t go to Stanford. When I was 17, I wanted to be a poet. Math was my backup plan. Computer programming? I grew up in Central Pennsylvania and for someone to make even $75,000 in software was unheard-of. A “programmer” was a guy who wrote cash-register interfaces in COBOL. Doctors, lawyers, executives, and these far-off and vaguely disliked people called “investment bankers” in New York made the money, and it wasn’t until 2007 (age 24) that I realized one could be a full-time coder (that’s what quants are) and be paid decently. My parents pushed me to major in math (I’m very glad I listened to them, because I’m a lot better with artificial neural networks than at slam poetry) but my first love was creative writing. Later, I did some (tabletop) game design and created Ambition, a great card game that will never make me a cent. Anyway, on the writing, I resisted (to my career detriment) early specialization and so, even though I had Ivy options– I didn’t get Harvard, but was offered a place at another Ivy by a brand-name professor, and very likely would’ve gotten into MIT or Stanford– I went to a liberal arts college in the Midwest (Carleton) because I figured that the experience (and rural setting) would be more conducive to the reflective mind a writer needs.

I got a fantastic education, but the exposure to the machinery of the working world (i.e. how very rich people think, how the gigantic machines they create tend to operate, and how to exploit those for personal benefit) was just not part of the out-of-class curriculum, as it would have been at, say, a Stanford or Harvard. Those skills turn out to be very important even in the “meritocracy” of software. There is no reason for me to be bitter about this now, and I have no regrets about the choices I’ve made, but there are “job hops” I could have avoided if I’d learned, much younger, how to spot warning signs. I had to learn how to fight (and I did so, in the best place for it, which is New York) on my own, and I got fucking good at it too– that’s why I help other people fight, costing malignant executives whom I’ve never met to the tune of millions per year– but it took years of trial and error, because I had to learn all of those skills by myself. I became the mentor I wish I’d had.

I’ve been in the private sector for almost 8 years, and I can count 39 months that I wouldn’t trade for that time back in youth. The rest was junk. No career value, nothing learned, just shitty grunt work I did because I actually would have suffered more without an income. What’s amazing, though, is that my ratio (41.5 percent) is extremely good by the standards of people from middle-class backgrounds. The typical state-school kid from Iowa (let’s say he is of comparable talent to me, though talent doesn’t really have much to do with it) might have had 15 months of real work by my age. If he was actually poor and couldn’t go to college, I don’t know that he’d have any real work, in the software world, by age 30. If you’re from a well-connected family, you’ve got a decent chance of getting the “Why are you wasting your time on bullshit?” intervention/mentor that everyone hopes for, the sort of thing that 5-year fits are made of, on the first gig out of school. Everyone else has to job hop, roll the dice, self-mentor and keep trying.

I’ve said before that open-plan offices are backdoor health/age discrimination, and the “job hopper” stigma is backdoor class discrimination. People from wealthy families (I’m not talking about Pennsylvania upper-middle-class like me, but the well-connected “1%”, and it’s more like 0.2 percent) can relax a bit and wait for opportunities to come to them. They’ll probably get some grunt work, but if they perform poorly at it while networking and building side projects, they’ll get the benefit of the doubt (“it’s OK, they had other things in their lives”) and fail up. They can stay in one place for 5 years, because others will come to them wherever they are. Everyone else, on the other hand, has to hustle and play the game. The most important aspect of that game is recognizing a dead end and cutting losses. Often the most ethical thing (I’ll get to ethics, shortly) a person can do when at a dead end is, for the benefit of both sides, to extricate herself from the situation. (It’s possible to exploit dysfunction for personal benefit, but it’s a shitty thing to do. Leave compassionately and move on.)

This is where I perceive an inconsistency. Why is job hopping really so despised? Is it because, as claimed, the job hoppers are showing the flippant disloyalty that comes from the high status afforded to in-demand professionals? Or is the real (yet unmentioned, because it’s socially unacceptable) reason for the stigma, as I suspect, that this tentative attitude toward work, and hard-nosed realism about the value of what one is doing, is a cultural and social mark of The Poors, for whom useless work historically had devastating consequences? Are job hoppers high-flying mercenary yuppies enjoying undeserved success, or are they ill-bred uppity serfs who lack the blue-blooded couth to get upper management invested in their careers and merit a 5-year job? Which one is it: are they high-status assholes or low-status assholes? The people who still believe in this antiquated stigma have to pick a damn side!

Ethics of job hopping

One thing that is said about job hoppers is that they’re “disloyal”. I don’t agree. There’s a difference between being loyal or not loyal to someone (loyalty must be earned) and being constitutionally disloyal. I will not harm a stranger but I have no loyalty to him, and that’s not a character flaw on my part. To me, there’s just no use in being loyal to a company. A person can earn loyalty, for sure, but a company is just a pile of other people’s money. On whether job hoppers are constitutionally disloyal, I think that the latter is very uncommon, in fact. It’s a brutal charge, because a constitutionally disloyal person is likely to be unethical. Is job hopping unethical? Absolutely not. Second to masturbating, it’s one of the most honest things people do: walking away from a relationship that has begun to fail, before it hurts both sides.

An unethical person has no qualms about drawing a salary while producing no useful work (due to mismanagement, boredom, or poor fit) while ethical people go insane in such circumstances. Ethical people worry, when that starts to happen, about getting fired and the shame and embarrassment. (This ties in to the boredom-anxiety loop I discussed above.) Unethical people figure out who’s politically weakest in the organization and who they should blame, should they either underperform or be unable (due to advanced environmental dysfunction) to perform. Ethical people leave jobs when they find themselves becoming useless. Unethical people ingratiate themselves to upper management and acquire power, turning organizational dysfunction toward their own benefit. Ethical people focus on skill growth and leave jobs if they risk stagnation. Unethical people realize that connections are more powerful than skills and focus on the players, not the cards. In general, unethical people are far more likely to climb one ladder instead of “hopping”, because unethical people generally understand social dynamics far better than average people (this may be survivor bias, with unethical and socially unskilled people getting incarcerated, leaving only the smooth scumbags in play) and that trust is acquired over time. Ethical, ambitious people want universal knowledge so they can add more value to the world. Unethical people want to learn the people so they can exploit their weaknesses. That’s how it actually works.

To a middle manager, job hoppers can be irritating. Middle management isn’t a fun job because there is pressure from above and below, and unexpected personnel changes can be very disruptive. Moreover, that effect is quite visible: the action (of leaving for another job) is wholly initiated by one person and, once she has left the organization, she can be blamed without consequence (she’s not there to hear it, and her job’s not in danger). I know for sure that, in many of my “hops”, people were disappointed that I left. This is just standard business friction, though. No one’s doing anything wrong. That people leave companies is not an ethics problem.

The nature of social stigma in general

I would argue that for most social stigmas, the reason they exist is that people tend to correlate (falsely and uselessly) things that are irritating or socially unacceptable with the unethical. They want to believe in a world where the villains look like villains (instead of like regular people, as they actually do). The small betrays the large, they presume. The guy who shows up 15 minutes later than everyone else is a slacker. The guy who complains about minor things is probably subversively and intentionally undermining morale. The woman who doesn’t return small talk because she’s fucking coding is a frigid bitch. In reality, nothing works that way at all, but many people (and this is more true of people in groups) are stupid and shallow and in their desire to believe in a world where villains look like villains, they lash out at those who slightly annoy them.

In actuality, most people who do bad things get away with them, at least in the short term. I believe in the Eastern concept of karma– each action leaves an imprint on the mind, and the fact that we have no idea what our minds do after death requires humility– so I might argue from the other side, over the very long term. However, in the social theater where humans seeking short-term gains operate and where the punishments are coarse rather than subtle, most people who do bad shit get away with all of it. By the time people with the power and desire to punish them know what these unethical scoundrels have done, they’ve moved away and usually “up”. My pointing this out isn’t to encourage bad behavior, because the gains of most unethical acts (again, counted in raw numbers) are petty while the risks are substantial. A 99% chance of not getting caught stealing a candy bar doesn’t make it worth it, given the penalties.

I mentioned the self-healing properties of defensible organizations like Valve, which operates under open allocation and gives the rank-and-file a legitimate reason (projects they enjoy, and not wanting to work for crappy closed-allocation companies) to participate in its upkeep. Those companies fix themselves faster than they rot, but they’re also rare because most executive teams lack the coherence, vision, and (frankly) the interest to commit to a self-repairing organization. So, most organizations are not going to commit to employee well-being any more than they have to, and won’t be defensible. They’ll rot, and that’s accepted (because executives will enrich themselves along the way) but they’d prefer it to rot slowly. This requires targeted aggression toward the causes of organizational rot, which are (and I agree with them on this) ambitious, disloyal, and unethical people.

Now, unethical people can beat (or, quite often, use as a personal weapon) the social immune system of any organization. They can pass reference checks, establish social proof, and avoid having their bad behavior catch up with them for a long time. Some (the less capable) may be shut down, but other psychopaths will evolve faster, just like cancer cells or harmful bacteria, and go effectively undetected. The organization will rot, but no one will be able to say why it is rotting because the people doing the damage will be sure that the only people who know are either powerless or complicit. What everyone sees, as the edifice starts to shake and crumble, is the exodus of talented people. That’s the visible rot. It’s all those damn job hoppers jumping ship when things get “difficult” (which usually means “hopeless”). Waves of departures (“job hoppers”) may be the visible proximate cause of corporate collapse– and that’s why they are blamed for things falling apart– but they’re rarely the ultimate cause.

Let’s ask ourselves if these “job hoppers” fit the bill of the toxic person (ambitious, disloyal, and unethical). Are job hoppers ambitious? Some are, some are just fed up. That’s irrelevant, however, because a functioning organization can make a home for ambitious people. Are they disloyal? I would say they’re simply “not loyal”. Disloyalty suggests a moral shortfall. Not-loyalty should be the default afforded a large organization that wouldn’t reciprocate any loyalty given to it. Like religious faith, “loyalty” is not a virtue when unqualified. It is fine to have religious faith, and it is fine not to have it. The same goes, in my mind, for organizational loyalty. Are these “job hoppers” unethical? That’s the only one of these three questions that actually matters, and I’ve established the answer to be “no”. Among the discontents (and, in a dysfunctional organization, that’s over 75% of the people) they’re some of the most ethical ones. They’ve realized that there is no place for them, and left. The real anger should be at the things that happened (and the people who caused them) 3 or 6 or 12 months ago that caused so many good people to leave.

If you stop promoting from within, soon you can’t.

I’ve been around and inside of tech companies enough to know that, as a general rule, they don’t promote from within. Why? One VC-specific reason is the extreme amount of power held by venture capitalists, who function as managers rather than passive investors. VCs’ buddies, middle-aged underachievers who need to pull a lucky connection to snag an executive position in a fast-growing company, tend to be inserted into such companies at high levels. That sets a tenor: that internal achievement matters less than the story you can tell as a free agent, especially if you can play the press and the investors’ social network. The issue doesn’t stop there, of course. Even for mid-level management positions and the higher engineering distinctions (once those exist) the best slots and projects tend, over time, to go to external people.

Two things drive this. The first is the social climbing mentality that a growing technology company has. Most founders think that they’re a higher calibre of people than the first engineers they hire. (Are they right? It depends on how they hire.) The first round of hires are often brought on with a bit of compromise: not exactly what the company wants, but good enough “for now” and affordable on a startup’s shoestring (relative to investor demands) budget. Now, a good entrepreneur will find talented people who’ve landed in the “bargain bin”: wrong schools, wrong career choices, or coming back into tech from something else. (Keep in mind that, at the level I’m talking about, “bargain bin” is still the $75-150k salary range– reasonable, but not the level seen at hedge funds or for established tech people. I’m talking about top-1% talent. Don’t bet your company’s start on anyone else.) That’s almost an arbitrage, for those who can detect talent at high levels. But for each of those “diamond in the rough” hires, there are 9 who are equally cheap but correctly priced (i.e. not very good). In the startup phase, these companies tend to assume that their early technical hires were “desperation hires” and throw them under the bus as soon as they can get “real” engineers, designers, and management. That social-climbing dynamic– constantly seeking “better” (read: more impressive on paper) people than what they have– lasts for years beyond the true startup phase. The second driver is the tendency of almost all companies to overpromise in hiring. Authority is zero-sum, and when authority is promised to external executives being sought, internal people must lose some. The end result is that the best projects and positions go first to people the company is trying to sell on joining it. Only if there are some goodies left over are they allocated to those who are already there.

Executive turnover hits morale in two ways. The first and more obvious one is that it makes it clear that the straight-and-narrow, pay-your-dues path is for losers. The second is more subtle. High-level turnover and constant change of priorities and initiatives mean that the lower-level people rarely get much done. They don’t have the runway. Ask a typical four-year veteran of such a company what he’s accomplished, and he’ll tell you all about the work that is no longer used, the project canceled three months before fruition because a new CTO arrived, and the miasma of unglamorous work (i.e. technical integration, maintenance necessary due to turnover) generated by this volatility that, while it might have been necessary to keep the company afloat, doesn’t show macroscopic velocity. That doesn’t make the case for promotion or advancement. Eventually, the high-power people realize that they can’t get anything done because of all the executive instability, and they leave.

At a certain point, companies get to a state where they no longer trust their own people to rise to any challenge beyond what can fit in a single Jira ticket. Promotion from within essentially stops. Titles will go up (especially because salaries won’t) but true advancement will be hard to find. Then, people stop leading.

One concept often used in corporate-speak is the distinction between “confirmational” and “aspirational” promotions. In a confirmational regime, people are promoted when they’re already operating competently at that level. If you’re a Senior Engineer, it means you’ve been performing as one for some time. Aspirational promotions indicate a belief that the employee will achieve that level at some time. Of course, every company will claim that its promotion system is confirmational. No employee wants to answer to a manager the company “decided to take a chance on” (that’s a sign that the employee is marginal or an underperformer) and no company would ever admit that some of its promotions are mistakes. One would argue that confirmational promotion is the right way to do things– even if it’s usually cited as an excuse for stinginess. (The company myth is that people are fairly evaluated, thus given roles, and compensated based on that role. The company reality is that people negotiate what compensation they can get, and then titles and managerial authority are back-filled to match payroll numbers.) Let’s, however, ignore these complexities and agree, for the moment, that confirmational promotion should be the goal. People should lead, and later be recognized; because “pre-recognizing” people as leaders tends to generate the culture of managerial entitlement that we know to be toxic. Okay, so what does it mean to lead?

I would say that leadership is to do things (1) for the benefit of a group, and (2) that one wasn’t asked to do. One doesn’t need to be a manager to lead: most good programmers are leaders, since the truly excellent ones can’t stop themselves from doing work that isn’t in any Jira ticket. Some managers are leaders– they protect the group from external adversity, and drive it toward a coherent shared vision– and some aren’t. In hierarchical companies, managers tend to “manage up”, which means they fail both criteria: they’re favoring their own advancement over the group’s needs, and they’re taking orders from on high. In those sorts of companies, managers become puppet chieftains, mostly there to prevent the group from selecting a leader (who might become an agitator, or even a unionist) of its own.

The idea behind making promotion confirmational is that people should lead, in a genuine positive-sum sense, before they manage. I’d tend to agree. Should promotion be confirmational (i.e. conservative) rather than aspirational? Sure, absolutely. So what’s the problem? Where is the cause of dysfunction?

The problem

Simply put, the system falls apart when one set of people gets confirmationally promoted and others are aspirationally advanced. The latter group, the ones who get the benefit of the doubt, will always win. However, companies typically find themselves forced to promise authority to attract executives, and they’re doing this while knowing nothing about what those executives will actually do at the organization. External executives are, by definition, aspirationally placed. A good negotiator with strong on-paper credentials, facing a social-climbing company, can always get more authority than his demonstrated ability (passing an interview) merits. Over time, those external hires begin to dominate, and the company has an escalating sense of being run “from elsewhere”. Moreover, this incentivizes both political behavior and job hopping. Now, I’m notoriously pro-”job hopper”. Ethical job hoppers (and most “hoppers” are far more ethical than traditional ladder climbers; when they lose faith in an institution, they leave it instead of abusing it) shouldn’t bear the stigma they do, because when the norm is for companies not to recognize internal achievement, it’s the best individual strategy. (Hate the game, not the player.) What I recognize, however, is that it’s not healthy for individual companies to have high turnover, but that’s exactly what they encourage when they fall into the pattern I describe.

Most companies, internally, have slogans like “leadership is action, not title” or “act like an owner”. They tell people that they should rise to the occasion and lead (as I defined the term, above) regardless of whether they’re given official authority. Those slogans are mostly empty talk, but even the most hierarchy-friendly executive will agree that the alternative (an army of disengaged clock-punchers who don’t do anything unless explicitly told to do it) is worse. If, however, a company loses faith in its own people (as constant external promotion suggests) then this is a really bad strategy for the individual. In most companies, attempting to lead without authority is a fast-track to getting oneself fired. Now, losing a job often comes with severance, but it might not, especially as companies replace honest layoffs with phony “performance” cases. The risk of losing 2-4 months of income (not to mention starting over, having another thing to explain in future interviews) is pretty much never justified by whatever upside “acting like an owner” might have. (A 10% “performance” bonus on the upside? Not worth risking getting fired. Trust me.) People (even managers) just stop leading after a while. This tends to make a company “comfortable” insofar as individual performance expectations become very low and the clock-puncher mentality becomes the norm– as long as you’re not obvious about it, you can coast– but it’s not what anyone really wants.

Solution?

The core of the problem, I think, is that companies are inherently going to make sweeter promises to those they are trying to entice to join than they will offer to those who are already there. Later hires usually get better deals (“salary inversion”, in HR terms). Traditionally, technology startups offset this issue by offering superior equity slices to early people; but, in 2014, employee option grants are so pathetic that this is no longer true. Just as much as this is true of salary and bonuses and titles, it also seems to be true of authority.

Perhaps the problem is managerial authority itself. Now, I’m all for a conceptual hierarchy. That’s just how we think, as humans. We can’t hold more than about six or seven things in short-term memory at once, so for anything complex (high entropy) we need clusters, modules, and pruning of relationships. I just don’t think that rigid managerial hierarchies do much good. They create massive conflicts of interest and elevated classes whose sole purpose becomes to perpetuate their superiority, regardless of whether it benefits the organization.

I’ve written a lot about open allocation, and it seems that the biggest issue with it is that it makes it difficult for a company to hire external executives. They can’t be promised managerial authority if there isn’t much of that to go around. So the dinosaur types who are used to “being executives”– the ones displeased by the concept of a company where they have to convince people that their ideas are right, rather than just threatening to turn off plebes’ incomes– can no longer be enticed to join. I say: good. Fuck ‘em. Those wankers only cause problems anyway. Open allocation, at least in technology, is the way forward.

Under closed allocation, control over what people work on becomes, like anything else, a bargaining chip or commodity. At that point, this bargaining chip will be used to serve one of the company’s most pressing needs: recruiting. External promotion will become the norm, and internal advancement will cease as the leadership opportunities that might permit internal people to demonstrate their ability (assuming that promotions are confirmational rather than aspirational– but we all know how complex that little issue is) disappear. Internal promotion is the first to go. But then internal lateral mobility (already reduced to enforce the closed-allocation regime) ceases as well, to pre-empt the chaos that might ensue as people jockey laterally for the best (read: externally career-advancing) projects. Soon enough, not only can internal people not get promoted, but they can’t really move laterally either. Then, people develop a total sense of “stuckness” and the company is asleep. If stack ranking is imposed in an attempt to “wake them up”, you get the warring departments dynamic that tears a company to pieces. There’s no good there. In technology, closed allocation is just a dead end not worth exploring for any reason.

A company that wants to excel needs its people to excel. Externally hiring those who are attractive “on paper” won’t work, because often those executives (who join mediocre companies) are the ones looking for sabbaticals, and those who aren’t will have one foot out the door. But as soon as companies make control over what gets worked on a bargaining chip, the slide to mediocrity is inevitable. External hiring is no solution specifically because those external hires won’t jolt the company into excellence, but be brought down (or repelled) by the mediocrity.

Open allocation– the framework in which “promotion” takes the self-motivated form of greater challenges and larger achievements rather than increasing control over others (a zero-sum game)– is the only answer that I can see.

Look-ahead: a likely explanation for female disinterest in VC-funded startups.

There’s been quite a bit of cyber-ink flowing on the question of why so few women are in the software industry, especially at the top, and especially in VC-funded startups. Paul Graham’s comments on the matter, being taken out of context by The Information, made him a lightning rod. There’s a lot of anger and passion around this topic, and I’m not going to do all of that justice. Why are there almost no female venture capitalists, few women being funded, and not many women rising in technology companies? It’s almost certainly not a lack of ability. Philip Greenspun argues that women avoid academia because it’s a crappy career. He makes a lot of strong points, and that essay is very much worth reading, even if I think a major factor (discussed here) is underexplored.

Why wouldn’t this fact (of academia being a crap career) also make men avoid it? If it’s shitty, isn’t it going to be avoided by everyone? Often cited is a gendered proclivity toward risk. People who take unhealthy and outlandish risks (such as by becoming drug dealers) tend to be men. So do overconfident people who assume they’ll end up on top of a vicious winner-take-all game. The outliers on both ends of society tend to be male. As a career with subjective upsides (prestige in addition to a middle-class salary) and severe, objective downsides, it appeals to a certain type of quixotic, clueless male. Yet making bad decisions is hardly a trait of one gender. Also, we don’t see just 1.5 or 2 times as many high-power (IQ 140+) men making bad career decisions. We probably see 10 times as many doing so; Silicon Valley is full of quixotic young men wasting their talents to make venture capitalists rich, and almost no women, and I don’t think that difference can be explained by biology alone.

I’m going to argue that a major component of this is not a biological trait of men or women, but an emergent property from the tendency, in heterosexual relationships, for the men to be older. I call this the “Look-Ahead Effect”. Heterosexual women, through the men they date, see doctors buying houses at 30 and software engineers living paycheck-to-paycheck at the same age. Women face a number of disadvantages in the career game, but they have access to a kind of high-quality information that prevents them from making bad career decisions. Men, on the other hand, tend to date younger women covering territory they’ve already seen.

When I was in a PhD program (for one year) I noticed that (a) women dropped out at higher rates than men, and (b) dropping out (for men and women) had no visible correlation with ability. One more interesting fact pertained to the women who stayed in graduate school: they tended either to date (and marry) younger men, or same-age men within the department. Academic graduate school is special in this analysis. When women don’t have as much access to later-in-age data (because they’re in college, and not meeting many men older than 22) a larger number of them choose the first career step: a PhD program. But the first year of graduate school opens their dating pool up again to include men 3-5 years older than them (through graduate school and increasing contact with “the real world”). Once women start seeing first-hand what the academic career does to the men they date– how it destroys the confidence even of the highly intelligent ones who are supposed to find a natural home there– most of them get the hell out.

Men figure this stuff out, but a lot later, and usually at a time when they’ve lost a lot of choices due to age. The most prestigious full-time graduate programs won’t accept someone near or past 30, and the rest don’t do enough for one’s career to offset the opportunity cost. Women, on the other hand, get to see (through the guys they date) a longitudinal survey of the career landscape when they can still make changes.

I think it’s obvious how this applies to all the goofy, VC-funded startups in the Valley. Having a 5-year look ahead, women tend to realize that it’s a losing game for most people who play, and avoid it like the plague. I can’t blame them in the least. If I’d had the benefit of 5-year look-ahead, I wouldn’t have spent the time I did in VC-istan startups either. I did most of that stuff because I had no foresight, no ability to look into the future and see that the promise was false and the road led nowhere. If I had retained any interest in VC-istan at that age (and, really, I don’t at this point) I would have become a VC while I was young enough that I still could. That’s the only job in VC-istan that makes sense.