Hypochondria isn’t what people think it is, and the U.S. medical system makes it worse.

Trigger warning: discusses panic attacks and health problems. Use caution if you’re sensitive to health anxiety or panic.  

I suffer from health anxiety, often called hypochondria. In addition to cyclothymia (a milder variant of bipolar disorder) and panic attacks, I suffer from an intense fear of health problems. In fact, almost all of my panic attacks are related to obsessive health-related anxieties.

Oddly, despite my extreme phobia of sickness, I fear death very little. I’m a Buddhist and believe in reincarnation, so I believe that I have died millions of times (although I don’t remember them) and am therefore “dead” already, and it ain’t so bad. Death doesn’t scare me. I hope that it won’t be painful (the period leading up to it probably will be; I’m realistic) and I look forward to the event itself. Getting sick scares the shit out of me, not because of the pain or risk of death or inconvenience, but just due to the sheer humiliation of it.

Most people use “hypochondriac” to describe a person who is overly dramatic, needy, or attention-seeking when it comes to matters of health. That’s not me. When a panic attack starts throwing bizarre symptoms and I start belching, smacking my stomach to prevent (a completely imagined and probably medically impossible) diaphragmatic spasm that I fear might stop my breathing and kill me, ineffectively massaging my neck and shoulder muscles, or pacing the room, I really wish others wouldn’t take notice. It’s fucking embarrassing. It’s stupid. The stereotypical hypochondriac constantly believes that he is sick. Not so, at least not in my case. Cognitively, I know that (with high probability) I am not sick. I eat well, don’t drink or use drugs, and I exercise regularly. I have also (unfortunately) had enough people close to me die that I realize that bodies usually break down slowly – House be damned – and never for no reason – House gets that right – so I can ratiocinate that the slight pain at my ribcage isn’t actual angina, which I am almost certainly not having because I am a 31-year-old in good shape with no family history of early heart disease. I know that, cognitively. But the thing about obsession is that it trumps cognition. You can know that you’re OK and that the panic attack will end once the meds kick in, and you are basically lucid, but intrusive thoughts about ambulances and hospitals and uncaring medical professionals and life-wrecking bills and losing jobs and relationships can’t be stopped. They build up until you have detached from reality and your body (or, at least, the signals you are getting from it, with probable neurological wire-crossings turning mild discomfort into more dreadful sensations) begins to go haywire.

Panic disorder is the ultimate troll. It is almost hallucinatory in its ability to throw bizarre physical symptoms at a person. I won’t list them, because I don’t want to give panic fuel to someone else who might be reading this, but to name one of the more bizarre ones: phantom smells. (I name it because it’s actually pretty funny, more so than thinking today is the day you discover that you have adult-onset diabetes because of a dry throat.) What the fuck is up with that phantom smell shit? If God or “the Universe” is testing me or sending a message, what the hell does the smell of fucking relish have to do with it? I think the only other condition (besides panic disorder) that causes phantom smells is a brain tumor, and I’ve had the problem for 7 years and I’m still alive, so I’m pretty sure it’s not the latter. Fucking relish. Anyway…

One TV character who is shaping up, for the record, to be my favorite mentally ill character is Chuck McGill on Better Call Saul. It’s a compassionate depiction, but a realistic one. He’s not a drooling mental patient or an invalid or a psychotic murderer. He has a crippling anxiety disorder (far more intrusive than mine) which is a fear of electricity, and has had to leave his high-paying job as a law partner because of it, but (as of Episode 2) he’s lucid, smart, morally decent, and likable on all counts. In the mentally ill population, he’s a “silent majority” example: one who has a stigmatized illness, but with full intelligence and moral decency intact. Admittedly, “full intelligence and moral decency intact” is not how most people view mental illness; that is, largely, because most people associate mental illness with the most visible and extreme examples: (1) people who are so far gone that they can’t function in society at all, and (2) substance abusers, who are an atypical set for a number of reasons. I’d actually guess that the “silent majority”, even with stigmatized illnesses like bipolar disorder, is well over 90 percent. The stereotypical Hollywood manic-depressive goes on spending sprees, becomes sexually promiscuous, does a bunch of drugs, or gets into fistfights. When I go hypomanic I can be found at 1:00 am… reading. Or writing. Or coding. I call this the “cerebral subtype” and while it has its dangers (sleep deprivation can exacerbate hypomania) it does not make a person like me morally dissolute. It makes me… slightly groggy the next day.

The general population, in my view, doesn’t like to accept the reality about “mental illness”, which is the term we use for neurological diseases whose symptoms involve cognition. First, these are mostly “boring” health problems like all the other ones, which present challenges and can be extremely painful and disruptive, but rarely change the moral character of a person. People wouldn’t deign to ask whether a person with migraines or diabetes “can handle” a difficult job, but that’s a common question asked of people who’ve dealt with anxiety. These “mental” illnesses don’t send a person on a one-way journey into insane la-la tinfoil land. They don’t, in general, make a person “crazy” in the sense of being impulsive, delusional, or dangerous. They are malfunctions in an organ that, while extremely powerful and resilient, exists in a stew of complex organic chemicals and operates according to an electrical protocol that we just barely understand. Most mentally ill people (whether we’re talking about bipolar disorder, depression, or anxiety) are surprisingly “normal”: again, the silent majority. Why is there a resistance to this idea, in the mainstream? Because, in general, people want to believe that painful things won’t happen to them. “If I don’t smoke, I won’t get lung cancer.” “If I take 27 vitamin pills per day, I’ll die at age 109 in my sleep.” “If I’m not a crazy person, I’ll never have a life-altering depressive episode.” Sorry, but all of those beliefs are false. Healthy choices alter the probabilities quite favorably, and most people who reach age 25 without a mental health issue are in the clear… but there are no guarantees.

This is why (as of Episode 2) I find Chuck McGill so interesting. He’s not “crazy”. Everything he says has value. He’s an intelligent and good man. He also happens to suffer from a severe psychiatric illness. It seems paradoxical in light of our expectation that mentally ill people be “crazy” (in the drooling, “mental patient” sort of way) but it’s actually pretty normal. I don’t know how Better Call Saul intends to develop him, but this is one of the more honest portrayals of mental illness that I’ve encountered. The tragedy of these diseases (in their most severe forms; mine is relatively mild) is that they afflict normal people, not “crazy” ones. There is such a thing as “crazy”, but it’s mostly orthogonal to mental illness. Religiously motivated suicide bombers are crazy, but I would wager that quite a large number of them suffer from no biological mental illness and that, at the top of terrorist organizations, people with mental illness (except psychopathy) are very uncommon. Evil is real and not the same thing as mental illness.

Back to panic: a true panic attack – and I’m not talking about the low-level yuppie anxiety attacks that come from a deadline and too much caffeine – is a venture into something like “crazy”, but paradoxical in that the panicking person is, in fact, terrifyingly lucid. A person with that much adrenaline is, in some ways, in peak physical function (despite mental distress). Facing off against a smilodon, one would want that “fight-or-flight” response. It’s when it’s triggered without cause that we call it “a panic attack”, because it’s so pathological against the backdrop of modern life, in which mortal danger is rare but social protocols must be followed. Acute panic tends to last no more than three minutes, although the fatigue and anxiety that can exist afterward can spawn another wave of panic, leading into an episode that can last (at worst) two or more hours. Panic is almost hallucinatory; it wouldn’t be inaccurate to describe it as a (short-lived) bad trip on a drug that one didn’t voluntarily take, and never really wanted. It is almost admirably creative in its ability to take mild discomfort (which is inevitable with any anxiety disorder) and transform it, wholly in the mind, into acute danger. A tight shoulder muscle becomes “chest pain”, and cold extremities become “imminent hypothermia”, and the fatigue that sets in after 20 minutes of panic (if the attack hasn’t abated by then, which it usually has) becomes a threat of fainting (which any proper hypochondriac knows by the medical term, “syncope”). I know that this sounds fucking ridiculous. Sufferers of the disease would agree; it is. But the nature of a panic attack is that baseless and undefined fear reaches such a crescendo that it will (for a minute or two) override sensible cognition.
That extreme, biologically-induced fear needs to crystallize around something and it is usually something in the body– that machine that (usually) works so well running on its own, for billion-year-old reasons that science is just starting to understand, and that it is impossible for us, on a second-by-second basis, to consciously manage.

The popular image of a hypochondriac is of someone who either convinces himself that he is sick, or feigns illness because he enjoys attention and sympathy. An actual hypochondriac is the opposite. First, we have no incentive to pretend and, if anything, we downplay our suffering. It’s fucking humiliating, and it would depress the shit out of most people, and my desire with regard to negative moods is anti-reproductive; I combat them by making efforts not to spread them. It’s the one thing I can do. Depression and anxiety are not physically contagious and my job is to prevent social contagion. If there’s any danger of error in a hypochondriac, it’s that we may (later in life, when life-threatening physical illnesses are more common) misinterpret dangerous health problems as just panic attacks and not seek care. Second, we generally do not frequent ERs when we have health-anxiety-induced panic attacks. Contrary to the image of a hypochondriac as someone who is stupidly or delusionally convinced of an illness he doesn’t have, we know cognitively that we are probably fine, but are overwhelmed by the intense physical symptoms and racing, uncontrollable negative thoughts. When you misinterpret (due to crossed neurological wires, not stupidity) neck tension as the throat closing up, you will fucking panic. (In fact, your breathing is fine. But you feel like you are choking and it is going to freak you out.) 
Going to an ER during a panic attack doesn’t help: you’ll spend four hours surrounded by hospital staff who resent you (“another one of these rich white pieces of shit ‘freaking out’ after a hard day at work”) and you’ll see people who are suffering from more painful and more severe health issues (panic fuel) than the one you’re having… and still not get the medical attention you need, because (a) ERs are intended for life-threatening conditions rather than subjectively terrifying ones and (b) ER doctors can’t be sure that you’re not a drug-seeker and are therefore conservative (for understandable reasons) when it comes to dispensing medicine. I’ve been to an ER twice for a panic attack (more on that, later) and it turned one of the worst experiences of my life into… an even worster experience. And yes, I made up the word “worster” because some things are so shitty that they entitle a person who has experienced them to make up words. Third and most importantly, we do not “freak out” because we want to get out of work, win sympathy, or otherwise gain favor from other people. For one thing, that doesn’t work, especially not with stigmatized conditions. Most of us hypochondriacs are type-A control freaks (after all, hypochondria derives in part from our inability to know or control the operation of our own bodies) who, if anything, love to work a little bit too much. Trust me on this: I enjoy my job, I’m good at it, and I would absolutely love to be able to work 80 hours per week (not to say that I would work that much, but I’d like to have the ability) and not lose a single minute, ever, to anxiety or panic.

That’s enough of this shit, but this rant wouldn’t be complete without an indictment. One of the most important things to understand about mental illness is that, in terms of origin, it’s usually “no one’s fault”. My parents didn’t cause this; they were great. Nor is it my fault, really. I didn’t ask to have it. Nor is it the fault of society, or of past relationships and jobs, or of present ones… for the most part. Based on genetics, it’s almost a guarantee that a person of my makeup would have struggles no matter what circumstances he landed in. It’s an interesting trade: 4 standard deviations of this fetishized quantity called “intelligence” in exchange for losing 5% of your time to painful mood and anxiety disorders. I’m not bitter about the deal, even though I didn’t make it voluntarily. To be honest, I’d probably make it again. I’m glad I’m me. It really sucks some of the time, but isn’t that true for everyone?

No one’s at fault for the fact that I’ve had panic attacks, but I’m going to throw some stones at those who’ve made the condition worse. The U.S. medical system, to put it bluntly, can die in a fucking taint fire. I’ve had good doctors and bad ones, and I continue to see the good ones despite my acquired doctor-phobia because I’m rational, but the system itself is a moral disaster.

My first panic attack came in 2008. It was scary, bizarre, and confusing. Though many come with warning and a build-up, this one hit immediately. This sudden and absolutely terrifying “mystery” health problem involved vomiting, tunnel vision, apprehension, and shaking. It came on at 2:37 pm. I tried to drink water and it was physically impossible to swallow. At 2:41 pm, I was white as a ghost but lucid. At 2:46 I began vomiting and screaming (in front of work colleagues). At 2:50 I was fine. Around 3:00 an ambulance was called and I arrived at an ER at 3:15. I saw a lot of people who were suffering, and worried about that being my immediate future, so I had a couple of anxiety waves (none reaching the level of the original one). Around 7:15, I got about 72 seconds of contact with a doctor who diagnosed it as a panic attack. So, that is what a fucking panic attack is. See, I’d thought I’d had “panic attacks” before but, in retrospect, those were mild anxiety attacks. The difference is in degree. If anxiety is sugar, panic is coke.

The second one was worse. The first panic attack is scary but might just be a one-off. The second comes with the “yep, I’m now going to be a psychiatric cripple for a while” realization. It came a week later. Most of my panic attacks come in the late afternoon, but this one came around 1:00 in the morning. It made sleep impossible (exacerbating the illness) and rolled along for about 22 hours. A typical panic episode has 3-4 peaks spread out over 15-30 minutes. This one must’ve had 100 peaks. I was exhausted, dry-heaving, unable to keep food down. At random times during that day, my vision would suddenly go blurry, or I’d have an intense whole-body tingling, or I’d feel weightless (not in a good way, but like one is leaving the planet forever but will never die). Finally, at 6:00pm that evening, the girl I was living with forced me to go to the ER. I arrived at 6:38pm.

I was living in Williamsburg. I like Brooklyn but dislike Williamsburg: there is an intense negative energy there. It was full of young people who were arrogant, flaky, and full of bullshit Burning Man drug-wisdom that can combine with the “openness” of one’s mind upon acquiring a new disability to make you more scared of what’s going on than you should be. (“You’re entering a new spiritual plane!” vs. “You’ve developed a treatable condition and, if you take your meds and pursue cognitive-behavior therapy, you’ll be able to hold down a job and function normally within 6-12 months.”) I bring this up because, while Williamsburg is an affluent part of Brooklyn, the hospital that we chose to go to was… not. It was in the ghetto, and it was badly run.

I learned this later: when you present with a panic attack, physicians are supposed to run a battery of tests to rule out other conditions that can produce similar symptoms. This is good for two reasons. First, although they are rare in 24-year-old males, there are far more serious health conditions with similar symptoms that ought to be excluded. Second, it lets you (as patient) know that you are objectively healthy. Panic attacks are not nearly as scary when you are able to convince yourself, 100%, that “just panic” is what they are. If every panic attack felt like “just a panic attack” they wouldn’t be scary. It’s their weird-ass inventiveness at coming up with new symptoms that makes them terrifying. Anyway, that didn’t happen at my first ER visit, so I demanded it at my second. I went to triage and said, perhaps with some exhaustion due to 17 hours of mental anguish, “I know that I’m just having a panic attack but I want you to run all the requisite diagnostic tests and tell me what the fuck I have to do to fix my fucking brain.” I may have been a bit pushy, given the state that I was in. The staffer didn’t like this. He didn’t like me. I was a white kid living in Williamsburg with a $100+k-per-year job at age 24. He probably assumed that I had blitzed my brain on designer drugs (I hadn’t). He dumped me in a psychiatric ER. Now that is a place where I never again want to be.

You can leave a regular ER if the wait time is unreasonable, but not the psychiatric section. Once you’re in, you can’t get out. They take your shoelaces, they take your money, and you can’t be discharged until you’ve seen a doctor (which may be, as it was in my case, more than 6 hours later). There was a loud television, and what struck me in the hypersensitivity of acute panic was how negative the content was: commercials designed to exploit envy and insecurity, some episode of some shit about teenagers being horrible to each other out of boredom. Most people (including myself, in normal moods) can be exposed to that low-level negativity and not be affected by it; it’s just crass entertainment. But it is fucking irresponsible to blast that shit, at full volume, into a crowded psychiatric ER with 14 people and 12 chairs. Most people in psychiatric ERs are recovering from panic attacks (and psychiatric ERs are probably one of the worst places to recover from anything); they are hypersensitive and should not be jostled by the negativity of the world. I mean, for fuck’s sake… pretend you actually care about these people instead of just wanting to drown them out with TV noise.

During that six hours, I heard a number of screaming matches between staff and patients, and while one patient stood out as particularly loud and pugnacious, many of them did not deserve the harsh treatment they got. I, for my part, was treated well because I mustered the presence of mind to learn the rules and follow them. Sit down (on the floor, because there weren’t enough chairs) and don’t scream or piss anyone off. Be polite when you need to use the bathroom. Definitely don’t say, “I could get Xanax on the street instead of waiting for you assholes.” (The woman who screamed out that line must’ve added a few hours to her wait. Also, you’ll never get a benzo at an ER, even if you actually need one. Also, there’s a cop standing right there at the door, and while he does have better things to do than give a shit about such things being said, show some respect for the law.) I followed the rules, put up with my six-hour wait, tried to read although it was impossible to concentrate, and watched people with far more serious mental health issues suffer. Because of my own state, my most prominent thought was, and I’m ashamed of this now, “I hope that’s not my future.” Turned out, it wasn’t. But I’ll get there.

I got to see a doctor (the one doctor, on a Monday) around 1:00 in the morning. For all the horseshit of that ER and hospital, he was pretty good. He was surprised that I ended up in a psychiatric ER (and apologized for the fact) and he explained to me the physiology of panic attacks and, while he couldn’t prescribe, he gave me a referral.

There’s one more bit of the story that I have to tell. A month later, I went to a specialist for something else, and he discovered that it wasn’t a “phantom” health problem (or hypochondria) that was triggering the panic attacks. At least, it wasn’t then. I’m now at the point where just reading about a disease can give me a panic attack, but at that time, I had a real health issue. The difficulties I’d been having with breathing and swallowing were caused by a bacterial plaque that had formed in my throat after a bout of flu (that I, being stupid and macho and 24, tried to work through) from which I hadn’t properly recovered. The chest pains that I’d been having were LPRD/GERD (acid reflux) triggered by flakes of the plaque breaking off and fucking up my guts. The derealization and hallucination I’d been experiencing were not a brain tumor but typical of high-level panic attacks (thankfully, I rarely have them now). Once discovered, that problem was easy enough to cure… but I spent a month with a giant bacterial plaque in my throat. I think that even normal people would get panic attacks after having a nasty motherfucker like that living inside them. But I digress.

So why do I indict the American medical system? From my two ER visits, I learned (perhaps in error, because I don’t think that all ERs are bad; I’ve just never met a good one) that emergency rooms will resent and ignore me. This is fine, because an ER is a terrible place to go during a panic attack, and if I ever have a real ER-worthy condition, the odds are that I won’t be conscious. I was eventually able to find good doctors (a psychiatrist for the panic, which persisted even after the throat condition was cured; and an ENT in Chinatown who managed to figure out what was actually wrong with me, physically speaking) but I had to seek out the specialists and hope that my insurance would cover the visits. I was my own Dr. House. The worst bit was the nightmare of insurance. Let me put it in no uncertain terms: U.S. health insurance is a fucking scam, and while “Obamacare” (PPACA) has improved the situation, the basic facts of it remain. In a way, health insurance is the most brilliant swindle there is. First, it picks a great target: sick people. When you rob sick people, they’re less likely to fight back or kick your ass or throw a year of life (when many of them have not so many years left) into a lawsuit. The problem with robbing sick people is that most of them don’t have any money, because they tend to be old and unemployed. So they’re soft targets, but with little meat to chew on. Health insurance is brilliant, as far as ignominious crimes go, because it collects premiums while people are well, cash-rich, and generally young… and then denies promised care when (in general) they are too old, sick, and poor to fight back. It’s like a robber with absolutely no sense of ethics discovered time travel! It’s fucking evil, but give credit where credit is due. Health insurance is an amazingly inventive form of theft.

If you’re not from the U.S., you cannot imagine how bad our health care system is. The quality of care, when it is delivered, is, I would say, spotty. We do have world-class researchers, and there are many individual doctors and nurses who are excellent. I wouldn’t take that away from anyone. But our hospitals are depressing and dangerous places where super-bugs breed because our government doesn’t have the spine to disallow the (cruel and needless) abuse of antibiotics in fucking factory meat farms. Relevant to this topic, our billing/insurance system – even if you’re insured, you’re likely to face enormous out-of-pocket costs if you get seriously sick – compounds the stress of illness and has been a contributing factor in thousands of deaths every year. It’s terrible. I’d call it “Third World” but it’s honestly worse. It’s one thing not to have the resources, as Third World medical systems often do not. It’s another to be rich in resources but to be a greedy fucking asshole. If there is a hell, I’m sure that the architects of the U.S. health insurance system are going to receive punishments that’d make Dante faint.

Health insurance may seem to be an orthogonal, after-the-fact issue when one is talking about panic. In fact, it taps into what, I think, is at the core of panic. What is it that one truly fears during a panic attack? I’ve said this before, and I’m not bullshitting: I don’t fear death. At least, I don’t fear it in the abstract. It will happen to me and, while I would prefer for it to take its time, I am at peace with it, and I think that most people (including most panic sufferers) are. This body will turn into a corpse, I will pass into another state of existence, and (if reincarnation is true) I will re-emerge as a person who will probably never hear the name “Michael O. Church”. To me, the most comforting thought is that death and panic are opposites. Death is an end of this life; we don’t know what follows it, but we know that we ain’t here anymore. It’s impermanence. By contrast, in panic there is a fear of permanence or finality or stuckness. It’s not that the suffering (physical symptoms of health problems one does not have) is intolerable, but there is a sensation that it will never end. (Of course, it always ends.) Sometimes, there is the fear, “I will die like this.” In fact, panic attacks are non-lethal. I am not a man of steadfast faith, but I feel comfortable in the belief that whatever being dead is like (and, of course, I don’t know what it is like) is quite different from a panic attack, and probably much nicer.

I don’t believe that panic is actually about death. I think that it’s about humiliation and disempowerment and, finally, abandonment. So let’s talk about those three fears. Are panic attacks humiliating? A little bit, but most people in the vicinity of a person having an attack are not going to have the intense focus on the event that the sufferer does. I’ve had panic attacks in public and I doubt most people remember them. Are they disempowering? They can be, and some people become shut-ins if they get really bad, but the truth is that a person who is flooded with adrenaline is actually at the peak of his physical power (although he will be utterly exhausted when the adrenaline wears off). Panickers fear “losing control”, but the adrenaline’s purpose is to make sure they have total control if any real danger should present itself (this entails a loss of non-essential control, and that produces many of panic’s trademark symptoms). As far as disabilities go, I think that panic attacks are among the less disempowering, unless they become extremely frequent.

So then, let’s talk about abandonment. I think that it’s something that all of us, and even those who seem to be self-reliant badasses, fear. One can hold a cognitive belief that one is owed nothing by others in the world and not suffer for it. Many people, when they acquire disabilities (such as my relatively mild one) attempt to minimize the disability’s impact on others and be as self-supporting as possible. That’s all fine and good. However, I think that we, as humans, have an ancestral terror with regard to abandonment. We’ll be self-reliant as much as we can, but we need to believe that others will care for us if we are suddenly struck down. I think that most people are OK with the concept of eventual death, but the idea of being left to die, when they could be saved, strikes a primal chord. This, of course, gets to why there is, in many, a more bitter hatred toward health insurance companies (who murder by inaction) than there is toward cancer (which, though not a conscious organism, does the actual killing).

Now we have the U.S. medical system (and, most relevantly, these monstrosities that we call insurance companies) in our crosshairs, because abandoning people is what they do. Doctors don’t; if anything, they are eager to save lives whenever possible. But the rest of the system conspires to deny care, push people away, and let sick people die on someone else’s doorstep.

See, it was irritating that I spent 4 hours in an ER, waiting to be told that my life wasn’t actually in danger. And spending 7 hours in a psychiatric ER (a prison, in essence) when I didn’t need to be there was a pretty miserable ordeal as well. That shit, though, is small potatoes. I’m OK. (It wasn’t that way for this woman, who died of deep vein thrombosis after a 24-hour (!!!) wait in a psychiatric ER’s waiting room.) Then I had to deal with billing, and insurance, and insurance run-arounds, and denials of care that were explicitly contrary to law… for years. Having to leave my job, I had to shell out for COBRA, only to have basic claims denied for arcane reasons that were clearly illegal. (“What are you going to do, unemployed, sick person? Sue us?”) That was 2008. Luckily, I had the good sense to find physicians who’d accept fair rates, and I ended up OK, but I also knew that if I did develop a life-threatening health problem, I’d be at the mercy of some absolutely horrible organizations.

It was probably 2010 before I recovered to the point of being traditionally employable (and a bit longer before I had the guts to leave the crappy startup I was at) and there is a large class of jobs that is probably out of the question forever. I’m 31 years old and quite functional (as a programmer, I’d say that I prefer to be “purely functional”) but being a hot-shot trader is pretty much not in the cards. Anyway, let’s talk about 2008 and 2009. I took a job at a pre-A startup, knowing that I still wasn’t well enough to deal with an 8-hour day in a typical tech office. (Actually, I’m surprised that normal people can withstand 8 hours in an open-plan office. Noise is one thing, but being visible to so many other people is horrendous. Every worker ought to be entitled to a barrier at his back.) At this startup, I had no health insurance. You know what’s worse than having a hypochondriacal panic attack? Having a hypochondriacal panic attack and knowing that you have no health insurance, which means that all these health problems the mind is inventing could actually lead to disaster. To put it bluntly, the US healthcare system took a period of my life that should have been one of recovery, and made it one of continuing stress and unraveling. It would not surprise me if I were diagnosed with PTSD based both on the period during which I was underinsured and received poor care, and on the panic-onset recovery period (late 2008-2009) during which I was uninsured.

I recovered. The panic attacks are pretty rare now. I have a good job, I’m married, I have two beautiful cats, and I even have decent health insurance – well, I think so; of course, the only way to actually test your health insurance is to become seriously ill – and can access decent doctors for my continuing medical needs, which are relatively low in expense and volatility. I have, for the most part, beaten this motherfucker. That said, I still have the attacks on occasion. Maybe it’ll be a benign heart palpitation or stomach pain that sets one off. Maybe it will be a bad memory of the dangerously inept medical treatment I received in the past that spills over into a flashback. These are things that I shouldn’t have to deal with. I am fucking sick of the fucking panic attacks, and I am fucking sick of living in a society that thinks that it is OK for hospitals and insurance companies to take sick people and, out of a level of greed that even the robber barons would consider abhorrent, stress them out and fuck up their lives even further.

I am, and this should be obvious, disappointed by the progress achieved by President Obama on the healthcare front. Did he improve the system, incrementally? Absolutely, but not by enough. Getting rid of “life caps” and shooting down the scam plans was a good thing, no question. That said, health insurers will invent new ways to fuck people over and in 5 years we’ll be back at the same old shit. The system is too rotten to be improved by any incremental means. You can’t transplant new organs into a patient whose body is 85% cancer. At that point, it’s well past over. We need a single-payer or public-option system, and the existing private insurance companies need to die. Right now we have a system in which the doctors work overtime to heal and fix people, but the hospital billing departments and the insurance companies work overtime to stress them out and re-sicken them. It’s ludicrous. It’s like paying one person to dig holes and another to fill them up, but with more severe ethical ramifications because people’s health is at stake.

I write this not as a victim of a rather boring (to tell the truth about it) condition, because I am no victim. At my worst, I was no sicker than most people will be at their worst. Instead, I am a courier, and the message is clear: destroy the current, failing, morally execrable system, and build something new.

Why high-deductible medical insurance often doesn’t do what it’s supposed to.

“There was a friend of mine growing up, call him Tom, whose father was a health insurance executive. Once a month or so, he’d come for dinner and sleep over because his Dad was just in a foul mood, would go upstairs, not be able to cook, and not want to talk to anyone. I asked Tom why his dad had so many bad nights, and he’d explain that his father was a health insurance executive. I didn’t get it at first. Finally, when Tom was about 16, I asked him to explain this matter further. What causes his father’s bad nights? Tom laid it out straight: ‘Those are the days when Dad saves a man at work.’” — Unknown American origin.

As the medical insurance and healthcare picture in the U.S., despite the best intentions of at least a few left-leaning policymakers, continually gets worse over the decades, it’s becoming common for health insurance plans to have high deductibles, sometimes as high as $10,000 for a family if one needs to go “out of network”. Moreover, given that health insurers will just decide not to cover things because some college dropout or failed-into-the-dark-side doctor decides that a treatment is “not medically necessary” (against the word of an actual fucking doctor) or a “lifestyle” treatment, even “out-of-pocket maximums” can’t always be trusted. Being “insured” means less by the day.

All of this said, for young and relatively well-off people, these high-deductible plans with HSAs seem like a good deal. On paper, taking one can be a reasonable bet, and if there were a way to guarantee that they covered all medical expenses, I might agree. If you have a few thousand dollars or more in the bank, and you’re not likely to get sick, then you’re probably only giving up a few hundred dollars in expectancy by taking the high-deductible plan. So what goes wrong? What is the unexpected and systemic issue with high-deductible plans?
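To make the expectancy claim concrete, here’s a minimal sketch with hypothetical premiums, deductibles, and illness probability; none of these figures come from real plans.

```python
# Hypothetical figures, purely illustrative -- not real plan prices.
def expected_cost(premium, deductible, p_sick):
    """Expected annual outlay, assuming that with probability p_sick
    the insured gets sick enough to pay the entire deductible."""
    return premium + p_sick * deductible

# A low-deductible plan: higher premium, small deductible.
low_ded = expected_cost(premium=6000, deductible=500, p_sick=0.10)
# A high-deductible plan: lower premium, $5,000 deductible.
high_ded = expected_cost(premium=4800, deductible=5000, p_sick=0.10)

print(low_ded - high_ded)  # 750.0: high-deductible wins by a few hundred dollars
```

Note the buried assumption: the deductible dollars are presumed to buy care at the same negotiated prices an insurer would pay. That assumption is exactly where the trouble starts.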

Libertarians like the idea of high-deductible plans insofar as they encourage patients to respond to economic signals when choosing treatments. While this is a fine idea on paper (on its own terms, that is), there are a number of issues with it. First, free markets work well at solving a large number of pricing problems, but healthcare has extreme time behavior that other markets don’t. An issue that costs $500 to treat now might cost the patient, or society, $100,000 in a year if untreated. Markets work best when short-term signals reflect long-term conditions, and poorly when there’s a severe discrepancy between the two. Second, there’s a huge information asymmetry for patients, who simply don’t know enough to make informed decisions. Most patients would do better to trust their doctors than to try to make every single medical decision for themselves. This means that exposing patients to “price signals” is at best pointless and, at worst, dangerous. Due to the already-mentioned time behavior of most medical problems, by “dangerous” I also mean “expensive”.

What goes wrong with high-deductible plans? It’s not that deductibles are inherently a bad concept. They apply to auto insurance policies and are generally pretty harmless. The problem with high-deductible plans is this: while insurance companies are trypophobia-inducing clusters of assholes, the “good” news is that they’re assholes to hospitals and medical billing departments as much as to patients, and they have leverage, and they twist arms, and they get prices down. The result of this is that medical bills assessed to fully insured people are about a third as high as those assigned to the uninsured. The medical industry has high fixed costs, and no one knows what a service “should” cost, and uninsured or underinsured patients are so unlikely to pay (and, quite often, unable to pay) that billing departments will just plain price gouge. It’s ridiculous and perverse, and it’s questionable whether it should even be legal to set fees after a service is rendered. Hotels, restaurants, and transportation agencies have to set a price before the consumer makes a decision, but hospitals get to make up numbers after the service is rendered, resulting in absurdities like $250 charges for a “mucus collection system” (in non-asshole language, a Kleenex). The only check against this is the health insurance bureaucrats. While they’re clearly motivated by corporate greed rather than good intentions, this class of people indirectly benefits policyholders by knocking prices down and reducing premiums.

If we accept that insured patients pay medical bills indirectly, then at least the insured patient has an asshole on his side in negotiation with medical billing departments. The insurer will say, “accept this price or you’re ‘out of network’ and will get fewer patients”. As an individual, though, no patient can say “reduce the damn bill or I’ll never get appendicitis in your ER again”.

The problem with high-deductible plans is that, when a young person insured under one gets sick and incurs a mid-sized bill (say, $1,500), the insurer has no incentive to engage in the arm-twisting (arm-twisting that is directly responsible for slashing insured patients’ bills by 60 to 80%, and that you will miss dearly, should you have to pay a medical bill directly) that they absolutely would do if they, as insurer, were paying the bill. (This is different if the insured person is frequently sick and likely to overflow the deductible on a regular basis; but until recently, people like that couldn’t even get insurance.) Don’t get me wrong: I’m not pro-arm-twisting in general. I’d like to see doctors and nurses and medical technicians fairly compensated, not driven to the bottom. In fact, I’d much prefer to re-join the First World and replace our rotten system with a public-option or single-payer system. I’m only saying that, as an individual, I’d prefer to have an asshole arm-twister negotiating my bills down rather than not have one.

High deductible health insurance would be a reasonable idea, and appealing to high-income young people like me, if there were some way to guarantee that the insurer would negotiate just as aggressively as if the deductible were zero and the insurer were paying the bill in its entirety. Unfortunately, I am not aware of any way to enforce that.

There is no “next Silicon Valley”, and that’s a good thing.

I recently moved to Chicago and, a couple weeks later, found myself reading this article: Why Chicago Needs to Stop Playing by Silicon Valley’s Rules. I agree with it. I also want to speak more generally on “the next Silicon Valley”. It doesn’t exist. The current Silicon Valley is succeeding in some ways ($$$) and failing in others (everything else), but the truth of it is that it’s an aberration. It has as much staying power as the boomtowns surrounding North Dakota oil. Trying to replicate it is like attempting to create one of those superheavy chemical elements that last for 50 nanoseconds, but less interesting and far less cool.

I’m 31 years old, which is about 96 in Silicon Valley years, and I’ve seen a lot of the country and world, and I’ve come to the conclusion that “the next Silicon Valley” is a doomed ambition because it’s a pretty fucking lame one.

Rather than explain this, I think that a picture really is worth a thousand words here. So let’s look at some inspiring, creatively energetic cities. These are the sorts of places that inspire ordinary people to reach for the extraordinary, instead of the reverse.

New York:

[photos omitted]

Okay, so now let’s take a look at Silicon Valley.

[photos omitted]

I think my point is made by these pictures. There is a sense of place in the world’s great cities that just isn’t found at 5700 Technology Park, Suite #3-107, Nimbyvale, CA 94369.

Why no location can electively become “the next Silicon Valley”

I think the pictures above tell the story well. Becoming the next New York or Budapest or Paris or Chicago is a worthy vision, although any city will develop its own identity more quickly and more successfully than it can replicate another. Becoming the next Palo Alto is fucking lame. Now that the cherry orchards are gone, the only thing that the Valley has is money, and “I just want more money” is a pathetic ambition that leads to failure. Money has to come from somewhere, so it’s worthwhile to understand the source of the money and whether a region’s success is replicable (and desirable). Silicon Valley is rich because it’s a focal point for passive capital. This capital, drawn from teachers’ pension funds and university endowments, gets funneled through a machine called “venture capital” that is supposed to throw its money behind the most promising new businesses. Yet for reasons that most would find unclear, those funds tend to be directed toward a small geographic area. Now, the passive capitalists don’t especially care where their money is sent, so long as they get good returns. If the best business decision were to put most of that money into Northern California, that would be easily accepted by the passive capitalists, even if they live in other places. While an Ohio State Police pension fund might ideally prefer that some of the jobs created by their passive capital be created in Ohio, they’ll gladly see their money deployed wherever it gets the best returns. That means that the extreme concentration of deployment in California would be completely OK– if it were justified by returns on investment. However, venture capital has been an underperforming asset class for years, and there’s no sign that this will change, because VCs are incentivized to optimize for their personal reputations and careers rather than their portfolios, and that favors the behavior we see. With returns this abysmal, perhaps the Palo Alto strategy ain’t working. Perhaps this extreme concentration of passive capital, creating jobs in already-congested places where ever owning a house is extremely improbable for people who do actual work, is pathological.

My sense on the matter is that Silicon Valley is pathological, hypertrophic, and innately dysfunctional. Talent and capital like to concentrate, but not necessarily in that specific way, and not in such heated competition for resources with the existing economic elite (whose values are at odds with those of the most talented people). While it starves the rest of the country of attention and capital, Silicon Valley is past congestion and suffering for it, in terms of traffic and land prices. On paper, it’s set in a beautiful geographic area, and if you can get away from everything created by humans, California actually is quite pretty. That said, 22-year-olds without cars aren’t going to be impressed by Mountain View’s 200-mile radius when everything they actually see on a daily basis is an ugly, suburban shithole that they pay far too much to look at. Talented people do want to be around other talented people, but they prefer diversity in talent, not rows and rows of tech bros (who often aren’t very talented, but that’s another story). Because of talent’s natural draw toward concentration, and given the U.S.’s tendency toward high geographic mobility, I don’t think that this country will ever have more than 15 or 20 serious technology “hubs”– and even that would be a stretch, given that we currently have about five– but I do think that it’s possible to have a distribution that’s better for everyone involved. The current arrangement is bad for “winning” locations like Northern California, bad for the losing geographic areas, and bad for pretty much everyone individually except for extremely wealthy venture capitalists (who benefit from a reduced need to travel) and well-placed landlords.

As it exists, Silicon Valley probably shouldn’t. It’s a boomtown with ugly (and expensive) housing that wasn’t built to last. It has what could be a decent (if sleepy) almost-city in San Francisco, recently destroyed by the unintentional conspiracy of NIMBY natives (who create housing supply problems) and VC-funded new money. It is rich, on paper, and will be for some time. But replicating accident and pathology is a pretty lame strategy when directing the fate of a new place. The causes of Silicon Valley’s richness and (mostly former) excellence are more worthy of study than the superficial factors, like weather or workplace perks or representation in the entertainment industry.

What, then?

While “next Silicon Valley” is a lame ambition, there is something to that geographic region that makes it attractive to talented entrepreneurs. It provides a path to corporate hegemony that, at least by appearance, combines the “cool factor” of starting a business with the low risk of an investment banking or management consulting track. It encourages risk-taking and a superficially cavalier attitude toward failure, which appeals to a certain type of young person who has never failed and who hasn’t learned yet that life has stakes. The Valley has also done a great job of rebranding the corporate experience as something that left-leaning, upper-middle-class young people can swallow. Silicon Valley excelled in technology in the late 20th century; in the early 21st, it has proven itself world-class at marketing. Since brand-making is crucial to success in the sorts of first-mover, red-ocean gambits that VC (increasingly oriented toward attempting to sit inside the natural monopolies that technology sometimes creates, rather than actually building technology) now favors, that’s not surprising.

In business, there seems to be a continuum between low-risk, slow-growing businesses and “rocket ships” that burn up in orbit 95% of the time. There’s also a misperception, that I want to combat, that utter failure of new businesses is the norm. The risk exists, but 90% failure rates (while not uncommon in the Valley) are actually pathological. The actual failure rate is somewhere around 50 percent. In fact, compared to corporate jobs, the failure rate of typical small businesses (as opposed to VC-funded red-ocean gambits) isn’t much worse. Between firings, layoffs, political messes and damaged reputations relegating a person to second-class status, non-promotability, and less-desirable projects, it seems to be a constant that about 50 percent of jobs fail within 5 years. Of course, the range of outcomes is different; starting a business has more personal downside, and more potential gain. If there’s something that ought to be fixed in the process of new-business formation, it’s the amount of financial risk borne by those who don’t use venture capital.

For low-risk businesses that are unlikely to fail, bank loans are available. However, bank funding is a non-starter in launching even the least risky (“lifestyle”) technology companies, because bank loans tend to require personal liability, which means that you can’t use them for something that might actually fail. Bank loans are great if one wants to capitalize a franchise restaurant or a parking garage, but not suitable for anything that involves making a new product. At the other extreme, there’s VC. The mid-risk, mid-growth range is, however, overlooked. For a business carrying, say, a 20-30% chance of failure and targeting 40% annual growth, there’s no one out there. Why is that?

Venture capital could be just as profitable by investing in mid-risk businesses as it is by throwing into the extreme high-risk pool. After all, if valuations are fair, then there’s just as much profit to be made investing in large companies as small ones. We’re probably not going to see a change in VCs’ behavior, though. The truth about that industry is that it’s celebrity-driven, and the VCs have a lot to gain and lose by playing the reputation game. No one cares about the difference between a 7% and a 12% annual return on investment, but there’s a lot of credibility that comes from having “been in on” a Facebook or a Google. This also explains the (justly) hated tendency of venture capitalists toward collusion, co-funding, and reliance on social proof. One might want VCs to compete with each other (i.e. do their jobs) and avoid this sort of mediocritizing collusion, but with the career benefit of being in on the once-per-decade whale deals being what it is, the incentive to spread information around (even at the expense of entrepreneurs, and of ethical decency) is obvious.
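For scale on that 7%-versus-12% remark: compounded over a typical 10-year fund life, the gap is anything but small. The fund size below is a hypothetical figure, not anything from the text.

```python
# Compound a hypothetical $100M fund over a 10-year life at two rates.
# All numbers are illustrative assumptions.
def grow(principal, rate, years):
    """Value of principal compounded annually at the given rate."""
    return principal * (1 + rate) ** years

fund = 100.0  # $100M, hypothetical
print(round(grow(fund, 0.07, 10), 1))  # 196.7 -> about $197M
print(round(grow(fund, 0.12, 10), 1))  # 310.6 -> about $311M
```

The limited partners certainly care about a gap like that; the point is that the individual VC’s career, which runs on whale-deal credibility rather than portfolio returns, often doesn’t.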

A successful business could easily be built by focusing on the mid-growth / mid-risk space, and delivering an option that removes personal financial risk while avoiding the ugliness and the aggressive risk-seeking (even at the expense of the ecosystem’s health) of traditional venture capital. That would also reduce reliance, for businesses, on the geographical advantage of Silicon Valley, which is access to ongoing capital and the perception of a liquid market for talent. It could be very profitable. It could be this mentality that builds the next ten thousand great companies. It won’t be done in Silicon Valley, however; and when it happens, it won’t come from anyone attempting to create “the next Silicon Valley”, or even cognizant of such a concept. It will be driven by people creating the first something.


I haven’t written much lately, in large part because I am trying to change course in terms of what I write about.

A change of focus?

Over the next year, I’d like to steer my focus toward more technical topics. CS 666 (software politics) is a subject that I’ve had to learn and use, the knowledge is important, and I’m glad to have shared it, and will continue to do so if I think that it’s good for the world. All that said, my heart’s not as much in it as it could be in other fields (like machine learning, language design, and even board game design, all of which are more dear to me than the MacLeod organizational model as it applies to software). It’s easy to focus on the intricacies of CS 666 and forget about the stuff that inspired us to get into technology in the first place. I swear that I didn’t join Google, back in 2011, hoping to become an expert in office politics. (Alternate summary: “I joined a Leading AI Company, and all I got was this lousy MBA’s Worth of CS 666 Knowledge.”) I wanted to level up on machine learning and software engineering… but a working knowledge of software politics is what I actually got from Google (and many other companies where I worked). At any rate, I think that I’ve put that to good use since then. I do what I can. But I’d rather focus on other stuff now.

The road to technical excellence (on which I am still a journeyman, not yet a seasoned ranger) is hard: you have to get high-quality work (which, in most companies, involves CS 666 in order to hack the project-allocation system) and be able to deploy it into the organization (ditto). Most programmers ditch the individual-contributor path in the manage-or-be-managed world of the closed-allocation mainstream, knowing that the only way to sustain sufficient advantage in the division of labor to grow and protect expertise and excellence is to gain direct control of it (“it” meaning the division of labor). They learn the political game, become managers easily once that is done, rise into the executive ranks because managerial tracks in supposedly “dual-track” organizations are always easier to climb than the technical ladders, and are lost to the field as individual contributors. It’s good for them, but not always for the world. A cynic might say that what begins as a diversion into CS 666 becomes, for many, a permanent state of distraction.

At the same time, we have this epidemic of criminally underqualified, well-connected individuals getting funded and acquired. In this frothy state, tech seems to be all about fucking distractions. I don’t like that it’s happening, and I’ve said more than my piece on it. The question I have to ask myself, continually, is whether I am making real progress, or just contributing to that state of distraction that I dislike. And then I have to ask what is best for me. Looking at the next 12, 24, and 48 months… I’ll be honest, I’d like to learn more computer science and spend less time on CS 666. There’s just a lot out there that I don’t know, and should, not only in computer science but in mathematics, the sciences, and the arts.

In truth, my status as some sort of emerging “conscience” of Silicon Valley must be considered temporary, since I don’t even live there, and have no interest in ever doing so. (What does it mean when the conscience of a place lives thousands of miles away from it?) On all that political stuff, the best thing that could happen to me would be for me to meet someone with the right vision, but who’s better than I am at pushing it through, and who doesn’t (like me) secretly wish he could study machine learning and leave the CS 666 to someone else. Ten years from now, I don’t want to be dumping execration on the moral failures of the technology industry, the way I do now. I want to see that we have grown the fuck up and solved our own problems. That will require people who are like me, but even better in battle, to take charge and start fighting.

The good news, for me individually, is that I think I just might be reaching the level of capability where the politics starts to get less intense. When you’re 22 and unproven, you’re going to have to fight political battles just to get the good projects, and to get recognized for what you’ve done. It’s an ugly process of trial and error that I’d like never to repeat. Now I’m 31, more eminent in terms of talent, and would like to see myself protecting the good, in the future, rather than needing protection. Time will tell how that goes, but I’d like to finish next year with more technical articles and fewer political ones on the blog.


In other news, I’m moving to Chicago in early January. I’m quite excited about the move. Anyone who wants to get together over pizza (either kind!) and beer and talk about functional programming, machine learning, board games, or why the world would be better if it was run by cats, should reach out.

On the supposed aversion of software engineers to “the business”.

There’s a claim that’s often made about software engineers, which is that we “don’t want anything to do with the business”. To hear the typical story told, we just want to put our heads down and work on engineering problems, and have little respect for the business problems that are of direct importance to the companies where we work. There’s a certain mythology that has grown up around that little concept.

Taking a superficial view, this perception is accurate. The most talented software engineers seem to have minimal interest in the commercial viability of their work, and a rather low level of respect for the flavor-of-the-month power-holders who direct and supervise their work. It’s easy to conclude that software engineers want to live in an ivory tower far away from business concerns. It’s also, in my experience, completely incorrect. Business can be intellectually fascinating. As I’ve learned with age, new product development, microeconomics and game theory, and interpersonal interactions are just as rich in cognitive nutrition as compiler design or random matrix theory. I might prefer to study hard technical topics in my free time, in order to keep up a specialty, but I’m a generalist at heart and I don’t view business problems or interpersonal challenges as inferior or “dirty”. What’s more, I think that most software engineers agree with me on that. We’re not ivory tower theoreticians. We’re builders, and as we age, we begin to respect the challenges involved in large projects that present interpersonal as well as technical challenges.

So why are so many talented software engineers seemingly averse to the business? Why do most talented programmers fly away from line-of-business work, leaving it to the less capable and credible? It’s this: we don’t want to deal with the business as subordinates. That, stated so, is the truth of it.

There are a few who protect their specialties with such intensity that any business-related work is viewed as an unwanted distraction, and I’m glad that they exist, because the hardest technical problems require a single-minded focus. I’m not speaking (not here and now, anyway) for them. Instead, I’m talking about a more typical technologist, with an attraction to problem-solving in general. Is she willing to work for “the business”? Of course, but not as a subordinate. If she’s going to be called in to mix business concerns with her work, she’s going to want the authority and autonomy necessary to actually solve the problems put in front of her. It’s when working with the business doesn’t come with these requisite resources and permissions that she’d rather slink away and build interpreters for esoteric languages.

The stereotype is that software engineers and technologists “don’t care” about business problems. The reality is that they avoid working on line-of-business software because the position is inherently subordinate. Give them the authority to set requirements, instead of coding to them, and they’ll care. Make them partners and drivers instead of “resources”, and they’ll actually give a damn. But expect them to interact with the business in a purely subordinate role, as in a typical business-driven “tech” company, and the talented ones (who are invariably smarter than the executives shouting orders, but have chosen not to participate in the political contest necessary to get to that level) will hide from the front lines.

If a company views software engineering as a cost center, and programming as a fundamentally subordinate activity, it will find that talented programmers avoid direct interaction with the business (which will, by design, happen on subordinate terms) and the software it builds will either be of low quality or irrelevant to its business needs– because those who have the ability to write high-quality software won’t even bother to make their work relevant. However, this pattern of degeneracy (although common) should not be taken as a foregone conclusion. There are more similarities than differences between business problems and engineering problems, and it’s quite possible to give people with programming and engineering talent the incentive to learn about the business. While technical talent flies away from “business-driven programming” like a bat out of hell, there’s no intrinsic animosity between programming talent and “the business”. To the contrary, I think that people with experience solving these two classes of problems could have a natural affinity, and have a lot to learn from each other. Any such meeting has to come on terms of equality, however. If working with the business means doing so as a subordinate, then no one with technical talent will do so in earnest.

The back-channel culture, Silicon Valley’s war on privacy, and the juvenility of all of this.

One of the more execrable Silicon Valley institutions (and it’s not like there’s a shortage of moral failures in the contemporary Silicon Valley) is the “back channel” reference call. This is when a prospective employer or investor circumvents the candidate’s provided reference list and calls people who weren’t volunteered. While it’s morally acceptable for certain kinds of government jobs (e.g. in a security clearance), because national security is at stake (and because back-channel reference checking is a well-publicized part of the clearance process), this is just plain obnoxious, unprofessional, and often unethical when done for regular office jobs, where human lives aren’t at stake. It’s bad for job seekers, but also bad for the people being called, who may have never volunteered to give references in the first place.

Unfortunately, the technology industry is full of unprofessional, juvenile man-children who don’t seem to know, or care, about professional protocols. So this conversation actually has to happen, and it will happen here. But, for us as a community, it’s an embarrassment that I’m writing this. It’s like when tech conferences have to publish anti-harassment policies. Shouldn’t we be embarrassed, as a community, that not all of our members know that groping strangers is not OK? We should, and for this issue, likewise.

Why is back-channel reference checking so bad? I can think of four reasons to despise this practice.

It violates an existing and important social contract.

When someone applies for a job, there’s a social contract between the candidate and the company. The candidate is, under this contract, expected to represent her qualifications truthfully, and the company is expected to evaluate her in good faith.

A violation of this contract would be a company that has no open positions, but holds interviews to get proprietary information about its competitors. That’s not “good faith” because the candidate has no chance of being hired. Her time is being wasted, in order for the company to get information. That does happen, but it’s generally considered to be a slimy practice, and it’s hard enough for a company to keep the secret that it’s uncommon. Back-channel reference calling is another, similar, violation of the social contract. A company that extends an interview is representing that (a) it has the resources to hire, and that (b) it will hire the employee if the employee’s total packet (CV, interview performance, and furnished references) performs sufficiently well. To do otherwise is to show a complete lack of respect for the employee’s time. This implies that if a candidate is rejected, it ought to be something in the official “front channel” package that was the reason.

How much feedback should be offered to rejected candidates is, ethically, an open question. I doubt that it’s reasonable to expect an employer to take time to explain exactly where in the interview a candidate failed, because that can lead to fruitless and mutually demoralizing discussions. Many companies refuse to provide explanations, for that reason. I do maintain that the employer has an ethical obligation to communicate (sometimes passively) what stage the failure occurred at. If the candidate isn’t called back, it was the CV. If the candidate gets an interview but nothing else, it was interview performance. If the candidate is asked for references but doesn’t get an offer, then he needs to consider a different set of people the next time he gives references. Injecting other “secret” stages into this process just adds noise to the feedback. While I don’t consider companies responsible to communicate the exact reasoning behind their decision, using a process that obfuscates existing feedback is a breach of professional ethics.

For a concrete example, let’s say that a candidate gets to the stage of furnishing references and volunteers three, and they come up positive. Then a few back-channel references are called, and something negative comes back. It doesn’t matter if it’s untrue or if the person isn’t a credible source; the candidate’s probably sunk and, of course, he probably won’t be told that it was a back-channel reference that did him in. Now his relationship with three of his closest professional colleagues is needlessly and wrongly complicated.

Back-channel reference checking also has a way of getting back to the candidate’s current employer. Plenty of defenders of this practice say, “Oh, I’d never do that.” Bullshit. If you’d reject an otherwise stellar candidate based on unreliable back-channel feedback, then you’ve already proven that you can’t be trusted to be “careful” with people’s careers. Back-channeling exposes a possibly private job search (yet another violation of the social contract), and word travels fast.

I think that, in the long run, back-channel reference checking is actually quite expensive for companies. Savvy candidates, when dealing with companies that use the practice, are going to fake competing offers in order to put time pressure on employers and prevent the back-channel cavity search from happening. (It violates the social contract for a candidate to lie like that, but if the contract’s already broken, why not?) That will lead to hasty decision-making, compromise the existing hiring practices, and result in costly mistakes.

It’s a show of power and aggression.

It takes social access to get into a stranger’s past at any level of depth. People don’t like giving references unless they’ve agreed to be a reference for someone, and back-channel references are people who never knew they were references (and may take personal offense at not having been asked first, not knowing that their names weren’t volunteered). HR officials, knowing the legal risks of giving anything more, will often verify only basic details about previous employees. Likewise, most people who are asked out of the blue for a reference aren’t going to give one to just anyone; they have to trust the person asking. Back-channel cavity searches require knowing a lot of people. They’re easier for large corporations, which can bring many people to bear, or for venture capitalists who’ve been buying and selling influence for decades, but they’re pretty much impossible for the little guy to use.

When VCs claim that back-channel reference checks (currently legal, but let’s hope that Washington becomes aware of the issue and does the right thing) are critical to their business, what they’re actually doing is gloating about having the social resources necessary to conduct such investigations. It’s hard to get people to volunteer information that is often inappropriate for them to share. “Do you feel like fucking over a random stranger?” “I really want to know if Sue is of the future-pregnant persuasion; does she talk about kids a lot?” “Tom didn’t put dates on his CV; can you tell me his approximate ‘graduation year’?” “Give me a rundown of Mark’s health-problems-I-mean-‘performance’-reviews from 2008 to 2013.” “Is Angela one of the ‘political’ Native Americans or is she just like anyone else?” People don’t answer these questions when asked cold by strangers with no skill at interrogation. It takes resources (mostly trust and contacts) to get that information.

Often, the person who does the back-channel reference check will admit to doing it. When it results in rejecting the candidate, the failure is more silent, but often it results in further “conversation”, the purpose of which is to humiliate the candidate (reducing her likelihood of negotiating for a higher salary or better job role than the company is prepared to give her) under the guise of addressing “concerns”. At that point, it’s about showing utter dominance by waltzing into that person’s career, turning over all the furniture, and using the toilet without flushing. It’s to impress the target with the godlike ability to get access to all sorts of inappropriate information. It’s a way of saying, I don’t have to play by the rules, because I’m powerful enough to get away with anything.

It’s invasive.

That Silicon Valley’s back-channeling is invasive hardly needs explanation. There’s a general protocol around what is and is not appropriate for a prospective employer to research. Running a background check to make sure the person worked at the companies, and attended the schools, that she said she did? Totally OK. Finding out that she has kids, showing up at their elementary school unannounced to observe them, and bribing an unscrupulous principal into getting their academic records, in order to find out if they’re special-needs kids who might be more demanding of the mother’s time than average kids? That’s not OK. There’s a lot of information that is arguably potentially relevant to someone’s future job performance that we, as a society, have rightly decided to be off-limits in making decisions about whether to hire someone.

The “front channel” employment process, at least, imposes some accountability on both sides. The employer communicates its priorities through the questions that it asks, and thereby puts credibility at risk if those priorities are unreasonable or, worse yet, illegally discriminatory. Volunteered references are provided so the employer can validate that the candidate actually worked at the companies claimed and isn’t completely off-base about previous roles and functions within those companies. Using back-channel references is, however, about the more powerful party’s escape from accountability. To ask for information communicates that there is interest in it. To surreptitiously acquire it does not, which means that there’s plenty of room for impropriety and invasion.

It’s also uselessly invasive. The feedback is noisy. For every person with knowledge about someone and his work, there are ten with opinions. Venture capitalists and CEOs who perform these back-channel inquiries may think they’re sharpshooters who can quickly get to credible sources, but they’re not that good. They just never get feedback on their failures, because they’ll reject anyone who doesn’t come up with a perfect bill and never see that person again.

One of the reasons why asking for 2-3 volunteered references (and, at absolute most, 5) has been the standard in employment for so long is that, perhaps counterintuitively, the quality of the employees hired doesn’t improve beyond that number. The main reason to check references is to filter out unethical people who interview well, but past about 7 references, you are empirically more likely to hire an unethical psychopath. Why so? Among unethical people, you tend to have two kinds: the petty, untalented ones who make annoying messes, and the talented, dangerous ones (psychopaths, usually) who can take down a whole company. The first category can’t get references; they burn bridges, leave messes in their wake, and are generally disliked by everyone who knows them well. The second category always have glowing references. They have no qualms about making friends pose as bosses, buying references off-the-shelf from made-up companies (yes, this service exists), and “coaching” people into telling exactly the story they want. You can’t actually filter out the second category through any social-proof system, which means that, after a certain point, your best odds lie in not excluding normal people.

If you ask for 10 references, the average, basically ethical person with a normal career has to dig into his third string, and at least one of those people is going to be less impressed with him than he expected, so he’ll fail. The psychopath, however, will always pass the 10-ply reference check. That applies even more so to back-channel references, because psychopaths hide in plain sight and are great at intimidating other people into acquiescence. The psychopath might have enemies and detractors, but it’s not being disliked that ruins a person’s career; it’s low social status (and, by the way, “performance” at 95% of jobs is social status). Psychopaths make sure that any slight against their social status is swiftly punished, often having loyalists in every social sphere they’ve inhabited. So the back-channel reference check, counter-intuitively, strengthens the psychopath’s power. Some of the people called will dislike him, but not a single one will diminish his social status in any way, and this will strengthen his image as a powerful, “high-performing” person and deliver him the job. Psychopaths really are like cancer cells, able not just to evade human “immune systems” built on social proof and reputation, but often to co-opt those systems and weaponize them against the good. How to beat psychopaths is a complicated topic for another essay; the best strategy is not to attract them. One of the major reasons I champion open allocation is that such an environment is unpalatable to the workplace psychopath.
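The selection effect described above can be sketched as a toy simulation. All of the numbers here (a 2% rate of “coached” psychopaths in the candidate pool, a 90% chance that any one of an honest candidate’s references comes back positive) are assumptions for illustration, not measurements:

```python
import random

def psychopath_share(k, n=100_000, psycho_rate=0.02, p_pass=0.9, seed=0):
    """Among candidates who survive k reference checks, what fraction are
    'coached' psychopaths?  Assumes a psychopath passes every check (his
    references are staged), while an honest candidate's references are
    independent and each comes back positive with probability p_pass."""
    rng = random.Random(seed)
    honest, psycho = 0, 0
    for _ in range(n):
        if rng.random() < psycho_rate:
            psycho += 1  # staged references always glow
        elif all(rng.random() < p_pass for _ in range(k)):
            honest += 1  # every honest reference happened to be positive
    return psycho / (psycho + honest)

for k in (3, 7, 10):
    print(f"{k} references -> psychopath share {psychopath_share(k):.1%}")
```

Each extra check filters out some honest candidates (one lukewarm third-string reference sinks them) while filtering out none of the coached ones, so the surviving pool skews worse as the count grows: under these assumptions, the psychopath share roughly doubles between 3 and 10 checks.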

The invasiveness of the back-channel reference check, empirically, delivers no investigative value about what actually matters (ethical character). It drags up a lot of “juicy” (meaning “inappropriate”) gossip, though. That is why, in a world of oppressive, inane juvenility like Silicon Valley, they’ll probably never go away entirely. It’s too much “fun”, for a certain species of manchild that the Valley has given undue power, to invade a stranger’s personal and professional lives.

At any rate, reference checks aren’t actually investigative in purpose. The real purpose of reference checking is to keep the moral middle classes– people who aren’t unethical psychopaths, but probably would lie a bit to improve their careers, if they weren’t afraid of getting caught– honest. At that, it’s probably a necessary evil, and I think it’s fine for employers to ask for 2-3 (hell, even 5) references to validate that the candidate’s represented career history and qualifications are correct. If references were never checked, then people would inflate their qualifications more than they already do, and that would add noise to the job-search process. The reference check has legitimate value in verifying the correctness of a candidate’s claims. This doesn’t justify an adversarial invasion of privacy.

It’s discriminatory.

For those who rely on back-channel references, one of their favorite reasons for doing so is access to all sorts of information that can’t be requested in the “front channel” process, relating to age and health issues and pregnancy history and socioeconomic status. Off the bat, the discriminatory intent is obvious.

There’s more to it, though. Right now, in 2014, venture capitalists and technology employers have an almost pedophiliac attraction to youth, especially when it comes in the package of a sociopathic frat boy. It’s not really about chronological age. Rather, they like people who haven’t been challenged yet, with stars in their eyes and a general cluelessness about the world. They champion “failure” because it benefits the VC for the founders to take risks that would be considered irresponsible by anyone who’s had enough life experience to see what actually happens when one fails. Also, there’s a vicarious nostalgia in play: VCs want to be reminded of the time when all they worried about was drinking and getting laid, before the jobs and the kids and aging parents. Instead of being honest about their midlife crises and buying Ferraris or boats or marrying trophy wives, they’ve taken the midlife crisis in the form of a hilariously underqualified protégé (like Lucas Duplan or Evan Spiegel) made “startup CEO” by their own largesse (and, given the Valley’s culture of co-funding, connections to other investors). This, dear reader, is what “culture fit” at startups is really about. It’s about socioeconomic and cultural homogeneity, and isolation from the challenges of the real world (kids, aging parents, health issues). It’s college for people who were too socially incompetent in their late adolescence to make the most of that stage of life when they were in it, and who want a retry in their young adult (or, for investors, midlife) years.

People who’ve been challenged, and who know that there are actual stakes in life, don’t like back-channel reference checking for the same reason that they don’t like open-plan offices. If you’ve ever had a serious health problem, the added stress of having it in front of 50 other people is just intolerable. With age and challenge, people become far more competent in general but lose a bit of endurance with respect to the weird and mostly minor (but cumulative) cultural insults of the open-plan, juvenile, “startup” culture, with its lousy health benefits, blurred lines between personal and professional life, and general lack of respect for established professional protocols formed out of decades of experience. If this heightened desire for privacy doesn’t happen for any other reason, it certainly happens when people have kids. Why so? The college bubble is an artificial world whose socioeconomic homogeneity is enforced in order to create a culture of ubiquitous trust. You don’t need to worry about each person individually, because the “adult supervision” of the admissions office has already done that. The “hip” tech culture is all about preserving that youthful attitude (“I don’t need privacy because there are no really bad people around”) toward life. However, in reality, the world is dangerous and has lots of evil people in it (even at colleges, though that’s swept under the rug). When people have children, the biological and emotional need to protect another creature makes it impossible to harbor that prevailing, universal trust. If you have to protect someone else’s life, you can’t trust everyone. And open-plan offices (more specifically, visibility from behind, or “open-back visibility”) are about forcing the employee to render trust to the whole office. The same holds for back-channel references, but in that case, there’s even less consent.
If you think back-channel reference checking is morally acceptable, then you’re arguing that people should be mandated to trust complete strangers in their own careers.

The truth about privacy and protocol is that they’re not just there to protect “the weak” or people “who have something to hide”. To want privacy doesn’t mean you’re doing anything wrong. It just means that you’ve had enough life experience to know that not everyone can be trusted with all information. I’m well aware of the fact that my tone, on issues like this one, sounds unduly adversarial to many people. I don’t actually see the world in adversarial, zero-sum terms. The world ought to be mostly cooperative, and I think that it is. However, I recognize that a large number of interactions and transactions are innately adversarial, and I’m old enough to know that even people who’ve done nothing wrong have a desire for (and a right to) privacy.

Privacy and the lack of it

College is a time of life in which people relinquish some privacy because there is “adult supervision” that is supposed to prevent things from getting too far out of hand, and because people of low socioeconomic status (presumed to be where criminals come from) are generally excluded, unless they’ve been vetted heavily for intellectual ability as well as good behavior. In the college bubble, most students don’t have to work outside of their studies, and most students’ parents are in their 50s and at the peak of their careers (not 35-year-old single mothers who gave birth to them at 17, not 82 and dying). Excluding managed intellectual challenges in coursework, most of these people have never been challenged… and those who have either don’t fit in or become others’ “diversity experiences”. If this is an uncharitable depiction, let me admit: this isn’t entirely bad. It’s the “magic” of socioeconomic protection and age homogeneity that enables people who met in September to be “best friends” by October, and feeling safe to discover alcohol and sex and psychoactive drugs and politics and computer science around such people– in a world where privacy is relaxed and people get to know each other quickly– is a big part of that. The suspicion and chaos and status-assessment and busyness that characterize “the Real World” haven’t set in yet, in the sunny college bubble, and that allows deep friendships to form in a month instead of over years. Yes, I can see the appeal of being 18-22 forever.

“Culture fit” and the Valley’s worship of youth are outgrowths of this desire, which many share: to create a world in which it’s possible not to grow up. (If one wonders how the adult supervisors, like VCs, benefit by running a silly camp for overgrown adolescents, the answer is that people who aren’t expected to act like adults won’t demand to be paid like adults, and the VCs can make out like bandits. Considering that, they actually charge a bit more than college bureaucracies.) One of the reasons why the consumer web contingent now dominating the Valley simply doesn’t get people’s need for privacy is that it, collectively, is still stuck at a mental age around 23 and, more specifically, in the mindset of a certain type of 23-year-old who’s never been challenged or tested. (Obviously, I do not intend to apply the “never been tested” label to all people at that age, or at any age, since it wouldn’t be accurate or fair.) They don’t have to be white, male, young, heterosexual, childless and from upper-middle- or upper-class backgrounds but, for statistical reasons, they usually are.

Back-channel reference checking becomes, if not morally acceptable, more understandable when one realizes how juvenile private-sector technology has become. I’ve lived in the Real World and I’ve definitely had legitimate challenges: deaths in the family, personal health issues, lost jobs and even a couple of 3-4 month spells of unemployment. I’ve seen enough to know that the stakes in this life are fucking real. There’s no Dean of Students who sits down and talks the bad things out of happening. You don’t go in front of a Financial Standing Committee if you lose all sources of income; you actually suffer. As a person who has actually lived in the world, and seen what it is, and learned that it is pure idiocy to let literally anyone into one’s career via “back channel references”, I am vehemently against the practice. And I am morally right. However, there are exactly two types of people who can be ethically OK with the increasing prevalence of back-channeling in technology: (1) powerful sociopaths who’ve decided that the rules no longer apply to them, because their getting away with something is proof that it’s OK for them to do it, and (2) clueless naifs who’ve never suffered or been challenged… yet. To people in college mode, a back-channel reference check is no different from asking, “Hey bro, how many guys do you think Monica has slept with?” That is, it’s very inappropriate, but not the sort of thing that would lead to a seven-figure lawsuit or a jail sentence.

More generally than this, I think that people become more private and discerning as they grow up. When you’re six, you’ll be friends with that “funny” kid who puts dog poop on a stick, holds it in front of his face, and laughs at it. When you’re 31, if you’re like me, you’ll have a hard time making conversation with people of average (or even 90th percentile!) taste and cultural awareness. This isn’t all good. I spend a lot of time trying to figure out how to crack barriers in a meaningful way and, given my average-at-best social ability, don’t always know how to do it. I’m glad that there are people like the 20-year-old (as of 2004) Harvard student Mark Zuckerberg who can get insights into these problems and, at least in some partial way, solve them. College is too “open” for that mentality to work outside of a socioeconomically homogeneous bubble– we’d have to become a Scandinavian socialist country for the college mentality to work outside of an academic bubble, and I don’t see that as politically palatable in this country– but the “adult world” is a bit too closed, cynical, and cold. In many ways, I see the appeal of the former, and I think the ideal point is somewhere in the middle. (The world would be less closed and frigid with less socioeconomic inequality, but I’m not anywhere close to having control over that variable.) That said, I’m a realist. Trusting strangers unconditionally in one’s career is just plain stupid at any age. Not everyone in this world is good, it doesn’t take much effort or luck for the bad people to make themselves dangerous, and the stakes are too fucking real to pretend they don’t exist.

We, the fully-fledged adults, might seem cynical, stodgy, and adversarial when we tell Silicon Valley man-children that their fratty back-channel reference calls aren’t OK, or that they should stop putting sexist humor in slides about their products, or that open-plan offices are a back-door form of age and health discrimination that we find crass, or that they need to stop betting their companies on the (extremely rare) clueless young thing who gives his all to a company that gives him only 0.05% of the equity (because, honestly, most of those people aren’t talented, just young and eager). We’re not. We’re just experienced. We know that privacy, protocol, and propriety are actually important. We’ve seen people suffer needlessly due to others’ stupidity, and we’ve learned that the world is a complicated and difficult place, and we’re trying to defend the good, not just against the evil, but against the much larger threat presented by the mindless and immature.


The predominant culture in Silicon Valley has moved against privacy, with personal and professional lives bleeding together, frat culture (and its general disregard for propriety) invading San Francisco, and back-channel communication becoming part of the hiring process, all in the name of “culture fit” (preserving the college-like bubble). This is not the only culture in technology, and it’s certainly not moving in unopposed. Sadly, it does seem to be winning. Demanding privacy at a level previously taken for granted (even asking for quiet working conditions and a barrier at one’s back) has become unusual, isolating, and embarrassing. The attitude that it often meets is, “Why do you need privacy if you’re not doing anything wrong?” Only political naifs consider that question to be remotely reasonable to ask. Everyone needs privacy, because the world is complicated and dangerous and trusting the whole world with all of one’s information is just reckless. This isn’t Stanford. This is real fucking life.

The end result of this is an exclusionary, insular culture of an especially pernicious sort. Silicon Valley’s oppressive mandatory optimism and its contempt for privacy and those who demand it aren’t just classist and sexist and racist and ageist. In fact, Silicon Valley doesn’t have a coherent desire to be any of those things. It’s about a rejection of experience. To live in that sort of college-like bubble, you have to reject the knowledge that not all people in the world are good. You have to accept intrusions against your privacy and person like open-back visibility at work, micromanagement in the name of “Agile”, and back-channel reference calls. You have to have never been challenged or tested, or at least seem like that’s the case. However, that puts us, as an industry and community, far away from the realities of human existence. It makes us not only ethically and professionally reckless, as our use of back-channel references shows, but also out-of-touch and dangerously oblivious to what we are actually doing to the world.

We have to take stock of this and change course. No one else is going to do it for us. It’s up to us to lead and, to do that, we have to grow the fuck up.

Leadership is not a stepping stone

This line of thought was inspired by a tweet from Carter Schonwald, and by my reply to it.

These aren’t new ideas, from me or in general. Savvy people in technology have begun to realize that much of what’s getting funded isn’t deep, infrastructural technology, but the audition projects of well-connected, mid-level product managers trying to make their case for “acqui-hire” into a junior executive role at a large corporation, or, better yet, a position in venture capital. No news, right? It’s an old topic. Let’s not beat it to any more deaths than it’s already had.

Yet I realized that, of all the con games going on in the VC-funded consumer-web ecosystem, this insight gets to the fundamental issue. There’s a dishonesty inherent to a “founder” presenting himself as an entrepreneur, doomed to sail or sink with his ship, when his actual priority is shoring up his reputation so that he gets a better job no matter what happens to the company. This means that, if saving the business or his employees’ careers mandates that he oppose the interests of investors, he won’t (and can’t) do so.

The “founders”– at least the business ones who tend to be tracked naturally into the CEO role– are probably savvy enough to know that they’re really mid-level product managers because the VCs are the real executives of Silicon Valley. They also know that most of them are going to get managed promotions (e.g. acqui-hires or VC jobs) rather than build independent companies. They must know that. The odds already tell that story. For the business “founders” and probably some of the technical ones, the job is just a stepping stone. It’s the technical people, who don’t know as much as they think they do about business, negotiation, or the dominant personalities in this game, who believe they’re building the next Facebook and will throw down 100 hours per week to overcome the deliberate understaffing (relative to expectations) of the venture. Most of the work is done by “true believers”, but the power in and over the company is held most strongly by nonresidents (VC bosses) and transients (business co-founders, connected executives) angling for their next bump up. This leads us directly to a six-word compact objection…

dot dot fucking dot

… Leadership is not a stepping stone.

Ethically, I’m fine with people treating their jobs as stepping stones, to be used to get to something better, because most people are in non-leadership positions. In truth, “stepping stone” is how I’ve viewed most of my jobs, as an impatient person at a high level of talent. If I’m not being groomed for a meaningful position or a major role on an important project, I’ve already got my eyes focused elsewhere. That is, on my part, knowing non-leadership. It’s a peacekeeping strategy: rather than fight for the limited advancement opportunities or executive attention/mentoring or top projects in one place, why not avoid conflict and seek improvement elsewhere, at no one’s cost? I don’t see it as disloyal or “mercenary” to keep an eye out for external promotion. I view it as necessary because it prevents and defuses conflict.

That said, people who are expected to be leaders shouldn’t be treating their companies as stepping stones. It’s one thing to be a manager in the reductionist sense– an officer hired to make decisions pertaining to another’s assets– and take that careerist view. That’s not what executives present themselves as being, however. In most companies, they call themselves the “leadership team” (a gag-inducing pair of words, but never mind that). Founders, as well, certainly present themselves as being tied-to-the-mast leaders. This isn’t quite correct, because while a genuine leader may have to oppose the interests of an individual within the group, they ought to be defending the group against external threats. That’s why people give up their power, as individuals, to leaders: to have a more coordinated and quicker response to external or emergent dangers.

Yet, when there is a conflict of interest between their employees and their investors, founders must choose the investors. Founders know that VCs talk and that the influential ones can shut them down with a phone call. They also know that, if they fail, they need references and introductions from their VC backers. A boss can end your job, but a VC can end your career. Founders have no choice but to manage up, and that’s a problem for the whole system, because managing up is generally the antithesis of leadership.

The truth is that there’s very little leadership in Silicon Valley. While the ability to flit about companies does give talented, reputable engineers more leverage than they would have elsewhere, individual Valley startups are often characterized by intense power distances, and holding political power isn’t the same as leading. “Flat” is often a euphemism for “dictatorial”. Well-run larger companies actually require managers to show some of the characteristics expected of a leader, while startups often take a “my way or the highway” approach, and use “culture” to back-cast departures as “non-regretted”. These startups generally manage up into the founders, who manage up into “investors” (the true executives of the Valley), who manage up into better-connected investors with better deal flow. Everyone is just trying to get a notch or two ahead. There’s nothing wrong with that in general– I’m the same way– but it’s not appropriate for people who want others to look to them for defense and direction.

Is management leadership?

Corporate executives like to use “management” and “leadership” interchangeably, but in many cases they have almost opposite meanings. A manager is a person who makes decisions pertaining to an asset that he or she does not own, such as a company or a celebrity’s reputation. Managers are almost always selected from the top, by owners of those assets or by higher tiers of management. Genuine leaders are generally selected and elevated from the bottom. You don’t get to decide that you’re a leader just because you have authority or resources; the people being led decide whether you’re a leader. Of course, there is a shared interest between owners and employees in the company’s sustaining basic function, but the alignment often ends there, and the pathetic equity slices that Silicon Valley gives to regular employees (like software engineers) are never going to change that. When this conflict of interest exists (and it usually does), being a manager requires taking one side, and being a leader requires taking the other.

A leader can be a manager or not, and a manager can be a leader or not. All four possibilities exist. Managers will often say that they are leaders, but their salaries are paid and their performance is evaluated from above, and they know it. Often, they are at best puppet leaders. Some have the genuine charisma or alignment of interest necessary to be accepted as legitimate leaders (that the group would choose if left to its own devices) and others have the moral fortitude to take their reports’ career needs and long-term goals (personal, financial, and career-related) seriously, but it’s not a requisite part of the charter, and it’s not common.

The middle management problem

This problem isn’t limited to Silicon Valley. Middle management is generally problematic, in this analysis. Most companies can find a place for a lifelong individual contributor. For the highly competent, there’s an opportunity to establish credibility and value without traditional organizational ascent. Management has different rules. Just as there are (by definition) no good poker players who lose money, there are no good managers who don’t rise. If you’re a middle manager for ten years, no one will take you seriously. Top executives won’t mentor you, and you won’t get the most talented reports, because you won’t be able to promote them. If you couldn’t bring yourself to rise against any political headwinds, how can you protect and advance others? As soon as a person steps into a management role, the clock starts. Middle management is an up-or-out role.

This is what VC-funded technology’s age discrimination problem, for the record, is really about. Most of these consumer web startups aren’t technology but marketing experiments using technology. There isn’t enough technical depth to them to justify an individual contributor track lasting more than 5-10 years. That brings the acceptable maximum age for engineers to 30-35 (and for “product” people, it’s even lower). Allowing no more than 5 years in middle management, this requires that people reach the executive ranks (venture capitalist) no later than 40. If a 41-year-old VC partner encounters a 50-year-old “founder” who’s still asking VCs for money, he’s going to wonder what the hell happened. By 50, people should be asking you for money, introductions, and resources.

The severe time pressure on middle managers tends to compromise their decisions. They need approval from above to get promoted. That’s not negotiable. As for anyone else in the corporate world, if they do their jobs well but their bosses dislike them and evaluate them poorly, they still lose. Good will from below, on the other hand, is completely optional. Sure, it’s better and easier to have it, but it can be tossed away in a pinch. If they succeed, they won’t be seeing much of those people in the future, because they’ll be a level or two higher within a couple of years.

In sum, there’s an intractable conflict of interest in the concept of middle management. To be honest about it, I don’t think there’s a solution. Performance evaluation in any job where the results aren’t completely objective is, in truth, destined to be gamed. And most of the work that is perfectly objective is being given over to machines, which work more reliably and cheaply than humans do. For the subjective stuff, those who quickly identify influential people and appease them are always going to rise faster than earnest, uniform high performers. Managing up will always be rewarded more than genuinely leading a team. This is no surprise, in the corporate theatre, to the more cynical among us. What’s more irksome, given the way that world presents itself, is that it’s equally true in Silicon Valley. For all the talk about “vision” and “disruption”, anyone who has the political skill to be a founder knows that whether one’s startup succeeds or fails matters only one-tenth as much as how one’s performance is viewed by investors. If you build a great business, but you’re fired and stripped of your equity and can’t get a meeting with anyone to build your next project, you’ve lost. If your company crashes and 180 employees see their last paychecks bounce, but it’s viewed as not your fault and you get a partner-level position at Sequoia, then you’ve won.

Where this all ends up

Most managers aren’t leaders, because they can’t be. They have to manage up. That is, in effect, their job. I don’t think that “360-degree reviews” fix this problem either, because the people a manager has the power to fire aren’t going to evaluate his performance honestly (not if they’re smart, anyway). The effect, for a middle manager, of failing to manage up is immediate and brutal: loss of reputation, advancement opportunities, and often the job. The effect of poor leadership is insidious and unfolds over enough time that other circumstances (including external conditions and random events that occur in the meantime) can be blamed. By the time there could be macroscopic damage, visible from above, due to poor leadership, the manager has either been advanced, relegated to terminal status, or fired, all for reasons unrelated to his actual ability to lead those below him.

This means that it will be rare that a middle manager actually leads the group he is expected to oversee. It’s not his fault. His job is defined above him by people with almost no concern for the well-being of his reports. In the Valley, we shouldn’t expect this kind of leadership from founders, either. The only people with the latitude to genuinely lead are the well-connected investors whose names can make or break companies. Of course, since that set of people is selected through a process that values “managing up” as well, it’s only by rare coincidence that a person is invited who actually has the vision, charisma, or moral perceptiveness necessary to lead. Just as in any other executive suite, 90 percent of them won’t have it, because they’re selected based on other criteria: the ability to manipulate and appease the people above them, and to game whatever system of performance evaluation is set in place.

The lesson of this is that truth is anarchy. If you’re a young engineer, don’t look for leadership. Don’t expect the Hollywood depiction of affairs, where a “mentor” just happens to see where you are and “fix” your career, to occur. It rarely works like that. Most young engineers think that, if they work 100-hour weeks on the low-impact grunt work that they’re assigned, someone above them will “discover” them, ask “Why the fuck are they wasting your time on this shit?”, and fast-track them to better things. That’s far too rare to bet one’s career on it happening. Barring the rare stroke of fortune that might happen once every ten years or so, you have to become your own mentor and advocate, because no one’s going to do that job for you. The few people who do have the credibility to clear away political nonsense, and to create small fields of sanity and protection, are going to want to work with people who’ve done much of their work for themselves. Self-mentoring is the rule, and guardian angels are the exception.

The insight that truth is anarchy, in the corporate world, is an important one. As I’ve grown older, I’ve realized that the few people who can genuinely lead aren’t born with the ability. Personal charisma is superficial. Leading others is largely about providing protection against the chaotic and negligent-to-malevolent world outside: from external competitors (who aren’t malevolent, but opposed in interests) to internal cost-cutters (who compensate for their mediocrity and lack of vision by offering ideas that seem to save money in the short term, while harming the company in the long run). Much of whether one can provide this protection relies on credibility and status, and getting those is always more political than merit-based, but an equally important capability is knowing how to create fields of sanity and fairness in an insane and unfair world. The first step is to attempt to create such a thing for oneself, and it’s typical to fail a couple of times before getting it right. I think that, 15 years from now, the people in my age group whom I’ll recognize as the best leaders will be the ones who are currently waking up to reality (“truth is anarchy”) and, rather than being blinded by corporate smoke-screens and phony loyalty, learning how to fight for themselves. To lead is to fight for others, and that’s almost impossible to know how to do unless one has years of battle scars won in fights for oneself.

Of course, most of the people who get to be executives and founders will be the non-fighters and the company police. That will never change. We just have to find a place where we appear to be out of their way, and outperform them. Silicon Valley used to be the place to do just that. Circa 2025, it’s going to have to be somewhere else.