Never Invent Here: the even-worse sibling of “Not Invented Here”

“Not Invented Here”, or “NIH syndrome”, refers to the tendency of organizations, taken to an illogical extreme, to undervalue external or third-party technical assets, even when those assets are free and easily available. The NIH archetype is the enterprise architect who throws person-decade after person-decade into reinventing solutions that exist elsewhere, maintaining a divergent “walled garden” of technology that has no future except by executive force. No doubt, that’s bad. I’m sure it exists in rich, insular organizations, but I almost never see it in organizations with under a thousand employees. Too often in software, however, I see the opposite extreme: a mentality that I call “Never Invent Here” (NeIH). Under that mentality, external assets are overvalued and often implicitly trusted, leaving engineers to spend more time adapting to the quirks of off-the-shelf assets and less time building assets of their own.

Often, the never-invent-here mentality is couched in other terms, such as business-driven engineering or “Agile” software production. Let’s be honest about this faddish “Agile” nonsense: if engineers are micromanaged to the point of having to justify weeks or even days of their own working time, not a damn thing is ever going to be invented, because no engineer can afford to take the risk; they’re mired in user stories and backlog grooming. The core attitude underlying “Agile” and NeIH is that anything that takes more than some insultingly small amount of time (say, 2 weeks) to build should not be trusted to in-house employees. Rather than building technical assets, programmers spend most of their time in the purgatory of evaluating assets with throwaway benchmarking code and in writing “glue code” to make those third-party assets work together. The rewarding part of the programmer’s job is written off as “too hard”, while programmers are held responsible for the less rewarding part of the job: gluing the pieces together in order to meet parochial business requirements. Under such a regime, there is little room for progress or development of skills, since engineers are often left to deal with the quirks of unproven “bleeding edge” technologies rather than either (a) studying the work of the masters, or (b) building their own larger works and having a chance to learn from their own mistakes.

Never-invent-here engineering can be either positive or negative for an engineer’s career, depending on where she wants to go, but I tend to view its effects as negative for more senior talent. To the good, it assists in buzzword bingo. She can add Spring and Hibernate and Maven and Lucene to her CV; other employers will recognize those technologies by name, and that might help her get in the door. To the bad, it makes it hard for engineers to progress beyond the feature level, because meatier projects just aren’t done in most organizations, where it’s seen as tenable for non-coding architects and managers to pull down off-the-shelf solutions and expect the engineers to “make the thingy work with the other thingy”.

Software engineers don’t mind writing some glue code, because even the best jobs involve grunt work, but no one wants to be stuck doing only that. While professional managers often ignore the fact, engineers can be just as ambitious as managers are; the difference is that their ambition is focused on project scope and impact rather than organizational ascent or the number of people managed. Entry-level engineers are satisfied to fix bugs and add small features, but only for a year or two. Around two years in, they want to be working on (and suggesting) major features and moving to the project level. At five years, they’re ready for bigger projects, initiatives, and infrastructure, and to lead multi-engineer projects. And so on. Non-technical managers may ignore this, preferring to institute the permanent juniority of “Agile”, but they do so at their peril.

One place where this is especially heinous is in corporate “data science”. It seems like 90 percent (possibly more) of professional “data scientists” aren’t really being asked to develop or implement new algorithms, but are stuck in a role that has them answering short-term business needs, banging together off-the-shelf software, and getting mired in operations rather than fundamental research. Of course, if that’s all that a company really needs, then it probably doesn’t make sense for it to invest in the more interesting stuff, and in that case… it probably doesn’t need a true data scientist. I don’t intend to say that data cleaning and glue code are “bad”; they’re a necessary part of every job. They just don’t require a machine learning expert.

People ask me why I dislike the Java culture, and I’ve written much about that, but I think that one of Java’s worst features is that it enables the never-invent-here attitude of the exact type of risk-averse businessman who makes the typical corporate programmer’s job so goddamn depressing. In Java, there’s arguably a solution out there that sorta-kinda matches any business problem. Not all the libraries are good, but there are a lot of them. Some of those Java solutions work very well, others do not, and it’s hard to know the difference (except through experience) because the language is so verbose and the code quality so low (in general; again, this is cultural rather than intrinsic to the language) that actually reading it is a non-starter. Even in the case where an engineer wanted to read the code and figure out what was actually going on, the business would never budget the time. Still, off-the-shelf solutions are trusted implicitly until they fail (either breaking, or proving ill-suited to the needs of the business). Usually, that doesn’t happen for quite a while, because most off-the-shelf, open-source solutions are of decent quality when it comes to common problems, and far better than what would be written under the timelines demanded by most businesses, even in “technology” companies. The problem is that, a year or two down the road, those off-the-shelf products often aren’t enough to meet every need. What happens then?

I wrote an essay last year entitled, “If you stop promoting from within, soon you can’t.” Companies tend to have a default mode of promotion. Some promote from within, and others tend to hire externally for the top jobs, and people tend to figure out which mode is in play within a year or so. In technology, the latter is more common, for three reasons. One is the cultural prominence of venture capital: VCs often inject their buddies, regardless of merit, at high levels in the companies they fund, whether or not the founders want them there. The second is the rapid scramble for headcount accumulation that exists in, and around, the VC-funded world. This requires companies to sell themselves very hard to new hires, which means that the best jobs and projects are often used to entice new people into joining rather than handed down to those already on board. The third is the tendency of software to be extremely political: for all of our beliefs about “meritocracy”, the truth is that an individual’s performance is extremely context-dependent, and we, as programmers, tend to spend a lot of time arguing for technologies and practices that’ll put us, individually, high in the rankings. Even on a team of programmers with the same skill and natural ability, there will usually be one “blazer” and N-1 who keep up with the blazer’s changes, and no self-respecting programmer is going to let himself be in the “keep-up-with” category for longer than a month.

At any rate, once a company develops the internal reputation of not promoting internally, it starts to lose its best people. Soon, it reaches a point where it has to hire externally for the best jobs, because everyone who would have been qualified is already gone, pushed out by the lack of advancement. While many programmers don’t seek promotion in terms of ascent in a management hierarchy, they do want to work on bigger and more interesting projects with time.
In a never-invent-here culture that just expects programmers to work on “user stories”, the programmers who are capable of more are often the first ones to leave.

Thus, if most of what a company has been doing has been glue code and engineers are not trusted to run whole projects, then by the time the company’s needs have out-scaled the off-the-shelf product, the talent level will have fallen to the point that it cannot resolve the situation in-house. It will either have to find “scaling experts” at a rate of $400 per hour to solve future problems, or live with declining software quality and functionality.

Of course, I am not saying, “don’t use off-the-shelf software”. In fact, I’d say that while programmers ought to be able to spend the majority of their time writing assets instead of adapting to pre-existing ones, it is still very often best to use an existing solution if one will suffice. Unless you’re going to be a database company, you shouldn’t be rolling your own alternative to Postgres; you should use what is already there. I’d make a similar argument with programming languages: there are enough good ones already in existence that expecting employees to contend with an in-house programming language that probably won’t be very good is a bad idea. In general, something that is necessary but outside the core competency of the engineers should be found externally, if possible. If you’re a one-product company that needs minimal search, there are great off-the-shelf products that will deliver that. On the other hand, if you’re calling your statistically-literate engineers “data scientists” and they want to write some machine learning algorithms instead of trying to make Mahout work for their problem, you should let them.

With core infrastructure (e.g. Unix, C, Haskell) I’d agree that it’s best to use existing, high-quality solutions. I also support going off-the-shelf with relatively small problems: e.g. a CSV parser. If there’s a bug-free CSV parser out there, there’s no good reason to write one in-house. The mid-range is where off-the-shelf solutions are often inferior, and often in subtle ways (such as tying a large piece of software architecture to the JVM, or requiring expensive computation to deal with a wonky binary protocol), to competently-written in-house solutions. Why is this? For the deep, core infrastructure, a wealth of standards already exists, and there are high-quality implementations to meet them. Competing against existing assets is probably a wasted effort. On the other hand, for small problems like CSV parsing, there isn’t much meaningful variability in what a user can want. Typically, the programmer just wants the problem to be solved so she can forget about it. The mid-range of problem size is tricky, though, because there’s enough complexity that off-the-shelf solutions aren’t likely to deliver everything one wants, but not quite enough demand for solutions for nearly-unassailable standard implementations to exist in the open-source world. Take linear regression. It might seem like a simple problem, but there are a lot of variables and complexities, such as: handling of large categorical variables, handling of missing data, regularization, highly-correlated inputs, optimization methods, whether to use early stopping, basis expansions, and choice of loss function. For a linear regression problem in 10,000 dimensions with 1 million data points, standards don’t exist yet. This isn’t a core infrastructural problem like building an operating system, but it’s hard enough that off-the-shelf solutions can’t be blindly relied upon to work.
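To make the point about design choices concrete, here is a minimal sketch of just one of them (L2 regularization), written in plain NumPy. The function name and toy data are illustrative, not any particular library’s API; a real system would also have to decide on the categorical encodings, missing-data policies, and loss functions listed above.

```python
import numpy as np

def ridge_regression(X, y, lam=1.0):
    # Closed-form ridge: w = (X'X + lam*I)^(-1) X'y.
    # The regularization strength `lam` is exactly the kind of knob
    # that an off-the-shelf package fixes (or hides) for you.
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Toy data with known weights [2.0, -1.0] and no noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = X @ np.array([2.0, -1.0])

w = ridge_regression(X, y, lam=1e-6)  # near-zero lam recovers the true weights
```

Swap in large categorical variables, missing data, or 10,000 dimensions, and each of the complexities listed above forces its own decision; that is what makes the mid-range hard.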

This “mid-range” of problem is where programmers are expected to establish themselves, and it’s often where there’s a lot of pressure to use third-party products, regardless of whether they’re appropriate to the job. At this level, there’s enough variability in expectations and problem type that beating an off-the-shelf solution into conforming to the business need is just as hard as writing it from scratch, but the field isn’t so established that standards exist and the problem is considered “fully solved” (or close to it) already. Of course, off-the-shelf software should be used on mid-range problems if (a) it’s likely to be good enough, (b) those problems are uncorrelated to the work that the software engineers are trying to do and would be perceived as a distraction, and (c) the software can be used without architectural compromise (e.g. being forced to rewrite code in Java).

The failure, I would say, isn’t that technology companies use off-the-shelf solutions for most problems, because that is quite often the right decision. It’s that, at many companies, that’s all they use, because core infrastructure and R&D don’t fit into the two-week “sprints” that the moronic “Agile” fad demands that engineers accommodate, and therefore can’t be done in-house. The culture of trust in engineers is not there, and that (not the question of whether one technology is used over another) is the crime. Moreover, this often means that programmers spend more time overcoming the mismatch between existing assets and the problems that they need to solve than they spend building new assets from scratch (which is what we’re trained, and built, to do). In the long term, this leads the engineer to the atrophy of skills, lowers her satisfaction with her job, and can damage her career (unless she can move into management). For a company, this spells attrition and permanent loss of capability.

The never-invent-here attitude is stylish because it seems to oppose the wastefulness and lethargy of the old “not-invented-here” corporate regime, while simultaneously reaffirming the fast-and-sloppy values of the new one, doped with venture capital and private equity. It benefits “product people” and non-technical makers of unrealistic promises (to upper management, clients, or investors) while accruing technical debt and turning programmers into a class of underutilized API Jockeys. It is, to some extent, a reaction against the “not invented here” world of yesteryear, in which engineers (at least, by stereotype) toiled on unnecessary custom assets without a care about the company’s more immediate needs. I would also say that it’s worse.

Why is the “never invent here” (NeIH) mentality worse than “not invented here” (NIH)? Both are undesirable, clearly. NIH, taken to the extreme, can become a waste of resources. That said, it is at least a “waste” that keeps the programmers’ skills sharp. On the other hand, NeIH can be just as wasteful of resources, as programmers contend with the quirks and bugs of software assets that they must find externally, because their businesses (being short-sighted and talent-hostile) do not trust them to build such things. It also has long-term negative effects on morale, talent level, and the general integrity of the programming job. My guess is that the “never invent here” mentality will be proven, by history, to have been a very destructive one that will lose us half a generation of programmers.

If you’re a non-technical businessperson, or a CTO who’s been out of the code game for five years, what should you take away from this post? If your sense is that your engineers want to use existing, off-the-shelf software, then you should generally let them. I am certainly not saying that it is bad to do so. If the engineers believe that an existing asset will do a job better than they could do if they started from scratch, and they’re industrious and talented, they’re probably right. On the other hand, senior engineers will develop a desire to build and run their own projects, and they will agitate in order to get that opportunity. The short-termist, never-invent-here attitude that I’ve seen in far too many companies is likely to get in the way of that; you should remove it before it does. Of course, the matter of what to invent in-house is far more important than the ill-specified and vague question of “how much”; in general and on both, senior engineering talent can be trusted to figure that out.

In that light, we get to the fundamental reason why “never invent here” is so much more toxic than its opposite. A “not invented here” culture is one in which engineers misuse freedom, or in which managers misuse authority, and do a bit of unnecessary work. That’s not good. But the “never invent here” culture is one in which engineers are out of power, and therefore aren’t trusted to decide when to use third-party assets and when to build from scratch. It’s business-driven engineering, which means that the passengers are flying the plane, and that’s never a good thing.

Yes, I’ll defend Daylight Saving Time

For those unfamiliar with U.S. timekeeping, we “lost an hour” of sleep last night, at least for those who slept according to clock time. Immediately after 1:59:59 in the morning, it was 3:00:00. We effectively changed into a different time zone, one hour east, and we do this switching forward and back every year. A lot of people don’t like Daylight Saving Time (or, in Europe, Summer Time) because it effectively “tricks” people into waking up an hour earlier, relative to solar time. If you wake up at 7:00, you’re actually forced to wake up at 6:00 for the majority of the year (DST is active for eight months out of the year, meaning that it is actually the usual time regime and winter/standard time is the special one).
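For the curious, the jump is easy to demonstrate in code. This is a small Python sketch using the standard `zoneinfo` time zone database and the 2015 U.S. transition date (the “last night” of this post):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

eastern = ZoneInfo("America/New_York")

# One second before the 2015 spring-forward transition...
before = datetime(2015, 3, 8, 1, 59, 59, tzinfo=eastern)

# ...plus one second of real (UTC) elapsed time lands at 3:00:00 local:
# the 2 o'clock hour simply does not exist on that day.
after = (before.astimezone(ZoneInfo("UTC")) + timedelta(seconds=1)).astimezone(eastern)

print(before.strftime("%H:%M:%S"), "->", after.strftime("%H:%M:%S"))  # 01:59:59 -> 03:00:00
```

The UTC offset changes from -5:00 (EST) to -4:00 (EDT) across that single second, which is exactly the “moving one timezone east” described below.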

On paper, Daylight Saving Time sounds really stupid: why on earth would we inflict that sort of unnecessary complexity on our timekeeping system? If it didn’t exist, and were proposed now, the idea would be rejected, and possibly for good reason. Benjamin Franklin proposed it as a joke, commenting on the unused daylight wasted by Paris’s nightlife culture. That said, I’ll go on record and be somewhat controversial. I like it. Its drawbacks are serious, but I think that its advantages are numerous as well. Sure, I know that there are a number of intelligent reasons to oppose DST, but I also have an emotional attachment to the 8:45 pm summer sunsets of my childhood. I “know” that time is arbitrary and that “light at 9:00 pm” means nothing, because we invented these numbers and what we’re calling 9:00 is “really” 8:00. While I oppose the phrase when it is overused, time-of-day and especially Daylight Saving Time literally are social constructs. They exist simply because we follow the customs.

So let’s talk about Daylight Saving Time, and I’ll try to explain why it’s a good thing.

It’s not about energy saving. It’s cultural. 

The evidence is pretty strong that Daylight Saving Time doesn’t save energy. Nor does it waste energy. On the whole, energy use is unaffected by it. Discretionary lighting just isn’t a large enough component of our energy expenditure for it to matter. Rather, we use Daylight Saving Time because (contrary to a vocal minority) most of us actually like it. Or, more specifically, we like the results of it. No one likes the transitional dates themselves, but in exchange for two very annoying days each year, we get (at typical latitudes):

  • one hour less “wasted” morning daylight in the summer.
  • sunset after 6:00 pm at the peak of autumn (late October).
  • sunrise before 8:00 am in the depths of winter, because we transition back to standard time.

To make my argument that DST is cultural, just look at the dates when the transitions occur: mid-March and early November. Relative to daylight availability, this is inconsistent because there’s a lot less daylight in October than in early March. It’s asymmetrical, but it makes sense in the context of typical weather. In October, it’s still warm. In early March, it’s often cold. The transition dates are anchored to temperature, which affects human activity, rather than the amount of daylight.

Most people who oppose DST would prefer to make the summer time regime permanent. It’s not that they care about having “12:00” (again, as defined by humans) be less than 30 minutes away from mean solar noon. They just don’t like the semi-annual change-over. So why don’t we just “make DST year-round” by moving one timezone to the east? The reasons, again, are cultural, and they come down to this: it would really piss people off.

Year-round DST is (possibly) a bad idea.

I believe that the case for DST is strong. Sure, if you’re luckier than most people and have full control of your schedule, you’re likely to think it’s stupid. Why fuck with the clocks twice per year just to prevent people from “wasting” daylight? Shouldn’t people for whom that is some kind of issue just wake up earlier?

Most people, however, don’t control their schedules, especially when it comes to working hours. DST isn’t for people who can work 7-to-3 if they so choose. It’s not for freelancers who work from home or for people who set their own hours. It’s for people who have to be in an office or a retail outlet till 5:00 or 6:00 or 7:00. It’s to give them some daylight after work, especially in the spring and fall when natural light is not so ample. For them, that extra hour of after-work daylight matters.

In the winter, however, those people are going to be working until dark regardless of the time regime. Year-round DST would punt the typical winter sunset from 4:30 to 5:30, which isn’t a meaningful gain for them. They’d still go home in twilight and eat dinner in darkness. Moreover, it’d put the typical winter sunrise after 8:00. They’d be waking up and going to work in the dark, for no gain. With non-DST or “winter” time, they’re at least able to get some daylight in the morning. That can make a large difference when it comes to mental health and morale, because peoples’ satisfaction with work plummets if they don’t get some daylight on at least one side of their working block.

This has everything to do with how humans react to nominal time, and nothing to do with natural design. There is, of course, absolutely no natural basis for changing the clocks. It is, I would argue, a good thing that we do so, but our reasons for doing it are connected entirely to our social construct of time.

Yes, this is unnatural.

Daylight Saving Time might seem ridiculous because it’s so unnatural. Nature doesn’t have any concept of it, and it’s an active annoyance for farmers whose animals’ circadian rhythms don’t respond to our conception of time. This, I concede. It’s not natural to change the way we keep time by exactly 1/24th of a day (a fraction that matters to us for archaic reasons) twice a year. Also, the fact that we choose our transition dates based on approximate average temperature rather than daylight amount (specifically, keeping DST into late fall, when the days are short but it is still warm) makes it obvious to me that this thing is cultural to begin with.

That said, the clock isn’t natural. Left to our own devices, we’d probably rise about an hour after sunrise and go to sleep about two hours after sunset, with a 3- to 4-hour period of wakefulness between two spells of sleep (biphasic sleep). Where we evolved, there wasn’t much seasonality, so this probably didn’t change much over the course of the year, but it had to change once people moved north (and south) into the mid-latitudes.

If we were to focus on one geographic point, the ideal clock wouldn’t be anchored on noon (12:00) but on sunrise. In a way, the original Roman hour achieved this, because an hour was exactly 1/12 of the duration between sunrise and sunset, causing it to vary throughout the year. For modern use, we wouldn’t want a variable hour, but it would be pretty useful to anchor sunrise to 6:00 exactly. That way, people could schedule their time according to how much light they need in the sky to awaken happily, rather than live in a world where “8:00 am” is just after sunrise (and, if it’s cloudy, still fairly dark) at one time of the year and bright mid-morning at another. That said, the “anchor sunrise to 6:00” concept is actually pretty ridiculous, because it wouldn’t just require longitudinal timezones, but latitudinal ones as well, and they’d vary continuously throughout the year. If this were done, the actual time difference between Seattle and Miami would be 1 hour and 42 minutes at the height of summer, but 3 hours and 52 minutes in winter. That’s clearly not acceptable. There isn’t a sane way to come up with a time policy that stabilizes sunrise at, or even very close to, 6:00 in the morning. The system we have is probably the optimal point in the tradeoff between local convenience and global complexity. It imposes some complexity, and that’s why almost no one’s entirely happy with our time regime, but it also (a) prevents egregious “waste” of daylight in the summer, with the understanding that few people will wake before whatever time we call “6:00”, while (b) minimizing the number of people who have to commence and finish work in darkness, by realigning political time with solar time in the winter.
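The variable Roman hour mentioned above is simple arithmetic. Here is a sketch, with sunrise and sunset times that are illustrative for a mid-latitude city, not measured values for any real place:

```python
def roman_daytime_hour_minutes(sunrise_min, sunset_min):
    # One Roman daytime hour is 1/12 of the sunrise-to-sunset span.
    # Inputs are minutes after midnight.
    return (sunset_min - sunrise_min) / 12

# Illustrative days (assumed, not measured):
summer = roman_daytime_hour_minutes(5 * 60 + 30, 20 * 60 + 30)  # 05:30-20:30 -> 75-minute "hours"
winter = roman_daytime_hour_minutes(7 * 60 + 30, 16 * 60 + 30)  # 07:30-16:30 -> 45-minute "hours"
```

A summer daytime hour of 75 minutes against a winter one of 45 shows why a variable hour is untenable for modern scheduling, even though it solves the anchoring problem neatly.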

Of course, there is the bigger question…

Will we, as a species, outgrow Daylight Saving Time? We only need it because so many of our recurring commitments (in particular, work) are tied to the clock. Unless there is jet lag, people on vacation don’t care if the daylight occurs in the morning hours or evening; they’ll just wake up at whatever time is appropriate to their activities. In 50-100 years, humanity will either have advanced into a leisure society where work is truly voluntary (as opposed to the semi-coercive wage labor that is most common, and still quite necessary, now) or destroyed itself: the technological trends spell mass unemployment that will lead either to abundance and leisure, or to class warfare and ruin. I don’t know which one we’ll get, but let’s assume that it’s the better outcome. Then it’s quite possible that in 2115, schoolchildren (unfamiliar with the concept of people being stuck in offices for continuous 8-12 hour periods) will learn that the people of our time were so tied down to others’ expectations that they had to change the clock twice per year just to align their working lives and seasonal daylight availability in a least-harmful way. They’ll probably find it completely ridiculous, and they’ll be right.

All of that said, we live in a world where the social construct of clock time matters. It matters a lot: we’d have a lot more seasonal depression if we made people go to work and leave work in darkness, so we align our clocks to solar noon in the winter to avoid too-late (after 8:00) sunrises. But it’s also remarkably difficult to get people to wake up at a time which they’ve been conditioned to think is early, so we jerk the clocks ahead to avoid wasting daylight during the warmer seasons. From an engineering perspective, and with a focus on our needs as humans right now, I think that the system that exists now is surprisingly effective. It has some complexity and that’s annoying, but only a moderate amount relative to what it achieves.

Open-plan offices, panic attacks, all in the game.

I’ve been waiting to write this piece for years, just because I’ve never seen an industry be so hazardously wrong about something. Here we go…

I’m a programmer who suffers from panic disorder, and I hate the way the open-plan office, once a necessary evil for operations like trading floors, has become the default office environment for people who write software. Yes, there are cases in which open plan offices are entirely appropriate. I’m not going to argue against that fact. I am, however, going to attack the open-plan fetishism that has become trendy, in spite of overwhelming evidence against this style of office layout.

This is a risky piece to write because, in so far as it delivers an indictment, it’s probably going to cover 99 percent of the software industry. At this point, horrible working spaces are ubiquitous in software engineering, perhaps out of a misguided sense that programmers actually like them (which is true… if you define “programmer” as a 24-year-old male who’d rather have a college-like halfway-house culture than actually go to work). So let me make it clear that I’m not attacking all companies that use open-plan offices. I’m attacking the practice itself, which I consider counterproductive and harmful, of putting people who perform intense knowledge work in an environment that is needlessly stressful and generally despised by the people who have to work in it.

I work for a great company. I have a great job. I like being a technologist. I like writing code. I don’t plan to stop making things. In almost all ways, my life is going well right now. (It hasn’t always been so, and I’ve had years that would make it physically impossible to envy me.) However, for the past 8 years (and probably, at a sub-crisis level, for 10 before that) I’ve suffered from panic attacks. Some months, I don’t get any. In others, I’ll get five or six or ten (plus the daily mild anxiety attacks that aren’t true panic). These are not the yuppie anxiety attacks that come from too much coffee (I had those as a teenager and thought that they were “panic attacks”; they weren’t), although I get those, too. Actual panic attacks are otherworldly, often debilitating, and (perhaps worst of all) just incredibly embarrassing in the event that one becomes noticeable to others. Perhaps one of the most traumatizing things about the condition is its introductory phase, when it’s an ER-worthy “mystery health problem”. It takes a while to learn that panic attacks, despite their incredible ability to throw almost any physical symptom at a person, aren’t physically dangerous.

Panic attacks suck. They deliver intense negative reinforcement, almost to the point that I’d call it torture, for… what? What does one learn from an attack that comes out of the blue at 3:47 pm? That 3:47 pm is a menacing time of day? (This seems absurd, but panic makes a person very superstitious at first.) Some people’s attacks are phobic and attributable to a single cause; others’ are more random. Mine are random but patterned. They almost never happen in the morning or at night, with a peak around 3:00 in the afternoon. They sometimes come with stop-and-go traffic, but (oddly enough) almost never when I am biking. Extremely humid weather can provoke them, but dry heat, cold, rain and snow don’t. They also tend to come in clusters: there are foreshocks, there’s a main event, and then there’s a slew of stupid aftershocks (most being mild anxiety attacks rather than true panic) stretching out over about two weeks. I certainly can’t blame all of them on open-plan offices, because I’ve had them in all sorts of places: cars, planes, boats, my own bed, random street corners. Moreover, I’ve reached a point where, with treatment, I can endure such office environments most of the time. I’ll say this much about the disorder as it relates to open-plan offices: the severe attacks that can be called true panic started in, and largely because of, open-plan offices in the early years of my career. Even now, the attacks are exacerbated by the noise and personal visibility of an open-plan office, in which the cost of a panic attack is not limited to the experience itself, but includes embarrassment and the disruption of others’ work. A bad panic attack produces belching, farting, an autistic sort of “rocking”, and numerous tics that can be very loud. No one wants to see or hear that, and I don’t need the added stress of the enhanced consequences of an attack.

I don’t blame companies directly for this because absolutely no one would want their employees to have panic attacks. In fact, if the fact were well-known that open-plan offices exacerbate (and, I would argue, can produce) a debilitating disability in about 2 percent of the population, I think their use would be a lot less common. If the purpose of the open-plan office is to induce anxiety and stress– and this may be a motivation in a small number of companies, although I like to think better of people– it’s clearly there to induce the low levels that supposedly make people work harder, and not the plate-dropping extreme levels.

As for me, I’m mostly cured of the affliction. (I’ve had a bad February, between cabin fever from a sports injury and running out of an important medication; but, in general, my trend is positive.) At my worst (late 2009) I was a shut-in, able to survive largely because my work was only 1500 feet from my house. The 20-minute subway ride to my therapist’s office, without a prophylactic dose of clonazepam, was unthinkable. I began to improve in 2010, as I distinctly remember being actually able to ride a plane (a prerequisite for traveling to Alaska). Oddly enough, I probably had 20 panic attacks in the months leading up to the plane ride. On the airplane itself, I had only a mild one, and it ended 10 minutes after takeoff. So there was about a 100x multiplier on the anticipation of the plane ride that, when it came, actually wasn’t so bad. This is just one of the ironies of panic: the anxiety that can form in anticipation of a potential panic-trigger is often worse than the triggered panic itself.

Many basic skills I had to relearn in the wake of developing Panic Disorder. There was the First Run Over 10 Miles (February 2010) and the First Drive (May 2010) and the First Swim (June 2010) and the First Bike Ride (August 2011) and the First Swim in Open Water (February 2012) and the First Flight Not On Meds (June 2012). In a way, it was like aging in reverse. In 2008 and ’09, I was a crippled old man, unable to do much, and convinced that I was unemployable because throwing myself into the hell of open-plan programming was not an option. In 2010, I was a cautious 65-year-old who could get around but clearly had limitations. By 2012, I’d probably aged down into my 50s; and at this point in 2015, I’m probably at an “effective age” of about 40. I fret about health problems far too much, even though I’m probably (factoring out mild weight gain due to medications) in the top 10% of my age group for physical shape. I’ve decided that 2015 is going to be the year that I finally lick this thing, in large part because I really want to go Scuba diving, which is contraindicated for an active panic disorder. Part of why I am talking about this nightmarish condition is because I have some hope (perhaps naive) that doing so will help me finally put it down for good. Fuck panic. I’m ready for this nonsense to be over. I suspect that it continues because “something” (call it God or “the Universe”) needs me to speak out on this particular issue so, maybe, telling this story in the right way will end it.

It’s been a fight. I haven’t told a tenth of it. Perhaps it has strengthened me– when unaffected by this disease, almost nothing fazes me– and perhaps it has weakened or sickened me. Given that I use no drugs, don’t drink, exercise and eat well, I might make it to 85. Or I might die at 50. Oddly, I’m OK with that. What ended my fear of death (as a Buddhist believing in reincarnation) was the realization that it is the opposite of panic. Death is the essence of impermanence. It’s there to remind us that nothing in this world lasts forever. Panic is a visceral fear of eternal “stuckness”, the sense that an undesirable state will never end. There is (and I won’t lie about the morbid fact) a fear of death that lives in panic, but it’s more of a fear of “dying like this” than one of death itself. I know that I must die, but I really don’t want to die in an office at age 31, and it’s the latter thought that makes panic so awful. From what I know of death and the final moments, it seems like something that should not be feared; the pain of death is in it happening to so many other people.

Attacks are often trigger-less or, at least, have no obvious trigger. The worst ones aren’t usually “caused by” anything, and you can’t stop one by wishing it were over. Telling a panicker to “calm down” is about as useless as telling a depressed person to “cheer up”. If it has any effect, it’s a negative one. In general, though, once a wave of panic or dread has begun, it will run its course no matter what is done. The medication, the fruit juice, the hot tub… those are all useful at preventing a short-term “aftershock” or a recurrence, and therefore are great at preventing “rat king” attacks where one wave comes after another… but the attack itself (at least, a single “wave” of panic that lasts for an intense 10 to 300 seconds) usually cannot be aborted.

Violent transparency

What is it about the open-plan office that causes anxiety? People are quick to implicate noise, and I agree that loud conversations can be a problem, can cause small amounts of anxiety, and might possibly suffice to trigger panic. Still, I don’t think office noise, alone, explains the problem. The direction of the noise exacerbates it: noise is especially distressing when it comes from behind a person, poor acoustics can make this an issue at any point on the floor, and noise that comes “from everywhere” creates a sense of incoherence. All that said, I don’t think that noise is the major killer when it comes to open-plan-induced anxiety or panic. Annoyance, yes. Mental fatigue, for sure. However, I think the omnipresent stress of being visible from behind– typically a mild stress, because one usually has nothing to hide, but a creepy one that never goes away– tends to accumulate throughout the day. If there’s any one thing (and, of course, there probably isn’t just one) that causes open-plan panic attacks, it’s that constant creepy feeling of being watched.

I’ve studied office boredom and its causes, and one recurring theme is that people are notoriously bad at attributing the right cause for boredom, anxiety, or poor performance. I think this explains why people overestimate the influence of acoustics, and underestimate that of line-of-sight issues, on their open-plan problems. For example, people subjected to low-level irritations (sniffling, people shifting in their seats, intermittent noise) will often attribute their poor performance in reading comprehension to “boring” material, even when others who read the same passages in comfortable environments found the material interesting. In other words, people misattributed their distraction to a fault in the material rather than the (perhaps subliminal) defects in their environment. I, likewise, tend to think that “noise” is the attributed cause for open-plan discomfort, anxiety and panic, largely because people fear that attributing their negative response to lines-of-sight would be a “confession” that they have something to hide. But lines of sight matter, and almost every human space except for an office is designed with this in mind. In a restaurant booth, you can ignore the noise, even though the environment is often louder than a typical office. The noise isn’t a problem because you know that it doesn’t concern you. You’ve got a wall at your back, and enough visual barriers to feel confident that almost no one is looking at you, and so you can eat in peace.

Programmers have, against their own interests, created a cottage industry around a culture that I would call violent transparency. One can start by noting the evils of “Agile” systems that micromanage and monitor individual work, in some companies and use cases, down to the fucking hour. While these techniques are beloved by middle management, all over the technology industry, for identifying (and also creating) underperformers (“Use Scrum to find your Scum”), I’ve seen enough to know that these innovations do far more harm than good. They tend to incent political behavior, have unacceptable false-positive rates (especially among people inclined to anxiety problems) and generally create an office-wide sense that programmers are terminal subordinates, incapable of anything that involves long-term thinking. Moreover, the culture of violent transparency is inhospitable to age and experience, favoring short-term flashiness over sustainable, thoughtful development. Over time, that leads to echo chambers, cultural homogeneity, and a low overall quality of work produced.

To make it clear, it’s not transparency itself that is bad. I think it’s great when people are proud of their work and are eager to share it. That should be encouraged whenever possible. I also think that it’s worthwhile for each person on a team to give regular notice of progress and impediments. In fact, I think that, properly used and timeboxed, so-called “standup” meetings can be good things that may reduce political behavior by eliminating the “Does Tom actually do anything?” class of suspicion. Somewhere, there is a middle ground between “The programmers work on whatever they want and update us when they feel like it and have to be needled to integrate their work with the rest of the company” and “We torture programmers by forcing them to justify days and hours of their own working time” and I think it’s important to find it. Unfortunately, the “Agile” industry seems built to sell one vision of things (autonomous self-managing teams! no waterfalls or derechos!) to programmers while promising something entirely different to management.

Oddly enough, young programmers seem not to oppose violent transparency, whether in the form of oppressive project management and downright aggressive visibility into the day-to-day fluctuations of their productivity, or in the open-plan office. Indeed, some of the strongest advocates of the paradoxical macho-subordinate cultures are programmers. (“I want an open-plan environment. I have nothing to hide!”) This is tenable for the inexperienced because they haven’t yet been afflicted by the oppressive creepiness of feeling monitored (even if, in reality, no one is watching them). Those who’ve not yet had a negative experience at work (a set that becomes very small, with age) do not yet realize that a surveillance state is, even for the innocent and the pillars of the community, an anxiety state.

Why is it so rare for programmers to recognize “Agile” fads as a game being played against them? Well, first, I think that there are some, like me, who genuinely like the work and have few complaints except for these stupid passing fads, which become more navigable or avoidable as one gets older and more socially skilled. I have seen “Scrum” (and I’m not talking about standup meetings, but the whole process and the attitude of programmers as wholly subordinate to “product”) shave over 80 percent off of a large company’s valuation in about a year. So I know how dysfunctional it can be. However, I’ve never personally been fired because of “Agile”, and I’m sufficiently skilled at Agilepolitik that I probably never will be. As I get older and more politically skilled, and my panic situation becomes more treatable, this is increasingly someone else’s battle. That said, if I can raise the issue to prevent further destruction of shareholder value, loss of talent, and intense personal anxiety to be suffered by others, then I will do so.

As a group, most of us who are experienced know that these macho-subordinate “Agile” fads are extremely harmful, but we don’t speak up. It’s not our money that’s at risk. It doesn’t hurt us nearly as much as it hurts shareholders when “user stories” cause a talent bleed and bankrupt a company. As for the young and inexperienced, they often have no sense of when a game is being played against them. Like the open-plan office, so-called “Agile” is becoming something that “everyone does”, and we now have a generation of programmers who not only consider that nonsense to be normal– because they’ve never seen anything else– but will replicate it, even without malign intent.

Violent transparency appeals to the hyper-rational person who hasn’t yet learned that the world is fluid, subtle, complicated, and often very emotional and political. It appeals to someone who’s never had an embarrassing health problem or a career setback and who still thinks that a person who is good at his job and ethical has “nothing to hide”. “Agile” notions of estimation and story points look innocuous. I mean, shouldn’t “product managers” and executives know how long things are supposed (ha!) to take? It seems like these innovations should be welcome. And if a few “underperformers” get found out and reassigned or fired, isn’t that good as well? (Most programmers hate bad programmers. Older, more seasoned programmers hate bad programs, which are often produced by “Agile” practices, even when good programmers are writing the code.) What many young programmers don’t recognize is that every single one of them, for one reason or another, will have a bad day. Or even a bad two weeks (“sprinteration”). Or even a bad month. Shit happens. Health problems occur, projects get cancelled, parents and pets die, kids get sick, and errors from all over the company (up, down, sideways, or all at once) can propagate into one’s own blame space for any reason or no reason at all. You’d think that programmers would recognize this, band together, and support each other. Many do. But the emerging Silicon Valley culture discourages it, instead pushing the macho-subordinate culture in which programmers endure unreasonable stress– of a kind that actually detracts from their ability to do their jobs well– in the name of “transparency”.

Why open plan?

To make it clear, I’m not against all uses of the open-plan office. One environment where open-plan offices make sense is a trading desk. Seconds, in trading, can mean the difference between respectable gains and catastrophic losses, and so a large number of people need to be informed as soon as is humanly possible when someone detects a change in the market, or a production problem with technology (their own or from a third party; you’d be surprised at how much bad third-party software the trading world runs on). In such an environment, the isolation of private offices can’t be afforded. Knocking on doors and waiting isn’t acceptable, because a trading desk is about as far from a long-term-focused R&D environment as one can get. The reality of life on a trading desk is that the job probably mandates an open-plan, bullpen environment in which a shout can be heard by the entire floor at once. It’s a stressful environment, and talented people demand 30 percent raises every year in order to stay in it, because even people who love trading feel like they’ve taken a beating after a 7-hour exposure to that sort of bullpen environment. In fact, most seasoned traders will admit that their office environments are stressful and that the job has a “young man’s game” flavor to it; even the ones who say they love the game of trading are usually in management roles by 50.

So, no, I’m not against all open-plan offices. I’m against unnecessary open-plan offices. These environments, once, were regarded as a necessary evil for a small number of use cases, and are now the default work environment for programmers. I’m a technologist; I expect to be writing code, or involved in writing it, for the next 30 years. Some projects and circumstances may require that I endure an open-plan office, and I accept that as something that may occur from time to time. It shouldn’t be the only option. Holding up against artificial stress that makes everyone worse at his job should not be part of the programmer’s job description. I’m 31 and if I have chest pain, I’m almost 100% sure that it’s a panic attack. That will be different when I’m 60 and, at that age, I sure as hell don’t want to be dealing with an open-plan office. I’ll quit the industry before then if I can’t get some damn privacy.

So when are open-plan offices necessary or useful? And what are software companies’ reasons for using them?

There are, I’d argue, six reasons why companies use open-plan offices for software. Some are defensible, and some are atrocious.

  1. It’s a necessity of the job. This might apply to 1 percent of programming jobs, if that, but such environments do exist. A hedge fund probably needs its core quants to be within earshot of the traders they’re supporting. A larger percentage of programming jobs will require an open working space some of the time (e.g. mission control when something is launched into space). In these cases, the negatives of the environment are offset either by the mission or by compensation.
  2. The company is very new. It goes without saying that if a company has fewer than 5 people, an open-plan (“garage”) format is probably going to be used. I don’t see that as terribly problematic; running a four-person company presents plenty of stressors that are as exhausting as open-plan offices, and startups can mitigate the stress caused by such environments by encouraging flexible hours and reducing office noise.
  3. The company is expanding rapidly. To be truthful, I don’t fault every tech company that uses an open-plan office. If you’re growing at 40 percent per year, you can’t afford to have private offices for all your programmers at all times. As a “growing pain”, I think that the cramped open-plan office is acceptable as a temporary solution. It shouldn’t be the long-term expectation, however. It’s perfectly reasonable to use an open-plan office when anticipating rapid growth, simply because it’s hard to set up any other plan if you expect to outgrow it quickly. The negatives of the open-plan environment are severe, but tolerable as a temporary arrangement. I don’t hate that some companies use open-plan offices. I hate the fact that it’s becoming no longer a choice whether to work in one, because even the 5,000-person companies are now using them.
  4. “Everyone else is doing it.” This relates to item #3. Because rapid-growth companies have a legitimate reason to use open-plan offices in the first couple of years, it has become “cool” to have one, in spite of their being unpleasant for the people who have to work in them. To me, this is upsetting and troubling. There are good startups out there, but I don’t think “startupness” should be valued for its own sake. Companies regress to the mean with size, for sure, and this means that the best (and worst) jobs at any given time are likely to be at smaller companies; but, on the whole, attempting to replicate startup traits in larger companies tends to reproduce the negatives more reliably than the positives. People in large and small companies ought to recognize that there are, in fact, negatives of being a startup: ill-defined division of labor, cramped spaces, rapid organizational change. Unfortunately, big-company cost-cutters– that is, the people who aren’t good at anything else, and who have no vision, so they use the claimed title of “belt-tightener” to inspire fear and grow politically– are very willing to use the “coolness” of startups to justify changes in their much larger companies, and these changes are invariably harmful to the employees and their companies.
  5. It’s cheap. This is a dishonorable motivation, for any company other than a bootstrapped or seed-funded startup, but some employers use the open-plan office just because it’s the cheapest and crappiest option. To be honest, I think that this is a case of “you get what you pay for”. Open-plan offices save costs, but the quality of work produced suffers and, for high-end knowledge work, it’s almost certainly an unfavorable trade. Assuming a modest 20% increase in programmer productivity, private or pair offices at 200 SF per person will pay for themselves tenfold, in most locales.
  6. Age and disability discrimination. Is this the motivation for most employers who use open-plan offices? Probably not. Is it a motivation for some? Absolutely. In the early 2000s, when many companies had to make cuts and wanted to shed their most expensive workers regardless of value or ability, it was a fairly well-known HR trick to reduce privacy and office space, driving the more expensive older programmers out first. (I’d bet that “Agile” became popular around the same time.) Age discrimination has probably never been the only reason for introducing an unhealthy work environment, but it is one among many for sure. So I suspect that one of numerous reasons why startup executives love open-plan offices is their repulsiveness to older programmers (where “older” means “anyone who is over 34 or has had any health problem”) and women.

Should open-plan offices be abolished or made illegal? No, probably not. That’s not my goal. Though I suffer from a disability that is aggravated by this rather obnoxious feature of the typical software work environment, I am also aware of its necessity in a number of circumstances. If I went back into the hedge-fund life, I’d probably have to “Med Up” and deal with the intense environment, but I’d expect to be compensated for the pain. Everywhere else, open-plan offices shouldn’t be the norm. They should be a temporary “growing pain” at most. I can deal with them if I have to, for short durations and with the ability to get away. I don’t want to deal with them in every fucking office environment that I have to use for the next thirty years. I don’t expect to be in this industry, or even alive, if I have to deal with 30 more years of open-plan.

The F.Y.S. Letter

How do we preserve the benefits of the open-plan environment (if there are any) while mitigating the literally sickening drawbacks? Here are some starting points and observations:

  1. There is no working utility to visibility from behind. Abolish it. If you must go for an open-plan layout, then buy booth-style walls and give each programmer a wall at her back. Noise is irritating, but I don’t think that it’s the noise of the open-plan office that causes the distraction or the panic attacks. It’s the noise, followed by the loss of concentration, followed by the awareness of being visible “not getting any work done” to 20 people, combined with the general mental exhaustion that comes from having been visible to other people for hours on end. It is stressful for anyone to be visible, like a fucking caged animal, to so many people for 8-10 hours, five days per week. It’s especially bad to be visible from behind. Combine that with the mental tiredness of a workday well spent, and it’s intolerable. Whereas even normal people get mild vertigo and nausea by 4:00pm from this– I’ve heard that doctors in Silicon Valley are beginning to discuss “Open Office Syndrome” as even people without anxiety disorders begin presenting with late-afternoon vertigo– I’m at risk of something much worse, and I shouldn’t be. I’m a programmer, and fighting off panic attacks shouldn’t be part of the fucking job. I’d much rather use all of my mental energy on the programming.
  2. If open-plan is the only option, discourage pairing sessions and impromptu meetings in the working space. That’s what conference rooms are for. If the conversation is going to last for longer than 180 seconds, then it shouldn’t be in earshot of people who have no need to hear it, and are trying to do their jobs.
  3. Yes, programmers are special. Our jobs are mentally exhausting in a way that most white-collar workers (including almost all of the high-ranking ones called “executives”) will never be able to relate to. Many programmers are fat not because we’re “nerds” but because it is difficult to fight off food cravings after hours of mental exertion. The difficulty of the job (assuming that you care about doing it right, instead of just playing politics) is generally held to be a positive; it’s great to get into a creative flow. We also have an unusual trait of wanting to work hard (which is why that “Agile” nonsense is so stupid; it’s designed by and for non-producers on the assumption that we are like them, in terms of wanting to do as little real work as possible and needing the micromanagement of “user stories”, when the exact fucking opposite is true) and tend to beat ourselves up when (even if the reasons are environmental, and not our fault) we can’t focus. However, that combination of mental fatigue and overactive superego puts us in an ultra-high-risk category for panic and anxiety disorders. So, yes, we are fucking special and we need a better fucking work environment than what 99 percent of us face.
  4. Discourage eat-at-desk culture but encourage “talking shop” at lunch. First of all, eating at one’s desk is like eating in a car. It’s OK to do it once in a while, but it shouldn’t be the norm. This has nothing to do with anxiety disorders or office layouts. I just don’t like it, although I’ll do it two days out of five. But it should be discouraged in general. Second, “collaboration” is an often-cited benefit of an open-plan office (even though such environments aren’t very collaborative; just stressful and annoying). However, you can’t force people to be “collaborative” by having them overhear each other’s conversations that are unrelated to what they’re trying to do and, also, people will gladly be collaborative even when they have private offices (from which, you know, they do sometimes emerge, humans being social creatures and all). Collaboration happens when people are relaxed, together, and “talk shop” because they’re genuinely engaged in what they’re up to. It happens quite often with programmers. You don’t have to force us to be “collaborative”. It just comes about naturally. Remember: most of us like work.
  5. Stop measuring people based on their decline curves in the first place. Programming is not the Marine Corps and it shouldn’t be. This should be a 40-year career. Programmers create their own stress (eustress, the good kind) and should not be subjected to the negative kind, especially on variables (such as the ability, increasingly compromised with age, to withstand a lack of privacy that can be likened to a nine-hour economy-class airplane ride every day) that do not correlate to any useful ability. When that happens, the people suffer, the code suffers, and the product suffers. And as a programmer, let me state that low-quality work in infrastructure or software is always astronomically more expensive than it appears that it “should” be. Since there is almost no correlation between basic reliability (that is, the ability to make good decisions under normal conditions, ethical character, and the insight necessary to build highly-reliable systems that are far more reliable than any human could be) and the sort of superficial but extreme reliability (“story points” per week) that characterizes office tournaments, we as an industry ought to abolish all focus on the latter. The former (which I termed “basic reliability”) is what we care about, and the rightward tail of a person’s decline curve has no correlation to that.

Conclusion

I’m not writing this to indict specific companies– except for, perhaps, Facebook, for the woeful lack of cultural leadership shown in building a 2800-person open-plan office– because every programming environment I’ve been in, since 2007, has been an open-plan one. In fact, I’ve only been in two companies that didn’t use open-plan offices: one had pair offices (which worked very well, and are probably better than solo offices) and the other had a different kind of dysfunctional layout. Moreover, I recognize that there are legitimate uses for open-plan offices, even if such environments exacerbate some people’s health issues. Engineering, whether in technology or in designing a workplace, is about trade-offs, and sometimes the benefits of an open-plan office outweigh the drawbacks. That is, I think, quite rare, because I don’t believe that cramped offices are “collaborative” at all, but I’ll admit that the possibility exists. In general, though, an ideal office environment would afford 150-200 square feet of personal space, per programmer, at a minimum, while offering central, “public” spaces for collaboration and informal socializing (which often blend together). Relative to even an average programmer’s productivity, office space is ridiculously cheap. A 200 square-foot office, in a prime downtown location in the U.S., typically costs $4000 to $6000 per year, which is nothing compared to the gain from having happy, productive, healthy programmers able to do their jobs without distraction.
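The cost comparison above, and the earlier “pay for themselves tenfold” claim, can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, using the rent figures from the text; the fully loaded programmer cost of $250,000/year (salary plus benefits and overhead) and the mid-range rent of $25 per square foot are my own assumptions for illustration:

```python
# Back-of-envelope check of the "tenfold payback" claim, under assumed
# figures: $25/sq ft/year rent (mid-range of the cited $4,000-$6,000/year
# for a 200 sq ft office) and a fully loaded programmer cost of
# $250,000/year (an assumption, not a figure from the text).
office_sq_ft = 200
rent_per_sq_ft_year = 25
loaded_programmer_cost = 250_000

office_cost = office_sq_ft * rent_per_sq_ft_year  # $5,000/year for the office
gain = 0.20 * loaded_programmer_cost              # $50,000/year from a modest 20% uplift

print(f"office cost:  ${office_cost:,}/year")     # office cost:  $5,000/year
print(f"productivity: ${gain:,.0f}/year")         # productivity: $50,000/year
print(f"payback:      {gain / office_cost:.0f}x") # payback:      10x
```

With a lower loaded cost or a smaller productivity gain the multiple shrinks, but the office would still have to make programmers only a few percent more effective to break even.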

What I absolutely must kill is the assumption that the open-plan bullpen represents the way for programmers to work, as if it deserves to be a standard, and as if programmers were somehow immune to the health problems (acute and gradual) that badly-designed office environments inflict. It’s this sort of nonsense (in the obscene name of “culture fit”) that makes our industry needlessly exclusionary. The open-plan office is fine in a company’s transitional phase, but it should not be the standard, and it should absolutely never be, under any circumstances, a part of how we allow the rest of the world to view us as programmers. (Visibility from behind suggests low social status. If you’re not aware of that nuance, then you’re not qualified to have opinions about office space and you don’t get a vote.) If your company is at five people, then use the environment that’s right for those 5 people. If you’re growing too fast for individual or pair offices to be practical, then use cubicles or an open-plan layout during the growth phase, and shift over to offices with doors as you can. However, if you’re the CEO of a 5000-person company and you’re still putting software engineers in open-plan offices, then you’re either ignorant or a psychopath. The open-plan office is no more “cool” than smoking, and people who want to be programmers without signing up for 40 years of second-hand smoke should have that option.

Hypochondria isn’t what people think it is, and the U.S. medical system makes it worse.

Trigger warning: discusses panic attacks and health problems. Use caution if you’re sensitive to health anxiety or panic.  

I suffer from health anxiety, often called hypochondria. In addition to cyclothymia (a milder variant of bipolar disorder) and panic attacks, I suffer from an intense fear of health problems. In fact, almost all of my panic attacks are related to obsessive health-related anxieties.

Oddly, despite my extreme phobia of sickness, I fear death very little. I’m a Buddhist and believe in reincarnation, so I believe that I have died millions of times (although I don’t remember them) and am therefore “dead” already, and it ain’t so bad. Death doesn’t scare me. I hope that it won’t be painful (the period leading up to it probably will be; I’m realistic) and I look forward to the event itself. Getting sick scares the shit out of me, not because of the pain or risk of death or inconvenience, but just due to the sheer humiliation of it.

Most people use “hypochondriac” to describe a person who is overly dramatic, needy, or attention-seeking when it comes to matters of health. That’s not me. When a panic attack starts throwing bizarre symptoms and I start belching, smacking my stomach to prevent (a completely imagined and probably medically impossible) diaphragmatic spasm that I fear might stop my breathing and kill me, ineffectively massaging my neck and shoulder muscles, or pacing the room, I really wish others wouldn’t take notice. It’s fucking embarrassing. It’s stupid. The stereotypical hypochondriac constantly believes that he is sick. Not so, at least not in my case. Cognitively, I know that (with high probability) I am not sick. I eat well, don’t drink or use drugs, and I exercise regularly. I have also (unfortunately) had enough people close to me die that I realize that bodies usually break down slowly– House be damned– and never for no reason– House gets that right– so I can ratiocinate that the slight pain at my ribcage isn’t actual angina, which I am almost certainly not having because I am a 31-year-old in good shape with no family history of early heart disease. I know that, cognitively. But the thing about obsession is that it trumps cognition. You can know that you’re OK and that the panic attack will end once the meds kick in, and you are basically lucid, but intrusive thoughts about ambulances and hospitals and uncaring medical professionals and life-wrecking bills and losing jobs and relationships can’t be stopped. They build up until you have detached from reality and your body (or, at least, the signals you are getting from it, with probable neurological wire-crossings turning mild discomfort into more dreadful sensations) begins to go haywire.

Panic disorder is the ultimate troll. It is almost hallucinatory in its ability to throw bizarre physical symptoms at a person. I won’t list them, because I don’t want to give panic fuel to someone else who might be reading this, but to name one of the more bizarre ones: phantom smells. (I name it because it’s actually pretty funny, more so than thinking today is the day you discover that you have adult-onset diabetes because of a dry throat.) What the fuck is up with that phantom smell shit? If God or “the Universe” is testing me or sending a message, what the hell does the smell of fucking relish have to do with it? I think the only other disease (than panic disorder) that causes phantom smells is a brain tumor, and I’ve had the problem for 7 years and I’m still alive, so I’m pretty sure it’s not the latter. Fucking relish. Anyway…

One TV character who is shaping up, for the record, to be my favorite mentally ill character is Chuck McGill on Better Call Saul. It’s a compassionate depiction, but a realistic one. He’s not a drooling mental patient or an invalid or a psychotic murderer. He has a crippling anxiety disorder (far more intrusive than mine) which is a fear of electricity, and has had to leave his high-paying job as a law partner because of it, but (as of Episode 2) he’s lucid, smart, morally decent, and likable on all counts. In the mentally ill population, he’s a “silent majority” example: one who has a stigmatized illness, but with full intelligence and moral decency intact. Admittedly, “full intelligence and moral decency intact” is not how most people view mental illness; that is, largely, because most people associate mental illness with the most visible and extreme examples: (1) people who are so far gone that they can’t function in society at all, and (2) substance abusers, who are an atypical set for a number of reasons. I’d actually guess that the “silent majority”, even with stigmatized illnesses like bipolar disorder, is well over 90 percent. The stereotypical Hollywood manic-depressive goes on spending sprees, becomes sexually promiscuous, does a bunch of drugs, or gets into fistfights. When I go hypomanic I can be found at 1:00 am… reading. Or writing. Or coding. I call this the “cerebral subtype” and while it has its dangers (sleep deprivation can exacerbate hypomania) it does not make a person like me morally dissolute. It makes me… slightly groggy the next day.

The general population, in my view, doesn’t like to accept the reality about “mental illness”, which is the term we use for neurological diseases whose symptoms involve cognition. First, these are mostly “boring” health problems like all the other ones, which present challenges and can be extremely painful and disruptive, but rarely change the moral character of a person. People wouldn’t deign to ask whether a person with migraines or diabetes “can handle” a difficult job, but that’s a common question asked of people who’ve dealt with anxiety. These “mental” illnesses don’t send a person on a one-way journey into insane la-la tinfoil land. They don’t, in general, make a person “crazy” in the sense of being impulsive, delusional, or dangerous. They are malfunctions in an organ that, while extremely powerful and resilient, exists in a stew of complex organic chemicals and operates according to an electrical protocol that we just barely understand. Most mentally ill people (whether we’re talking about bipolar disorder, depression, or anxiety) are surprisingly “normal”: again, the silent majority. Why is there a resistance to this idea, in the mainstream? Because, in general, people want to believe that painful things won’t happen to them. “If I don’t smoke, I won’t get lung cancer.” “If I take 27 vitamin pills per day, I’ll die at age 109 in my sleep.” “If I’m not a crazy person, I’ll never have a life-altering depressive episode.” Sorry, but all of those beliefs are false. Healthy choices alter the probabilities quite favorably, and most people who reach age 25 without a mental health issue are in the clear… but there are no guarantees.

This is why (as of Episode 2) I find Chuck McGill so interesting. He’s not “crazy”. Everything he says has value. He’s an intelligent and good man. He also happens to suffer from a severe psychiatric illness. It seems paradoxical in light of our expectation that mentally ill people be “crazy” (in the drooling, “mental patient” sort of way) but it’s actually pretty normal. I don’t know how Better Call Saul intends to develop him, but this is one of the more honest portrayals of mental illness that I’ve encountered. The tragedy of these diseases (in their most severe forms; mine is relatively mild) is that they afflict normal people, not “crazy” ones. There is such a thing as “crazy”, but it’s mostly orthogonal to mental illness. Religiously motivated suicide bombers are crazy, but I would wager that quite a large number of them suffer from no biological mental illness and that, at the top of terrorist organizations, people with mental illness (except psychopathy) are very uncommon. Evil is real and not the same thing as mental illness.

Back to panic: a true panic attack– and I’m not talking about the low-level yuppie anxiety attacks that come from a deadline and too much caffeine– is a venture into something like “crazy”, but paradoxical in that the panicking person is, in fact, terrifyingly lucid. A person with that much adrenaline is, in some ways, in peak physical function (despite mental distress). Facing off against a smilodon, one would want that “fight-or-flight” response. It’s when it’s triggered without cause that we call it “a panic attack”, because it’s so pathological against the backdrop of modern life, in which mortal danger is rare but social protocols must be followed. Acute panic tends to last no more than three minutes, although the fatigue and anxiety that can exist afterward can spawn another wave of panic, leading into an episode that can last (at worst) two or more hours. Panic is almost hallucinatory; it wouldn’t be inaccurate to describe it as a (short-lived) bad trip on a drug that one didn’t voluntarily take, and never really wanted. It is almost admirably creative in its ability to take mild discomfort (which is inevitable with any anxiety disorder) and transform it, wholly in the mind, into acute danger. A tight shoulder muscle becomes “chest pain”, and cold extremities become “imminent hypothermia”, and the fatigue that sets in after 20 minutes of panic (if the attack hasn’t abated by then, which it usually has) becomes a threat of fainting (which any proper hypochondriac knows by the medical term, “syncope”.) I know that this sounds fucking ridiculous. Sufferers of the disease would agree; it is. But the nature of a panic attack is that baseless and undefined fear reaches such a crescendo that it will (for a minute or two) override sensible cognition. 
That extreme, biologically-induced fear needs to crystallize around something and it is usually something in the body– that machine that (usually) works so well running on its own, for billion-year-old reasons that science is just starting to understand, and that it is impossible for us, on a second-by-second basis, to consciously manage.

The popular image of a hypochondriac is of someone who either convinces himself that he is sick, or feigns illness because he enjoys attention and sympathy. An actual hypochondriac is the opposite. First, we have no incentive to pretend and, if anything, we downplay our suffering. It’s fucking humiliating, and it would depress the shit out of most people, and my desire with regard to negative moods is anti-reproductive; I combat them by making efforts not to spread them. It’s the one thing I can do. Depression and anxiety are not physically contagious and my job is to prevent social contagion. If there’s any danger of error in a hypochondriac, it’s that we may (later in life, when life-threatening physical illnesses are more common) misinterpret dangerous health problems as just panic attacks and not seek care. Second, we generally do not frequent ERs when we have health-anxiety-induced panic attacks. Contrary to the image of a hypochondriac as someone who is stupidly or delusionally convinced of an illness he doesn’t have, we know cognitively that we are probably fine, but are overwhelmed by the intense physical symptoms and racing, uncontrollable negative thoughts. When you misinterpret (due to crossed neurological wires, not stupidity) neck tension as the throat closing up, you will fucking panic. (In fact, your breathing is fine. But you feel like you are choking and it is going to freak you out.) 
Going to an ER during a panic attack doesn’t help: you’ll spend four hours surrounded by hospital staff who resent you (“another one of these rich white pieces of shit ‘freaking out’ after a hard day at work”) and you’ll see people who are suffering from painful and more severe health issues (panic fuel) than the one you’re having… only not to get the medical attention you need because (a) ERs are intended for life-threatening conditions rather than subjectively terrifying ones and (b) ER doctors can’t be sure that you’re not a drug-seeker and are therefore conservative (for understandable reasons) when it comes to dispensing medicine. I’ve been to an ER twice for a panic attack (more on that, later) and it turned one of the worst experiences of my life into… an even worster experience. And yes, I made up the word “worster” because some things are so shitty that they entitle a person who has experienced them to make up words. Third and most importantly, we do not “freak out” because we want to get out of work, win sympathy, or otherwise gain favor from other people. First of all, that doesn’t work, especially not with stigmatized conditions. Most of us hypochondriacs are type-A control freaks (after all, hypochondria derives in part from our inability to know or control the operation of our own bodies) who, if anything, love to work a little bit too much. Trust me on this: I enjoy my job, I’m good at it, and I would absolutely love to be able to work 80 hours per week (not to say that I would work that much, but I’d like to have the ability) and not lose a single minute, ever, to anxiety or panic.

That’s enough of this shit, but this rant wouldn’t be complete without an indictment. One of the most important things to understand about mental illness is that, in terms of origin, it’s usually “no one’s fault”. My parents didn’t cause this; they were great. Nor is it my fault, really. I didn’t ask to have it. Nor is it the fault of society or of past relationships and jobs or present ones… for the most part. Based on genetics, it’s almost a guarantee that a person of my makeup would have struggles no matter what circumstances he landed in. It’s an interesting trade: 4 standard deviations of this fetishized quantity called “intelligence” in exchange for losing 5% of your time to painful mood and anxiety disorders. I’m not bitter about the deal, even though I didn’t make it voluntarily. To be honest, I’d probably make it again. I’m glad I’m me. It really sucks some of the time, but isn’t that true for everyone?

No one’s at fault for the fact that I’ve had panic attacks, but I’m going to throw some stones at those who’ve made the condition worse. The U.S. medical system, to put it bluntly, can die in a fucking taint fire. I’ve had good doctors and bad ones, and I continue to see the good ones despite my acquired doctor-phobia because I’m rational, but the system itself is a moral disaster.

My first panic attack came in 2008. It was scary, bizarre, and confusing. Though many come with warning and a build-up, this one hit immediately. This sudden and absolutely terrifying “mystery” health problem involved vomiting, tunnel vision, apprehension and shaking. It came on at 2:37 pm. I tried to drink water and it was physically impossible to swallow. At 2:41 pm, I was white as a ghost but lucid. At 2:46 I began vomiting and screaming (in front of work colleagues). At 2:50 I was fine. Around 3:00 an ambulance was called and I arrived at an ER at 3:15. I saw a lot of people who were suffering, and worried about that being my immediate future, so I had a couple of anxiety waves (none reaching the level of the original one). Around 7:15, I got about 72 seconds of contact with a doctor who diagnosed it as a panic attack. So, that is what a fucking panic attack is. See, I’d thought I’d had “panic attacks” before but, in retrospect, those were mild anxiety attacks. The difference is in degree. If anxiety is sugar, panic is coke.

The second one was worse. The first panic attack is scary but might just be a one-off. The second comes with the “yep, I’m now going to be a psychiatric cripple for a while” realization. It came a week later. Most of my panic attacks come in the late afternoon, but this one came around 1:00 in the morning. It made sleep impossible (exacerbating the illness) and rolled along for about 22 hours. A typical panic episode has 3-4 peaks spread out over 15-30 minutes. This one must’ve had 100 peaks. I was exhausted, dry-heaving, unable to keep food down. At random times during that day, my vision would suddenly go blurry, or I’d have an intense whole-body tingling, or I’d feel weightless (not in a good way, but like one is leaving the planet forever but will never die). Finally, at 6:00pm that evening, the girl I was living with forced me to go to the ER. I arrived at 6:38pm.

I was living in Williamsburg. I like Brooklyn but dislike Williamsburg: there is an intense negative energy there. It was full of young people who were arrogant, flaky, and full of bullshit Burning Man drug-wisdom, which can combine with the “openness” of one’s mind upon acquiring a new disability to make one more scared of what’s going on than one should be. (“You’re entering a new spiritual plane!” vs. “You’ve developed a treatable condition and, if you take your meds and pursue cognitive-behavioral therapy, you’ll be able to hold down a job and function normally within 6-12 months.”) I bring this up because, while Williamsburg is an affluent part of Brooklyn, the hospital that we chose to go to was… not. It was in the ghetto, and it was badly run.

I learned this later: when you present with a panic attack, physicians are supposed to run a battery of tests to rule out other conditions that can produce similar symptoms. This is good for two reasons. First, although they are rare in 24-year-old males, there are far more serious health conditions with similar symptoms that ought to be excluded. Second, it lets you (as patient) know that you are objectively healthy. Panic attacks are not nearly as scary when you are able to convince yourself, 100%, that “just panic” is what they are. If every panic attack felt like “just a panic attack” they wouldn’t be scary. It’s their weird-ass inventiveness at coming up with new symptoms that makes them terrifying. Anyway, that didn’t happen at my first ER visit, so I demanded it at my second. I went to triage and said, perhaps with some exhaustion due to 17 hours of mental anguish, “I know that I’m just having a panic attack but I want you to run all the requisite diagnostic tests and tell me what the fuck I have to do to fix my fucking brain.” I may have been a bit pushy, given the state that I was in. The staffer didn’t like this. He didn’t like me. I was a white kid living in Williamsburg with a $100+k-per-year job at age 24. He probably assumed that I had blitzed my brain on designer drugs (I hadn’t). He dumped me in a psychiatric ER. Now that is a place where I never again want to be.

You can leave a regular ER if the wait time is unreasonable, but not in the psychiatric section. Once you’re in, you can’t get out. They take your shoelaces, they take your money, and you can’t be discharged until you’ve seen a doctor (which may be, as it was in my case, more than 6 hours later). There was a loud television, and what struck me in the hypersensitivity of acute panic was how negative the content was: commercials designed to exploit envy and insecurity, some episode of some shit about teenagers being horrible to each other due to boredom. Most people (including myself, in normal moods) can be exposed to that low-level negativity and not be affected by it; it’s just crass entertainment. But it is fucking irresponsible to blast that shit, at full volume, into a crowded psychiatric ER with 14 people and 12 chairs. Most people in psychiatric ERs are recovering from panic attacks (and psychiatric ERs are probably among the worst places to recover from anything); they are hypersensitive and should not be jostled with the negativity of the world. I mean, for fuck’s sake… pretend you actually care about these people instead of just wanting to drown them out with TV noise.

During that six hours, I heard a number of screaming matches between staff and patients, and while one patient stood out as particularly loud and pugnacious, many of them did not deserve the harsh treatment they got. I, for my part, was treated well because I mustered the presence of mind to learn the rules and follow them. Sit down (on the floor, because there weren’t enough chairs) and don’t scream or piss anyone off. Be polite when you need to use the bathroom. Definitely don’t say, “I could get Xanax on the street instead of waiting for you assholes.” (The woman who screamed out that line must’ve added a few hours to her wait. Also, you’ll never get a benzo at an ER, even if you actually need one. Also, there’s a cop standing right there at the door, and while he does have better things to do than give a shit about such things being said, show some respect for the law.) I followed the rules, put up with my six-hour wait, tried to read although it was impossible to concentrate, and watched people with far more serious mental health issues suffer. Because of my own state, the most prominent thought, and I’m ashamed of this now, was: “I hope that’s not my future.” Turned out, it wasn’t. But I’ll get there.

I got to see a doctor (the one doctor, on a Monday) around 1:00 in the morning. For all the horseshit of that ER and hospital, he was pretty good. He was surprised that I ended up in a psychiatric ER (and apologized for the fact) and he explained to me the physiology of panic attacks and, while he couldn’t prescribe, he gave me a referral.

There’s one more bit of the story that I have to tell. A month later, I went to a specialist for something else, and he discovered that it wasn’t a “phantom” health problem (or hypochondria) that was triggering the panic attacks. At least, it wasn’t then. I’m now at the point where just reading about a disease can give me a panic attack, but at that time, I had a real health issue. The difficulties I’d been having with breathing and swallowing were caused by a bacterial plaque that had formed in my throat after a bout of flu (that I, being stupid and macho and 24, tried to work through) from which I hadn’t properly recovered. The chest pains that I’d been having were LPRD/GERD (acid reflux) triggered by flakes of the plaque breaking off and fucking up my guts. The derealization and hallucination I’d been experiencing were not a brain tumor but typical of high-level panic attacks (thankfully, I rarely have them now.) Once discovered, that problem was easy enough to cure… but I spent a month with a giant bacterial plaque in my throat. I think that even normal people would get panic attacks after having a nasty motherfucker like that living inside them. But I digress.

So why do I indict the American medical system? From my two ER visits, I learned (perhaps in error, because I don’t think that all ERs are bad; I’ve just never met a good one) that emergency rooms will resent and ignore me. This is fine, because an ER is a terrible place to go during a panic attack, and if I ever have a real ER-worthy condition, the odds are that I won’t be conscious. I was eventually able to find good doctors (a psychiatrist for the panic, which persisted even after the throat condition was cured; and an ENT in Chinatown who managed to figure out what was actually wrong with me, physically speaking) but I had to seek out the specialists and hope that my insurance would cover the visits. I was my own Dr. House. The worst bit was the nightmare of insurance. Let me put it in no uncertain terms: U.S. health insurance is a fucking scam, and while “Obamacare” (PPACA) has improved the situation, the basic facts of it remain. In a way, health insurance is the most brilliant swindle there is. First, it picks a great target: sick people. When you rob sick people, they’re less likely to fight back or kick your ass or throw a year of life (when many of them have not so many years left) into a lawsuit. The problem with robbing sick people is that most of them don’t have any money, because they tend to be old and unemployed. So they’re soft targets, but with little meat to chew on. Health insurance is brilliant, as far as ignominious crimes go, because it collects premiums while people are well, cash-rich, and generally young… and then denies promised care when (in general) they are too old, sick, and poor to fight back. It’s like a robber with absolutely no sense of ethics discovered time travel! It’s fucking evil, but give credit where credit is due. Health insurance is an amazingly inventive form of theft.

If you’re not from the U.S., you cannot imagine how bad our health care system is. The quality of care, when it is delivered, is, I would say, spotty. We do have world-class researchers, and there are many individual doctors and nurses who are excellent. I wouldn’t take that away from anyone. But our hospitals are depressing and dangerous places where super-bugs breed, because our government doesn’t have the spine to disallow the (cruel and needless) abuse of antibiotics in fucking factory meat farms. Relevant to this topic, our billing/insurance system– even if you’re insured, you’re likely to face enormous out-of-pocket costs if you get seriously sick– compounds the stress of illness and has been a contributing factor to thousands of deaths every year. It’s terrible. I’d call it “Third World” but it’s honestly worse. It’s one thing not to have the resources, as Third World medical systems often do not. It’s another to be rich in resources but to be a greedy fucking asshole. If there is a hell, I’m sure that the architects of the U.S. health insurance system are going to receive punishments that’d make Dante faint.

Health insurance may seem to be an orthogonal, after-the-fact issue when one is talking about panic. In fact, it taps into what, I think, is at the core of panic. What is it that one truly fears during a panic attack? I’ve said this before, and I’m not bullshitting: I don’t fear death. At least, I don’t fear it in the abstract. It will happen to me and, while I would prefer for it to take its time, I am at peace with it, and I think that most people (including most panic sufferers) are. This body will turn into a corpse, I will pass into another state of existence, and (if reincarnation is true) I will re-emerge as a person who will probably never hear the name “Michael O. Church”. To me, the one thing that is most comforting is that death and panic are opposites. Death is an end of this life; we don’t know what follows it, but we know that we ain’t here anymore. It’s impermanence. By contrast, in panic, there is a fear of permanence or finality or stuckness. It’s not that the suffering (physical symptoms of health problems one does not have) is intolerable, but there is a sensation that it will never end. (Of course, it always ends.) Sometimes, there is the fear, “I will die like this.” In fact, panic attacks are non-lethal. I am not a man of steadfast faith, but I feel comfortable in the belief that whatever being dead is like (and, of course, I don’t know what it is like) is quite different from a panic attack, and probably much nicer.

I don’t believe that panic is actually about death. I think that it’s about humiliation and disempowerment and, finally, abandonment. So let’s talk about those three fears. Are panic attacks humiliating? A little bit, but most people in the vicinity of a person having an attack are not going to have the intense focus on the event that the sufferer does. I’ve had panic attacks in public and I doubt most people remember them. Are they disempowering? They can be, and some people become shut-ins if they get really bad, but the truth is that a person who is flooded with adrenaline is actually at the peak of his physical power (although he will be utterly exhausted when the adrenaline wears off). Panickers fear “losing control”, but the adrenaline’s purpose is to make sure they have total control if any real danger should present itself (this means there is a loss of non-essential control, and that produces many of panic’s trademark symptoms). As far as disabilities go, I think that panic attacks are one of the less disempowering, unless they become extremely frequent.

So then, let’s talk about abandonment. I think that it’s something that all of us, and even those who seem to be self-reliant badasses, fear. One can hold a cognitive belief that one is owed nothing by others in the world and not suffer for it. Many people, when they acquire disabilities (such as my relatively mild one) attempt to minimize the disability’s impact on others and be as self-supporting as possible. That’s all fine and good. However, I think that we, as humans, have an ancestral terror with regard to abandonment. We’ll be self-reliant as much as we can, but we need to believe that others will care for us if we are suddenly struck down. I think that most people are OK with the concept of eventual death, but the idea of being left to die, when they could be saved, strikes a primal chord. This, of course, gets to why there is, in many, a more bitter hatred toward health insurance companies (who murder by inaction) than there is toward cancer (which, though not a conscious organism, does the actual killing).

Now we have the U.S. medical system (and, most relevantly, these monstrosities that we call insurance companies) in our crosshairs, because abandoning people is what they do. Doctors don’t; if anything, they are eager to save lives whenever possible. But the rest of the system conspires to deny care, push people away, and let sick people die on someone else’s doorstep.

See, it was irritating that I spent 4 hours in an ER, having to wait to be told that my life wasn’t actually in danger. And spending 7 hours in a psychiatric ER (a prison, in essence) when I didn’t need to be there was a pretty miserable ordeal as well. That shit, though, is small potatoes. I’m OK. (It wasn’t that way for this woman, who died of deep vein thrombosis after a 24-hour (!!!) wait in a psychiatric ER’s waiting room). Then I had to deal with billing, and insurance, and insurance run-arounds, and denials of care that were explicitly contrary to law… for years. Having to leave my job, I had to shell out for COBRA only to have basic claims denied for arcane reasons that were clearly illegal. (“What are you going to do, unemployed, sick person? Sue us?”) That was 2008. Luckily, I had the good sense to find physicians who’d accept fair rates, and I ended up OK, but I also knew that if I did develop a life-threatening health problem, I’d be at the mercy of some absolutely horrible organizations.

It was probably 2010 before I recovered to the point of being traditionally employable (and a bit longer before I had the guts to leave the crappy startup I was at), and there is a large class of jobs that is probably out of the question forever. I’m 31 years old and quite functional (as a programmer, I’d say that I prefer to be “purely functional”) but being a hot-shot trader is pretty much not in the cards. Anyway, let’s talk about 2008 and 2009. I took a job at a pre-A startup, knowing that I still wasn’t well enough to deal with an 8-hour day in a typical tech office. (Actually, I’m surprised that normal people can withstand 8 hours in an open-plan office. Noise is one thing, but being visible to so many other people is horrendous. Every worker ought to be entitled to a barrier at his back.) At this startup, I had no health insurance. You know what’s worse than having a hypochondriacal panic attack? Having a hypochondriacal panic attack and knowing that you have no health insurance, which means that all these health problems the mind is inventing could actually lead to disaster. To put it bluntly, the US healthcare system took a period of my life that should have been one of recovery, and made it one of continuing stress and unraveling. It would not surprise me if I were diagnosed with PTSD based both on the period during which I was underinsured and received poor care, and on the panic-onset recovery period (late 2008-2009) during which I was uninsured.

I recovered. The panic attacks are pretty rare now. I have a good job, I’m married, I have two beautiful cats, and I even have decent health insurance– well, I think so; of course, the only way to actually test your health insurance is to become seriously ill– and can access decent doctors for my continuing medical needs, which are relatively low in expense and volatility. I have, for the most part, beaten this motherfucker. That said, I still have the attacks on occasion. Maybe it’ll be a benign heart palpitation or stomach pain that sets it off. Maybe it will be a bad memory pertaining to the dangerously inept medical treatment I received in the past, that spills over into a flashback. These are things that I shouldn’t have to deal with. I am fucking sick of the fucking panic attacks, and I am fucking sick of living in a society that thinks that it is OK for hospitals and insurance companies to take sick people and, out of a level of greed that even the robber barons would consider abhorrent, stress them out and fuck up their lives even further.

I am, and this should be obvious, disappointed by the progress achieved by President Obama on the healthcare front. Did he improve the system, incrementally? Absolutely, but not by enough. Getting rid of “life caps” and shooting down the scam plans was a good thing, no question. That said, health insurers will invent new ways to fuck people over and in 5 years we’ll be back at the same old shit. The system is too rotten to be improved by any incremental means. You can’t transplant new organs into a patient whose body is 85% cancer. At that point, it’s well past over. We need a single-payer or public-option system, and the existing private insurance companies need to die. Right now we have a system in which the doctors work overtime to heal and fix people, but the hospital billing departments and the insurance companies work overtime to stress them out and re-sicken them. It’s ludicrous. It’s like paying one person to dig holes and another to fill them up, but with more severe ethical ramifications because peoples’ health is at stake.

I write this not as a victim of a rather boring (to tell the truth about it) condition, because I am no victim. At my worst, I was no sicker than most people will be at their worst. Instead, I am a courier, and the message is clear: destroy the current, failing, morally execrable system, and build something new.

Why high-deductible medical insurance often doesn’t do what it’s supposed to.

“There was a friend of mine growing up, call him Tom, whose father was a health insurance executive. Once a month or so, he’d come for dinner and sleep over because his Dad was just in a foul mood, would go upstairs, not be able to cook, and not want to talk to anyone. I asked Tom why his dad had so many bad nights, and he’d explain that his father was a health insurance executive. I didn’t get it at first. Finally, when Tom was about 16, I asked him to explain this matter further. What causes his father’s bad nights? Tom laid it out straight: ‘Those are the days when Dad saves a man at work.’ “ — Unknown American origin.

As the medical insurance and healthcare picture in the U.S., despite the best intentions of at least a few left-leaning policymakers, continually gets worse over the decades, it’s becoming common for health insurance plans to have high deductibles, sometimes as high as $10,000 for a family if one needs to go “out of network”. Moreover, given that health insurers will just decide not to cover things because some college dropout or failed-into-the-dark-side doctor decides that a treatment is “not medically necessary” (against the word of an actual fucking doctor) or a “lifestyle” treatment, even “out-of-pocket maximums” can’t always be trusted. Being “insured” means less by the day.

All of this said, for young and relatively well-off people, these high-deductible plans with HSAs seem like a good deal. On paper, taking one can be a reasonable bet, and if there were a way to guarantee that they covered all medical expenses, I might agree. If you have a few thousand dollars or more in the bank, and you’re not likely to get sick, then you’re probably only giving up a few hundred dollars in expectancy by taking the high-deductible plan. So what goes wrong? What is the unexpected and systemic issue with high-deductible plans?
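To make the “few hundred dollars in expectancy” claim concrete, here is a back-of-the-envelope sketch. All figures (premiums, deductibles, probability of illness, bill size) are hypothetical, chosen only to illustrate the shape of the comparison:

```python
# Back-of-the-envelope expected-cost comparison for a healthy young person.
# Every number below is hypothetical, for illustration only.

def expected_annual_cost(premium, deductible, p_sick, typical_bill):
    """Premium plus expected out-of-pocket spending.

    Out-of-pocket spending is capped at the deductible; this naive model
    ignores the negotiated-rate problem entirely (which is the whole point
    of the argument that follows).
    """
    out_of_pocket = min(typical_bill, deductible)
    return premium + p_sick * out_of_pocket

# Hypothetical low-deductible plan: higher premium, tiny deductible.
low_ded = expected_annual_cost(premium=4800, deductible=500,
                               p_sick=0.2, typical_bill=3000)

# Hypothetical high-deductible plan: lower premium, $10,000 deductible.
high_ded = expected_annual_cost(premium=4400, deductible=10000,
                                p_sick=0.2, typical_bill=3000)

print(low_ded)   # 4900.0
print(high_ded)  # 5000.0
```

On these made-up numbers, the high-deductible plan costs only about $100 more in expectation — which is why, on paper, it looks like a perfectly reasonable bet for someone unlikely to get sick.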

Libertarians like the idea of high-deductible plans insofar as they encourage patients to respond to economic signals when choosing treatments. While this appears to be a fine idea (on its own terms, that is) on paper, there are a number of issues with it. First, free markets work well at solving a large number of pricing problems, but healthcare has extreme time behavior that other markets don’t. An issue that costs $500 to treat now might cost the patient, or society, $100,000 in a year if untreated. Markets work best when short-term signals reflect long-term conditions, and poorly when there’s a severe discrepancy between the two. Second, there’s a huge information asymmetry for patients, who simply don’t know enough to make informed decisions. Most patients would do better to trust their doctors than to try to make every single medical decision for themselves. This means that exposing patients to “price signals” is at best pointless and, at worst, dangerous. Due to the already-mentioned time behavior of most medical problems, by “dangerous” I also mean “expensive”.

What goes wrong with high-deductible plans? It’s not that deductibles are inherently a bad concept. They apply to auto insurance policies and are generally pretty harmless. The problem with high-deductible plans is this: while insurance companies are trypophobia-inducing clusters of assholes, the “good” news is that they’re assholes to hospitals and medical billing departments as much as to patients, and they have leverage, and they twist arms, and they get prices down. The result of this is that medical bills assessed to fully insured people are about a third as high as those assigned to the uninsured. The medical industry has high fixed costs, and no one knows what a service “should” cost, and uninsured or underinsured patients are so unlikely to pay (and, quite often, unable to pay) that billing departments will just plain price gouge. It’s ridiculous and perverse, and it’s questionable whether it should even be legal to set fees after a service is rendered. Hotels, restaurants, and transportation agencies have to set a price before the consumer makes a decision, but hospitals get to make up numbers after the service is rendered, resulting in absurdities like $250 charges for “mucus collection system” (in non-asshole language, a Kleenex). The only checks against this are the health insurance bureaucrats. While they’re clearly motivated by corporate greed rather than good intentions, this class of people indirectly benefits policyholders by knocking prices down and reducing premiums.

If we accept that insured patients pay medical bills indirectly, then at least the insured patient has an asshole on his side in negotiation with medical billing departments. The insurer will say, “accept this price or you’re ‘out of network’ and will get fewer patients”. As an individual, though, no patient can say “reduce the damn bill or I’ll never get appendicitis in your ER again”.

The problem with high-deductible plans is that, when a young person insured under one gets sick and incurs a mid-sized bill (say, $1,500), the insurer has no incentive to engage in the arm-twisting (arm-twisting that is directly responsible for slashing insured patients’ bills by 60 to 80%, and that you will miss dearly, should you have to pay a medical bill directly) that they absolutely would do if they, as insurer, were paying the bill. (This is different if the insured person is frequently sick and likely to overflow the deductible on a regular basis; but until recently, people like that couldn’t even get insurance.) Don’t get me wrong: I’m not pro-arm-twisting in general. I’d like to see doctors and nurses and medical technicians fairly compensated, not driven to the bottom. In fact, I’d much prefer to re-join the First World and replace our rotten system with a public-option or single-payer system. I’m only saying that, as an individual, I’d prefer to have an asshole arm-twister negotiating my bills down rather than not have one.
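The numbers above make the incentive problem stark. Taking the 60-80% reduction at face value (and using an illustrative 70% discount and the $1,500 bill from the text; both are hypothetical inputs, not data), the below-deductible patient faces roughly triple what a fully insured patient’s bill would have been:

```python
# Illustration (hypothetical figures): the same mid-sized bill under two plans.
# The text says insurer arm-twisting slashes bills by 60-80%; assume 70% here.

list_price = 1500          # what the billing department invents
negotiated_rate = 0.30     # fraction remaining after a 70% reduction

# Zero-deductible plan: the insurer pays, so the negotiated rate applies.
insurer_pays = list_price * negotiated_rate

# High-deductible plan: the bill is under the deductible, the patient pays,
# and no arm-twisting happens -- so the patient may face the full list price.
patient_pays = list_price

print(insurer_pays)   # 450.0
print(patient_pays)   # 1500
```

The point is not the specific numbers but the asymmetry: the discount exists only when the party with leverage is the one writing the check.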

High deductible health insurance would be a reasonable idea, and appealing to high-income young people like me, if there were some way to guarantee that the insurer would negotiate just as aggressively as if the deductible were zero and the insurer were paying the bill in its entirety. Unfortunately, I am not aware of any way to enforce that.

There is no “next Silicon Valley”, and that’s a good thing.

I recently moved to Chicago and, a couple weeks later, found myself reading this article: Why Chicago Needs to Stop Playing by Silicon Valley’s Rules. I agree with it. I also want to speak more generally on “the next Silicon Valley”. It doesn’t exist. The current Silicon Valley is succeeding in some ways ($$$) and failing in others (everything else) but the truth of it is that it’s an aberration. It has as much staying power as the boomtowns surrounding North Dakota oil. Trying to replicate it is like attempting to create one of those ultraheavy chemical elements that lasts for 50 nanoseconds, but less interesting and far less cool.

I’m 31 years old, which is about 96 in Silicon Valley years, and I’ve seen a lot of the country and world, and I’ve come to the conclusion that “the next Silicon Valley” is a doomed ambition because it’s a pretty fucking lame one.

Rather than explain this, I think that a picture really is worth a thousand words here. So let’s look at some inspiring, creatively energetic, cities. These are the sorts of places that inspire ordinary people to reach for the extraordinary, instead of the reverse.

Budapest:

Chicago:

New York:

London:

Paris:

Okay, so now let’s take a look at Silicon Valley.

I think my point is made by these pictures. There is a sense of place in the world’s great cities that just isn’t found at 5700 Technology Park, Suite #3-107, Nimbyvale, CA 94369.

Why no location can electively become “the next Silicon Valley”

I think the pictures above tell the story well. Becoming the next New York or Budapest or Paris or Chicago is a worthy vision, although any city will develop its own identity more quickly and more successfully than it can replicate another. Becoming the next Palo Alto is fucking lame. Now that the cherry orchards are gone, the only thing that the Valley has is money, and “I just want more money” is a pathetic ambition that leads to failure. Money has to come from somewhere, so it’s worthwhile to understand the source of the money and whether a region’s success is replicable (and desirable).

Silicon Valley is rich because it’s a focal point for passive capital. This capital, drawn from teachers’ pension funds and university endowments, gets funneled through a machine called “venture capital” that is supposed to throw its money behind the most promising new businesses. Yet for reasons that most would find unclear, those funds tend to be directed toward a small geographic area. Now, the passive capitalists don’t especially care where their money is sent, so long as they get good returns. If the best business decision were to put most of that money into Northern California, that would be easily accepted by the passive capitalists, even if they live in other places. While an Ohio State Police pension fund might ideally prefer that some of the jobs created by their passive capital be created in Ohio, they’ll gladly see their money deployed wherever it gets the best returns. That means that the extreme concentration of deployment in California would be completely OK– if it were justified by returns on investment. However, venture capital has been an underperforming asset class for years, and there’s no sign that this will change, because VCs are incentivized to optimize for their personal reputations and careers rather than their portfolios, and that favors the behavior we see. With the returns being abysmal, perhaps the Palo Alto strategy ain’t working. Perhaps this extreme concentration of passive capital, creating jobs in already-congested places where even owning a house is extremely improbable for people who do actual work, is pathological.

My sense on the matter is that Silicon Valley is pathological, hypertrophic, and innately dysfunctional. Talent and capital like to concentrate, but not necessarily in that specific way, and not in such heated competition for resources with the existing economic elite (whose values are at odds with those of the most talented people). While it starves the rest of the country of attention and capital, Silicon Valley is past congestion and suffering for it, in terms of traffic and land prices. On paper, it’s set in a beautiful geographic area, and if you can get away from everything created by humans, California actually is quite pretty. That said, 22-year-olds without cars aren’t going to be impressed by Mountain View’s 200-mile radius when everything they actually see on a daily basis is an ugly, suburban shithole that they pay far too much to look at. Talented people do want to be around other talented people, but they prefer diversity in talent, not rows and rows of tech bros (who often aren’t very talented, but that’s another story). Because of talent’s natural draw toward concentration, and given the U.S.’s tendency toward high geographic mobility, I don’t think that this country will ever have more than 15 or 20 serious technology “hubs”– and even that would be a stretch, given that we currently have about five– but I do think that it’s possible to have a distribution that’s better for everyone involved. The current arrangement is bad for “winning” locations like Northern California, bad for the losing geographic areas, and bad for pretty much everyone individually except for extremely wealthy venture capitalists (who benefit from a reduced need to travel) and well-placed landlords.

As it exists, Silicon Valley probably shouldn’t. It’s a boomtown with ugly (and expensive) housing that wasn’t built to last. It has what could be a decent (if sleepy) almost-city in San Francisco, recently destroyed by the unintentional conspiracy of NIMBY natives (who create housing supply problems) and VC-funded new money. It is rich, on paper, and will be for some time. But replicating accident and pathology is a pretty lame strategy when directing the fate of a new place. The causes of Silicon Valley’s richness and (mostly former) excellence are more worthy of study than the superficial factors, like weather or workplace perks or representation in the entertainment industry.

What, then?

While “next Silicon Valley” is a lame ambition, there is something to that geographic region that makes it attractive to talented entrepreneurs. It provides a path to corporate hegemony that, at least by appearance, combines the “cool factor” of starting a business with the low risk of an investment banking or management consulting track. It encourages risk-taking and a superficially cavalier attitude toward failure, which appeals to a certain type of young person who has never failed and who hasn’t learned yet that life has stakes. The Valley has also done a great job of rebranding the corporate experience as something that left-leaning, upper-middle-class young people can swallow. Silicon Valley excelled in technology in the late 20th century; in the early 21st, it has proven itself world-class at marketing. Since brand-making is crucial to success in the sorts of first-mover, red-ocean gambits that VC (increasingly oriented toward attempting to sit inside the natural monopolies that technology sometimes creates, rather than actually building technology) now favors, that’s not surprising.

In business, there seems to be a continuum between low-risk, slow-growing businesses and “rocket ships” that burn up in orbit 95% of the time. There’s also a misperception, that I want to combat, that utter failure of new businesses is the norm. The risk exists, but 90% failure rates (while not uncommon in the Valley) are actually pathological. The actual failure rate is somewhere around 50 percent. In fact, compared to corporate jobs, the failure rate of typical small businesses (as opposed to VC-funded red-ocean gambits) isn’t much worse. Between firings, layoffs, political messes and damaged reputations relegating a person to second-class status, non-promotability, and less-desirable projects, it seems to be a constant that about 50 percent of jobs fail within 5 years. Of course, the range of outcomes is different; starting a business has more personal downside, and more potential gain. If there’s something that ought to be fixed in the process of new-business formation, it’s the amount of financial risk borne by those who don’t use venture capital.

For low-risk businesses that are unlikely to fail, bank loans are available. However, bank funding is a non-starter in launching even the least risky (“lifestyle”) technology companies, because bank loans tend to require personal liability, which means that you can’t use them for something that might actually fail. Bank loans are great if one wants to capitalize a franchise restaurant or a parking garage, but not suitable for anything that involves making a new product. At the other extreme, there’s VC. The mid-risk, mid-growth range is, however, overlooked. For a business carrying, say, a 20-30% chance of failure and targeting 40% annual growth, there’s no one out there. Why is that?

Venture capital could be just as profitable by investing in mid-risk businesses as it is by throwing money into the extreme high-risk pool. After all, if valuations are fair, then there’s just as much profit to be made investing in large companies as small ones. We’re probably not going to see a change in VCs’ behavior, though. The truth about that industry is that it’s celebrity-driven, and the VCs have a lot to gain and lose by playing the reputation game. No one cares about the difference between a 7% and a 12% annual return on investment, but there’s a lot of credibility that comes from having “been in on” a Facebook or a Google. This also explains the (justly) hated tendency of venture capitalists toward collusion, co-funding, and reliance on social proof. One might wish for VCs to compete with each other (i.e. do their jobs) and avoid this sort of mediocritizing collusion, but with the career benefit of being in on the once-per-decade whale deals being what it is, the incentive to spread information around (even at the expense of entrepreneurs, and of ethical decency) is obvious.
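It’s worth noting how large the gap between 7% and 12% actually is once compounding kicks in (figures below are a hypothetical stake, for illustration) — which underlines the point: if nobody in the industry cares about a gap this big, the incentive being optimized is reputation, not returns.

```python
# Compounding a hypothetical $1M stake for a decade at 7% vs 12% annually.
# The "nobody cares" gap is, in dollar terms, enormous.

principal = 1_000_000
ten_yr_at_7 = principal * 1.07 ** 10
ten_yr_at_12 = principal * 1.12 ** 10

print(round(ten_yr_at_7))    # 1967151
print(round(ten_yr_at_12))   # 3105848
```

Five points of annual return comes out to roughly 1.6x more money after ten years — yet, in a celebrity-driven industry, it buys less status than one logo on a portfolio page.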

A successful business could easily be built by focusing on the mid-growth / mid-risk space, and delivering an option that removes personal financial risk while avoiding the ugliness and the aggressive risk-seeking (even at the expense of the ecosystem’s health) of traditional venture capital. That would also reduce reliance, for businesses, on the geographical advantage of Silicon Valley, which is access to ongoing capital and the perception of a liquid market for talent. It could be very profitable. It could be this mentality that builds the next ten thousand great companies. It won’t be done in Silicon Valley, however; and when it happens, it won’t come from anyone attempting to create “the next Silicon Valley”, or even cognizant of such a concept. It will be driven by people creating the first something.

2015

I haven’t written much lately, in large part because I am trying to change course in terms of what I write about.

A change of focus?

Over the next year, I’d like to steer my focus toward more technical topics. CS 666 (software politics) is a subject that I’ve had to learn and use, the knowledge is important, and I’m glad to have shared it, and will continue to do so if I think that it’s good for the world. All that said, my heart’s not as much in it as it could be in other fields (like machine learning, language design, and even board game design, all of which are more dear to me than the MacLeod organizational model as it applies to software). It’s easy to focus on the intricacies of CS 666 and forget about the stuff that inspired us to get into technology in the first place. I swear that I didn’t join Google, back in 2011, hoping to become an expert in office politics. (Alternate summary: “I joined a Leading AI Company, and all I got was this lousy MBA’s Worth of CS 666 Knowledge.”) I wanted to level up on machine learning and software engineering… but a working knowledge of software politics is what I actually got from Google (and many other companies where I worked). At any rate, I think that I’ve put that to good use since then. I do what I can. But I’d rather focus on other stuff now.

The road to technical excellence (on which I am still a journeyman, not yet a seasoned ranger) is hard: you have to get high-quality work (which, in most companies, involves CS 666 in order to hack the project-allocation system) and be able to deploy it into the organization (ditto). Most programmers ditch the individual-contributor path in the manage-or-be-managed world of the closed-allocation mainstream, knowing that the only way to grow and protect expertise and excellence is to gain direct control of the division of labor itself. They learn the political game, become managers easily once that is done, rise into the executive ranks because managerial tracks in supposedly “dual-track” organizations are always easier to climb than the technical ladders, and are lost to the field as individual contributors. It’s good for them, but not always for the world. A cynic might say that what begins as a diversion into CS 666 becomes, for many, a permanent state of distraction.

At the same time, we have this epidemic of criminally underqualified, well-connected individuals getting funded and acquired. In this frothy state, tech seems to be all about fucking distractions. I don’t like that it’s happening, and I’ve said more than my piece on it. The question I have to ask myself, continually, is whether I am making real progress, or just contributing to that state of distraction that I dislike. And then I have to ask what is best for me. Looking at the next 12, 24, and 48 months… I’ll be honest, I’d like to learn more computer science and spend less time on CS 666. There’s just a lot out there that I don’t know, and should, not only in computer science but in mathematics, the sciences, and the arts.

In truth, my status as some sort of emerging “conscience” of Silicon Valley must be considered temporary, since I don’t even live there, and have no interest in ever doing so. (What does it mean when the conscience of a place lives thousands of miles away from it?) On all that political stuff, the best thing that could happen to me would be for me to meet someone with the right vision, but who’s better than I am at pushing it through, and who doesn’t (like me) secretly wish he could study machine learning and leave the CS 666 to someone else. Ten years from now, I don’t want to be dumping execration on the moral failures of the technology industry, the way I do now. I want to see that we have grown the fuck up and solved our own problems. That will require people who are like me, but even better in battle, to take charge and start fighting.

The good news, for me individually, is that I think I just might be reaching the level of capability where the politics starts to get less intense. When you’re 22 and unproven, you’re going to have to fight political battles just to get the good projects, and to get recognized for what you’ve done. It’s an ugly process of trial and error that I’d like never to repeat. Now I’m 31, more established in terms of talent, and would like to see myself protecting the good, in the future, rather than needing protection. Time will tell how that goes, but I’d like to finish next year with more technical articles and fewer political ones on the blog.

Chicago

In other news, I’m moving to Chicago in early January. I’m quite excited about the move. Anyone who wants to get together over pizza (either kind!) and beer and talk about functional programming, machine learning, board games, or why the world would be better if it were run by cats, should reach out.