The tyranny of the friendless

I’ve written a lot about the myriad causes of organizational decay. I wrote a long series on the topic, here. In most of my work, I’ve written about decay as an inevitable, entropic outcome driven by a number of forces, many unnamed and abstract, and therefore treated as inexorable ravages of time.

However, I’ve recently come to the realization that organizational decay is typically dominated by a single factor that is easy to understand, being so core to human sociology. While it’s associated with large companies, it can set in when they’re small. It’s a consequence of in-group exclusivity. Almost all organizations function as oligarchies, some with formal in-crowds (government officials or titled managers) and some without. If this in-crowd develops a conscious desire to exclude others, it will select and promote people who are likely to retain and even guard its boundaries. Only a certain type of person is likely to do this: friendless people. Those who dislike, and are disliked by, the out-crowd are unlikely to let anyone else in. They’re non-sticky: they come with a promise of “You get just me”, and that makes them very attractive candidates for admission into the elite.

Non-stickiness is seen as desirable from above– no one wants to invite the guy who’ll invite his whole entourage– but, in the business world, it’s negatively correlated with pretty much any human attribute that could be considered a virtue. People who are good at their jobs are more likely to be well-liked and engaged and form convivial bonds. People who are socially adept tend to have friends at high levels and low. People who care a lot about social justice are likely to champion the poor and unpopular. A virtuous person is more likely to be connected laterally and from “below”. That shouldn’t count against a person, but for an exclusive club that wants to stay exclusive, it does. What if he brings his friends in, and changes the nature of the group? What if his conscience compels him to spill in-group secrets? For this reason, the non-sticky and unattached are better candidates for admission.

The value that executive suites place on non-stickiness is one of many possible explanations for managerial mediocrity as it creeps into an organization. Before addressing why I think my theory is right, I need to analyze three of the others, all styled as “The $NAME Principle”.

The “Peter Principle” is the claim that people are promoted up to their first level of incompetence, and stay there. It’s an attractive notion, insofar as most people have seen it in action. There are terminal middle managers who don’t seem like they’ll ever gain another step, but who play politics just well enough not to get fired. (It sucks to be beneath one. He’ll sacrifice you to protect his position.) That said, I find the Peter Principle, in general, to be mostly false because of its implicit belief in corporate meritocracy. What is most incorrect about it is the belief that upper-level jobs are harder or more demanding than those in the middle. In fact, there’s an effort thermocline in almost every organization. Above the effort thermocline, which is usually the de facto delineation between mere management roles and executive positions, jobs get easier and less accountable with increasing rank. If the one-tick-late-but-like-clockwork Peter Principle were the sole limiting factor on advancement, you’d expect that those who pass the thermocline would all become CEOs, and that’s clearly not the case. While merit and hard work are required less and less with increasing rank, political resistance intensifies, simply because the top jobs are so few that there’s bound to be competition. Additionally, even below the effort thermocline there are people employed below their maximum level of competence because of political resistance. The Peter Principle is too vested in the idea of corporate meritocracy to be accurate.

Scott Adams has proposed an alternative theory of low-merit promotion: the Dilbert Principle. According to it, managers are often incompetent line workers who were promoted “out of harm’s way”. I won’t deny that it exists in some organizations, although it usually isn’t applied within critical divisions of the company. When incompetents are knowingly promoted, it’s usually a dead-end pseudo-promotion that comes with a small pay raise and a title bump, but lateral movement into unimportant work. That said, its purpose isn’t just to limit damage, but to make the person more likely to leave. If someone’s not bad enough to fire but not especially good, gilding his CV with a fancy title might invite him to (euphemism?) succeed elsewhere… or, perhaps, not-succeed elsewhere but be someone else’s problem. All of that said, this kind of move is pretty rare. Incompetent people who are politically successful are not known to be incompetent, because politics-of-performance outweighs actual performance ten-to-one in terms of making reputations, and those who have a reputation for incompetence are those who failed politically, and they don’t get exit promotions. They just get fired.

The general idea that people are made managers to limit their damage potential is false because the decision to issue such promotions is one that would, by necessity, be made by other managers. As a tribe, managers have far too much pride to ever think the thought, “he’s incompetent, we must make him one of us”. Dilbert-style promotions occasionally occur and incompetents definitely get promoted, but the intentional promotion of incompetents into important roles is extremely rare.

Finally, there’s the Gervais Principle, developed by Venkatesh Rao, which asserts that organizations respond to both performance and talent, but sometimes in surprising ways. Low-talent high-performers (“eager beavers” or “Clueless”) get middle management roles where they carry the banner for their superiors, and high-talent low-performers (“Sociopaths”) either get groomed for upper management or get fired. High-talent high-performers aren’t really addressed by the theory, and there’s a sound reason why. In this case, the talent that matters most is strategy: not working hard necessarily, but knowing what is worth working on. High-talent people will, therefore, work very hard when given tasks appropriate to their career goals and desired trajectory in the company, but their default mode will be to slack on the unimportant make-work. So a high-talent person who is not being tapped for leadership will almost certainly be a low performer: at least, on the assigned make-work that is given to those not on a career fast track.

The Gervais/MacLeod model gives the most complete assessment of organizational functioning, but it’s not without its faults. Intended as satire, the MacLeod cartoon gave unflattering names to each tier (“Losers” at the bottom, “Clueless” in middle-management, and “Sociopaths” at the top). It also seems to be a static assertion, while the dynamic behaviors are far more interesting. How do “Sociopaths” get to the top, since they obviously don’t start there? When “Clueless” become clued-in, where do they go? What do each of these people really want? For how long do “Losers” tolerate losing? (Are they even losing?) Oh, and– most importantly for those of us who are studying to become more like the MacLeod Sociopaths (who aren’t actually sociopathic per se, but risk-tolerant, motivated, and insubordinate)– what determines which ones are groomed for leadership and which ones are fired?

If there’s an issue with the Gervais Principle, it’s that it asserts too much intelligence and intent within an organization. No executive ever says, “that kid looks like a Sociopath; let’s train him to be one of us.” The Gervais model describes the stable state of an organization in advanced decline, but doesn’t (in my opinion) give full insight into why things happen in the way that they do.

So I’m going to offer a fourth model of creeping managerial mediocrity. Unlike the Peter Principle, it doesn’t believe in corporate meritocracy. Unlike the Dilbert Principle, it doesn’t assert that managers are stupid or that their jobs are unimportant (because we know both to be untrue). Unlike the Gervais Principle, it doesn’t believe that organizations knowingly select for cluelessness or sociopathy (although that is sometimes the case).

  • The Lumbergh Principle: an exclusive sub-organization, such as an executive suite, that wishes to remain exclusive will select for non-stickiness, which is negatively correlated with most desirable personal traits. Over time, this will degrade the quality of people in the leadership ranks, and destroy the organization.

If it’s not clear, I named this one after Bill Lumbergh from Office Space. He’s uninspiring, devoid of charisma, and seems to hold the role of middle manager for an obvious reason: there is no chance that he would ever favor his subordinates’ interests over those of upper management. He’s friendless, and non-sticky by default. He wouldn’t, say, tell an underling that he’s undervalued and should ask for a 20% raise, or give advance notice of a project cancellation or layoff so a favored subordinate can get away in time. He’ll keep his boss’s secrets because he just doesn’t give a shit about the people who are harmed by his doing so. No one likes him and he likes no one, and that’s why upper management trusts him.

Being non-sticky and being incompetent aren’t always the same thing, but they tend to correlate often enough to represent a common case (if not the most common case) of an incompetent’s promotion. Many people who are non-sticky are that way because they’re disliked and alienating to other people, and while there are many reasons why that might be, incompetence is a common one. Good software engineers are respected by their peers and tend to make friends at the bottom. Bad software engineers who play politics and manage up will be unencumbered by friends at the bottom.

To be fair, the desire to keep the management ranks exclusive is not the only reason why non-stickiness is valued. A more socially acceptable reason for it is that non-sticky people are more likely to be “fair” in an on-paper sense of the word. They don’t give a damn about their subordinates, their colleagues, and possible future subordinates, but they don’t-give-a-damn equally. Because of this, they support the organization’s sense of itself as a meritocracy. Non-sticky people are also, in addition to being “fair” in a toxic way that ultimately serves the needs of upper management only, quite consistent. As corporations would rather be consistent than correct– firing the wrong people (i.e. firing competent people) is unfortunate, but firing inconsistently opens the firm to a lawsuit– they are attractive for this reason as well. You can always trust the non-sticky person, even if he’s disliked by his colleagues for a good reason, to favor the executives’ interests above the workers’. The fact that most non-sticky people absolutely suck as human beings is held to be irrelevant.

As I get older and more experienced, I’m increasingly aware that there’s a lot of diversity in how organizations run themselves. We’re not condemned to play out roles of “Loser”, “Clueless”, or “Sociopath”. So it’s worth acknowledging that there are a lot of cases in which the Lumbergh Principle doesn’t apply. Some organizations try to pick competent leaders, and it’s not inevitable that an organization develops such a contempt for its own workers as to define the middle-management job in such a stark way. Also, the negativity that is often directed at middle management fails to account for the fact that upper management almost always has to pass through that tier in some way or another. Middle management gets its stigma because of the terminal middle managers with no leadership skills: the ones promoted into those roles because their superiors could trust them, defects and all. However, there are many other reasons why people pass through middle management roles, or take them on because they believe that the organization needs them to do so.

The Lumbergh Principle only takes hold in a certain kind of organization. That’s the good news. The bad news is that most organizations are of that type. It has to do with internal scarcity. At some point, organizations decide that there’s only a finite amount of “goodness”– whether we’re talking about autonomy, trust, or credibility– and leave people to compete for these artificially limited benefits. Employee stack ranking is a perfect example of this: for one person to be a high performer, another must be marked low. When a scarcity mentality sets in, R&D is slashed and business units are expected to compete for internal clients in order to justify themselves, which means that these “intrapreneurial” efforts face the worst of both worlds between being a startup and being in a large organization. It invariably gets ugly, and a zero-sum mentality takes hold. At this point, the goal of the executive suite becomes maintaining position rather than growing the company, and invitations into the inner circle (and the concentric circles that comprise various tiers of management) are given only to replace outgoing personnel, with a high degree of preference given to those who can be trusted not to let conscience get in the way of the executives’ interests.
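The zero-sum arithmetic of stack ranking is easy to make concrete. Here’s a toy sketch (the quota fractions, scores, and label names are invented for illustration, not taken from any real HR system): labels are assigned purely by relative rank, so even a uniformly strong team is forced to produce a “low performer”.

```python
# Toy model of stack ranking ("forced curve"): performance labels are
# assigned by rank quota, not by absolute performance, so someone must
# always lose. Quota fractions are illustrative assumptions.

def stack_rank(scores, top_frac=0.2, bottom_frac=0.1):
    """Assign 'exceeds'/'meets'/'below' labels purely by relative rank."""
    n = len(scores)
    # Indices ordered from highest score to lowest.
    order = sorted(range(n), key=lambda i: scores[i], reverse=True)
    n_top = max(1, int(n * top_frac))
    n_bottom = max(1, int(n * bottom_frac))
    labels = ["meets"] * n
    for i in order[:n_top]:
        labels[i] = "exceeds"
    for i in order[-n_bottom:]:
        labels[i] = "below"
    return labels

# Even a uniformly excellent team gets a "below" mark: the scarcity
# is built into the mechanism, not the people.
team = [95, 94, 96, 93, 95, 94, 96, 95, 94, 95]
print(stack_rank(team).count("below"))  # at least one "below", regardless
```

The point of the sketch is that `stack_rank` never consults an absolute bar: change every score to 99 and the output distribution of labels is identical.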

One might expect that startups would be a way out. Is that so? The answer is: sometimes. It is generally better, under this perspective, for an organization to be growing than stagnant. It’s probably better, in many cases, to be small. At five people, it’s far more typical to see the “live or die as a group” mentality than the internal adversity that characterizes large organizations. All of that said, there are quite a number of startups that already operate under a scarcity mentality, even from inception. The VCs want it that way, so they demand extreme growth and risk-seeking on a shoestring budget (relative to the ambitions they require) and call it “scrappy” or “lean”. The executives, in turn, develop a culture of stinginess wherein small expenses are debated endlessly. Then the middle managers bring in that “Agile”/Scrum bukkake in which programmers have to justify weeks and days of their own fucking working time in the context of sprints and iterations and glass toys. One doesn’t need a big, established company to develop the toxic scarcity mentality that leads to the Lumbergh Effect. It can start at a company’s inception– something I’ve seen on multiple occasions. In that case, the Lumbergh Effect exists because the founders and executives have a general distrust of their own people. That said, the tendency of organizations (whether democratic or monarchic on paper) toward oligarchy means that they need to trust someone. Monarchs need lieutenants, and lords need vassals. The people who present themselves as natural candidates for promotion are the non-sticky ones who’ll toss aside any allegiances to the bottom. However, those people are usually non-sticky because they’re disliked, and they’re usually disliked because they’re unlikeable and incompetent. It’s through that dynamic– not intent– that most companies end up being middle-managed (and, after a few years, upper-managed) by incompetents.

Advanced Lumberghism

What makes the Lumbergh Principle so deadly is the finality of it. The Peter Principle, were it true, would admit an easy solution: just fire the people who’ve plateaued. (Some companies do exactly that, but it creates a toxic culture of stack-ranking and de facto age discrimination.) The Dilbert Principle has a similar solution: if you are going to “promote” someone into a dead end, as a step in managing that person out, make sure to follow through. As for the Gervais Principle, it describes an organization that is already in an advanced state of dysfunction (but it is so useful because most organizations are in such states) and, while it captures the static dynamics (i.e. the microstate transitions and behaviors in a certain high-entropy, degenerate macrostate), it does not necessarily tell us why decay is the norm for human organizations. I think that the Lumbergh Effect, however, does give us a cohesive sense of it. It isn’t enough to say that “the elite” is the problem, because while elites are generally disliked, they’re not always harmful. The Lumbergh Effect sets in when the elite’s desire to protect its boundaries results in the elevation of a middling class of non-virtuous people, and as such people become the next elite (through attrition in the existing one) the organization falls to pieces. We now know, at least in a large proportion of cases, the impulses and mechanics that bring an organization to ruin.

Within organizations, there’s always an elite. Governments have high officials and corporations have executives. We’d like for that elite to be selected based on merit, but even people of merit dislike personal risk and will try to protect their positions. Over time, elites form their own substructures, and one of those is an outer “shell”. The lowest-ranking people inside that elite, and the highest-ranking people outside of it who are auditioning to get in, take on guard duty and form the barrier. Politically speaking, the people who live at that shell (not the cozy people inside or the disengaged outsiders who know they have no chance of entering) will be the hardest-working (again, an effort thermocline) at defining and guarding the group’s boundaries. Elites, therefore, don’t recruit for their “visionary” inner ranks or their middling directorate, because you have to serve at the shell before you have a chance of getting further in. Rather, they recruit guards: non-sticky people who’ll keep the group’s barriers (and its hold over the resources, information, and social access that it controls) impregnable to outsiders. The best guards, of course, are those who are loyal upward because they have no affection in the lateral or downward directions. And, as discussed, such people tend to be that way because no one likes them where they are. That this leads organizations to the systematic promotion of the worst kinds of people should surprise no one.

Can tech fix its broken culture?

I’m going to spoil the ending. The answer is: yes, I think so. Before I tackle that matter, though, I want to address the blog post that led me to write on this topic. It’s Tim Chevalier’s “Farewell to All That” essay about his departure from technology. He seems to have lost faith in the industry, and is taking a break from it. It’s worth reading in its entirety. Please do so, before continuing with my (more optimistic) analysis.

I’m going to analyze specific passages from Chevalier’s essay. It’s useful to describe exactly what sort of “broken culture” we’re dealing with, in order to replace a vague “I don’t like this” with a list of concrete grievances, identifiable sources and, possibly, implementable solutions.

First, he writes:

I have no love left for my job or career, although I do have it for many of my friends and colleagues in software. And that’s because I don’t see how my work helps people I care about or even people on whom I don’t wish any specific harm. Moreover, what I have to put up with in order to do my work is in danger of keeping me in a state of emotional and moral stagnation forever.

This is a common malaise in technology. By the time we’re 30, we’ve spent the better part of three decades building up potential and have refined what is supposed to be the most important skill of the 21st century. We’d like to work on clean energy or the cure for cancer or, at least, creating products that change and save lives (like smart phones, which surely have). Instead, most of us work on… helping businessmen unemploy people. Or targeting ads. Or building crappy, thoughtless games for bored office workers. That’s really what most of us do. It’s not inspiring.

Technologists are fundamentally progressive people. We build things because we want the world to be better tomorrow than it is today. We write software to solve problems forever. Yet most of what our employers actually make us do is not congruent with the progressive inclination that got us interested in technology in the first place. Non-technologists cannot adequately manage technologists because technologists value progress, while non-technologists tend to value subordination and stability.

Open source is the common emotional escape hatch for unfulfilled programmers, but a double-edged sword. In theory, open-source software advances the state of the world. In practice, it’s less clear cut. Are we making programmers (and, therefore, the world) more productive, or are we driving down the price of software and consigning a generation to work on shitty, custom, glue-code projects? This is something that I worry about, and I don’t have the answer. I would almost certainly say that open-source software is very much good for the world, were it not for the fact that programmers do need to make money, and giving our best stuff away for free just might be hurting the price for our labor. I’m not sure. As far as I can tell, it’s impossible to measure that counterfactual scenario.

If there’s a general observation that I’d make about software programmers, and technologists in general, it’s that we’re irresponsibly adding value. We create so much value that it’s ridiculous, and so much that, by rights, we ought to be calling the shots. Yet we find value-capture to be undignified and let the investors and businessmen handle that bit of the work. So they end up with the authority and walk away with the lion’s share; we’re happy if we make a semi-good living. The problem is that value (or money) becomes power, and the bulk of the value we generate accrues not to people who share our progressive values, but to next-quarter thinkers who end up making the world more ugly. We ought to fix this. By preferring ignorance of how the value we generate is distributed and employed, we’re complicit in widespread unemployment, mounting economic and political inequality, and the general moral problem of the wrong people winning.

I don’t spend much time solving abstract puzzles, at least not in comparison to the amount of time I spend doing unpaid emotional labor.

Personally, I care more about solving real-world problems and making people’s lives better than I do about “abstract puzzles”. It’s fun to learn about category theory, but what makes Haskell exciting is that its core ideas actually work at making quickly developed code robust beyond what is possible (within the same timeframe; JPL-style C is a different beast) in other languages. I don’t find much use in abstract puzzles for their own sake. That said, the complaint about “unpaid emotional labor” resonates with me, though I might use the term “uncompensated emotional load”. If you work in an open-plan office, you’re easily losing 10-15 hours of your supposedly free time just recovering from the pointless stress inflicted by a bad work environment. I wouldn’t call it an emotional “labor”, though. Labor implies conscious awareness. Recovering from emotional load is draining, but it’s not a conscious activity.

But the tech industry is wired with structural incentives to stay broken. Broken people work 80-hour weeks because we think we’ll get approval and validation for our technical abilities that way. Broken people burn out trying to prove ourselves as hackers because we don’t believe anyone will ever love us for who we are rather than our merit.

He has some strong points here: the venture-funded tech industry is designed to give a halfway-house environment for emotionally stunted (I wouldn’t use the word “broken”, because immaturity is very much fixable) fresh college grads. That said, he’s losing me on any expectation of “love” at the workplace. I don’t want to be “loved” by my colleagues. I want to be respected. And respect has to be earned (ideally, based on merit). If he wants unconditional love, he’s not going to find that in any job under the sun; he should get a dog, or a cat. That particular one isn’t the tech industry’s fault.

Broken people believe pretty lies like “meritocracy” and “show me the code” because it’s easier than confronting difficult truths; it’s as easy as it is because the tech industry is structured around denial.

Meritocracy is a useless word and I think that it’s time for it to die, because even the most corrupt work cultures are going to present themselves as meritocracies. The claim of meritocracy is disgustingly self-serving for the people at the top.

“Show me the code” (or data) can be irksome, because there are insights for which coming up with data is next to impossible, but that any experienced person would share. That said, data- (or code-)driven decision making is better than running on hunches, or based on whoever has the most political clout. What I can’t stand is when I have to provide proof but someone else doesn’t. Or when someone decides that every opinion other than his is “being political” while his is self-evident truth. Or when someone in authority demands more data or code before making a ruling, then goes on to punish you for getting less done on your assigned work (because he really doesn’t want you to prove him wrong). Now those are some shitty behaviors.

I generally agree that not all disputes can be resolved with code or data, because some cases require a human touch and experience; that said, there are many decisions that should be handled in exactly that way: quantitatively. What irks me is not a principled insistence on data-driven decisions, but when people with power acquire the right to make everyone else provide data (which may be impossible to come by) while remaining unaccountable, themselves, to do the same. And many of the macho jerks who overplay the “show me the code” card when code or data are too costly to acquire (because they’ve acquired undeserved political power) are doing just that.

A culture that considers “too sensitive” an insult is a culture that eats its young. Similarly, it’s popular in tech to decry “drama” when no one is ever sure what the consensus is on this word’s meaning, but as far as I can tell it means other people expressing feelings that you would prefer they stay silent about.

I dislike this behavior pattern. I wouldn’t use the word “drama” so much as “political”. Politically powerful bad actors are remarkably good at creating a consensus that their political behaviors are apolitical and “meritocratic”, whereas people who disagree with or oppose them are “playing politics” and “stirring up drama”. False objectivity is more dangerous than admitted subjectivity. The first suits liars; the second suits people who have the courage to admit that they are fallible and human.

Personally, I tend to disclose my biases. I can be very political. While I don’t value emotional drama for its own sake, I dislike those who discount emotion. Emotions are important. We all have them, and they carry information. It’s up to us to decide what to do with that information, and how far we should listen to emotions, because they’re not always wise in what they tell us to do. There is, however, nothing wrong with having strong emotions. It’s when people are impulsive, arrogant, and narcissistic enough to let their emotions trample on other people that there is a problem.

Consequently, attempting to shut one’s opponent down by accusing him of being “emotional” is a tactic I’d call dirty, and it should be banned. We’re humans. We have emotions. We also have the ability (most of the time) to keep them in their place.

“Suck it up and deal” is an assertion of dominance that disregards the emotional labor needed to tolerate oppression. It’s also a reflection of the culture of narcissism in tech that values grandstanding and credit-taking over listening and empathizing.

This is very true. “Suck it up and deal” is also dishonest in the same way that false objectivity and meritocracy are. The person saying it is implicitly suggesting that she suffered similar travails in the past. At the same time, it’s a brush-off that indicates that the other person is of too low status for it to be worthwhile to assess why the person is complaining. It says, “I’ve had worse” followed by “well, I don’t actually know that, because you’re too low on the food chain for me to actually care what you’re going through.” It may still be abrasive to say “I don’t care”, but at least it’s honest.

Oddly enough, most people who have truly suffered fight hard to prevent others from having similar experiences. I’ve dealt with a lot of shit coming up in the tech world, and the last thing I would do is inflict it on someone else, because I know just how discouraging this game can be.

if you had a good early life, you wouldn’t be in tech in the first place.

I don’t buy this one. Some people are passionate about software quality, or about human issues that can be solved by technology. Not everyone who’s in this game is broken.

There certainly are a lot of damaged people working in private-sector tech, and the culture of the VC-funded world attracts broken people. What’s being said here is probably 80 or 90 percent true, but there are a lot of people in technology (especially outside of the VC-funded private sector tech that’s getting all the attention right now) who don’t seem more ill-adjusted than anyone else.

I do think that the Damaso Effect requires mention. On the business side of tech (which we report into) there are a lot of people who really don’t want to be there. Venture capital is a sub-sector of private equity and considered disreputable within that crowd: it’s a sideshow to them. Their mentality is that winners work on billion-dollar private equity deals in New York and losers go to California and boss nerds around. And for a Harvard MBA to end up as a tech executive (not even an investor!) is downright embarrassing. So that Columbia MBA who’s a VP of HR at an 80-person ed-tech startup is not exactly going to be attending reunions. This explains the malaise that programmers often face as they get older: we rise through the ranks and see that, if not immediately, we eventually report up into a set of people who really don’t want to be here. They view being in tech as a mark of failure, like being relegated to a colonial outpost. They were supposed to be MDs at Goldman Sachs, not pitching business plans to clueless VCs and trying to run a one-product company on a shoestring budget (relative to the level of risk and ambition that it takes to keep investors interested).

That said, there are plenty of programmers who do want to be here. They’re usually older and quite capable and they don’t want to be investors or executives, though they often could get invited to those ranks if they wished. They just love solving hard problems. I’ve met such people; many, in fact. This is a fundamental reason why the technology industry ought to be run by technologists and not businessmen. The management failed into it and would jump back into MBA-Land Proper if the option were extended, and they’re here because they’re the second or third tier that got stuck in tech; but the programmers in tech actually, in many cases, like being here and value what technology can do.

Failure to listen, failure to document, and failure to mentor. Toxic individualism — the attitude that a person is solely responsible for their own success, and if they find your code hard to understand, it’s their fault — is tightly woven through the fabric of tech.

This is spot-on, and it’s a terrible fact. It holds the industry back. We have a strong belief in progress when it comes to improving tools, adding features to a code base, and acquiring more data. Yet the human behaviors that enable progress, we tend to undervalue.

But in tech, the failures are self-reinforcing because failure often has no material consequences (especially in venture-capital-funded startups) and because the status quo is so profitable — for the people already on the inside — that the desire to maintain it exceeds the desire to work better together.

This is an interesting observation, and quite true. The upside goes mostly to the well-connected. Most of the Sand Hill Road game is about taking strategies (e.g. insider trading, market manipulation) that would be illegal on public markets and applying them to microcap private equities over which there are fewer rules. The downside is borne by the programmers, who suffer extreme costs of living and a culture of age discrimination on a promise of riches that will usually never come. As of now, the Valley has been booming for so long that many people have forgotten that crashes and actual career-rupturing failures even exist. In the future… who knows?

As for venture capital, it delivers private prosperity, but its returns to passive investors (e.g. the ones whose money is being invested, as opposed to the VCs collecting management fees) are dreadful. This industry is not succeeding, except according to the needs of the well-connected few. What’s happening is not “so profitable” at all. It’s not actually very successful. It’s just well-marketed, and “sexy”, to people under 30 who haven’t figured out what they want to do with their lives.

I remember being pleasantly amazed at hearing that kind of communication from anybody in a corporate conference room, although it was a bit less nice when the CTO literally replied with, “I don’t care about hurt feelings. This is a startup.”

That one landed. I have seen so many startup executives and founders justify bad behavior with “this is a startup” or “we’re running lean”. It’s disgusting. It’s the False Poverty Effect: people who consider themselves poor based on peer comparison will tend to believe themselves entitled to behave badly or harm others because they feel like it’s necessary in order to catch up, or that their behavior doesn’t matter because they’re powerless compared to where they should be. It usually comes with a bit of self-righteousness, as well: “I’m suffering (by only taking a $250k salary) for my belief in this company.” The false-poverty behavior is common in startup executives, because (as I already discussed) they’d much rather be elsewhere– executives in much larger companies, or in private equity.

I am neither proud of nor sorry for any of these lapses, because ultimately it’s capitalism’s responsibility to make me produce for it, and within the scope of my career, capitalism failed. I don’t pity the ownership of any of my former employers for not having been able to squeeze more value out of me, because that’s on them.

I have nothing to say other than that I loved this. Ultimately, corporate capitalism fails to be properly capitalistic because of its command-economy emphasis on subordination. When people are treated as subordinates, they slack and fade. This hurts the capitalist more than anyone else.

Answering the question

I provided commentary on Tim Chevalier’s post because not only is he taking on the tech industry, but he’s giving proof to his objection by leaving it. Tech has a broken culture, but it’s not enough to issue vague complaints as many do. It’s not just about sexism and classism and Agile and Java Shop politics in isolation. It’s about all of that shit, taken together. It’s about the fact that we have a shitty hand-me-down culture from those who failed out of the business mainstream (“MBA Culture”) and ended up acquiring its worst traits (e.g. sexism, ageism, anti-intellectualism). It’s about the fact that we have this incredible skill in being able to program, and yet 99 percent of our work is reduced to a total fucking joke because the wrong people are in charge. If we care about the future at all, we have to fight this.

Fixing one problem in isolation, I’ll note, will do no good. This is why I can’t stand that “lean in” nonsense that is sold to unimaginative women who want some corporate executive to solve their problems. You cannot defeat the systemic problems that disproportionately harm women, and maintain the status quo at the same time. You can’t take an unfair, abusive system designed to concentrate power and “fix” it so that it is more fair in one specific way, but otherwise operates under the same rules. You can’t have a world where it is career suicide to take a year off of work for any reason except to have a baby. If you maintain that cosmetic obsession with recency, you will hurt women who wish to have children. You have to pick: either accept the sexism and ageism and anti-intellectualism and the crushing mediocrity of what is produced… or overthrow the status quo and change a bunch of things at the same time. I know which one I would vote for.

Technology is special in two ways, and both of these are good news, at least insofar as they bear on what is possible if we get our act together. The first is that it’s flamingly obvious that the wrong people are calling the shots. Look at many of the established tech giants. In spite of having some of the best software engineers in the world, many of these places use stack ranking. Why? They have an attitude that software engineering is “smart people work” and that everything else– product management, people management, HR– is “stupid people work” and this becomes a self-fulfilling prophecy. You get some of the best C++ engineers in the world, but you get stupid shit like stack ranking and “OKRs” and “the 18-month rule” from your management.

It would be a worse situation to have important shots called by idiots and not have sufficient talent within our ranks to replace them. But we do have it. We can push them aside, and take back our industry, if we learn how to work together rather than against each other.

The second thing to observe about technology is that it’s so powerful as to admit a high degree of mismanagement. If we were a low-margin business, Scrum would kill rather than merely retard companies. Put simply, successful applications of technology generate more wealth than anyone knows what to do with. This could be disbursed to employees, but that’s rare: for most people in startups, their equity slices are a sad joke. Some of it will be remitted to investors and to management. A great deal of that surplus, however, is spent on management slack: tolerating mismanagement at levels that would be untenable in an industry with a lower margin. For example, stack-ranking fell out of favor after it caused the calamitous meltdown of Enron, and “Agile”/”Scrum” is a resurrection of Taylorist pseudo-science that was debunked decades ago. Management approaches that don’t work, as their proponents desperately scramble for a place to park them, end up in tech. This leaves our industry, as a whole, running below quarter speed and still profitable. Just fucking imagine how much there would be to go around, if the right people were calling the shots.

In light of the untapped economic potential that would accrue to the world if the tech industry were better run, and had a better culture, it seems obvious that technology can fix the culture. That said, it won’t be easy. We’ve been under colonial rule (by the business mainstream) for a long time. Fixing this game, and eradicating the bad behaviors that we’ve inherited from our colonizing culture (which is more sexist, anti-progressive, anti-intellectual, classist and ageist than any of our natural tendencies) will not happen overnight. We’ve let ourselves be defined, from above, as arrogant and socially inept and narcissistic, and therefore incapable of running our own affairs. That, however, doesn’t reflect what we really are, nor what we can be.

The Sturgeon Filter: the cognitive mismatch between technologists and executives

There’s a rather negative saying, originally applied to science fiction, known as Sturgeon’s Law: “ninety percent of everything is crap”. Stated so generally, I don’t think that it’s true or valuable. There are plenty of places where reliability can be achieved and things “just work”. If ninety percent of computers malfunctioned, the manufacturer would be out of business, so I don’t intend to apply the statement to everything. Still, there’s enough truth in the saying that people keep using it, even applying it far beyond what Theodore Sturgeon actually intended. How far is it true? And what does it mean for us in our working lives?

Let’s agree to take “ninety percent” to be a colloquial representation of “most, and it’s not close”; typically, between 75 and 99 percent. What about “is crap”? Is it fair to say that most creative works are crap? I wouldn’t even know where to begin on that one. Certainly, I only deign to publish about a quarter of the blog posts that I write, and I think that that’s a typical ratio for a writer, because I know far too well how often an appealing idea fails when taken into the real world. I think that most of the blog posts that I actually release are good, but a fair chunk of my writing is crap that, so long as I’m good at self-criticism, will never see daylight.

I can quote a Sturgeon-like principle with more confidence, in such a way that preserves its essence but is hard to debate: the vast majority (90 percent? more?) of mutations are of negative value and, if implemented, will be harmful. This concept of “mutation” covers new creative work as well as maintenance and refinement. To refine something is to mutate it, while new creation is still a mutation of the world in which it lives. And I think that my observation is true: a few mutations are great, but most are harmful or, at least, add complexity and disorder (entropy). In any novel or essay, changing a word at random will probably degrade its quality. Most “house variants” of popular games are not as playable as the original game, or are not justified by the increased complexity load. To mutate is, usually, to inflict damage. Two things save us and allow progress. One is that the beneficial mutations often pay for the failures, allowing macroscopic (if uneven) progress. The second is that we can often audit mutations and reverse a good number of those that turn out to be bad. Version control, for programmers, enables us to roll back mutations that are proven to be undesirable.
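To put a toy number on that intuition– this is my own sketch, not anything from biology or from a real codebase– one can mutate an artifact that is already “right” and count how often the mutation degrades it. The example text, the seed, and the 1,000-trial count below are all arbitrary assumptions:

```python
import random
import string

def mutate(text):
    """Replace one randomly chosen character with a random lowercase letter."""
    i = random.randrange(len(text))
    return text[:i] + random.choice(string.ascii_lowercase) + text[i + 1:]

random.seed(0)  # reproducible run
original = "most random mutations make things worse"
trials = 1000

# Treat the original as already "correct": any change to it is a degradation.
harmful = sum(mutate(original) != original for _ in range(trials))
print(f"{harmful / trials:.0%} of random point mutations degraded the text")
```

Since a random lowercase letter restores the original character only about one time in 26, the harmful fraction lands squarely in Sturgeon territory– which is exactly why the ability to audit and roll back mutations (version control) matters so much.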

The Sturgeon Mismatch

Programmers experience the negative effects of random mutations all the time. We call them “bugs”, and they range from mild embarrassments to catastrophic failures, but very rarely is a discrepancy between what the programmer expects of the program, and what it actually does, desirable. Of course, intended mutations have a better success rate than truly “random” ones would, but even in those, there is a level of ambition at which the likelihood of degradation is high. I know very little about the Linux kernel and if I tried to hack it, my first commits would probably be rejected, and that’s a good thing. It’s only the ability to self-audit that allows the individual programmer, on average, to improve the world while mutating it. It can also help to have unit tests and, if available for the language, a compiler and a strong type system; those are a way to automate at least some of this self-censoring.

I’m a reasonably experienced programmer at this point, and I’m a good one, and I still generate phenomenally stupid bugs. Who doesn’t? Almost all bugs are stupid– tiny, random injections of entropy emerging from human error– which is why the claim (for example) that “static typing only catches ‘stupid’ bugs” is infuriating. What makes me a good programmer is that I know what tools and processes to use in order to catch them, and this allows me to take on ambitious projects with a high degree of confidence in the code I’ll be able to write. I still generate bugs and, occasionally, I’ll even come up with a bad idea. I’m also very good at catching myself and fixing mistakes quickly. I’m going to call this selective self-censoring– the mechanism that keeps the crappy 90 percent of one’s raw output from ever shipping– the Sturgeon Filter.
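To make the Sturgeon Filter concrete– a hypothetical sketch, not code from any real project– it can be as humble as a few assertions that run before anything ships. The `median` function and its checks below are invented for illustration:

```python
def median(xs):
    """Return the median of a non-empty list of numbers."""
    s = sorted(xs)
    n = len(s)
    if n % 2 == 1:
        return s[n // 2]
    return (s[n // 2 - 1] + s[n // 2]) / 2

# The filter itself: tiny, automated checks that bury the "stupid" bugs
# (off-by-one indexing, forgetting the even-length case) before release.
assert median([3, 1, 2]) == 2
assert median([4, 1, 3, 2]) == 2.5
assert median([7]) == 7
```

The value isn’t in the function; it’s in the checks, which catch exactly the class of small, entropic mistakes that even good programmers routinely make.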

With a strong Sturgeon Filter, you can export the good mutations and bury the bad ones. This is how reliability (whether in an artistic masterpiece or in a correct, efficient program) can be achieved by unreliable creatures such as humans. I’d further argue that to be a competent programmer requires a strong Sturgeon Filter. The good news is that this filter is built up fairly quickly by tools that give objective feedback: compilers and computers that follow instructions literally, and malfunction at the slightest mistake. As programmers, we’re used to having our subordinates (compilers) tell us, “Fix your shit or I’m not doing anything.”

It’s no secret that most programmers dislike management, and have a generally negative view of the executives and “product managers” running most of the companies that employ them. This is because programmers pride themselves on having almost impermeable Sturgeon Filters, while lifelong managers have nonexistent Sturgeon Filters. They simply don’t get the direct, immediate feedback that would train them to recognize and reject their own bad ideas. That’s not because they’re stupider than we are. I don’t actually think that they are. I think that their jobs never build up the sense of fallibility that programmers know well.

Our subordinates, when given nonsensical instructions, give blunt, tactless feedback– and half the time they’re just pointing out spelling errors that any human would just ignore! Managers’ subordinates, however, are constantly telling them that they’re awesome, and will often silently clean up their mistakes. Carry this difference in experience out over 20 years or more, and you get different cultures and different attitudes. You get 45-year-old programmers who, while extraordinarily skillful, are often deeply convinced of their own fallibility; and you get 45-year-old executives who’ve never really failed or suffered at work, because even when they were bad at their jobs, they had armies of people ready to manage their images and ensure that, even in the worst case scenario where they lost jobs, they’d “fail up” into a senior position in another company.

Both sides now

Programmers and managers both mutate things; it’s the job. Programmers extend and alter the functionality of machines, while managers change the way people work. In both cases, the effects of a random mutation, or even an unwise intended one, are negative. Mutation for its own sake is undesirable.

For example, scheduling a meeting without a purpose is going to waste time and hurt morale. Hiring bad people and firing good ones will have massive repercussions. To manage at random (i.e. without a Sturgeon Filter) is almost as bad as to program at random. Only a small percentage of the changes that managers propose to the way people work are actually beneficial. Most status pings or meetings serve no value except to allay the manager’s creeping sense that he isn’t “doing enough”, most processes that exist for executive benefit or “visibility” are harmful, and a good 90 to 99 percent of the time, the people doing the work have better ideas about how they should do it than the executives shouting orders. Managers, in most companies, interrupt and meddle on a daily basis, and it’s usually to the detriment of the work being produced. Jason Fried covers this in his talk, “Why work doesn’t happen at work”. As he says, “the real problems are … the M&Ms: the managers and the meetings”. Managers are often the last people to recognize the virtue of laziness: that constantly working (i.e. telling people what to do) is a sign of distress, while having little to do generally means that they’re doing their jobs well.

In the past, there was a Sturgeon Filter imposed by time and benign noncompliance. Managers gave bad orders just as often as they do now, but there was a garbage-collection mechanism in place. People followed the important orders, which were usually congruent already with common sense and basic safety, but when they were given useless orders or pointless rules to follow, they’d make a show of following the new rules for a month or two, then discard them when they failed to show any benefit or improvement. Many managers, I would imagine, preferred this, because it allowed them to have the failed change silently rejected without having any attention drawn to their mistake. In fact, a common mode of sub-strike resistance used by organized labor is “the rule-follow”, a variety of slowdown in which rules are followed to the letter, resulting in low productivity. Discarding senseless rules (while following the important, safety-critical ones) is a necessary behavior of everyone who works in an organization; a person who interprets all orders literally is likely to perform at an unacceptably low level.

In the past, the passage of time lent plausible deniability to a person choosing to toss out silly policies that would quite possibly be forgotten or regretted by the person who issued them. An employee could defensibly say that he followed the rule for three months, realized that it wasn’t helping anything and that no one seemed to care, and eventually just forgot about it or, better yet, interpreted a new order to supersede the old one. This also imposed a check on managers, who’d embarrass themselves by enforcing a stupid rule. Since no one has precise recall of a months-old conversation of low general importance, the mists of time imposed a Sturgeon Filter on errant management. Stupid rules faded and important ones (like, “Don’t take a nap in the baler”) remained.

One negative side effect of technology is that it has removed that Sturgeon Filter from existence. Too much is put in writing, and persists forever, and the plausible deniability of a worker who (in the interest of getting more done, not in slacking) disobeys it has been reduced substantially. In the past, an intrepid worker could protest a status meeting by “forgetting” to attend it on occasion, or by claiming he’d heard “a murmur” that it was discontinued, or even (if he really wanted to make a point) by taking colleagues out for lunch at a spot not known for speedy service, so that an impersonal force just happened to make half the team late for it. While few workers actually did such things on a regular basis (to make it obvious would get a person just as fired then as today), the fact that they might do so imposed a certain back-pressure on runaway management that doesn’t exist anymore. In 2015, there’s no excuse for missing a meeting when “it’s on your fucking Outlook calendar!”

Technology and persistence have evolved, but management hasn’t. Programmers have looked at their job of “messing with” (or, to use the word above, mutating) computers and software systems and spent 70 years coming up with new ways to compensate for the unreliable nature that comes from our being humans. Consequently, we can build systems that are extremely robust in spite of having been fueled by an unreliable input (human effort). We’ve changed the computers, the types of code that we can write, and the tools we use to do it. Management, on the other hand, is still the same game that it always has been. Many scenes from Mad Men could be set in a 2010s tech company, and the scripts would still fit. The only major change would be in the costumes.

To see the effects of runaway management, combined with the persistence allowed by technology, look no further than the Augean pile of shit that has been given the name of “Agile” or “Scrum”. These are neo-Taylorist ideas that most of industry has rejected, repackaged using macho athletic terminology (“Scrum” is a rugby term). Somehow, these discarded, awful ideas find homes in software engineering. This is a recurring theme. Welch-style stack ranking turned out to be a disaster, as finally proven by its thorough destruction of Enron, but it lives on in the technology industry: Microsoft used it until recently, while Google and Amazon still do. Why is this? What has made technology such an elephant graveyard for disproven management theories and bad ideas in general?

A squandered surplus

The answer is, first, a bit of good news: technology is very powerful. It’s so powerful that it generates a massive surplus, and the work is often engaging enough that the people doing it fail to capture most of the value they produce, because they’re more interested in doing the work than getting the largest possible share of the reward. Because so much value is generated, they’re able to have an upper-middle-class income– and upper-working-class social status– in spite of their shockingly low value-capture ratio.

There used to be an honorable, progressive reason why programmers and scientists had “only” upper-middle-class incomes: the surplus was being reinvested into further research. Unfortunately, that’s no longer the case: short-term thinking, a culture of aggressive self-interest, and mindless cost-cutting have been the norm since the Reagan Era. At this point, the surplus accrues to a tiny set of well-connected people, mostly in the Bay Area: venture capitalists and serial tech executives paying themselves massive salaries that come out of other people’s hard work. However, a great deal of this surplus is spent not on executive-level (and investor-level) pay but on another, related sink: executive slack. Simply put, the industry tolerates a great deal of mismanagement simply because it can do so and still be profitable. That’s where “Agile” and Scrum come from. Technology companies don’t succeed because of that heaping pile of horseshit, but in spite of it. It takes about five years for Scrum to kill a tech company, whereas in a low-margin business it would kill the thing almost instantly.

Where this all goes

Programmers and executives are fundamentally different in how they see the world, and the difference in Sturgeon Filters is key to understanding why it is so. People who are never told that they are wrong will begin to believe that they’re never wrong. People who are constantly told that they’re wrong (because they made objective errors in a difficult formal language) and forced to keep working until they get it right, on the other hand, gain an appreciation for their own fallibility. This results in a cultural clash between two sets of people who could not be more different.

To be a programmer in business is painful because of this mismatch: your subordinates live in a world of formal logic and deterministic computation, and your superiors live in the human world, which is one of psychological manipulation, emotion, and social-proof arbitrage. I’ve often noted that programming interviews are tough not because of the technical questions, but because there is often a mix of technical questions and “fit” questions in them, and while neither category is terribly hard on its own, the combination can be deadly. Technical questions are often about getting the right answer: the objective truth. For a contrast, “fit” questions like “What would you do if given a deadline that you found unreasonable?” demand a plausible and socially attractive lie. (“I would be a team player.”) Getting the right answer is an important skill, and telling a socially convenient lie is also an important skill… but having to context-switch between them at a moment’s notice is, for anyone, going to be difficult.

In the long term, however, this cultural divergence seems almost destined to subordinate software engineers, inappropriately, to business people. A good software engineer is aware of all the ways in which he might be wrong, whereas being an executive is all about being so thoroughly convinced that one is right that others cannot even conceive of disagreement– the “reality distortion field”. The former job requires building an airtight Sturgeon Filter so that crap almost never gets through; the latter mandates tearing down one’s Sturgeon Filter and proclaiming loudly that one’s own crap doesn’t stink.

Sand Hill Road announces Diversity Outreach Program

I’m pleased to announce that I’ve succeeded in coordinating a number of Sand Hill Road’s most prestigious venture capital firms, including First Round Capital, Sequoia, and Kleiner Perkins, to form the first-ever Venture Capital Diversity Outreach Program. I could not have done this alone (obviously) and I want to thank all of my bros (and fembros) in Menlo Park and Palo Alto for making this happen.

In response to negative press surrounding this storied industry and its supposed culture of sexism (which we deny), we held a round-table discussion last weekend on Lake Tahoe, on the topic of rehabilitating our industry’s image. We’re hurt by the perception that we have a sexist, exclusionary, “frat boy” culture, so we decided to form a program to prove that we aren’t sexist. It was easy to agree on the first step: start funding and promoting women.

This idea, though brilliant and revolutionary, raised a difficult question: which women? Based on our back-of-the-envelope calculations, we estimated that there are between 3 and 4 billion women in the world! We had to narrow the pool. One venture capitalist who, unfortunately, declined to be named, said, “we need to fund young women.” Echoing Mark Zuckerberg, he said, “Young people are just smarter.” And so it was agreed that we will be funding 25 women under 23. Each will receive $25,000 worth of seed-round funding in exchange for a mere 15% of the business, along with unlimited one-on-one mentorship opportunities from the Valley’s leading venture capitalists.

We’re extending this opportunity outside of Northern California. In fact, you can apply from anywhere in the world. All pitches must be in video form, each lasting no more than 4 minutes. Which VC you should submit your pitch to depends on where you are applying from. Submissions from Eastern Europe will go to one VC for appraisal, Latin American submissions will go to another, and submissions from Asia to another. We have to match the judges with their specialties, you see. Don’t worry. We’ll have this all sorted out by this evening. Full-body pitch videos only. Face-only submissions will be rejected.

Based on the all-important and objective metric of cultural fit, the submissions will be stack-ranked and the winners will be notified within three days. We recommend that the winners, upon receiving funding, drop out of college to pursue this program. College may be necessary for middle-class people who want to become dentists, and it’s good for propagating Marxist mumbo-jumbo, but it’s so unnecessary if you have all of Sand Hill Road on your side. Which you will, if you’re a woman who wins this contest. Until you’re 28 or so, but that’s forever away and you’ll be a billionaire by then. We find that, in the magical land of California, it’s best not to think about the future or about risk. Future’s so bright, you gotta wear shades!

On a side note, being a venture capitalist is freaking awesome. No, the job doesn’t involve snorting blow off a hooker’s breasts– that actually stopped about 10 years ago, some HR thing. But nothing quite beats the thrill of playing football, in the midst of adiabatic para-skiing, while playing Halo on Google Glass!

Keep the Faith, Don’t Hate, and, above all, Lipra Solof.

Never Invent Here: the even-worse sibling of “Not Invented Here”

“Not Invented Here”, or “NIH syndrome”, refers to organizations’ tendency, taken to an illogical extreme, to undervalue external or third-party technical assets, even when they are free and easily available. The NIH archetype is the enterprise architect who throws person-decade after person-decade into reinventing solutions that exist elsewhere, maintaining a divergent “walled garden” of technology that has no future except by executive force. No doubt, that’s bad. I’m sure that it exists in rich, insular organizations, but I almost never see it in organizations with under a thousand employees. Too often in software, however, I see the opposite extreme: a mentality that I call “Never Invent Here” (NeIH). With that mentality, external assets are overvalued and often implicitly trusted, leaving engineers to spend more time adapting to the quirks of off-the-shelf assets, and less time building assets of their own.

Often, the never-invent-here mentality is couched in other terms, such as business-driven engineering or “Agile” software production. Let’s be honest about this faddish “Agile” nonsense: if engineers are micromanaged to the point of having to justify weeks or even days of their own working time, not a damn thing is ever going to be invented, because no engineer can afford to take the risk; they’re mired in user stories and backlog grooming. The core attitude underlying “Agile” and NeIH is that anything that takes more than some insultingly small amount of time (say, 2 weeks) to build should not be trusted to in-house employees. Rather than building technical assets, programmers spend most of their time in the purgatory of evaluating assets with throwaway benchmarking code and in writing “glue code” to make those third-party assets work together. The rewarding part of the programmer’s job is written off as “too hard”, while programmers are held responsible for the less rewarding part of the job: gluing the pieces together in order to meet parochial business requirements. Under such a regime, there is little room for progress or development of skills, since engineers are often left to deal with the quirks of unproven “bleeding edge” technologies rather than either (a) studying the work of the masters, or (b) building their own larger works and having a chance to learn from their own mistakes.

Never-invented-here engineering can be either positive or negative for an engineer’s career, depending on where she wants to go, but I tend to view its effects as negative for more senior talent. To the good, it assists in buzzword bingo. She can add Spring and Hibernate and Maven and Lucene to her CV, and other employers will recognize those technologies by name, and that might help her get in the door. To the bad, it makes it hard for engineers to progress beyond the feature-level stage, because meatier projects just aren’t done in most organizations when it’s seen as tenable for non-coding architects and managers to pull down off-the-shelf solutions and expect the engineers to “make the thingy work with the other thingy”.

Software engineers don’t mind writing some glue code, because even the best jobs involve grunt work, but no one wants to be stuck doing only that. While professional managers often ignore the fact, engineers can be just as ambitious as they are; the difference is that their ambition is focused on project scope and impact rather than organizational ascent or number of people managed. Entry-level engineers are satisfied to fix bugs and add small features– for a year or two. Around 2 years in, they want to be working on (and suggesting) major features and moving to the project level. At 5 years, they’re ready for bigger projects, initiatives, infrastructure, and to lead multi-engineer projects. And so on. Non-technical managers may ignore this, preferring to institute the permanent juniority of “Agile”, but they do so at their peril.

One place where this is especially heinous is in corporate “data science”. It seems like 90 percent (possibly more) of professional “data scientists” aren’t really being asked to develop or implement new algorithms, but are stuck in a role that has them answering short-term business needs, banging together off-the-shelf software, and getting mired in operations rather than fundamental research. Of course, if that’s all that a company really needs, then it probably doesn’t make sense for it to invest in the more interesting stuff, and in that case… it probably doesn’t need a true data scientist. I don’t intend to say that data cleaning and glue code are “bad” because they’re a necessary part of every job. They don’t require a machine learning expert, is all.

People ask me why I dislike the Java culture, and I’ve written much about that, but I think that one of Java’s worst features is that it enables the never-invent-here attitude of the exact type of risk-averse businessman who makes the typical corporate programmer’s job so goddamn depressing. In Java, there’s arguably a solution out there that sorta-kinda matches any business problem. Not all the libraries are good, but there are a lot of them. Some of those Java solutions work very well, others do not, and it’s hard to know the difference (except through experience) because the language is so verbose and the code quality so low (in general; again, this is cultural rather than intrinsic to the language) that actually reading the code is a non-starter. Even if an engineer wanted to read the code and figure out what was going on, the business would never budget the time. Still, off-the-shelf solutions are trusted implicitly until they fail (either breaking, or being ill-suited to the needs of the business). Usually, that doesn’t happen for quite a while, because most off-the-shelf, open-source solutions are of decent quality when it comes to common problems, and far better than what would be written under the timelines demanded by most businesses, even in “technology” companies. The problem is that, a year or two down the road, those off-the-shelf products often aren’t enough to meet every need. What happens then?

I wrote an essay last year entitled, “If you stop promoting from within, soon you can’t.” Companies tend to have a default mode of promotion. Some promote from within, and others tend to hire externally for the top jobs, and people tend to figure out which mode is in play within a year or so. In technology, the latter is more common, for three reasons. The first is the cultural prominence of venture capital. VCs often inject their buddies, regardless of merit, at high levels in companies they fund, whether or not the founders want them there. The second is the rapid scramble for headcount accumulation that exists in, and around, the VC-funded world. This requires companies to sell themselves very hard to new hires, which means that the best jobs and projects are often used to entice new people into joining rather than handed down to those already on board. The third is the tendency of software to be extremely political: for all of our beliefs about “meritocracy”, the truth is that an individual’s performance is extremely context-dependent, and we, as programmers, tend to spend a lot of time arguing for technologies and practices that’ll put us, individually, high in the rankings. Even on a team of programmers with the same skill and natural ability, there will usually be one “blazer” and N-1 who keep up with the blazer’s changes, and no self-respecting programmer is going to let himself stay in the “keep-up-with” category for longer than a month. At any rate, once a company develops the internal reputation of not promoting internally, it starts to lose its best people. Soon, it reaches a point where it has to hire externally for the best jobs, because everyone who would have been qualified is already gone, pushed out by the lack of advancement. While many programmers don’t seek promotion in the sense of ascent in a management hierarchy, they do want to work on bigger and more interesting projects with time.
In a never-invent-here culture that just expects programmers to work on “user stories”, the programmers who are capable of more are often the first ones to leave.

Thus, if most of what a company has been doing has been glue code and engineers are not trusted to run whole projects, then by the time the company’s needs have out-scaled the off-the-shelf product, the talent level will have fallen to the point that it cannot resolve the situation in-house. It will either have to find “scaling experts” at a rate of $400 per hour to solve future problems, or live with declining software quality and functionality.

Of course, I am not saying, “don’t use off-the-shelf software”. In fact, I’d say that while programmers ought to be able to spend the majority of their time writing assets instead of adapting to pre-existing ones, it is still very often best to use an existing solution if one will suffice. Unless you’re going to be a database company, you shouldn’t be rolling your own alternative to Postgres; you should use what is already there. I’d make a similar argument with programming languages: there are enough good ones already in existence that expecting employees to contend with an in-house programming language that probably won’t be very good is a bad idea. In general, something that is necessary but outside the core competency of the engineers should be found externally, if possible. If you’re a one-product company that needs minimal search, there are great off-the-shelf products that will deliver that. On the other hand, if you’re calling your statistically-literate engineers “data scientists” and they want to write some machine learning algorithms instead of trying to make Mahout work for their problem, you should let them.

With core infrastructure (e.g. Unix, C, Haskell) I’d agree that it’s best to use existing, high-quality solutions. I also support going off-the-shelf with the relatively small problems: e.g. a CSV parser. If there’s a bug-free CSV parser out there, there’s no good reason to write one in-house. The mid-range is where off-the-shelf solutions are often inferior– and, often, in subtle ways (such as tying a large piece of software architecture to the JVM, or requiring expensive computation to deal with a wonky binary protocol)– to competently-written in-house solutions. Why is this? For the deep, core infrastructure, there is a wealth of standards that already exists, and there are high-quality implementations to meet them. Competing against existing assets is probably a wasted effort. On the other hand, for the small problems like CSV parsing, there isn’t much meaningful variability in what a user can want. Typically, the programmer just wants the problem to be solved so she can forget about it. The mid-range of problem size is tricky, though, because there’s enough complexity that off-the-shelf solutions aren’t likely to deliver everything one wants, but not quite enough demand for nearly-unassailable standard implementations to exist in the open-source world. Let’s take linear regression. This might seem like a simple problem, but there are a lot of variables and complexities, such as: handling of large categorical variables, handling of missing data, regularization, highly-correlated inputs, optimization methods, whether to use early stopping, basis expansions, and choice of loss function. For a linear regression problem in 10,000 dimensions with 1 million data points, standards don’t exist yet. This problem isn’t a core infrastructural problem like building an operating system, but it’s hard enough that off-the-shelf solutions can’t be blindly relied upon to work.
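To make the linear-regression point concrete, here’s a minimal sketch (my own illustration, not anyone’s production code) of closed-form ridge regression in NumPy. Even this tiny solver silently hard-codes several of the choices listed above: squared loss, one particular regularizer, no intercept handling, and no treatment of missing data or categorical inputs:

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: w = (X'X + lam*I)^{-1} X'y.

    Every line embeds a design decision: the penalty strength `lam`,
    the squared loss implied by the normal equations, and a direct
    solve that assumes the feature count is small enough for O(d^3).
    """
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y)

# Sanity check on synthetic, noiseless data: with a tiny penalty,
# the fit should recover the true weights almost exactly.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true
w_hat = ridge_fit(X, y, lam=1e-6)
```

At the 10,000-dimension, million-row scale mentioned above, even the linear-algebra strategy becomes a nontrivial choice: a direct solve is cubic in the number of features, so one would likely switch to an iterative method, and that is exactly the kind of variability an off-the-shelf package may or may not cover.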

This “mid-range” of problem is where programmers are expected to establish themselves, and it’s often where there’s a lot of pressure to use third-party products, regardless of whether they’re appropriate to the job. At this level, there’s enough variability in expectations and problem type that beating an off-the-shelf solution into conforming to the business need is just as hard as writing it from scratch, but the field isn’t so established that standards exist and the problem is considered “fully solved” (or close to it) already. Of course, off-the-shelf software should be used on mid-range problems if (a) it’s likely to be good enough, (b) those problems are uncorrelated with the work that the software engineers are trying to do and would be perceived as a distraction, and (c) the software can be used without architectural compromise (e.g. being forced to rewrite a codebase in Java).

The failure, I would say, isn’t that technology companies use off-the-shelf solutions for most problems, because that is quite often the right decision. It’s that, at many of them, that’s all they use, because core infrastructure and R&D don’t fit into the two-week “sprints” that the moronic “Agile” fad demands engineers accommodate, and therefore can’t be done in-house at most companies. The culture of trust in engineers is not there, and that (not the question of whether one technology is used over another) is the crime. Moreover, this often means that programmers spend more time overcoming the mismatch between existing assets and the problems that they need to solve than they spend building new assets from scratch (which is what we’re trained, and built, to do). In the long term, this atrophies the engineer’s skills, lowers her satisfaction with her job, and can damage her career (unless she can move into management). For a company, this spells attrition and permanent loss of capability.

The never-invent-here attitude is stylish because it seems to oppose the wastefulness and lethargy of the old “not-invented-here” corporate regime, while simultaneously reaffirming the fast-and-sloppy values of the new one, doped with venture capital and private equity. It benefits “product people” and non-technical makers of unrealistic promises (to upper management, clients, or investors) while accruing technical debt and turning programmers into a class of underutilized API Jockeys. It is, to some extent, a reaction against the “not invented here” world of yesteryear, in which engineers (at least, by stereotype) toiled on unnecessary custom assets without a care about the company’s more immediate needs. I would also say that it’s worse.

Why is the “never invent here” (NeIH) mentality worse than “not invented here” (NIH)? Both are undesirable, clearly. NIH, taken to the extreme, can become a waste of resources. That said, it is at least a “waste” that keeps the programmers’ skills sharp. On the other hand, NeIH can be just as wasteful of resources, as programmers contend with the quirks and bugs of software assets that they must find externally, because their businesses (being short-sighted and talent-hostile) do not trust them to build such things. It also has long-term negative effects on morale, talent level, and the general integrity of the programming job. My guess is that the “never invent here” mentality will be proven, by history, to have been a very destructive one that will lose us half a generation of programmers.

If you’re a non-technical businessperson, or a CTO who’s been out of the code game for five years, what should you take away from this post? If your sense is that your engineers want to use existing, off-the-shelf software, then you should generally let them. I am certainly not saying that it is bad to do so. If the engineers believe that an existing asset will do a job better than they could do if they started from scratch, and they’re industrious and talented, they’re probably right. On the other hand, senior engineers will develop a desire to build and run their own projects, and they will agitate in order to get that opportunity. The short-termist, never-invent-here attitude that I’ve seen in far too many companies is likely to get in the way of that; you should remove it before it does. Of course, the matter of what to invent in-house is far more important than the ill-specified and vague question of “how much”; in general and on both, senior engineering talent can be trusted to figure that out.

In that light, we get to the fundamental reason why “never invent here” is so much more toxic than its opposite. A “not invented here” culture is one in which engineers misuse freedom, or in which managers misuse authority, and do a bit of unnecessary work. That’s not good. But the “never invent here” culture is one in which engineers are out of power, and therefore aren’t trusted to decide when to use third-party assets and when to build from scratch. It’s business-driven engineering, which means that the passengers are flying the plane, and that’s never a good thing.

Yes, I’ll defend Daylight Saving Time

For those unfamiliar with U.S. timekeeping, we “lost an hour” of sleep last night, at least for those who slept according to clock time. Immediately after 1:59:59 in the morning, it was 3:00:00. We effectively changed into a different time zone, one hour east, and we do this switching forward and back every year. A lot of people don’t like Daylight Saving Time (or, in Europe, Summer Time) because it effectively “tricks” people into waking up an hour earlier, relative to solar time. If you wake up at 7:00, you’re actually forced to wake up at 6:00 for the majority of the year (DST is active for eight months out of the year, meaning that it is actually the usual time regime and winter/standard time is the special one).
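The mechanics of that jump are easy to verify with Python’s standard zoneinfo module (using America/New_York as an illustrative zone; any U.S. zone that observes DST behaves the same way):

```python
from datetime import datetime, timedelta, timezone
from zoneinfo import ZoneInfo  # standard library as of Python 3.9

tz = ZoneInfo("America/New_York")

# One second before the 2015 spring-forward, and the first instant after it.
# Wall times from 2:00:00 through 2:59:59 simply did not occur that night.
before = datetime(2015, 3, 8, 1, 59, 59, tzinfo=tz)
after = datetime(2015, 3, 8, 3, 0, 0, tzinfo=tz)

assert before.utcoffset() == timedelta(hours=-5)  # EST
assert after.utcoffset() == timedelta(hours=-4)   # EDT: "one time zone east"

# In absolute (UTC) terms the two instants are one second apart, even
# though the wall clock jumped from 1:59:59 straight to 3:00:00.
gap = after.astimezone(timezone.utc) - before.astimezone(timezone.utc)
assert gap == timedelta(seconds=1)
```

Converting both instants to UTC before subtracting matters here: subtracting two aware datetimes that share the same tzinfo ignores the offset change, which is its own small argument for how confusing clock-shifting is.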

On paper, Daylight Saving Time sounds really stupid– really, why on earth would we inflict that sort of unnecessary complexity on our timekeeping system? If it didn’t exist, and were proposed now, the idea would be rejected, and possibly for good reason. Benjamin Franklin proposed it as a joke, commenting on the unused daylight wasted by Paris’s nightlife culture. That said, I’ll go on record and be somewhat controversial: I like it. Its drawbacks are serious, but I think that its advantages are numerous as well. Sure, I know that there are a number of intelligent reasons to oppose DST, but I also have an emotional attachment to the 8:45 pm summer sunsets of my childhood. I “know” that time is arbitrary and stupid, and that “light at 9:00 pm” means nothing because we invented these numbers and are really just calling 8:00, “9:00”. While I oppose the phrase when it is overused, time-of-day and especially Daylight Saving Time literally are social constructs. They exist simply because we follow the customs.

So let’s talk about Daylight Saving Time, and I’ll try to explain why it’s a good thing.

It’s not about energy saving. It’s cultural. 

The evidence is pretty strong that Daylight Saving Time doesn’t save energy. Nor does it waste energy. On the whole, energy use is unaffected by it. Discretionary lighting just isn’t a large enough component of our energy expenditure for it to matter. Rather, we use Daylight Saving Time because (contrary to a vocal minority) most of us actually like it. Or, more specifically, we like the results of it. No one likes the transitional dates themselves, but in exchange for two very annoying days each year, we get (at typical latitudes):

  • one hour less “wasted” morning daylight in the summer.
  • sunset after 6:00pm at the peak of autumn (late October).
  • sunrise before 8:00am in the depths of winter, because we transition back to standard time.

To make my argument that DST is cultural, just look at the dates when the transitions occur: mid-March and early November. Relative to daylight availability, this is inconsistent because there’s a lot less daylight in October than in early March. It’s asymmetrical, but it makes sense in the context of typical weather. In October, it’s still warm. In early March, it’s often cold. The transition dates are anchored to temperature, which affects human activity, rather than the amount of daylight.
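For concreteness: since 2007, the U.S. rule pins the transitions to the second Sunday of March and the first Sunday of November– a calendar rule, not an astronomical one. A short sketch of that rule:

```python
import calendar
from datetime import date

def us_dst_dates(year):
    """U.S. rule since 2007: DST starts on the second Sunday of March
    and ends on the first Sunday of November."""
    def nth_sunday(month, n):
        # calendar.monthcalendar pads partial weeks with 0; with the
        # default firstweekday (Monday), Sunday is the last column.
        sundays = [wk[6] for wk in calendar.monthcalendar(year, month) if wk[6]]
        return date(year, month, sundays[n - 1])
    return nth_sunday(3, 2), nth_sunday(11, 1)

start, end = us_dst_dates(2015)
# For 2015, the year of this post: start March 8, end November 1 --
# roughly eight months of DST, anchored to weather, not day length.
```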

Most people who oppose DST would prefer to make the summer time regime permanent. It’s not that they care about having “12:00” (again, as defined by humans) be less than 30 minutes away from mean solar noon. They just don’t like the semi-annual change-over. So why don’t we just “make DST year-round” by moving one timezone to the east? The reasons, again, are cultural, and come down to this: it would really piss people off.

Year-round DST is (possibly) a bad idea.

I believe that the case for DST is strong. Sure, if you’re luckier than most people and have full control of your schedule, you’re likely to think it’s stupid. Why fuck with the clocks twice per year just to prevent people from “wasting” daylight? Shouldn’t people for whom that is some kind of issue just wake up earlier?

Most people, however, don’t control their schedules, especially when it comes to working hours. DST isn’t for people who can work 7-to-3 if they so choose. It’s not for freelancers who work from home or for people who set their own hours. It’s for people who have to be in an office or a retail outlet till 5:00 or 6:00 or 7:00. It’s to give them some daylight after work, especially in the spring and fall when natural light is not so ample. For them, that extra hour of after-work daylight matters.

In the winter, however, those people are going to be working until dark regardless of the time regime. Year-round DST would punt the typical winter sunset from 4:30 to 5:30, which isn’t a meaningful gain for them. They’d still go home in twilight and eat dinner in darkness. Moreover, it’d put the typical winter sunrise after 8:00. They’d be waking up and going to work in the dark, for no gain. With non-DST or “winter” time, they’re at least able to get some daylight in the morning. That can make a large difference when it comes to mental health and morale, because people’s satisfaction with work plummets if they don’t get some daylight on at least one side of their working block.

This has everything to do with how humans react to nominal time, and nothing to do with natural design. There is, of course, absolutely no natural basis for changing the clocks. It is, I would argue, a good thing that we do so, but our reasons for doing it are connected entirely to our social construct of time.

Yes, this is unnatural.

Daylight Saving Time might seem ridiculous because it’s so unnatural. Nature doesn’t have any concept of it, and it’s an active annoyance for farmers whose animals’ circadian rhythms don’t respond to our conception of time. This, I concede. It’s not natural to change the way we keep time by exactly 1/24th of a day (a fraction that matters to us for archaic reasons) twice a year. Also, the fact that we choose our transition dates based on approximate average temperature rather than daylight amount– specifically, keeping DST into late fall, when the days are short but it is still warm– makes it obvious to me that this thing is cultural to begin with.

That said, the clock isn’t natural. Left to our own devices, we’d probably rise around sunrise and go to sleep about two hours after sunset, with a 3- to 4-hour period of wakefulness between two spells of sleep (biphasic sleep). Where we evolved, there wasn’t much seasonality, so this probably didn’t change much over the course of the year, but it had to be changed once people moved north (and south) into the mid-latitudes.

If we were to focus on one geographic point, the ideal clock wouldn’t be anchored to noon (12:00) but to sunrise. In a way, the original Roman hour achieved this, because an hour was exactly 1/12 of the duration between sunrise and sunset, causing it to vary throughout the year. For modern use, we wouldn’t want a variable hour, but it would be pretty useful to anchor sunrise to 6:00 exactly. This way, people could schedule their time according to how much light they need in the sky to awaken happily, rather than live in a world where “8:00 am” is just after sunrise (and, if it’s cloudy, still fairly dark) at one time of the year and bright mid-morning at another. That said, the “anchor sunrise to 6:00” concept is actually pretty ridiculous, because it wouldn’t just require longitudinal timezones, but latitudinal ones as well, and they’d vary continuously throughout the year. If this were done, the actual time difference between Seattle and Miami would be 1 hour and 42 minutes at the height of summer, but 3 hours and 52 minutes in winter. That’s clearly not acceptable. There isn’t a sane way to come up with a time policy that stabilizes sunrise at, or even very close to, 6:00 in the morning. The system we have is probably the optimal point in the tradeoff between local convenience and global complexity. It imposes some complexity, and that’s why almost no one’s entirely happy with our time regime, but it also (a) prevents egregious “waste” of daylight in the summer, with the understanding that few people will wake before whatever time we call “6:00”, while (b) minimizing the number of people who have to commence and finish work in darkness, by realigning political time with solar time in the winter.
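Those Seattle/Miami figures can be sanity-checked with the standard sunrise equation. The sketch below uses approximate coordinates and ignores refraction and the equation of time, so it’s only good to about fifteen minutes, but that’s enough to reproduce the shape of the result:

```python
import math

def sunrise_utc_hours(lat_deg, lon_deg, day_of_year):
    """Approximate sunrise time (in hours, UTC) via the sunrise equation.

    West longitude is taken as positive. Refraction and the equation of
    time are ignored, so this is a rough sketch, not an almanac.
    """
    # Approximate solar declination for the given day of the year.
    decl = -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))
    # Hour angle of sunrise: cos(h) = -tan(latitude) * tan(declination).
    cos_h = -math.tan(math.radians(lat_deg)) * math.tan(math.radians(decl))
    h = math.degrees(math.acos(cos_h))
    local_solar = 12.0 - h / 15.0        # local solar time of sunrise
    return local_solar + lon_deg / 15.0  # shift to UTC by longitude

SEATTLE = (47.6, 122.3)  # approximate latitude, west longitude
MIAMI = (25.8, 80.2)

def gap_hours(day_of_year):
    return abs(sunrise_utc_hours(*SEATTLE, day_of_year)
               - sunrise_utc_hours(*MIAMI, day_of_year))

summer_gap = gap_hours(172)  # ~June 21
winter_gap = gap_hours(355)  # ~December 21
# summer_gap comes out near 1.7 hours and winter_gap near 3.9 hours,
# close to the 1h42m / 3h52m figures above.
```

The winter gap is larger because Seattle’s sunrise swings much further across the year than Miami’s; anchoring clocks to sunrise would turn that seasonal swing into a continuously varying, latitude-dependent time difference.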

Of course, there is the bigger question…

Will we, as a species, outgrow Daylight Saving Time? We only need it because so many of our recurring commitments (in particular, work) are tied to the clock. Unless there is jet lag, people on vacation don’t care if the daylight occurs in the morning hours or evening; they’ll just wake up at whatever time is appropriate to their activities. In 50-100 years, humanity will either have advanced into a leisure society where work is truly voluntary (as opposed to the semi-coercive wage labor that is most common, and still quite necessary, now) or destroyed itself: the technological trends spell mass unemployment that will lead either to abundance and leisure, or to class warfare and ruin. I don’t know which one we’ll get, but let’s assume that it’s the better outcome. Then it’s quite possible that in 2115, schoolchildren (unfamiliar with the concept of people being stuck in offices for continuous 8-12 hour periods) will learn that the people of our time were so tied down to others’ expectations that they had to change the clock twice per year just to align their working lives and seasonal daylight availability in a least-harmful way. They’ll probably find it completely ridiculous, and they’ll be right. All of that said, we live in a world where the social construct of clock time matters. It matters a lot: we’d have a lot more seasonal depression if we made people go to work and leave work in darkness, so we align our clocks to solar noon in the winter to avoid too-late (after 8:00) sunrises. But it’s also remarkably difficult to get people to wake up at a time which they’ve been conditioned to think is early, so we jerk the clocks ahead to avoid wasting daylight during the warmer seasons. From an engineering perspective, and with a focus on our needs as humans right now, I think that the system that exists now is surprisingly effective. It has some complexity and that’s annoying, but only a moderate amount relative to what it achieves.

Open-plan offices, panic attacks, all in the game.

I’ve been waiting to write this piece for years, just because I’ve never seen an industry be so hazardously wrong about something. Here we go…

I’m a programmer who suffers from panic disorder, and I hate the way the open-plan office, once a necessary evil for operations like trading floors, has become the default office environment for people who write software. Yes, there are cases in which open plan offices are entirely appropriate. I’m not going to argue against that fact. I am, however, going to attack the open-plan fetishism that has become trendy, in spite of overwhelming evidence against this style of office layout.

This is a risky piece to write because, insofar as it seems to deliver an indictment, it’s probably going to cover 99 percent of the software industry. At this point, horrible working spaces are ubiquitous in software engineering, perhaps out of a misguided sense that programmers actually like them (which is true… if you define “programmer” as a 24-year-old male who’d rather have a college-like halfway house culture than actually go to work). So let me make it clear that I’m not attacking all companies that use open-plan offices. I’m attacking the practice itself, which I consider counterproductive and harmful, of putting people who perform intense knowledge work in an environment that is needlessly stressful, and generally despised by the people who have to work in it.

I work for a great company. I have a great job. I like being a technologist. I like writing code. I don’t plan to stop making things. In almost all ways, my life is going well right now. (It hasn’t always been so, and I’ve had years that would make it physically impossible to envy me.) However, for the past 8 years– and probably, at a sub-crisis level, for 10 before that– I’ve suffered from panic attacks. Some months, I don’t get any. In others, I’ll get five or six or ten (plus the daily mild anxiety attacks that aren’t true panic). These are not the yuppie anxiety attacks that come from too much coffee (I had those as a teenager and thought that they were “panic attacks”; they weren’t) although I get those, too. Actual panic attacks are otherworldly, often debilitating, and (perhaps worst of all) just incredibly embarrassing in the event that one becomes noticeable to others. Perhaps one of the most traumatizing things about the condition is its introductory phase, when it’s an ER-worthy “mystery health problem”. It takes a while to learn that panic attacks, despite their incredible ability to throw almost any physical symptom at a person, aren’t physically dangerous.

Panic attacks suck. They deliver intense negative reinforcement, almost to the point that I’d call it torture, for… what? What does one learn from an attack that comes out of the blue at 3:47pm? That 3:47pm is a menacing time of day? (This seems absurd, but panic makes a person very superstitious at first.) Some people’s attacks are phobic and attributable to a single cause; others’ are more random. Mine are random but patterned. They almost never happen in the morning or at night, with a peak around 3:00 in the afternoon. They sometimes come with stop-and-go traffic, but (oddly enough) almost never when I am biking. Extremely humid weather can provoke them, but dry heat, cold, rain and snow don’t. They also tend to come in clusters: there are foreshocks, there’s a main event, and then there’s a slew of stupid aftershocks (most being mild anxiety attacks rather than true panic) stretching out over about two weeks. I certainly can’t blame all of them on open-plan offices, because I’ve had them in all sorts of places: cars, planes, boats, my own bed, random street corners. Moreover, I’ve reached a point where, with treatment, I can endure such office environments, most of the time. I’ll say this much about the disorder as it relates to open-plan offices: the severe ones that can be called true panic started in, and largely because of, open-plan offices in the early years of my career. Even to this point, the attacks are exacerbated by the noise and personal visibility of an open-plan office, in which the cost of a panic attack is not limited to the experience itself, but includes embarrassment and the disruption of others’ work. A bad panic attack produces belching, farting, an autistic sort of “rocking”, and numerous tics that can be very loud. No one wants to see or hear that, and I don’t need the added stress of the enhanced consequences of an attack.

I don’t blame companies directly for this because absolutely no one would want their employees to have panic attacks. In fact, if the fact were well-known that open-plan offices exacerbate (and, I would argue, can produce) a debilitating disability in about 2 percent of the population, I think their use would be a lot less common. If the purpose of the open-plan office is to induce anxiety and stress– and this may be a motivation in a small number of companies, although I like to think better of people– it’s clearly there to induce the low levels that supposedly make people work harder, and not the plate-dropping extreme levels.

As for me, I’m mostly cured of the affliction. (I’ve had a bad February, between cabin fever from a sports injury and running out of an important medication; but, in general, my trend is positive.) At my worst (late 2009) I was a shut-in, able to survive largely because my work was only 1500 feet from my house. The 20-minute subway ride to my therapist’s office, without a prophylactic dose of clonazepam, was unthinkable. I began to improve in 2010, as I distinctly remember being actually able to ride a plane (a prerequisite for traveling to Alaska). Oddly enough, I probably had 20 panic attacks in the months leading up to the plane ride. On the airplane itself, I had only a mild one, and it ended 10 minutes after takeoff. So there was about a 100x multiplier on the anticipation of the plane ride that, when it came, actually wasn’t so bad. This is just one of the ironies of panic: the anxiety that can form in anticipation of a potential panic-trigger is often worse than the triggered panic itself.

Many basic skills I had to relearn in the wake of developing Panic Disorder. There was the First Run Over 10 Miles (February 2010) and the First Drive (May 2010) and the First Swim (June 2010) and the First Bike Ride (August 2011) and the First Swim in Open Water (February 2012) and the First Flight Not On Meds (June 2012). In a way, it was like aging in reverse. In 2008 and ’09, I was a crippled old man, unable to do much, and convinced that I was unemployable because throwing myself into the hell of open-plan programming was not an option. In 2010, I was a cautious 65-year-old who could get around but clearly had limitations. By 2012, I’d probably aged down into my 50s; and at this point in 2015, I’m probably at an “effective age” of about 40. I fret about health problems far too much, even though I’m probably (factoring out mild weight gain due to medications) in the top 10% of my age group for physical shape. I’ve decided that 2015 is going to be the year that I finally lick this thing, in large part because I really want to go Scuba diving, which is contraindicated for an active panic disorder. Part of why I am talking about this nightmarish condition is because I have some hope (perhaps naive) that doing so will help me finally put it down for good. Fuck panic. I’m ready for this nonsense to be over. I suspect that it continues because “something” (call it God or “the Universe”) needs me to speak out on this particular issue so, maybe, telling this story in the right way will end it.

It’s been a fight. I haven’t told a tenth of it. Perhaps it has strengthened me– when unaffected by this disease, almost nothing fazes me– and perhaps it has weakened or sickened me. Given that I use no drugs, don’t drink, exercise and eat well, I might make it to 85. Or I might die at 50. Oddly, I’m OK with that. What ended my fear of death (as a Buddhist believing in reincarnation) was the realization that it is the opposite of panic. Death is the essence of impermanence. It’s there to remind us that nothing in this world lasts forever. Panic is a visceral fear of eternal “stuckness”, the sense that an undesirable state will never end. There is (and I won’t lie about the morbid fact) a fear of death that lives in panic, but it’s more of a fear of “dying like this” than one of death itself. I know that I must die, but I really don’t want to die in an office at age 31, and it’s the latter thought that makes panic so awful. From what I know of death and the final moments, it seems like something that should not be feared; the pain of death is in it happening to so many other people.

Attacks are often trigger-less or, at least, have no obvious trigger. The worst ones aren’t usually “caused by” anything, and you can’t stop one by wishing it were over. Telling a panicker to “calm down” is about as useless as telling a depressed person to “cheer up”. If it has any effect, it’s a negative one. In general, though, once a wave of panic or dread has begun, it will run its course no matter what is done. The medication, the fruit juice, the hot tub… those are all useful at preventing a short-term “aftershock” or a recurrence, and therefore are great at preventing “rat king” attacks where one wave comes after another… but the attack itself (at least, a single “wave” of panic that lasts for an intense 10 to 300 seconds) can usually not be aborted.

Violent transparency

What is it about the open-plan office that causes anxiety? People are quick to implicate noise, and I agree that loud conversations can be a problem: they cause small amounts of anxiety, and might even be sufficient to cause panic. But I don’t think office noise, alone, explains the problem. The direction of the noise exacerbates it: noise is especially distressing when it comes from behind a person, poor acoustics can make this an issue at any point on the floor, and noise that comes “from everywhere” creates a sense of incoherence. All that said, I don’t think that noise is the major killer when it comes to open-plan-induced anxiety or panic. Annoyance, yes. Mental fatigue, for sure. However, I think the omnipresent stress of being visible from behind– typically a mild stress, because one usually has nothing to hide, but a creepy one that never goes away– tends to accumulate throughout the day. If there’s any one thing (and, of course, there probably isn’t just one) that causes open-plan panic attacks, it’s that constant creepy feeling of being watched.

I’ve studied office boredom and its causes, and one recurring theme is that people are notoriously bad at identifying the right cause of boredom, anxiety, or poor performance. I think this explains why people overestimate the influence of acoustics, and underestimate that of line-of-sight issues, on their open-plan problems. For example, people subjected to low-level irritations (sniffling, people shifting in their seats, intermittent noise) will often attribute their poor performance in reading comprehension to “boring” material, even when others who read the same passages in comfortable environments find the material interesting. In other words, people misattribute their distraction to a fault in the material rather than to the (perhaps less-than-liminal) defects in their environment. I, likewise, suspect that “noise” becomes the attributed cause of open-plan discomfort, anxiety, and panic largely because people fear that attributing their negative response to lines of sight would be a “confession” that they have something to hide. But lines of sight matter, and almost every human space except the office is designed with this in mind. In a restaurant booth, you can ignore the noise, even though the environment is often louder than a typical office. The noise isn’t a problem because you know that it doesn’t concern you. You’ve got a wall at your back, and enough visual barriers to feel confident that almost no one is looking at you, and so you can eat in peace.

Programmers have, against their own interests, created a cottage industry around a culture that I would call violent transparency. One can start by noting the evils of “Agile” systems that micromanage and monitor individual work, in some companies and use cases, down to the fucking hour. While these techniques are beloved by middle management, all over the technology industry, for identifying (and also creating) underperformers (“Use Scrum to find your Scum”), I’ve seen enough to know that these innovations do far more harm than good. They tend to incent political behavior, have unacceptable false-positive rates (especially among people inclined to anxiety problems) and generally create an office-wide sense that programmers are terminal subordinates, incapable of anything that involves long-term thinking. Moreover, the culture of violent transparency is inhospitable to age and experience, favoring short-term flashiness over sustainable, thoughtful development. Over time, that leads to echo chambers, cultural homogeneity, and a low overall quality of work produced.

To be clear, it’s not transparency itself that is bad. I think it’s great when people are proud of their work and are eager to share it. That should be encouraged whenever possible. I also think that it’s worthwhile for each person on a team to give regular notice of progress and impediments. In fact, I think that, properly used and timeboxed, so-called “standup” meetings can be good things that may reduce political behavior by eliminating the “Does Tom actually do anything?” class of suspicion. Somewhere, there is a middle ground between “The programmers work on whatever they want, update us when they feel like it, and have to be needled to integrate their work with the rest of the company” and “We torture programmers by forcing them to justify days and hours of their own working time”, and I think it’s important to find it. Unfortunately, the “Agile” industry seems built to sell one vision of things (autonomous self-managing teams! no waterfalls or derechos!) to programmers while promising something entirely different to management.

Oddly enough, young programmers seem not to oppose violent transparency, whether in the form of oppressive project management with downright aggressive visibility into the day-to-day fluctuations of their productivity, or in the form of the open-plan office. Indeed, some of the strongest advocates of these paradoxical macho-subordinate cultures are programmers. (“I want an open-plan environment. I have nothing to hide!”) This is tenable for the inexperienced because they haven’t yet been afflicted by the oppressive creepiness of feeling monitored (even if, in reality, they are not being watched). Those who’ve not yet had a negative experience at work (a set that becomes very small with age) do not yet realize that a surveillance state is, even for the innocent and the pillars of the community, an anxiety state.

Why is it so rare for programmers to recognize “Agile” fads as a game being played against them? Well, first, I think that there are some, like me, who genuinely like the work and have few complaints except for these stupid passing fads, which become more navigable or avoidable as one gets older and more socially skilled. I have seen “Scrum” (and I’m not talking about standup meetings, but the whole process and the attitude of programmers as wholly subordinate to “product”) shave over 80 percent off of a large company’s valuation in about a year. So I know how dysfunctional it can be. However, I’ve never personally been fired because of “Agile”, and I’m sufficiently skilled at Agilepolitik that I probably never will be. As I get older and more politically skilled, and my panic situation becomes more treatable, this is increasingly someone else’s battle. That said, if I can raise the issue to prevent further destruction of shareholder value, loss of talent, and intense personal anxiety to be suffered by others, then I will do so.

As a group, most of us who are experienced know that these macho-subordinate “Agile” fads are extremely harmful, but we don’t speak up. It’s not our money that’s at risk. It doesn’t hurt us, nearly as much as it hurts shareholders, when “user stories” cause a talent bleed and bankrupt a company. As for the young and inexperienced, they often have no sense of when a game is being played against them. Like the open-plan office, so-called “Agile” is becoming something that “everyone does”, and we now have a generation of programmers who not only consider that nonsense to be normal– because they’ve never seen anything else– but will replicate it, even without malign intent.

Violent transparency appeals to the hyper-rational person who hasn’t yet learned that the world is fluid, subtle, complicated, and often very emotional and political. It appeals to someone who’s never had an embarrassing health problem or a career setback and who still thinks that a person who is good at his job and ethical has “nothing to hide”. “Agile” notions of estimation and story points look innocuous. I mean, shouldn’t “product managers” and executives know how long things are supposed (ha!) to take? It seems like these innovations are welcome. And if a few “underperformers” get found out and reassigned or fired, isn’t that good as well? (Most programmers hate bad programmers. Older, more seasoned programmers hate bad programs, which are often produced by “Agile” practices, even when good programmers are writing the code.) What many young programmers don’t recognize is that every single one of them, for one reason or another, will have a bad day. Or even a bad two weeks (“sprinteration”). Or even a bad month. Shit happens. Health problems occur, projects get cancelled, parents and pets die, kids get sick, and errors from all over the company (up, down, sideways, or all at once) can propagate into one’s own blame space for any reason or no reason at all. You’d think that programmers would recognize this, band together, and support each other. Many do. But the emerging Silicon Valley culture discourages it, instead pushing the macho-subordinate culture in which programmers endure unreasonable stress– of a kind that actually detracts from their ability to do their jobs well– in the name of “transparency”.

Why open plan?

To make it clear, I’m not against all uses of the open-plan office. One environment where open-plan offices make sense is a trading desk. Seconds, in trading, can mean the difference between respectable gains and catastrophic losses, and so a large number of people need to be informed as soon as is humanly possible when someone detects a change in the market, or a production problem with technology (their own or from a third party; you’d be surprised at how much bad third-party software the trading world runs on). In such an environment, the latency of private offices can’t be afforded: knocking on doors and waiting isn’t acceptable, because a trading desk is about as far from a long-term-focused R&D environment as one can get. The reality of life on a trading desk is that the job probably mandates an open-plan, bullpen environment in which a shout can be heard by the entire floor at once. It’s a stressful environment, and talented people demand 30 percent raises every year in order to stay in it, because even people who love trading feel like they’ve taken a beating after a 7-hour exposure to that sort of bullpen environment. In fact, most seasoned traders will admit that their office environments are stressful and that the job has a “young man’s game” flavor to it; even the ones who say they love the game of trading are usually in management roles by 50.

So, no, I’m not against all open-plan offices. I’m against unnecessary open-plan offices. These environments were once regarded as a necessary evil for a small number of use cases, and are now the default work environment for programmers. I’m a technologist. I expect to be writing, or involved in the writing of, code for the next 30 years. Some projects and circumstances may require that I endure an open-plan office, and I accept that as something that may occur from time to time. It shouldn’t be the only option, though. Holding up against artificial stress that makes everyone worse at his job should not be part of the programmer’s job description. I’m 31, and if I have chest pain, I’m almost 100% sure that it’s a panic attack. That will be different when I’m 60, and at that age, I sure as hell don’t want to be dealing with an open-plan office. I’ll quit the industry before then if I can’t get some damn privacy.

So when are open-plan offices necessary or useful? And what are software companies’ reasons for using them?

There are, I’d argue, six reasons why companies use open-plan offices for software. Some are defensible, and some are atrocious.

  1. It’s a necessity of the job. This might apply to 1 percent of programming jobs, if that, but such environments do exist. A hedge fund probably needs its core quants to be within earshot of the traders they’re supporting. A larger percentage of programming jobs will require an open working space some of the time (e.g. mission control when something is launched into space). In these cases, the negatives of the environment are offset either by the mission or by compensation.
  2. The company is very new. It goes without saying that if a company has fewer than five members, an open-plan (“garage”) format is probably going to be used. I don’t see that as terribly problematic; running a four-person company presents plenty of stressors that are just as exhausting as open-plan offices, and startups can mitigate the stress caused by such environments by encouraging flexible hours and keeping office noise down.
  3. The company is expanding rapidly. To be truthful, I don’t fault every tech company that uses an open-plan office. If you’re growing at 40 percent per year, you can’t afford to have private offices for all your programmers at all times. It’s perfectly reasonable to use an open-plan office when anticipating rapid growth, simply because it’s hard to set up any other layout if you expect to outgrow it quickly. As a “growing pain”, the cramped open-plan office is acceptable: its negatives are severe, but tolerable as a temporary arrangement. It shouldn’t be the long-term expectation, however. I don’t hate that some companies use open-plan offices. I hate that working in one is ceasing to be a choice, because even the 5,000-person companies are now using them.
  4. “Everyone else is doing it.” This relates to item #3. Because rapid-growth companies have a legitimate reason to use open-plan offices in the first couple of years, it has become “cool” to have one, in spite of their being unpleasant for the people who have to work in them. To me, this is upsetting and troubling. There are good startups out there, but I don’t think “startupness” should be valued for its own sake. Companies regress to the mean with size, for sure, and this means that the best (and worst) jobs at any given time are likely to be at smaller companies; but, on the whole, attempting to replicate startup traits in larger companies tends to reproduce the negatives more reliably than the positives. People in large and small companies ought to recognize that there are, in fact, negatives to being a startup: ill-defined division of labor, cramped spaces, rapid organizational change. Unfortunately, big-company cost-cutters– that is, the people who aren’t good at anything else, and who have no vision, so they use the self-appointed title of “belt-tightener” to inspire fear and grow politically– are very willing to use the “coolness” of startups to justify changes in their much larger companies, and these changes are invariably harmful to the employees and their companies.
  5. It’s cheap. This is a dishonorable motivation, for any company other than a bootstrapped or seed-funded startup, but some employers use the open-plan office just because it’s the cheapest and crappiest option. To be honest, I think that this is a case of “you get what you pay for”. Open-plan offices save costs, but the quality of work produced suffers and, for high-end knowledge work, it’s almost certainly an unfavorable trade. Assuming a modest 20% increase in programmer productivity, private or pair offices at 200 SF per person will pay for themselves tenfold, in most locales.
  6. Age and disability discrimination. Is this the motivation for most employers who use open-plan offices? Probably not. Is it a motivation for some? Absolutely. In the early 2000s, when many companies had to make cuts and wanted to shed their most expensive workers regardless of value or ability, it was a fairly well-known HR trick to reduce privacy and office space, driving the more expensive older programmers out first. (I’d bet that “Agile” became popular around the same time.) Age discrimination has probably never been the only reason for introducing an unhealthy work environment, but it is one among many for sure. So I suspect that one of numerous reasons why startup executives love open-plan offices is their repulsiveness to older programmers (where “older” means “anyone who is over 34 or has had any health problem”) and women.
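The cost claim in item #5 is easy to sanity-check with back-of-envelope arithmetic. Here is a sketch in Python; the loaded-cost and rent figures are illustrative assumptions of my own, not hard data, so adjust them for your locale:

```python
# Back-of-envelope check on whether private offices "pay for themselves".
# All figures below are illustrative assumptions, not hard data:
LOADED_COST_PER_YEAR = 180_000  # assumed fully loaded cost of one programmer (USD)
PRODUCTIVITY_GAIN = 0.20        # the "modest 20%" improvement from private space
SQFT_PER_PERSON = 200           # private or pair office footprint
RENT_PER_SQFT_YEAR = 30         # assumed prime-downtown rent (USD per sq ft per year)

value_of_gain = LOADED_COST_PER_YEAR * PRODUCTIVITY_GAIN  # dollar value of the gain
office_cost = SQFT_PER_PERSON * RENT_PER_SQFT_YEAR        # annual cost of the office

payback_ratio = value_of_gain / office_cost
print(f"The office returns {payback_ratio:.0f}x its cost per year")  # prints "6x" at these numbers
```

At these assumed numbers the ratio is about 6x; with a more senior (more expensive) programmer, or counting only the marginal square footage over a cramped open-plan desk, it climbs toward tenfold.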

Should open-plan offices be abolished or made illegal? No, probably not. That’s not my goal. Though I suffer from a disability that is aggravated by this rather obnoxious feature of the typical software work environment, I am also aware of its necessity in a number of circumstances. If I went back into the hedge-fund life, I’d probably have to “Med Up” and deal with the intense environment, but I’d expect to be compensated for the pain. Everywhere else, open-plan offices shouldn’t be the norm. They should be a temporary “growing pain” at most. I can deal with them if I have to, for short durations and with the ability to get away. I don’t want to deal with them in every fucking office environment that I have to use for the next thirty years. I don’t expect to be in this industry, or even alive, if I have to deal with 30 more years of open-plan.

The F.Y.S. Letter

How do we preserve the benefits of the open-plan environment (if there are any) while mitigating the literally sickening drawbacks? Here are some starting points and observations:

  1. There is no working utility to visibility from behind. Abolish it. If you must go for an open-plan layout, then buy booth-style walls and give each programmer a wall at her back. Noise is irritating, but I don’t think that it’s the noise of the open-plan office that causes the distraction or the panic attacks. It’s the noise, followed by the loss of concentration, followed by the awareness of being visible “not getting any work done” to 20 people, combined with the general mental exhaustion that comes from having been visible to other people for hours on end. It is stressful for anyone to be visible, like a fucking caged animal, to so many people for 8-10 hours, five days per week. It’s especially bad to be visible from behind. Combine that with the mental tiredness of a workday well spent, and it’s intolerable. Whereas even normal people get mild vertigo and nausea by 4:00pm from this– I’ve heard that doctors in Silicon Valley are beginning to discuss “Open Office Syndrome” as even people without anxiety disorders begin presenting with late-afternoon vertigo– I’m at risk of something much worse, and I shouldn’t be. I’m a programmer, and fighting off panic attacks shouldn’t be part of the fucking job. I’d much rather use all of my mental energy on the programming.
  2. If open-plan is the only option, discourage pairing sessions and impromptu meetings in the working space. That’s what conference rooms are for. If the conversation is going to last for longer than 180 seconds, then it shouldn’t be in earshot of people who have no need to hear it, and are trying to do their jobs.
  3. Yes, programmers are special. Our jobs are mentally exhausting in a way that most white-collar workers (including almost all of the high-ranking ones called “executives”) will never be able to relate to. Many programmers are fat not because we’re “nerds” but because it is difficult to fight off food cravings after hours of mental exertion. The difficulty of the job (assuming that you care about doing it right, instead of just playing politics) is generally held to be a positive; it’s great to get into a creative flow. We also have an unusual trait of wanting to work hard (which is why that “Agile” nonsense is so stupid; it’s designed by and for non-producers on the assumption that we are like them, in terms of wanting to do as little real work as possible and needing the micromanagement of “user stories”, when the exact fucking opposite is true) and tend to beat ourselves up when (even if the reasons are environmental, and not our fault) we can’t focus. However, that combination of mental fatigue and overactive superego puts us in an ultra-high-risk category for panic and anxiety disorders. So, yes, we are fucking special and we need a better fucking work environment than what 99 percent of us face.
  4. Discourage eat-at-desk culture but encourage “talking shop” at lunch. First of all, eating at one’s desk is like eating in a car. It’s OK to do it once in a while, but it shouldn’t be the norm. This has nothing to do with anxiety disorders or office layouts. I just don’t like it, although I’ll do it two days out of five. But it should be discouraged in general. Second, “collaboration” is an often-cited benefit of an open-plan office (even though such environments aren’t very collaborative; just stressful and annoying). However, you can’t force people to be “collaborative” by having them overhear each other’s conversations that are unrelated to what they’re trying to do and, also, people will gladly be collaborative even when they have private offices (from which, you know, they do sometimes emerge, humans being social creatures and all). Collaboration happens when people are relaxed, together, and “talk shop” because they’re genuinely engaged in what they’re up to. It happens quite often with programmers. You don’t have to force us to be “collaborative”. It just comes about naturally. Remember: most of us like work.
  5. Stop measuring people based on their decline curves in the first place. Programming is not the Marine Corps and it shouldn’t be. This should be a 40-year career. Programmers create their own stress (eustress, the good kind) and should not be subjected to the negative kind, especially on variables (such as the ability, increasingly compromised with age, to withstand a lack of privacy that can be likened to a nine-hour economy-class airplane ride, every day) that do not correlate to any useful ability. When that happens, the people suffer, the code suffers, and the product suffers. And as a programmer, let me state that low-quality work in infrastructure or software is always astronomically more expensive than it appears that it “should” be. Since there is almost no correlation between basic reliability (that is, the ability to make good decisions under normal conditions, ethical character, and the insight necessary to build systems that are far more reliable than any human could be) and the sort of superficial but extreme reliability (“story points” per week) that characterizes office tournaments, we as an industry ought to abolish all focus on the latter. The former (which I termed “basic reliability”) is what we care about, and the rightward tail of a person’s decline curve has no correlation to it.


I’m not writing this to indict specific companies– except for, perhaps, Facebook, for the woeful lack of cultural leadership shown in building a 2800-person open-plan office– because nearly every programming environment I’ve been in since 2007 has been an open-plan one. In fact, I’ve only been in two companies that didn’t use open-plan offices: one had pair offices (which worked very well, and are probably better than solo offices) and the other had a different kind of dysfunctional layout. Moreover, I recognize that there are legitimate uses for open-plan offices, even if such environments exacerbate some people’s health issues. Engineering, whether in technology or in designing a workplace, is about trade-offs, and sometimes the benefits of an open-plan office outweigh the drawbacks. That is, I think, quite rare, because I don’t believe that cramped offices are “collaborative” at all, but I’ll admit that the possibility exists. In general, though, an ideal office environment would afford 150-200 square feet of personal space per programmer, at a minimum, while offering central, “public” spaces for collaboration and informal socializing (which often blend together). Relative to even an average programmer’s productivity, office space is ridiculously cheap. A 200-square-foot office, in a prime downtown location in the U.S., typically costs $4000 to $6000 per year, which is nothing compared to the gain from having happy, productive, healthy programmers able to do their jobs without distraction.

What I absolutely must kill is the assumption that the open-plan bullpen represents the way for programmers to work, as if it deserved to be a standard, and as if programmers were somehow immune to the health problems (acute and gradual) that badly-designed office environments inflict. It’s this sort of nonsense (in the obscene name of “culture fit”) that makes our industry needlessly exclusionary. The open-plan office is fine in a company’s transitional phase, but it should not be the standard, and it should absolutely never be, under any circumstances, a part of how we allow the rest of the world to view us as programmers. (Visibility from behind suggests low social status. If you’re not aware of that nuance, then you’re not qualified to have opinions about office space and you don’t get a vote.) If your company is at five people, then use the environment that’s right for those five people. If you’re growing too fast for individual or pair offices to be practical, then use cubicles or an open-plan layout during the growth phase, and shift over to offices with doors as you can. However, if you’re the CEO of a 5000-person company and you’re still putting software engineers in open-plan offices, then you’re either ignorant or a psychopath. The open-plan office is no more “cool” than smoking, and people who want to be programmers without signing up for 40 years of second-hand smoke should have that option.