VC-istan 8: the Damaso Effect

Padre Damaso, one of the villains of the Filipino national novel Noli Me Tangere, is among the most detestable characters in literature, a symbol of both colonial arrogance and severe theological incompetence. One of the novel’s observations about colonialism is that it’s worsened by the specific type of person who implements colonial rule: those who failed in their mother country, and who are taking part in a dangerous, isolating, and morally questionable project that is their last hope of acquiring authority. Colonizers tend to be people with no justification left for superior social status but their national identity. One of the great and probably intractable tensions within the colonization process is that it forces the best (along with the rest) of the conquered society to subordinate to the worst of the conquering society. The total incompetence of the corrupt Spanish friars in Noli is just one example of this.

In 2014, the private-sector technology world is in a state of crisis, and it’s easy to see why. For all our purported progressivism and meritocracy, the reality of our industry is that it’s sliding backward into feudalism. Age discrimination, sexism, and classism are returning, undermining our claims of being a merit-based economy. Thanks to the clubby, collusive nature of venture capital, securing financing for a new technology business requires tapping into a feudal reputation economy that funds people like Lucas Duplan, while almost no one backs anything truly ambitious. Finally, there’s the pernicious resurgence of location (thanks to VCs’ unwillingness to fund anything more than 30 miles away from them) as a career-dominating factor, driving housing prices in the few still-viable metropolitan areas into the stratosphere. In so many ways, American society is going back in time, and private-sector technology is a driving force rather than a counterweight. What the fuck, pray tell, is going on? And how does this relate to the Damaso Effect?

Lawyers and doctors did something, purely out of self-interest, to prevent their work from being commoditized as American culture became increasingly commercial in the late 19th century. They professionalized. They invented ethical rules and processes that allowed them to work for businessmen (and the public) without subordinating. How this all works is covered in another essay, but it served a few purposes. First, the profession could maintain standards of education so as to keep membership in the profession a form of credibility independent of managerial or client review. Second, by ensuring a basic credibility (and, much more important, employability) for good-faith members, it enabled professionals to meet ethical obligations (e.g. don’t kill patients) that supersede managerial or corporate authority. Third, it ensured some control over wages, although that was not its entire goal. In fact, the difference between unionization and professionalization seems to be as follows. Unions accept that the labor is a commodity, but ensure that the commoditization happens on fair terms (without collective bargaining, and in the absence of a society-wide basic income, that never occurs); they demand a fair rate of exchange. Professionalization exists when there is some prevailing reason (usually an ethical one, as in medicine) to prevent full commoditization. If it seems like I’m whitewashing history here, let me point out that the American Medical Association, to name one example, has done some atrocious things in its history. It originally opposed universal healthcare; it has received some karma, insofar as the inventively mean-spirited U.S. health insurance system has not only commoditized medical services, but done so on terms unfavorable to physician and patient both. I don’t mean to say that the professions have always been on the right side of history, because that’s clearly not the case; professionalization is a good idea, often poorly realized.

The ideal behind professionalization is to separate two senses of what it means to “work for” someone: (1) to provide services, versus (2) to subordinate fully. Its goal is to allow a set of highly intelligent, skilled people to deliver services on a fair market without having to subordinate inappropriately (such as providing personal services unrelated to the work, because of the power relationship that exists) as is the norm in mainstream business culture.

As a tribe, software professionals failed in this. We did not professionalize, nor did we unionize. In the Silicon Valley of the 1960s and ’70s, it was probably impossible to see the need for doing so: technologists were fully off the radar of the mainstream business culture, mostly lived on cheap land no one cared about, and had the autonomy to manage themselves and answer to their own. Hewlett-Packard, back in its heyday, was run by engineers, and for the benefit of engineers. Over time, that changed in the Valley. Technologists and mainstream, corporate businessmen were forced to come together. It became a colonial relationship quickly; the technologists, by failing to fight for themselves and their independence, became the conquered tribe.

Now it’s 2014, and the common sentiment is that software engineers are overpaid, entitled crybabies. I demolished this perception here. Mostly, that “software engineers are overpaid” whining is propaganda from those who pay software engineers, and who have a vested interest. It has been joined lately by leftist agitators, angry at the harmful effects of technology wealth in the Bay Area, who have failed thus far to grasp that the housing problem has more to do with $3-million-per-year, 11-to-3 product executives (and their trophy spouses, who have nothing to do but fight for the NIMBY regulations that keep housing overpriced) than with $120,000-per-year software engineers. There are good software jobs out there (I have one, for now) but, weighed against the negatives of the software industry in general (low autonomy relative to intellectual ability, frequent job changes necessitated by employers’ indifference to employee career needs, bad management), the vast majority of software engineers are, if anything, underpaid. Unless they move into management, their incomes plateau at a level far below the cost of a house in the Bay Area. The truth is that almost none of the economic value created in the recent technology bubble has gone to software engineers or lifelong technologists. Almost all of it has gone to investors, well-connected do-nothings able to win sinecures from reputable investors and “advisors”, and management. This should surprise no one. Technology professionals and software engineers are, in general, a conquered tribe, and the great social resource that is their brains is being mined for someone else’s benefit.

Here’s the Damaso Effect. Where do those Silicon Valley elites come from? I nailed this in this Quora answer. They come from the colonizing power, which is the mainstream business culture. This is the society that favors pedigree over (dangerous, subversive) creativity and true intellect, the one whose narcissism brought back age discrimination and makes sexism so hard to kick, even in software, which should, by rights, be a meritocracy. That mainstream business world is one where work isn’t about building things or adding value to the world, but is purely an avenue through which to dominate others. OK, I’ll admit that that’s an uncharitable depiction. In fact, corporate capitalism and its massive companies have solved quite a few problems well. And Wall Street, the capital of that world, is morally quite a bit better than its (execrable) reputation might suggest. It may seem very un-me-like to say this, but there are a lot of intelligent, forward-thinking, very good people in the mainstream business culture (“MBA culture”). However, those are not the ones who get sent to Silicon Valley by our colonial masters. The failures are the ones sent into VC firms and TechCrunch-approved startups to manage nerds. Not only are they the ones who failed out of the MBA culture, but they’re bitter as hell about it, too. MBA school told them that they’d be working on $50-billion private-equity deals and buying Manhattan penthouses, and instead they’re stuck bossing nerds around in Mountain View. They’re pissed.

Let me bring Zed Shaw in on this. His essay on NYC’s startup scene (and its inability to get off the ground) is brilliant and should be read in full (seriously, go read it and come back to me when you’re done), but the basic point is that, compared to the sums of money that real financiers encounter, startups are puny and meaningless. A couple of quotes I’ll pull in:

During the course of our meetings I asked him how much his “small” hedge fund was worth.

He told me:

30 BILLION DOLLARS

That’s right. His little hedge fund was worth more money than thousands of Silicon Valley startups combined on a good day. (Emphasis mine.) He wasn’t being modest either. It was “only” worth 30 billion dollars.

Zed has a strong point. The startup scene has the feeling of academic politics: vicious intrigue, because the stakes are so small. The complete lack of ethics seen in current-day technology executives is also a result of this. It’s the False Poverty Effect. When people feel poor, despite objective privilege and power, they’re more inclined to do unethical things because, goddammit, life owes them a break. That startup CEO whose investor buddies allowed him to pay himself $200,000 per year is probably the poorest person in his Harvard Business School class, and feels deeply inferior to the hedge-fund guys and MD-level bankers he drank with in MBA school.

This also gets into why hedge funds get better people (even, in NYC, for pure programming roles) than technology startups. Venture capitalists give you $5 million and manage you; they pay to manage. Hedge fund investors pay you to manage (their money). As long as you’re delivering returns, they stay out of your hair. It seems obvious that this would push the best business people into high finance, not VC-funded technology.

The lack of high-quality businessmen in the VC-funded tech scene hurts all of us. For all my railing against that ecosystem, I’d consider doing a technology startup (as a founder) if I could find a business co-founder who was genuinely at my level. For founders, it’s got to be code (tech co-founder) or contacts (business co-founder), and I bring the code. At my current age and level of development, I’m a Tech 8. A typical graduate of Harvard Business School might be a Biz 5. (I’m a harsh grader; that’s why I gave myself an 8.) Biz 6 means that a person comes with connections to partners at top VC firms and resources (namely, funding) in hand. The Biz 7s go skiing at Tahoe with the top kingmakers in the Valley, and count a billionaire or two in their social circle. If I were to take a business co-founder (noting that he’d become CEO and my boss), I’d be inclined to hold out for an 8 or 9, but (at least in New York) I never seemed to meet Biz 8s or 9s in VC-funded technology, and I think I’ve got a grasp on why. Business 8s just aren’t interested in asking some 33-year-old California man-child for a piddling few million bucks (which come with nasty strings attached, like counterproductive upper management). They have better options. To the Business 8+ out there, whatever the VCs are doing in Silicon Valley is a miserable sideshow.

It’s actually weird and jarring to see how bad the “dating scene” between technical and business people is in the startup world. Lifelong technologists, who are deeply passionate about building great technology, don’t have many other places to go. So a lot of the Tech 9s and 10s stick around, while their business counterparts leave, and a Biz 7 is the darling at the ball. I’m not a fan of Peter Shih, but I must thank him for giving us the term “49ers” (4s who act like 9s). The “soft” side– the business world of investors and well-connected people who think their modest connections deserve to trade at an exorbitant price against your talent– is full of 49ers, because Business 9s know to go nowhere near the piddling stakes of the VC-funded world. Like a Midwestern town bussing its criminal element to San Francisco (yes, that actually happened), the mainstream business culture sends its worst and its failures into VC-funded tech. Have an MBA, but not smart enough for statistical arbitrage? Your lack of mathematical intelligence means you must have “soft skills” and be a whiz at evaluating companies; Sand Hill Road is hiring!

The venture-funded startup world, then, has the best of one world (passionate lifelong technologists) answering to the people who failed out of their mother country: mainstream corporate culture.

The question is: what should be done about this? Is there a solution? Since the Tech 8s, 9s, and 10s can’t find appropriate matches in the VC-funded world (and, for their part, most Tech 8+ go into hedge funds or large companies– not bad places, but far away from new-business formation– by their mid-30s), where ought they to go? Is there a more natural home for Tech 8+? What might it look like? The answer is surprising, but it’s the mid-risk / mid-growth business that venture capitalists have been decrying for years as “lifestyle businesses”. The natural home of the top-tier technologist is not the flash-in-the-pan world of VC, but the get-rich-slowly world of steady, 20 to 40 percent annual growth driven by technical enhancement (not the rapid personnel growth and creepy publicity plays that VCs prefer).

Is there a way to reliably institutionalize that mid-risk / mid-growth space, which currently must resort to personal savings (“bootstrapping”)– a scarce resource, given that engineers are systematically underpaid– just as venture capital has institutionalized the high-risk / get-big-or-die region of the risk/growth spectrum? Can it be done with a K-strategic emphasis that forges high-quality businesses in addition to high-value ones? Well, the answer to that one is: I’m not sure. I think so. It’s certainly worth trying. Doing so would be good for technology, good for the world, and quite possibly very lucrative. The real birth of the future is going to come from a fleet of a few thousand highly autonomous “lifestyle” businesses– and not from VC-managed get-huge-or-die gambits.

Was 2013 a “lost year” for technology? Not necessarily.

The verdict seems to be in. According to the press, 2013 was just a god-awful, embarrassing, downright shameful year for the technology industry, and especially Silicon Valley.

Christopher Mims voices the prevailing sentiment here:

All in, 2013 was an embarrassment for the entire tech industry and the engine that powers it—Silicon Valley. Innovation was replaced by financial engineering, mergers and acquisitions, and evasion of regulations. Not a single breakthrough product was unveiled—and for reasons outlined below, Google Glass doesn’t count.

He goes on to point out the poor performance of high-profile product launches, the abysmal behavior of the industry’s “ruling class”– venture capitalists and leading executives– and the fallout from revelations like the NSA’s PRISM program. Yes, 2013 brought forth a general miasma of bad faith, shitty ideas, and creepy, neoreactionary bubble zeitgeists: Uber’s exploitative airline-style pricing and BitTulip mania are just two prominent examples.

He didn’t cover everything; presumably for space, he gave no coverage to Sean Parker’s environmental catastrophe of a wedding (and the 10,000-word rant he penned while off his meds) and its continuing environmental effects. Nor did he cover the growing social unrest in California, culminating in the blockades against “Google buses”. Nor did he mention the rash of unqualified founders and mediocre companies like Summly, Snapchat, Knewton, and Clinkle, and all the bizarre work (behind the scenes, by the increasingly country-club-like cadre of leading VCs) that went into engineering successes for these otherwise nonviable firms. Mims’s tear-down of technology for its sins didn’t even scratch the surface, and even with the slight coverage given, 2013 in tech looks terrible.

So, was 2013 just a toilet of a year, utterly devoid of value? Should we be ashamed to have lived through it?

No. Because technology doesn’t fucking work that way. Even when the news is full of pissers, there’s great work being done, much of which won’t come to fruition until 2014, 2015, or even 2030. Technology, done right, is about the long game and getting rich– no, making everyone rich– slowly. (Making everyone rich is, of course, not something that will be achieved in one quarter or even one decade.) “Viral” marketing and “hockey stick” obsessions are embarrassments to us. We have no interest in engineering that sort of thing, and don’t believe we have the talent to reliably make it happen– because we’re pretty sure that no one does. But we’re very good, in technology, at making things 10, or 20, or 50 percent more efficient year-on-year. Those small gains and occasional big wins amount, in the aggregate, to world economic growth at a 5% annual rate– nearly the highest it has ever achieved.
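
To make the compounding arithmetic concrete, here’s a toy calculation– a sketch with made-up rates and horizons, not data from anywhere– of what boring, unglamorous year-on-year improvement adds up to:

    # Toy illustration: steady, unglamorous efficiency gains compound dramatically.
    # All rates and time horizons are made up for illustration.

    def growth_factor(rate: float, years: int) -> float:
        """Total multiplier from a constant annual improvement rate."""
        return (1 + rate) ** years

    for rate in (0.10, 0.20, 0.50):
        print(f"{rate:.0%} per year for 20 years -> {growth_factor(rate, 20):,.1f}x")

    # Economy-wide, 5% annual growth doubles total output roughly every
    # 14 years (rule of 72: 72 / 5 ~= 14).
    print(f"5% per year for 14 years -> {growth_factor(0.05, 14):.2f}x")

The specific numbers don’t matter; what matters is that a mere 10 percent yearly improvement, sustained for two decades, is nearly a 7x gain– the kind of win that never makes headlines.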

Sure, tech’s big stories of 2013 were mostly bad. Wearing Google Glass, it turns out, makes a person look like a gigantic fucking douchebag. I don’t think that such a fact damns an entire year, though. Isn’t technology supposed to embrace risk and failure? Good-faith failure is a sign of a good thing– experimentation– and a sign of a process that works. (I’m still disgusted by all the bad-faith failure out there, but that should surprise no one.) What about the bad-faith failures? Let’s just hope they inspire people to fix a few problems, or just one big problem: our leadership.

On that, late 2013 seems to have been the critical point at which we, as a technical community, lost faith in the leaders of our ecosystem: the venture capitalists and corporate executives who’ve claimed for decades to be an antidote to the centuries-old tension between capitalist and laborer, and who’ve proven no better (and, in so many ways, worse) than the old-style industrialists of yore. Silicon Valley exceptionalism is disappearing as an intellectually defensible position. The Silicon Valley secessionists and Sand Hill Road neofeudalists no longer look like visionaries to us; they look like sad, out-of-touch, privileged men abusing a temporary relevance, and losing it quickly through horrendous public behavior. The sad truth is that this will hurt the rest of us– those who are still coming up in technology– far more than it hurts them.

This loss of faith in our gods is, however, a good thing in the long run. Individually, none of us among the top 50,000 or so technologists in the U.S. has substantial power. If one of us objects to the state of things, there are 49,999 others who can replace us. As a group, though, we set the patterns. Who made Silicon Valley? We did, forty years ago, when it was a place where no one else wanted to live. We make and, when we lose faith, we unmake.

Progress is incremental and often silent. The people who do most of the work do least of the shouting. The celebrity culture that grows up around “tech” whenever there is a bubble has, in truth, little to do with whether our society can meet the technology challenges that the 2010s, ’20s, and onward will throw at us.

None of this nonsense will matter, ten years from now. Evan Spiegel, Sean Parker, Greg Gopman, and Adria Richards are names we will not have cause to remember by December 26, 2023. The current crop of VC-funded cool kids will be a bunch of sad, middle-aged wankers drinking to remember their short bursts of relevance. But the people who’ve spent the ten years between now and then continually building will, most likely, be better off then than now. Incremental progress. Hard work. True experimentation and innovation. That’s how technology is supposed to work. Very little of this will be covered by whatever succeeds TechCrunch.

Everything that happened in technology in 2013– and much of it was distasteful– is just part of a longer-running process. It was not a wasted year. It was a hard one for morale, but sometimes a morale problem is necessary to make things better. Perhaps we will wake up to the uselessness of the corporatists who have declared themselves our leaders, and do something about that problem.

So, I say, as always: long live technology.

VC-istan 4: Silicon Valley’s Tea Party

The SATs might have left people sour on analogies, but here’s one to memorize: VC-funded technology is to Corporate America as the Tea Party is to the Republican Party. I cannot think of a more perfect analogy for the relationship between this Sand Hill-funded startup bubble and the “good ol’ boy” corporate network it falsely claims to be displacing. It is, like the Tea Party, a more virulent resurgence of what it claims to be a reaction against.

What was the Tea Party?

Before analyzing VC-istan, let’s talk about the Tea Party. The contemporary American right wing has an intrinsic problem. First of all, it’s embarrassed by its internal contradictions, insofar as it fails to implement its claimed fiscal conservatism, instead getting us more indebted through wars fought to serve corporate interests. More to the point, it’s trying to get people to vote for things that are economically harmful to them– and it’s surprisingly good at that, but that requires keeping people misled about what they are actually supporting, which in turn mandates constant self-reinvention. For this reason, the Republican Party has a well-established pattern of generating a “revolution” every decade and a half or so.

First, there was the “Reagan Revolution” of 1980. Then there was the “join the fight” midterm election of 1994– the Republican landslide that brought us our first severe government shutdown. Around 2009, the modern Tea Party was born– and that brought us a second severe government shutdown. At first, the Tea Party appeared to be deeply libertarian, presented as a populist tax revolt without the overt corporate or religious affiliations of the Republican Party. It seems ridiculous in retrospect, but there were left-wing Tea Partiers at the beginning of the movement (there aren’t anymore). In time, the Tea Party was steered directly into the Republican tent, fueling the party’s electoral success in 2010. That’s miraculous, when one considers the gigantic image problem that the Bush Era created for the party. In 2008, some commentators believed the Republicans were finished for good, about to go the way of the Whigs; two years later, the party had been reinvigorated by a populist movement that, at its inception, seemed radically different from the fiscally irresponsible GOP.

By promising a reduction in taxes and social complexity, the Tea Party managed to remove Bush and Cheney– old-style authoritarian stooges, big-government war hawks, and objective failures even before the end of their term– from the conversation in record time. Of course, time proved the Tea Partiers to be “useful idiots” for a more typical Republican resurgence– a reinvention of image, not of substance– and the most astute observers were not surprised. When the reputations of established players become sufficiently negative, reinvention (and “disruption”) becomes the marketers’ project of the hour.

Venture capital

Corporate America, too, has a severe image problem. The most talented people don’t want to work in stereotypical corporate environments. They want to be in academia, hedge funds, R&D labs, and cutting-edge startups– not dealing with the stingy pay, political intrigue, slow advancement, low autonomy, and archaic permissions systems that are stereotypical of large institutions. Of course, companies that need top talent can get it, but they must either (a) pay extremely well, (b) offer levels of autonomy that can complicate internal politics, or (c) market themselves extremely well.

Wall Street can simply buy its way out of the corporate image problem. However, this typically means that employee pay will go up by 20 to 30 percent per year, in order to keep abreast of the rapid hedonic adaptation that money exhibits. Few companies can afford to compensate that generously, especially putting that exponential growth in the context of a 20- to 40-year career. Venture capital’s ecosystem is an alternative solution to that image problem; a corporate system that appears to be maverick, anti-authoritarian and “disruptive”, when what it actually is is dishonest and muddled. The people would have been middling project managers in the old system are given the title of CEO/”founder” in one of VC-istan’s disposable companies. Instead of a team getting cut (and its staff reassigned) as would occur in a larger corporate machine, the supposedly independent company (of course, it is not truly independent, in the context of the feudal reputation economy that the VCs have created) is killed and everyone gets fired. This might seem like a worse and more dishonest corporate system, but it gives the impression of providing more individual prominence to highly talented (if somewhat clueless) people.
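
As a back-of-the-envelope check on why that compensation curve is unsustainable for most firms (a sketch; the starting salary and raise schedule are invented for illustration):

    # Back-of-the-envelope: what 20-30% annual raises imply over a career.
    # The starting salary is a made-up round number.
    start = 150_000

    for rate in (0.20, 0.30):
        for years in (10, 20, 30):
            salary = start * (1 + rate) ** years
            print(f"{rate:.0%} raises, year {years}: ${salary:,.0f}")

At 20 percent raises, that $150,000 salary passes $5 million within twenty years; at 30 percent, it passes $390 million within thirty. Only a business with Wall Street’s economics can even pretend to ride that curve.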

Not much of substance has improved in the transition from the older corporate system to the VC-funded world, and I think some things have actually been lost, particularly with regard to fairness. Bureaucracies can be dismal and ineffective, but those that work well are efficient and, most importantly, fair. In fact, attempts to achieve fairness (the definition of which seems, inexorably, to accrue complexity) seem to be a driving force behind bureaucratic growth. Obviously, bureaucracy is sometimes used toward unfair ends, or even designed maliciously (for example, over-restrictive policies are often built for the express purpose of empowering those who can grant exceptions), but those negatives are not intrinsic to bureaucracy, nor characteristic of it in general. Bureaucracy is mostly boring, mostly effective, and only maligned because it’s infuriating when it fails (which is, often, the only time most people notice it; bureaucracy that works goes unnamed). Without bureaucracy, however, social processes often devolve into a state dominated by favor trading, influence peddling, and social connections– with those accrued early on (such as in school), and therefore most tied to socioeconomic status rather than merit, being the most powerful.

VC-istan has reduced corporate bureaucracy (because companies are killed or sold before they can accrue that kind of complexity) but done away with the concern for fairness. It claims to be a meritocracy, and only accepts those who refuse to see (much less speak of) the machinations of power. Those who complain too loudly about VC collusion are ostracized. For just one petty example of the VC-funded world’s cavalier attitude toward injustice, people who voice the “wrong” opinions on Hacker News are silenced, “slowbanned”, or even “hellbanned”. Injustice, accepted for the sake of efficiency, is tolerated as accidental noise that’s expected to converge toward zero over time, the way the error from independent coin flips smooths out as more are thrown. The problem with social processes is that the errors (injustices that one hopes will be transient) don’t cancel each other out; they have a long-term autocorrelation.
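
The statistical intuition can be made concrete with a small simulation– a minimal sketch; the AR(1) model and its parameters are mine, chosen only to illustrate the contrast between independent and persistent errors:

    # Minimal sketch: independent errors average out; autocorrelated errors don't.
    # The model (AR(1)) and its parameters are illustrative, not empirical.
    import random

    random.seed(42)
    N = 10_000

    # Independent "coin flip" errors: the running mean converges toward zero.
    iid = [random.gauss(0, 1) for _ in range(N)]

    # Persistent errors: each period mostly inherits the previous period's
    # injustice, plus a new shock. Early shocks echo for a long time.
    phi = 0.99
    ar = [0.0]
    for _ in range(N - 1):
        ar.append(phi * ar[-1] + random.gauss(0, 1))

    print(f"mean of independent errors:     {sum(iid) / N:+.3f}")
    print(f"mean of autocorrelated errors:  {sum(ar) / N:+.3f}")

Run it with a few different seeds: the independent mean hugs zero, while the autocorrelated mean typically sits an order of magnitude farther away, because that “noise” never really cancels.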

In truth, what does it mean when a system asserts its own meritocracy? It means that no one is even going to try to strive for additional fairness, under the belief that the balance between fairness and other constraints has already been achieved. Of course, anyone who’s paying attention knows that not to be true.

Am I proposing that more bureaucracy is the solution to VC-istan’s moral failings? No. I’m only arguing that VC-istan’s selling point of “leanness” often comes at a cost: a sloppier and more unfair ecosystem. The old corporate ladder, with less of the ageism and less emphasis on native social class and educational “pedigree”, was actually much fairer than VC-istan’s sloppily-built, poorly-thought-out replacement.

More virulent

The Tea Party turned out to be a more brazen and generally worse Republican Party than the one it supplanted. I’m not a fan of Bushite corporate stooges, but they would not have seriously considered the threat of fucking national default to be a valid negotiation tactic.  

Likewise, the VC-funded ecosystem is generally worse than the older and more stable corporate system that it is attempting to replace. To list some of the reasons why it is worse:

  • less intra-corporate mobility, since most VC-funded startups are built around a single project. As VC-funded companies become large enough that internal mobility would be viable, many develop mean-spirited stack-ranking cultures that keep internal mobility low or nonexistent.
  • the old corporate world’s large, announced layoffs, often with severance, have been replaced by dishonest “performance”-based firings designed to protect the company’s reputation (it may claim it is still hiring, and thus prosperous) at the expense of the departing employee’s.
  • increased social distance– investors vs. founders vs. employees is a much larger (and more permanent) social gulf than executives vs. managers vs. reports, the latter having more to do with seniority while the former is largely an artifact of native social class.
  • extreme ageism, classism, and sexism.
  • low rates of internal promotion, due to the company’s increasing need to validate its status with flashier hires (who get leadership roles as opportunities emerge). External promotion is the way to go in VC-istan, but that creates a “job hopping” impression that makes it hard to move back into the mainstream corporate world.
  • in general, meager benefits in meaningful dimensions (health coverage, 401k matching) matched with cosmetic or meaningless perks.
  • defined (if spartan) vacation allowances replaced by undefined (“unlimited”) vacation policies where social policing keeps everyone under two weeks per year.
  • on average, substantially longer work hours.
  • less working autonomy, on average, due to the tight deadlines faced by startups whose investors demand excessive risk-taking and rapid growth.
  • significantly more economic inequality, when the distribution of equity is considered. A hired (non-founder) executive might earn only 20% more in salary than an engineer, but typically receives 20 to 100 times as much equity in the company (see the rough worked example after this list).
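
To put rough numbers on that last point (all figures below are hypothetical, chosen only to show the shape of the skew):

    # Hypothetical numbers illustrating the equity skew described above.
    engineer_salary = 120_000
    exec_salary = int(engineer_salary * 1.2)  # "only 20% more" in salary

    engineer_equity = 0.001  # 0.1% of the company: an illustrative engineer grant
    exec_equity = 0.05       # 5%: 50x the engineer's stake

    exit_value = 100_000_000  # a hypothetical $100M acquisition

    print(f"salary ratio:       {exec_salary / engineer_salary:.1f}x")
    print(f"engineer's payout:  ${engineer_equity * exit_value:,.0f}")
    print(f"executive's payout: ${exec_equity * exit_value:,.0f}")

Salaries differ by a factor of 1.2; the exit payouts differ by a factor of 50 ($5 million against $100,000). Cash compensation hides where the inequality actually lives.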

The future

What has actually emerged out of Silicon Valley is a failed social experiment that has generated much noise, little progress, and immense distraction. The good news is that it lacks comprehension of how to conduct itself outside of its own sandbox. For one small example, economics textbooks might argue that Uber’s “surge pricing” is supremely efficient and therefore right, even though the subjective experience for all who encounter it is extremely negative. I don’t intend to opine on whether Uber’s pricing model is morally right (it’s a useless discussion). I do find the observation valuable: the new economic elite of the Valley is shockingly gauche when it comes to self-presentation. It thinks it’s the height of science and culture, and everyone else finds it to be the worst case of uncultured new-money syndrome in over a century. It won’t last. If the gauche overlords of Silicon Valley– no longer engineers or technologists, but lifelong executives (with all the pejoratives appropriate to that word) who came up via private equity and good-ol’-boy networks– make a serious play for cultural prominence, they will be shoved back into their spider holes with overwhelming force.

The old corporate regime was deeply flawed, and it’s not going to come back either, but a certain humanity was required of it if such organizations were to survive for the long term. The problem with VC-istan is that its companies don’t care about persistence; they’ll either be gigantic and invincible (and able to pay off old sins via meager settlements) or dead in five years. If VC-istan’s pretense of building the future is taken at face value, then the future’s literally being built by people who don’t give a damn about it.

Uber can charge what it wants– that’s a private matter– but I’m disgusted when I see Valley darlings trying to shove their mindless, childish arrogance into politics. That’s actually scary. The housing prices and long commutes for which Silicon Valley is known are solid, incontrovertible proof that their little society is an utter failure. Whatever they say about themselves is undermined entirely by the messes– of their own making– in their own backyards. If they can’t even make San Francisco affordable, how are they equipped to handle the problems of the world? They aren’t. Just as the Tea Party proved itself incapable when faced with the actual complexity of politics in a nation of 315 million, the Valley darlings aren’t fit to rule more than a postage stamp.

What really built Silicon Valley, and Baltimore’s surprising advantage.

I’m moving to Baltimore in less than a week. There’ll be a lot to say about that in the future, and on the whole, I’m pretty excited about the move. Right now, Baltimore’s not known as a technology hub. Relative to the Valley, New York, and Austin, it’s not even on the map (yet). I think that is likely to change. I’m not going to call Baltimore a hands-down sure winner– it isn’t– but it’s a strong bet for the medium term (~10 years). I’ll get to why, in a little bit.

For my part, I don’t think any city is actually going to become “the next Silicon Valley”, because I don’t think we’ll see that lopsided a distribution of high-talent activity in the near future. Up-and-coming cities like Austin, Boulder, and Baltimore will grow, but no city will enjoy the utter dominance that Silicon Valley once had (and still has, to a lesser degree) in technology. The only people who really win when it’s like that are the landlords. The future, I think, is much more likely to be location-agnostic and better spread about. In 2009, “number-8 startup hub” was functionally equivalent to “in the sticks”, because there were only three to five such places (in the U.S.) worth mentioning. I’d bet on that changing; I think the distribution of high-talent activity will be much more even in the next 15 years. I’m not going to prognosticate on the winner because I think there will be several. Austin is pretty much a sure bet; Philly and Baltimore have good odds, and the old contenders (e.g. Seattle, Boston) will still be strong. San Francisco will be formidable and probably a leader for a decade, although the rest of the Valley doesn’t have much going for it among the younger generation.

Before I talk about why I’m bullish on Baltimore of all places, let me first get into what built Silicon Valley. It’s fairly well established that the defense industry played a role and, as the folk legend goes, all of that government money fueled a few decades of innovation. That’s sort of true. However, money isn’t some magic powder that one can sprinkle on people’s heads to make innovation happen. It’s the autonomy that comes with the money that leads to innovation. When smart people call the shots over their own work, they get things done that move the world forward at fifty times the rate, and with a thousand times the quality, of what they’d produce under traditional management.

Not all of the innovation in Silicon Valley came from defense contractors. In fact, I’d argue that much of it didn’t come directly from that industry at all. Those cushy, well-paid basic research jobs in the public sector certainly contributed some of the Valley’s innovations, but far from all of them. An equally large contribution came from private players in competition (on worker autonomy) with those cushy public jobs. When the average technically literate college graduate could grab what would now be (adjusted for inflation and localized housing costs) a six-figure job with total job security and full autonomy over his work, private companies had to step up their game and create high-autonomy jobs on their end. A company that was private, and therefore personally riskier, had to make the work interesting and creative. (Capitalism, though it can excel in this regard, isn’t innately innovative. It needs the right conditions, and a refusal of top talent to work on bland rent-seeking activity– because it has better options– is one such condition.) Hewlett-Packard, in its heyday, was a prime example of a good private-sector citizen. When business went bad, instead of laying people off, pay was cut across the board but compensated with proportional time off. What’s now a cargo cult of foosball tables and “unlimited” vacation (meaning no year-end reimbursement for unused days, while your boss decides your vacation allotment) was originally born in a time when companies had to compete, on interesting work and autonomy and working culture, with an extremely generous ecosystem composed of the public sector and government contractors.

This arms race doesn’t really exist in Silicon Valley anymore, and that might be why, culturally, that ecosystem is losing wind. Startups compete with each other a bit, but that competition is only meaningful for the small percentage of engineers (and the larger percentage of product executives) who come out on top of the celebrity culture that exists there. The underclass– those who don’t know any top-tier venture capitalists and haven’t completed an exit– take what they’re given, and they don’t have any more autonomy than they would at a company like Microsoft or IBM. Sure, it’s a little bit pricey to hire a talented engineer– at 15 years of experience, she’s probably making $175,000 per year– but that has more to do with the local cost of living than anything else, and it’s not a large sum of money for a corporation to spend on someone whose productive capability is worth 10 to 25 times that. It’s expensive to hire her, relatively speaking, but not competitive. If she doesn’t want the job, someone else will take it.

So, in sum, what is it that built Silicon Valley and is no longer there? Cheap real estate was a big part of it, in large part for the freedom and sense of ownership that exist in a place where normal people can buy land (as they once could, in Northern California). But the much bigger factor was an economy that forced technology companies to compete, on worker autonomy, career coherency, and interesting work, with a publicly funded sphere that– even if bureaucratic issues existed and much of the work couldn’t be shared with the public– gave technologists lifelong job security, opportunities for basic research, and extremely high levels of autonomy by private-sector standards. The result of this competition was that startups with shitty ideas never got off the ground (would that it were true now!) because no technically capable person would work at one. The West Coast didn’t have all the technology companies, of course; but this competition on worker autonomy and interesting projects forced it to produce the best ones.

The VC-funded startup world is trying to replicate this decades-long period of success. It will fail. Why? Well, it brings one ingredient, which is money. It has lots of that. It doesn’t deliver on engineer autonomy, though. Venture capitalists can’t judge technical talent at the high end– people who can reliably judge talent at that level are even rarer than the small set of people who have it– so they give money based on social network effects and “track record”, which ultimately means that the people who get funded are those who are good at raising money. Some of those people are highly intelligent, but they’re rarely creators or innovators, and the latter tend to lose when they have to compete on social polish with salesmen. When the money is granted, it happens in a top-down way, with founders ending up with astronomically more power than the people working with them. The result is the generation of scads of non-technical companies that happen to be involved in technology– something bland and unsatisfying, just like the VC-funded ecosystem of today.

One way to think about this is to look at foreign aid to impoverished countries. If it’s given in raw cash form, it often fails to achieve humanitarian progress, because a corrupt government captures all of it. The few are enriched, while the many (who need the aid) don’t get it. It might be used to buy guns instead of butter. Distribution is a real problem. Now, look at the VC-funded world. It shows the same unhealthy pattern: lots of money, poorly distributed, so the wrong people get it and almost none of it is used for the intended aim. There’s a lot of money being thrown at Northern California, but it’s being distributed in a way that is actually harmful to innovation: it enriches people whose skill set is orthogonal (if not negatively correlated) to the ability to innovate, it drives up costs of living, and it creates a fleet of businesses that look innovative but are actually– because of the existential pressures on them– even stingier about employee autonomy and interesting work and experimentation than the supposedly stodgy corporations of the old system.

Of course, I am not saying that money is unimportant, or that venture capital funds have to be harmful. Both claims would be far from the truth. It’s just that the money is beneficial only when it provides the autonomy that begets innovation. This Hollywood-for-ugly-people business model doesn’t have that effect. Those dollars, when they end up in housing prices (entropic waste heat), actually do a lot of damage.

Now it’s time to talk about why I think Baltimore is a strong bet for 10-15 years into the future. Don’t get me wrong: it’s probably not even a top-20 technology hub right now. As a whole, the city has some serious issues, but the nice parts are safe and affordable, and the place has the “smart city” feel of a Boston, Seattle, or Minneapolis, which means there’s seriously strong potential there. That’s far from enough to justify calling a winner right now, because if you’ve traveled enough (and I’ve done multiple cross-country road trips) you realize that smart people are everywhere, and there are probably two dozen cities with “serious potential”. There’s more to it. Baltimore has something that venture capitalists, to be frank, don’t much like, but that’s actually good for everyone. Investors who know the city agree that it has a lot of engineering talent, but that a large proportion of it is tied up in “cushy” government and contractor jobs, and the fear is that the typical VC-funded startup will struggle to compete. Those companies would pay a little more than the government think tanks, if they paid Bay Area/New York salaries, but not enough (for most people) to justify the sacrifices in job security, interesting work, career coherency, and overall autonomy.

When you’re building a typical VC startup– a get-giant-tomorrow-or-get-lost red-ocean gambit that needs to execute fast– you’d rather compete on salary with Google (an operational cost) than compete on employee autonomy with well-heeled government agencies (which has greater effects on how the business is run). If you really have to grow 30% per month to survive, then you need “take-charge, strong leadership” (i.e. a top-down autocratic culture). You won’t have long-term creative health, but you’re not even thinking about the long run at that point, because you’re fighting for short-term survival. When you’re at that extreme (very-high risk, very-high growth) you need the vicious but undeniable efficiency of a follow-or-leave dictatorship. Those are the companies that VCs know how to evaluate, fund, and run: get-big-or-die gambits that grow into corporate megaliths, scare existing corporate megaliths enough to get bought at a panic price, or (as most do) fall to pieces, all inside of five years. Competing with Google or Microsoft on salary brings a predictable, manageable cost; competing with a government-funded think-tank on employee autonomy rules out the get-big-or-die business strategies that involve chronic crunch time and mandate autocracy.

On the other hand, competing with “cushy” government jobs in this way is not an issue for the mid-risk / mid-growth space sometimes derided as “lifestyle businesses”– in fact, competing on autonomy and interesting work was the style of thinking (before the emergence of the VC-powered mess) that built the first Silicon Valley, back in the day when it was something to admire, not to crack jokes about.

All of this is far from saying that Baltimore is guaranteed to become a technology startup hub in the next 10 years. There are far too many variables in play to make that call as of now. It has a certain poorly-understood but historically potent local advantage, for sure, and I think it’s a decent bet for that and other reasons.

Technology: we can change our leadership, or we can quit.

Technology has lost its “golden child” image, with piñatas of Google buses being beaten to shit in San Francisco, invasions of privacy making national headlines, and an overall sense in the country that our leadership’s exceptionalist reputation as the “good” nouveau riche is not deserved and must end. To put it bluntly, the rest of the country doesn’t like us– any of us– anymore. We’ve lost that good faith, in technology, that allowed us to be rich (well, a few of us) and not hated, even in the midst of a transformational, generation-eating recession. 2013 will be remembered as the year when popular opinion of “Silicon Valley” imploded. As much as I despise VC-istan, that is not a good thing, because popular opinion will not separate VC-istan’s upper crust from Silicon Valley or technology as a whole.

After decades of the kinds of mismanagement that only prosperity can support, we’ve developed an industry that, despite having the best individual contributors in the world, has the worst leadership out there.

Additionally, even within the Valley, morale is flagging. The truth about the VC-funded ecosystem is that it’s no longer an alternative to the traditional corporate ladder, but merely a shitty corporate ladder (the rungs being worker, then founder, then investor) in which disposable companies allow executives to do things to people’s careers that they’d never get away with in larger companies. There’s a satirical song called “The Dream of the ’90s” about a resurgence of unambitious immaturity in Portland’s hipster culture. VC-istan is a similarly nostalgic, 1990s-derived culture, except centered around ambitious immaturity. Perhaps it was more real in the 1990s, but the venture-funded world now is a Disney-fied caricature of entrepreneurship, dominated by rich kids who take no real risks because their backers have already decided on a range of outcomes, and will provide “entrepreneur-in-residence” soft landings for their well-connected protégés no matter what happens. It’s not about building things anymore; it’s about using Daddy’s contacts to play startup for a few years and relish telling older and much more talented people what to do.

People are waking up to this. VC-istan is under attack. I just hope it doesn’t take down the rest of technology with it.

The reputation of this ecosystem is falling to pieces. As it is, individual technology companies go to great lengths to defend their reputations, and relinquish them only when there’s enormous benefit (in the billions) to be gained through the compromise. As technology firms see it, and they’re not wrong, their ability to execute and to attract talent is strongly determined by the company’s reputation. Why is reputation so much more important to a software firm than to, say, a steel or oil company? A few things come to mind. Obviously, internal reputation (morale) is extremely important in software. The difference between an unmotivated and a motivated steel worker might be a factor of 2 or 3; in software, it’s at least 10. Second, and there are a variety of reasons for this, most talented people don’t care all that much about money, at least not in the classic economic sense. They don’t want to be poor, but they’d rather have smart co-workers, interesting work, and a supportive managerial environment, and be comfortable, than lose those things and make 25% more. (We also believe that we’ll make more money, in the long term, if we work in quality environments where we can succeed.) Most reflective people know that “rich” is relative, and that economic rewards lend themselves quickly to hedonic adaptation. As Don Draper said, this form of happiness is “a moment before you need more happiness”. So you can’t court the best software engineers with a 10- or even 50-percent advantage in salary; you have to convince them that your company will give them interesting work and benefit their careers. That’s hard to do when a company has a damaged reputation. From experience, we know not to trust even most of the companies with good reputations, much less the ones whose images have already been tarnished.

Sadly for them, at probably 80 percent of the Fortune 500, the top 5% of software engineers would simply refuse to work unless given a salary that would put them above even most executives, or unless in desperate need of short-term employment. These companies don’t end up with minimal engineering power; they end up with zero, because they can’t attract talent from outside, they overlook the high-potential people within, and the talented people who do come in never form a critical mass that might give them any political immunity to the overwhelming mediocrity (a threat even in prestigious companies). On the other hand, Google and Facebook have more top-5% engineers than they know what to do with. Talent is clustering and clumping like never before, both in employer selection and in geography. So not only are the stakes of reputation high, but most firms end up as losers, bereft of top talent and doomed to watch their IT organizations slide into inefficiency, if not failure. Sturgeon’s Law is painfully felt everywhere in technology. If you’re a programmer looking for work, you find out quickly that most engagements are low in quality. On the other hand, if you’re a hiring manager, you find most engineering applicants to be incompetent at worst and badly taught (i.e. betrayed, and sometimes irreparably damaged, by years of shitty work experience) at best.

Despite its problems, there’s money in technology. There’s so much fucking money in it that it has tolerated abysmal leadership for a long time. The Valley is so rich that the points don’t matter. Fired unjustly? Another job awaits you. Moron promoted (or, better yet, externally hired) above you? Job hop. Unfortunately, that won’t last forever, and not everyone is positioned to benefit from this fluidity. Besides, some of the volatility injected into technology by bad management is just unnecessary and counterproductive. We’ve set patterns in place that won’t survive a future in which traditional corporate software’s place diminishes. (Software and technology themselves will live on; that’s another discussion.) There will still be money, but the patterns that earn it will be different, and old processes won’t work.

Now, we’re seeing the backlash. No one gives a shit about Google Glass when people who’ve lived in the Mission for fifty years are being pushed out by spoiled white kids who would never deign even to learn Spanish because “there’ll be an app for that in 5 years”. It’s no longer cool to have “invites” to some goofy social experiment when everyone knows that their data’s being sold to shady third parties and that full profile access is often a workplace perk. Finally, technology startups have gone full cycle: from a risky, conventionally denigrated career move (1980s) to a really great opportunity (1990s) to “cool” (2005-12) to somewhat less cool (post-2013). This is happening because we no longer have the carte-blanche abundance of opportunity that allows us to be prosperous even with horrendous leadership. There’s still a ton of opportunity out there, but the easy wins are gone, and we can’t let “the business side” run on autopilot, because the age in which bad leadership is acceptable is ending. We can’t put our heads down and expect the men in suits to do what’s right; they only did that when everyone could get rich because the victories were so facile, and that’s no longer true (if it ever really was). We need to take more responsibility for our own direction and destiny. We can handle that stuff; trust me.

We’ll need to move our current executives and “hip” investors and tech press aside and let new players come in; but we can keep technology alive. And we owe it to future generations. Technology is just too important for us to let the people currently running this game continue to screw it up.

Tech companies: open allocation is your only real option.

I wrote, about a month ago, about Valve’s policy of allowing employees to transfer freely within the company, symbolized by placing wheels under the desks (thereby creating a physical marker of a superior corporate culture, one that makes traditional tech perks look like toys) and expecting employees to self-organize. I’ve taken to calling this notion open allocation– employees have free rein to work on the projects they choose, without asking for permission or formal allocation– and I’m convinced that, despite seeming radical, open allocation is the only thing that actually works in software. There’s one exception. Some degree of closed allocation is probably necessary in the financial industry, because of information barriers (mandated by regulators), and this might be why getting the best people to stay in finance is so expensive. It costs that much to keep good people in a company where open allocation isn’t the norm, and where the workflow is so explicitly directed and constrained by the “P&L” and by justifiable risk aversion. If you can afford to give engineers 20 to 40 percent raises every year and thereby compete with high-frequency-trading (HFT) hedge funds, you might be able to retain talent under closed allocation. If not, read on.

Closed allocation doesn’t work. What do I mean by “doesn’t work”? I mean that, as things currently go in the software industry, most projects fail. Either they don’t deliver any business value, or they deliver too little, or they deliver some value but exert long-term costs as legacy vampires. Most people also dislike their assigned projects and put minimal or even negative productivity into them. Good software is exceedingly rare, and not because software engineers are incompetent, but because when they’re micromanaged, they stop caring. Closed allocation and micromanagement provide an excuse for failure: I was on a shitty project with no upside. I was set up to fail. Open allocation blows that away: a person who has a low impact because he works on bad projects is making bad choices and has only himself to blame.

Closed allocation is the norm in software. It doesn’t necessarily entail micromanagement, but it creates the possibility for it, because of the extreme advantage it gives managers over engineers. An engineer’s power under closed allocation is minimal: his one bit of leverage is to change jobs, and that almost always entails changing companies. In a closed-allocation shop, project importance is determined prima facie by executives, long before the first line of code is written, and formalized in magic numbers called “headcount” (even the word is medieval, so I wonder if people piss at the table, at these meetings, in order to show rank) that represent the hiring authority (read: political strength) of various internal factions. The intent of headcount numbers is to prevent reckless hiring by the company as a whole, and that’s an important purpose, but their actual effect is to make internal mobility difficult, because most teams would rather save their headcount for possible “dream hires” who might apply from outside in the future than risk a spot on an engineer with an average performance-review history (which is what most engineers will have). Headcount bullshit makes it nearly impossible to transfer unless (a) someone likes you on a personal basis, or (b) you have a 90th-percentile performance-review history (in which case you don’t need a transfer). Macroscopic hiring policies (limits, and sometimes freezes) are necessary to prevent the company from over-hiring, but internal headcount limits are one of the worst ideas ever. If people want to move, and the leads of those projects deem them qualified, there’s no reason not to allow it. It’s good for the engineers, and good for the projects, which get more motivated people working on them.

When open allocation is in play, projects compete for engineers, and the result is better projects. When closed allocation is in force, engineers compete for projects, and the result is worse engineers. 

When you manage people like children, that’s what they become. Traditional, 20th-century management (so-called “Theory X”) is based on the principle that people are lazy and need to be intimidated into working hard, and that they’re unethical and need to be terrified of the consequences of stealing from the company, with a definition of “stealing” that includes “poaching” clients and talent, education on company time, and putting their career goals over the company’s objectives. In this mentality, the only way to get something decent out of a worker is to scare him by threatening to turn off his income– suddenly and without appeal. Micromanagement and Theory X are what I call the Aztec Syndrome: the belief in many companies that if there isn’t a continual indulgence in sacrifice and suffering, the sun will stop rising.

Psychologists have spent decades trying to answer the question, “Why does work suck?” The answer might be surprising. People aren’t lazy, and they like to work. Most people do not dislike the activity of working, but dislike the subordinate context (and closed allocation is all about subordination). For example, people’s minute-by-minute self-reported happiness tends to drop precipitously when they arrive at the office and rise when they leave it, but it improves once they start actually working. They’re happier not to be at an office at all, but if they’re in one, they’re much happier working than idle. (That’s why workplace “goofing off” is such a terrible idea; it does nothing for office stress and it lengthens the day.) People like work. It’s part of who we are. What they don’t like, and what enervates them, is the subordinate context and the culturally ingrained intimidation. This suggests the so-called “Theory Y” school of management: people are intrinsically motivated to work hard and do good things, and management’s role is to remove obstacles.

Closed allocation is all about intimidation: if you don’t have this project, you don’t have a job. Tight headcount policies and lockout periods make internal mobility extraordinarily difficult– much harder than getting hired at another company. The problem is that intimidation doesn’t produce creativity, and it erodes people’s sense of ethics (when people are under duress, they feel less responsible for what they are doing). It also provides the wrong motivation: the goal becomes to avoid getting fired, rather than to produce excellent work.

Also, if the only way a company can motivate people to do a project is to threaten to turn off their income, that company should really question whether the project’s worth doing at all.

Open allocation is not the same thing as “20% time”, and it isn’t a “free-for-all”. Open allocation does not mean “everyone gets to do what they want”. A better way to represent it is: “Lead, follow, or get out of the way” (and “get out of the way” means “leave the company”). To lead, you have to demonstrate that your product is of value to the business, and convince enough of your colleagues to join your project that it has enough effort behind it to succeed. If your project isn’t interesting and doesn’t have business value, you won’t be able to convince colleagues to bet their careers on it and the project won’t happen. This requires strong interpersonal skills and creativity. Your colleagues decide, voting with their feet, if you’re a leader, not “management”. If you aren’t able to lead, then you follow, until you have the skill and credibility to lead your own project. There should be no shame in following; that’s what most people will have to do, especially when starting out.

“20% time” (or hack days) should exist as well, but that’s not what I’m talking about. Under open allocation, people are still expected to show that they’ve served the needs of the business during their “80% time”. Productivity standards are still set by the projects, but employees choose which projects (and sets of standards) they want to pursue. Employees unable to meet the standards of one project must find another one. 20% time is more open, because it entails permission to fail. If you want to do a small project with potentially high impact, prove your ability to lead by starting a skunk-works project, or volunteer, take courses, or attend conferences on company time, that’s what it’s for. During their “80% time”, people are expected to lead or follow on a project that has some degree of official sanction. They can’t just “do whatever they want”.

Four types of projects. The obvious question that open allocation raises is, “Who does the scut work?” The answer is simple: people do it if they will get promoted, formally or informally, for doing it, or if their project directly relies on it. In other words, the important but unpleasant work gets done, by people who volunteer to do it. I want to emphasize “gets done”. Under closed allocation, a lot of the unpleasant stuff never really gets done well, especially if unsexy projects don’t lead to promotions, because people are investing most of their energy into figuring out how to get to better projects. The roaches are swept under the carpet, and people plan their blame strategies months in advance.

If we classify projects along two axes– important vs. unimportant, and interesting vs. unpleasant– we can assess what happens under open allocation. Important and interesting projects are never hard to staff. Unimportant but interesting projects are for 20% time; they might succeed and become important later, but they aren’t seen as critical until they’re proven to have real business value, so people are allowed to work on them but are strongly encouraged to also find and concentrate on work that’s important to the business. Important but unpleasant projects are rewarded with bonuses, promotions, and the increased credibility accorded to those who do undesirable but critical work. These bonuses should be substantial (six and occasionally even seven figures for critical legacy rescues); if the project is actually important, it’s worth paying for. If it’s not, then don’t spend the money. Unimportant and unpleasant projects, under open allocation, don’t get done. That’s how it should be. This is the class of undesirable, “death march” projects that closed allocation nurtures (they never go away, because to suggest they aren’t worth doing is an affront to the manager who sponsors them, and a career-ending move) but that open allocation eliminates. Under open allocation, people who transfer away from these death marches aren’t “deserters”. It’s management’s fault if, out of a whole company, no one wants to work on a project: either it’s not important, or they didn’t provide enough enticement.
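
To make the taxonomy concrete, here it is restated as a toy function; the four categories and their fates come from the paragraph above, while the encoding itself is just an illustrative sketch:

```python
def open_allocation_outcome(important: bool, interesting: bool) -> str:
    """Toy summary of what happens to each project class under open allocation."""
    if important and interesting:
        return "staffs itself: volunteers are never hard to find"
    if interesting:  # interesting but not (yet) important
        return "20% time: tolerated until it proves real business value"
    if important:    # important but unpleasant
        return "committed project: bonuses and promotions entice volunteers"
    return "doesn't get done, and that's how it should be"
```

The fourth branch is the point: under open allocation, the death marches simply starve.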

Closed allocation is irreducibly political. Compare two meanings of the three-word phrase, “I’m on it”. In an open-allocation shop, “I’m on it” is a promise to complete a task, or at least to try to do it. It means, “I’ve got this.” In a closed-allocation shop, “I’m on it” means “political forces outside of my control require me to work only on this project.”

People complain about the politics at their closed-allocation jobs, but they shouldn’t be surprised: under closed allocation, it’s inevitable that politics will eclipse the matter of actually getting work done. It happens every time, like clockwork. The metagame becomes a million times more important than actually sharpening pencils or writing code. If you have closed allocation, you’ll have a political rat’s nest. There’s no way to avoid it. In closed allocation, the stakes of project allocation are so high that people will calculate every move based on future mobility. Hence, politics. What tends to happen is that a four-class system emerges, following the four categories of work described above. The most established engineers, who have the autonomy and leverage to demand the best projects, end up in the “interesting and important” category. They get good projects the old-fashioned way: proving that they’re valuable to the company, then threatening to leave if they aren’t reassigned. Engineers who are looking for promotions into managerial roles tend to take on the unpleasant but important work, and attempt to coerce new and captive employees into doing the legwork. The upper-middle class of engineers can take the interesting but unimportant work, though it tends to slow their careers if they intend to stay at the same company (they learn a lot, but they don’t build internal credibility). The rest– the majority, who have no significant authority over what they work on– get a mix, but a lot of them get stuck with the uninteresting, unimportant work (and closed-allocation shops generate tons of that stuff) that exists for reasons rooted in managerial politics.

What are the problems with open allocation? The main issue is that it seems harder to manage, because it requires managers to actively motivate people to do the important but unpleasant work. In closed allocation, people are told to do work “because I said so”. Either they do it, or they quit, or they get fired. It’s binary, which seems simple. There’s no appeal process when people fail projects or projects fail people (and no one ever knows which happened), extra-hierarchical collaboration is “trimmed”, and efforts can be tracked by people who think a single spreadsheet can capture everything important about what is happening in the company. Closed-allocation shops have hierarchy, clear chains of command, and single points of failure (because a person can be fired from a whole company for disagreeing with one manager) out the proverbial wazoo. They’re Soviet-style command economies that somehow ended up being implemented within supposedly “capitalist” companies, but they appear simple to manage, and that’s why they’re popular. The problem is that closed-allocation policies lead to enormous project failure rates, inefficient allocation of time, talent bleeds, and unnecessary terminations. In the long term, all of this unplanned and surprising garbage work makes the manager’s job harder, more complex, and worse. When assessing the problems associated with open allocation (such as increased managerial complexity), it’s important to remember that the alternative is much worse.

How do you do it? The challenging part of open allocation is enticing people to do unpleasant projects. There needs to be a reward. Make the bounty too high, and people come in with the wrong motivations (capturing the outsized reward, rather than getting a fair reward while helping the company), and the perverse incentives can even lead to “rat farming” (creating messes in the hopes of being asked to repair them at a premium). Make it too low, and no one will do it, because no one wise likes a company well enough to risk her own career on a loser project (and part of what makes a bad project bad is that, absent recognition, it’s career-negative to do undesirable work). Make the reward too monetary and it looks bad on the balance sheet, and gossip is a risk: people will talk if they find out a 27-year-old was paid $800,000 in stock options (note: there had better be vesting applied), even if it’s justified in light of the legacy dragon being slain. Make it too career-focused and you have people getting promotions they might not deserve, because doing unpleasant work doesn’t necessarily give a person technical authority in all areas. It’s hard to get the carrot right. The appeal of closed allocation is that the stick is a much simpler tool: do this shit or I’ll fire you.

The project has to be “packaged”. It can’t be all unpleasant and menial work, and it needs to be structured to involve some of the leadership and architectural tasks necessary for the person completing it to actually deserve the promised promotion. It’s not “we’ll promote you because you did something grungy” but “if you can get together a team to do this, you’ll all get big bonuses, and you’ll get a promotion for leading it.” Management also needs to have technical insight on hand in order to do this: rather than doing grunt work as a recurring cost, kill it forever with automation.

An important notion in all this is that of a committed project. This is what the executives should create when they spot a quantum of work that the business needs, but that is difficult and that engineers are unlikely to find enjoyable. These shouldn’t be created lightly. Substantial cash and stock bonuses (vested over the expected duration of the project) and promotions are associated with completing these projects, and if more than 25% of the workload is committed projects, something’s being done wrong. A committed project offers high visibility (it’s damn important; we need this thing) and graduation into a leadership role. No one is “assigned” to a committed project. People “step up” and work on them because of the rewards. If you agree to work on a committed project, you’re expected to make a good-faith effort to see it through for an agreed-upon period of time (typically, a year). You do it no matter how bad it gets (unless you’re incapable) because that’s what leadership is. You should not “flake out” because you get bored. Your reputation is on the line.

Companies often delegate the important but undesirable work in an awkward way. The manager gets a certain credibility for taking on a grungy project, because he’s usually at a level where he has basic autonomy over his work and what kinds of projects he manages. If he can motivate a team to accomplish it, he gets a lot of credit for taking on the gnarly task. The workers, under closed allocation, get zilch. They were just doing their jobs. The consequence of this is that a lot of bodies end up buried by people who are showing just enough presence to remain in good standing, but putting the bulk of their effort into moving to something better. Usually, it’s new hires without leverage who get staffed on these bad projects.

I’d take a different approach to committed projects. Working on one requires (as the name implies) commitment: you shouldn’t flake out because something more attractive comes along. So only people who’ve proven themselves solid and reliable should be working on (much less leading) them. To work on one (beyond a 20%-time basis), you’d have to have been at the company for at least a year, be senior enough for the leadership to believe you can deliver, and be in strong standing at the company. Unless they were hired into a senior role, I’d never let a junior hire take on a committed project unless it was absolutely required– too much risk.

How do you fire people? When I was in school, I enjoyed designing and playing with role-playing systems. Modeling a fantasy world is a lot of fun. Once I developed an elaborate health mechanic that differentiated fatigue, injury, pain, blood loss, and “magic fatigue” (which affected magic users) and aggregated them (determining attribute reductions and eventual incapacitation) in what I considered to be a novel way. One small detail I didn’t include was death, so the first question I got was, “How do you die?” Of course, blood loss and injuries could do it. In a no-magic, medieval world, loss of the head is an incapacitating and irreversible injury, and so is exsanguination. In a high-magic world, however, “death” is reversible. Getting roasted, eaten, and digested by a dragon might be reversible. But there has to be a possibility (though it doesn’t require a dedicated game mechanic) for a character to actually die in the permanent, create-a-new-character sense of the word. Otherwise there’s no sense of risk in the game: it’s just rolling dice to see how fast you level up. My answer was to leave that decision to the GM. In horror campaigns, senseless death (and better yet, senseless insanity) is part of the environment. It’s a world in which everything is trying to kill you, and random shit can end your quest. But in high-fantasy campaigns with magic and cinematic storylines, I’m averse to characters being “killed by the dice”. If the character is at the end of his story arc, or does something inane like putting his head in a dragon’s mouth because he’s level 27 and “can’t be killed”, then he dies for real– not “0 hit points”, but the end of his earthly existence. But he shouldn’t die just because the player is hapless enough to roll four 1s in a row on a d10. Shit happens.
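
As an aside, that health mechanic is easy to sketch in code. Here’s a minimal toy version: the five tracks come from the description above, but the weights, threshold, and additive aggregation rule are invented for illustration (the actual system was more elaborate):

```python
from dataclasses import dataclass

@dataclass
class Condition:
    """One character's health, tracked as separate pools rather than hit points."""
    fatigue: float = 0.0
    injury: float = 0.0
    pain: float = 0.0
    blood_loss: float = 0.0
    magic_fatigue: float = 0.0  # only accrues for magic users

    # Illustrative weights: injury and blood loss count double toward
    # attribute loss; fatigue, pain, and magic fatigue count single.
    WEIGHTS = {"fatigue": 1.0, "injury": 2.0, "pain": 1.0,
               "blood_loss": 2.0, "magic_fatigue": 1.0}

    def attribute_penalty(self) -> float:
        """Aggregate the separate tracks into one attribute reduction."""
        return sum(w * getattr(self, track) for track, w in self.WEIGHTS.items())

    def incapacitated(self, threshold: float = 10.0) -> bool:
        # Incapacitation is mechanical; permanent death deliberately has no
        # dedicated rule here, because that call belongs to the GM.
        return self.attribute_penalty() >= threshold
```

Note what’s missing: there is no die() method. A character with injury 3 and blood_loss 2 carries a penalty of 10 and drops; whether he ever gets back up is a story decision, not a dice mechanic.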

The major problem with “rank and yank” (stack-ranking with enforced culling rates), and especially with closed allocation, is that a lot of potentially great employees are killed by the dice. It becomes part of the rhythm of the company for good people to get inappropriate projects or unfair reviews, blow up mailing lists or otherwise damage morale when it pisses them off, then get fired or quit in a huff. Yawn… another one did that this week. As I alluded to in my Valve essay, this is the Welch Effect: the ones who get fired under rank-and-yank policies are rarely low performers, but junior members of macroscopically underperforming teams (who rarely have anything to do with the underperformance). The only way to enforce closed allocation is to fire people who fail to conform to it, but this also means culling the unlucky, whose low impact (for which they may not be at fault) reads as malicious noncompliance.

Make no mistake: closed allocation is as much about firing people as guns are about killing people. If people aren’t getting fired, many will work on what they want to anyway (ignoring their main projects) and closed allocation has no teeth. In closed allocation shops, firings become a way for the company to clean up its messes. “We screwed this guy over by putting him on the wrong project; let’s get rid of him before he pisses all over morale.” Firings and pseudo-firings (“performance improvement plans” and transfer blocks and intentional dead-end allocations) become common enough that they’re hard to ignore. People see them, and that they sometimes happen to good people. And they scare people, especially because the default in non-financial tech companies is to fire quickly (“fail fast”) and without severance. It’s a really bad arrangement.

Do open-allocation shops have to fire people? The answer is an obvious “yes”, but it should be damn rare. The general rule of good firing is: mentor subtracters, fire dividers. Subtracters are good-faith employees who aren’t pulling their weight. They try, but they’re not focused or skilled enough to produce work that would justify keeping them on the payroll. Yet. Most employees start as subtracters, and the amount of time it takes to become an adder varies. Most companies try to set guidelines for how long an employee is allowed to take to become an adder (usually about 6 months). I’d advise against setting a firm timeframe, because what’s important is not how fast a person has learned (she might have had a rocky start) but how fast, and more importantly how well, she can learn.

Subtracters are, except in an acute cash crisis when they must be laid off for business reasons, harmless. They contribute microscopically to the burn rate, but they’re usually producing some useful work, and getting better. They’ll be adders and multipliers soon. Dividers are the people who make whole teams (or possibly the whole company) less productive. Unethical people are dividers, but so are people whose work is of such low quality that messes are created for others, and people whose outsized egos produce conflicts. Long-term (18+ months) subtracters become “passive” dividers because of their effect on morale, and have to be fired for the same reason. Dividers smash morale, and they’re severe culture threats. No matter how rich your company is and how badly you may want not to fire people, you have to get rid of dividers if they don’t reform immediately. Dividers ratchet up their toxicity until they are capable of taking down an entire company. Firing them can be difficult, because many dividers shine as individual contributors (“rock stars”) even as they taketh away through their effects on morale, but there’s no other option.

My philosophy of firing is that the decision should be made rarely, swiftly, for objective reasons, and with a severance package sufficient to cover the job search (unless the person did something illegal or formally unethical) that includes non-disclosure, non-litigation, and non-disparagement. This isn’t about “rewarding failure”. It’s about limiting risk. When you draft “performance improvement plans” to justify termination without severance, you’re externalizing the cost to the people who have to work with a divider who’s only going to get worse post-PIP. Companies escort fired employees out of the building, which is a harsh but necessary risk-limiting measure; it’s insane, then, to leave a PIP’d employee with full access for two months. Moreover, when you cold-fire someone, you’re inviting disparagement, gossip, and lawsuits. Just pay the guy to go away. It’s the cheapest and lowest-variance option. Three months of severance and you never see the guy again. Good. Six months and he speaks highly of you and your company: he had a rocky time, you took care of him, and he’s (probably) better off now. (If you’re tight on money, which most startups are, stay closer to the 3-month mark. You need to keep expenses low more than you need fired employees to be your evangelists. If you’re really tight, replace the severance with a “gardening leave” package that continues his pay only until he starts his next job.)

If you don’t fire dividers, you end up with something that looks a lot like closed allocation. Dividers can be managers (a manager can only be a multiplier or a divider, and in my experience, at least half are dividers) or subordinates, but dividers tend to intimidate. Subordinate passive dividers intimidate through non-compliance (they won’t get anything done), while active dividers use interpersonal aggression or sabotage to threaten or upset people (often for no personal gain). Managerial (or proto-managerial) dividers tend to threaten career adversity (including bad reviews, negative gossip, and termination) in order to force people to put the manager’s career goals above their own. They can’t motivate through leadership, so they do it through intimidation and (if available) authority, and they draw people into captivity to get the work they want done, without paying for it on a fair market (i.e. without providing an incentive to do the otherwise undesirable work). At this point, what you have is a closed-allocation company. What this means is that open allocation has to be protected, and you protect it by firing the threats.

If I were running a company, I think I’d “fire” 70 percent of titled managers within their first year– by which I mean removal from management, with lateral moves into IC roles allowed for those who wanted them. By “titled manager”, I mean someone with the authority and obligation to participate in dispute resolution, terminations and promotions, and the packaging of committed projects. Technical leadership opportunities would be available to anyone who could convince people to follow them, but to be a titled people manager you’d have to pass a high bar. (You’d have to be as good at it as I would be, and for 70 to 80 percent of the managers I’ve observed, I’d do a better job.) This high attrition rate would be offset by a few cultural factors and benefits. First, “failing” the management course wouldn’t be stigmatized, because it would be well understood that most people either end it voluntarily or aren’t asked to continue. People would be congratulated for trying out, and they’d still be just as eligible to lead projects– if they could convince others to follow. Second, those who aspired specifically to people management and weren’t selected would be entitled (unless fully terminated for doing something unethical or damaging) to a six-month leave period in which they’d be permitted to represent themselves as employed. That’s what B+ and A- managers would get: the right to remain as individual contributors (at the same rank and pay) and, if they didn’t want that, a severance offer along with a strong reference if they wished to pursue people management at other companies– but not at this one.

Are there benefits to closed allocation? I can answer this with strong confidence: no, not in typical technology companies. None exist. The work that people are “forced” to do is of such low quality that, on balance, I’d say it provides zero expectancy. In commodity labor, poorly motivated employees are about half as productive as average ones, and the best are about twice as productive. Intimidating the degenerate slackers into bringing themselves up from zero to 0.5x makes sense. In white-collar work, and especially in technology, those numbers seem to be closer to -5x and +20x, not 0.5x and 2x.
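
To see why that spread changes everything, here is the expectancy arithmetic on an illustrative team. The multipliers are the ones cited above; the 20/60/20 composition is an assumption made purely for the sake of the example:

```python
def expected_output(multipliers, shares):
    """Expected per-head output, as a multiple of an average worker's."""
    return sum(m * s for m, s in zip(multipliers, shares))

shares = (0.2, 0.6, 0.2)  # assumed mix: worst / average / best performers

print(expected_output((0.5, 1.0, 2.0), shares))    # commodity labor: ~1.1x
print(expected_output((-5.0, 1.0, 20.0), shares))  # technology: ~3.6x
```

In commodity labor, intimidating the bottom quintile from 0x up to 0.5x moves the total by about ten percent. In technology, the tails dominate: the win comes from enabling the +20x end and avoiding the -5x end, and intimidation does the opposite on both counts.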

You need closed (or at least controlled) allocation over engineers if there is material proprietary information where divulging even superficial details would be an unacceptable breach: millions of dollars lost, the company under existential threat, classified information leaked. You impose a “need-to-know” system over everything sensitive. However, this most often requires keeping untrusted people– or simply too many people– out of certain projects (which would be designated as committed projects under open allocation). It doesn’t require keeping people stuck on specific work. Full-on closed allocation is only necessary when there are regulatory requirements that demand it (in some financial cases) or extremely sensitive proprietary secrets involved in most of the work– and comments in public-domain algorithms don’t count (statistical arbitrage strategies do).

What does this mean? Fundamentally, this issue comes down to a simple rule: treat employees like adults, and that’s what they’ll be. Investment banks and hedge funds can’t implement total open allocation, so they make up the difference through high compensation (often at unambiguously adult levels) and prestige (which enables lateral promotions for those who don’t move up quickly). On the other hand, if you’re a tiny startup with 30-year-old executives, you can’t afford banking bonuses, and you don’t have the revolving door into $400k private equity and hedge fund positions that the top banks do, so employee autonomy (open allocation) is the only card you have to play. If you want adults to work for you, you have to offer autonomy at a level currently considered (even in startups) to be extreme.

If you’re an engineer, you should keep an eye out for open-allocation companies, which will become more numerous as the Valve model proves itself repeatedly and all over the place (it will, because the alternative is a ridiculous and proven failure). Getting good work will improve your skills and, in the long run, your career. So work for open-allocation shops if you can. Or, you can work in a traditional closed-allocation company and hope you get (and continue to get) handed good projects. That means you work for (effectively, if not actually) a bank or a hedge fund, and that’s fine, but you should expect to be compensated accordingly for the reduction in autonomy. If you work for a closed-allocation ad exchange, you’re a hedge-fund trader and you deserve to be paid like one.

If you’re a technology executive, you need to seriously consider open allocation. You owe it to your employees to treat them like adults, and you’ll be pleasantly surprised to find that that’s what they become. You also owe it to your managers to free them from the administrative shit-work (headcount fights, PIPs and terminations) that closed allocation generates. Finally, you owe it to yourself; treat yourself to a company whose culture is actually worth caring about.

Why isn’t the U.S. innovating? Some answers.

This post is in direct response to this thread on Hacker News, focused on the question: why isn’t the U.S. building great new things as much as it used to? There are a number of reasons, and in the interest of keeping the discussion short, I’m going to analyze a few of the less-cited ones. The influence of the short-sighted business mentality, political corruption, and psychological risk-aversion on this country’s meager showing in innovation over the past 40 years is well understood, so I’ll focus on some of the less-discussed problems.

1. Transport as microcosm

For a case study in national failure, consider human transportation in the United States since 1960. It’s shameful: no progress at all. We’ve become great at sending terabits of data around the globe, and we’re not bad at freight transportation, but we’re awful when it comes to moving people. Our trains are so laughable that we market as premium a level of speed (Acela, at 120 miles per hour) that Europeans just call “trains”. Worse yet, for a family of four, air and rail travel are actually more expensive per mile than the abominably inefficient automobile. As a country, we should be massively embarrassed by the state of human transportation.

Human transportation in the U.S. has an air of having given up. We haven’t progressed– in speed or service or price– since the 1960s. The most common way of getting to work is still a means (automotive) that scales horribly (traffic jams, “rush hour”) and we still use airplanes (instead of high-speed trains) for mid-distance travel, a decision that made some sense in the context of the Cold War but is wasteful and idiotic now. This isn’t just unpleasant and expensive, but also dangerous, in light of the environmental effects of greenhouse gases.

Why so stagnant? The problem is that we have, for the most part, given up on “hard” problems. By “hard”, I don’t mean “difficult” so much as “physical”. As a nation, we’ve become symbolic manipulators, often involved in deeply self- and mutually-referential work, who avoid interacting with physical reality as much as we can. Abstraction has been immensely useful, especially in computing, but it has also led us away from critically important physical “grunt” work to the point where a lot of people never do it.

I don’t mean to imply that no one does that kind of work in the United States. A fair number of people do, but the classes of people who manage large companies have, in almost all cases, never worked in a job that required physical labor rather than simply directing others in what to do. So to them, and to many of us as offices replace factories, the physical world is a deeply scary place that doesn’t play on our terms.

2. Losing the “rest of the best”

One doesn’t have to look far to find complaints by vocal scientists, researchers, and academics that the best students are being “poached” by Wall Street and large-firm law (“biglaw”) instead of going into science and technology. One nuance that must be attached to that complaint: it’s not true. At least, not as commonly voiced.

The “best of the best” (99.5th percentile and up) still overwhelmingly prefer research and technology over finance. Although very few research jobs match the compensation available to even mediocre performers in finance, the work is a lot more rewarding. Banking is all about making enough money by age 40 never to have to work again; a job with high autonomy (as in research) makes work enjoyable in itself. Moreover, banking and biglaw require a certain conformity that makes a 99.5th-percentile intellect a serious liability. That top investment bankers seem outright stupid from a certain vantage point does not make them easy competition; they are more difficult competition precisely because of those limitations. So, for these reasons and many more, the best of the best are still becoming professors, technologists, and, if sufficiently entrepreneurial, startup founders.

What is changing is that the “rest of the best” have been losing interest in science and research. The decline of the scientific and academic job markets has been mild for the best of the best, who can still find middle-class jobs and merely have fewer choices, but catastrophic for the rest of the best. When the choice is between a miserable adjunct professorship at an uninspiring university and a decent shot at a seven-figure income in finance, the decision becomes obvious.

America loves winner-take-all competitions, so outsized rewards for A players, to the detriment of B players, seem like something American society ought to consider just and valuable. The problem is that this doesn’t work for the sciences and technology. First, the “idea people” need a lot of support in order to bring their concepts to fruition. The A players are generally poor at selling their vision and communicating why their ideas are useful (i.e. why they should be paid for something that doesn’t look like work), and the B players have better options than becoming second-rate scientists, given how pathetic scientific and academic careers now are for non-“rock stars”. What is actually happening along the talent spectrum is the emergence of a bimodal distribution. With the B players filtered out, academia is becoming a two-class industry split between A and C players, because the second-tier jobs are not compelling enough to attract the B players. This two-class dynamic is never good for an industry. In fact, it’s viciously counterproductive, because the C players are often so incompetent that their contributions are (factoring in morale costs) negative.

This two-class phenomenon has already happened in computer programming, with distinctly negative effects that are responsible for the generally low quality of software. What I’ve observed is that there are very few middling programmers. The great programmers take jobs in elite technology companies or found startups. The bad programmers work on uninspiring projects in the bowels of corporate nowhere: back-office work in banks, boring enterprise software, and the like. There isn’t much interaction between the two tiers– they’re virtually two separate industries– and with this lack of cross-pollination, the bad programmers don’t get much encouragement to get better. Since designing decent, usable software is very challenging even for the great programmers, one can imagine what’s created when bad programmers do it.

In reality, the B players are quite important, for a variety of reasons. First, the categorization is far from static, and B players often turn into A players as they mature. (This is necessary in order to replace the A players who become lazy after getting tenure in academia, or after reaching some comparable platform of comfort in other industries.) Second, B players are likely to become A players in other domains later– as politicians and business executives– and it’s far better to have people in those positions of power who are scientifically literate. Third, a lot of the work in science and technology isn’t glamorous and doesn’t require genius, but does require enough insight and competence to demand at least a B player (not a C or lower). If B players aren’t adequately compensated for this work and therefore can’t be hired into it, such tasks either get passed to A players (taking up time that could be used on more challenging work) or to C players (who do such a poor job that more competent people’s time must be spent, in any case, checking and fixing their work).

Science, research, and academia are now careers that one should enter only with supreme confidence of acquiring “A player” status, because the outcomes for anyone else are abysmal. In the long term, that makes the scientific and research community less friendly to people who may not be technically superior but would benefit the sciences indirectly by enabling cross-linkages between science and the rest of society. The result is a slow decline in the status of science and technology as time passes.

3. No one takes out the trash

Software companies find that, if they don’t remove or fix low-quality code, they become crippled later by “legacy” code and by technical decisions that were reasonable at the time but proved counterproductive. This isn’t only a problem with software, but with societies in general. Bad laws are hard to unwrite, and detrimental interest groups are difficult to refuse once they establish a presence.

Healthcare reform is a critical example of this. President Obama found fixing the murderously broken, private-insurance-based healthcare system to be unfeasible due to entrenched political dysfunction. This sort of corruption can be framed as a morality debate, but from a functional perspective it manifests not as a subjective matter of “bad people” but, more objectively, as a network of inappropriate relationships and perverse dependencies. In this case, I refer to the interaction between private health insurance companies (which profit immensely from a horrible system) and political officials (who are given incentives, through the campaign-finance system, not to change it).

Garbage collection in American society is not going to be easy. Too many people are situated in positions that benefit from the dysfunction– like urban cockroaches, creatures that thrive in damaged environments– and the country now has an upper class defined by parasitism and corruption rather than leadership. Coercively healing society will likely lead to intense (and possibly violent) retribution from those who currently benefit from its failure and who will perceive themselves as deprived if it is ever fixed.

What does this have to do with innovation? Simply put, if society is full of garbage– inappropriate relationships that hamper good decision-making, broken and antiquated policies and regulations, institutions that don’t work, the wrong people in positions of power– then an innovator is forced to negotiate an obstacle course of idiocy in order to get anything done. There just isn’t room if the garbage is allowed to stay. Moreover, since innovations often endanger people in power, some fight actively to keep the trash in place, or even to make more of it.

4. M&A has replaced R&D

A person who wants the autonomy, risk tolerance, and upside potential (in terms of contribution, if not remuneration) of an R&D job is unlikely to find it in the 21st century, with the practical death of blue-sky research. Few of those jobs exist, many who have them stay “emeritus” forever instead of having the decency to retire and free up positions, and getting one without a PhD from a top-5 university (if not a post-doc) is virtually unheard of today. Gordon Gekko and the next-quarter mentality have won. The high-autonomy R&D jobs that remain exist mostly as a marketing expense: a company hires a famous researcher for the benefit of saying that he or she works there. Where has the rest of R&D gone? Startups. Instead of funding R&D, large companies are now buying startups, letting the innovation occur at someone else’s risk.

There is some good in this. A great idea can turn into a full-fledged company instead of being mothballed because it cannibalizes something else in the client company’s product portfolio. There is also, especially from the perspective of compensation, a lot more upside in being an entrepreneur than a salaried employee at an R&D lab. All of this said, there are considerable flaws in this arrangement. First, a lot of research– projects that might take 5 to 10 years to produce a profitable result– will never be done under this model. Second, getting funding for a startup generally has more to do with inherited social connections, charisma, and a perception of safety in investment than with the quality of the idea. This is an intractable trait of the startup game, because “the idea” is likely to be reinvented between first funding and becoming a full-fledged business. The result is that far too many “me, too” startups and gimmicks get funded, and too little innovation does. Third, and most severe, is what happens upon failure. When an initiative in an R&D lab fails, the knowledge acquired remains within the company: the parts that worked can be salvaged, and what didn’t work is remembered, so mistakes are less likely to be repeated. When a startup fails, the business ceases to exist outright and its people dissipate. The individual knowledge merely scatters, but the institutional knowledge effectively ceases to exist.

For the record, I think startups are great, and anything that makes it easier to start a new company should be encouraged. I even find it hard to hate “acqui-hiring”, if only because, for all the practice’s well-studied flaws, it creates a decent market for late-stage startup companies. All that said, startups have become a replacement for most R&D, and they were never meant to replace in-house innovation.

5. Solutions?

The problems the U.S. faces are well known, but can they be fixed? Can the U.S. become an innovative powerhouse again? It’s certainly possible, but in the current economic and political environment, the outlook is poor. Over the past 40 years, we’ve been gently declining rather than crashing, and I believe we’ll continue to do so rather than collapse. Given the dead weight of political conservatism, an entrenched and useless upper class, and a variety of problems with attitudes and culture, our best realistic hope is slow relative decline with absolute improvement– that as the world becomes more innovative, so will the U.S. I consider this “improvement-but-relative-decline” both realistic and the best possibility because a force powerful enough to heal the world (such as a world-changing technological innovation) could also reverse American decline, while nothing weaker will save the U.S. from it. It would not be such a terrible outcome. American “decline” is a foregone conclusion– and it’s neither right nor sustainable for so few people to have so much control– but just as the average Briton is better off now than in 1900, and arguably better off than the average American now, this need not be a bad thing.

Clearing out the garbage that retards American innovation is probably not politically or economically feasible. I don’t see it being done in a proactive way; I see the garbage disappearing, if it does, through rot and disintegration rather than aggressive cleanup. But I think it’s valuable, for the future, to understand what went wrong in order to give the next center of innovation a bit more longevity. I hope I’ve done my part in that.