The difference between unfairness and injustice, and why it matters

My last post was about evidence of abuse in Hacker News’s ranking system, but I’ve also written at length on the failings of human organizations, as well as the moral collapse of the venture-funded ecosystem (VC-istan). Having written a lot about social justice issues, and having been attacked personally for doing so, I feel obligated to draw attention to a critical distinction: the difference between unfairness and injustice. They are not two words for the same thing, and the differences between them are relevant to all of the social justice debates surrounding the startup and software ecosystems (and, perhaps, to work in general).

Unfairness

Unfairness exists. There’s some amount of it that can never be made to go away, especially because perfect fairness is impossible to define. Some people are born with advantages, and others are not. It’s just a fact of life. Children learn about unfairness early on and many complain when they realize just how much unfairness the world contains. “Life isn’t fair,” the adults say. The message is: get over it. On unfairness in the abstract, getting over it is the healthiest thing that one can do. Like death, there is no escaping it. Some people are rich and others are poor. Some are smart, others are dumb. Some are attractive, others are ugly. Then there are the unfairnesses that emerge from chance later in life, which can be impossible to decompose into luck vs. skill factors because of the strategic complexity. Some people have chance encounters that others don’t, but is that skill (in searching) or luck? Who knows? Sometimes, a business that ought to succeed fails, or vice versa. One can think of unfairness as an impersonal error term– in other words, a noise factor– in the world. It’s not a good thing, but it’s inevitable, and any zero-tolerance policy toward unfairness (especially the natural kind) will have side effects more undesirable than the unfairness itself.

That’s not to say that fairness isn’t important, or that people shouldn’t strive for a fairer world. They should, but there’s a tradeoff in most social problems between efficiency and fairness. More regulated arrangements are fairer, but impose drag and might (at the extreme) make everyone poorer. Less-regulated social arrangements that do not impose fairness constraints are often more efficient, but inequitable. I’m not going to go, at length, into the matter of how to find the best point on the fairness-efficiency spectrum. I will make the remark that while maximizing efficiency often allows unfairness (especially in the short term), there are plenty of unfairnesses that detract from efficiency.

Injustice

Injustice is an execrable subclass of unfairness. Natural unfairness– diseases, disasters, and accidents of birth– aren’t injustices in the political sense (which I will use) because no human is responsible for them. Shit just happens. Additionally, unfairness that emerges from true human accident isn’t injustice– only error.

Rather, injustice occurs when humans increase unfairness, either through cowardice or malicious intent. When people are brought down by bad luck, that’s unfair. When they are kept down by social stigma and others’ moral weakness, that’s unjust. Why do people commit injustice? There are a number of reasons, but the cause behind most intentional injustice is that it is an expression of power; in fact, the way humans typically define power is in terms of the ability to inflict social injustice on other people, or to grant improper favors.

I can’t be sure of this, but I suspect that one of the reasons why injustice is so common in the human world is that, for the people at the highest levels of power, it’s actually addictive. People enjoy out-of-band power in beneficial and harmless forms (fast computers, automobiles, fireworks) but there are many who get a thrill from the damaging kind. Obviously, there are few people (even among the most corrupt) who get up in the morning and say, “I’m going to inflict social injustice today”. It’s more accurate to say that they love The Game. We all have our version of The Game; for me, it’s the cutting edge of technology and computation; for them, it’s human politics without regularization. A just victory means that one worked harder or played better or maybe was just a bit lucky; but an unjust victory is a sure sign of high social status. To many people, the latter is more enjoyable.

The importance of the difference

People who complain about unfairness are ignored or despised. They’re seen as insufferable whiners who raise a stink about issues that, like bad weather and death, no one can do anything about. So, people learn in childhood that they’re not supposed to complain about life’s intrinsic unfairness, that it’s depressing and obnoxious to do so. So they don’t. Unfortunately, most take this further and refuse to complain about the frank injustice that they observe around them. Since injustice comes from human origins, and usually malignant intent, this is the wrong strategy. Nothing can be done about the abstract but inflexible fact that life’s unfair in many ways, but things can– and absolutely should– be done about human injustice.

This is one of my moral problems with the current Silicon Valley elite. They espouse a libertarian worldview that, while it need not embrace unfairness per se, values individual freedom highly while accepting a large degree of unfairness. That’s fine. It’s not inconsistent, and while I disagree with them on how they trade efficiency off against unfairness (I tend to think that unfairness produces inefficiency for a long time before it is realized to be doing so) I don’t think they are morally defective for thinking a different way about the matter; they might be right. How much regularization society needs to operate at its best is an open problem. I cannot say the same for the increasing number who tolerate injustice. For that, there is no excuse.

Why am I so certain that Silicon Valley is following (and quite rapidly) the crooked, downward moral paths of those supposedly benighted elites it claims to be replacing? The celebration and tolerance of injustices, especially pertaining to the advantages that come from connections to its parochial king-makers– connections that would be meaningless in a meritocracy– is the primary sign. When a group of people develops an entrenched upper class, there is a larger set who feel an irresistible urge to associate with these winners (it having become nearly impossible to defeat them) and who begin to rationalize the injustices coming from above. It’s not unethical management and reckless firing; it’s a “lean startup” that “fails fast”. That’s where the VC-funded ecosystem is now. It has gone beyond tolerance of the (morally acceptable, in my view) unfairness attendant to differing economic outputs and noisy returns; it now accepts the injustice inherent in being a “who you know” oligarchy– a feudal reputation economy driven by personal introduction and favor trading, because the supposed thirst for talent is purely marketing copy– instead of a “what you know” meritocracy.

Is it worthwhile to complain about such injustices? That’s a hard question to answer. Obviously, I think the answer is “yes”, and not because I intend to change it. The people who are running this game have already shown who they are, by building something so ugly while they had the power. They will not be convinced otherwise. However, like all elites, they will become complacent and be replaced by something else, and that will cause opportunities to form. Long before we rise up and take those opportunities, we must study the failures of our predecessors in order not to repeat their mistakes.

Regarding the anger toward VC-istan that I have both chronicled and directed: that’s why I do what I do. I want the next phase of technological innovation to be superior. I wouldn’t be writing if I didn’t consider that both important and quite feasible.

Gervais / MacLeod 8: Human Nature, Theories X, Y, Z, and A.

Well, this is yet another “second-to-last post” in the Gervais/MacLeod series (See: Part 1, Part 2, Part 3, Part 4, Part 5, Part 6, Part 7) as I’ve realized that I need to cover one more topic: human nature, especially in the context of the corporate organization (e.g. Theories X and Y). What is it? Is it inherently good, or evil? Is it natural for people to be altruistic, or selfish? I addressed the morality and civility spectra, and it should be obvious that I am committed neither to the idea of an inherently bad human nature nor to that of an inherently good one. Mostly, I think people are localistic. We are altruistic to people we consider near to us in genetic, tribal, cultural, or emotional terms. We’re generally indifferent to those we regard at the periphery, favoring the needs of our tribe. Good and evil don’t escape from this localism; they just handle it differently. Good attempts to transcend this localism and (perhaps cautiously) grow the neighborhood of concern: expanding it to all citizens of a polity, then all humans, then all living beings. Evil, not always being egoistic, turns this localism into militancy. Both involve an outside-the-system comprehension of localism that is somewhat rare, leaving most people in an alignment considered neutral. Morally neutral people are best described as weakly good. Assuming they have a strong sense of what good and evil are, they’d prefer to be good, but this preference is not strong and they do not have a burning desire to seek good at personal or localistic risk.

The civility spectrum, between law and chaos, reflects people’s biases toward organizations and those who lead them. While lawful good people will oppose an evil society and chaotic good will support a good one, the truth about most societies and organizations is that they are themselves morally neutral, so a person’s civility (bias in favor or against establishment) will influence her tendency to oppose or support power more than the sign-comparison of her and its moral alignments. Lawful people think organizations tend to be better than the people who comprise them; chaotic people think they tend to be worse than the people who make them up. For my part, I’m chaotic, but just slightly. I think that individual people average a C+ on the moral scale (A being good, F being evil) and organizations tend to average a C-. Chaotic bias makes it natural to see corporations as “evil”; in reality, most of them are indifferent profit maximizers.

Interestingly enough, software engineering is intrinsically chaotic. Because software requires exact precision, while human communication is inherently ambiguous, large software teams do not perform well. The per-person productivity of a large development team is substantially lower than that of an individual engineer; a team of 10 might be only 2-2.5 times as productive as a single engineer. This leads us, as technologists, toward the (chaotic, possibly faulty) assumption that organizations are inherently less than the sum of their parts, because that is clearly true of software engineering teams.
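To make that concrete, here is a toy sketch (my own illustration, not a measured result): if total team output grows sublinearly with head count– say, as head count raised to an exponent of 0.4, a number chosen only to match the rough 2-2.5x figure above– then per-person productivity collapses as the team grows.

```python
# Toy model of sublinear team output. The 0.4 exponent is an assumption chosen
# to match the rough "a team of 10 is ~2-2.5x one engineer" figure, not data.
def team_output(n_engineers: int, exponent: float = 0.4) -> float:
    """Total output of a team, in multiples of a single engineer's output."""
    return n_engineers ** exponent

for n in (1, 2, 5, 10, 20):
    total = team_output(n)
    print(f"{n:2d} engineers -> total {total:4.2f}x, per person {total / n:.2f}x")
```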

Management theorists have questioned human nature, generating two opposite sets of assumptions about the typical employee of a corporation.

Theory X (presumed egoism): employees are intrinsically lazy, selfish, and amoral. If they are not watched, they will steal. If they are not prodded, they will slack. They are not to be trusted. The manager’s job is to intimidate people into getting their work done and not doing things that hurt the company.

Theory Y (presumed altruism): employees are intrinsically motivated and inclined to help the organization. If they are given appropriate work, they’ll do well. The manager’s job is to nurture talent and then get out of people’s way, so they can get work done.

Theory X is socially unacceptable, but a better representative than Y of how business executives actually think. Theory Y is how executives and organizations present their mentality, because it’s more socially acceptable. So which is right? Neither entirely. Theory X is ugly, but it has some virtues. First, it can be, perversely, more egalitarian than Theory Y. Theory X distrusts everyone, including the most talented and best positioned. Executives are no better than worker bees; everyone must be monitored and a bit scared. Theory Y, which is focused on talent and development, requires (non-egalitarian) decisions about whom to develop. Second, Theory X is more tolerant of scaling, because large-scale societies run (by necessity) on X-ish assumptions. To keep a Theory-Y organization intact, you cannot hire before you trust. Only in the technological era (where small groups can deliver massive returns) has it been possible for growth-oriented organizations to hire so selectively as to make Theory-Y organizational policies tenable.

My ideology (e.g. open allocation) might be seen as “extreme Theory Y”, but that’s not because I believe Theory Y is inerrant. It’s not. Reality is somewhere between X and Y. I believe that organizations ought to lean Y-ward on this spectrum for the same reason that archers aim slightly above their targets. With the actual leadership of most organizations tending toward egoism and X-ness, an organization that doesn’t set inflexible, constitutional Theory-Y pillars (for some concerns) is going to suffer a severe X-ward bias. X-ism is tolerable for concave industrial work, but in the convex world, organizations need to be somewhat Theory Y. How X (or Y) should an organization be? There’s actually a very simple and absolutely correct answer here: trust employees with their own time and energy, distrust those who want to control others’. It really is that simple– a rarity in human affairs– and to continue with anything else is moronic. Employees who volunteer to use their own energies toward something they believe will benefit the organization should be trusted to do so; those who exhibit a desire for dominance over others should be deeply distrusted.

There’s one thing I haven’t addressed, which is which Theory is actually more in force. Theory X was the industrial norm from antiquity to about 1925, when Henry Ford discovered that being a jerk (which almost all industrialists at the time were) was bad for business. High wages for employees meant a strong consumer base. Eight-hour work days were just as productive as longer ones, with fewer accidents. While there were some severe bumps in the road (Great Depression, World War II), the following 50 years saw the emergence of a large middle class and a changing workforce. Theory Y, at least in aspiration, set in, along with the growth of positive psychology and even the 1950s-70s countercultures, which were less a rejection of Theory Y than a reaction against perceived hypocrisy in organizations claiming to be Theory Y.

With Theory-Y organizations– especially in research and development– we cracked the German Enigma, sent people to the moon, advanced science more in one half-century than had ever been thought possible, invented the Internet, and grew the global economy at an astounding 5.7 percent per year. Theory Y was the dominant organizational culture from 1925 to about 1975. Then something happened in the counterculture. The 1950s counterculture was mild, liberal, and cautious about the potential for organizational overreach, but tame by modern standards. The 1960s took these seeds of dissent to their logical (civil rights, Great Society) and illogical (Tim Leary, Weathermen) conclusions. The 1970s counterculture was transitional, meek, and reactive to the failed aspirations of the 1960s. In the 1980s, the counterculture was: Let’s Be Dickheads Again. Thus emerged the golden age of private equity, rampant cocaine use among the upper class (exacerbating its already-present tendency toward context-free arrogance and vacuous superiority), and pro-corporate “greed is good” mentalities. The yuppie generation disgusted their (cautiously liberal, as befit the 1940s-60s) parents with how illiberal and materialistic they were.

Theory Y failed in the 1980s. If your employees are coming into work looking to steal your secrets and launch their private equity careers, you actually can’t trust them. This decade of betrayal, greed, and organizational dissolution proved Theory Y inadequate. Bad people exist at all levels. Some people will try to steal from their employers, employees, and colleagues.

If the Gilded Age nightmare of Pinkertons and company towns was the height of Theory X, and the mid-20th century United States was that of Theory Y, what came after? The chaos of the 1980s settled down, and I think what emerged in its wake can be called Theory Z. By 1995, corporations had been looted at bottom and top (mostly, at the top) and had ceased to inspire. Technology startups were taking on corporate behemoths of much greater size. People at the bottoms of corporations (MacLeod Losers) were beginning to recognize that the promise of upward mobility could no longer be believed. The arrogant egoism of the coked-up 1980s ubermenschen had faded somewhat, but the bilateral altruism existing between the paternalistic corporation and employee was forever gone as well. People returned to localism in personal alignment: trying to do right by the people they care about, and the people near them.

Theory Z (prevailing localism): a few employees will be unusually egoistic or altruistic, but most are going to be localist. Interpersonal loyalty will bind them together, and growing affinity within the group will encourage “pro-social” behavior. People who feel excluded by the group will defect; those who feel included will cooperate. The manager’s job is to build a great team– to use an intuition for human localism to direct that tendency toward pro-organizational behaviors– and to marginalize or separate from (i.e. fire) those whom it excludes.

Theory Z is the most accurate of the three “human nature” calculi put forward thus far, insofar as it covers most of an organization. One might also note that these three theories correspond neatly to the MacLeod hierarchy. The executive suite (MacLeod Sociopaths) tends to be dominated by Theory-X mentality. These people know that they shouldn’t be trusted, so they aren’t inclined to trust anyone else. Clueless middle-managers tend to overestimate human nature and have a Y-ish bias. MacLeod Losers want to be socially acceptable and get along well with the group. The Loser world, driven by interpersonal and team affinity, is a Theory-Z one. They want to get along, and will manage their effort level to the exact point that keeps them in the best social standing– the Socially Acceptable Middling Effort (SAME).

Theory Z may be the most accurate model of the MacLeod Loser class that does most of the work in an organization. This said, Theory Z also has some severe defects, having generated a cargo cult of teamism. Organizations waste time and money on pointless “team-building” paraphernalia: “mandatory fun” retreats that no one enjoys, in-office perks that adolescentize the workforce but detract from actually getting stuff done. A person is judged not on her individual merits, but based on (a) the social status, outside of her control, of the team on which she has landed and (b) as a tiebreaker, her performance on that team. The top people in the organization (rather disgustingly) call themselves “the leadership team”. Teamism also creates closed allocation, of which I’m not a fan. People who attempt to serve the organization directly by moving to more appropriate teams (which their native teams and managers view negatively as attempts to swing to higher-status teams) are viewed as “not team players” and, instead of being allowed to transfer, are discarded. Teamism is especially defective in software, where large teams are almost never productive. Theory Z conformity actually solves the industrial problem: what’s the best way to manage concave work? Concave work is that in which the difference between mediocrity and excellence is minimal in comparison to that between mediocrity and noncompliance (zero), and in which variance reduction (at which management excels) is desirable. It doesn’t solve the technological problem that emerges when we confront convex work, in which the difference between excellence and mediocrity is critical and that between mediocrity and nonperformance is negligible.

The industrial paradigm is heavily oriented toward concave work. To see that, consider educational testing. Students are given very easy problems (most of the difficulty being in artificial resource limits– timed, closed-book exams) so that an average performer will get 85 percent right. The pass/fail line is then set at 70%– in other words, no more than twice the defect rate of the average. If we wanted to re-orient exams toward a convex world, we’d give students very hard problems so that average performers only get 20% (the pass/fail line might be set at 10-15%) and call 40% excellence. I’m not actually saying that’s a good idea– I’m out of my depth on these sorts of educational issues– but this is just one way in which the presumed concavity of industrial work is visible in the pedagogical training people get before entering it.
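A small sketch may help here. The curves below are my own illustrative choices (nothing above specifies them): a concave value function saturates quickly, so mediocre work captures most of the value, while a convex one bends upward, so nearly all of the value sits at the top end.

```python
import math

# Illustrative payoff curves only; the specific functions are assumptions.
def concave_value(skill: float) -> float:
    """Diminishing returns: mere compliance captures most of the value."""
    return 1 - math.exp(-4 * skill)

def convex_value(skill: float) -> float:
    """Increasing returns: almost all of the value lives at the top end."""
    return (math.exp(4 * skill) - 1) / (math.exp(4) - 1)

for label, skill in [("noncompliant", 0.0), ("mediocre", 0.5), ("excellent", 1.0)]:
    print(f"{label:>12}: concave {concave_value(skill):.2f}, "
          f"convex {convex_value(skill):.2f}")
```

Under the concave curve, mediocrity (0.86) is nearly as good as excellence (0.98); under the convex one, mediocrity (0.12) is nearly as bad as doing nothing, which is the point of the distinction.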

Why did Theories X, Y, and Z exist? What will replace them? To answer this, it’s useful to look at humanity in several stages– agrarian, industrial, and technological– based on the prevailing rate of economic growth. In the agrarian era, from 10000 BC to about 1750, economic growth was slow (0.01 to 1.0 percent per year) and generally imperceptible in a human lifetime, especially in comparison to the local rises and falls of empires. Most people who wanted to get rich had to steal or kill. Mercantilism was the predominant economic theory, slavery was the most common form of organizational labor, and Malthus was right– not in his modeling of food production growth as linear, it being a slow-growing exponential function; but in his assumption that human population growth exceeded agrarian economic growth. (England didn’t have a Malthusian catastrophe, the Industrial Revolution intervening, but an overwhelming number of societies have had them. Some have argued that England, in the 19th century, outsourced its Malthusian problem to Ireland.) Economics in the agrarian era could be approximated as zero-sum; with population growing as fast as the economy did, the average human’s standard of living didn’t improve much. Machiavelli probably wrote The Prince as satire, but it was apropos of the political climate of the time, and any time before or up to about 250 years after that.
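To put those rates in perspective, here is a quick compounding check (the rates come from the range quoted above; the 40-year “adult lifetime” window is my own illustrative choice):

```python
# Compound growth over an adult lifetime at agrarian-era rates.
for annual_rate in (0.0001, 0.001, 0.01):          # 0.01%, 0.1%, 1.0% per year
    lifetime_growth = (1 + annual_rate) ** 40 - 1
    print(f"{annual_rate:.2%}/year -> {lifetime_growth:.1%} total over 40 years")
```

At the bottom of that range, a lifetime of growth amounts to well under one percent; only near the 1% threshold does change become visible within a single life.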

The industrial world came into being gradually, with the advent of science and, later, rational government. It started in the late 17th century, and by the 18th, progress was (while slow) visible. Malthus, despite his pessimistic projections, acknowledged that growth existed: it just wasn’t happening very fast in 1798– about 0.9% per year. This rate being too slow to sustain human population growth, economics truly earned its name of “the dismal science”. Personally, I define the industrial threshold (very arbitrarily) as the point (early 19th-century) at which global economic growth reached 1.0% per year. Since I define the technological threshold at 10% per year, we haven’t gotten there yet. (More on this here.) But the most interesting companies (technology firms oriented toward convex work) have that capability.

The architects of the industrial world were quick to realize that coercive labor wouldn’t suit their needs: the jobs were too complex and variable to leave to people who’d been deprived of all autonomy (slaves). This had to be replaced with a semi-coercive model in which employees had some freedom: they’d need to have a boss to survive, but they could choose which one. Industrialists studied sailors (pirates, privateers, explorers and merchants– all different in how ships were run) to learn about group sociology apart from the agrarian state. They studied militaries, large organizations which had left important duties to non-coercive labor (and less important ones to semi-coercive conscripted labor) for centuries. They looked at prisons to see how free people handled the temporary loss of liberty that would be similar to a merchant’s conscription into a middle-management office role. (Slaves were rarely put into prisons, but beaten or killed.) As most complex organizations of the time were semi-coercive, vicious, and prone to violence (that was often a part of the business) this naturally led into a Theory-X mindset: bring ‘em in, and don’t beat ‘em so hard they can’t work, but don’t trust ‘em either.

The zero-sum world of agrarian humanity suffered a major blow in the mid-19th century when the industrialized nations began abolishing slavery, but human behavior is slow to change. Progressive mentalities began to form within nation states, but the old ways of interaction still existed between them, and also between advanced nations and the colonized people. That blew up spectacularly in the World Wars. By 1945, it was evident that being a jerk was not going to work anymore. Racism, for one example, lost all intellectual respectability after what Hitler did. Militant localism (jingoism) had to be replaced by a climate of prevailing respect and positive-sum thinking. The U.S. rebuilt the economies of nations it had defeated at war, instead of inflicting further economic penalty as occurred after World War I. The corporate analogue of these changes came out of positive psychology and political progressivism: Theory Y.

Unfortunately, while Theory Y built good organizations, it left them unable to defend themselves against bad people, as the 1980s showed us. Academia and basic research, in the U.S., still haven’t recovered from the barbarian attacks; indeed, the assault is ongoing. Global economic growth dropped– from 5.7% per year to about 3.5%– due to society’s disinvestment in progress and science. (It has recovered somewhat, to 4.8%, largely because of the declining relevance of the gutted U.S.) The chaos of the 1980s left working Americans bereft of faith in institutions and in the people they worked with. This led to the more cautious and accurate Theory Z, which correctly models human localism but prescribes a managerial style based on conformity and mediocrity– solving the concave/industrial problems, but failing at the convex/technological ones.

So, what is human nature? Are people inherently altruistic, egoistic, or localistic? We’ve seen a tendency toward localism– somewhere between altruism and egoism– as a default. Does this mean that “human nature is localistic”? Can we say that human nature is morally neutral (rather than good or evil, as some philosophers have suggested)? For my part, I wouldn’t say so. I’m not convinced that it’s anything, because I don’t hold strong beliefs about human nature. I’m not sure that there’s a there there. We can understand biophysics mathematically, observe sociality, and experience spirituality, but a complete understanding of ourselves eludes us. “Human nature” is a “God of the gaps”.

Personally, my philosophical and religious beliefs are most in line with Zen Buddhism. It would be un-Zen to say that I am or am not a Buddhist, so I won’t, but I believe that the Zen approach to reality is among the most accurate. Most phenomena are empty. People tend weakly toward moral good, but circumstances can easily steer normal people toward lawful evil (Milgram Experiment) or even the chaotic kind (Stanford Prison Experiment). Theory X presumes a hostile human nature, as a slaveowner might. Theory Y presumes an altruistic one, leaving organizations unable to defend themselves against bad actors. Theory Z correctly concludes the human default to be localism, but settles prematurely for mediocrity and cargo-cult teamism. None of these are well-equipped to tackle the needs of the technological era, in which the fast rate of growth and change necessitates unlocking creative energies, while a certain caution is needed regarding those who might wish to subvert the organization, or gain inappropriate dominance over it.

Theory Z gets what Y did not– that there are “toxic” bad actors out there whom the organization must reject– but takes a stupidly teamist approach. People aren’t fired from Theory-Z organizations because they’re harmful, bad people, but because they’re “not a team player”. The effort is almost never exerted to assess alternatives to individual fault, such as (a) a defective or poorly-configured team, (b) bad management, or (c) no-fault lack-of-fit. All of these are more common than the extremely damaging but rare toxic individual.

In the convex world, creative output isn’t going to come from “teams”, at least not in the managerial sense where the teammates have little control over membership and organization, and in which “team” is conflated with “career goals of the manager”. (Note: a manager who says “not a team player” is actually saying, “not a me player”.)  Theory-Z management tries to control human localism, corralling people together and saying, “Be a team, now!” That doesn’t work very well. Rather, the creative energies that can produce technological-era progress come from individuals who sometimes choose to form teams, and sometimes to work alone.

Why is Theory Z just as foolish as X and Y? X and Y inaccurately claim “human nature” to have a strong directional bias toward self-serving egoism or pro-organizational altruism. It does not. Theory Z maintains a belief in “human nature” and assumes it to be inflexibly localist, because that’s an observed default. I maintain that “human nature” is pretty damn empty. People are mutable. Don’t settle for bland localism; you’ll get pointless institutions that way. People can be very good; try to make it happen. They can also be very evil; try not to have that happen. They will sometimes form teams; that is fine. They will sometimes work alone; that is also fine. Judge people on their actions and not assumptions about some “nature” that is illegible at best and nonexistent at worst.

How does one convert this into an actionable management style? Lord Acton said it very well:

Judge talent at its best and character at its worst.

Theory X fails because it allows no room for excellence (talent at its best). Theory Y fails to account for bad actors (character at its worst). Theory Z throws its hands up in the air and mediocritizes: let’s all just get along and be a team. How do we assess talent or character? The truth is that we can’t; we can only look at peoples’ actions. In practice, this usually gives us enough data. If people show even the potential for excellence, that should be explored and encouraged. On the other hand, it should be very rare (if ever?) that a person is presumed to have good character and given more power over others than is absolutely necessary. So I actually nailed it, already, above. Here is an upgrade of Theory Y that is more robust against problematic people:

Theory A: trust employees with their own time and energy; distrust those who want to control others’.

That is where I’ll stop for today.

Pride in death

Human attitudes toward death are often negative: the transition is met with fear among many, and outright terror by some. Positive emotions, such as relief from suffering and the hope for something better afterward, are occasionally associated with it, but the general feeling humans have toward death is a negative one, as if it were an undesirable event to be put off as long as possible. We fight it until the very end, even though death is guaranteed to win. It’s not our fault that we’re this way; we’re biologically programmed to be so. So we have a deep-seated tendency to put everything we have into keeping death away from us as much as we can. This has a negative side effect: when we cannot hold it back any longer, and when death rushes in, many people around the dying person take an attitude of defeat. This attitude toward death and aging I find very harmful. That said, acknowledging the certainty of death, I’ve often wondered if the process is deserving of a different and somewhat unconventional emotional response: pride, not in the vain sense of the word, but in a self-respecting and upright sense. I don’t mean to suggest that one “should” take such a counterintuitive attitude toward death, but only to suggest a thought experiment: what if one did take pride in one’s mortality? What, exactly, would that look like?

Death is a big deal: an irreversible step into something that is possibly wonderful but certainly unknown. If any process can be called uncertain and risky, death outclasses anything else that we do, by far. Moving to another country? That’s nothing compared to dying. Death is a major transition, and the only reason we do not associate it with courage is because it is completely involuntary and universal: everyone, including both the most and least courageous, must do it. But if lifespans were infinite and death were a choice, it’s not one that many people would make. Death, in this light, could be viewed as immensely courageous.

Before I go any further, it’s important to state that suicide is generally not courageous, at least not in the self-destructive form that is most common in this world. Self-destruction (whether it results in death or merely in lesser forms of ruin) is the ultimate in cowardice. That said, choosing to die for another’s benefit, or to escape life when terminally ill, are different matters, and I don’t consider those deaths “suicides”. Suicide, in the most rash and revolting form, is an overstepping act of self-destruction driven by bad impulses and fear or hatred of one’s own existence. To attempt to give up on existence or eradicate one’s self is not courageous, but that’s not what I’m talking about. When I say that death is courageous, I do not go so far as to say that forcing it to come is a courageous act, but more that offering oneself up for it, if this “offer” were not an inflexible prerequisite for physical existence, would be considered extremely courageous. To venture into what is possibly another world, and possibly nonexistence, with no hope of return? Even with the best possibilities of this journey being far superior to anything this existence can offer, few (even among the religiously devout and unwaveringly faithful) would take it. I’m not even sure if I could bring myself to do it. For a person freed from death’s inevitability, whether or not to die would be a very difficult decision, and probably one that even most religious believers, solid in their belief in an afterlife, would procrastinate for a very long time.

That said, modern society does not view death as a process that may be full of promise. Instead, our society’s attitude toward death is negative and mechanistic, insofar as it views death as the ultimate in failure. We describe a car or computer as “dead” when it fails beyond repair, and (accurately, biologically speaking) describe a cell as dead when it can no longer perform necessary biological functions, such as self-repair and reproduction. That which is “dead” has failed beyond hope and is of such low utility that, on account of its mass and volume, it’s now a burden. This analogy applies to the human body– its failure is the cause of biological death, and it is utterly useless after death– but to the human person? The comparison, I think, is unfair. After a life well-lived, the soul might be in a victorious or brilliant state. We really don’t know. We know that we have to deal with a corpse, and that a person is no longer around to laugh at our jokes, but we haven’t a clue what the experience is like for that person. Being mostly selfish creatures– I make this observation neutrally, and it applies to me as much as anyone else– we reflexively view death as a negative, mainly because of the incredible pain that others’ deaths bring upon us. We don’t know what it’s like to die, but we hate when those we love die.

The image of death in our society is quite negative, possibly unfairly so, but it is natural that a society like ours would despise death. We view the suffering it causes every day, and even if it might have incredible subjective benefits for those who are dying, we never see them (and those who have seen them, if they exist, don’t blog). Our view of dying is even more disparaging. We view death as something that overtakes people after a long and horrible fight that has exhausted them. In the traditional Western view, a person dies when there is nothing left of that person. Dying isn’t treated as the graduation into another state, but the gradual winding down into nothingness, a reversal of the emergence from oblivion that is held to exist before conception. This view of death leads us to view the dying and dead as frail, defeated, failed creatures, rather than beings that have bravely ventured into the unknown– an unknown that may even entail nonexistence.

This attitude of pride in death may seem untenable. As I alluded to earlier, can something be courageous when it’s utterly involuntary? I’ll freely admit that such an attitude may seem bizarre. But equally and differently bizarre is the idea (unspoken but implicit in the modern Western attitude toward death, despite being passively rejected by most people in the West) that death certainly leads to nothingness, or to divine judgment; or, for that matter, any claim of certainty regarding what happens after death. For this reason, it’s the incredible uncertainty in death that makes going into it, in a way, courageous. Or, at least, it must be possible to go into death with courage.

Should death be feared? I would argue “no”. At this point, I venture into a sort of benevolent hypocrisy by saying there is no point in fearing death, since I certainly have not extinguished my fear of it. I know that my death will come, but I certainly don’t want it to come now. I’m not ready. I don’t know when I will be ready; I hope this won’t be the case, but maybe I’ll feel, at age 90, just as unready to die as I feel now at 27. I’ll certainly admit that I have no desire to hasten the process, and share the general desire to prolong my life that almost all humans have. We naturally have a deep-seated fear of anything that reduces our reproductive fitness, and death has this effect in a most dramatic and irreversible way. We also have an intellectual dislike for the concept of nonexistence, even though nonexistence itself cannot possibly be unpleasant. Finally, what is most terrifying about death is the possibility of a negative afterlife.

In order to assess whether fear of death is warranted, we have to examine these valid reasons for people to be wary of it. First, on the biological aspects: death does reduce an individual’s reproductive fitness, but dying is also something we’re programmed to do: after a certain point, we age and die. In this light alone, death in advanced age cannot be viewed as a failure; it’s just what human bodies do. On the more cerebral concept of nonexistence, there is not much to say other than that there’s no reason to fear it, since it is not experienced but is the absence of experience. I would not like to find out that I am wrong and that there’s nothing after death; luckily, if there is nothing after death, I will never find out. For this reason, to fear nonexistence makes little sense.

Negative afterlife possibilities deserve a bit of discussion. History is littered with people’s attempts, many quite successful, to use the uncertainty associated with death to their own benefit, and to gain political power by claiming (under pretense of divine authority) that behaviors they find undesirable will result in extreme and terrifying post-death results, painting a picture of a world run by an insane, malicious, and wrathful God who almost certainly does not exist. I say that such Bronze Age monsters “almost certainly” do not exist because the world makes too much sense for such a being to have created it, and the explanation that this invisible beast was created by a power-hungry person in his own image becomes infinitely more likely. Still, most extant religions contain vestiges of these coercive and perverse behaviors– assertions of divine sadism and vengeance. As a deist who believes one can reason about divinity by observing human existence, I reject such assertions. Filtering out everything in this stuff-people-made-up-to-get-power category, we blast away all certain claims to knowledge of the afterlife and are left with moderate-but-inconclusive evidence and deep uncertainty. But there is evidence, if certainly not proof! Subjective experiences of those who have had near-death experiences suggest a profound and spiritual nature to death– not the fade-out expected of a failing brain before it winds down for good, but a powerful and quite often (but not always) positive experience– and, although in its infancy, research into the matter of reincarnation is promising. What little we know about existence after death suggests (1) that the vengeful gods invented by coercive religions are cartoon characters, not beasts we shall face after death, (2) that it is more likely that consciousness persists after death than that it does not, though we do not have, and probably never will have, sufficient knowledge to rule out either, (3) that post-death experiences tend to be positive and spiritual, insofar as we can assess them, and (4) that these observations, combined with death’s inevitability, make it pointless to view death with hatred or fear.

All that said, I don’t think it’s appropriate or useful for me, on this topic, to expound on what I think happens after death, since I don’t really know. In this body, I haven’t done it yet and, once I do, there will be no reliable way for me to report back. For this reason, let’s take a different tack and consider the concept of pride-in-mortality from a pragmatic viewpoint. If one can view one’s impending death with pride and courage instead of fear and hatred, what does that mean while we are still living?

First, to take pride in death allows for it to be an inherently dignified process. Many illnesses and modes of death are horrifying and I wouldn’t wish those on anyone, but the painful process of dying is probably not all there is to death, just as the pain of birth is certainly not the entirety of life. Death itself can be dignified, respected, and even admired. That we will all do it means that we are all dignified creatures. All living things desire happiness, dislike suffering, and will die. The third of these is a deep commonality that deserves respect. Many Buddhists will agree that, since all people are dying at all times, each of us is deserving of compassion. I’ll take it further. Each of us is going to plunge headlong into deep uncertainty; for this, if nothing else– and some people make it hard to find a single other thing worthy of admiration– each living being deserves to be admired and respected. I am not the first to remark that, in mortality, we are all finally equal.

All this said, a death’s most relevant feature is that it is the end of a life. To make death dignified and to die courageously is good, but these accomplishments should be considered merely consequences of a much greater (and all-encompassing) project: to make life dignified, for everyone, and to live courageously. That is the much harder part, and it does not make sense to approach one project without tackling the other.

A Nonconventional Philosophical Argument for Survival

In this essay, I present one case for the existence of an afterlife. It is not a scientific argument. It’s certainly not a proof, and offers nothing in the way of scientific evidence. There is, as of now, nothing nearing proof of any sort of afterlife, and although reincarnation research is promising, it’s in such a primitive state right now that nothing it offers can survive the sort of hard rigor science requires, and if its findings were to be true, they would only raise a host of new questions. Hard evidence is scant on all sides of the debate and, scientifically speaking, absolutely no one knows what happens after death.

In fact, I’d argue that the physical sciences, as they are, intrinsically cannot answer many questions of consciousness including, most likely, the matter of the afterlife. Physical science relies utterly on a certain pragmatic reduction: two physically identical objects– that is, two objects that respond identically to all objective methods of observation– are considered equivalent. Scientists need this reduction to get anything done; if carbon atoms had individual “personalities” that scientists were required to understand, one simply couldn’t comprehend organic chemistry. So scientists generally assume that a carbon atom in a specific state is perfectly identical to another carbon atom in the same physical state, and this assumption proves valid and useful throughout all of the physical sciences– biology, chemistry, and physics.

Where this reduction fails, and probably the only place where it fails, is on the question of consciousness. Let’s assume the technology exists, at some point in the future, to create an exact physical copy of a person. (It’s unlikely that this will ever be possible with sufficient precision, due to the impossibility of reproducing an arbitrary quantum state, but let’s assume otherwise for the sake of argument.) Assuming that a “spark of life” can be injected into the replica, this person is likely to be indistinguishable from the original to all observers except for the individual copied and the copy, who might retain separate consciousnesses. Will this newly created person– an operational biological machine, at least– have a consciousness or not? I’m agnostic on that one, and there is no scientific way of knowing, but let’s assume that the answer is “yes”, as most materialists (who believe consciousness is a byproduct of purely physical processes) would. Will he or she have the same consciousness as the original? Everything in my experience leads me to believe that the answer is no, and the vast majority of people (including most monists) would agree with me. The original and the copy would, from the point of creation, harbor separate consciousnesses (they are not physically linked) that would begin diverging immediately.

This, to me, is the fundamental strike against mind uploading and physical immortality. It may be physically possible to copy all of a body’s information, but commanding its consciousness (after destruction of the original) to bind to the copy is impossible. It’s extremely likely that humans will defeat aging by 2175, if not long before then, meaning that the first 1000-year-old will be born before the end of my natural life (ca. 2065). But I do not expect any success whatsoever in the endeavor of mind uploading; destruction of the whole brain will always spell out the same fate that it does now: the irreversible end of one’s earthly existence. (Fifth-millennium humans are likely to confront this problem by storing their brains in extremely safe repositories, while interacting electronically and remotely with robotic “bodies” in the physical world as well as virtual bodies in simulated worlds.) With this in mind, as well as the probable impossibility of physical immortality given the likely eventual fate of the universe, it should be obvious even to the most optimistic transhumanists what nearly all humans who have lived have taken, and most humans even now take, for granted: we will all die. This, of course, terrifies and depresses a lot of people, involving a credible threat of nonexistence and even the (very remote, in my opinion) possibility of fates far worse.

I’m going to put forward an argument that suggests that, if there is a reasonable universe, our consciousness survives death. This is an argument that, although it proves nothing and relies on an assumption (a reasonable universe) that many reject, I have not heard before. At the least, I find it viscerally convincing and interesting. Here is that argument: virtually every phenomenon humans have ever investigated turns out, in truth, to be far more fascinating than any hypothesis offered before the truth was known.

1. Math

I’m going to start this discussion in mathematics with a familiar constant: pi, or the ratio between a circle’s circumference and its diameter. Believed in Biblical times to be 3, it was later estimated with ratios like 22/7, 201/64, 3.14, or the phenomenally accurate Chinese estimate, 355/113. Archimedes, applying contemporary knowledge of geometry to the regular 96-sided polygon, painstakingly proved that this ratio was between 3 10/71 and 3 1/7, but could not determine its exact value. One can imagine this fact to be distressing. All of these estimates were known to be only approximations of this constant, but for two millennia after the constant’s definition, it was still not known whether an exact fractional representation of the number existed.

A similar quandary existed surrounding the square root of 2, which is the ratio between a square’s diagonal and the length of one of its sides, although this number’s irrationality was far easier to prove, as the Pythagoreans did some time around the 5th century BCE. Before the Pythagorean proof of the irrationality of the square root of 2, and Cantor’s (much later) proof that an overwhelming majority of real numbers must be irrational, it was quite reasonable to expect pi to be a rational number. Before the Pythagorean discovery, the only numbers humans had ever known about were integers (whole numbers) or ratios of two integers. No one knew what integral ratios the square root of 2 or pi were, but it must have seemed likely that they were rational, those being the only numbers humans had the language to precisely describe. Of course, it turned out that they were not, though pi’s irrationality was not proven until the 18th century, more than two millennia after the establishment of irrational numbers.

At least some people did not want this. They desperately wished to find the ratio that was pi, and found, in the end, only that none existed. Even the advent of algebra in the first millennium CE did not make pi accessible, since the number is (again, as proved much later) transcendental: unlike the square root of 2, which can be algebraically extracted from the rationals (as the solution to x^2 - 2 = 0), pi cannot. This made life and mathematics a fair bit more difficult, and many may have met the discovery of irrational numbers with displeasure, but it certainly made mathematics far more interesting.

Pi emerges, sometimes unexpectedly, in all of mathematics. Just for one particularly elegant example, the infinite sum of reciprocal squares (i.e. 1/1 + 1/4 + 1/9 + 1/16 + 1/25 + …) is pi squared, divided by 6. Although no more than 100 digits of pi are needed to provide sufficient accuracy for any physical purpose, we have algorithms today that enable us to generate pi’s digits (into the billions) extremely quickly. The number may be inaccessible through ratios and algebraic expressions, but we can very easily compute it with as much precision as we wish, which is more than can be said for the truly inaccessible noncomputable numbers. Still, we can’t answer some basic questions about it. Whether the digits of pi are normal (that is, behave as if generated by a uniform, random source) is an open question. Strange statistical patterns (such as a paucity, or even eventual absence after perhaps 10^200 digits, of the digit 3) may exist in pi’s digits in some base, but it is utterly unknown whether any do.
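The reciprocal-squares identity is easy to check numerically; here is a quick sketch (partial sums only, so the convergence is slow but visible):

```python
import math

# Partial sums of 1/1 + 1/4 + 1/9 + ... approach pi^2 / 6 (the Basel problem).
def reciprocal_square_sum(n_terms: int) -> float:
    return sum(1.0 / k**2 for k in range(1, n_terms + 1))

target = math.pi ** 2 / 6
for n in (10, 1_000, 1_000_000):
    partial = reciprocal_square_sum(n)
    print(f"{n:>9} terms: {partial:.8f}  (pi^2/6 = {target:.8f}, gap {target - partial:.1e})")
```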

The beauty and “interestingness” of mathematics are difficult to put into words, but I would argue that they stem from apprehension of the infinite. As soon as the concept of prime numbers existed, people must have desired to attain a list of all of them. People are natural collectors, and the acquisitive drive must have led many to wish to “have” all the prime numbers. Using a beautiful argument that a modern high schooler could understand (sketched below), Euclid proved this impossible: there are infinitely many of them. This marvelous result established that, although we cannot “reach” the infinite, we can reason about it in non-trivial ways. In my opinion, Euclid’s theorem is the birth of modern mathematics, which (even in its finite and discrete manifestations, where asymptotic behaviors and general patterns within the finite are the true objects of study, due to humanity’s insatiable curiosity about what is next) is the art of reasoning about infinity.
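For readers who haven’t seen it, the standard rendering of Euclid’s argument runs roughly as follows (a modern paraphrase, not a quotation):

```latex
% Sketch of Euclid's proof that the primes are infinite (modern paraphrase).
Suppose the primes were finite in number: $p_1, p_2, \dots, p_n$.
Let $N = p_1 p_2 \cdots p_n + 1$.
Dividing $N$ by any $p_i$ leaves remainder $1$, so no $p_i$ divides $N$.
But $N > 1$, so $N$ has at least one prime factor $q$, and $q$ cannot be any
of the $p_i$. This contradicts the assumption that the list was complete,
so there are infinitely many primes.
```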

From such findings, a magnificent number of beautiful, surprising, and awe-inspiring results followed. Cantor proved that not all infinities are equal, but that for each infinity we can define a far larger one. Later, as formal mathematics is an infinite collection of statements generated by a finitely-describable set of statements, Gödel helped establish the ability of mathematics to reason about itself, in fact proving the incompleteness of any sufficiently powerful formal system: no consistent logical system capable of arithmetic will be able to decisively prove or disprove all statements. (A byproduct of his doing so was Gödel’s embedding of a list-processing language into number theory, arguably inventing an ancestor of Lisp.) Alonzo Church and Alan Turing established analogous results regarding computation, and a consequence of their work was no less than laying the foundations for modern computer science.

Despite the obvious epistemological problems with “counterfactual mathematics”– pi simply could not be any other number– I’ll note that if pi had been rational, had the list of primes been finite, or had formal mathematics been complete, the world would have been a far more boring place, and much less would have been done with mathematics.

2. The physical sciences

In mathematics, people generally agree on what they know and what they don’t. If the question of the completeness of formal systems could be put in a way that would have made sense to a 5th-century BCE mathematician, he would admit that he had no idea whether the proposition was true or false. In the natural world, this is true among scientists, but it’s not true among people as a whole. At least in the context of large groups willing to believe the most credible explanation put to them, people, when they don’t know something, seem to make up stories about it. For as long as humans have existed, they’ve invented explanations for natural phenomena. Those explanations have mostly been wrong and, moreover, quite frankly a bit boring.

Ancient Greeks, at least among the less educated, seemed to believe that lightning was a bolt of fire thrown by an ill-tempered man named Zeus. Boring, and wrong. In fact, it’s the energy transfer admitted by the motion of subatomic particles, governed by attractions and repulsions of truly immense force; an energy source that, if harnessed properly, enables a host of extremely powerful technologies that are only in their infancy to this day. Interesting, and right. Before Newton, earthly objects were believed to fall because they possessed an intrinsic material property called gravity, while the heavens possessed levity and could remain aloft. Boring, and wrong. We now know that these behaviors can all be explained by a single force (gravity) that not only allows us to reason about cosmic machinery, but also admits such bizarre beasts as black holes and general relativity. Interesting, and right. Likewise, the stars were once held to be tiny holes in a giant black dome behind which a brilliant fire burned. Boring, and wrong. In fact, they’re massive nuclear-powered furnaces, borne of gas clouds and gravity, that glow for billions of years, occasionally eject hot, ionic material, and sometimes die violently in a style of explosion (supernova) of such atom-smashing force as to create chemical elements that otherwise should not exist, and we require many of those elements for our existence. Interesting, and right. Finally, it was once believed (and by some, it still is) that our bodies and those of all animals were designed from scratch and immutably fixed by a deity with specific tastes but a tendency toward slightly sadistic design. Boring, and (almost certainly) wrong. We now know that all of these species emerged from four billion years of natural selection, that an enormous number of powerful and strange animals once existed, and that accelerations of this evolutionary process began happening (in the context of an immensely complicated and frenetic terrestrial ecosystem) half a billion years ago, and continue to this day. Interesting, and right.

The general pattern is this: humans invent an explanation that is the best they can come up with at the time. It turns out to be primitive, wrong, and boring. The accumulation of knowledge and the invention of superior tools allow people to discover the truth which, though it offends many greatly, ends up being far more interesting, and inevitably opens a number of fruitful questions and applications. The right answer is always more fascinating than the explanations humans had given, and it always opens up more questions.

3. Afterlife

I’m a deist. I do not believe in an anthropomorphic, interventionist deity, and certainly not in the villainous, supernatural characters of Bronze Age works of literature that, taken literally, are almost certainly heavily fictional. However, I have faith in a reasonable universe. I admit that this is an extraordinary claim. For example, it’s not impossible that, in the next five seconds, my consciousness will, with no cause, abruptly leave my body and enter that of the man shoveling snow outside my window, or of a 7th-century Chinese peasant, or of an extraterrestrial being in another galaxy. I simply believe it will not happen. I likewise admit that it is possible that I will be eradicated by the sudden dematerialization of every atom in my body, but I regard even asteroid strikes and random heart attacks as far more credible threats to my existence.

If we live in an unreasonable universe, there is truly nothing we can know, reason about, or credibly believe: I might exist only this second, and my memories of previous existence may be false. There is just absurdity. An unreasonable universe doesn’t mean that we can’t use reason and do science– on pragmatic grounds, we can, so long as they work. It merely admits the possibility that they might stop working– that, two seconds from now, the gravitational constant may begin doubling every nanosecond, collapsing our world instantly into a black hole.

Most religions posit a fundamentally unreasonable universe governed by capricious gods, but materialistic monism also establishes an unreasonable universe. Although the abiogenic origin of life and the evolution of complex organisms can be explained (and in evolution’s case, already has been adequately explained) by natural processes, the emergence of qualia, or consciousness, out of an elaborate pinball machine makes for an unreasonable universe: one in which conscious beings pop into existence merely because a sufficiently complex physical system exists. It is deus ex machina, except with somewhat less impressive beings than gods emerging from the machine. This does not mean that materialists are wrong! It is reasonable and defensible to believe in an unreasonable universe, and moreover it is unreasonable to reject outright the possibility of an unreasonable universe when absolutely no proof of a reasonable universe has been made available. It is faith alone that leads me to believe in a reasonable universe.

As a note on that, I acknowledge that the “reasonability” of a universe does not always correlate with its plausibility, as I see it. The materialists, for all I know, could be utterly right. In fact, I find the unreasonable universe of the materialist atheist far more credible than the perverse and semi-reasonable universe of literalism in any Abrahamic religion, which has the obvious markings of people making shit up to terrify others and acquire power. If I were forced to believe in one or the other, I would take the former without hesitation.

I believe in a reasonable universe, and I find myself asking, “What happens after death?” I must admit that I don’t know. I don’t think anyone knows. The best I can do is look for patterns based on the (possibly ridiculous and wrong) assumption of a reasonable universe. The pattern that I’ve seen, as discussed above, is that virtually every question about the world turns out to have an answer far more interesting than any explanation humans have invented. It is reasonable to expect, although not certain, that the same pattern applies to death. When the truth is revealed (as it is to all of us, unless there is no afterlife), I expect it to be far more interesting than any scenario humans have invented.

Afterlife scenarios invented by humans are always either insufferably boring, or (in the case of hells) non-boring only on account of being so terrible (but if eternal hells exist, we do not live in a reasonable universe). Materialists believe that consciousness is purely a result of physical processes and is therefore annihilated upon death. Boring. “Pop” Christianity posits a harp-laden, sky-like heaven in which the dead “smile down” upon the living– a modern form of ancestor veneration wherein heaven is like a retirement community. Boring. Biblical literalists believe in a final battle between “good” (represented by a murderous, racist, ill-tempered and misogynistic deity) and “evil” in which the final outcome is already determined. Boring. Ancient Greeks believed the dead lingered in a dank cave overseen by a guy named Hades. Boring. All of these have the markings of the somewhat creative but utterly boring and unfulfilling explanations that humans invent when they don’t understand something.

I haven’t yet discussed reincarnation, some form of which I believe is the most likely afterlife scenario. It’s not boring, but “reincarnation” is not so much an answer as an afterlife schema admitting a plethora of possibilities and raising more questions. Questions raised include the following. What, if anything, happens between lives? Is our reincarnation progressive, as suggested by what seems like an intensifying trend of increasingly complex consciousness in this world (and incremental improvements, over the course of history, in human existence), or is it stagnant, chaotic, cyclical, or even regressive? Does a deity assist us by lining us up with appropriate rebirths, do we decide on our rebirths, or is the process essentially mindless? Can humans reincarnate as animals, or as beings on other planets? How atomic is the “soul”– does it carry a personal identity, as Hindus assert, or is it as much affected by the forms it takes as it affects them, as many Buddhists assert? What is the role of the impersonal, almost mechanical force known as karma? Do any deities intervene with it and, if so, how and why?

I have my beliefs, not perfectly formed, on all these matters, and I admit that they are artifacts of faith. They emerge from my (possibly ridiculous) faith in a reasonable universe and my estimate of what a reasonable universe might look like after death, from a vantage point where these questions might finally be answerable. I am, of course, just one human trying to figure things out. That is the best any of us can do, I believe, since such animals as “prophets” and the gods they have invented almost certainly do not exist.

I’m deeply agnostic on many matters, but if asked what happens after death, or what is the meaning of life, I’d answer as follows. No one knows for sure, obviously, but I’m overwhelmingly convinced that the answer is far, far more interesting than any explanation put forth by humans thus far (including any that I could come up with).

With that, I yield the floor.