A Nonconventional Philosophical Argument for Survival

In this essay, I present one case for the existence of an afterlife. It is not a scientific argument. It’s certainly not a proof, and offers nothing in the way of scientific evidence. There is, as of now, nothing approaching proof of any sort of afterlife. Although reincarnation research is promising, it is in such a primitive state that nothing it offers can survive the rigor science demands, and even if its findings held up, they would only raise a host of new questions. Hard evidence is scant on all sides of the debate and, scientifically speaking, absolutely no one knows what happens after death.

In fact, I’d argue that the physical sciences, as they are, intrinsically cannot answer many questions of consciousness including, most likely, the matter of the afterlife. Physical science relies utterly on a certain pragmatic reduction: two physically identical objects– that is, two objects that respond identically to all objective methods of observation– are considered equivalent. Scientists need this reduction to get anything done; if carbon atoms had individual “personalities” that scientists were required to understand, one simply couldn’t comprehend organic chemistry. So scientists generally assume that a carbon atom in a specific state is perfectly identical to another carbon atom in the same physical state, and this assumption proves valid and useful throughout all of the physical sciences– biology, chemistry, and physics.

Where this reduction fails, and probably the only place where it fails, is on the question of consciousness. Let’s assume the technology exists, at some point in the future, to create an exact physical copy of a person. (It’s unlikely that this will ever be possible with sufficient precision, due to the impossibility of reproducing an arbitrary quantum state, but let’s assume otherwise for the sake of argument.) Assuming that a “spark of life” can be injected into the replica, this person is likely to be indistinguishable from the original to all observers except for the individual copied and the copy, who might retain separate consciousnesses. Will this newly created person– an operational biological machine, at least– have a consciousness or not? I’m agnostic on that one, and there is no scientific way of knowing, but let’s assume that the answer is “yes”, as most materialists (who believe consciousness is a byproduct of purely physical processes) would. Will he or she have the same consciousness as the original? Everything in my experience leads me to believe that the answer is no, and the vast majority of people (including most monists) would agree with me. The original and the copy would, from the point of creation, harbor separate consciousnesses (they are not physically linked) that would begin diverging immediately.

This, to me, is the fundamental strike against mind uploading and physical immortality. It may be physically possible to copy all of a body’s information, but commanding its consciousness (after destruction of the original) to bind to the copy is impossible. It’s extremely likely that humans will defeat aging by 2175, if not long before then, meaning that the first 1000-year-old will be born before the end of my natural life (ca. 2065). But I do not expect any success whatsoever in the endeavor of mind uploading; destruction of the whole brain will always spell out the same fate that it does now: the irreversible end of one’s earthly existence. (Fifth-millennium humans are likely to confront this problem by storing their brains in extremely safe repositories, while interacting electronically and remotely with robotic “bodies” in the physical world as well as virtual bodies in simulated worlds.) With this in mind, as well as the probable impossibility of physical immortality given the likely eventual fate of the universe, it should be obvious even to the most optimistic transhumanists what nearly all humans who have lived have taken, and most humans even now take, for granted: we will all die. This, of course, terrifies and depresses a lot of people, involving a credible threat of nonexistence and even the (very remote, in my opinion) possibility of fates far worse.

I’m going to put forward an argument that suggests that, if there is a reasonable universe, our consciousness survives death. This is an argument that, although it proves nothing and relies on an assumption (a reasonable universe) that many reject, I have not heard before. At the least, I find it viscerally convincing and interesting. Here is that argument: virtually every phenomenon humans have ever investigated turns out, in truth, to be far more fascinating than any hypothesis offered before the truth was known.

1. Math

I’m going to start this discussion in mathematics with a familiar constant: pi, or the ratio between a circle’s circumference and its diameter. Believed in Biblical times to be 3, it was later estimated with ratios like 22/7, 201/64, 3.14, or the phenomenally accurate Chinese estimate, 355/113. Archimedes, applying contemporary knowledge of geometry to the regular 96-sided polygon, painstakingly proved that this ratio was between 3 10/71 and 3 1/7, but could not determine its exact value. One can imagine this fact to be distressing. All of these estimates were known to be only approximations of this constant, but for two millennia after the constant’s definition, it was still not known whether an exact fractional representation of the number existed.
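Archimedes’ polygon method translates directly into a few lines of arithmetic. Here is a minimal numerical sketch (the function name is my own): starting from a hexagon inscribed in a unit circle and doubling the side count four times reaches the 96-gon he used, trapping pi between the half-perimeters of the inscribed and circumscribed polygons.

```python
import math

def archimedes_bounds(doublings=4):
    """Bound pi between inscribed and circumscribed regular polygons.

    Starts from a hexagon inscribed in a unit circle and doubles the
    side count `doublings` times (4 doublings -> the 96-gon).
    """
    n = 6        # sides of the starting hexagon
    s = 1.0      # side length of a hexagon inscribed in a unit circle
    for _ in range(doublings):
        # Half-angle identity: new side length after doubling the side count.
        s = math.sqrt(2 - math.sqrt(4 - s * s))
        n *= 2
    lower = n * s / 2                                  # inscribed half-perimeter
    upper = n * s / (2 * math.sqrt(1 - (s / 2) ** 2))  # circumscribed half-perimeter
    return n, lower, upper

n, lo, hi = archimedes_bounds()
# The 96-gon bounds land inside Archimedes' published bounds, 3 10/71 < pi < 3 1/7.
```

The striking thing is that the recurrence uses nothing beyond square roots, which is why the method was available two millennia before calculus.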

A similar quandary surrounded the square root of 2, the ratio between a square’s diagonal and the length of one of its sides, although this number’s irrationality was far easier to prove, as the Pythagoreans did some time around the 5th century BCE. Before the Pythagorean proof of the irrationality of the square root of 2, and Cantor’s (much later) proof that an overwhelming majority of real numbers must be irrational, it was quite reasonable to expect pi to be a rational number. Before the Pythagorean discovery, the only numbers humans had ever known either were integers (whole numbers) or were, or could be, the ratio of two integers. No one knew which ratios of integers the square root of 2 or pi might be, but it must have seemed likely that they were rational, those being the only numbers humans had the language to describe precisely. Of course, it turned out that they were not, though pi’s irrationality was not proven until the 18th century, more than two millennia after the discovery of irrational numbers.
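The Pythagorean argument itself is short enough to sketch in full, in modern notation (a parity argument, rather than the geometric phrasing the Pythagoreans would have used):

```latex
\text{Suppose } \sqrt{2} = p/q \text{ with } p, q \text{ integers sharing no common factor.} \\
\text{Then } p^2 = 2q^2 \;\Rightarrow\; p^2 \text{ is even} \;\Rightarrow\; p \text{ is even, say } p = 2k. \\
\text{Substituting, } 4k^2 = 2q^2 \;\Rightarrow\; q^2 = 2k^2 \;\Rightarrow\; q \text{ is even as well,} \\
\text{contradicting the assumption that } p/q \text{ was in lowest terms.}
```

Pi admits no comparably simple argument, which is part of why its irrationality took two millennia longer to establish.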

At least some people did not want this. They desperately wished to find the ratio that was pi, and only found, in the end, that none existed. Even the advent of algebra in the first millennium CE did not make pi accessible: unlike the square root of 2, which can be algebraically extracted from the rationals (as the solution to x^2 – 2 = 0), pi cannot, since the number is (again, as proved much later) transcendental. This made life and mathematics a fair bit more difficult, and many may have met the discovery of irrational numbers with displeasure, but it certainly made mathematics far more interesting.

Pi emerges, sometimes unexpectedly, in all of mathematics. For one particularly elegant example, the infinite sum of reciprocal squares (i.e. 1/1 + 1/4 + 1/9 + 1/16 + 1/25 + …) is pi squared, divided by 6. Although no more than 100 digits of pi are needed to provide sufficient accuracy for any physical purpose, we have algorithms today that enable us to generate pi’s digits (into the billions) extremely quickly. The number may be inaccessible through ratios and algebraic expressions, but we can very easily compute it with as much precision as we wish, which is more than can be said for the truly inaccessible noncomputable numbers. Still, we can’t answer some basic questions about it. Whether the digits of pi are normal (that is, whether they behave as if generated by a uniform random source) is an open question. Strange statistical patterns (such as a paucity, or even eventual absence after some point like 10^200 digits, of the digit ‘3’) may exist in pi’s digits in some base, but it is utterly unknown whether any do.
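That reciprocal-squares identity (the Basel problem) is easy to check numerically; here is a minimal sketch, with a function name of my own choosing:

```python
import math

def basel_partial_sum(n_terms):
    """Partial sum of 1/1 + 1/4 + 1/9 + ..., which converges to pi^2 / 6."""
    return sum(1.0 / (k * k) for k in range(1, n_terms + 1))

# The tail of the series beyond n_terms is roughly 1/n_terms, so a
# million terms agrees with pi^2 / 6 to about six decimal places.
approx = basel_partial_sum(1_000_000)
target = math.pi ** 2 / 6
```

The slow 1/n convergence is itself a reminder of why closed forms like pi²/6 were prized: brute summation gets you digits grudgingly, while the identity gives them all at once.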

The beauty and “interestingness” of mathematics are difficult to put into words, but I would argue that they stem from apprehension of the infinite. As soon as the concept of prime numbers existed, people must have desired to attain a list of all of them. People are natural collectors, and the acquisitive drive must have led many to wish to “have” all the prime numbers. Using a beautiful argument that a modern high schooler could understand, Euclid proved this impossible: there are infinitely many of them. This marvelous result established that, although we cannot “reach” the infinite, we can reason about it in non-trivial ways. In my opinion, Euclid’s theorem is the birth of modern mathematics, which (even in its finite and discrete manifestations, where asymptotic behaviors and general patterns within the finite are the true objects of study, due to humanity’s insatiable curiosity about what is next) is the art of reasoning about infinity.
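Euclid’s argument translates directly into code. This sketch (the function names are my own) takes any finite list of primes and produces a prime not on the list, by factoring the product of the list plus one; since every prime on the list leaves remainder 1 when dividing that number, any prime factor of it must be new.

```python
def smallest_prime_factor(n):
    """Return the smallest prime factor of n (n >= 2), by trial division."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor up to sqrt(n): n itself is prime

def prime_not_in(primes):
    """Euclid's construction: a prime dividing (product of the list) + 1
    cannot appear on the list, so no finite list of primes is complete."""
    product = 1
    for p in primes:
        product *= p
    return smallest_prime_factor(product + 1)
```

For example, starting from [2, 3, 5] the construction yields 31. Note that the new prime need not be the product plus one itself: from [2, 3, 5, 7, 11, 13] it yields 59, a proper factor of 30031.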

From such findings, a magnificent number of beautiful, surprising, and awe-inspiring results followed. Cantor proved that not all infinities are equal, and that for each infinity we can define a far larger one. Later, Gödel exploited the fact that formal mathematics is an infinite collection of statements generated by a finitely describable set of axioms and rules to establish the ability of mathematics to reason about itself, in fact proving the incompleteness of any sufficiently powerful formal system: no consistent logical system capable of arithmetic can decisively prove or disprove every statement expressible within it. (A byproduct of his doing so was Gödel’s embedding of a list-processing language into number theory, arguably inventing an ancestor of Lisp.) Alonzo Church and Alan Turing established analogous results regarding computation, and a consequence of their work was no less than laying the foundations for modern computer science.

Despite the obvious epistemological problems with “counterfactual mathematics”– pi simply could not be any other number– I’ll note that if pi had been rational, had the list of primes been finite, or had formal mathematics been complete, the world would have been a far more boring place, and much less would have been done with mathematics.

2. The physical sciences

In mathematics, people generally agree on what they know and what they don’t. If the question of the completeness of formal systems could be put in a way that would have made sense to a 5th-century BCE mathematician, he would admit that he had no idea whether the proposition was true or false. In the natural world, this is true among scientists, but it’s not true among people as a whole. At least in large groups willing to believe the most credible explanation put to them, people, when they don’t know something, seem to make up stories about it. For as long as humans have existed, they’ve invented explanations for natural phenomena. Those explanations have mostly been wrong and, moreover, quite frankly a bit boring.

Ancient Greeks, at least among the less educated, seemed to believe that lightning was a bolt of fire thrown by an ill-tempered man named Zeus. Boring, and wrong. In fact, it’s an enormous electrical discharge produced by the motion of subatomic particles, governed by attractions and repulsions of truly immense force; an energy source that, if harnessed properly, enables a host of extremely powerful technologies that are only in their infancy to this day. Interesting, and right. Before Newton, earthly objects were believed to fall because they possessed an intrinsic material property called gravity, while the heavens possessed levity and could remain aloft. Boring, and wrong. We now know that these behaviors can all be explained by a single force (gravity) that not only allows us to reason about cosmic machinery, but also admits such bizarre beasts as black holes and general relativity. Interesting, and right. Likewise, the stars were once held to be tiny holes in a giant black dome behind which a brilliant fire burned. Boring, and wrong. In fact, they’re massive nuclear-powered furnaces, born of gas clouds and gravity, that glow for billions of years, occasionally eject hot, ionized material, and sometimes die violently in a style of explosion (the supernova) of such atom-smashing force as to create chemical elements that otherwise would not exist, many of which we require for our own existence. Interesting, and right. Finally, it was once believed (and by some, it still is) that our bodies and those of all animals were designed from scratch and immutably fixed by a deity with specific tastes but a tendency toward slightly sadistic design. Boring, and (almost certainly) wrong. We now know that all of these species emerged from four billion years of natural selection, that an enormous number of powerful and strange animals once existed, and that accelerations of this evolutionary process began happening (in the context of an immensely complicated and frenetic terrestrial ecosystem) half a billion years ago, and continue to this day. Interesting, and right.

The general pattern is this: humans invent the best explanation they can come up with at the time. It turns out to be primitive, wrong, and boring. The accumulation of knowledge and the invention of superior tools allows people to discover the truth which, although it offends many greatly, turns out to be far more interesting than any explanation that preceded it, and inevitably opens a number of fruitful questions and applications.

3. Afterlife

I’m a deist. I do not believe in an anthropomorphic, interventionist deity, and certainly not in the villainous, supernatural characters of Bronze Age works of literature that, if taken literally, are almost certainly heavily fictional. However, I have faith in a reasonable universe. I admit that this is an extraordinary claim. For example, it’s not impossible that, in the next 5 seconds, my consciousness will, with no cause, abruptly leave my body and enter that of the man shoveling snow outside my window, or of a 7th-century Chinese peasant, or of an extraterrestrial being in another galaxy. I simply believe it will not happen. I likewise admit that it is possible that I will be eradicated by the spontaneous dematerialization of every atom in my body, but I regard even asteroid strikes and random heart attacks as far more credible threats to my existence.

If we live in an unreasonable universe, we can’t reason about or know anything. In an unreasonable universe, I might exist only this second, and my memories of previous existence may be false. There is truly nothing we can know, reason about, or credibly believe in an unreasonable universe; there is just absurdity. An unreasonable universe doesn’t mean that we can’t use reason and do science– on pragmatic grounds, we can, so long as they work. It merely admits the possibility that they might stop working– that, two seconds from now, the gravitational constant may begin doubling every nanosecond, collapsing our world instantly into a black hole.

Most religions posit a fundamentally unreasonable universe governed by capricious gods, but materialistic monism also establishes an unreasonable universe. Although the abiogenic origin of life and the evolution of complex organisms can be explained by natural processes (and in evolution’s case, already has been adequately explained), the emergence of qualia, or consciousness, out of an elaborate pinball machine posits an unreasonable universe in which conscious beings pop into existence merely because a sufficiently complex physical system exists. It is deus ex machina, except with somewhat less impressive beings than gods emerging from the machine. This does not mean that materialists are wrong! It is reasonable and defensible to believe in an unreasonable universe, and moreover it is unreasonable to reject outright the possibility of an unreasonable universe when absolutely no proof of a reasonable universe has been made available. It is faith alone that leads me to believe in a reasonable universe.

On that note, I acknowledge that the “reasonability” of a universe does not always correlate with its plausibility, as I see it. The materialists, for all I know, could be utterly right. In fact, I find the unreasonable universe of the materialist atheist far more credible than the perverse and semi-reasonable universe of the Biblical literalism found in any Abrahamic religion (which has the obvious markings of people making shit up to terrify others and acquire power). If I were forced to believe in one or the other, I would take the former without hesitation.

I believe in a reasonable universe, and I find myself asking, “What happens after death?” I must admit that I don’t know. I don’t think anyone knows. The best I can do is look for patterns based on the (possibly ridiculous and wrong) assumption of a reasonable universe. However, the pattern that I’ve seen, as I’ve discussed above, is that virtually every question about the world turns out to have an answer that is far more interesting than any explanation humans have invented. It is reasonable, although not certain, that the same pattern applies to death. When the truth is revealed (as it is to all of us, unless there is no afterlife) I expect it to be far more interesting than any scenario humans have invented.

Afterlife scenarios invented by humans are always either insufferably boring, or (in the case of hells) non-boring only on account of being so terrible (but if eternal hells exist, we do not live in a reasonable universe). Materialists believe that consciousness is purely a result of physical processes and therefore annihilated upon death. Boring. “Pop” Christianity posits a harp-laden, sky-like heaven in which the dead “smile down” upon the living– a modern form of ancestor veneration wherein heaven is like a retirement community. Boring. Biblical literalists believe in a final battle between “good” (represented by a murderous, racist, ill-tempered and misogynistic deity) and “evil” in which the final outcome is already determined. Boring. Ancient Greeks believed the dead lingered in a dank cave overseen by a guy named Hades. Boring. All of these have the markings of the somewhat creative but utterly boring and unfulfilling explanations that humans invent when they don’t understand something.

I haven’t yet discussed reincarnation, some sort of which I believe is the most likely afterlife scenario. It’s not boring, but “reincarnation” is not so much an answer as a proposition that raises more questions. “Reincarnation” is not a specific afterlife so much as an afterlife schema admitting a plethora of possibilities. Questions raised include the following. What, if anything, happens between lives? Is our reincarnation progressive, as indicated by what seems like an intensifying trend of increasingly complex consciousness (and incremental improvements, over the course of history, in human existence) in this world, or is it stagnant, chaotic, cyclical, or even regressive? Does a deity assist us by lining us up with appropriate rebirths, or do we decide on our rebirths, or is the process essentially mindless? Can humans reincarnate as animals, or as beings on other planets? How atomic is the “soul”, i.e. does it carry a personal identity, as Hindus assert, or is it as much affected by the forms it takes as it affects them, as many Buddhists assert? What is the role of the impersonal, almost mechanical force known as karma? Do any deities intervene with it and, if so, how and why?

I have my beliefs, not perfectly formed, on all these matters, and I admit that they are artifacts of faith. They emerge from my (possibly ridiculous) faith in a reasonable universe and my estimate of what a reasonable universe, after death and from a vantage point where these questions might finally be answerable, might look like. I am, of course, just one human trying to figure things out. That is the best I believe any of us can do, since such animals as “prophets” and the gods they have invented almost certainly do not exist.

I’m deeply agnostic on many matters, but if asked what happens after death, or what is the meaning of life, I’d answer as follows. No one knows for sure, obviously, but I’m overwhelmingly convinced that the answer is far, far more interesting than any explanation put forth by humans thus far (including any that I could come up with).

With that, I yield the floor.

Programming, like writing, gets harder as you get better.

Writing’s hard. I don’t think anyone who has done it and taken it seriously, whether in creative fiction or in precise, technical nonfiction, disagrees with this claim. What makes it either very difficult or endlessly rewarding, depending on one’s perspective, is that it remains challenging as one progresses, because one’s increasing competency is paced equally with, if not outpaced by, the escalating standards to which one’s own work is held. Many of the millions of wannabe novelists out there believe that the reason they haven’t written (or even started) “their novel” is that the first novel is just too damn hard. Most writers would say that, although it’s the hardest to sell, the first novel is actually the easiest to write. No reputation is at risk, the first novel is expected to be mediocre, and most importantly, one’s own intense self-criticism hasn’t yet set in, at least not in full force. From what I’m told by experienced authors, writing never gets easier, even for the immensely talented and skilled. One notable exception exists: those who are writing only to make money and who have consciously decided to make writing purely a matter of economic optimization. As far as I’m concerned, such people don’t qualify as writers, but that’s another topic for another time.

Programming is similar. Performing specific tasks, obviously, becomes easier as a programmer’s competency grows, but good programmers don’t want only to “solve the problem”. They want to solve the problem correctly, which entails writing code that is generally useful, extensible, and of high aesthetic quality. The code should not be needlessly slow, complicated, or brittle, even if those concerns are irrelevant to the immediate use case (e.g. an inefficient algorithm may be acceptable on small data sets, but is intolerable in code that might be expected to scale to larger inputs). “Kludges”– inelegant solutions– and “anti-patterns”, such as busy-waiting to implement an event loop, may be acceptable to a novice programmer just trying to get a program working, but they become embarrassments to intermediate programmers and are intolerable to experts.
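The busy-waiting anti-pattern mentioned above is easy to make concrete. A minimal sketch in Python (the names here are invented for illustration): the first version spins in a tight loop, pegging a CPU core the whole time it waits; the second blocks on a synchronization primitive and consumes nothing until it is notified.

```python
import threading

ready = threading.Event()

def busy_wait():
    # Anti-pattern: poll in a tight loop, burning CPU while waiting.
    while not ready.is_set():
        pass  # check again immediately, forever, until set

def blocking_wait():
    # Idiomatic: the OS suspends this thread until the event is set,
    # consuming no CPU in the meantime.
    ready.wait()
```

In a real event loop, the blocking form generalizes to condition variables, select/epoll, or a queue’s blocking get; the spinning form stays a kludge no matter how it’s dressed up.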

Definitions of good programming often diverge, as well. In the 1960s, clever, fast, self-modifying assembly code might be considered “good”, as it solved hard problems at record speed, although it would be opaque to anyone required to maintain it in the future. The scope of an average program was smaller than it is today, and a large project written in such a style would likely be discarded if major revisions were required by anyone other than the writer of the original code. In the 2010s, such unmaintainable code would hardly be considered good, even if it were 20% faster than a more maintainable alternative. Then again, such code may be perfectly acceptable if generated by a compiler, as humans rarely read the machine or assembly code their compilers create.

Though there is no strong consensus on what constitutes good code, it’s a matter on which many programmers are immensely opinionated. It has to work, obviously, but that’s setting a low bar. Even the worst programmers can make software “work” according to a minimal specification, given enough time and allowance for inelegance; but the code of a bad or even mediocre programmer is often so unpleasant to read, use, and maintain that it inspires a gnawing and universal desire to throw it all out and start anew.

For my part, I would say that a good programmer must be a good teacher. Code and documentation should teach how the code is to be used, and how each component works. Ideally, programmers would develop in such a manner that the function of every line of code is self-evident, owing both to the innate clarity of the language (a virtue of, say, OCaml) and to the quality of documentation. In practice, most managers will never budget sufficient time to make this a reality, but it’s what software engineers should aim for when they can.

Here we venture into the thicket of aesthetics, where every rule has exceptions that must be learned through practice, and where “known unknowns”– matters on which one knows of one’s lack of knowledge– are only a fraction of total unknowns. (An example of a “known unknown”, for me, would be the German language. I know that it exists, and a few stray words and grammatical principles, but I can’t read or write it. An unknown unknown would be any of the six thousand extant languages that I’ve never heard of.) And that is what makes writing, and software engineering as well, increase in difficulty as one’s skill increases. As one’s knowledge increases, one’s awareness of the gaps in one’s knowledge increases at a more rapid rate. One’s perceived “knowledge ratio” decreases even as one’s actual ignorance wanes. A problem with one known solution is easy to solve; when there are ten, and when one knows there might be a hundred more worthy of study, selecting which is best becomes very difficult.

A genre of essay I sometimes find myself writing is the “problem essay”, whose first act describes an undesirable or inefficient situation, whose second act explores its logical consequences and avenues of approach, and whose final act proposes solutions. This is a very common pattern in writing. Here would be the point at which I propose a “solution”, but I frankly don’t have one. To tell the truth, I don’t know whether the counterintuitive tendency of a craft’s difficulty to increase with one’s improving skill and knowledge is a “problem” in the first place.

Actually, as a game-design snob who enjoys a well-structured challenge, I rather like this aspect of disciplines like writing and computer programming. It keeps things interesting.