# Why 0.999999… = 1. Also, bashing OOP.

This might be a step away from my usual blog fare, just to focus on something cool. Off and on, I enjoy studying logic. I spent a year in a math PhD program, before I was pulled away from it by Wall Street. I don’t regret that decision in the least; graduate school is often fun, but the academic career in general ain’t. That game has been ruined for a good two generations or so. However, sometimes I miss the mathematics itself, even if it seems to take five times as long to really understand a new field when you’re out of school and have to work.

Here’s an interesting question: why, on a fundamental and logical level, is 0.9999999… = 1? This is one aspect of the real numbers that, for example, injects an irritating complication into Cantor’s diagonal argument (for the uncountability of real numbers) because that non-unique decimal representation must be “special cased”.

With all the benefits of higher mathematics, we can sum the infinite series. However, that leaves us reliant on an analytical concept: a limit. It’s true within mathematics as we use it. But is it inherently logically true? There are other mathematical regimes that have different rules. What is it about the real number system that says there can’t be some real number that is greater than every element of {0.9, 0.99, …} but less than 1? Or one that violates other “intuitive” rules that come out of the identification of the real number with the observed (if potentially illusory) physical artifact of the continuum? Or (even) one that is the multiplicative inverse of 0, something we’ve known since grade school doesn’t exist? Why aren’t there a bunch of weird real numbers we don’t know about? What is it, logically, about the real number system, that prevents them?

## Construction of the reals

For review, let’s consider what the real numbers are. We’re going to be speaking purely in formal terms. Mathematics starts with the natural numbers, generated by a base element 0 and a successor operator S such that {0, S0, SS0, …}– in shorthand, {0, 1, 2, …}– has no duplicates. Those are the familiar non-negative integers (or natural numbers): 0, 1, 2, and so on. The full set of integers (Z) can be constructed using pairs (n, p)– read as the value p − n, with n the negative part and p the positive part– such that either n = 0 or p = 0, and defining mathematical operations appropriately. (A more common approach is to remove the additional qualification that n or p must be 0, and define integers as equivalence classes.) Rational numbers can be constructed from integers as pairs (n, d) where gcd(n, d) = 1, i.e. there are no common factors, and d > 0. That gets us to the rational numbers, Q.
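These constructions are concrete enough to execute. Here’s a minimal sketch in Python (the function names are mine, not standard): an integer as a pair (n, p), read as p − n, normalized so that at least one component is zero; a rational as a pair (n, d) in lowest terms with d > 0.

```python
from math import gcd

def make_int(n, p):
    # An integer as (negative part, positive part), meaning p - n.
    # Normalize so that at least one component is zero.
    m = min(n, p)
    return (n - m, p - m)

def make_rat(n, d):
    # A rational as (n, d) with gcd(n, d) = 1 and d > 0; d must be nonzero.
    if d < 0:
        n, d = -n, -d
    g = gcd(abs(n), d)
    return (n // g, d // g)
```

For example, `make_int(3, 5)` gives `(0, 2)` (the integer 2) and `make_rat(6, -4)` gives `(-3, 2)` (the rational −3/2): distinct representations of the same value collapse to one canonical pair, which is what the gcd and sign conditions buy us.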

Georg Cantor proved something beautiful and weird about the reals: that the set of them is uncountable, or a much larger infinity than the size of the natural numbers (the smallest infinity). I’ll get back to that. I only point it out now to show that the notion of what a real number is has surprising depth. It might be intuitive that the infinity within an inch is “no bigger than” the infinity of inches within boundless geometric space (assumed to be countable) but that is not so.

We have real numbers because we want “dividing lines” (I’ll return to that concept, more formally) within the rationals to exist. For example, there’s a set of rational numbers, {q such that q^2 < 2}. Its edges (the positive and negative square roots of 2) do not exist in the world of rational numbers, but we want to believe that there’s a greater framework (the reals) in which numbers such as “the q for which q squared equals 2” do exist.

What are often called “reals” in computing are floating point numbers, which are nothing like real numbers and don’t even follow their rules. Trust me, you don’t want that rant.

One construction of the real numbers is a computational one. A real number r is a function Q -> {True, False} such that:

1. If p < q, then r(p) = False implies r(q) = False. (Decreasing monotonicity).
2. There exist some p and q such that r(p) = True and r(q) = False.
3. There is no p such that r(p) = True but r(q) = False for all q > p.

The first principle means that there’s at most one “switch-over” point where r goes from being True on all values to being False. The second means that it’s not uniformly True or False on all rational numbers: i.e., there is exactly one “switch-over” point. For the third, if that point occurs exactly at a rational number, then r is False for that specific number (to avoid two functions existing that represent the same rational number).

For example, the square root of 2 is represented by the function (in Clojure) whose “switch-over” point is the square root of 2:

```clojure
(fn [x] (or (< x 0) (< (* x x) 2)))
```

All the rational numbers (e.g. 1, 1.4, 1.41, 1.414…) less than the square root of 2 will, when this function is applied to them, return True. All those greater than it (e.g. 2, 1.5, 1.42, 1.415…) will return False.

In an imperative language (like C or Python) it would look like this:

```c
#include <stdbool.h>

/* The argument stands in for a rational number; double is the closest
   convenient type in C. */
bool sqrt2(double x) {
    if (x < 0) return true;
    else if (x * x < 2) return true;
    else return false;
}
```

In practice, mathematicians prefer to use sets (Dedekind cuts) instead of functions, because the foundations for sets are better defined (functions are often defined in terms of sets, although this is not without controversy). For our purposes, they’re isomorphic: a set X in some larger domain D is equivalent to a function D -> {True, False} returning True on all elements of X. I’m using a function-based approach largely because it matches the computational worldview of most of my audience.

There are a few interesting things to say here. First, not all functions on rational numbers are valid real numbers. For example, this one:

`(fn [x] (< (* x x) 2))`

violates the first principle of monotonicity, because it returns False for -2 but True for 0. This one:

`(fn [x] true)`

is not a valid real number because it has no rational q for which r(q) = False. It corresponds to an “infinite” real number, greater than all rational numbers. Why is that a problem? I’ll get to that, too.
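These validity rules can be checked mechanically, at least in the refuting direction: sampling the candidate function on finitely many rationals can expose a violation, though it can never certify validity. A sketch in Python (the helper names and sample set are mine):

```python
from fractions import Fraction
from itertools import product

# A finite sample of rationals n/d on which to spot-check a candidate cut.
SAMPLE = sorted({Fraction(n, d) for n, d in product(range(-8, 9), range(1, 9))})

def refutes_monotonicity(r):
    # Rule 1: once r goes False, it must stay False at all larger rationals.
    # A False followed by a True among adjacent sample points refutes that.
    return any(not r(p) and r(q) for p, q in zip(SAMPLE, SAMPLE[1:]))

def refutes_nontriviality(r):
    # Rule 2 (partially): being uniformly True or uniformly False on the
    # sample is evidence (not proof) that the function is not a real number.
    values = [r(q) for q in SAMPLE]
    return all(values) or not any(values)

sqrt2 = lambda x: x < 0 or x * x < 2   # a valid cut, for sqrt(2)
bad1  = lambda x: x * x < 2            # breaks rule 1 at negative rationals
bad2  = lambda x: True                 # breaks rule 2: an "infinite" number
```

Here `bad1` is caught because it is False at −3/2 but True at −7/5, and `bad2` because it never returns False on the sample; the valid cut passes both spot-checks.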

## Sizes of infinite sets

One of the most important mathematical revelations of the modern era is that there are different sizes of infinity. Why is this important? Let’s speak again in a highly intuitive (for a practical-minded person) framework of computation. (Actually, what a computer is is far from intuitive; but I’ll side-step that for now.) Let’s say that we’re playing a two-person game. There’s some published set T of which I (“Player A”) pick an element k. Your job, as Player B, is to guess what I picked, but you have an arbitrary amount of time in which to do it. We’re interested in whether the game is an eventual win for Player B.

If that set T is the natural numbers, Player B will win. Let’s say that I pick 431,892,655,173,902. If Player B guesses the natural numbers in order, starting from 0, he’ll eventually pick my k. In fact, every k will be selected after some finite number of steps; the whole set is eventually traversed. It will take an extremely long time (too long for the game to be practical) but, in theoretical terms, this game is a Player-B win. The full set of the integers has the same property; there’s a way to “list” them such that all are covered: {0, 1, -1, 2, -2, 3, -3, …}. This is also true of the rationals, and of finite-length strings over any finite alphabet (i.e. conceivable words). We call such sets countable.
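That “listing” of the integers can be written as a generator, which is essentially Player B’s strategy; a sketch:

```python
from itertools import islice

def integers():
    # Traverse Z as 0, 1, -1, 2, -2, ...: every integer shows up after
    # finitely many steps, which is exactly what "countable" buys Player B.
    yield 0
    n = 1
    while True:
        yield n
        yield -n
        n += 1

print(list(islice(integers(), 7)))  # [0, 1, -1, 2, -2, 3, -3]
```

Any fixed integer k, however large in magnitude, appears at position 2|k| or 2|k| − 1, so the guessing game over Z terminates.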

Real numbers don’t have this property, as Georg Cantor proved. The “guessing game” I described, for reals, is not a player-B win. Whatever list player B might be using, I can construct a real number that differs from every element of it. Given infinite time (assuming player B’s list to be static) there are numbers I can choose, as player A, that he will never guess. Whoops.
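The construction behind that claim is Cantor’s diagonal argument, which can be sketched in code. Here Player B’s list is modeled (for the demo, truncated to three entries) as functions from digit positions to decimal digits; the constructed number differs from the n-th entry in its n-th digit. This is my illustrative framing, not a full proof:

```python
def diagonal(listed):
    # listed[n] maps a digit position to a decimal digit, modeling the
    # n-th real on Player B's list. The result's n-th digit differs from
    # the n-th entry's n-th digit. Using only the digits 5 and 6 sidesteps
    # the 0.999... = 1.000... ambiguity that forces the "special casing"
    # mentioned earlier.
    return [5 if listed[n](n) != 5 else 6 for n in range(len(listed))]

listed = [lambda n: 3,                      # 0.3333...
          lambda n: 4 if n % 2 == 0 else 1, # 0.4141...
          lambda n: 5]                      # 0.5555...
print(diagonal(listed))  # [5, 5, 6]: 0.556... differs from all three
```

Run against an infinite list, the same rule yields a real number that disagrees with every listed entry somewhere, so no list of reals can be complete.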

There are practical wrinkles to all of this. Turing, Church, and Gödel all defined computability in terms of functions from one countably infinite set (such as integers, or strings over a finite alphabet) to another. There are an uncountably infinite number of such functions, but only a countable number can be computed, because the computation must be described in some string (“program”) of finite length. Uncountable infinities are infinitely larger than countable ones, so that means that only an infinitesimal fraction of mathematical or computational functions that exist “platonically” are even accessible by humans. Why? Well, when it comes to Turing Machines and all the attendant results about the limits of computation, the joke is really on us. We may or may not be Turing machines; we don’t know enough about our brain states, consciousness, or metaphysics. If we exist after death, we might have access to “non-computable knowledge” or communion with superior beings; but in this world, as humans, we’re limited by the constraints of Turing Machines, at least insofar as we can communicate losslessly with another person. What we can think may or may not be limited by the computable, but what we can communicate or prove is under such limitations.

Let’s jump back to my guessing game. As Player A, I can theoretically choose a real number that will not occur on B’s list; and that would not be true over the natural numbers. In fact, if I pick “a random real number” it won’t be on Player B’s list with probability one. Can one generate “a random real number”? No, at least not in a completed (or “eager”, in computational terms) form. Let’s assume that there are “random” physical processes: a truly random 10-sided die. This could be used to generate such a number, but it would take infinite time to do so; player B would perceive it (reasonably) as Player A “making it up as he goes along” to thwart him, and he’d be right because the number would never exist in completed form. In practical reality, if I have to choose a specific real number that I must know in completed form, I’m limited to the countable set of numbers that I can describe in finite time. In fact, I must be able to compute them in order to assess whether Player B’s choice is correct, and even that is not enough. (Whether two computable numbers, with different descriptions, are identical is not always computable.)

The major takeaway from this is that, even with the constraints above on the real numbers, the vast majority of them are inaccessible to us in any mathematical language, with the accessible set being infinitesimally small. Remember also that, as defined above, real numbers are an infinitesimally small subset of functions. That’s one reason why the notion of mathematical function and computation diverge. It also may shine some light on why no general-purpose programming language will ever support semantically proper “function equality”. It’s not even theoretically possible.

In fact, if we have two descriptions of real numbers, we may not always be able to tell (even with infinite time and computing resources) whether they describe the same real number. Sometimes it’s obvious. For example:

`(fn [x] (or (< x 0) (< (* x x) 2)))`

and

`(fn [x] (or (< x 0) (< (* x x x x) 4)))`

correspond to identical functions and, in the construction above, the same real number. In the general case, though, this determination cannot be made, no matter how smart you are or how much you have in the way of computing resources. Any programming language that provides a true real number type (as opposed to floating-point numbers) would have to issue a warning about equality comparison: it may not terminate.
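One way to see why: inequality of two cuts is semi-decidable. We can search for a rational on which they disagree, and if they denote different reals we’ll eventually find one; but if they denote the same real, the search runs forever. A Python sketch (the enumeration order and the `limit` cutoff are mine, added so the demo terminates):

```python
from fractions import Fraction
from itertools import islice

def rationals():
    # Enumerate Q: 0, then +/- n/d in order of increasing n + d.
    yield Fraction(0)
    total = 2
    while True:
        for d in range(1, total):
            n = total - d
            q = Fraction(n, d)
            if q.numerator == n and q.denominator == d:  # skip 2/2 = 1/1, etc.
                yield q
                yield -q
        total += 1

def find_disagreement(r, s, limit):
    # Semi-decision procedure for inequality: return a rational where the
    # cuts r and s differ, or None if no witness appears among the first
    # `limit` rationals. When r and s denote the same real, no witness
    # exists, and an unbounded version of this search never terminates.
    for q in islice(rationals(), limit):
        if r(q) != s(q):
            return q
    return None

a = lambda x: x < 0 or x * x < 2    # sqrt(2)
b = lambda x: x < 0 or x ** 4 < 4   # also sqrt(2)
c = lambda x: x < 0 or x * x < 3    # sqrt(3)

print(find_disagreement(a, b, 500))  # None: no witness found
print(find_disagreement(a, c, 500))  # 3/2: a says False, c says True
```

For `a` and `c` the search halts quickly at 3/2 (since 2 < 9/4 < 3); for `a` and `b` the bounded search comes back empty, but no finite amount of searching can promote that emptiness into a proof of equality.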

## Object orientation, and sizes of things, based on their properties

We’ve taken a bottom-up view of what natural numbers are: everything generated by 0 and the successor operator, S, and nothing more. This corresponds to the C/OCaml view of programming where everything is defined, at root, from some set of well-known primitives (e.g. integers, strings). But there is another view of programming that tends to be top-down, classifying things based on properties and behaviors.

For example, take mathematical integers. They aren’t the same thing as the “int” type of a typical computer language. In Java, for one example, int corresponds to an integer that must fit within 32 bits, and behaves in truth like Z_2^32 (integers modulo 2^32) rather than the full (infinite) set. Basic algebraic properties of integers don’t necessarily hold. For example, for mathematical integers, ab = 0 => a = 0 or b = 0. For 32-bit “integers” under typical behavior (without error-checking that turns an overflow into an exception– that is, an event either explicitly handled in the code, or resulting in a crash) this is not true: if a and b are both 2^16, then their product (due to overflow) will be 2^32, which is converted to 0 to fit within a 32-bit space. Whoops!
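The overflow is easy to reproduce. Python’s own integers are arbitrary-precision, so the 32-bit wraparound has to be simulated explicitly; this sketch mimics the typical unchecked two’s-complement behavior:

```python
def as_int32(x):
    # Simulate unchecked 32-bit two's-complement arithmetic: keep the low
    # 32 bits, then reinterpret the high bit as a sign.
    x &= 0xFFFFFFFF
    return x - (1 << 32) if x >= (1 << 31) else x

a = b = 2 ** 16
print(a * b)            # 4294967296: the true mathematical product
print(as_int32(a * b))  # 0: nonzero a and b, yet "a * b = 0"
```

So ab = 0 with a != 0 and b != 0: the fixed-width “integers” violate an algebraic law that genuine integers obey.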

We can get integers that behave the way we want (excluding memory limitations of the host machine, which are usually not a problem) but we have to allow them to grow to arbitrary size. Under the hood, they’re built on top of the much more efficient fixed-size “integers”, just as integers themselves can be built from concatenations of bits (0/1 variables). Here’s the rub: the application-level programmer– say, a number theorist who regularly needs to work with large integers– shouldn’t be concerned with the underlying details of how those large numbers are represented in the machine. She wants to know that 561,345,802,654,197,833 times 5,174,120,929,311,340,549,616 won’t equal 0 (because, in mathematics, that’s not true) but doesn’t care how the bytes are laid out in the physical machine. (Endian? What’s that?) What she really cares about are top-view behaviors of these numbers, not how they are built from the primitives that machines know how to handle.

Thus, we have a view of programming where people focus on the behaviors of things rather than what they are at a deep level. It’s not just integers to which this applies; files are also complex and implementation-dependent under the hood, but the user shouldn’t be concerned with the details of, say, a robotic arm reading a rotating magnetic disk. This leads us to the “object-oriented” view of programming. Now, modern “object orientation” has been associated with ugly and over-complicated designs, shoddy engineering, mediocre aesthetic tastes, and heaping deskfuls of business bullshit, so it might be surprising to hear someone like me saying anything positive about “object-oriented programming”. I’ll say that there’s a lot of merit in the original vision, which is focused on abstraction. A mathematical integer is an abstraction, and so is the concept of a file; each, to a programmer, allows exploitation of certain functionality without concern for the underlying, and immensely complex, physical system.

In the object-oriented world, we don’t have the possibility for perfect knowledge of something. In the bottom-up world, we might represent a date as an ordered triple, e.g. (1776, 7, 4). We know everything there is to know about it, as an ordered triple of three numbers, but there are shortcomings to this approach. We don’t necessarily have a good idea of its semantic meaning. Nothing, for example, tells us that we’re representing July 4 and not April 7, 1776, since both date conventions are valid. We don’t even know that it’s a date; it could be some triple of numbers assembled for other reasons. In the object-oriented world, we’re more likely to have an object and know that it’s a Date. It might have a .toStringUS method that returns “07-04-1776” and a .toStringEU method that returns “04-07-1776”. What other methods might it have? We don’t always know; often we don’t care. Object orientation allows us to work with things of which we have imperfect knowledge. There’s some good to that, because in designing large industrial or computational systems it is exceedingly rare and usually prohibitively costly (if not impossible) to have perfect knowledge. There’s also an incredible amount of bad that comes with it; modern business “object-oriented programming” has become a school of programming that celebrates (rather than cautiously contending with) imperfect knowledge and complexity. It’s dangerous ground. Object inheritance, for example, should be used extremely rarely in practical programming; it’s the 21st-century goto statement.

Having taken my soapbox, let me explain the theoretical aspect of a top-down (or object-oriented) worldview. Instead of data types being constructed from lower ones (e.g. date as a triple of three integers) there is a “top” type called Object. Data types correspond to what we know about something. With an Object, all we know is that it exists. (We might have more in practice, like Java’s .toString and .hashCode, but only because Java precludes objects that don’t have such methods defined.) It’s hard to make use of something when knowing absolutely nothing about it, so usually there is a hierarchy of increasingly specific types, traits, or aspects to something. We might call an object Stringable if it has a .toString method (i.e., a way to represent it as a set of characters in some alphabet). We might call it Immutable if it has no internal state (that is, its response to any method call is the same no matter what methods have been called before it). We may have a class called Matrix for objects that can be viewed as a mathematical matrix (and support the expected operations) with subclasses SparseMatrix and DenseMatrix to implement the differing engineering tradeoffs for the two cases, and the beauty of this is that the programmer can often write code that works for a Matrix without concern to which kind is being operated on. The program might switch implementations dynamically for efficiency’s sake, allowing the application-level user to simply write mathematics and have the machine deal with engineering. In other words, the application-level programmer gets to work in the realm of what but doesn’t need to care how something is constructed.
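The Matrix example can be sketched concretely. This is a hypothetical miniature (class and method names are mine, and only one operation is shown): callers program against the abstract interface, and the storage strategy stays an implementation detail.

```python
from abc import ABC, abstractmethod

class Matrix(ABC):
    # The "what": any matrix can be asked for the entry at (i, j).
    @abstractmethod
    def get(self, i, j): ...

class DenseMatrix(Matrix):
    # The "how", option 1: store every entry in nested lists.
    def __init__(self, rows):
        self.rows = rows
    def get(self, i, j):
        return self.rows[i][j]

class SparseMatrix(Matrix):
    # The "how", option 2: store only nonzero entries in a dict.
    def __init__(self, shape, entries):
        self.shape = shape
        self.entries = entries  # {(i, j): value}; zeros omitted
    def get(self, i, j):
        return self.entries.get((i, j), 0)

def trace(m, n):
    # Works on any Matrix; never asks which representation it received.
    return sum(m.get(i, i) for i in range(n))

print(trace(DenseMatrix([[1, 0], [0, 4]]), 2))            # 5
print(trace(SparseMatrix((2, 2), {(0, 0): 1, (1, 1): 4}), 2))  # 5
```

The `trace` function is written once, in the realm of what a matrix is; swapping representations for efficiency changes nothing at the call site.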

Above, I spoke on the construction of, say, the natural numbers and the real numbers. Okay, neat. There is another logical way to think of such things, however, which is to look at their properties. For example, first-order Peano arithmetic defines the natural numbers (N) like so:

1. There is a natural number called 0.
2. There is an equality relation = that is reflexive, symmetric, and transitive. That is, for all x, y, z in N: x = x; x = y => y = x; x = y and y = z => x = z.
3. Natural numbers are closed under equality, i.e. if x = y and x is a natural number, so is y.
4. There is a unary operator S on natural numbers such that no x exists with Sx = 0; Sx = Sy => x = y; and if x is not 0, there exists y such that Sy = x.
5. (An axiom schema.) For any logical formula p with one free variable, if p(0) holds and p(x) => p(Sx) for all x, then p(x) for all x in N.

The fifth of these is mathematical induction, the basis of all number theory, because it’s the only way to prove something nontrivial (e.g. an integer is 0, 1, or has a prime factor) to be true over all natural numbers.

Computationally, we know how to add two numbers; but we’re working logically, which means we describe addition and multiplication (declaratively) based on how they behave:

1. x + 0 = x.
2. x + Sy = S(x + y).
3. x * 0 = 0.
4. x * Sy = x * y + x.

The familiar algebraic properties (commutativity, associativity) can be derived from these.
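These recursion equations can be run directly. A sketch in Python, encoding a Peano natural as nested tuples (my representation: 0 is the empty tuple, Sx wraps x in a one-element tuple):

```python
ZERO = ()
def S(x):
    # Successor: Sx wraps x in a tuple.
    return (x,)

def add(x, y):
    # x + 0 = x;  x + Sy = S(x + y)
    return x if y == ZERO else S(add(x, y[0]))

def mul(x, y):
    # x * 0 = 0;  x * Sy = x * y + x
    return ZERO if y == ZERO else add(mul(x, y[0]), x)

def to_int(x):
    # Decode back to an ordinary int, for display only.
    return 0 if x == ZERO else 1 + to_int(x[0])

two   = S(S(ZERO))
three = S(S(S(ZERO)))
print(to_int(add(two, three)))  # 5
print(to_int(mul(two, three)))  # 6
```

Each call peels one successor off the second argument, so the recursion bottoms out exactly because the encoded numbers are finite; that observation returns with force below.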

I’m going to define an additional operator ` for “predecessor” and assign it the following properties:

1. `0 = 0.
2. `Sx = x.

There are a few things to be discussed. First, “logical formula with one free variable” is akin (in computation) to a template more than a function. We can’t represent all functions as logical formulas. Logical formulas in a finite alphabet form a countably infinite set, but there are an uncountable number of functions on the natural numbers (I haven’t proved this, but Cantor’s diagonal argument can be adapted to the purpose), so the vast majority have no logical representation.

Thus, the induction axiom present above is weaker than the full induction axiom allowed by second-order logic (logic that permits quantification over sets). The stronger induction axiom is:

• For all sets K, if 0 is in K and x in K => Sx in K, then all x in N are in K. (It’s the quantification over all sets that makes this second-order.)

Logically, the weaker axiom schema gives us induction only over sets describable, within that arithmetic system, by logical formulas. The full, second-order formulation gives us induction on all sets.

Here’s a specific point of divergence between the two. Are all natural numbers finite? To answer this, we need to define finite. We call a natural number x finite if and only if the set Preds(x) = {`x, ``x, ```x, …} contains 0. (That is, if we keep subtracting one, we’ll eventually reach zero.) As we think of them, all natural numbers are finite. However, there’s absolutely nothing in the five properties listed above that precludes non-finite (non-standard) natural numbers. There isn’t much we can do with them; but we can’t rule out their existence in first-order logic. Without quantification over sets, we can’t exclude the potential for weird, “other” numbers.

If we have the second-order induction axiom, we can say that all the natural numbers (thus defined) are finite by letting K equal the set of finite natural numbers (0 is finite, and x being finite implies Sx being finite, because Preds(Sx) is a superset of Preds(x)). We cannot do that with the weaker first-order induction “axiom schema” because “is finite” is not a logical formula in arithmetic, as defined above.

Why is this so troubling? Let’s look at the first four axioms in object-oriented terms.

1. There is a class called Nat. It has a method called Nat/zero (or, Nat.zero, in Java) that produces an instance of that class corresponding to 0.
2. Each instance (this) of Nat has a method called .equal(Nat that) that returns true if this and that correspond to the same natural number. It behaves as an equivalence relation. We don’t allow an instance of Nat to .equal anything other than a Nat.
3. Each instance of Nat has a method called .succ (no parameters) that returns another natural number that is never .equal to zero. Also, x.succ.equal(y.succ) if and only if x.equal(y).
4. For all instances this of Nat, either this.equal(Nat.zero) or there is an additional method .invSucc such that this.invSucc.succ.equal(this). Nat.zero may not have such a method; it might not exist, or return an error (undefined behavior!).

We could (although it would be hilariously inefficient) derive our arithmetic operators in terms of these contractually required methods, which means that the existence of functions realizing them is guaranteed. For example:

```clojure
(defn is-zero [this]
  (.equal this Nat/zero))

(defn pred [this]
  (if (is-zero this)
    Nat/zero
    (.invSucc this)))

(defn add [this that]
  (if (is-zero that)
    this
    (add (.succ this) (.invSucc that))))

(defn multiply [this that]
  (if (is-zero that)
    Nat/zero
    (add this (multiply this (.invSucc that)))))

(defn compare [this that]
  (if (is-zero this)
    (if (is-zero that) :eq :lt)
    (if (is-zero that)
      :gt
      (compare (.invSucc this) (.invSucc that)))))
```

We get well-behaved (terminating in finite time) add and multiply functions because, while they are recursive, they’re defined so as to eventually reach the base case (when that is Nat/zero) in which no recursion occurs– so long as our Nats are finite, meaning that iteratively calling invSucc will eventually give us a zero. But that’s nowhere in the contract! If we allow non-standard Nats to be constructed (perhaps there’s a Nat/nonstandard method that generates one) then all bets are off, because our addition and multiplication functions as defined above will not terminate.

We only get the guaranteed finiteness of Nats, as said above, by including the second-order induction axiom (which allows us to assert, by induction, that Nats must be finite). So why would anyone use the less powerful first-order logic? Quantification over sets (i.e. with variables allowed to range over all sets) is a big deal. There are plenty of sets (even infinite ones) that we can compute or realize; for example, the set of prime numbers. But the vast majority of that uncountable space of sets we cannot; they will never exist in the real world. When we quantify over sets, we’re talking about a world of objects that is both harder to reason about formally and more detached from intuition, in that sets may exist mathematically that cannot be realized in the physical world. (That’s why a mathematician can double an orange.) The downside, of course, in restricting ourselves to first-order reasoning is that we can’t exclude (taking a top-down approach in which we reason based on behavior) bizarre “other” possibilities, such as nonstandard “natural numbers”.

## Why aren’t there more real numbers?

The computational approach I use to define real numbers corresponds roughly to our everyday understanding of real numbers as possibly nonterminating decimals. A nonterminating decimal, read left to right, gives rational lower and upper bounds for the number’s true value.

What are our intuitive expectations of real numbers, R?

1. There’s a 0 and a 1 and 0 != 1. (That is, there are at least two of them and they’re different.)
2. There’s an addition operator + and a multiplication operator *. The commutative, associative, and distributive properties for each hold. (Field properties.)
3. Zero is the additive identity and one is the multiplicative identity, i.e. x + 0 = x and x * 1 = x for all x.
4. The set {0, 0 + 1, 0 + 1 + 1, 0 + 1 + 1 + 1, …} has no duplicates. (Characteristic zero.) Thus, the natural numbers N are embedded in R.
5. Both addition and multiplication have inverses, e.g. for all x there’s a unique -x such that x + (-x) = 0; and every nonzero x has a unique x^(-1) such that x * x^(-1) = 1. (This allows subtraction and division to be defined.) This means that the rationals, Q, are embedded in R.
6. There’s an ordering relationship < such that x < y and y < z => x < z, and exactly one of {x = y, x < y, y < x} is true. We can create this logically in the first order; it works to define x < y if and only if there exists no real r such that r^2 = x − y. (We’re saying that x < y if and only if x − y is negative, but we can’t define the concept of “negative” as we usually would– in comparison to zero– without the ordering, so we use the nonexistence of a real square root to define negativeness in first-order logic.)
7. Something (perhaps vague) about the continuum. There shouldn’t be “holes” in it. The boundary at which x^3 > 2 becomes true, for example, should be included in the set. We feel entitled to an “intermediate value” property: if some function f (e.g. x^2 − 2) has f(x) < 0 and f(y) > 0, we believe that there must be a z between them where f(z) = 0. It would break calculus, and therefore physics, if that weren’t true.

Properties 1-5 are satisfied just fine by the rational numbers. Defining a suitable ordering on them (see 6) is not hard; we know which ones are negative (is the numerator negative?) and x < y if and only if x − y is negative, which we define (as above) algebraically using the nonexistence of a square root. The seventh property, desired for the reals, is not true of the rationals, because such “boundaries” (e.g. the square root of 2) are not included in the rational numbers. That’s a problem for us. It’s why we feel, intuitively, that the rational numbers don’t tell the whole story.

Our Dedekind cut/function solution solves the ordering problem and the issue of continuum; but how do we know that that’s the right way to handle real numbers? If we don’t have any second-order axioms about R, we don’t. “The rules” that we’re used to may not apply; there might be “non-standard” elements that can’t be constructed as above. They might obey the field properties of +, *, have some place in the real number system’s ordering, and yet look nothing like real numbers. Is it possible that such “dark numbers” exist in any real number system that we’d regard as useful? No, but to show why, we have to use the second-order axiom about the real numbers that is the lynchpin of analysis: the least upper bound property:

• Every set K of real numbers that has an upper bound (that is, an x such that y ≤ x for all y in K) has a least upper bound (that is, an upper bound z such that z ≤ x for every upper bound x) in R.

This is not a statement about individual real numbers; it’s a quantification over sets of them, and that makes it a lot more powerful. This property is not true of the rationals; for example, {x in Q : x < 0 or x^2 < 2} does not have a least upper bound in Q, because for any such rational upper bound we can find a smaller one by applying Newton’s method. It is true of the real numbers, as defined by the Dedekind cuts (or analogous functions). It’s a number system free of holes.
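The Newton step is easy to exhibit with exact rational arithmetic. A sketch (the function name is mine): given a rational upper bound u of {x : x^2 < 2}, one Newton iteration for x^2 − 2 produces a strictly smaller rational that is still an upper bound, so no rational bound can be least.

```python
from fractions import Fraction

def smaller_upper_bound(u):
    # Precondition: u is rational with u^2 > 2, i.e. an upper bound of
    # {x : x^2 < 2}. One Newton step for x^2 - 2 yields a strictly
    # smaller rational whose square still exceeds 2.
    v = (u + 2 / u) / 2
    assert v < u and v * v > 2
    return v

u = Fraction(2)
for _ in range(3):
    u = smaller_upper_bound(u)
print(u)  # 577/408, whose square still exceeds 2
```

Starting from 2, the iterates are 3/2, 17/12, 577/408, …: ever-smaller rational upper bounds closing in on the “hole” where the square root of 2 ought to be.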

So (logically speaking) why can’t there be more real numbers? Perhaps there could be infinitesimals or other exotic beasts, as seen in (for example) the surreal number system, even if they aren’t needed to fulfill the logical requirements. Recall our computational approach to the real numbers (isomorphic to Dedekind’s construction):

Again, a real number r is a function Q -> {True, False} such that:

1. If p < q, then r(p) = False implies r(q) = False. (Decreasing monotonicity).
2. There exist some p and q such that r(p) = True and r(q) = False.
3. There is no p such that r(p) = True but r(q) = False for all q > p.

How does this second-order axiom exclude the existence of pathological (non-standard) reals? Or, let’s look at it another way. What breaks if there are non-standard reals?

Let’s first argue that each of the qualities of a real number (as defined from a function, above) is necessary. The first rule (decreasing monotonicity) reflects the fact that the reals are ordered and embed the order relationship that exists for the rationals (for r, s in R, r < s if and only if r(p) => s(p) for all rational p and r != s).

A function violating the second trait would correspond to an “infinite” real number: one that is larger (or smaller) than all rational numbers. The good news is that there are no “infinite” real numbers; that is, every positive real r is less than some (standard) natural number n. Why? Assume the opposite. Then {0, 0+1, 0+1+1, …} = N has an upper bound in some r. Then it must have a least upper bound s, which implies the existence of a natural number n > s − 1, because s − 1 cannot be an upper bound of N. Since the naturals are closed under the successor operation, n + 1 is also a natural number, and n + 1 > s. Contradiction.

In other words, since N doesn’t have a least upper bound, it doesn’t have any upper bound. We can similarly show that there are no negative-infinite real numbers and conclude that every real number r has some natural N for which -N < r < N. This seems obvious (as does 0.999999… = 1) but here I’m interested in why it is true. The least upper bound property, when first taught, seems like an obvious truth but irrelevant to the matter of the integers (a clearly unbounded set). In fact, it’s the reason why the real number system can’t include an (effectively infinite) element that bounds the integers.

What about the third trait? That’s only relevant at the rational numbers, which are countable and well understood. We’ve defined the corresponding real number (as a function) for a rational q to return False at q itself; that was an arbitrary choice, in that we could have defined it to return True there, but we needed to pick one to avoid having multiple definitions that apply to the same rational number.

What happens, though, if we admit the existence of real numbers that aren’t constructed according to such rules? That is, we have reals R according to the Dedekind cuts, and some additional (disjoint) set U of real numbers that follow the logical rules of the reals, but aren’t part of that construction. Then choose some u in U. The rational numbers are all included in R, so u is irrational. This means that for every rational q either u > q or u < q, and the transitivity of the ordering relation means that u behaves identically to some r constructible using the Dedekind approach. That is, for all rational q, r < q if and only if u < q.

One might think this means that we’re done. The rationals are dense in the reals, right? Well, that’s an analytic property. We don’t have real analysis yet; we haven’t proven any of that stuff. All we have about r and u is that they compare identically to every rational number. Then let t = |r – u|. Since r != u, t is positive. But it’s less than every positive rational number: it’s “infinitesimal”. So t < 1/N for every positive integer N, which means that 1/t > N for every integer N– an infinitely large real number. However, we already proved that infinitely large real numbers don’t exist.

One of the neat things about a proof by contradiction is how the absurdity ripples backward to what you want to prove, kind of like a program’s stack trace unwinding. (It’s good to inspect this process, because reductio ad absurdum only assigns blame correctly if you’ve made exactly one questionable assumption.) For example, either the natural numbers have a least upper bound or they have no upper bound in R (axiomatic). We’ve shown that a least upper bound for N is absurd, so there is no upper bound of N, which means there’s no infinite real number. If there’s no infinite real number, there’s no infinitesimal real number (the two would be related by multiplicative inversion). Then two real numbers that respond identically in comparison to every rational number must differ by zero, which means they’re identical. That means the u chosen from this “nonstandard real” set U is equal to an existing real number under the Dedekind construction. So U has no elements distinct from R; but because U is defined to be disjoint from R, it is therefore empty.

Thus, if the least-upper-bound trait of real numbers is true, there are no real numbers except for those admitted by the Dedekind construction. Its utility in proving that fact is why it’s such an important axiom.

And now the opening question answers itself. As Dedekind cuts, 0.9999… and 1 are the very same function: each returns True on exactly the rationals less than 1 and False on the rest. Any “gap” between them would be an infinitesimal, and we’ve just shown that there are none.

That is why 0.9999999… = 1.