As concerns numbers, one starts with the natural or counting numbers, [0, 1, 2, 3, …], and their susceptibility to addition and multiplication. There are various ways of building up formal structures providing numbers to fill the gaps between natural numbers and handle negative values; these aim to preserve what was true for the naturals,

- a.(b+c) = a.b + a.c,
- a+b = a+c implies b = c and
- a.b = a.c implies b = c unless a is 0,

but to provide extra values which fill in gaps that come to light when the last two, cancellation, lead us to ask: given the sum a+b or product a.b, and knowledge of either a or b, what is the value of the other ? The cancellation rules tell us that any value which works is the only value which works, but they don't tell us that we'll necessarily find a value which works, unless we have some clear prior knowledge about the numbers involved. Fortunately, this is easy to bypass, effectively by inventing imaginary numbers like 2/3 or −1 and then showing how practical and useful they are, and how well they model real phenomena.

So far so good: we've now got the rational numbers, each of which can be written as n/m for some integer n and positive integer m. At this point, missing numbers show up from two directions at once,

- polynomials

We can build the rationals by asking for solutions to

given x+y and x, tell me y

and similar for x.y; so why not ask for solutions to other equations, too ? For example: given x.x, what is x ? or: given 4.x.x.x +3.x.x +2.x +1, what is x ? The polynomial equations involved are all expressed in terms of addition, multiplication and natural numbers, so it surely can't be all *that* much harder than solving for one unknown summand or factor given the sum, or product. Well, OK, it's harder, but human ingenuity won out and gave us the algebraic numbers, among which *all* polynomial equations are soluble – not just the easy ones that gave us the rationals, but also the famous simple cases like x.x = 2 and x.x = −1, along with many more. The algebraic numbers are, to the complex numbers, what the rationals are to the real line – once we've filled in the other kind of gap. Here's a foretaste of the gaps of smoothness. The rationals have a certain grain to them, albeit fractal; and the algebraics aren't the only holes in that grainy-ness, either. Take a look at the unit circle in two dimensions – that is, the set of pairs, [x, y], of numbers for which x.x +y.y = 1. That's a polynomial equation and it produces a nice easy curve which is the epitome of smooth – the circle. So surely it can't be all that hard to measure lengths along such a well-behaved curve ?

- convergence

It's possible to construct sequences of rationals for which, given any positive tolerance ε, there is some point beyond which any two numbers in the sequence lie within ε of one another. Formally, the sequence is a mapping ({rationals}: s(i) ←i |{naturals}) and, given ε, there is some N for which, whenever i > N and j > N, s(j) < s(i)+ε and s(i) < s(j)+ε. Such sequences are described as **convergent**. An easy example is 1/(1+i) ←i. It's generally accepted that, if ε > 0, then there is some N for which N.ε > 1: for any i, j > N, we have 1/i < 1/N < 1/N +1/j < ε + 1/j and likewise with i, j swapped; so the sequence is convergent. Furthermore, along the way, we've seen that, for every tolerance there's a point in the sequence beyond which all its values are within that tolerance of 0. The sequence 1/(1+i) ←i is said to converge on 0. A convergent sequence s is said to **converge** to a value, S, precisely if: for any tolerance ε, there is some natural N for which s(i) is within ε of S whenever i > N. This worked fine for 1/(1+i) ←i, but it proves embarrassingly easy to construct convergent sequences of rationals which don't converge to any algebraic number. So we invent numbers to which our convergent sequences converge and show how they fit in with the rationals and the algebraic numbers.
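Here's that argument made concrete in Python, with exact rationals; the helper N_for (my name, not the text's) just hunts for the N with N.ε > 1 that the argument appeals to:

```python
from fractions import Fraction

def s(i):
    # The sequence 1/(1+i) <- i, as exact rationals.
    return Fraction(1, 1 + i)

def N_for(eps):
    # Find some N with N.eps > 1, as the text's argument requires.
    n = 1
    while n * eps <= 1:
        n += 1
    return n

eps = Fraction(1, 1000)
N = N_for(eps)
# Beyond N, all terms are within eps of 0, and any two within eps of each other.
assert all(s(i) < eps for i in range(N + 1, N + 50))
assert all(abs(s(i) - s(j)) < eps
           for i in range(N + 1, N + 20) for j in range(N + 1, N + 20))
```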

Which makes for a good moment to pause.

Even with just the positive rationals, we can begin to dream of differentiation. Take a look at the graph of x.x ←x or any other function you dream up, and notice that chords spanning a point on the curve all have similar gradients; the shorter they are, the more alike. Each chord is characterized by the values of x at which it starts and stops; call the difference between these the span of the chord. At any point on the curve, for any (positive) tolerance ε, there is some precision, δ, for which: any two chords spanning at most δ about our point have gradients which differ by less than ε. So we can measure the gradient at our point on the curve to any tolerance, no matter how demanding. When we do this to the curve of a polynomial, like x.x ←x, we find that the gradient is, itself, a polynomial: power(i+1) has gradient (i+1).power(i, x) ←x, power(0) has gradient 0, adding two polynomials adds their gradients, and multiplying a polynomial by a constant scales its gradient proportionately.
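A sketch of that measurement in Python, using exact rational arithmetic; the choice δ = ε/4 is mine, picked because an easy computation (not in the text) shows it suffices for power(2) at any point:

```python
from fractions import Fraction

def chord_gradient(f, x0, x1):
    # Gradient of the chord of f from x0 to x1.
    return (f(x1) - f(x0)) / (x1 - x0)

def square(x):
    # The curve x.x <-x, a.k.a. power(2).
    return x * x

x = Fraction(3)
eps = Fraction(1, 1000)   # tolerance
delta = eps / 4           # precision: eps/4 suffices for power(2)

# Any two chords spanning at most delta about x have gradients within eps
# of one another, so pin down the gradient 2.x to within eps.
g1 = chord_gradient(square, x - delta, x)
g2 = chord_gradient(square, x, x + delta)
assert abs(g1 - g2) < eps
assert abs(g2 - 2 * x) < eps
```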

So we built all our original numbers by asking for solutions to equations expressible entirely in terms of addition and multiplication. Just as we started with the natural numbers, we can now start with the polynomial functions, on which we can do addition, multiplication and differentiation. So, naturally enough, we ask what equations we can invent, with the available operations on functions; and we try to solve them. Ask for a function which is equal to the square of its derivative and I will give you x.x/4 ←x, a.k.a. power(2)/4. But ask for a function which is simply equal to its derivative and you shall be hard pushed to find a suitable polynomial – though the sequence

- sum({polynomials}: power(i) / factorial(i) ←i |n) ←n

looks promising [factorial and power are defined, on natural numbers, by: factorial(0) is 1, power(0) is 1 ←x; and, for every natural i, factorial(i+1) is (i+1).factorial(i) and power(i+1) is x.power(i,x) ←x].
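To see why it looks promising, here's a small Python check (the list-of-coefficients representation and the function names are mine): differentiating the n-term partial sum, by the gradient rules above, hands back the (n−1)-term partial sum.

```python
from fractions import Fraction
from math import factorial

def partial_sum_coeffs(n):
    # Coefficients of sum(power(i)/factorial(i) <- i |n), lowest power first.
    return [Fraction(1, factorial(i)) for i in range(n)]

def derive(coeffs):
    # The gradient rules above: power(i+1) has gradient (i+1).power(i).
    return [(i + 1) * c for i, c in enumerate(coeffs[1:])]

# Differentiating the n-term partial sum gives the (n-1)-term partial sum;
# so the limit, if we can make sense of one, equals its own derivative.
assert derive(partial_sum_coeffs(6)) == partial_sum_coeffs(5)
```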

(We built the rationals by asking, given some numbers n and m, for a number i for which n+i = m or n.i = m. Filling those holes gives us what are known as the rational numbers – each can be written as n/m for some integer n and positive integer m.)

One might as readily have observed that, among the naturals, a.a = b.b implies a = b, and asked the question: given a.a's value, what's a ?

The way I was introduced to the reals started from the naturals; equipped them with additive inverses and, for the positive naturals, multiplicative inverses, thus obtaining the rationals; defined a notion of convergence for sequences of rationals; and constructed the reals as the collection of all convergent sequences of rationals, subject to an equivalence which amounts to they converge together. This applies additive and multiplicative completion followed by topological completion. For ease of expression of the last, I'm going to delay additive completion to last; but here's a quick overview of the whole construction:

## The Classical Reals

Start with the positive (i.e. non-empty) naturals, on which we have addition and multiplication, both of which are cancellable, symmetric and associative. To perform multiplicative completion, we represent a ratio as an atomic relation, using n←m to encode the ratio between n and m. We can define multiplication, addition and an order on such pairs; and an equivalence which all three respect.

- (a←b) < (c←d) iff a.d < b.c
- (a←b).(c←d) = (a.c ← b.d)
- (a←b)+(c←d) = (a.d+b.c ← b.d)
- Positive Rational relates (a←b) to (c←d) iff a.d = b.c, with a, b, c and d all positive naturals

Given pairs (a←b) and (c←d), we can compute the naturals a.d and b.c; either these are equal or one of them is < the other; so either Positive Rational relates (a←b) to (c←d) or one of the pairs is < the other.

That Positive Rational is symmetric and reflexive follows directly from the symmetry of natural multiplication. If it relates (a←b) to (c←d) and the latter to (e←f) then we have a.d = b.c and c.f = d.e, so a.d.f = b.c.f = b.d.e whence, cancelling d from the first and last, a.f = b.e; so it also relates (a←b) to (e←f); so Positive Rational is transitive, hence an equivalence.
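The equivalence is easy to play with in code; a minimal sketch (the function name is mine), representing a←b as the pair (a, b):

```python
def positive_rational_relates(p, q):
    # p, q are pairs (a, b), (c, d) of positive naturals, standing for
    # a<-b and c<-d; they're related iff a.d = b.c.
    (a, b), (c, d) = p, q
    return a * d == b * c

# (2<-3), (4<-6) and (6<-9) all encode the same ratio:
assert positive_rational_relates((2, 3), (4, 6))
assert positive_rational_relates((4, 6), (6, 9))
assert positive_rational_relates((2, 3), (6, 9))   # as transitivity promises
assert not positive_rational_relates((2, 3), (3, 2))
```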

We can define multiplication and division between pairs of positive naturals by:

- (a←b).(c←d) = (a.c ← b.d)
- (a←b)/(c←d) = (a.d ← b.c) = (a←b).(d←c)

Since p/q = p.reverse(q) we can naturally interpret reverse as multiplicative inversion. First note that if (c←d) = (a←b).(n←n) = (a.n ← b.n) we obtain a.d = a.b.n = a.n.b = c.b so Positive Rational deems (n.a ← n.b) equivalent to (a←b) and (n←n) is a multiplicative identity for all n. Clearly n.m = n.m implies Positive Rational relates (n←n) to (m←m) for all positive naturals n, m, so all such multiplicative identities are equivalent; and (c←d).reverse(c←d) = (c←d).(d←c) = (c.d ← c.d) so q.reverse(q) is always a multiplicative identity.

Now suppose Positive Rational deems (a←b) equivalent to (n←m) and (c←d) equivalent to (h←k); so a.m = b.n and c.k = d.h; let (e←f) = (a←b).(c←d) = (a.c←b.d) and (i←j) = (n←m).(h←k) = (n.h←m.k); observe that e.j = a.c.m.k = b.n.d.h = f.i, so Positive Rational deems (a←b).(c←d) equivalent to (n←m).(h←k); i.e. our multiplication on pairs of positive naturals respects the equivalence, so serves as a multiplication on the equivalence classes of Positive Rational.
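The same check in code (pairs (a, b) standing for a←b; function names mine):

```python
def relates(p, q):
    # Positive Rational: (a<-b) relates to (c<-d) iff a.d = b.c.
    (a, b), (c, d) = p, q
    return a * d == b * c

def mul(p, q):
    # (a<-b).(c<-d) = (a.c <- b.d)
    (a, b), (c, d) = p, q
    return (a * c, b * d)

# Swap each factor for an equivalent pair; the products stay equivalent.
assert relates((2, 3), (4, 6)) and relates((5, 7), (10, 14))
assert relates(mul((2, 3), (5, 7)), mul((4, 6), (10, 14)))
```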

At this stage we can read (a←b) as a/b or as b/a by embedding the positive naturals in the pairs thereof as (: (a←1) ← a :) or (: (1←b) ←b :), respectively. Settling on the former, we can define addition on pairs by:

- (a←b)+(c←d) = (a.d+c.b ← b.d)
Again, consider (n←m) and (h←k) equivalent to (a←b) and (c←d) respectively: write (e←f) = (a.d+c.b ← b.d) and (i←j) = (n.k+h.m ← m.k) and observe:

- e.j
- = (a.d + c.b).m.k
- = a.m.d.k + c.m.b.k
- = b.n.d.k + d.h.m.b
- = (n.k + h.m).b.d
- = i.f
so addition also respects the equivalence.
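And the corresponding check for addition, again with pairs (a, b) standing for a←b (function names mine):

```python
def relates(p, q):
    # Positive Rational: (a<-b) relates to (c<-d) iff a.d = b.c.
    (a, b), (c, d) = p, q
    return a * d == b * c

def add(p, q):
    # (a<-b)+(c<-d) = (a.d + c.b <- b.d)
    (a, b), (c, d) = p, q
    return (a * d + c * b, b * d)

# Replace each summand by an equivalent pair; the sums stay equivalent.
assert relates((2, 3), (4, 6)) and relates((5, 7), (10, 14))
assert relates(add((2, 3), (5, 7)), add((4, 6), (10, 14)))
```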

In (at least a modern rendition of) Cantor's demonstration that there are more reals than naturals, the reals between 0 and 1 are represented in binary and, thereby, as collections of naturals – label the digits after the binary point, numbering from the left as 0, 1, 2, 3, …, n, … and identify any sequence of binary digits with the collection of labels for which the corresponding digit is a 1. This is a remarkably convenient representation for the relevant real numbers, provided we allow infinite sequences of digits. One has to take care about the equivalence of a sequence with finitely many 1 digits and the result of replacing the last 1 with a zero and having a 1 in every subsequent position; but one can define addition on subsets of the naturals as a model of the (limits of) dyadic rationals.

To avoid trouble with infinite regress, it'd be nice to define addition using just intersection, union and the mutual complement – consisting of those members of either set which aren't members of the other.

To model binary addition:

```
S, c, n = set(), c0, infinity   # c0 in {0, 1}
while n > 0:
    n = n - 1
    if n in U: c = 1 + c
    if n in V: c = 1 + c
    assert c in {0, 1, 2, 3}
    c, m = divmod(c, 2)   # c carries to position n-1, m is the digit at n
    if m: S.add(n)
assert c in {0, 1}
```

S now represents the sum of the numbers represented by U and V. If c is 1 at the end, we're discarding the carried bit.

- if 1+n is in both U and V, we definitely carry to n
- if we carried to 1+n and 1+n is in either U or V, we carry to n
- if we carry to n and n is either in both U and V, or in neither, n is in S
- if we don't carry to n and n is in just one of U, V, but not in the other, n is in S

C = {n: we carried to n}. Approach via a sequence Q defined by:

- Q(0) = (|successor:U∩V)
- Q(1+n) = Q(n)∪(|successor:Q(n)∩(U∪V))

then use unite(|Q:{naturals}) as C. Is a given natural's membership of C always decidable from this ? Plausibly not, for suitable U and V. What if membership of U and V is decidable, then ?
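Truncating to finite subsets of the naturals makes the scheme runnable. A sketch in Python (the function names and the finite truncation are mine; the final carry out of position 0 is discarded, as described above, which is what makes the arithmetic cyclic):

```python
from fractions import Fraction

def value(U):
    # The number a finite subset of the naturals encodes: label n is worth 2**-n.
    return sum(Fraction(1, 2 ** n) for n in U)

def add_subsets(U, V):
    # Scan labels from least significant (largest) down to 0, propagating
    # the carry as in the bulleted rules; the digit at n is the parity of
    # the two input digits plus the incoming carry.
    S, c = set(), 0
    for n in range(max(U | V | {0}), -1, -1):
        c, m = divmod(c + (n in U) + (n in V), 2)
        if m:
            S.add(n)
    return S  # the final c is the carried bit we discard

U, V = {0, 2}, {1, 2}   # 1.01 and 0.11 in binary
assert value(add_subsets(U, V)) == (value(U) + value(V)) % 2
```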

In any case, we obtain a model of the non-negative real numbers which are less than two. There is a natural ordering: given U, V, consider {n in U: n not in V} and {n in V: n not in U}; these are manifestly disjoint so, if both have members, we can identify the smallest member of each and know the two are different, so they imply an order – if the former's is smaller than the latter's, V is less than U; and vice versa – and if either is empty, one of U, V subsumes the other. If U and V subsume one another, they're the same collection, so equal; otherwise, if one subsumes the other, the former has a member the latter lacks, so the latter is less than the former.
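That comparison, coded for finite subsets (the name less_than is mine):

```python
def less_than(U, V):
    # Order on finite subsets of the naturals: compare the smallest labels
    # on which U and V disagree; a smaller label is a more significant digit.
    only_U = {n for n in U if n not in V}
    only_V = {n for n in V if n not in U}
    if not only_U:
        return bool(only_V)   # V subsumes U: less unless they're equal
    if not only_V:
        return False          # U subsumes V, so U is the bigger
    return min(only_U) > min(only_V)

assert less_than({1}, {0})         # half < 1
assert less_than({0}, {0, 1})      # 1 < 1 + half
assert not less_than({0}, {1, 2})  # 1 > half + quarter
assert not less_than({2}, {2})     # equal
```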

Well, anyhow, the model should tell us how to build an arithmetic which can be used as a model of a continuum, albeit a cyclic one. We can cut the cycle at either 0 = {} or 1 = {0}; the former reads the numbers as ranging from 0 up to 2; the latter reads the numbers as ranging from −1 to 1 and, effectively, reads 0 as contributing −1 to the sum (rather than 1 as described above), so reads {0} as −1 (while {0,1} is −1+half = −half).