No matter how clever my definitions, they have to start somewhere. In any case, I can explain nothing without the language in which I write, so it makes sense to accept (at least some of) what that language takes for granted. Where practical, of course, instead of leaning on your command of English, I've defined English words to have formal meanings reasonably close to common usage, though sometimes the usage in question is that of the community of anglophone mathematicians (which gets a bit self-referential but, hey, that's language for you - it don't mean nowt without intertextuality). I hope this won't overly inconvenience anyone (fool enough to be) translating my pages into another language: but I see little point in trying to second-guess all the languages I don't know, so I'll just get on with doing the best I can in the one I know best. To a fair extent, I've managed to arrange that I do give formal meanings to the words I use: but I trust you will not be surprised to discover that somewhere I've had to just fall back on informality, without even pretending at formality. Here it is.


The meaning of language is contained in the structure of the instances of its use that we, its users, have encountered. When Richard Feynman deciphered the mathematics of the Maya codices, he did so by analysing the structure of the texts at his disposal. Naturally, this is at its easiest on highly structured texts and ones discussing familiar things: that's why we put things like counting and the periodic table of the elements on space probes we've sent to the stars - they're highly structured and we've persuaded ourselves that any aliens likely to receive the message are likely to be familiar with them.

In this sense, definitions aren't what tells you the meanings of things: they're just statements, in among all the other ones making up the discourse, which can be used as a skeleton of the overall structure of the language used. The ideal reader infers the meanings of the words from the whole of the text, using the definitions as, at most, a particularly easy piece of structure to comprehend. Of course, in practice, any actual reader comes to the fray with prior knowledge of some of the words in use, which saves a great deal of work in attaining comprehension: and I, as author (and, naturally, a regular reader) of the text, strive to build definitions which exploit that prior knowledge (and fit together in a reasonably simple structure) as best I can.

If the definitions are skeleton, then, the analogy suggests we view the structure, as a whole, as a body. While it is my skeleton that supports the greater part of my body, the soles of my feet (which actually support it all) are not part of that skeleton. They are given form by their internal structure, along with which they are held together by the structure of the bones they underlie. Just so, the definitions I give form a skeleton for the corpus of information that I hang off them: but, at the very foundation, they cannot stand without a few things, notably collections, which I do not define. I believe that a patient reader could make sense of the greater part of my definitions by observing the relations between them and comprehending the structure. The rest of the discourse may then be understood by building outwards from this skeleton, even to the extent of handling a few of the things on which the definitions nominally stand. I would like to think that a patient, intelligent reader could do this without prior knowledge of English, learning (my slightly stilted written dialect of) the language into the bargain.


I have tried, in devising denotations, to allow as much variety as reasonable: in particular, to allow such denotations as are orthodox in any field with which I am familiar. Fortunately, the accumulated wisdom of programming language design gives some good insights into how to do this.


A fixed point of a mapping, f, is some input, x, accepted by f for which f(x)=x. I shall describe a collection, L, as closed under some mapping f precisely if f accepts every member of L and k in L implies f(k) is also in L. I shall say that a mapping, f, preserves distinction in a collection, L, precisely if, for a, b in L: f(a)=f(b) iff a=b - and, formally, at most one member of L is rejected by f (so f rejects a and b in L only if a=b). I shall say that a mapping, f, regenerates a collection L precisely if every member of L is f(a) for some a in L.
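These four notions - fixed point, closed under, preserves distinction, regenerates - can be sketched in Python, modelling a mapping as a dict whose keys are the inputs it accepts. The helper names below are mine, chosen for illustration; they are not part of the text's formal apparatus.

```python
def is_fixed_point(f, x):
    """x is a fixed point of f precisely if f accepts x and f(x) = x."""
    return x in f and f[x] == x

def is_closed_under(L, f):
    """f accepts every member of L and k in L implies f(k) in L."""
    return all(k in f and f[k] in L for k in L)

def preserves_distinction(f, L):
    """For a, b in L accepted by f: f(a) = f(b) iff a = b,
    and at most one member of L is rejected by f."""
    accepted = [a for a in L if a in f]
    images = [f[a] for a in accepted]
    return len(set(images)) == len(accepted) and len(L) - len(accepted) <= 1

def regenerates(f, L):
    """Every member of L is f(a) for some a in L."""
    return all(any(a in f and f[a] == b for a in L) for b in L)
```

For example, on L = {0, 1, 2} the cyclic mapping {0: 1, 1: 2, 2: 0} is closed, preserves distinction and regenerates L; the constant mapping {0: 0, 1: 0, 2: 0} is closed but does neither, though it has 0 as a fixed point.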

A collection differs from its successor precisely if it is not a member of itself: equally, unite[{{c},c}] = c precisely when c is a member of itself. Given a collection, L, of which each member is a collection which is not a member of itself (so, c in L implies c is not in c): describe L as

a limit collection
iff c in L implies successor(c) in L;

an ordinal
iff c in L implies c is an ordinal sub-collection of L and c's successor is a member of L's successor.

So each member is contained in L and has, as successor, either L or one of L's members. The empty collection, 0, is trivially an ordinal (just as it is trivially a limit collection), and so is the successor of any ordinal.
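A minimal sketch of successor and the first few ordinals, modelling hereditarily finite collections as nested frozensets (an illustration of the definitions, not a claim about the text's foundations):

```python
def successor(c):
    """unite[{c, {c}}]: the members of c, together with c itself."""
    return frozenset(c) | frozenset({frozenset(c)})

zero = frozenset()          # the empty collection, 0
one = successor(zero)       # {0}
two = successor(one)        # {0, 1}
three = successor(two)      # {0, 1, 2}

# Each ordinal's members are exactly the earlier ordinals, so each
# member's successor is either the ordinal itself or one of its members.
```

Note that none of these collections is a member of itself, as the definition above requires.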

finite
iff, for any mapping, f, which accepts every value in L and has output in L whenever its input is in L, the following two conditions are equivalent: f preserves distinction in L; f regenerates L.

As ever, notice that 0 is finite. A collection is then finite precisely if: every mapping which preserves distinction in it also regenerates it; and every mapping which regenerates it preserves distinction in it.
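This characterisation of finiteness can be checked exhaustively on small collections: for a total self-mapping of a finite collection, preserving distinction (no two inputs share an output) and regenerating (every member is some output) do indeed coincide. The function name below is my own illustrative choice.

```python
from itertools import product

def check_finite_characterisation(L):
    """Brute-force check: every total self-mapping of L preserves
    distinction precisely if it regenerates L."""
    L = list(L)
    for images in product(L, repeat=len(L)):
        f = dict(zip(L, images))
        injects = len(set(f.values())) == len(L)   # preserves distinction
        covers = set(f.values()) == set(L)         # regenerates
        if injects != covers:
            return False
    return True
```

Running this over all 27 self-mappings of a three-member collection, or all 256 of a four-member one, finds no counterexample.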

Some interesting suggestions:

successor preserves distinctness in {collection}
ie collections with equal successor are equal.

Proof: given collections a, b with equal successor, let U= {c in a: c not in {a,b}} and V= {c in b: c not in {a,b}}.

Thus unite[{U,{a,b}}] subsumes unite[{a,{a}}] subsumes a subsumes U and similar, and neither a nor b is a member of either U or V. Now, unite[{a,{a}}] = unite[{b,{b}}], so we can cross over the containment chain above and its similar to get: unite[{U,{a,b}}] subsumes b subsumes V and unite[{V,{a,b}}] subsumes a subsumes U. Now, neither a nor b is a member of either U or V, so if either is contained in some unite[{W,{a,b}}], then it is contained in W too. We thus have U subsumes V subsumes U, so U and V are equal.

So unite[{U,{a,b}}] subsumes the successor, which subsumes both a and b, each of which subsumes U. Since U has neither a nor b as a member, the successor, having both as members, must be unite[{U,{a,b}}] (note that if a=b, this is equally unite[{U,{a}}]=unite[{U,{b}}]). Thus we have the successor as unite[{a,{a}}] = unite[{U,{a,b}}] = unite[{b,{b}}] with both a and b (trivially) as members.

Now, b is in unite[{a,{a}}], so b=a or b is in a; likewise, a=b or a is in b, so b=a or (b is in a and a is in b). Now, b in a and a in b implies: b subsumes unite[{U,{a}}] and a subsumes unite[{U,{b}}], with each contained in the successor, so b is in { unite[{U,{a}}], unite[{U,{a,b}}] } and similarly for a. Thus, b=a or (b is in { unite[{U,{a}}], unite[{U,{a,b}}] } and a is in { unite[{U,{b}}], unite[{U,{a,b}}] }).

If one of a, b is their successor, then either the other is unite[{U,{successor}}] or they are equal - in which case they and their successor are unite[{U,{successor}}]. Thus, if either a or b is their successor, we obtain: successor= unite[{U,{successor},{unite[{U,successor}]}}]. Consider a collection p not in U for which p= unite[{U,{p},{unite[{U,{p}}]}}]. By my semantic rule for namings, this is the same entity as b would be if it were unite[{U,{b,a}}] with a= unite[{U,{b}}]. However, a collection q= unite[{U,{q}}] satisfies q= unite[{U,{q},{q}}]= unite[{U,{q},{unite[{U,{q}}]}}]; so is of p's form, whence equal to it. Thus if b= unite[{U,{a,b}}] and a=unite[{U,{b}}], we obtain b=q and a=unite[{U,{q}}]=q=b. Likewise with a and b swapped. So if either of a, b is equal to the successor, then so is the other.

We thus obtain, for a, b with equal successor, a=b or (a= unite[{U,{b}}] and b= unite[{U,{a}}]). Now, a=unite[{U,{b}}] and b=unite[{U,{a}}] yield a=unite[{U,{unite[{U,{a}}]}}] and b=unite[{U,{unite[{U,{b}}]}}] so, again, the rule of namings finds them to be the same entity. Thus, finally, a=b.
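The conclusion - equal successors force equal collections - can be spot-checked on a small stock of hereditarily finite collections, again modelled as frozensets (a sanity check on the proof, not a substitute for it):

```python
def successor(c):
    """unite[{c, {c}}]: the members of c, together with c itself."""
    return frozenset(c) | frozenset({frozenset(c)})

# Build a small stock of collections, none a member of itself.
pool = {frozenset()}
for _ in range(3):
    pool |= {successor(c) for c in pool}
    pool |= {frozenset({c}) for c in pool}

# successor preserves distinctness: distinct inputs give distinct outputs.
```

Checking every pair in the stock finds no two distinct collections with equal successors.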

the successor of any finite collection is finite

Proof: suppose L is the successor of finite K and let f be a mapping which accepts every value in L, with a in L implying f(a) in L. We then have two conditions to prove equivalent. Notice that L subsumes K, so f also accepts every value in K, which is finite: and that K is in L, so that f accepts K with f(K) in L. The proof repeatedly depends on the fact that every member of L is either K or a member of K.

the only finite limit is {}

Proof: consider a non-empty limit collection - I must show that it cannot be finite. First, we have that successor preserves distinctness on {collection} and so on our limit collection, and that the limit is closed under successor. Consequently, if we can show that successor does not regenerate our limit collection, we have shown that it is not finite. Note that members of our limit are not members of themselves, so they are distinct from their successors.

So, on any limit collection, successor preserves distinctness of inputs as distinctness of outputs. If I can now show that any non-empty limit contains a member which is not the successor of any of its members, I have then proven that the limit is not finite.

The natural numbers

One can define the natural numbers in at least two ways:

Now, in principle, this might be an empty intersection: there might be no limit collection containing 0. Now, for all that "is x a member of A" is, in general, undecidable, there are some values which we can prove do lie in Natural. These are, specifically, the values which are members of every limit collection of which 0 is a member. First off, by definition, there's 0. But if a limit collection contains a collection which is not a member of itself, it necessarily contains its successor.
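So 0 lies in every limit collection containing 0, hence so does its successor, and so on: each natural number is reached from 0 by finitely many successions. A sketch of that generation, in the frozenset model used above:

```python
def successor(c):
    """unite[{c, {c}}]: the members of c, together with c itself."""
    return frozenset(c) | frozenset({frozenset(c)})

def naturals(n):
    """The first n members of Natural, built from 0 by repeated succession."""
    c, out = frozenset(), []
    for _ in range(n):
        out.append(c)
        c = successor(c)
    return out
```

Each natural, viewed as a collection, has as many members as its index, and each is a member of its successor.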


Naïve Set Theory

A function is a mapping, f, for which delegate(f) rejects f: a set is a collection which is a function.

Loosely, this says that a set is a collection each member of which is constructible a priori to the collection itself. In particular, no set can be a member of itself: the collection of sets which are not members of themselves is the collection of all sets, and isn't a set (because, for it to be a set it would necessarily, as one of its members, have to be constructible before itself - which is, of course, absurd). This gets us out of Russell's paradox - we don't get a contradiction out of it not being a member of itself. [Discussions of sets are apt to assume the law of the excluded middle (reductio ad absurdum: if I can reason from the contrary of some statement to an absurdity, then the statement was true).]

One set, S, is said to be a subset of another, T, precisely if every member of S is a member of T. For any set, S, the collection of subsets of S is a set: it is called the power-set of S and written P(S). There is a natural function (once we have defined these) between P(S) and {(S|:2)}, for any set 2 having exactly two elements, say 2={in,out}: this function maps a subset T of S to (S| (T| t->in :), s->out :) and is trivially inverse to ({(S|:{in,out})}| f->(|f:{in}) :P(S)). Consequently, some folk write 2^S (the conventional notation for {(S|:2)}) for P(S).
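The correspondence between subsets of S and functions from S to a two-member set can be sketched directly, representing each such function as a dict; the helper names are mine:

```python
from itertools import combinations

def power_set(S):
    """All subsets of S, as frozensets."""
    S = list(S)
    return [frozenset(c) for r in range(len(S) + 1)
            for c in combinations(S, r)]

def to_indicator(T, S):
    """Map a subset T of S to the member of {(S|:2)}, with 2 = {in, out},
    sending members of T to 'in' and the rest of S to 'out'."""
    return {s: ('in' if s in T else 'out') for s in S}

def from_indicator(f):
    """The inverse: recover the subset as the preimage of 'in'."""
    return frozenset(s for s, v in f.items() if v == 'in')
```

The two maps are mutually inverse, and the count of subsets is 2 raised to the size of S - hence the 2^S notation.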

The empty collection, {} or empty, is a set, is a subset of every set, is equal to its Cartesian product with any collection, is thus a relation between empty and any collection and is, indeed, a function from empty to any set. It has only one subset, itself, and so, furthermore, is the only relation between empty and any set (or vice versa). In particular, it is the only (partial) function from empty to any set: when I wish to refer to it in that rôle, for some set S, I'll write it as (empty::S) or (empty|:S); it is trivially monic. All other sets are described as non-empty: any non-empty set has at least one member. Because empty is the only relation between any set and empty, any such relation, f, necessarily has (|f)=empty, whence it is also the only function (::empty): as such, it is epic (whence (empty|:empty) is iso). In particular, the existence of a function (S|:empty) implies S=empty.
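These counting facts about empty can be verified by enumerating all functions between small finite sets, each function represented as a dict (an illustrative model; the enumerator's name is my own):

```python
from itertools import product

def functions(A, B):
    """All functions from A to B, each as a dict of input -> output."""
    A = list(A)
    return [dict(zip(A, images)) for images in product(B, repeat=len(A))]
```

There is exactly one function (empty::S) for any S, namely the empty dict; there is no function (S|:empty) unless S is itself empty, in which case (empty|:empty) is the sole (and iso) candidate.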

In due course, I must embellish this page with a formal statement of the Zermelo-Fraenkel axiomatisation of set theory.

Written by Eddy.