No matter how clever my definitions, they have to start somewhere. In any case, I can explain nothing without the language in which I write, so it makes sense to accept (at least some of) what that language takes for granted. Where practical, of course, instead of leaning on your command of English, I've defined English words to have formal meanings reasonably close to common usage, though sometimes the usage in question is that of the community of anglophone mathematicians (which gets a bit self-referential but, hey, that's language for you - it don't mean nowt without intertextuality). I hope this won't overly inconvenience anyone (fool enough to be) translating my pages into another language: but I see little point in trying to second-guess all the languages I don't know, so I'll just get on with doing the best I can in the one I know best. To a fair extent, I've managed to arrange that I do give formal meanings to the words I use: but I trust you will not be surprised to discover that somewhere I've had to just fall back on informality, without even pretending at formality. Here it is.
The meaning of language is contained in the structure of the instances of
its use that we, its users, have encountered. When Richard Feynman broke the
code
that was the Maya language, he did so by analysing the structure of the
texts, or codices
, at his disposal. Naturally, this is at its easiest on
highly structured texts and ones discussing familiar things: that's why we put
things like counting and the periodic table of the elements on space probes
we've sent to the stars - they're highly structured and we've persuaded
ourselves that any aliens likely to receive the message are likely to be
familiar with them.
In this sense, definitions
aren't what tells you the meanings of things:
they're just statements, in among all the other ones making up the discourse,
which can be used as a skeleton
of the overall structure of the language used.
The ideal reader infers the meanings of the words from the whole of the text,
using the definitions
as, at most, a particularly easy piece of structure to
comprehend. Of course, in practice, any actual reader comes to the fray with
prior knowledge of some of the words in use, which saves a great deal of work in
attaining comprehension: and I, as author (and, naturally, a regular reader) of
the text, strive to build definitions which exploit that prior knowledge (and
fit together in a reasonably simple structure) as best I can.
If the definitions are skeleton, then, the analogy suggests we view the structure, as a whole, as a body. While it is my skeleton that supports the greater part of my body, the soles of my feet (which actually support it all) are not part of that skeleton. They are given form by their own internal structure, together with the structure of the bones they underlie. Just so, the definitions I give form a skeleton for the corpus of information that I hang off them: but, at the very foundation, they cannot stand without a few things, notably collections, which I do not define. I believe that a patient reader could make sense of the greater part of my definitions by observing the relations between them and comprehending the structure. The rest of the discourse may then be understood by building outwards from this skeleton, even to the extent of handling a few of the things on which the definitions nominally stand. I would like to think that a patient, intelligent reader could do this without prior knowledge of English, learning (my slightly stilted written dialect of) the language into the bargain.
I have tried, in devising denotations, to allow as much variety as reasonable: in particular, to allow such denotations as are orthodox in any field with which I am familiar. Fortunately, the accumulated wisdom of programming language design gives some good insights into how to do this.
A fixed point of a mapping, f, is some input, x, accepted
by f for which f(x)=x. I shall describe a collection, L, as closed
under some mapping f precisely if f accepts every member of L and k in
L implies f(k) is also in L. I shall say that a mapping, f, preserves
distinction in a collection, L, precisely if, for a, b in L:
f(a)=f(b)
iff a=b
- and, formally, at most one member of L is rejected by f
(so f rejects a and b in L only if a=b). I shall say that a mapping, f,
regenerates a collection L precisely if every member of L is
f(a) for some a in L.
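For the reader who thinks in programs, these four notions can be sketched in Python; a hypothetical modelling (the helper names are mine, not part of the text), with a mapping rendered as a dict whose keys are the inputs it accepts:

```python
def is_fixed_point(f, x):
    # x is a fixed point of f: f accepts x and f(x) = x.
    return x in f and f[x] == x

def is_closed(f, L):
    # L is closed under f: f accepts every member of L and maps it into L.
    return all(k in f and f[k] in L for k in L)

def preserves_distinction(f, L):
    # For a, b in L: f(a) = f(b) iff a = b; at most one member of L rejected.
    accepted = [a for a in L if a in f]
    no_collisions = all(f[a] != f[b]
                        for a in accepted for b in accepted if a != b)
    return no_collisions and len(L) - len(accepted) <= 1

def regenerates(f, L):
    # Every member of L is f(a) for some a in L.
    return all(any(a in f and f[a] == b for a in L) for b in L)

f = {0: 1, 1: 2, 2: 0}   # a three-cycle on L
L = {0, 1, 2}
```

With this f, all of `is_closed`, `preserves_distinction` and `regenerates` hold of L, while the constant map {0: 0, 1: 0, 2: 0} preserves neither distinction nor regenerates.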
A collection differs from its successor precisely if it is not a member of
itself: equally, unite[{{c},c}] = c precisely when c is a member of itself.
Given a collection, L, of which each member is a collection which is not a member of itself (so, c in L implies c is not in c): describe L as

a limit collection precisely if c in L implies successor(c) in L;

an ordinal precisely if c in L implies c is an ordinal sub-collection of L and c's successor is a member of L's successor.
So each member is contained in L and has, as successor, either L or one of L's members. The empty collection, 0, is trivially an ordinal (just as it is trivially a limit collection), and so is the successor of any ordinal.
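A Python sketch (nested frozensets standing in, hypothetically, for hereditarily finite collections) shows the first few successors of 0 behaving as just described:

```python
def successor(c):
    # successor(c) = unite[{c, {c}}], i.e. c with c itself adjoined.
    return c | frozenset([c])

zero = frozenset()             # the empty collection, 0
one = successor(zero)          # {0}
two = successor(one)           # {0, 1}
three = successor(two)         # {0, 1, 2}

# Each member of the ordinal three is a sub-collection of it
# (c in L implies c is a sub-collection of L):
assert all(c <= three for c in three)

# ... and each member's successor is a member of three's successor:
four = successor(three)
assert all(successor(c) in four for c in three)
```

The `<=` on frozensets is sub-collection inclusion, so the asserts restate the two clauses of the ordinal description directly.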
Given a mapping, f, which accepts every member of a collection L, with f(a) in L for each a in L, the two conditions in play are:

a, b in L with f(a) = f(b) implies a = b (f preserves distinction in L);

b in L implies, for some a in L, f(a) = b (f regenerates L).

A collection is then finite precisely if: every mapping which preserves distinction in it also regenerates it; and every mapping which regenerates it preserves distinction in it. As ever, notice that 0 is finite.
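On a small collection this two-way characterisation can be checked exhaustively; a sketch (helper names are mine), enumerating all 27 self-maps of a three-member collection:

```python
from itertools import product

L = [0, 1, 2]

def preserves_distinction(f):
    # f(a) = f(b) only when a = b (f accepts all of L here).
    return len(set(f.values())) == len(L)

def regenerates(f):
    # every member of L is f(a) for some a in L
    return set(f.values()) == set(L)

# For a finite collection, the two properties coincide on every self-map:
for images in product(L, repeat=len(L)):
    f = dict(zip(L, images))
    assert preserves_distinction(f) == regenerates(f)
```

The loop runs without complaint: on a three-member collection, exactly the six permutations satisfy either property, and each satisfies both.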
Some interesting suggestions:
Proof (that collections with equal successors are equal): given collections a, b with equal successor, let U= {c in a: c not in {a,b}} and V= {c in b: c not in {a,b}}.
Thus unite[{U,{a,b}}] subsumes unite[{a,{a}}] subsumes a subsumes U and
similar, and neither a nor b is a member of either U or V. Now, unite[{a,{a}}]
= unite[{b,{b}}], so we can cross over the containment chain above and its
similar
to get: unite[{U,{a,b}}] subsumes b subsumes V and
unite[{V,{a,b}}] subsumes a subsumes U. Now, neither a nor b is a member of
either U or V, so if either is contained in some unite[{W,{a,b}}], then it is
contained in W too. We thus have U subsumes V subsumes U, so U and V are
equal.
So unite[{U,{a,b}}] subsumes the successor, which subsumes both a and b, each of which subsumes U. Since U has neither a nor b as a member, the successor, having both as members, must be unite[{U,{a,b}}] (note that if a=b, this is equally unite[{U,{a}}]=unite[{U,{b}}]). Thus we have the successor as unite[{a,{a}}] = unite[{U,{a,b}}] = unite[{b,{b}}] with both a and b (trivially) as members.
Now, b is in unite[{a,{a}}], so b=a or b is in a
; likewise, a=b or a is
in b
, so b=a or (b is in a and a is in b)
. Now, b in a and a in b implies: b
subsumes unite[{U,{a}}] and a subsumes unite[{U,{b}}], with each contained in
the successor, so b is in { unite[{U,{a}}], unite[{U,{a,b}}] } and similarly for
a. Thus, b=a or (b is in { unite[{U,{a}}], unite[{U,{a,b}}] } and a is in {
unite[{U,{b}}], unite[{U,{a,b}}] })
.
If one of a, b is their successor, then either the other is unite[{U,{successor}}] or they are equal - in which case they and their successor are unite[{U,{successor}}]. Thus, if either a or b is their successor, we obtain: successor= unite[{U,{successor},{unite[{U,successor}]}}]. Consider a collection p not in U for which p= unite[{U,{p},{unite[{U,{p}}]}}]. By my semantic rule for namings, this is the same entity as b would be if it were unite[{U,{b,a}}] with a= unite[{U,{b}}]. However, a collection q= unite[{U,{q}}] satisfies q= unite[{U,{q},{q}}]= unite[{U,{q},{unite[{U,{q}}]}}]; so is of p's form, whence equal to it. Thus if b= unite[{U,{a,b}}] and a=unite[{U,{b}}], we obtain b=q and a=unite[{U,{q}}]=q=b. Likewise with a and b swapped. So if either of a, b is equal to the successor, then so is the other.
We thus obtain, for a, b with equal successor, a=b or (a= unite[{U,{b}}]
and b= unite[{U,{a}}])
. Now, a=unite[{U,{b}}] and b=unite[{U,{a}}] yield
a=unite[{U,{unite[{U,{a}}]}}] and b=unite[{U,{unite[{U,{b}}]}}] so, again, the
rule of namings finds them to be the same entity. Thus, finally, a=b.
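The result just proved - equal successors force equality - can be spot-checked mechanically over a small universe of nested frozensets; a sanity check, not a substitute for the argument above:

```python
from itertools import combinations

def successor(c):
    # successor(c) = unite[{c, {c}}]
    return c | frozenset([c])

# Grow a small universe of hereditarily finite collections from 0.
universe = {frozenset()}
for _ in range(2):
    universe |= {successor(c) for c in universe}
    universe |= {frozenset([c]) for c in universe}

# No two distinct collections here share a successor:
for a, b in combinations(universe, 2):
    assert successor(a) != successor(b)
```

Every collection in this universe is, of course, not a member of itself, matching the setting of the proof.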
Proof (that the successor of a finite collection is finite): suppose L is the successor of finite K and let f be a mapping
which accepts every value in L, with a in L
implying f(a) in L
. We then
have two conditions to prove equivalent. Notice that L subsumes K, so f also
accepts every value in K, which is finite: and that K is in L, so that f accepts
K with f(K) in L. The proof repeatedly depends on the fact that every member
of L is either K or a member of K.
First, suppose a, b in L with f(a) = f(b) implies a = b; and suppose b is some member of L: we must find a member, a, of L for which f(a) = b.
To tidy the proof, introduce a mapping, g, derived from f: this maps any k in K with f(k) in K to g(k)=f(k); otherwise, f(k)=K. No k in K has k=K: so no k in K has f(k)=f(K). If f(K)=K, this means every k in K has f(k) in K; otherwise f(K) is in K and g maps any k with f(k)=K to g(k)=f(K). Thus g accepts all values in K, yielding answers in K.
No k in K has f(k)=f(K): so g(k)=f(K) implies f(k)=K, and any k with g(k) not equal to f(K) has f(k)=g(k) in K. Thus, for any h, k in K with g(h)=g(k): either g(h)=f(K)=g(k), whence f(h)=K=f(k), or f(h)=g(h)=g(k)=f(k); so f(h)=f(k) and h=k by our given supposition.
K is finite: and we've just seen that k, h in K with g(k)=g(h) implies k=h,
so we deduce that every k in K is g(h) for some h in K. Thus if f(K)=K, so each
h in K has g(h)=f(h), every k in K is f(h) for some h in K and K is f(K), so
every b in L is f(a) for some a in L. Otherwise, we have some m in K with
g(m)=f(K), whence f(m)=K, and any other h in K has g(h)=f(h): so f(K) is f(a)
for some a in L, any other member of K is f(h) for some h in K and K is f(m).
As this accounts for all members of L, we now have every member of L as f(a) for
some a in L.
For the converse, suppose every member of L is f(a) for some a in L. Every k in K, in particular, is f(h) for some h in L and there is some m in
L with f(m)=K. Again, define a mapping g derived from f: again, when k in K has
f(k) in K, g(k)=f(k); but now, when f(k)=K (eg m, if it is in K) either
set g(k) to f(K), if this is in K, or set it to k. Thus, if k in K is f(K),
then f(K) is not K=f(m), so m is not K and, as f(m)=K, g(m) is f(K)=k:
otherwise, k is not f(K) so it is f(h) for some h in K, which thus has
g(h)=f(h)=k. Thus every member of K is g(h) for some h in K. K is finite, so
h, k in K: g(h)=g(k)
now implies h=k.
Suppose f(K)=K and consider any k in K with g(k)=k - such as would happen if
f(k)=K or f(k)=k. Since k is in K and f(K) is not, k is f(h) for some h in K:
this thus has f(h) in K so g(h)=f(h)=k=g(k) and h=k. Thus f(k)=k. So any k in K
with g(k)=k has f(k)=k, and no k in K has f(k)=K=f(K). Consequently, any h, k
in K with f(h)=f(k) have f(h) and f(k) in K, so that g(h)=f(h)=f(k)=g(k) and h=k;
taken with no k in K having f(k)=K, this then gives for a,b in L: f(a)=f(b) iff a=b
when f(K)=K.
Alternatively, f(K) is in K, there is an m in K with f(m)=K and g(m)=f(K):
now any k in K with f(k)=f(K) or f(k)=K has g(k)=f(K)=g(m), so k=m, and we know
that f(K) is in K so not equal to K=f(m), so no k in K has f(k)=f(K). Any a, b
in L with f(a)=f(b) then have one of: f(a)=f(K) implies a=K and f(b)=f(K)
implies b=K, so a=b; or f(a)=K=f(b) is not f(K) so a, b are in K, so a=m=b; or
f(a)=f(b) in K whence g(a)=f(a)=f(b)=g(b) and a=b. Thus, again, for a,b in L:
f(a)=f(b) iff a=b
.
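The derived mapping g that drives this proof can be exhibited concretely; a sketch (the names and the particular f are my hypothetical choices) for the smallest interesting case, K = {0, 1}, so that L = {0, 1, K}, with f injective and f(K) in K:

```python
zero = frozenset()
one = frozenset([zero])
K = frozenset([zero, one])        # K = {0, 1}; its successor L = {0, 1, K}
L = [zero, one, K]

f = {zero: K, one: zero, K: one}  # injective on L, with f(K) = one, in K

def derive_g(f, K):
    # g(k) = f(k) when f(k) is in K; otherwise (f(k) = K) g(k) = f(K).
    return {k: (f[k] if f[k] in K else f[K]) for k in K}

g = derive_g(f, K)
assert set(g.values()) == set(K)          # g regenerates (here: permutes) K
assert set(f.values()) == set(L)          # ... whence f regenerates L
```

Here g sends 0 to f(K) = 1 (since f(0) = K falls outside K) and 1 to 0, a permutation of K, just as the proof requires of its derived mapping.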
Proof: consider a non-empty limit collection - I must show that it cannot be finite. First, we have that successor preserves distinctness on {collection} and so on our limit collection, and that the limit is closed under successor. Consequently, if we can show that successor does not regenerate our limit collection, we have shown that it is not finite. Note that members of our limit are not members of themselves, so they are distinct from their successors.
So, on any limit collection, successor preserves distinctness of inputs as distinctness of outputs. If I can now show that any non-empty limit contains a member which is not the successor of any of its members, I have then proven that the limit is not finite.
One can define the natural numbers in at least two ways:
Now, in principle, this might be an empty intersection: there might be
no limit collection containing 0.
Now, for all that "is x a member of A" is, in general, undecidable, there are some elements which we can prove do lie in Natural. These are, specifically, the values which are members of any limit collection of which 0 is a member.
First off, by definition, there's 0. But if a limit collection contains a
collection which is not a member of itself, it necessarily contains its successor.
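Iterating successor from 0 thus generates members which every limit collection containing 0 is obliged to contain; a small Python sketch (frozensets again standing in for collections):

```python
def successor(c):
    return c | frozenset([c])

n, provably_natural = frozenset(), []
for _ in range(5):
    provably_natural.append(n)
    n = successor(n)

# Viewed as a collection, the k-th such natural has exactly k members:
assert [len(k) for k in provably_natural] == [0, 1, 2, 3, 4]
```

Each entry is the successor of the one before, so all of them lie in Natural.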
A function is a mapping, f, for which delegate(f) rejects f: a set is a collection which is a function.
Loosely, this says that a set is a collection each member of which is constructible a priori to the collection itself. In particular, no set can be a member of itself: the collection of sets which are not members of themselves is the collection of all sets, and isn't a set (because, for it to be a set it would necessarily, as one of its members, have to be constructible before itself - which is, of course, absurd). This gets us out of Russell's paradox - we don't get a contradiction out of it not being a member of itself. [Discussions of sets are apt to assume the law of the excluded middle (reductio ad absurdum: if I can reason from the contrary of some statement to an absurdity, then the statement was true).]
One set, S, is said to be a subset of another, T, precisely if every member
of S is a member of T. For any set, S, the collection of subsets of S is a set:
it is called the power-set
of S and written P(S). There is a natural function
(once we have defined these) between P(S) and {(S|:2)}, for any set 2 having
exactly two elements, say 2={in,out}: this function maps a subset T of S to (S|
(T| t->in :), s->out :) and is trivially inverse to ({(S|:{in,out})}|
f->(|f:{in}) :P(S)). Consequently, some folk write 2^{S} (the
conventional notation for {(S|:2)}) for P(S).
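In Python terms the correspondence can be sketched as follows; an illustration only, with the two-member set rendered as the strings 'in' and 'out', subsets as frozensets, and functions as dicts:

```python
from itertools import chain, combinations

S = frozenset(['a', 'b', 'c'])

def power_set(S):
    members = list(S)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(members, r) for r in range(len(members) + 1))]

def characteristic(T):
    # sends a subset T of S to the function mapping T's members to 'in',
    # the rest of S to 'out'
    return {s: ('in' if s in T else 'out') for s in S}

def preimage_of_in(f):
    # the inverse: recover, from f, the members it sends to 'in'
    return frozenset(s for s in S if f[s] == 'in')

P = power_set(S)
assert len(P) == 2 ** len(S)       # whence the notation 2^S for P(S)
assert all(preimage_of_in(characteristic(T)) == T for T in P)
```

The two maps are mutually inverse on all eight subsets, realising the bijection between P(S) and {(S|:2)}.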
The empty collection, {} or empty
, is a set, is a subset of every set, is
equal to its Cartesian product with any collection, is thus a relation between
empty and any collection and is, indeed, a function from empty to any set. It
has only one subset, itself, and so, furthermore, is the only relation between
empty and any set (or vice versa). In particular, it is the only
(partial) function from empty to any set: when I wish to refer to it in that
rôle, for some set S, I'll write it as (empty::S) or (empty|:S); it is
trivially monic. All other sets are described as non-empty: any non-empty set
has at least one member. Because empty is the only relation between any set and
empty, any such relation, f, necessarily has (|f)=empty, whence it is also the
only function (::empty): as such, it is epic (whence (empty|:empty) is iso). In
particular, the existence of a function (S|:empty) implies S=empty.
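Counting functions between small sets bears this out; a sketch (my own helper, not notation from the text) treating a function from A to B as a dict assigning a member of B to each member of A:

```python
from itertools import product

def functions(A, B):
    # all len(B) ** len(A) assignments of a member of B to each member of A
    A = list(A)
    return [dict(zip(A, images)) for images in product(B, repeat=len(A))]

S = {1, 2, 3}
empty = set()
assert functions(empty, S) == [{}]      # exactly one function (empty::S)
assert functions(S, empty) == []        # no function (S|:empty), S non-empty
assert functions(empty, empty) == [{}]  # the lone (empty|:empty), trivially iso
```

The empty dict is the one function out of the empty set, and a non-empty domain with empty codomain admits none, matching the text's claims.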
In due course, I must embellish this page with a formal statement of the Zermelo-Fraenkel axiomatisation of set theory.
Written by Eddy.