A scientific or mathematical discourse concerns itself with entities external to the text (e.g. an electron or a circle; or equally the meaning of some ((typically) other) text); the discourse describes relationships among the external entities. To simplify the description of relationships, context may provide names for some entities; where a text within the discourse describes a relationship among external entities, it does so by associating names with the external entities so related and making some statements employing those names; the reader notionally substitutes the external entities in place of the names, in the process of reading meaning into the text, so that the text's meaning says things about the external entities, thereby characterising the relationships among them. The text of the statement may also, of course, use names to which that text itself gives meaning; and names to which the discourse's context has given meaning.
There may be arbitrarily many external entities (hence names) involved in a given relationship and there need not, necessarily, be any intrinsic order to the names. Generally, one endeavours to break down relationships in some way that ensures the namespaces are not huge; and one does not tend to think of the text as describing a relationship unless there are two or more names involved. The degenerate case involving no external entities is simply a statement; that involving just one amounts to a specification of a property that an entity may have.
Where the namespace involves (countably) many names, one can employ the following trick (due to Haskell B. Curry): select one name, call it n; interpret the relationship's text as if n's meaning had been supplied by context; this reads it as a relationship among the entities associated with the remaining names; call this the curried relationship. We can now construct a two-name relationship using n and some name not otherwise used in the text, say m: in so far as our original relationship's text holds true with some particular meaning for n, our two-name relationship relates this meaning to the curried relationship, m, obtained by having context provide this meaning for n in the original relationship. By this means we can describe any many-name relationship as a two-name one, one of the names in which expects to mean a relationship with one fewer name – which can again be described as a two-name relationship, either because it is one (we started with a three-name one) or because it still has many names; many, for these purposes, means more than two. Thus we can encode all relationships using relationships with at most two names each.
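Curry's trick can be sketched in code; the following is illustrative only (the three-name relationship "a + b equals c" and the names rel3, curried, fully_curried are inventions of this sketch, not part of the discussion above).

```python
# A three-name relationship, modelled as a predicate on three meanings;
# for illustration, the relationship "a + b equals c".
def rel3(a, b, c):
    return a + b == c

# Curry's trick: select one name, c, and read rel3 as a two-name
# relationship relating each meaning of c to the curried relationship
# among the remaining names, a and b.
def curried(c):
    return lambda a, b: rel3(a, b, c)

# Repeating the step on the curried relationship leaves nothing but
# relationships with at most two names:
def fully_curried(c):
    return lambda b: (lambda a: rel3(a, b, c))

assert curried(5)(2, 3)            # 2 + 3 equals 5
assert fully_curried(5)(3)(2)      # same fact, one name supplied per step
assert not fully_curried(5)(3)(1)  # 1 + 3 does not equal 5
```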
This process obliges one, when encoding a many-name relationship, to select an ordering of the names; so one may as well use the ordering to describe two-name relationships in terms which don't attend to the names used. I'll define a relation to be a two-name relationship accompanied by a choice of order among the two names; and use r relates x to y as my representation for the statement of r's relationship with x used in place of r's first name and y in place of its second. The relationship, with this substitution: holds true precisely if r relates x to y; is false precisely if r does not relate x to y; and is undecidable precisely if r relates x to y is undecidable.
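For a concrete (if entirely illustrative) instance, a relation can be modelled as a predicate of two arguments; the name divides and its definition are assumptions of this sketch, not drawn from the text above.

```python
# A relation as a predicate of two arguments: divides relates x to y
# precisely if x divides y. The chosen order of the two names is what
# makes this a relation rather than a bare two-name relationship.
def divides(x, y):
    return y % x == 0

assert divides(3, 12)      # divides relates 3 to 12
assert not divides(5, 12)  # divides does not relate 5 to 12
```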
We can also describe any one-name relationship as a relation which relates a to b if a and b mean the same thing and the relationship holds true when that meaning is used in place of its one name. If we really want, we can even describe a no-name relationship, i.e. statement, by a two-name relationship which relates x to y, regardless of the meanings of x and y, precisely if the statement is true. For many-name relationships, we can use Curry's trick; by convention, I'll take the name selected at each step as the last (or rightward) name of the two-name relationship at that step (see below). This enables us to describe all relationships in terms of relations.
There are other ways of encoding relationships, some of which I shall use: but the tools needed to do so can be built using relations. For example, one can encode a relationship, R, as a collection of mappings, each of which, f, is a mapping from R's names to meanings, one per name; for which R's text reads true when each name is replaced by f's meaning for it. But, for that, one needs collections and mappings.
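That alternative encoding can also be sketched; the relationship "a + b equals c" and the name satisfies are, again, inventions of this sketch.

```python
# A relationship, R, as a collection of mappings from R's names to
# meanings: here a mapping is a dict, and the collection is given by the
# predicate deciding membership.
def satisfies(f):
    # f maps each of R's names, "a", "b" and "c", to a meaning; it
    # belongs to the collection encoding R precisely if R's text reads
    # true when each name is replaced by f's meaning for it.
    return f["a"] + f["b"] == f["c"]

assert satisfies({"a": 2, "b": 3, "c": 5})
assert not satisfies({"a": 2, "b": 3, "c": 6})
```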
The way evaluation of functions is (orthodoxly) written has g(f(x,y)) call for x to be passed to f as an input, followed by y, and for f's output to be passed on to g. Thus functions are evaluated in right-to-left order while inputs are supplied in left-to-right order; keeping subtexts near to the subtexts they are most closely involved in (and having (f(x))(y) and f(x,y) equivalent) obliges argument-supply and function-evaluation to read in opposite orders.
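The opposed reading orders can be seen in a short sketch; f and g here are arbitrary stand-ins chosen for this illustration.

```python
# In g(f(x, y)), inputs x then y are supplied left-to-right, while the
# functions are evaluated right-to-left: f first, then g.
def f(x, y):
    return x - y

def g(z):
    return z * z

# Having (f(x))(y) and f(x, y) equivalent:
def f_curried(x):
    return lambda y: f(x, y)

assert g(f(5, 2)) == 9            # f(5, 2) = 3, then g(3) = 9
assert f_curried(5)(2) == f(5, 2)
```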
When g or f is replaced by some substantial text which denotes a function, it will be easier for readers to follow what is happening, in function evaluation, if the rightward end of each function's text discusses its input – which appears to the right of the function when it is evaluated – and the leftward end of each function's text discusses its output, which will be needed by a function applied to its left – i.e. after it in evaluation order. So, in support of right-to-left order for evaluation, texts I'll use to denote mappings (which will be my approximate equivalent of orthodoxy's functions) will discuss output at their left-ward end, input at their right-ward end.
Such texts will be particular uses of denotational templates which will, in general, denote relations; now r(x) lacks an orthodox meaning when r is a relation, so I'm not obliged to tie standard reading of relations to the unavoidable reversal implicit in function evaluation, or to my consequent right-to-left reading of mappings. Given that the flow of text, as I write it, is left-to-right, I chose to read relations in that order: a relation relates its left values to its right values, though reading the relation as a mapping, if it is one, we find it maps its right values to its left values; so, as a relation, a mapping relates its outputs to its inputs.
Mappings decidedly care about being read from right to left; relations are less fussy; so where a relation relates one value, x, to another, y, I'll indicate this using the token ← (which might conceivably display as a left-pointing arrow) with x on its left and y on its right; as far as reading a text as a relation goes, x ← y just says the relation relates x to y; but when the relation is given to be a mapping, x ← y says y is mapped to x – i.e. that there is only one value the relation relates to y, namely the given x.
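A mapping read as a relation, relating its outputs to its inputs, can be sketched as follows; the mapping square and the predicate square_relates are inventions of this sketch.

```python
# The mapping square, read as a relation: x <- y becomes the predicate
# square_relates(x, y).
def square(y):
    return y * y

def square_relates(x, y):
    # The relation relates x to y; since square is a mapping, the x it
    # relates to a given y is unique, namely square(y).
    return x == square(y)

assert square_relates(9, 3)      # 9 <- 3: square maps 3 to 9
assert not square_relates(8, 3)  # no other value is related to 3
```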
One case in which I shall use Curry's encoding is in the description of binary operators. In general a binary operator is really a three-name relationship: it combines two values to produce a third. Given the above order for mappings, c = b*a could be encoded by *'s associated relation relating c to a relation which relates b to a. This maps, to each possible composite, c, the relation (: c = b*a; b←a :), which looks like it offers an interesting (if unorthodox) perspective…
Equally (and more orthodoxly), for any a, we can identify the mapping which maps b to a*b for each b for which this is defined: thereby identify the mapping, call it s, which yields this as output when given a as input; which clearly also encodes *, this time as a relation which relates s(a) to a, wherein s(a) relates a*b to b; when s(a) is a mapping, a*b = s(a,b).
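This second encoding is the familiar shape of currying; in the following sketch * is, purely for illustration, ordinary multiplication, and the name s matches the text above.

```python
# s yields, for input a, the mapping which takes b to a*b; s thereby
# encodes the binary operator * as a relation relating s(a) to a.
def s(a):
    return lambda b: a * b

assert s(3)(4) == 12  # a*b = s(a)(b): s(3) relates 3*b to each b
```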
Inevitably the two descriptions are equivalent – via permutation operations the determined reader can build out of transpose and reverse – and I may well exploit both. The second description (is more (familiar and) orthodox and) has practical virtues which make it my preferred encoding of the binary operator.
An expression using a binary operator, *, denotes the result of combining a with b as a*b. One may use an arbitrary relation, star, to define a binary operator, *: the expression a*b then asserts that star relates, to a, some relation which relates, to b, some value, c; and a*b denotes an arbitrary value c for which this holds true. When * is then used to define ({non-empty relations}: (: a*b ← b :) ← a :), call this s, the relation (: a*b ← b :) = s(a) is the union of the relations which star relates to a (whenever at least one such relation is non-empty): star and s encode the same 3-relationship, but may encode it as distinct relations.
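How a given star and the derived s may encode the same three-name relationship as distinct relations can be sketched; here (as an assumption of this sketch only) a relation is represented as a set of (left, right) pairs, and the sample values are invented.

```python
# star relates, to each a, one or more relations each of which relates
# some a*b values to the corresponding b values (multiplication is the
# illustrative operator).
star = [
    (2, {(6, 3), (8, 4)}),  # one relation star relates to 2
    (2, {(10, 5)}),         # another relation star relates to 2
]

def s(a):
    # s(a) is the union of the relations star relates to a.
    out = set()
    for left, rel in star:
        if left == a:
            out |= rel
    return out

# star and s encode the same three-name relationship (2*3=6, 2*4=8,
# 2*5=10), but as distinct relations:
assert s(2) == {(6, 3), (8, 4), (10, 5)}
assert s(7) == set()  # star relates no relation to 7
```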
Two no-name relationships immediately arise, corresponding to the statements false and true: these I encode as the relations empty and all; empty relates x to y iff false, i.e. it relates nothing to anything; all relates x to y iff true, i.e. it relates everything to everything.
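As predicates (the representation assumed in the sketches here), these two relations are immediate; the name all_relation stands in for all, which is a Python keyword-adjacent builtin.

```python
# The two no-name relationships, encoded as relations:
def empty(x, y):
    return False  # relates x to y iff false: relates nothing to anything

def all_relation(x, y):
    return True   # relates x to y iff true: relates everything to everything

assert not empty(1, 2)
assert all_relation(1, 2)
```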
I encode a one-name relationship as a relation which relates x to y precisely if x and y mean the same as one another and, when that meaning is substituted for the single name of the relationship, the resulting statement holds true. The one-name relationship which is false regardless of what is substituted for the name is thus encoded by empty – it doesn't relate anything to anything.
The one-name relationship which holds true for every meaning gives us a relation for which the name the universal identity is appropriate: it relates x to x for every meaning x but otherwise doesn't relate anything to anything; it subsumes the encoding of any one-name relationship; any relation, r, that it subsumes may be interpreted as an encoding of a one-name relationship – namely the one whose statement, using name n, is r relates n to n. So relations subsumed by the universal identity are synonymous with one-name relationships, a.k.a. unary predicates; I call their encodings collections (which correspond loosely with orthodoxy's sets).
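The universal identity and the collections it subsumes can be sketched as predicates; the names universal_identity, collection and evens are inventions of this sketch.

```python
# The universal identity relates each meaning to itself and nothing else.
def universal_identity(x, y):
    return x == y

# A collection encodes a one-name relationship as the relation relating
# each satisfying meaning to itself; every collection is subsumed by the
# universal identity.
def collection(predicate):
    return lambda x, y: x == y and predicate(x)

evens = collection(lambda n: n % 2 == 0)
assert evens(4, 4)      # relates 4 to 4: 4 is in the collection
assert not evens(3, 3)  # 3 fails the predicate
assert not evens(2, 4)  # never relates distinct meanings
```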
A classic paper on representing information as relationships: Codd's paper, The Relational Model for Database Management, based on one from June 1970's Communications of the ACM: A Relational Model of Data for Large Shared Data Banks.