I define composition on relations; it's a combiner and, in an important sense, it serves as the universal combiner. Its definition is, given relations r and s: r&on;s relates x to z precisely if r relates x to some y which s relates to z.
It can be used to combine any two relations whatsoever, albeit many cases give the empty relation as composite, simply because the left operand has no right value that's a left value of the right operand (there is no y, in the above, that r has as right value and s has as left value). Swapping the order of its operands typically changes the result: r&on;s need not be equal to s&on;r.
From the definition, given any relations a, b and c, observe that (a&on;b)&on;c relates w to z precisely if a relates w to some x which b relates to some y which c relates to z, precisely if a&on;(b&on;c) relates w to z; whence (a&on;b)&on;c = a&on;(b&on;c) – composition is associative. Thus we are able to write a&on;b&on;c unambiguously, and likewise for longer (finite) sequences of relations.
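The associativity argument can be checked mechanically if we model a relation as a finite set of (left value, right value) pairs; the compose helper below is a hypothetical Python rendering of the definition above, not anything the text itself supplies.

```python
def compose(r, s):
    # r&on;s relates x to z precisely if r relates x to some y that s relates to z
    return {(x, z) for (x, y) in r for (u, z) in s if u == y}

a = {('w', 1), ('w', 2)}
b = {(1, 'p'), (2, 'q')}
c = {('p', 'z'), ('q', 'z')}

# (a&on;b)&on;c = a&on;(b&on;c)
assert compose(compose(a, b), c) == compose(a, compose(b, c)) == {('w', 'z')}
```

Any finite examples would do; the assertion mirrors the w-to-z chase in the argument above.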
Every relation is an operand that &on; accepts both on its right and on its left; and every composite of relations is a relation, so {relations} is closed under &on;, i.e. &on; is closed. Given associativity, it is thus a combiner. As it combines any relation with any relation, it is furthermore flat. It is neither cancellable nor complete; it does have an identity (the universal identity (: x ←x :) composes on either side of any r to give r) but most relations don't have inverses under it (if only for want of some left or right values of the universal identity).
Given relations r, s consider reverse(r&on;s); this relates z to x precisely if r&on;s relates x to z, precisely if r relates x to some y that s relates to z, precisely if reverse(s) relates z to some y that reverse(r) relates to x, precisely if reverse(s)&on;reverse(r) relates z to x. Thus reverse(r&on;s) = reverse(s)&on;reverse(r) and reverse is a (self-inverse) coautomorphism of the binary operator &on; (or cofunctor on its category).
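The reverse law admits the same finite-set sketch; compose and reverse are hypothetical helpers mirroring the definitions, under the same modelling assumption of relations as sets of pairs.

```python
def compose(r, s):
    # r&on;s relates x to z precisely if r relates x to some y that s relates to z
    return {(x, z) for (x, y) in r for (u, z) in s if u == y}

def reverse(r):
    # reverse(r) relates y to x precisely if r relates x to y
    return {(y, x) for (x, y) in r}

r = {('a', 1), ('b', 1), ('b', 2)}
s = {(1, 'x'), (2, 'y')}

# reverse(r&on;s) = reverse(s)&on;reverse(r)
assert reverse(compose(r, s)) == compose(reverse(s), reverse(r))
```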
A mapping is simply a relation that, for any given right value (input), only relates one left value to it (the output to which it maps the input); a monic is likewise a relation that, for any given left value, only relates it to one right value; the reverse of a mapping is monic, and vice versa. Formally, a relation r is a mapping precisely if r relates w to y and x to y implies w = x; r is monic precisely if r relates x to y and x to z implies y = z
. If we now compose two mappings, f&on;g, and the composite relates v to z and w to z, we can infer that it relates v via some x to z and w via some y to z; since g then relates x to z and y to z, with g a mapping, we can infer x = y; thus f relates v to x and w to x = y whence, as f is a mapping, v = w. This being true whenever f&on;g relates v and w to the same z, we can infer that f&on;g is a mapping: any composite of mappings is a mapping. Likewise, every composite of monics is a monic.
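Under the same finite modelling, the mapping and monic conditions become one-line checks, and the closure of mappings under composition can be confirmed on an example (all helper names here are my own illustration).

```python
def compose(r, s):
    # r&on;s relates x to z precisely if r relates x to some y that s relates to z
    return {(x, z) for (x, y) in r for (u, z) in s if u == y}

def is_mapping(r):
    # r relates w to y and x to y implies w = x
    return all(w == x for (w, y) in r for (x, z) in r if y == z)

def is_monic(r):
    # r relates x to y and x to z implies y = z; equivalently, reverse(r) is a mapping
    return is_mapping({(y, x) for (x, y) in r})

f = {(10, 'a'), (20, 'b')}
g = {('a', 'p'), ('b', 'q'), ('b', 'r')}

assert is_mapping(f) and is_mapping(g) and not is_monic(g)
assert is_mapping(compose(f, g))   # composite of mappings is a mapping
```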
For any relation r, its collection of left values is (: x ←x :r) and its collection of right values is (r: x ←x :). I also define end-relations, on these two collections (respectively), by: (| r :) relates w to x precisely if {u: r relates w to u} subsumes {u: r relates x to u}; and (: r |) relates y to z precisely if {u: r relates u to y} subsumes {u: r relates u to z}.
The former is r's left-relation, among its left values; the latter is
its right-relation, among its right values. Each is transitive and reflexive
(because subsumes
is). So now let us consider how the collections of
values and the end-relations of a composite relate to those of the relations
composed.
Every left value of r&on;s is necessarily a left value of r, so (: x←x :r) subsumes (: x←x :r&on;s) and likewise (s: x←x :) subsumes (r&on;s: x←x :). It's eminently possible for r to have left values that it only relates to right values that aren't left values of s, and likewise with left swapped with right and r swapped with s, so in each case the subsumption can definitely be strict – r can have left values, and s can have right values, that r&on;s lacks.
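A small instance makes the strictness concrete: give r a left value whose only right values miss s's left values, and s a right value similarly out of r's reach, and both subsumptions are strict (again a sketch with relations as finite sets of pairs).

```python
def compose(r, s):
    # r&on;s relates x to z precisely if r relates x to some y that s relates to z
    return {(x, z) for (x, y) in r for (u, z) in s if u == y}

def lefts(r):
    return {x for (x, y) in r}

def rights(r):
    return {y for (x, y) in r}

r = {('a', 1), ('b', 2)}     # r only relates b to 2, which isn't a left value of s
s = {(1, 'z'), (3, 'w')}     # s only relates 3 to w, and 3 isn't a right value of r
rs = compose(r, s)           # so r&on;s is just {('a', 'z')}

assert lefts(r) > lefts(rs)      # strictly subsumes
assert rights(s) > rights(rs)    # strictly subsumes
```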
Every value of (| r&on;s :) is a left value of r; and we may have some left values of r that aren't left values of r&on;s. In particular, r may relate x to y, x to b and c to b, for some y that is a left value of s, some b that isn't and some c that r doesn't relate to anything else. In such a case, {u: r relates x to u} subsumes {u: r relates c to u}, so (| r :) relates x to c; but c is not a value of (| r&on;s :); so (| r&on;s :) definitely doesn't subsume (| r :). On the other hand, as x is a left value of r&on;s (because r relates x to y, which is a left value of s), any w that (| r :) relates to x must also (be related by r to y, hence) be a left value of r&on;s; and, since r relates w to every y that r relates x to, via which r&on;s can relate x to anything, r&on;s must in fact relate w to everything it relates x to; so x is a left value of r&on;s and (| r :) relates w to x implies (| r&on;s :) relates w to x. Thus we find that (| r&on;s :) subsumes (: (| r :) :r&on;s), the restriction of (| r :) to those left values that r does actually relate to some left value of s. The corresponding statement for the right-relations is a little more unwieldy, (: r&on;s |) subsumes (: (: s |) :(r&on;s:x←x:)); but, in any case, what we have isn't particularly neat.
So let's look instead at (| r :s), the left-equivalence of the restriction of r to only those right values that s has as a left value; this cuts out any left values of r that aren't left values of r&on;s. It may also relate some left values of r to one another that (| r :) didn't: if r relates x to y, w to y, x to b and w to c, with b and c distinct, while s has y but neither b nor c as a left value, then (| r :) won't relate x to w or w to x, while (| r :s) relates each to the other. We can, however, be sure that (| r :s) subsumes (: (| r :) :r&on;s), just as (| r&on;s :) does, for exactly the same reason. For the right-relation, naturally, (r: s |) likewise subsumes (: (: s |) :(r&on;s:x←x:)).
So first suppose that (| r :s) relates w to x; we know r relates each of them to at least some left value of s, hence both are left values of r&on;s. So suppose r&on;s relates x to z; it necessarily does so via some y, that r relates x to and that s relates to z; as (| r :s) relates w to x, r also relates w to y, hence r&on;s relates w to z. Thus (| r :s) relates w to x implies that r&on;s relates x to z implies r&on;s relates w to z, hence that (| r&on;s :) relates w to x; so (| r&on;s :) subsumes (| r :s). Likewise, (: r&on;s |) subsumes (r: s |).
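The subsumption of (| r :s) by (| r&on;s :) can be exercised on the worked example from above, with hypothetical helpers computing left-relations of finite relations; here (| r :s) is obtained by restricting r's right values to s's left values before taking the left-relation.

```python
def compose(r, s):
    # r&on;s relates x to z precisely if r relates x to some y that s relates to z
    return {(x, z) for (x, y) in r for (u, z) in s if u == y}

def lefts(r):
    return {x for (x, y) in r}

def image(r, w):
    # the collection of right values that r relates w to
    return {y for (x, y) in r if x == w}

def left_relation(r):
    # (| r :) relates w to x precisely if r relates w to everything it relates x to
    return {(w, x) for w in lefts(r) for x in lefts(r) if image(r, w) >= image(r, x)}

def restrict_rights(r, keep):
    # r restricted to only those right values in the collection keep
    return {(x, y) for (x, y) in r if y in keep}

# the worked example from the text: s has y, but neither b nor c, as a left value
r = {('x', 'y'), ('w', 'y'), ('x', 'b'), ('w', 'c')}
s = {('y', 'z')}
rs = compose(r, s)

# (| r&on;s :) subsumes (| r :s)
assert left_relation(rs) >= left_relation(restrict_rights(r, lefts(s)))
# while (| r :) relates neither of x, w to the other
assert ('x', 'w') not in left_relation(r) and ('w', 'x') not in left_relation(r)
```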
When (| r&on;s :) relates w to x, it's possible that r relates w and x to some y that s relates to z and r also relates x, but not w, to some c that s relates to z; because this last doesn't add anything to what r&on;s relates x to, (| r&on;s :) doesn't notice it, but (| r :s) does, so won't relate w to x. Thus the converse of the preceding doesn't hold: (| r :s) needn't subsume (| r&on;s :) and, likewise, (r: s |) needn't subsume (: r&on;s |).
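The counterexample described here is small enough to run: with relations as finite sets of pairs and the same hypothetical left-relation helpers, (| r&on;s :) relates w to x while (| r :s) does not.

```python
def compose(r, s):
    # r&on;s relates x to z precisely if r relates x to some y that s relates to z
    return {(x, z) for (x, y) in r for (u, z) in s if u == y}

def lefts(r):
    return {x for (x, y) in r}

def image(r, w):
    return {y for (x, y) in r if x == w}

def left_relation(r):
    # (| r :) relates w to x precisely if r relates w to everything it relates x to
    return {(w, x) for w in lefts(r) for x in lefts(r) if image(r, w) >= image(r, x)}

def restrict_rights(r, keep):
    return {(x, y) for (x, y) in r if y in keep}

# r relates w and x to y, and x (but not w) to c; s relates both y and c to z
r = {('w', 'y'), ('x', 'y'), ('x', 'c')}
s = {('y', 'z'), ('c', 'z')}
rs = compose(r, s)   # relates each of w, x to z only

assert ('w', 'x') in left_relation(rs)   # (| r&on;s :) relates w to x
assert ('w', 'x') not in left_relation(restrict_rights(r, lefts(s)))   # (| r :s) doesn't
```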
The tendency to get empty composites, arising from composition happily combining any two arbitrary relations, somewhat limits what one can usefully say about composition; however, we can define restrictions of it that play nicer, at the expense of being more picky about which relations they compose. We may even be able to improve on the preceding section's slightly unsatisfactory results.
Orthodoxy spends a lot of time looking at functions (U: |V)
for various sets U, V; and typically only wants to compose functions (U: f |V)
after functions (V: g |W), for some set W; because every left value of g is a
right value of f, we are assured that the composite is (U: f&on;g |W); it hasn't
ignored any of g, although it might have ignored some of f. The price I pay for working with equivalences in place of sets is that I have to be rather more careful about what condition I impose in place of ‘every left value of g is a right value of f’. When U, V and W get replaced by equivalences, I need (U: f |V) to respect the equivalence U: if f relates u to x and v to y with x and y
V-equivalent, I need u and v to be U-equivalent for f to feel enough like a
mapping; and I need f to at least have a right value V-equivalent to each member
of V, albeit some members of V might not be right values of f. Given similar
for g, similar should then be implied for f&on;g. Since g's left values are apt
to be a canonical (from g's point of view) member in each V-equivalence class, I
think I do want to insist that (U: f |V) actually does have every member of V as
a right value, not just a right value V-equivalent to each member of V. The
V-equivalence of any left values of g for W-equivalent right values then
combines with the U-equivalence of f's left values for the right values it gets
from g to ensure f&on;g's left values for W-equivalent right values are indeed
U-equivalent. I think (: (:U|)&on;f |) needs to subsume V, in fact; and I think
it's probably satisfactory to oblige f to do enough of the (:U|) part to ensure
that (:f|) subsumes (|V:).
So I describe composition of r after s, r&on;s, as: clean precisely if (:r|) subsumes (|s:); co-clean precisely if (|s:) subsumes (:r|); and iso-clean precisely if each of (:r|) and (|s:) subsumes the other.
In each case, the given assertion about end-relations also implies a matching assertion for the end-collections (r: x←x :) in place of (:r|) and (: x←x :s) in place of (|s:).
The first of these is the usual constraint for composability of functions (with support for equivalences tacitly included) and is the one that'll see most use, in particular by homomorphisms. These restricted compositions are categoric rather than flat (as raw composition is), as are the proper compositions of homomorphisms (of various types) derived (usually) from clean composition.
When a composite r&on;s is clean, (s: x←x :) = (r&on;s: x←x :) because (the left side fatuously subsumes the right and) every left value, that s relates to any given right value, is a right value of r so r relates something to that hence r&on;s relates this something to the given right value of s, making it also a right value of r&on;s. In like manner, for a co-clean composite, we get (: x←x :r) = (: x←x :r&on;s); and, in the iso-clean case, both equalities hold. The main upshot of this is that composing two functions cleanly gives a function that accepts all the inputs that the right component does; f&on;g accepts all inputs g accepts.
Next consider the case of three relations r, s, t with r&on;s clean and s&on;t clean; because (r&on;s: x←x :) = (s: x←x :) which subsumes (: x←x :t), the composition (r&on;s)&on;t is also clean; and, since (r: x←x :) subsumes (: x←x :s) which necessarily subsumes (: x←x :s&on;t), the composition r&on;(s&on;t) is also clean. Thus a chain of composition, in which each adjacent pair compose cleanly, is clean as a whole, not just at each pair. Naturally, the same duly holds also for co-clean and iso-clean compositions.
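Taking the collection-level form of cleanness (every left value of s is a right value of r), both the preservation of right values and the chaining property can be checked on a small instance; clean here is my own sketch of that collection-level test, not the end-relation version.

```python
def compose(r, s):
    # r&on;s relates x to z precisely if r relates x to some y that s relates to z
    return {(x, z) for (x, y) in r for (u, z) in s if u == y}

def lefts(r):
    return {x for (x, y) in r}

def rights(r):
    return {y for (x, y) in r}

def clean(r, s):
    # collection-level cleanness: every left value of s is a right value of r
    return rights(r) >= lefts(s)

r = {('a', 1), ('b', 2), ('b', 3)}
s = {(1, 'x'), (2, 'y')}
t = {('x', 'p'), ('y', 'q')}

assert clean(r, s) and clean(s, t)
assert rights(compose(r, s)) == rights(s)   # a clean composite keeps all of s's right values
# each adjacent pair clean makes the whole chain clean, however it's bracketed
assert clean(compose(r, s), t) and clean(r, compose(s, t))
```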
This is an incomplete discussion !
We can use the relationship between end-relations of composites and those of components to (at least partially) characterise whether a given relation may be expressed as a composite, for given candidates as left-most and right-most component: if r is to be factorised as compose([f,…,g]), (|r:) and (:r|) must subsume ((:x←x:r)| (|f:) :) and (: (:g|) |(r:x←x:)), respectively. [The left | seen in ((:x←x:r)| (|f:) :) asserts that every left value of r is a left value of f, while the whole denotation stands for the restriction of (|f:) to r's left values.]
If we allow any components between f and g – the … in [f,…,g] – we can reliably replace them all by their composite – i.e. replace [f,…,g] with [f, compose([…]), g] – so the only cases to consider are whether f&on;g or f&on;q&on;g can be r. For given f we can ask whether there's some g for which f&on;g is r; we can trivially pose this as a request for some q for which f&on;q&on;(:r|) is r; then g is q&on;(:r|). Likewise, for given g, asking for an f for which f&on;g is r can be posed as a request for q with (|r:)&on;q&on;g = r; f is then (|r:)&on;q. So, in all cases, questions about the ability to factorise r via given left and/or right components are, without loss of generality, of the form: given r, f and g, is there a q for which f&on;q&on;g = r ? Clearly, the above gives necessary conditions; are they sufficient ?
Now, if we're given r, f and g for which (|r:) subsumes ((:x←x:r)| (|f:) :) and (:r|) subsumes (: (:g|) |(r:x←x:)), can we find a q for which r is f&on;q&on;g ? Note that, given arbitrary relations f, g and r, we can use (|f:)&on;r&on;(:g|) to obtain a replacement for r which does meet the given conditions; albeit the replacement may quite readily be empty. Note that if either (|r:) or (:r|) is empty then so is the other and so is r, in which case q = empty solves our problem. If either f or g is empty and r satisfies the constraints given, then r is also empty. Indeed, when r is empty, q = empty will always give f&on;q&on;g = empty, regardless of f and g: empty is factorisable via anything and only empty is factorisable via empty.
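For small finite relations the question can even be settled by brute force: any q that works may as well draw its pairs from f's right values crossed with g's left values, so a powerset search over those candidates either exhibits a q or shows none exists. The factor_via helper below is purely illustrative.

```python
from itertools import chain, combinations

def compose(r, s):
    # r&on;s relates x to z precisely if r relates x to some y that s relates to z
    return {(x, z) for (x, y) in r for (u, z) in s if u == y}

def factor_via(r, f, g):
    # search for a q with f&on;q&on;g = r; q's pairs need only be drawn from
    # f's right values crossed with g's left values
    candidates = [(y, w) for y in {y for (x, y) in f} for w in {w for (w, z) in g}]
    for bits in chain.from_iterable(
            combinations(candidates, n) for n in range(len(candidates) + 1)):
        q = set(bits)
        if compose(compose(f, q), g) == r:
            return q
    return None   # no such q: r isn't factorisable via f and g

f = {('a', 1), ('b', 2)}
g = {('p', 'x'), ('q', 'y')}
r = {('a', 'x'), ('b', 'y')}
q = factor_via(r, f, g)
assert q is not None and compose(compose(f, q), g) == r
```

Note that, as the text observes, when r is empty the empty q always succeeds, regardless of f and g.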
Written by Eddy.