The time has come (actually, it had come before 2002) to shred this page into a sub-directory and merge it with a page on linearity. It is presently a mess of fragments from different attacks on the subject matter; it doesn't validate, and it uses an old notation (except for the initial part, from 2002/Jan, which uses a newer one). Here I'll add dereferencing links if I remove any named anchors from this page:

- additive domain characterized by V = (: (V: v+x←x :) ←v :) – or maybe x←v+x, i.e. subtraction – so (:V|) are the vectors while (|V:) are the translations – or displacements.
- scalings of V commute with all other maps that respect V, but there's more to it than that (or respect is a sophisticated notion); mappings from V which respect V and commute with all scalings are linear (the equivalent truths for mappings to V are implied by the point-wise combination laws)
- there's a continuum-ness to be asked of the scalars, at some point, somehow
- the scalars imply the positives, effectively as (equivalence classes of) least upper bounds of increasing sequences of positive rationals
- the scalars (as a linear space) support a quadratic form, scalar x*y for scalar x, y, for which x*x is always positive, unless x is an additive identity, in which case x*x is x.
- one can build simplices out of {({positives}:h:n): natural n, sum(h) = one}, a.k.a. the universal simplex (for given positives) [Using one for the unit in {positives} to distinguish from 1 = {{}} in {naturals}.]
- the canonical simplices are left values of psimplex = (: {({positives}:h:N): sum(h) = one} ←N :{naturals}); each psimplex(N) has vertices ({one}::{n}) for n in N
  - the n-simplex, with 1+n vertices, is psimplex(1+n), hence the silent p prefix on the mapping's name
  - psimplex(0), i.e. the −1 simplex, is {}
  - psimplex(1), the 0-simplex or point, is {[one]}
  - psimplex(2), the 1-simplex or chord, is {[s,t]: s+t = one, s, t positive} ∪ {({one}:|{i}): i in 2}
- so unit = (| h(0)←h :psimplex(2)), which quietly ignores h if 0 isn't in (:h|), = (| transpose(psimplex(2),0) :); unite(: (|h:) ←h :psimplex(2)) is all the positives up to one
- i in n implies psimplex(n) subsumes psimplex(i)
- r = ({mappings ({positives}::{naturals})}: sum(:h:N)=1; (:h:N)←N :{finite}); {r(n)} = psimplex(n)
- these enable constant up to order n notions for natural n:
  - constant up to order 0 constitutes continuity
  - constant up to order 1 amounts to piece-wise differentiable
  - differing by a linear map from constant at order 1 is differentiable
  - constant up to order 2 implies differentiable (with derivative zero)
  - constant up to order n may fairly be described as [continuous, piece-wise linear, quadratic, cubic, quartic, quintic, …](n)
- simplices – and shrink – thus deliver our topology, i.e. continuity – or regenerate it, if the topology was part of the sophistication of scalars – along with the gradients of chords when computing derivatives; the ability to map a simplex (for pertinent positives) non-degenerately (which more-or-less means monic) into a linear domain (fpp) tells us that the simplex doesn't have too many vertices for our dimension
- We can also examine unit and grow = {positives not in unit}; is q > p ? if rationally commensurate, trivial; else try: is q/p > 1 ? and promptly come back to: is p + N > 0 ? which looks scary for irrational p, natural N
- hmm: values smaller than 1 should show themselves via powers tending to 0; for natural n:
  - 1<x: 0 < power(n,x) < power(1+n,x)
  - 1 is the identity, repeat(once)
  - 0<x<1: 0 < power(1+n,x) < power(n,x)
  - when available: 0 is an additive identity, easy to spot
  - −1<x<0: −1 < power(2.n+1,x) < power(2.n+3,x) < 0 < power(2.n+2,x) < power(2.n,x)
  - −1 is a square root of 1 and the additive inverse of 1, easy to spot
  - x<−1: power(2.n+3,x) < power(2.n+1,x) < −1 < 0 < power(2.n,x) < power(2.n+2,x)
- a<b means a+p=b for some positive p; initially the only positives we know of are rationals. however, if a*b+p=b …

early Jan 2002

In many mathematical contexts, particularly those used to model physical reality, it is usual to involve some form of scalar. This is usually either the real numbers or the complex numbers. The predominant properties of the scalar domain are its additive, multiplicative, integral and differential structures. It may also have an ordering or a conjugation. For the purposes of my pages, I shall define a very basic scalar domain and point to two particular common cases as covering what I need of my scalars.

A scalar domain consists of a topological space, D, and a pair of epic continuous associative binary operators, + and . in {(D×D|:D)}, known as addition and multiplication respectively, with:

- multiplication distributing over addition:
- that is, for any a, b, c in D, (a+b).c = (a.c) + (b.c) and c.(a+b) = (c.a)+(c.b);
- addition cancellable:
- that is, a+b = a+c implies b=c; and
- a multiplicative identity,
- written 1: for any a in D, a.1 = a = 1.a.
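Since each axiom is finitely checkable on a finite carrier, a brute-force test makes the definition concrete. Below is a minimal Python sketch (the name `is_scalar_domain` and the choice of the integers mod 5 as test case are my illustration, not part of the definition); continuity is automatic in the discrete topology, so only the algebra needs checking.

```python
from itertools import product

def is_scalar_domain(D, add, mul):
    """Brute-force check of the scalar-domain axioms on a finite carrier D."""
    D = list(D)
    for a, b, c in product(D, repeat=3):
        # associativity of both operators
        if add(add(a, b), c) != add(a, add(b, c)): return False
        if mul(mul(a, b), c) != mul(a, mul(b, c)): return False
        # multiplication distributes over addition, on both sides
        if mul(add(a, b), c) != add(mul(a, c), mul(b, c)): return False
        if mul(c, add(a, b)) != add(mul(c, a), mul(c, b)): return False
        # addition cancellable, on both sides
        if add(a, b) == add(a, c) and b != c: return False
        if add(b, a) == add(c, a) and b != c: return False
    # both operators epic: every member arises as a sum and as a product
    sums = {add(a, b) for a, b in product(D, repeat=2)}
    prods = {mul(a, b) for a, b in product(D, repeat=2)}
    if sums != set(D) or prods != set(D): return False
    # a multiplicative identity exists
    return any(all(mul(e, a) == a == mul(a, e) for a in D) for e in D)

# The integers mod 5, with the discrete topology, pass:
print(is_scalar_domain(range(5), lambda a, b: (a + b) % 5,
                       lambda a, b: (a * b) % 5))  # True
```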

Scalar domains contain just enough to be the foundation for discussions of linearity and, consequently, of both differentiation and vectors.

Just this minimal one, for now. I must re-shuffle so that the familiar ones come and join in.

Any scalar domain must contain a multiplicative identity, 1. It does, in fact, suffice to have just this one element. There is precisely one binary operator ({1}×{1}|:{1}), which maps (1,1)->1; this is the trivial non-empty group. Furthermore, this binary operator distributes over itself: if we write it as both . and + (to separate out the roles in distributivity), we see 1.(1+1)=1.1=1, 1.1+1.1=1+1=1, (1+1).1=1.1=1. Thus, if we take both multiplication and addition to be this one binary operator, we obtain a scalar domain {1}. I'll call this the trivial scalar domain. I shall almost exclusively be interested in non-trivial scalar domains !
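As a quick sanity check, reusing the `is_scalar_domain` sketch above, the trivial scalar domain passes with its one binary operator playing both roles:

```python
# One element; the single binary operator serves as both + and . :
one = "1"
op = lambda a, b: one
print(is_scalar_domain([one], op, op))  # True
```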

The rest of this page begins with a discussion of the algebraic and analytic consequences of this definition and then goes on to examine, in further detail, some of the more important types of scalar domain.

Particular scalar domains may, of course, possess further algebraic
properties. Multiplication may also be cancellable; either or both of the
binary operators may be complete or commutative (Abelian); addition may also
have an identity and either operator may have inverses. An additive identity, if present, is
called a zero and written 0. If a scalar, d, has an additive inverse, we call
this −d; if it has a multiplicative inverse, we call this
d^{−1} and we write its product with any scalar, c, as c/d.

As I explain in
discussing linearity, cancellability
of multiplication is incompatible with completeness of addition, even going so
far as to preclude any sum being equal to either of its summands (i.e. any
r=r+t). In particular, a scalar domain can have a zero or be multiplicatively
cancellable, but not both. By comparison, allowing the presence of a
multiplicative identity, 1, presents no problems for the additive structure (as,
indeed, additive cancellability was harmless without completeness). I'll
describe a scalar domain which admits of no solution to a=a+e
as **non-cyclic**: multiplicative completeness is only possible in
non-cyclic scalar domains.
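The conflict can be seen directly: distributivity makes 0.a = (0+0).a = 0.a + 0.a, so a zero forces a solution to r = r+t, which (as above) cancellable multiplication precludes. A throw-away check in the integers (my example, not the text's):

```python
# With a zero present, 0.a = 0.b never lets us cancel the 0:
a, b = 3, 4
print(0 * a == 0 * b)  # True, yet a != b: multiplication is not cancellable
# and 0.a = (0+0).a exhibits a solution to r = r + t:
print(0 * a == 0 * a + 0 * a)  # True
```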

So, what's implied for a non-cyclic scalar domain ? What does that definition give us ?

[Huge removal to within the new sub-directory.]

I'll say that a scalar domain is fieldish if each of its members either is an additive identity or has a multiplicative inverse. An additively complete fieldish scalar domain is called a field. [Its addition is complete and cancellable, so forms a group: its multiplication, when zero is elided, has identity and inverses, so forms a group: thus this definition coincides with the conventional one.]
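For instance, the integers mod 5 (the cyclic field 5 mentioned further down this page) are fieldish: a short enumeration (my illustration) finds each non-zero member's multiplicative inverse, and the addition is complete, so they form a field.

```python
p = 5
for a in range(1, p):  # every member other than the zero
    inv = next(b for b in range(1, p) if (a * b) % p == 1)
    print(f"{a} has inverse {inv} mod {p}")
# 1<->1, 2<->3, 3<->2, 4<->4: fieldish and additively complete, so a field.
```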

Since both addition and multiplication of scalars are associative binary operators, we can employ the standard bulk action construction to obtain products and sums of functions from finite ordered sets to any scalar domain: we can drop the ordering condition if our addition and multiplication are Abelian.
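In code, the bulk action is just a fold of the binary operator along a finite ordered sequence (a sketch; Python's `functools.reduce` stands in for the standard construction):

```python
from functools import reduce
xs = [2, 3, 5, 7]  # a function from a finite ordered set to our scalars
print(reduce(lambda a, b: a + b, xs))  # bulk sum: 17
print(reduce(lambda a, b: a * b, xs))  # bulk product: 210
# For Abelian operators, the answer is independent of the ordering:
print(reduce(lambda a, b: a * b, reversed(xs)))  # still 210
```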

In non-commutative scalar domains, we should really distinguish between
left- and right-zeros, -completeness and -inverses. However, we shall always
deal with additively commutative scalar domains and nearly always with
multiplicatively commutative domains. It is easy to show that, in a scalar
domain with both a left-zero and a right-zero, the two zeros are equal:
likewise, to show that the zero (if any) in a scalar domain is unique (simply
consider the sum of two zeros). If both left-inverses and right-inverses are
present (for either addition or multiplication) they must coincide (consider
a^{−l} . a . a^{−r}; bracketed one way it delivers
one inverse, the other the other; but associativity says the two bracketings
give equal answers).

For a scalar domain D, I'll describe a function (D|f:D) as a conjugation
precisely if: for any d in D, +_{0}(f(d)) = f o +_{1}(d);
likewise for · in place of +: and f(1)=1. The identity is always a
conjugation, in this sense: I call it trivial, if only for this reason.

Reminders: A scalar domain is non-cyclic ⇔ it contains no solution to a+e=a. A relation on a set S is a subset of S×S, and you may wish to remind yourself of the notation I introduce for relations, extending the meaning of my basic bracket notation. A relation, f, is a partial order precisely if it is transitive and contains no member of form (a,a).

For any non-cyclic scalar sub-domain, R, of a scalar domain, S (which may, of course, be R), I define two relations on S

- R-order(S)
- = {(r+s, s): r in R, s in S}. I'll write > as a short-hand for this, and t>s for "(t,s) is in R-order(S)". We then have, for s in S: (|>:{s}) = (R: +_{1}(s) |), denoting everything greater than s; and ({s}:>|), denoting everything than which s is greater. I'll write R-between(s,t) for the intersection of (|>:{s}) and ({t}:>|), which is empty unless t>s.
- order-R(S)
- = {(s, s+r): r in R, s in S}. I'll write < as a short-hand for this, and s<t for "(s,t) is in order-R(S)". Again, for s in S: ({s}:<|) = (R: +_{0}(s) |), denoting everything less than s, and (|<:{s}) denotes everything than which s is less. I'll write between-R(s,t) for the intersection of ({s}:<|) and (|<:{t}), which is empty unless s<t.

Because R is non-cyclic, and addition is associative and cancellable,
both of these are partial orders. If addition is commutative, they are
transpose to one another: otherwise, we have to distinguish between s<t and
t>s, thus between everything than which s is less and everything
greater than s. I'll develop the discussion in terms of < and leave you
to work out the analogue for >. These R-orderings are, of course, only of
any real interest when R is enough of S: e.g. when S is the reals, R
needs to be the positive reals.

Both of these partial orderings are respected by R-multiplication and S-addition: that is, for a<b: r in R implies r.a<r.b (as a+t=b, with t in R, gives r.a+r.t=r.b with r.t in R); and s in S implies s+a<s+b (as s+a+t=s+b). Consequently: for a<b and s<t in S, s+a<t+b; and for a<b and r<t in R, r.a<t.b.
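A numerical spot-check of these respect laws, taking both R and S to be the positive rationals, so that t>s just means t−s is positive (my choice of example):

```python
from fractions import Fraction as Q

def lt(a, b):  # a < b in order-R(S): b = a + r for some positive r
    return b - a > 0

a, b, s, t, r = Q(1), Q(2), Q(3), Q(5), Q(1, 2)  # a<b, s<t, r<t, r in R
assert lt(r * a, r * b)   # r in R implies r.a < r.b
assert lt(s + a, s + b)   # s in S implies s+a < s+b
assert lt(s + a, t + b)   # a<b and s<t give s+a < t+b
assert lt(r * a, t * b)   # a<b and r<t give r.a < t.b
print("respect laws hold for this sample")
```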

In a scalar domain S, define the locality of a in S to be locality(a) = {s
in S: no r in S has a+r=s or s+r=a} and describe any s therein as local
to a. Note that the relation is local to is symmetric: a local to
s implies s local to a. This is only of any real interest for a non-cyclic
scalar domain, of course. In a non-cyclic scalar domain, is local to is
also reflexive: every member of S is local to itself.

This definition lets us decompose any non-cyclic scalar domain, in terms of any element, s, into locality(s), ({s}:<|) and (|<:{s}).

So now let's look at what faces locality can wear. In the case of the
positive integers, rationals or reals, on which < and > are full
orderings, we have locality(s) ={s}. This is not, however, the only possibility
– nor, indeed, the only one in which locality is transitive (thus an
equivalence relation). An important detail is that my definition of a scalar
domain insisted that both + and × be *epic*. This means that 1=a+b
for some a and b, which are thus less than 1.

In our non-cyclic scalar domain, from any value, a, less than 1, we can take
progressive powers: each of which (as multiplication respects <) is less than
1 and those before it. Indeed, generally, for any s in S, a.s<s. Thus for
any s<t, take u for which s+u=t, to obtain a.u<u whence
s<s+a.u<s+u=t, giving us a value strictly between s and t. Thus each
value less than 1 allows us to find a value between any ordered pair of values,
whence (by applying this to the intervals between the end-points and this
mid-point and repeating ad nauseam) there are infinitely many values
between any ordered pair. Formally, for any
positive integer n and any s<t in a non-cyclic scalar domain S, there is some
monotonically increasing function (n|f:S) with each r in (f|) between s and t:
s<r<t.
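The construction iterates cleanly: with exact rational arithmetic (a sketch, taking a = 1/2; names mine), each step lands strictly between the previous value and t, yielding as long a monotonically increasing chain as we please:

```python
from fractions import Fraction as Q

def chain_between(s, t, a, n):
    """n values with s < f(0) < f(1) < ... < f(n-1) < t, given s < t, a < 1."""
    out = []
    for _ in range(n):
        s = s + a * (t - s)  # u = t - s; s < s + a.u < t, since a.u < u
        out.append(s)
    return out

print(chain_between(Q(0), Q(1), Q(1, 2), 5))
# [Fraction(1, 2), Fraction(3, 4), Fraction(7, 8), Fraction(15, 16), Fraction(31, 32)]
```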

I suspect an important property to ask for is < = <∘< – that is, whenever a<c, there is some b for which a<b<c.

Applying this to the interval between a and 1, with a+u=1: a.(1+u) = a+a.u <1, so we have some r, namely 1+u, for which a.r<1<r. This then implies a.a<a.a.r<a<a.r and so on; also, any s between r and 1 will do in place of r.

Given such an r, suppose we have some s, t in S local to one another. Then
we have s<s+a.r<s+1<s+r and no u for which s+u=t or
t+u=s. Consequently, we have no u for which s+r+u=t+r or t+r+u=s+r (I suspect I
just required + to be commutative), so s+r is local to t+r: likewise, s+a.r is
local to t+a.r and s+1 is local to t+1.

For t to be local to s+r, there would have to be no u for which t+u=s+r, so we'd
need (| +_{0}(t) :{s+r}) empty, along with (| +_{0}(s) :{t+r}).

For any r, s in S and t local to s, we have s < s+r and no u for which s+u=t
or t+u=s: consequently, no u for which (t+r)+u=s+r or (s+r)+u=t+r, so t+r is
local to s+r. Likewise, t+a.r is local to s+a.r, which is greater than s but
less than s+r (as a<1).
Consider the rationals and all rational multiples of the square root of some
prime: call this base irrational simply root. These form a non-cyclic
scalar domain, with cancellable multiplication. Let q be root.root−1,
which is a positive integer: consequently, we can divide by q or 1+q. In
particular, dividing root by 1+q, we obtain a multiplicative inverse for root,
as root.(root/(1+q)) = 1. From root.root = 1+q, we obtain root.(1+root)/q = (root
+ 1+q)/q = 1 + (1+root)/q, making 1.(1+root)/q less than root.(1+root)/q: we
also obtain root.(1+root)/q > 1.

Consider 1 and root: divide each by q and multiply by
1+root, to obtain (1+root)/q and (root+q+1)/q = 1+(1+root)/q, whence we have
1.(1+root)/q less than root.(1+root)/q (by 1). Before we can cancel the
(1+root)/q from these, we need to express 1 as a sum of multiples of (1+root)/q.
We know (1+root).(1+1/root) = 2+root+1/root, and the square of root + 1/root is
just 2 + 1+q + 1/(1+q), since root.root = 1+q; whence …

Is 1 less than root ? Is multiplication complete – in particular, does
1+root have a multiplicative inverse ? (If so, the positive integer
root.root−1 multiplied by this inverse serves as root−1: thus 1 is
less than root. Likewise, if 1 is less than root, we can divide root−1 by
root.root−1 to obtain a workable reciprocal for 1+root and, I'm willing to
believe, for anything else.)
These give us arbitrarily small values and imply that localities are
infinitesimal, in the sense that everything in the locality of some s must be
less than the sum of s with any of these arbitrarily small values. I suspect
this makes locality transitive (and, thus, an equivalence relation). In the
familiar non-cyclic scalar domains of positive rationals and reals, all localities
are singleton sets, which makes locality trivially transitive. Of course,
if localities are all singleton sets (whence, somewhat dully, locality is an
equivalence relation) we obtain a full ordering.

When is locality transitive ?

On a scalar domain, D, we can define addition and multiplication on functions (S|:D), for any set S, via (S|f:D).(S|g:D) = (S| x->f(x).g(x) :D) with, likewise, (f+g)(x) = f(x)+g(x). These naturally inherit distributivity, associativity, additive cancellability and a multiplicative identity, 1 = (S| x->1 :D). Additive or multiplicative completeness will be inherited if D has it. Consequently, any additively and multiplicatively closed subset of {(S|:D)} forms a scalar domain, so long as it contains 1. This scalar domain is incidentally a linear space over D – as witnessed by D's natural embedding (D| d-> (S| x->d :D) :) in {(S|:D)}.
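A direct transcription of the point-wise operations (a sketch; D here is modelled by exact rationals):

```python
from fractions import Fraction as Q

def f_add(f, g): return lambda x: f(x) + g(x)   # (f+g)(x) = f(x)+g(x)
def f_mul(f, g): return lambda x: f(x) * g(x)   # (f.g)(x) = f(x).g(x)
one = lambda x: Q(1)                            # 1 = (S| x->1 :D)

f = lambda x: x + 2
g = lambda x: 3 * x
h = lambda x: x * x
# distributivity is inherited point-wise: (f+g).h agrees with f.h + g.h
lhs = f_mul(f_add(f, g), h)
rhs = f_add(f_mul(f, h), f_mul(g, h))
print(lhs(Q(5)) == rhs(Q(5)))  # True: both give (7+15).25 = 550
```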

We can also define a function ({naturals}| power :{(D|:D)}), defined by:
power(0) = constant(1) = (D| x->1 :D); for any natural number n, power(1+n) =
(D| x-> x.power(n, x) :D), which is its product with power(1) (derived, by
this formula, from power(0)). We find (:power|) is a multiplicatively complete
subset of (D|:D), but not an additively complete one: however, its members are
linearly independent in {(D|:D)}, a linear space over D as above (with S=D). Their
finite span in {(D|:D)} is both additively and multiplicatively complete: the
functions in it, sums of scaled powers, are known as the **polynomials** on
D.
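A sketch of power and the resulting polynomials, representing a sum of scaled powers by its list of coefficients (names mine):

```python
def power(n):
    """power(0) = x->1; power(1+n) = x -> x.power(n, x)."""
    if n == 0:
        return lambda x: 1
    return lambda x: x * power(n - 1)(x)

def polynomial(coeffs):
    """Sum of scaled powers: coeffs[n] scales power(n)."""
    return lambda x: sum(c * power(n)(x) for n, c in enumerate(coeffs))

p = polynomial([1, 0, 3])   # 1 + 3.x.x
print(power(3)(2), p(2))    # 8 13
```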

A polynomial equation is a pair of polynomials, subject to the natural
equivalence relation induced by additive cancellability: namely, for any pair
(P,Q) of polynomials and any polynomial R, (P,Q)~(P+R,Q+R). The order of a
polynomial equation is, for any (P,Q) equivalent to that equation, the smallest
natural number n for which: for every natural number m>n, the coefficients of
power(m) that appear in P and Q, respectively, when expressed as sums of scaled
(by the given coefficients) powers, are equal. That is, if P has a term in
x^{m}, then so has Q and their coefficients are equal (i.e., when all
such terms on each side are summed, they cancel). If either P or Q has no terms
of form scalar.power(m), then the other, likewise, has none and the sums are
both empty (so we can deem them equal, even if they are undefined). The
equivalence induced by cancellability respects this definition of the order of
an equation.
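With that coefficient-list representation, the order of an equation (P,Q) is easy to compute, and is visibly unchanged by adding the same R to both sides – a sketch continuing the code above:

```python
from itertools import zip_longest

def eqn_order(P, Q):
    """Smallest n with all coefficients of power(m), m>n, equal in P and Q.
    Returns 0 when no coefficients differ (the degenerate a = a)."""
    diffs = [n for n, (p, q) in enumerate(zip_longest(P, Q, fillvalue=0))
             if p != q]
    return max(diffs) if diffs else 0

print(eqn_order([2, 0, 1], [0, 2]))   # x.x + 2 = 2.x has order 2
R = [5, 7, 0, 4]
PR = [p + r for p, r in zip_longest([2, 0, 1], R, fillvalue=0)]
QR = [q + r for q, r in zip_longest([0, 2], R, fillvalue=0)]
print(eqn_order(PR, QR))              # still 2: (P,Q) ~ (P+R,Q+R)
```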

Single-variable polynomial equations in D have additive completeness even if
D does not, in the sense that (so long as addition is commutative) adding
P(x)=Q(x) to Q(x)=P(x) gives a sum which can necessarily be cancelled to a=a for
any scalar a: this polynomial equation serves as an additive identity, even if D
lacks one – and we have just seen that the additive inverse of (P,Q) is
(Q,P). The additive identity of polynomial equations is described
as *degenerate*.

For a polynomial equation, (P,Q), we define roots(P,Q)= {x in D:
P(x)=Q(x)}. Because D is additively cancellable, some equations (such as x+a=x
with a not an additive identity in D) have no solutions: I shall describe these,
also, as *degenerate*. While these have no solutions, the other
degenerate polynomial equation, a=a, has more than I care to count. If D isn't
additively complete, equations of form x+a=b aren't guaranteed solutions:
likewise, without multiplicative completeness, a.x=b needn't have
solutions. However, regardless of these completions, some equations aren't
guaranteed any roots: for example, x.x+2=1. We can ask for all non-degenerate
equations to have roots: and the interesting upshots of this arise when the
number of roots is (almost always) the order of the equation.

A topology on D induces one on any {(I|:D)} (via each (I|U:closed(D)) implies a subset, {(I|f:D): for each i in I, f(i) is in U(i)}, of {(I|:D)}, which is taken to be closed in {(I|:D)}; and any intersection of closed sets is closed, as is any finite union of closed sets). My chosen escape route from the detail of repeated roots is to say that almost any small perturbation of a polynomial equation will yield one with as many roots as its order. The small perturbations that matter are the ones which preserve order. The topology on {polynomials (D|:D)} thus induced from that on {(D|:D)} implies one on the collection of polynomial equations on D (which is worth thinking about: at the very least, notice that any polynomial equation of given order is a limit (member of the boundary of a collection) of polynomials of arbitrarily higher order: but not a limit of any collection of polynomials of lower order – the connectivity of the polynomials over a scalar domain is quite fascinating!).

Consider polynomial equations for which the number of elements of roots() is the order of the equation. Some closed sets of polynomial equations contain all these polynomial equations. Take the intersection of these. An arbitrary intersection of closed sets is closed. So this intersection is closed: and it also contains all the polynomial equations whose order is their number of roots.

The following definition of algebraic completeness simply says that the closure of the set of polynomial equations whose order is their number of roots subsumes the set of non-degenerate polynomial equations.

I'll describe a scalar domain, D, as

- **algebraically complete**
- precisely if, for every single-variable polynomial equation in D, P(x) = Q(x): there is some neighbourhood of the pair (P,Q) [in the topology on pairs of single-variable polynomials in D – i.e. single-variable equations] within which the set {equations whose order is no more than the number of solutions it has} is dense. That is, if a polynomial equation has fewer roots than its order, the difference is made up for by multiple roots, and almost all small perturbations of the polynomials will remove these degeneracies.
- **minimally so**
- precisely if it has no proper scalar sub-domain which is also algebraically complete.

I need to prove that the intersection of two algebraically complete scalar domains is algebraically complete, at least when given that they have some common scalar sub-domain. The algebraic completion of a scalar domain can then be defined as the intersection of all algebraically complete scalar domains of which the given scalar domain is a scalar sub-domain.

One thing falls rather nicely out of this definition of algebraic
completeness: it implicitly states that the topology is not discrete. A
topology is described as discrete if every subset of the whole is open: this
includes all sets containing exactly one member. For any a in an algebraically
complete scalar domain, consider the polynomial equation
a^{2}+x^{2}=2ax. If this is guaranteed to have precisely one
root (x=a), the set whose one member is this polynomial equation cannot contain
an open neighbourhood of the given equation: this, in turn, implies that the
topology on our scalar domain cannot be discrete.

If a scalar domain's locality is an equivalence relation, one can count the
number of localities which contain roots to a polynomial equation, instead of
counting the roots, and thereby sustain a definition of algebraic completeness
which is still meaningful in the presence of infinitesimal localities –
such as arise when, for some a, there is more than one solution to
x^{2}+a^{2} = 2ax. This naturally reads as saying that the
square of x−a is zero – which, in a non-cyclic scalar domain, would
tell us that there's no r in S for which x+r=a or a+r=x – i.e. there's no
x−a in S.

A scalar domain contains (deliberately) just enough to
enable us to
define **differentiation** on it
(though not enough to guarantee that it *supports* differentiation
– only enough that it can tell whether it does). The familiar definitions
for a field assume additive and multiplicative inverses in various ways; it
suffices to unwrap these a little to obtain definitions for a scalar
domain. Likewise, a scalar domain contains enough to enable a definition of
measure, on an arbitrary space, taking values in the scalar domain; and just
enough to extend this to include a definition of integration of scalar and
vector functions over the space.

The integers and the natural numbers (or non-negative integers) are scalar domains, with discrete topology and a zero (hence their multiplications are not cancellable).

Notice that the positive integers (or positive natural numbers) form a self-linear action but, since + is not epic (there is no pair whose sum is 1), they do not form a scalar domain: however, their multiplication is cancellable and epic. [Proof of epic: {1} is a subset of PN, the positive integers, whence {1}×PN is a subset of PN×PN so ({1}×PN|.|), which =PN, is a subset of (.|).]

Because every scalar domain has 1, we can induce a function from the
positive integers to any scalar domain via (calling the multiplicative identity
of the scalar domain unit, to avoid confusion with 1, used as name for
that of the positive integers): f = (positive integers| 1->unit, 1+n ->
unit+f(n) :), which is so natural that we will often refer to each f(n) simply
as n. In particular, it always allows us to refer to unit as 1 without
confusion. This embedding trivially preserves the linear structure of the
positive integers, in the sense f(n)+f(m) = f(n+m), f(n).f(m) = f(n.m), though it
need not be monic – there may be positive integers n, m for which
f(n) = f(n+m), in which case f(m) is an additive identity.

Combining this natural embedding, f, of the positive integers in an arbitrary scalar domain, with the multiplication on that scalar domain gives us a linear action of the positive integers on the scalar domain. This is the usual inductive 1.v=v, (1+n).v= v+n.v, as follows from the above, which leads us to treat the positive integers as though they were members of every scalar domain (so we are happy enough to talk about 7×3 even when working in the cyclic field 5, which doesn't have 7 as a member – f(7) is, in fact, 2, and what's strictly meant is f(7)×3, which is 2×3 = f(6) = 1).
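A sketch of this induced function f, targeting the cyclic field 5 (so unit is 1 mod 5; names mine), reproducing the 7×3 example:

```python
def embed(n, unit, add):
    """f(1) = unit; f(1+n) = unit + f(n)."""
    r = unit
    for _ in range(n - 1):
        r = add(r, unit)
    return r

add5 = lambda a, b: (a + b) % 5
mul5 = lambda a, b: (a * b) % 5
print(embed(7, 1, add5))           # 2: f(7) = 2, so f is not monic here
print(mul5(embed(7, 1, add5), 3))  # 7 x 3 means f(7).3 = 2.3 = 1 = f(6)
```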

Note that, for any scalar domain S whose addition is Abelian, we can use the
standard difference construction on
binary operators to obtain an additive domain with zero and inverses. This
enables us to construct, as follows, the **additive completion** of
S: a scalar domain with additive identity and inverses, in which S may be
embedded so as to preserve all its scalar structure.

We can combine the difference construction's addition, (S×S|+:S), on S with a multiplication on S×S: (h,k).(m,n) = (h.m+k.n, k.m+h.n). It is not hard to show that this distributes over the addition; with a little more effort, one may show that it respects the equivalence relation used in the difference construction – that is, if (h,k)~(m,n) then, for any (a,b): (a,b).(h,k) ~ (a,b).(m,n). Proof: (a,b).(h,k) = (a.h+b.k, a.k+b.h) and (a,b).(m,n) = (a.m+b.n, a.n+b.m): now, (h,k)~(m,n) means h+n = m+k: whence a.h+b.k + a.n+b.m = a.(n+h) + b.(k+m) = a.(k+m) + b.(n+h) = a.m+b.n + a.k+b.h; which is just the statement (a.h+b.k, a.k+b.h)~(a.m+b.n, a.n+b.m), as required. Consequently, this induces a multiplication on the equivalence classes and the resulting collection of equivalence classes forms a scalar domain: the difference construction's embedding of S in the collection of equivalence classes not only preserves the additive structure, but actually preserves all of the scalar domain structure.
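A sketch of the construction on the positive rationals, with pairs (h,k) standing for h−k (function names mine), checking that the multiplication respects the equivalence:

```python
from fractions import Fraction as Q

def equiv(x, y):  # (h,k) ~ (m,n) iff h+n = m+k
    return x[0] + y[1] == y[0] + x[1]

def add(x, y):    # componentwise, as in the difference construction
    return (x[0] + y[0], x[1] + y[1])

def mul(x, y):    # (h,k).(m,n) = (h.m+k.n, k.m+h.n)
    return (x[0]*y[0] + x[1]*y[1], x[1]*y[0] + x[0]*y[1])

h, m = (Q(7), Q(3)), (Q(9), Q(5))   # equivalent: both stand for 4
a = (Q(2), Q(6))                    # stands for -4
print(equiv(h, m))                  # True
print(equiv(mul(a, h), mul(a, m)))  # True: ~ is respected
print(mul(a, h))                    # (32, 48), standing for -16 = (-4).4
```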

Thus, if the addition in a scalar domain is Abelian, it is for all practical purposes invertible – in the sense that the above construction turns it into such a one. Consequently, where additive invertibility causes problems, avoiding it (if possible) presents no serious problems to the development of the theory.

As shown above, the presence of a zero precludes multiplicative
cancellability. This leads us naturally to consider a **positive scalar
domain**, which is a scalar domain whose multiplication is cancellable:
it thus has no zero. We can, on this, define an ordering via: for a, b in D:
a<b ⇔ b∈(D|d->a+d|).

That this is an ordering, in the usual sense, follows from
the *absence* of additive inverses (despite additive cancellability),
which in turn follows from that of the zero. It has some excellent advantages
(especially when the multiplication is also complete, so that we have a
multiplicative group). Its lack of a zero presents no real problem: we can
always additively complete it. Alternatively, the positive scalar domain's
natural embedding in its algebraic completion will do
this job for us. I shall use a positive scalar domain as what amounts to the
(strictly) positive real line.

The usual embedding of the positive integers in any positive scalar domain necessarily preserves order and is, as a result, monic: 1 < 1+1 = 2 < 1+2 = 3 < … are all distinct. Thus any positive scalar domain subsumes a copy of the positive integers: when its multiplication is also complete, we can find a solution x to any equation of form p.x = q with p and q positive integers – so any such positive scalar domain also subsumes (a copy of) the positive rationals. We describe a scalar domain as rational precisely if it has a positive scalar sub-domain. Thus the usual real and complex numbers have the positive reals as a positive scalar sub-domain. The positive rationals are, themselves, a positive scalar domain – the minimal non-trivial one, at that.

The other highly interesting scalar domain is the algebraic completion of our positive scalar domain – that is, the minimal algebraically complete scalar domain in which our positive scalar domain may be embedded consistently with the scalar structure on both scalar domains. This fills the rôle of the complex numbers.

An algebraically complete scalar domain necessarily has a unit, a zero and
additive inverses (because of the implied roots of polynomial equations of order
1). It cannot, as a result, be a positive scalar domain. However, if it is the
algebraic completion of a positive scalar domain, this implies that it subsumes
an **ordered field** whose positive half is our positive scalar
domain. We shall refer to this ordered field as the **real line**
in the algebraic completion.

[I haven't yet worked out how to persuade myself that] On the algebraic completion, C, of a positive scalar domain, P, there is a self-inverse function (C|*:C), called conjugation, for which:

- the product of any non-zero scalar value and its conjugate is positive – for a in C\{0}: a.a^{*} ∈ P.
- the conjugate of a product is the product of the conjugates taken in reverse order – for a, b in C: (a.b)^{*} = b^{*}.a^{*}.
- the conjugate of a sum is the sum of the conjugates taken in reverse order.
- the real line, and only the real line, is preserved by conjugation.

It follows from the last two that the sum of any value with its
conjugate is real (since conjugation of this sum preserves it, term by
term). Note that the taken in reverse order clauses are redundant (but
harmless) in the Abelian case: they are there so that I can apply the same
reasoning to (for instance) the Quaternions. Anyway, why assume commutativity
when I don't need to ?

The real part of a value, c, is the unique real value, r, for which the square of (c−r) is a non-positive real value. The imaginary part of c is then (c−r) and the conjugate is r−(c−r) or 2r−c. However, how to prove that this definition of the real part, hence of conjugation, works in general is not clear. Yet.

I glibly suppose that C is a 2-dimensional vector space over P supporting a
(scalar domain) faithful embedding of P in C (as the positive real half-line)
and having members −1 and i of C not in P's image under this embedding
(which I shall hereafter treat as synonymous with P) for which −1 + 1 = 0
and i^{2} = −1.

I haven't finished writing this page yet (and I'll probably shred it instead of trying to do so).
