When we come to analyse linear maps and metrics on vector spaces, polynomial equations provide powerful analytical tools, whose solutions depend on solutions to scalar polynomial equations. Algebraic completeness then becomes a valuable property for the scalar domain. As ever, it is desirable that the domain contain at least the positive rationals, which in turn implies at least the algebraic numbers - and these are dense in the familiar complex plane. On this basis, I chose to restrict discussion to algebraically complete scalar domains which have a positive scalar sub-domain. It is beyond my immediate wit to prove that this implies the algebraically complete domain is a two-dimensional vector space over the positive one, but it suffices to consider cases where we can define a conjugation with the familiar properties that this implies.

In any algebraically complete scalar domain C having a positive scalar sub-domain, P (with what fullness property relative to C ? does it simply suffice that P subsumes the rationals ?), we have 1 and 2 in P so can write the polynomial equation 2+x.x=1: algebraic completeness of C implies some solution, i, to this equation. This plainly cannot be in the positive scalar domain, which lacks a value for i.i+1 (the equation forces i.i+1 to be zero, and zero is not positive). Algebraic completeness of C implies additive completeness, whence C subsumes an additive completion of P, which I'll call R: this is definitely a one-dimensional P-vector space. Multiplication by i in C, with i.i.i.i = 1, then defines a period-four symmetry of C, whose square, negation, is a period-two symmetry of R. The scalar domain generated by P and i is then a two-dimensional P-vector space. This is true for any solution to 2+x.x=1: we know we have i and i.i.i as solutions; could there be more ?
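The arithmetic above can be checked concretely by taking Python's built-in complex numbers as a model of C (only a model: the discussion does not assume C is the familiar complex plane, merely that the plane satisfies its axioms):

```python
# Python's complex numbers as a model of the algebraically
# complete domain C; 1j plays the role of i.
i = 1j

# i solves the polynomial equation 2 + x.x = 1:
assert 2 + i * i == 1

# i.i.i is the other solution mentioned in the text:
i3 = i * i * i
assert 2 + i3 * i3 == 1

# Multiplication by i is a period-four symmetry of C ...
z = 3 + 4j
assert i * (i * (i * (i * z))) == z

# ... whose square, negation, is a period-two symmetry of R:
assert i * (i * 2.5) == -2.5
```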

Let Complex be an algebraically complete scalar domain containing a positive
scalar sub-domain, Positive, for which we have a function, called conjugation
and denoted (Complex| c-> c^{*} :Complex), satisfying: for any a, b
in Complex,

- (a+b)^{*} = (b^{*})+(a^{*})
- (a.b)^{*} = (b^{*}).(a^{*})
- (a^{*}).a is in Positive
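Ordinary complex conjugation satisfies all three properties, with the non-negative reals standing in for Positive; a minimal sketch, again using Python's complex type as a model:

```python
# Check the three conjugation axioms, with z.conjugate() as the
# map c -> c^{*} and the non-negative reals as Positive.
def conj(c):
    return c.conjugate()

# Dyadic sample values, so every operation below is exact in floating point.
samples = [1 + 2j, -3 + 0.5j, 0.25 - 4j]
for a in samples:
    for b in samples:
        # (a+b)^{*} = (b^{*})+(a^{*})
        assert conj(a + b) == conj(b) + conj(a)
        # (a.b)^{*} = (b^{*}).(a^{*})
        assert conj(a * b) == conj(b) * conj(a)
        # (a^{*}).a lands in Positive: real and, for a non-zero, positive
        p = conj(a) * a
        assert p.imag == 0 and p.real > 0
```

Note that the reversed order of operands in the first two properties costs nothing here, since complex multiplication is commutative; the reversal matters when the same axioms are applied to composition of antilinear maps, as below.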

The square of an antilinear auto is a linear auto. This means that the
square roots one can consider for a linear auto split into two kinds: the
linear ones and the antilinear ones. Multiplying an antilinear one by a phase
yields the same square (the square of s.a, with s scalar and a antilinear, is
s.s^{*}.aoa, with aoa being the square of a; for a phase, s.s^{*} = 1). For
comparison, scaling a linear square root of 1 by i yields a linear square root
of -1. This is much of why antilinear maps are (perhaps surprisingly) better
at respecting the structure induced by a positive scalar sub-domain of their
scalar domain than are their linear cousins.
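The contrast can be illustrated with the two archetypal square roots of the identity on C: conjugation (antilinear) and the identity map itself (linear). This is a sketch in Python, modelling maps on C as functions; the names `scale` and `square` are mine, introduced only for the illustration:

```python
import cmath

# Conjugation is the archetypal antilinear auto on C;
# the identity is a linear square root of 1.
conj = lambda v: v.conjugate()   # antilinear: conj(s.v) = s^{*}.conj(v)
ident = lambda v: v              # linear

def scale(s, f):
    """The map v -> s.f(v)."""
    return lambda v: s * f(v)

def square(f):
    """The composite f o f."""
    return lambda v: f(f(v))

s = cmath.exp(0.7j)              # an arbitrary phase, so s.s^{*} = 1
v = 3 - 2j

# Scaling the antilinear map by a phase leaves its square unchanged,
# since the square of s.conj is s.s^{*}.(conj o conj):
assert cmath.isclose(square(scale(s, conj))(v), square(conj)(v))

# whereas scaling the linear square root of 1 by i turns it into a
# square root of -1:
assert square(scale(1j, ident))(v) == -v
```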