Linear algebra emerges when a context deals with values that can be added and (in one sense or another) scaled, yielding values of the same kind (thus likewise amenable to addition and scaling).

A minimal sense of scaling can be obtained from addition alone; repeated
addition always implies a representation of the positive naturals among the
scalings. This may be augmented in various ways (e.g. real or complex
scaling); the values by which context provides for one to scale are known
as **scalars**; a scaling is construed as multiplication by a
scalar and scalars must themselves be amenable to addition and scaling. The
identity on the scalable values is always a scaling, construed as
multiplication by a scalar that naturally corresponds with the natural
1. This scalar is thus known as 1; it serves as a multiplicative identity
among scalings. If there is an additive identity among the scalable values it
is called zero and the mapping which maps each scalable value to it fits in,
with the scalings that correspond to positive naturals, as a natural
representation of the natural 0, hence the scalar associated with it is called
0.
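The way repeated addition represents the positive naturals among the
scalings can be sketched in Python; the model here (pairs of integers under
componentwise addition) is an illustrative assumption, not part of the text:

```python
# Sketch: repeated addition yields a scaling by each positive natural,
# for any addition on scalable values.
def natural_scale(n, x, add):
    """n.x = x + x + ... + x (n copies), for positive natural n."""
    total = x
    for _ in range(n - 1):
        total = add(total, x)
    return total

# Pairs of integers under componentwise addition, as sample scalable values:
pair_add = lambda a, b: (a[0] + b[0], a[1] + b[1])
natural_scale(3, (1, 2), pair_add)  # → (3, 6)
```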

Given a specific scalable value a, the mapping on scalable values that
just adds a to its input, (: b+a ←b :), is described
as **translation** by a (or through a); and the mapping on
scalars that maps each to its result of scaling a, (: x.a ←x :), is known
as a's **ray**. The crucial properties of the addition and
scaling are:

- Addition and multiplication must be **associative** and
  **Abelian**. Abelian (a.k.a. commutative) just means you can swap the order
  of operands: a+b = b+a whenever either is meaningful, and a.b = b.a
  likewise. Associativity means that (a.b).c = a.(b.c) whenever both are
  meaningful, likewise (a+b)+c = a+(b+c); consequently, we can meaningfully
  talk about a.b.c or a+b+c without needing parentheses to indicate the order
  in which the additions or multiplications are to be performed. Taken
  together, these imply that we can define sums (of several scalable values
  or of several scalars) and products (of several scalars and at most one
  scalable value) without reference to the order in which we add or multiply
  the values.
- Addition and non-zero scaling must be **cancellable** – i.e. each
  translation is monic, as is each non-zero scaling and the ray of each
  non-zero scalable value. Whenever both a.e and c.e are meaningful,
  a.e = c.e must imply either a = c or e is an additive identity. Likewise,
  whenever a+e and c+e are meaningful, a+e = c+e must imply a = c.
  Cancellability makes it possible to augment the scalings with ratios, and
  the values and scalings with differences, so as to make the scalable
  values into an additive group, the scalings also into an additive group
  and the non-zero scalings into a multiplicative group. More orthodox
  treatments take at least some of these augmentations for granted and, when
  they take all, refer to the scalable values as vectors.
- Scaling must **distribute over** addition. With a, b as scalars and x, y
  as scalable values, we require (a+b).x = (a.x)+(b.x) and
  a.(x+y) = (a.x)+(a.y). In consequence of this, taken together with the
  dropping of parentheses made possible by associativity, it is usual to
  omit the parentheses around products, writing a.(x+y) = a.x + a.y and
  similar; when several values and scalings are combined with no interposed
  parentheses, all multiplication is done first, then the addition;
  alternatively, this may be expressed as multiplication binding more
  tightly than addition.
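These axioms can be checked concretely in a minimal Python sketch; the model
(pairs of rationals scaled componentwise, with rational scalars) is an
assumption chosen for illustration:

```python
from fractions import Fraction

# Assumed miniature model: scalable values are pairs of rationals,
# scalars are rationals, scaling acts componentwise.
def add(x, y):                 # addition of scalable values
    return (x[0] + y[0], x[1] + y[1])

def scale(a, x):               # multiplication by the scalar a
    return (a * x[0], a * x[1])

def translation(a):            # (: b+a ←b :), translation by a
    return lambda b: add(b, a)

def ray(x):                    # (: s.x ←s :), x's ray
    return lambda s: scale(s, x)

a, b = Fraction(2), Fraction(3)
x, y = (Fraction(1), Fraction(4)), (Fraction(5), Fraction(6))

assert add(x, y) == add(y, x)                                # Abelian
assert add(add(x, y), x) == add(x, add(y, x))                # associative
assert scale(a + b, x) == add(scale(a, x), scale(b, x))      # (a+b).x = a.x + b.x
assert scale(a, add(x, y)) == add(scale(a, x), scale(a, y))  # a.(x+y) = a.x + a.y
assert translation(x)(y) == add(y, x) and ray(x)(a) == scale(a, x)
```

In this model cancellability holds automatically, since rational arithmetic
admits subtraction and division by non-zero values.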

The actual scalings in use may be restricted to whole numbers or they may form a continuum such as the real numbers; there may (as arises among the complex numbers) be some scalars whose squares are additive inverses for 1, or there may be none such (as for reals and naturals). For the sake of generality, I shall thus refer to the collection of scalars in use as {scalars}.

Context may deal with more than one collection of scalable values; within
each such collection, all of the above holds; if the sets of scalings in use
by some such collections are isomorphic, we can represent the scalings as
multiplication by a common set of scalars; this gives structure to the
relationships between the disparate collections of scalable values. Given one
collection, U, whose members may be scaled by {scalars} and added to one
another, and a second collection V, whose members may likewise be scaled and
added to one another, I describe a relation (V:r:U) as **linear**
– formally: {scalars}-linear, or linear over {scalars} – precisely if:

- for every scalar c, whenever r relates u to v, it also relates c.u to c.v; and
- whenever r relates u to v and w to y, it also relates u+w to v+y.

When r is a mapping, these can be summarised by r(u+a.w) = r(u)
+a.r(w) for every u, w in U and scalar a; a linear mapping is also known as a
linear map. When a collection (construed as the identity mapping on its
members) is linear, these conditions simply say that adding members or scaling
a member always yields a member; I describe such a collection as a linear
space; when, furthermore, it is an additive group (as may be achieved by the
augmentation mentioned above) it is known as a vector space and its
members as vectors. The collections of left and right values of a linear
relation are necessarily always linear spaces; and {scalars} itself is, by
specification, always a {scalars}-linear space. The collection of linear maps
from a linear space U to {scalars} is also necessarily a linear space; it is
known as the **dual** of U, dual(U) = {linear maps ({scalars}:
|U)}.
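A small sketch may make the linearity condition and the dual concrete; the
model (U as pairs of rationals, {scalars} the rationals, and the particular
map r) is assumed for illustration:

```python
from fractions import Fraction as Q

# Assumed model: U is pairs of rationals, {scalars} the rationals.
def add(x, y): return (x[0] + y[0], x[1] + y[1])
def scale(a, x): return (a * x[0], a * x[1])

def r(u):
    # A sample member of dual(U): a linear map from U to {scalars}.
    return 2 * u[0] - u[1]

# The summarised linearity condition: r(u + a.w) = r(u) + a.r(w).
u, w, a = (Q(1), Q(2)), (Q(3), Q(-1)), Q(5)
assert r(add(u, scale(a, w))) == r(u) + a * r(w)
```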

Given a relation r whose left values lie in some linear space, V, and
right values lie in some linear space U, the **span** of r is the
linear relation obtained from it by imposing linearity; formally

- span = (: (: sum(s.g) ← sum(s.f) ; n is natural, ({scalars}:s|n), (V:g|n) and (U:f|n) are lists and r&on;f subsumes g :) ←r :)

i.e. whenever f and g are equal-length lists of r's right and left
values, respectively, and r relates each g(i) to the matching f(i), we can
select an arbitrary list, of the same length, of scalars, apply each scaling
to the matching entries in f and g, sum the results and span(r) will relate the
scaled-g sum to the scaled-f sum. (Note that this only involves finite sums;
this leads to some complications if there are any infinite linearly
independent sequences.) This can be characterised as the linear completion
of r. When r is simply a collection, its span is just the set of values one
can obtain by scaling and summing r's members.
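For the collection case, the span can be sketched as the finite scaled sums
of the collection's members; the particular collection and scalar list below
are assumptions for illustration:

```python
from fractions import Fraction as Q

# Assumed example: a collection r of two members of a rational linear space.
f = [(Q(1), Q(0), Q(1)), (Q(0), Q(1), Q(1))]   # r's members
s = [Q(2), Q(-3)]                              # an arbitrary scalar list

def scaled_sum(s, f):
    # sum(s.f): scale each entry of f by the matching scalar, then add.
    return tuple(sum(si * fi[k] for si, fi in zip(s, f)) for k in range(3))

scaled_sum(s, f)   # (2, -3, -1), one member of span(f)
```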

Given a mapping, we can take the span of its collection of outputs
(i.e. left values); if no proper sub-relation of the mapping has, as the span
of *its* collection of outputs, the same linear space then the mapping
is described as **linearly independent**; this necessarily
implies that the mapping is also monic. A linearly independent mapping is
known as a **basis** of the span of its collection of
outputs. When a mapping (:b|n) is linearly independent, a mapping
(dual(span({b(i): i in n})): p |n) is described as a **dual basis** of b
precisely if: for each i in n, p(i,b(i)) = 1 and, for each *other* j in n,
j ≠ i, p(i,b(j)) = 0.
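The dual-basis condition can be checked in a small sketch; the basis b and
its dual p below (with n = 2, over the rationals) are assumed examples, not
taken from the text:

```python
from fractions import Fraction as Q

# Assumed example: a linearly independent mapping (:b|2).
b = [(Q(2), Q(0)), (Q(1), Q(1))]

# A dual basis p of b: each p(i) is a linear map to scalars with
# p(i, b(j)) = 1 when i = j and 0 otherwise.
p = [lambda v: (v[0] - v[1]) / 2,   # p(0)
     lambda v: v[1]]                # p(1)

for i in range(2):
    for j in range(2):
        assert p[i](b[j]) == (1 if i == j else 0)
```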