Tensor product

Note that the tensor, as written, takes two dual vectors; this point will be dealt with later. In finite dimensions there is no strong distinction between a space and its dual, but the distinction matters in infinite dimensions, and getting the regular-versus-dual part right is essential if the notion of tensor developed here is to correspond correctly to the other senses in which tensors are viewed, such as in terms of transformations, which is common in physics.

The tensors constructed this way themselves form a vector space when we add and scale them in the natural componentwise fashion, and in fact every multilinear functional of the type given can be written as a sum of outer products, which we may call pure tensors or simple tensors. This is sufficient to define the tensor product when we can write vectors and transformations in terms of matrices; a fully general operation, however, requires a more abstract approach. In particular, we would like to isolate the "essential features" of the tensor product without having to specify a particular basis for its construction, and that is what we will do in the following sections.
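
Concretely (a minimal NumPy sketch of our own; the matrix T is an arbitrary example), the singular value decomposition exhibits any matrix, viewed as a two-index tensor, as a sum of pure tensors:

    import numpy as np

    # Any matrix decomposes into a sum of outer products ("pure tensors"),
    # here obtained from its singular value decomposition.
    T = np.array([[1.0, 2.0], [3.0, 4.0]])
    U, s, Vt = np.linalg.svd(T)

    # Rebuild T as a sum of rank-1 outer products s[k] * u_k ⊗ v_k.
    rebuilt = sum(s[k] * np.outer(U[:, k], Vt[k, :]) for k in range(len(s)))
    assert np.allclose(T, rebuilt)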

To achieve that aim, the most natural way to proceed is to isolate an essential characterizing property, one that singles out, among all the vector spaces we could build from V and W, the one which (up to isomorphism) is their tensor product, and which applies without any arbitrary choices such as a choice of basis. The way to do this is to flip the tensor concept "inside out": instead of viewing tensors as objects that act upon vectors in the manner of a bilinear map, we will view them as objects to be acted upon to produce a bilinear map. The trick is in recognizing that the Kronecker product "preserves all the information" regarding which vectors went into it: the ratios of vector components can be read off from its entries, so the original factors can be recovered up to scaling.
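
To see this in coordinates (a NumPy sketch of our own; the vectors v and w are arbitrary choices), the factors of a Kronecker product of two vectors can be recovered up to scale:

    import numpy as np

    # The Kronecker product retains the factors up to scaling: one block
    # recovers w up to a scalar, and ratios of blocks recover v's components.
    v = np.array([2.0, 3.0])
    w = np.array([5.0, 7.0, 11.0])
    k = np.kron(v, w)            # [v0*w, v1*w] laid out block by block

    block0, block1 = k[:3], k[3:]
    assert np.allclose(block0 / block0[0], w / w[0])   # w recovered up to scale
    assert np.allclose(block1 / block0, v[1] / v[0])   # ratio of v-components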

The above definition works for any vector space in which we can specify a basis, since we can simply rebuild it as the free vector space over that basis: the construction above mirrors, by design, how vectors are represented via the Hamel basis construction. In effect, we haven't gained anything ... until we do this.

This is useful to us because the outer product satisfies the following linearity properties, which can be proven by simple algebra on the corresponding matrix expressions:

(v + v′) ⊗ w = v ⊗ w + v′ ⊗ w
v ⊗ (w + w′) = v ⊗ w + v ⊗ w′
(cv) ⊗ w = v ⊗ (cw) = c(v ⊗ w)
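
These rules can also be checked numerically; the following NumPy sketch (our own illustration, with arbitrary random vectors) verifies each identity:

    import numpy as np

    # Numerical check of the bilinearity rules for the outer product.
    rng = np.random.default_rng(0)
    v, v2 = rng.standard_normal(3), rng.standard_normal(3)
    w, w2 = rng.standard_normal(4), rng.standard_normal(4)
    c = 2.5

    assert np.allclose(np.outer(v + v2, w), np.outer(v, w) + np.outer(v2, w))
    assert np.allclose(np.outer(v, w + w2), np.outer(v, w) + np.outer(v, w2))
    assert np.allclose(np.outer(c * v, w), np.outer(v, c * w))
    assert np.allclose(np.outer(c * v, w), c * np.outer(v, w))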

Equality between two concrete tensors is then obtained if using the above rules permits us to rearrange one sum of outer products into the other by suitably decomposing vectors, regardless of whether we have a set of actual basis vectors. Applying that to the example above, we see that the two expressions there do indeed describe the same tensor.

In this way, the tensor product becomes a bifunctor from the category of vector spaces to itself, covariant in both arguments.[3]
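
In coordinates, functoriality is the mixed-product property of the Kronecker product; a brief NumPy sketch (our illustration, with arbitrary random maps) checks that the tensor product of maps acts factorwise on pure tensors:

    import numpy as np

    # (A ⊗ B)(v ⊗ w) = Av ⊗ Bw: the tensor product of two linear maps
    # acts on pure tensors factor by factor.
    rng = np.random.default_rng(1)
    A, B = rng.standard_normal((2, 3)), rng.standard_normal((4, 5))
    v, w = rng.standard_normal(3), rng.standard_normal(5)

    assert np.allclose(np.kron(A, B) @ np.kron(v, w), np.kron(A @ v, B @ w))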

A dyadic product is the special case of the tensor product between two vectors of the same dimension.

This characterization can simplify proofs about the tensor product. For example, the tensor product is symmetric, meaning there is a canonical isomorphism

V ⊗ W ≅ W ⊗ V

that maps v ⊗ w to w ⊗ v.

Similar reasoning can be used to show that the tensor product is associative, that is, there are natural isomorphisms

(U ⊗ V) ⊗ W ≅ U ⊗ (V ⊗ W)

identifying (u ⊗ v) ⊗ w with u ⊗ (v ⊗ w).
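
Both isomorphisms are visible in coordinates; in the NumPy sketch below (our own, with arbitrary vectors), the symmetry map is transposition of component arrays and associativity is a regrouping of axes:

    import numpy as np

    # Symmetry: v ⊗ w ↦ w ⊗ v is transposition of the component array.
    # Associativity: both groupings yield the same 3-d array of components.
    rng = np.random.default_rng(2)
    u, v, w = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)

    vw = np.multiply.outer(v, w)                          # element of V ⊗ W
    assert np.allclose(vw.T, np.multiply.outer(w, v))     # swap the factors

    uvw = np.multiply.outer(np.multiply.outer(u, v), w)   # (U ⊗ V) ⊗ W
    uv_w = np.multiply.outer(u, np.multiply.outer(v, w))  # U ⊗ (V ⊗ W)
    assert np.allclose(uvw, uv_w)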

The category of vector spaces with tensor product is an example of a symmetric monoidal category.

The universal-property definition of a tensor product is valid in more categories than just the category of vector spaces. Instead of using multilinear (bilinear) maps, the general tensor product definition uses multimorphisms.[4]

Let n be a non-negative integer. The nth tensor power of the vector space V is the n-fold tensor product of V with itself. That is,

V^⊗n := V ⊗ ⋯ ⊗ V (n factors).

Let

φ : V^n → V^⊗n, (v1, …, vn) ↦ v1 ⊗ ⋯ ⊗ vn

be the natural multilinear embedding of the Cartesian power of V into the tensor power of V. Then, by the universal property, there is a unique isomorphism between the space of multilinear maps V^n → W and the space of linear maps V^⊗n → W: each multilinear map f corresponds to the unique linear map f̃ with f = f̃ ∘ φ.
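
In coordinates, the universal property says a multilinear map is a linear functional on the tensor power; a NumPy sketch (our illustration; the coefficient tensor C is arbitrary):

    import numpy as np
    from functools import reduce

    # A trilinear map with coefficient tensor C, evaluated directly, agrees
    # with the linear functional C applied to the pure tensor v1 ⊗ v2 ⊗ v3.
    rng = np.random.default_rng(3)
    vs = [rng.standard_normal(3) for _ in range(3)]
    C = rng.standard_normal((3, 3, 3))

    power = reduce(np.multiply.outer, vs)          # v1 ⊗ v2 ⊗ v3, shape (3, 3, 3)
    multilinear = np.einsum('ijk,i,j,k->', C, *vs)
    linear_on_power = np.dot(C.ravel(), power.ravel())
    assert np.allclose(multilinear, linear_on_power)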

Tensors equipped with their product operation form an algebra, called the tensor algebra.

The interplay of evaluation and coevaluation can be used to characterize finite-dimensional vector spaces without referring to bases.[8]

Given two finite-dimensional vector spaces U, V over the same field K, denote the dual space of U as U*, and the K-vector space of all linear maps from U to V as Hom(U, V). There is an isomorphism

U* ⊗ V ≅ Hom(U, V)

defined by letting the pure tensor f ⊗ v act on an element u of U via (f ⊗ v)(u) = f(u) v.
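
In coordinates this isomorphism is the familiar decomposition of a matrix into rank-1 maps; a NumPy sketch (our own; M is arbitrary):

    import numpy as np

    # A matrix M is the sum of the rank-1 maps (i-th column of M) ⊗ e_i*,
    # where each summand sends x to e_i*(x) * M[:, i].
    rng = np.random.default_rng(4)
    M = rng.standard_normal((4, 3))        # a linear map U → V, dim U = 3

    E = np.eye(3)                          # rows play the role of the dual basis
    rank_ones = [np.outer(M[:, i], E[i]) for i in range(3)]
    assert np.allclose(M, sum(rank_ones))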

Furthermore, given three vector spaces U, V, W the tensor product is linked to the vector space of all linear maps, as follows:

Hom(U ⊗ V, W) ≅ Hom(U, Hom(V, W)).
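
In coordinates this is currying: a NumPy sketch (our own; the array T is arbitrary) treats the same 3-index array either as a map on the Kronecker product or as a curried map:

    import numpy as np

    # T[w, u, v]: flattening the (u, v) indices gives a matrix acting on
    # u ⊗ v, while contracting one argument at a time gives the curried form.
    rng = np.random.default_rng(5)
    T = rng.standard_normal((2, 3, 4))     # dim W = 2, dim U = 3, dim V = 4
    u, v = rng.standard_normal(3), rng.standard_normal(4)

    as_map_on_tensor = T.reshape(2, 12) @ np.kron(u, v)   # Hom(U ⊗ V, W)
    as_curried = np.einsum('wuv,u,v->w', T, u, v)         # Hom(U, Hom(V, W))
    assert np.allclose(as_map_on_tensor, as_curried)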

The tensor product of two modules A and B over a commutative ring R is defined in exactly the same way as the tensor product of vector spaces over a field:

A ⊗_R B := F(A × B) / G,

where F(A × B) is the free R-module generated by the Cartesian product A × B and G is the submodule generated by the elements

(a + a′, b) − (a, b) − (a′, b),
(a, b + b′) − (a, b) − (a, b′),
(ra, b) − r·(a, b),
(a, rb) − r·(a, b),

for all a, a′ in A, b, b′ in B, and r in R; the class of (a, b) is written a ⊗ b.

More generally, the tensor product can be defined even if the ring is non-commutative. In this case A has to be a right R-module and B a left R-module, and instead of the last two relations above, the relation

(ar, b) ∼ (a, rb)

is imposed. If R is non-commutative, the result is no longer an R-module, but just an abelian group.

Let A be a right R-module and B be a left R-module. Then the tensor product of A and B is an abelian group defined by

A ⊗_R B := F(A × B) / G,

where F(A × B) is now the free abelian group on A × B and G is the subgroup generated by the relations listed above.

Tensoring does not preserve injectivity: if f : B → C is an injective map of left R-modules, the induced map id_A ⊗ f : A ⊗ B → A ⊗ C is not usually injective. For example, tensoring the (injective) map given by multiplication with n, n : Z → Z, with Z/nZ yields the zero map 0 : Z/nZ → Z/nZ, which is not injective. Higher Tor functors measure the failure of the tensor product to be left exact. All higher Tor functors are assembled in the derived tensor product.
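
The example can be checked by elementary arithmetic; a short Python sketch (our own):

    # Multiplication by n on Z becomes the zero map after reducing modulo n,
    # since n*a is divisible by n for every a.
    n = 6
    for a in range(n):
        assert (n * a) % n == 0    # the induced map Z/nZ → Z/nZ is zero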

A particular example is when A and B are fields containing a common subfield R. The tensor product of fields is closely related to Galois theory: if, say, A = R[x] / f(x), where f is some irreducible polynomial with coefficients in R, the tensor product can be calculated as

A ⊗_R B ≅ B[x] / f(x),

where f is now regarded as a polynomial with coefficients in B.
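
As an illustration (a SymPy sketch of our own; it assumes SymPy is available and takes R = Q, f(x) = x² + 1, so that A is the Gaussian rationals Q(i)):

    from sympy import I, Symbol, factor

    # f is irreducible over Q, but splits over Q(i); this mirrors how
    # A ⊗_R A ≅ A[x]/(f) decomposes when f factors over A.
    x = Symbol('x')
    print(factor(x**2 + 1))                 # irreducible over Q: x**2 + 1
    print(factor(x**2 + 1, extension=I))    # splits over Q(i): (x - I)*(x + I)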

Hilbert spaces generalize finite-dimensional vector spaces to countably infinite dimensions. The tensor product is still defined; it is the tensor product of Hilbert spaces.

When the basis for a vector space is no longer countable, the appropriate axiomatic formalization for the vector space is that of a topological vector space. The tensor product is still defined; it is the topological tensor product.

Some vector spaces can be decomposed into direct sums of subspaces. In such cases, the tensor product of two spaces can be decomposed into sums of products of the subspaces (in analogy to the way that multiplication distributes over addition).
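
In matrix terms this is the block structure of the Kronecker product; a NumPy/SciPy sketch (our own, with arbitrary blocks):

    import numpy as np
    from scipy.linalg import block_diag

    # Tensoring a direct sum (a block-diagonal matrix) with C produces the
    # direct sum of the tensor products.
    rng = np.random.default_rng(6)
    A, B, C = (rng.standard_normal((2, 2)) for _ in range(3))

    lhs = np.kron(block_diag(A, B), C)
    rhs = block_diag(np.kron(A, C), np.kron(B, C))
    assert np.allclose(lhs, rhs)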

Vector spaces endowed with an additional multiplicative structure are called algebras. The tensor product of two such algebras is again an algebra, with multiplication given on pure tensors by (a1 ⊗ b1) · (a2 ⊗ b2) = (a1 a2) ⊗ (b1 b2).

This is a special case of the product of tensors if they are seen as multilinear maps (see also tensors as multilinear maps). Thus the components of the tensor product of multilinear forms can be computed by the Kronecker product.
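
A NumPy sketch (our own; F and G stand for the component matrices of two bilinear forms) showing that the flattened outer product of components is exactly their Kronecker product:

    import numpy as np

    # Components of the tensor product of two forms are the pairwise products
    # F[i, j] * G[k, l]; flattened in the right order this is np.kron(F, G).
    rng = np.random.default_rng(7)
    F, G = rng.standard_normal((2, 3)), rng.standard_normal((4, 5))

    outer = np.multiply.outer(F, G)                      # shape (2, 3, 4, 5)
    flattened = outer.transpose(0, 2, 1, 3).reshape(8, 15)
    assert np.allclose(flattened, np.kron(F, G))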

Though called a "tensor product", the tensor product of graphs is not a tensor product in the above sense; it is in fact the category-theoretic product in the category of graphs and graph homomorphisms. Its adjacency matrix, however, is the Kronecker product of the adjacency matrices of the two graphs. Compare also the section on the tensor product of linear maps above.
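
A small NumPy sketch (our own; P3 and K2 are a path and a single edge) of the adjacency-matrix statement:

    import numpy as np

    # The tensor-product graph has vertex pairs, with an edge when both
    # coordinates are adjacent; its adjacency matrix is the Kronecker product.
    P3 = np.array([[0, 1, 0],     # path on 3 vertices
                   [1, 0, 1],
                   [0, 1, 0]])
    K2 = np.array([[0, 1],        # single edge
                   [1, 0]])

    adj_product = np.kron(P3, K2)
    print(adj_product)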

The most general setting for the tensor product is the monoidal category. It captures the algebraic essence of tensoring without making any specific reference to what is being tensored. Thus, every tensor product can be expressed as an instance of the monoidal structure in some particular category, acting on some particular objects.

A number of important algebras can be constructed from the tensor algebra as quotients: these include the exterior algebra, the symmetric algebra, the Clifford algebra, the Weyl algebra, and universal enveloping algebras in general.

The symmetric algebra is constructed in a similar manner, from the symmetric product

x ⊙ y := x ⊗ y + y ⊗ x;

equivalently, it is the quotient of the tensor algebra by the ideal generated by all elements x ⊗ y − y ⊗ x.
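
In components (a NumPy sketch of our own), the symmetric product of two vectors is the symmetrized outer product:

    import numpy as np

    # x ⊙ y = x ⊗ y + y ⊗ x in coordinates: a symmetric matrix,
    # invariant under swapping the two slots.
    rng = np.random.default_rng(8)
    x, y = rng.standard_normal(4), rng.standard_normal(4)

    sym = np.outer(x, y) + np.outer(y, x)
    assert np.allclose(sym, sym.T)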

Array programming languages may have this pattern built in. For example, in APL the tensor product is expressed as ∘.× (for example A ∘.× B or A ∘.× B ∘.× C). In J the tensor product is the dyadic form of */ (for example a */ b or a */ b */ c).

Note that J's treatment also allows the representation of some tensor fields, as a and b may be functions instead of constants. This product of two functions is a derived function, and if a and b are differentiable, then a */ b is differentiable.

However, this kind of notation is not universally present in array languages. Other array languages may require explicit treatment of indices (for example, MATLAB), or may not support higher-order functions such as the Jacobian derivative (for example, Fortran/APL).
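
For comparison, the same pattern in NumPy (our addition; not one of the languages discussed above):

    import numpy as np
    from functools import reduce

    # np.multiply.outer plays the role of APL's ∘.× and J's */,
    # and chains to higher orders in the same way.
    a, b, c = np.arange(2), np.arange(3), np.arange(4)

    ab = np.multiply.outer(a, b)                   # like a */ b
    abc = reduce(np.multiply.outer, [a, b, c])     # like a */ b */ c, shape (2, 3, 4)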