# Linear form

In mathematics, a linear form (also known as a linear functional,[1] a one-form, or a covector) is a linear map from a vector space to its field of scalars (often, the real numbers or the complex numbers).

The constant zero function, mapping every vector to zero, is trivially a linear functional. Every other linear functional (such as the ones below) is surjective (that is, its range is all of the scalar field k).

Linear functionals first appeared in functional analysis, the study of vector spaces of functions. A typical example of a linear functional is integration: the linear transformation defined by the Riemann integral

$$I(f) = \int_a^b f(x)\, dx$$

is a linear functional on the vector space of continuous real-valued functions on the interval [a, b].
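The linearity of integration can be checked numerically. The sketch below uses a left Riemann sum as a discretized stand-in for the integral (the interval, sample count, and test functions are chosen purely for illustration):

```python
# A left Riemann sum as a (discretized) linear functional on functions
# defined on [a, b]; interval and sample count are illustrative choices.
def riemann(f, a=0.0, b=1.0, n=1000):
    """Approximate the integral of f over [a, b] with n left-endpoint rectangles."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

f = lambda x: x * x
g = lambda x: 3 * x + 1

# Linearity: I(2f + 5g) == 2*I(f) + 5*I(g), up to floating-point rounding,
# because the Riemann sum itself is a linear combination of point evaluations.
lhs = riemann(lambda x: 2 * f(x) + 5 * g(x))
rhs = 2 * riemann(f) + 5 * riemann(g)
```

Linearity holds exactly for the Riemann sum (not just in the limit), since each sample point contributes linearly.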

In finite dimensions, a linear functional can be visualized in terms of its level sets, the sets of vectors which map to a given value. In three dimensions, the level sets of a linear functional are a family of mutually parallel planes; in higher dimensions, they are parallel hyperplanes. This method of visualizing linear functionals is sometimes introduced in general relativity texts, such as Gravitation by Misner, Thorne & Wheeler (1973).
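The parallel-plane picture can be verified directly: translating a point by a kernel vector stays on the same level set, while translating by a fixed vector with value 1 moves to the next level set. A minimal sketch in R³ (the functional and test vectors are illustrative choices):

```python
# Level sets of a linear functional f on R^3 are parallel planes:
# moving within the kernel keeps the value fixed, while a fixed vector
# with f == 1 advances every level set by one.
def f(v):
    x, y, z = v
    return 2 * x + 3 * y + 1 * z

def add(u, v):
    return tuple(a + b for a, b in zip(u, v))

p = (1.0, 0.0, 0.0)       # a point with f(p) == 2
k = (3.0, -2.0, 0.0)      # a kernel vector: f(k) == 0
step = (0.0, 0.0, 1.0)    # f(step) == 1, so it advances one level set

on_same_plane = f(add(p, k)) == f(p)        # stays on the plane f == 2
next_plane = f(add(p, step)) == f(p) + 1    # lands on the parallel plane f == 3
```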

Linear functionals are particularly important in quantum mechanics. Quantum mechanical systems are represented by Hilbert spaces, which are anti-isomorphic to their own dual spaces. A state of a quantum mechanical system can be identified with a linear functional. For more information see bra–ket notation.

In the theory of generalized functions, certain kinds of generalized functions called distributions can be realized as linear functionals on spaces of test functions.

Every non-degenerate bilinear form ⟨·,·⟩ on a finite-dimensional vector space V induces an isomorphism V → V∗ : v ↦ v∗ such that

$$v^*(w) := \langle v, w \rangle \quad \text{for all } w \in V,$$

where v∗ ∈ V∗ denotes the linear functional ⟨v, ·⟩ on V.

The inverse isomorphism is V∗ → V : v∗ ↦ v, where v is the unique element of V such that

$$\langle v, w \rangle = v^*(w) \quad \text{for all } w \in V.$$
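This correspondence can be made concrete on R² with the standard dot product as the non-degenerate bilinear form: v induces the functional ⟨v, ·⟩, and v is recovered by evaluating that functional on the standard basis. A minimal sketch (names and the chosen vector are illustrative):

```python
# R^2 with the standard dot product as the (non-degenerate) bilinear form:
# v induces the functional v_star = <v, .>, and v is recovered from v_star
# by reading off its values on the standard basis.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def to_functional(v):
    """The isomorphism V -> V*: send v to the linear functional <v, .>."""
    return lambda w: dot(v, w)

def to_vector(v_star, dim=2):
    """The inverse V* -> V, using that the standard basis is orthonormal
    for the standard dot product."""
    basis = [tuple(1.0 if i == j else 0.0 for j in range(dim)) for i in range(dim)]
    return tuple(v_star(e) for e in basis)

v = (2.0, -5.0)
v_star = to_functional(v)
recovered = to_vector(v_star)
```

Recovering v by evaluation on the standard basis works here because the standard basis is orthonormal for the standard dot product; a general bilinear form would require inverting its Gram matrix.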

In an infinite-dimensional Hilbert space, analogous results hold by the Riesz representation theorem. There is a mapping v ↦ v∗ from V into its continuous dual space V∗.

Given a basis e₁, …, eₙ of V, the dual basis of linear functionals ω¹, …, ωⁿ is defined by

$$\omega^i(e_j) = \delta^i_j,$$

where δ is the Kronecker delta. Here the superscripts of the basis functionals are not exponents but are instead contravariant indices.
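A dual basis can be computed explicitly: if the basis vectors are the columns of a matrix B, the coordinate functionals ωⁱ are the rows of B⁻¹, since B⁻¹B = I is exactly the statement ωⁱ(eⱼ) = δⁱⱼ. A sketch in R² (the basis is an illustrative choice):

```python
# Computing a dual basis in R^2: with basis vectors as the columns of B,
# the functionals omega^i are the rows of B^{-1}, because B^{-1} B = I
# encodes omega^i(e_j) = delta^i_j.
e1, e2 = (2.0, 1.0), (1.0, 1.0)       # a basis of R^2, columns of B

# Inverse of the 2x2 matrix B = [[2, 1], [1, 1]].
det = e1[0] * e2[1] - e2[0] * e1[1]
row1 = (e2[1] / det, -e2[0] / det)    # first row of B^{-1}  -> omega^1
row2 = (-e1[1] / det, e1[0] / det)    # second row of B^{-1} -> omega^2

def functional(row):
    return lambda v: row[0] * v[0] + row[1] * v[1]

omega1, omega2 = functional(row1), functional(row2)
```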

Every linear functional u on V can then be expanded in the dual basis as

$$u = \sum_{i=1}^n u_i\, \omega^i,$$

due to linearity of scalar multiples of functionals and pointwise linearity of sums of functionals. Then

$$u(e_j) = \sum_{i=1}^n u_i\, \omega^i(e_j) = \sum_{i=1}^n u_i\, \delta^i_j = u_j.$$

So each component of a linear functional can be extracted by applying the functional to the corresponding basis vector.
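Component extraction by evaluation is easy to demonstrate. In the sketch below (R² with the standard basis; the functional u is an illustrative choice), evaluating u on each basis vector yields its components, which then reconstruct u on any vector:

```python
# Extracting the components of a linear functional u by evaluating it on
# basis vectors: with u = u_1 * omega^1 + u_2 * omega^2 in the dual basis,
# u(e_j) collapses to u_j.
def u(v):
    return 4.0 * v[0] - 7.0 * v[1]    # so u = 4*omega^1 - 7*omega^2

e1, e2 = (1.0, 0.0), (0.0, 1.0)       # standard basis of R^2
u1, u2 = u(e1), u(e2)                 # components extracted by evaluation

# Reconstruct u from its components and compare on a sample vector.
w = (3.0, 2.0)
reconstructed = u1 * w[0] + u2 * w[1]
```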

Modules over a ring are generalizations of vector spaces, removing the restriction that coefficients belong to a field. Given a module M over a ring R, a linear form on M is a linear map from M to R, where the latter is considered as a module over itself. The space of linear forms is always denoted Hom_k(V, k), whether k is a field or not. It is a right module if V is a left module.

The existence of "enough" linear forms on a module is equivalent to projectivity.[8]

This relationship was discovered by Henry Löwig in 1934 (although it is usually credited to F. Murray),[11] and can be generalized to arbitrary finite extensions of a field in the natural way. It has many important consequences, some of which will now be described.

Continuous linear functionals have nice properties for analysis: a linear functional is continuous if and only if its kernel is closed,[14] and a non-trivial continuous linear functional is an open map, even if the (topological) vector space is not complete.[15]

Any two linear functionals with the same kernel are proportional (i.e. scalar multiples of each other). This fact can be generalized to the following theorem.
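Proportionality of functionals with a common kernel can be checked numerically: the ratio g(v)/f(v) is the same constant at every vector outside the shared kernel. A sketch on R² (the functionals and sample vectors are illustrative choices):

```python
# Two linear functionals on R^2 with the same kernel are scalar multiples
# of each other: g(v) == c * f(v) for a single constant c.
def f(v):
    return 2.0 * v[0] + 3.0 * v[1]

def g(v):
    return -10.0 * v[0] - 15.0 * v[1]   # g = -5 * f, so ker g == ker f

kernel_vec = (3.0, -2.0)                # f(kernel_vec) == 0 == g(kernel_vec)

# The constant of proportionality, read off at any vector where f != 0.
c = g((1.0, 0.0)) / f((1.0, 0.0))

samples = [(1.0, 0.0), (0.0, 1.0), (2.5, -1.0), (4.0, 7.0)]
proportional = all(g(v) == c * f(v) for v in samples)
```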