# Exterior algebra

When regarded in this manner, the exterior product of two vectors is called a 2-blade. More generally, the exterior product of any number *k* of vectors can be defined and is sometimes called a *k*-blade. It lives in a space known as the *k*-th exterior power. The magnitude of the resulting *k*-blade is the volume of the *k*-dimensional parallelotope whose edges are the given vectors, just as the magnitude of the scalar triple product of vectors in three dimensions gives the volume of the parallelepiped generated by those vectors.

The definition of the exterior algebra makes sense for spaces not just of geometric vectors, but of other vector-like objects such as vector fields or functions. In full generality, the exterior algebra can be defined for modules over a commutative ring, and for other structures of interest in abstract algebra. It is one of these more general constructions where the exterior algebra finds one of its most important applications, where it appears as the algebra of differential forms that is fundamental in areas that use differential geometry. The exterior algebra also has many algebraic properties that make it a convenient tool in algebra itself. The association of the exterior algebra to a vector space is a type of functor on vector spaces, which means that it is compatible in a certain way with linear transformations of vector spaces. The exterior algebra is one example of a bialgebra, meaning that its dual space also possesses a product, and this dual product is compatible with the exterior product. This dual algebra is precisely the algebra of alternating multilinear forms, and the pairing between the exterior algebra and its dual is given by the interior product.

The Cartesian plane **R**^{2} is a real vector space equipped with a basis consisting of a pair of unit vectors **e**_{1} = (1, 0) and **e**_{2} = (0, 1).

Suppose that **v** = *a***e**_{1} + *b***e**_{2} and **w** = *c***e**_{1} + *d***e**_{2} are a pair of given vectors in **R**^{2}, written in components. There is a unique parallelogram having **v** and **w** as two of its sides. The *area* of this parallelogram is given by the standard determinant formula: Area = |*ad* − *bc*|. Consider now the exterior product of **v** and **w**:

**v** ∧ **w** = (*a***e**_{1} + *b***e**_{2}) ∧ (*c***e**_{1} + *d***e**_{2}) = (*ad* − *bc*) **e**_{1} ∧ **e**_{2},

where the first step uses the distributive law for the exterior product and the last uses the fact that the exterior product is alternating, and in particular **e**_{2} ∧ **e**_{1} = −**e**_{1} ∧ **e**_{2}.

The fact that this coefficient is the signed area is not an accident. In fact, it is relatively easy to see that the exterior product should be related to the signed area if one tries to axiomatize this area as an algebraic construct. In detail, if A(**v**, **w**) denotes the signed area of the parallelogram of which the pair of vectors **v** and **w** form two adjacent sides, then A must satisfy the following properties:

- A(*r***v**, *s***w**) = *rs*A(**v**, **w**) for any real numbers *r* and *s*, since rescaling either of the sides rescales the area by the same amount (and reversing the direction of one of the sides reverses the orientation of the parallelogram).
- A(**v**, **v**) = 0, since the area of the degenerate parallelogram determined by **v** (that is, a line segment) is zero.
- A(**w**, **v**) = −A(**v**, **w**), since interchanging the roles of **v** and **w** reverses the orientation of the parallelogram.
- A(**v** + *r***w**, **w**) = A(**v**, **w**) for any real number *r*, since adding a multiple of **w** to **v** affects neither the base nor the height of the parallelogram, and consequently preserves its area.
- A(**e**_{1}, **e**_{2}) = 1, since the area of the unit square is one.

With the exception of the last property, the exterior product of two vectors satisfies the same properties as the area. In a certain sense, the exterior product generalizes the final property by allowing the area of a parallelogram to be compared to that of any chosen parallelogram in a parallel plane (here, the one with sides **e**_{1} and **e**_{2}). In other words, the exterior product provides a *basis-independent* formulation of area.^{[6]}
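The signed-area interpretation is easy to check numerically. Below is a minimal Python sketch (the helper name is mine, not from the article): the coefficient of **e**_{1} ∧ **e**_{2} in **v** ∧ **w** is computed directly and behaves exactly like the signed area.

```python
def wedge_coefficient_2d(v, w):
    """Coefficient of e1∧e2 in v ∧ w for v, w in R^2: the signed area
    of the parallelogram with sides v and w."""
    return v[0] * w[1] - v[1] * w[0]

v = (2, 1)
w = (1, 3)
area = wedge_coefficient_2d(v, w)       # 2*3 - 1*1 = 5
flipped = wedge_coefficient_2d(w, v)    # orientation reversed: -5
degenerate = wedge_coefficient_2d(v, v) # repeated side: 0
```

Swapping the arguments flips the sign (orientation reversal), and a repeated argument gives zero, mirroring the axioms for A above.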

For vectors in a 3-dimensional oriented vector space with a bilinear scalar product, the exterior algebra is closely related to the cross product and triple product. Using a standard basis (**e**_{1}, **e**_{2}, **e**_{3}), the exterior product of a pair of vectors **u** = *u*_{1}**e**_{1} + *u*_{2}**e**_{2} + *u*_{3}**e**_{3} and **v** = *v*_{1}**e**_{1} + *v*_{2}**e**_{2} + *v*_{3}**e**_{3} is

**u** ∧ **v** = (*u*_{1}*v*_{2} − *u*_{2}*v*_{1})(**e**_{1} ∧ **e**_{2}) + (*u*_{2}*v*_{3} − *u*_{3}*v*_{2})(**e**_{2} ∧ **e**_{3}) + (*u*_{3}*v*_{1} − *u*_{1}*v*_{3})(**e**_{3} ∧ **e**_{1}),

where (**e**_{1} ∧ **e**_{2}, **e**_{2} ∧ **e**_{3}, **e**_{3} ∧ **e**_{1}) is a basis for the three-dimensional space Λ^{2}(**R**^{3}). The coefficients above are the same as those in the usual definition of the cross product of vectors in three dimensions with a given orientation, the only differences being that the exterior product is not an ordinary vector, but instead is a 2-vector, and that the exterior product does not depend on the choice of orientation.

Bringing in a third vector **w** = *w*_{1}**e**_{1} + *w*_{2}**e**_{2} + *w*_{3}**e**_{3}, the exterior product of three vectors is

**u** ∧ **v** ∧ **w** = (*u*_{1}*v*_{2}*w*_{3} + *u*_{2}*v*_{3}*w*_{1} + *u*_{3}*v*_{1}*w*_{2} − *u*_{1}*v*_{3}*w*_{2} − *u*_{2}*v*_{1}*w*_{3} − *u*_{3}*v*_{2}*w*_{1}) **e**_{1} ∧ **e**_{2} ∧ **e**_{3},

where **e**_{1} ∧ **e**_{2} ∧ **e**_{3} is the basis vector for the one-dimensional space Λ^{3}(**R**^{3}). The scalar coefficient is the triple product of the three vectors.

The cross product and triple product in a three dimensional Euclidean vector space each admit both geometric and algebraic interpretations. The cross product **u** × **v** can be interpreted as a vector which is perpendicular to both **u** and **v** and whose magnitude is equal to the area of the parallelogram determined by the two vectors. It can also be interpreted as the vector consisting of the minors of the matrix with columns **u** and **v**. The triple product of **u**, **v**, and **w** is a signed scalar representing a geometric oriented volume. Algebraically, it is the determinant of the matrix with columns **u**, **v**, and **w**. The exterior product in three dimensions allows for similar interpretations: it, too, can be identified with oriented lines, areas, volumes, etc., that are spanned by one, two or more vectors. The exterior product generalizes these geometric notions to all vector spaces and to any number of dimensions, even in the absence of a scalar product.
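The correspondence between the wedge coefficients and the cross and triple products can be verified directly. A small Python sketch (function names are illustrative, not from the article): the three components of **u** ∧ **v** on the basis (**e**_{1}∧**e**_{2}, **e**_{2}∧**e**_{3}, **e**_{3}∧**e**_{1}) are the same numbers as the components of **u** × **v**, merely relabelled.

```python
def wedge_3d(u, v):
    """Coefficients of u ∧ v on the basis (e1∧e2, e2∧e3, e3∧e1)."""
    return (u[0] * v[1] - u[1] * v[0],   # e1∧e2 component
            u[1] * v[2] - u[2] * v[1],   # e2∧e3 component
            u[2] * v[0] - u[0] * v[2])   # e3∧e1 component

def cross(u, v):
    """Ordinary cross product u × v."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def triple(u, v, w):
    """Scalar triple product u · (v × w): the coefficient of
    e1∧e2∧e3 in u ∧ v ∧ w."""
    c = cross(v, w)
    return u[0] * c[0] + u[1] * c[1] + u[2] * c[2]

u, v = (1, 2, 3), (4, 5, 6)
a, c = wedge_3d(u, v), cross(u, v)
assert a == (c[2], c[0], c[1])           # same numbers, permuted labels
```

The permutation of labels reflects the Hodge-style pairing of each basis 2-vector with the complementary basis vector.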

The exterior product ∧ of two elements of Λ(*V*) is the product induced by the tensor product ⊗ of *T*(*V*). That is, if *π* : *T*(*V*) → Λ(*V*) is the canonical surjection, and *a* and *b* are in Λ(*V*), then there are *α* and *β* in *T*(*V*) such that *a* = *π*(*α*) and *b* = *π*(*β*), and *a* ∧ *b* = *π*(*α* ⊗ *β*).

More generally, if *σ* is a permutation of the integers [1, ..., *k*], and *x*_{1}, *x*_{2}, ..., *x*_{k} are elements of *V*, it follows that

*x*_{σ(1)} ∧ *x*_{σ(2)} ∧ ... ∧ *x*_{σ(k)} = sgn(*σ*) *x*_{1} ∧ *x*_{2} ∧ ... ∧ *x*_{k},

where sgn(*σ*) is the signature of the permutation *σ*.

In particular, if *x*_{i} = *x*_{j} for some *i* ≠ *j*, then the following generalization of the alternating property also holds:

*x*_{1} ∧ *x*_{2} ∧ ... ∧ *x*_{k} = 0.

The *k*th **exterior power** of *V*, denoted Λ^{k}(*V*), is the vector subspace of Λ(*V*) spanned by elements of the form

*v*_{1} ∧ *v*_{2} ∧ ... ∧ *v*_{k}, with *v*_{i} ∈ *V*, *i* = 1, 2, ..., *k*.

If *α* ∈ Λ^{k}(*V*), then *α* is said to be a ***k*-vector**. If, furthermore, *α* can be expressed as an exterior product of *k* elements of *V*, then *α* is said to be **decomposable**. Although decomposable *k*-vectors span Λ^{k}(*V*), not every element of Λ^{k}(*V*) is decomposable. For example, in **R**^{4}, the following 2-vector is not decomposable:

*α* = **e**_{1} ∧ **e**_{2} + **e**_{3} ∧ **e**_{4}.

If the dimension of *V* is *n* and {*e*_{1}, ..., *e*_{n}} is a basis for *V*, then the set

{*e*_{i_1} ∧ *e*_{i_2} ∧ ... ∧ *e*_{i_k} : 1 ≤ *i*_{1} < *i*_{2} < ... < *i*_{k} ≤ *n*}

is a basis for Λ^{k}(*V*). The reason is the following: given any exterior product of the form *v*_{1} ∧ *v*_{2} ∧ ... ∧ *v*_{k},

every vector v_{j} can be written as a linear combination of the basis vectors e_{i}; using the bilinearity of the exterior product, this can be expanded to a linear combination of exterior products of those basis vectors. Any exterior product in which the same basis vector appears more than once is zero; any exterior product in which the basis vectors do not appear in the proper order can be reordered, changing the sign whenever two basis vectors change places. In general, the resulting coefficients of the basis k-vectors can be computed as the minors of the matrix that describes the vectors v_{j} in terms of the basis e_{i}.
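The minor computation described above can be prototyped in a few lines of Python (representation and names are mine, not from the article): each basis *k*-vector's coefficient is the *k* × *k* minor picking the corresponding columns of the matrix whose rows are the *v*_{j}.

```python
from itertools import combinations

def det(m):
    """Determinant by Laplace expansion along the first row (fine for small k)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def kvector_coefficients(vectors):
    """Coefficients of v1∧...∧vk on the basis {e_{i1}∧...∧e_{ik}, i1 < ... < ik}:
    each coefficient is the k×k minor of the matrix whose rows are the v_j,
    taking the columns i1, ..., ik."""
    k, n = len(vectors), len(vectors[0])
    return {cols: det([[vectors[r][c] for c in cols] for r in range(k)])
            for cols in combinations(range(n), k)}

# e1 ∧ e2 in R^3: only the (0, 1) minor is non-zero.
coeffs = kvector_coefficients([[1, 0, 0], [0, 1, 0]])
```

A repeated vector makes every minor vanish, recovering the alternating property.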

By counting the basis elements, the dimension of Λ^{k}(*V*) is equal to a binomial coefficient:

dim Λ^{k}(*V*) = *n*! / (*k*! (*n* − *k*)!),

where *n* is the dimension of *V*. In particular, Λ^{k}(*V*) = {0} for *k* > *n*.

Any element of the exterior algebra can be written as a sum of *k*-vectors. Hence, as a vector space the exterior algebra is a direct sum

Λ(*V*) = Λ^{0}(*V*) ⊕ Λ^{1}(*V*) ⊕ Λ^{2}(*V*) ⊕ ... ⊕ Λ^{n}(*V*)

(where by convention Λ^{0}(*V*) = *K*, the field underlying *V*, and Λ^{1}(*V*) = *V*), and therefore its dimension is equal to the sum of the binomial coefficients, which is 2^{n}.
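The dimension count is a one-line check in Python (the example dimension is arbitrary):

```python
from math import comb

n = 4                                        # dim V (an example dimension)
dims = [comb(n, k) for k in range(n + 1)]    # dim Λ^k(V) = C(n, k)
total = sum(dims)                            # dim Λ(V)
assert total == 2 ** n                       # binomial coefficients sum to 2^n
```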

If *α* ∈ Λ^{k}(*V*), then it is possible to express *α* as a linear combination of decomposable *k*-vectors:

*α* = *α*^{(1)} + *α*^{(2)} + ... + *α*^{(s)},

where each *α*^{(i)} is decomposable.

The **rank** of the *k*-vector *α* is the minimal number of decomposable *k*-vectors in such an expansion of *α*. This is similar to the notion of tensor rank.

Rank is particularly important in the study of 2-vectors (Sternberg 1964, §III.6) (Bryant et al. 1991). The rank of a 2-vector *α* can be identified with half the rank of the matrix of coefficients of *α* in a basis. Thus if *e*_{i} is a basis for *V*, then *α* can be expressed uniquely as

*α* = Σ_{i,j} *a*_{ij} *e*_{i} ∧ *e*_{j}

where *a*_{ij} = −*a*_{ji} (the matrix of coefficients is skew-symmetric). The rank of the matrix *a*_{ij} is therefore even, and is twice the rank of the form *α*.
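The rank relation can be checked numerically with NumPy; a minimal sketch (the example 2-vector **e**_{1} ∧ **e**_{2} + **e**_{3} ∧ **e**_{4} is mine, chosen because it is a sum of two decomposable pieces and therefore has rank 2):

```python
import numpy as np

# α = e1∧e2 + e3∧e4 in R^4: set a_12 = a_34 = 1, then skew-extend.
A = np.zeros((4, 4))
A[0, 1] = A[2, 3] = 1.0
A = A - A.T                        # enforce a_ij = -a_ji

m = int(np.linalg.matrix_rank(A))  # rank of a skew-symmetric matrix is even
rank_alpha = m // 2                # rank of the 2-vector α

# Contrast: the decomposable 2-vector e1∧e2 alone has matrix rank 2, so rank 1.
B = np.zeros((4, 4))
B[0, 1] = 1.0
B = B - B.T
rank_beta = int(np.linalg.matrix_rank(B)) // 2
```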

The exterior product of a *k*-vector with a *p*-vector is a (*k* + *p*)-vector, once again invoking bilinearity. As a consequence, the direct sum decomposition of the preceding section

gives the exterior algebra the additional structure of a graded algebra, that is

Λ^{k}(*V*) ∧ Λ^{p}(*V*) ⊂ Λ^{k+p}(*V*).

The exterior product is graded anticommutative, meaning that if *α* ∈ Λ^{k}(*V*) and *β* ∈ Λ^{p}(*V*), then

*α* ∧ *β* = (−1)^{kp} *β* ∧ *α*.
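Graded anticommutativity can be exercised with a small prototype (the dictionary representation and helper name are mine, not from the article): a multivector is stored as a map from sorted tuples of basis indices to coefficients, and wedging concatenates the index tuples and sorts them while counting sign flips.

```python
def wedge(a, b):
    """Wedge of multivectors stored as {sorted tuple of basis indices: coeff}."""
    out = {}
    for I, x in a.items():
        for J, y in b.items():
            if set(I) & set(J):
                continue                  # a repeated basis vector gives zero
            idx, sign = list(I + J), 1
            # bubble sort, flipping the sign once per transposition
            for _ in range(len(idx)):
                for j in range(len(idx) - 1):
                    if idx[j] > idx[j + 1]:
                        idx[j], idx[j + 1] = idx[j + 1], idx[j]
                        sign = -sign
            key = tuple(idx)
            out[key] = out.get(key, 0) + sign * x * y
    return {k: v for k, v in out.items() if v != 0}

e1, e2 = {(0,): 1}, {(1,): 1}
assert wedge(e1, e2) == {(0, 1): 1}
assert wedge(e2, e1) == {(0, 1): -1}        # k = p = 1: sign (−1)^{1·1}
beta = {(1, 2): 1}                          # e2∧e3, a 2-vector
assert wedge(e1, beta) == wedge(beta, e1)   # k·p even: the factors commute
```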

In addition to studying the graded structure on the exterior algebra, Bourbaki (1989) studies additional graded structures on exterior algebras, such as those on the exterior algebra of a graded module (a module that already carries its own gradation).

Given any unital associative *K*-algebra *A* and any *K*-linear map *j* : *V* → *A* such that *j*(*v*)*j*(*v*) = 0 for every *v* in *V*, there exists *precisely one* unital algebra homomorphism *f* : Λ(*V*) → *A* such that *j*(*v*) = *f*(*i*(*v*)) for all *v* in *V* (here *i* is the natural inclusion of *V* in Λ(*V*), see above).

To construct the most general algebra that contains *V* and whose multiplication is alternating on *V*, it is natural to start with the most general associative algebra that contains *V*, the tensor algebra *T*(*V*), and then enforce the alternating property by taking a suitable quotient. We thus take the two-sided ideal *I* in *T*(*V*) generated by all elements of the form *v* ⊗ *v* for *v* in *V*, and define Λ(*V*) as the quotient

Λ(*V*) = *T*(*V*) / *I*

(and use ∧ as the symbol for multiplication in Λ(*V*)). It is then straightforward to show that Λ(*V*) contains *V* and satisfies the above universal property.

As a consequence of this construction, the operation of assigning to a vector space *V* its exterior algebra Λ(*V*) is a functor from the category of vector spaces to the category of algebras.

Rather than defining Λ(*V*) first and then identifying the exterior powers Λ^{k}(*V*) as certain subspaces, one may alternatively define the spaces Λ^{k}(*V*) first and then combine them to form the algebra Λ(*V*). This approach is often used in differential geometry and is described in the next section.

Given a commutative ring *R* and an *R*-module *M*, we can define the exterior algebra Λ(*M*) just as above, as a suitable quotient of the tensor algebra *T*(*M*). It will satisfy the analogous universal property. Many of the properties of Λ(*M*) also require that *M* be a projective module. Where finite dimensionality is used, the properties further require that *M* be finitely generated and projective. Generalizations to the most common situations can be found in Bourbaki (1989).

Exterior algebras of vector bundles are frequently considered in geometry and topology. There are no essential differences between the algebraic properties of the exterior algebra of finite-dimensional vector bundles and those of the exterior algebra of finitely generated projective modules, by the Serre–Swan theorem. More general exterior algebras can be defined for sheaves of modules.

If *K* is a field of characteristic 0,^{[11]} then the exterior algebra of a vector space *V* over *K* can be canonically identified with the vector subspace of T(*V*) consisting of antisymmetric tensors. Recall that the exterior algebra is the quotient of T(*V*) by the ideal *I* generated by elements of the form *x* ⊗ *x*.

Let T^{r}(*V*) be the space of homogeneous tensors of degree *r*. This is spanned by decomposable tensors

*v*_{1} ⊗ *v*_{2} ⊗ ... ⊗ *v*_{r}, with *v*_{i} ∈ *V*.

The **antisymmetrization** (or sometimes the **skew-symmetrization**) of a decomposable tensor is defined by

Alt(*v*_{1} ⊗ *v*_{2} ⊗ ... ⊗ *v*_{r}) = (1/*r*!) Σ_{σ ∈ S_r} sgn(*σ*) *v*_{σ(1)} ⊗ *v*_{σ(2)} ⊗ ... ⊗ *v*_{σ(r)},

where the sum is taken over the symmetric group S_{r} of permutations of {1, ..., *r*}.

The image Alt(T(*V*)) is the alternating tensor algebra, a graded vector subspace of T(*V*); it carries an associative graded product defined by *t* ∧̂ *s* = Alt(*t* ⊗ *s*). Although this product differs from the tensor product, the kernel of *Alt* is precisely the ideal *I* (again, assuming that *K* has characteristic 0), and there is a canonical isomorphism between the alternating tensor algebra and Λ(*V*).

Suppose that *V* has finite dimension *n*, and that a basis **e**_{1}, ..., **e**_{n} of *V* is given. Then any alternating tensor *t* ∈ A^{r}(*V*) ⊂ T^{r}(*V*) can be written in index notation as

*t* = *t*^{i_1 i_2 ... i_r} **e**_{i_1} ⊗ **e**_{i_2} ⊗ ... ⊗ **e**_{i_r},

where the components *t*^{i_1 ... i_r} are completely antisymmetric in their indices (the Einstein summation convention is in use).

The exterior product of two alternating tensors *t* and *s* of ranks *r* and *p* is given by

The components of this tensor are precisely the skew part of the components of the tensor product *s* ⊗ *t*, denoted by square brackets on the indices:

Given two vector spaces *V* and *X* and a natural number *k*, an **alternating operator** from *V*^{k} to *X* is a multilinear map

*f* : *V*^{k} → *X*

such that whenever *v*_{1}, ..., *v*_{k} are linearly dependent vectors in *V*, then

*f*(*v*_{1}, ..., *v*_{k}) = 0.

The above discussion specializes to the case when *X* = *K*, the base field. In this case an alternating multilinear function *f* : *V*^{k} → *K* is called an **alternating multilinear form**. The set of all alternating multilinear forms of degree *k* is a vector space, which can be naturally identified with the dual space (Λ^{k}(*V*))^{∗}.

Under such identification, the exterior product takes a concrete form: it produces a new antisymmetric map from two given ones. Suppose *ω* : *V*^{k} → *K* and *η* : *V*^{m} → *K* are two antisymmetric maps. As in the case of tensor products of multilinear maps, the number of variables of their exterior product is the sum of the numbers of their variables. Depending on the choice of identification of elements of exterior power with multilinear forms, the exterior product is defined as

*ω* ∧ *η* = ((*k* + *m*)! / (*k*! *m*!)) Alt(*ω* ⊗ *η*),

where, if the characteristic of the base field *K* is 0, the alternation Alt of a multilinear map is defined to be the average of the sign-adjusted values over all the permutations of its variables:

Alt(*f*)(*x*_{1}, ..., *x*_{k}) = (1/*k*!) Σ_{σ ∈ S_k} sgn(*σ*) *f*(*x*_{σ(1)}, ..., *x*_{σ(k)}).

When the field *K* has finite characteristic, an equivalent version of the second expression without any factorials or any constants is well-defined:

(*ω* ∧ *η*)(*x*_{1}, ..., *x*_{k+m}) = Σ_{σ ∈ Sh_{k,m}} sgn(*σ*) *ω*(*x*_{σ(1)}, ..., *x*_{σ(k)}) *η*(*x*_{σ(k+1)}, ..., *x*_{σ(k+m)}),

where Sh_{k,m} ⊂ S_{k+m} is the subset of (*k*, *m*)-shuffles: permutations *σ* with *σ*(1) < *σ*(2) < ... < *σ*(*k*) and *σ*(*k* + 1) < ... < *σ*(*k* + *m*).
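The characteristic-0 alternation is straightforward to prototype; a sketch (function names are illustrative, not from the article):

```python
from itertools import permutations
from math import factorial

def perm_sign(p):
    """Signature of a permutation given as a tuple of indices."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def alternation(f, k):
    """Alt(f): the average of the sign-adjusted values of a k-linear map f
    over all permutations of its arguments (characteristic-0 convention)."""
    def g(*vs):
        return sum(perm_sign(p) * f(*(vs[i] for i in p))
                   for p in permutations(range(k))) / factorial(k)
    return g

f = lambda u, v: u[0] * v[1]       # bilinear but not alternating
g = alternation(f, 2)              # g(u, v) = (u[0]v[1] - v[0]u[1]) / 2
```

By construction `g` is antisymmetric and vanishes on repeated arguments.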

Suppose that *V* is finite-dimensional. If *V*^{∗} denotes the dual space to the vector space *V*, then for each *α* ∈ *V*^{∗}, it is possible to define an antiderivation on the algebra Λ(*V*),

*i*_{α} : Λ^{k}(*V*) → Λ^{k−1}(*V*).

This derivation is called the **interior product** with *α*, or sometimes the **insertion operator**, or **contraction** by *α*.

Suppose that **w** ∈ Λ^{k}*V*. Then **w** can be regarded as a multilinear map on *V*^{∗}, so it is defined by its values on the *k*-fold Cartesian product *V*^{∗} × *V*^{∗} × ... × *V*^{∗}. If *u*_{1}, *u*_{2}, ..., *u*_{k−1} are *k* − 1 elements of *V*^{∗}, then define

(*i*_{α}**w**)(*u*_{1}, *u*_{2}, ..., *u*_{k−1}) = **w**(*α*, *u*_{1}, *u*_{2}, ..., *u*_{k−1}).

Additionally, let *i*_{α}*f* = 0 whenever *f* is a pure scalar (i.e., belonging to Λ^{0}*V*).

These three properties are sufficient to characterize the interior product as well as define it in the general infinite-dimensional case.
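On basis elements the interior product has a simple combinatorial form: contracting *α* with *e*_{i_1} ∧ ... ∧ *e*_{i_k} removes each index in turn with an alternating sign. A Python sketch (the dictionary representation and helper name are mine):

```python
def interior(a, w):
    """Interior product i_a(w): a is a covector given by its components a_i,
    and w is a multivector stored as {sorted tuple of basis indices: coeff}.
    Each index is removed in turn, with sign (-1)^position."""
    out = {}
    for I, x in w.items():
        for pos, i in enumerate(I):
            key = I[:pos] + I[pos + 1:]
            out[key] = out.get(key, 0) + (-1) ** pos * a[i] * x
    return {k: v for k, v in out.items() if v != 0}

e1_dual = [1, 0, 0]                 # the covector e1*
w = {(0, 1): 1}                     # the 2-blade e1∧e2
assert interior(e1_dual, w) == {(1,): 1}               # i_{e1*}(e1∧e2) = e2
assert interior(e1_dual, interior(e1_dual, w)) == {}   # i_a ∘ i_a = 0
```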

Suppose that *V* has finite dimension *n*. Then the interior product induces a canonical isomorphism of vector spaces

In the geometrical setting, a non-zero element of the top exterior power Λ^{n}(*V*) (which is a one-dimensional vector space) is sometimes called a **volume form** (or **orientation form**, although this term may sometimes lead to ambiguity). The name orientation form comes from the fact that a choice of preferred top element determines an orientation of the whole exterior algebra, since it is tantamount to fixing an ordered basis of the vector space. Relative to the preferred volume form *σ*, the isomorphism is given explicitly by

If, in addition to a volume form, the vector space *V* is equipped with an inner product identifying *V* with *V*^{∗}, then the resulting isomorphism is called the **Hodge star operator**, which maps an element to its **Hodge dual**:

∗ : Λ^{k}(*V*) → Λ^{n−k}(*V*).

The composite of the Hodge star with itself is, on each Λ^{k}(*V*), a multiple of the identity: ∗∗ = (−1)^{k(n−k)+q} id, where id is the identity mapping, and the inner product has metric signature (*p*, *q*), that is, *p* pluses and *q* minuses.

For *V* a finite-dimensional space, an inner product (or a pseudo-Euclidean inner product) on *V* defines an isomorphism of *V* with *V*^{∗}, and so also an isomorphism of Λ^{k}*V* with (Λ^{k}*V*)^{∗}. The pairing between these two spaces also takes the form of an inner product. On decomposable *k*-vectors,

⟨*v*_{1} ∧ *v*_{2} ∧ ... ∧ *v*_{k}, *w*_{1} ∧ *w*_{2} ∧ ... ∧ *w*_{k}⟩ = det(⟨*v*_{i}, *w*_{j}⟩),

the determinant of the matrix of inner products. In the special case *v*_{i} = *w*_{i}, the inner product is the square norm of the *k*-vector, given by the determinant of the Gramian matrix (⟨*v*_{i}, *v*_{j}⟩). This is then extended bilinearly (or sesquilinearly in the complex case) to a non-degenerate inner product on Λ^{k}*V*. If *e*_{i}, *i* = 1, 2, ..., *n*, form an orthonormal basis of *V*, then the vectors of the form

*e*_{i_1} ∧ *e*_{i_2} ∧ ... ∧ *e*_{i_k}, *i*_{1} < *i*_{2} < ... < *i*_{k},

constitute an orthonormal basis for Λ^{k}*V*.

With respect to the inner product, exterior multiplication and the interior product are mutually adjoint. Specifically, for **v** ∈ Λ^{k−1}(*V*), **w** ∈ Λ^{k}(*V*), and *x* ∈ *V*,

⟨*x* ∧ **v**, **w**⟩ = ⟨**v**, *i*_{*x*^{♭}}**w**⟩,

where *x*^{♭} ∈ *V*^{∗} is the image of *x* under the musical isomorphism, the linear functional defined by

*x*^{♭}(*y*) = ⟨*x*, *y*⟩

for all *y* ∈ *V*. This property completely characterizes the inner product on the exterior algebra.
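The determinant-of-inner-products formula on decomposable *k*-vectors can be checked numerically; a small NumPy sketch (the helper name is mine, not from the article):

```python
import numpy as np

def kvector_inner(vs, ws):
    """<v1∧...∧vk, w1∧...∧wk> as the determinant of the matrix of
    pairwise inner products <v_i, w_j>."""
    G = np.array([[float(np.dot(v, w)) for w in ws] for v in vs])
    return float(np.linalg.det(G))

v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])
unit_square = kvector_inner([v1, v2], [v1, v2])       # Gram determinant: 1
scaled = kvector_inner([2 * v1, v2], [2 * v1, v2])    # doubling a side: 4
```

For *v*_{i} = *w*_{i} this is the square norm, i.e. the squared *k*-volume of the parallelotope spanned by the vectors.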

Indeed, more generally for **v** ∈ Λ^{k−l}(*V*), **w** ∈ Λ^{k}(*V*), and **x** ∈ Λ^{l}(*V*), iteration of the above adjoint properties gives

⟨**x** ∧ **v**, **w**⟩ = ⟨**v**, *i*_{**x**^{♭}}**w**⟩,

where **x**^{♭} ∈ Λ^{l}(*V*^{∗}) is the *l*-form dual to **x** under the isomorphism induced by the inner product.

There is a correspondence between the graded dual of the graded algebra Λ(*V*) and alternating multilinear forms on *V*. The exterior algebra (as well as the symmetric algebra) inherits a bialgebra structure, and, indeed, a Hopf algebra structure, from the tensor algebra. See the article on tensor algebras for a detailed treatment of the topic.

The exterior product of multilinear forms defined above is dual to a coproduct defined on Λ(*V*), giving the structure of a coalgebra. The **coproduct** is a linear function Δ : Λ(*V*) → Λ(*V*) ⊗ Λ(*V*) which is given by

Δ(*v*) = 1 ⊗ *v* + *v* ⊗ 1

on elements *v* ∈ *V*. The symbol 1 stands for the unit element of the field *K*. Recall that *K* ⊂ Λ(*V*), so that the above really does lie in Λ(*V*) ⊗ Λ(*V*). This definition of the coproduct is lifted to the full space Λ(*V*) by (linear) homomorphism. The correct form of this homomorphism is not what one might naively write, but has to be the one carefully defined in the coalgebra article. In this case, one obtains

Δ(*v* ∧ *w*) = 1 ⊗ (*v* ∧ *w*) + *v* ⊗ *w* − *w* ⊗ *v* + (*v* ∧ *w*) ⊗ 1.

Expanding this out in detail, one obtains the following expression on decomposable elements:

Observe that the coproduct preserves the grading of the algebra. Extending to the full space Λ(*V*), one has

Δ : Λ^{k}(*V*) → ⊕_{p+q=k} Λ^{p}(*V*) ⊗ Λ^{q}(*V*).

The tensor symbol ⊗ used in this section should be understood with some caution: it is *not* the same tensor symbol as the one being used in the definition of the alternating product. Intuitively, it is perhaps easiest to think of it as just another, but different, tensor product: it is still (bi-)linear, as tensor products should be, but it is the product that is appropriate for the definition of a bialgebra, that is, for creating the object Λ(*V*) ⊗ Λ(*V*). Any lingering doubt can be shaken by pondering the equalities (1 ⊗ *v*) ∧ (1 ⊗ *w*) = 1 ⊗ (*v* ∧ *w*) and (*v* ⊗ 1) ∧ (1 ⊗ *w*) = *v* ⊗ *w*, which follow from the definition of the coalgebra, as opposed to naive manipulations involving the tensor and wedge symbols. This distinction is developed in greater detail in the article on tensor algebras. Here, there is much less of a problem, in that the alternating product ∧ clearly corresponds to multiplication in the bialgebra, leaving the symbol ⊗ free for use in the definition of the bialgebra.

In practice, this presents no particular problem, as long as one avoids the fatal trap of replacing alternating sums of ⊗ by the wedge symbol, with one exception: one can construct an alternating product from ⊗, with the understanding that it works in a different space. Immediately below, an example is given: the alternating product for the *dual space* can be given in terms of the coproduct. The construction of the bialgebra here parallels the construction in the tensor algebra article almost exactly, except for the need to correctly track the alternating signs for the exterior algebra.

In terms of the coproduct, the exterior product on the dual space is just the graded dual of the coproduct:

where the tensor product on the right-hand side is of multilinear maps (extended by zero on elements of incompatible homogeneous degree: more precisely, *α* ∧ *β* = *ε* ∘ (*α* ⊗ *β*) ∘ Δ, where *ε* is the counit, as defined presently).

The **counit** is the homomorphism *ε* : Λ(*V*) → *K* that returns the 0-graded component of its argument. The coproduct and counit, along with the exterior product, define the structure of a bialgebra on the exterior algebra.

Suppose that *V* and *W* are a pair of vector spaces and *f* : *V* → *W* is a linear map. Then, by the universal property, there exists a unique homomorphism of graded algebras

Λ(*f*) : Λ(*V*) → Λ(*W*)

such that Λ(*f*)|_{Λ^{1}(*V*)} = *f* : *V* = Λ^{1}(*V*) → *W* = Λ^{1}(*W*).

In particular, Λ(*f*) preserves homogeneous degree. The *k*-graded components of Λ(*f*) are given on decomposable elements by

Λ^{k}(*f*)(*x*_{1} ∧ ... ∧ *x*_{k}) = *f*(*x*_{1}) ∧ ... ∧ *f*(*x*_{k}).

The components of the transformation Λ^{k}(*f*) relative to a basis of *V* and *W* form the matrix of *k* × *k* minors of *f*. In particular, if *V* = *W* and *V* is of finite dimension *n*, then Λ^{n}(*f*) is a mapping of a one-dimensional vector space Λ^{n}*V* to itself, and is therefore given by a scalar: the determinant of *f*.

In particular, the exterior algebra of a direct sum is isomorphic to the tensor product of the exterior algebras:

Λ(*V* ⊕ *W*) ≅ Λ(*V*) ⊗ Λ(*W*).

In applications to linear algebra, the exterior product provides an abstract algebraic manner for describing the determinant and the minors of a matrix. For instance, it is well known that the determinant of a square matrix is equal to the volume of the parallelotope whose sides are the columns of the matrix (with a sign to track orientation). This suggests that the determinant can be *defined* in terms of the exterior product of the column vectors. Likewise, the *k* × *k* minors of a matrix can be defined by looking at the exterior products of column vectors chosen *k* at a time. These ideas can be extended not just to matrices but to linear transformations as well: the determinant of a linear transformation is the factor by which it scales the oriented volume of any given reference parallelotope. So the determinant of a linear transformation can be defined in terms of what the transformation does to the top exterior power. The action of a transformation on the lesser exterior powers gives a basis-independent way to talk about the minors of the transformation.
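The idea that the determinant is the coefficient produced on the top exterior power can be prototyped directly; a Python sketch (the multivector representation and function name are mine, not from the article): wedging the columns of a matrix one at a time and reading off the coefficient of **e**_{1} ∧ ... ∧ **e**_{n} recovers the determinant.

```python
import numpy as np

def wedge_columns(M):
    """Coefficient of e1∧...∧en in the wedge of the columns of M,
    accumulated one column at a time; this recovers det(M)."""
    n = M.shape[0]
    result = {(): 1.0}                  # the scalar 1 in Λ^0
    for col in M.T:                     # wedge the columns in order
        new = {}
        for I, x in result.items():
            for i in range(n):
                c = float(col[i])
                if i in I or c == 0.0:
                    continue            # repeated basis index: term vanishes
                sign = (-1) ** sum(1 for j in I if j > i)  # sort i into place
                key = tuple(sorted(I + (i,)))
                new[key] = new.get(key, 0.0) + sign * x * c
        result = new
    return result.get(tuple(range(n)), 0.0)

M = np.array([[2.0, 0.0, 1.0],
              [1.0, 1.0, 0.0],
              [0.0, 3.0, 1.0]])
assert abs(wedge_columns(M) - np.linalg.det(M)) < 1e-9
```

Reading off the coefficients of the intermediate partial wedges instead of only the top one would similarly recover the *k* × *k* minors.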

All results obtained from other definitions of the determinant, trace and adjoint can be obtained from this definition (since these definitions are equivalent). Here are some basic properties related to these new definitions:

In physics, many quantities are naturally represented by alternating operators. For example, if the motion of a charged particle is described by velocity and acceleration vectors in four-dimensional spacetime, then normalization of the velocity vector requires that the electromagnetic force must be an alternating operator on the velocity. Its six degrees of freedom are identified with the electric and magnetic fields.

The decomposable *k*-vectors have geometric interpretations: the bivector *u* ∧ *v* represents the plane spanned by the vectors, "weighted" with a number, given by the area of the oriented parallelogram with sides *u* and *v*. Analogously, the 3-vector *u* ∧ *v* ∧ *w* represents the spanned 3-space weighted by the volume of the oriented parallelepiped with edges *u*, *v*, and *w*.

Decomposable *k*-vectors in Λ^{k}*V* correspond to weighted *k*-dimensional linear subspaces of *V*. In particular, the Grassmannian of *k*-dimensional subspaces of *V*, denoted Gr_{k}(*V*), can be naturally identified with an algebraic subvariety of the projective space **P**(Λ^{k}*V*). This is called the Plücker embedding.

The exterior algebra has notable applications in differential geometry, where it is used to define differential forms.^{[21]} Differential forms are mathematical objects that evaluate the length of vectors, areas of parallelograms, and volumes of higher-dimensional bodies, so they can be integrated over curves, surfaces and higher dimensional manifolds in a way that generalizes the line integrals and surface integrals from calculus. A differential form at a point of a differentiable manifold is an alternating multilinear form on the tangent space at the point. Equivalently, a differential form of degree *k* is a linear functional on the *k*-th exterior power of the tangent space. As a consequence, the exterior product of multilinear forms defines a natural exterior product for differential forms. Differential forms play a major role in diverse areas of differential geometry.

In particular, the exterior derivative gives the exterior algebra of differential forms on a manifold the structure of a differential graded algebra. The exterior derivative commutes with pullback along smooth mappings between manifolds, and it is therefore a natural differential operator. The exterior algebra of differential forms, equipped with the exterior derivative, is a cochain complex whose cohomology is called the de Rham cohomology of the underlying manifold and plays a vital role in the algebraic topology of differentiable manifolds.

In representation theory, the exterior algebra is one of the two fundamental Schur functors on the category of vector spaces, the other being the symmetric algebra. Together, these constructions are used to generate the irreducible representations of the general linear group; see fundamental representation.

The exterior algebra over the complex numbers is the archetypal example of a superalgebra, which plays a fundamental role in physical theories pertaining to fermions and supersymmetry. A single element of the exterior algebra is called a **supernumber**^{[22]} or Grassmann number. The exterior algebra itself is then just a one-dimensional superspace: it is just the set of all of the points in the exterior algebra. The topology on this space is essentially the weak topology, the open sets being the cylinder sets. An *n*-dimensional superspace is just the *n*-fold product of exterior algebras.

Let *L* be a Lie algebra over a field *K*; then it is possible to define the structure of a chain complex on the exterior algebra of *L*. This is a *K*-linear mapping

∂ : Λ^{k+1}(*L*) → Λ^{k}(*L*)

defined on decomposable elements by

∂(*x*_{1} ∧ ... ∧ *x*_{k+1}) = (1/(*k* + 1)) Σ_{i<j} (−1)^{i+j+1} [*x*_{i}, *x*_{j}] ∧ *x*_{1} ∧ ... ∧ *x̂*_{i} ∧ ... ∧ *x̂*_{j} ∧ ... ∧ *x*_{k+1},

where the circumflex denotes omission of that factor.

The Jacobi identity holds if and only if ∂∂ = 0, and so this is a necessary and sufficient condition for an anticommutative nonassociative algebra *L* to be a Lie algebra. Moreover, in that case Λ*L* is a chain complex with boundary operator ∂. The homology associated to this complex is the Lie algebra homology.

The exterior algebra is the main ingredient in the construction of the Koszul complex, a fundamental object in homological algebra.

The exterior algebra was first introduced by Hermann Grassmann in 1844 under the blanket term of *Ausdehnungslehre*, or *Theory of Extension*.^{[23]}
This referred more generally to an algebraic (or axiomatic) theory of extended quantities and was one of the early precursors to the modern notion of a vector space. Saint-Venant also published similar ideas of exterior calculus for which he claimed priority over Grassmann.^{[24]}

The algebra itself was built from a set of rules, or axioms, capturing the formal aspects of Cayley and Sylvester's theory of multivectors. It was thus a *calculus*, much like the propositional calculus, except focused exclusively on the task of formal reasoning in geometrical terms.^{[25]}
In particular, this new development allowed for an *axiomatic* characterization of dimension, a property that had previously only been examined from the coordinate point of view.

The import of this new theory of vectors and multivectors was lost on mid-19th-century mathematicians,^{[26]}
until being thoroughly vetted by Giuseppe Peano in 1888. Peano's work also remained somewhat obscure until the turn of the century, when the subject was unified by members of the French geometry school (notably Henri Poincaré, Élie Cartan, and Gaston Darboux) who applied Grassmann's ideas to the calculus of differential forms.

A short while later, Alfred North Whitehead, borrowing from the ideas of Peano and Grassmann, introduced his universal algebra. This then paved the way for the 20th century developments of abstract algebra by placing the axiomatic notion of an algebraic system on a firm logical footing.