# Linear subspace

In mathematics, and more specifically in linear algebra, a **linear subspace**, also known as a **vector subspace**,^{[1]}^{[note 1]} is a vector space that is a subset of some larger vector space. A linear subspace is usually simply called a *subspace* when the context serves to distinguish it from other types of subspaces.

If *V* is a vector space over a field *K* and if *W* is a subset of *V*, then *W* is a **linear subspace** of *V* if under the operations of *V*, *W* is a vector space over *K*. Equivalently, a nonempty subset *W* is a subspace of *V* if, whenever *w*_{1}, *w*_{2} are elements of *W* and *α*, *β* are elements of *K*, it follows that *αw*_{1} + *βw*_{2} is in *W*.^{[2]}^{[3]}^{[4]}^{[5]}^{[6]}
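The two-scalar criterion can be checked mechanically. The following sketch (the subset `W`, the sample vectors, and the scalars are all arbitrary choices for the demonstration) verifies that the set of triples in **R**^{3} with last component zero is closed under combinations *αw*_{1} + *βw*_{2}:

```python
from fractions import Fraction

# W = vectors in R^3 whose last component is 0 (an arbitrary sample subspace).
def in_W(v):
    return v[2] == 0

def combo(alpha, w1, beta, w2):
    """The linear combination alpha*w1 + beta*w2, computed componentwise."""
    return tuple(alpha * x + beta * y for x, y in zip(w1, w2))

w1, w2 = (1, 2, 0), (-3, 5, 0)
for alpha, beta in [(2, -7), (Fraction(1, 3), 4), (0, 0)]:
    assert in_W(combo(alpha, w1, beta, w2))  # closure under combinations
```

Note that the case *α* = *β* = 0 also confirms that the zero vector belongs to *W*.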

As a corollary, all vector spaces are equipped with at least two (possibly coinciding) linear subspaces: the zero vector space consisting of the zero vector alone and the entire vector space itself. These are called the **trivial subspaces** of the vector space.^{[7]}

**Example I.** Let the field *K* be the set **R** of real numbers, and let the vector space *V* be the real coordinate space **R**^{3}.
Take *W* to be the set of all vectors in *V* whose last component is 0.
Then *W* is a subspace of *V*.

**Example II.** Let the field be **R** again, but now let the vector space *V* be the Cartesian plane **R**^{2}.
Take *W* to be the set of points (*x*, *y*) of **R**^{2} such that *x* = *y*.
Then *W* is a subspace of **R**^{2}.

In general, any subset of the real coordinate space **R**^{n} that is defined by a system of homogeneous linear equations will yield a subspace.
(The equation in example I was *z* = 0, and the equation in example II was *x* = *y*.)
Geometrically, these subspaces are points, lines, planes and spaces that pass through the point **0**.
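As a quick numerical sanity check (a sketch; the particular system below is an arbitrary illustrative choice, not one from the text), solutions of a homogeneous system remain solutions under linear combinations:

```python
from fractions import Fraction

def matvec(A, v):
    """Matrix-vector product, row by row."""
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def is_solution(A, v):
    return all(entry == 0 for entry in matvec(A, v))

# Sample system: x - 2y = 0 and y - z = 0; its solutions are t*(2, 1, 1).
A = [[1, -2, 0], [0, 1, -1]]
v, w = [2, 1, 1], [-6, -3, -3]
assert is_solution(A, v) and is_solution(A, w)

# Any combination a*v + b*w solves the system again, so the solution
# set is closed under linear combinations.
a, b = Fraction(1, 2), 7
assert is_solution(A, [a * x + b * y for x, y in zip(v, w)])
```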

**Example III.** Again take the field to be **R**, but now let the vector space *V* be the set **R**^{R} of all functions from **R** to **R**.
Let C(**R**) be the subset consisting of continuous functions.
Then C(**R**) is a subspace of **R**^{R}.

**Example IV.** Keep the same field and vector space as before, but now consider the set Diff(**R**) of all differentiable functions.
The same sort of argument as before shows that this is a subspace too.

From the definition of vector spaces, it follows that subspaces are nonempty, and are closed under sums and under scalar multiples.^{[8]} Equivalently, subspaces can be characterized by the property of being closed under linear combinations. That is, a nonempty set *W* is a subspace if and only if every linear combination of finitely many elements of *W* also belongs to *W*.
Since any linear combination of finitely many elements can be built up from combinations of two elements at a time, it is equivalent to require closure only under linear combinations of pairs of elements.

In a topological vector space *X*, a subspace *W* need not be topologically closed, but a finite-dimensional subspace is always closed.^{[9]} The same is true for subspaces of finite codimension (i.e., subspaces determined by a finite number of continuous linear functionals).

Descriptions of subspaces include the solution set to a homogeneous system of linear equations, the subset of Euclidean space described by a system of homogeneous linear parametric equations, the span of a collection of vectors, and the null space, column space, and row space of a matrix. Geometrically (especially over the field of real numbers and its subfields), a subspace is a flat in an *n*-space that passes through the origin.

A natural description of a 1-subspace is the set of all scalar multiples of one non-zero vector **v**. Two 1-subspaces specified by non-zero vectors **v** and **v**′ are equal if and only if one vector can be obtained from the other by scalar multiplication:

∃*c* ∈ *K*: **v**′ = *c***v**.

This idea is generalized for higher dimensions with linear span, but criteria for equality of *k*-spaces specified by sets of *k* vectors are not so simple.

A dual description is provided with linear functionals (usually implemented as linear equations). One non-zero linear functional **F** specifies its kernel subspace **F** = 0 of codimension 1. Subspaces of codimension 1 specified by two non-zero linear functionals are equal if and only if one functional can be obtained from the other by scalar multiplication (in the dual space):

∃*c* ≠ 0: **F**′ = *c***F**.

This is generalized to higher codimensions with a system of equations. The following two subsections present this latter description in detail, and the remaining four subsections further describe the idea of linear span.

The solution set to any homogeneous system of linear equations with *n* variables is a subspace in the coordinate space *K*^{n}:

*a*_{11}*x*_{1} + *a*_{12}*x*_{2} + ⋯ + *a*_{1*n*}*x*_{*n*} = 0
*a*_{21}*x*_{1} + *a*_{22}*x*_{2} + ⋯ + *a*_{2*n*}*x*_{*n*} = 0
⋮
*a*_{*m*1}*x*_{1} + *a*_{*m*2}*x*_{2} + ⋯ + *a*_{*mn*}*x*_{*n*} = 0

For example, the set of all vectors (*x*, *y*, *z*) (over real or rational numbers) satisfying the equations

*x* + 3*y* + 2*z* = 0 and 2*x* − 4*y* + 5*z* = 0

is a one-dimensional subspace of *K*^{3}.

In a finite-dimensional space, a homogeneous system of linear equations can be written as a single matrix equation:

*A***x** = **0**.

The set of solutions to this equation is known as the null space of the matrix. For example, the subspace described above is the null space of the matrix with rows (1, 3, 2) and (2, −4, 5).

Every subspace of *K*^{n} can be described as the null space of some matrix (see § Algorithms below for more).
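Computing a basis for a null space can be sketched as follows (exact rational arithmetic; the helper names `rref` and `null_space_basis` are ours, not standard library functions): reduce the matrix and read one basis vector off each free column.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(lead, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[lead], m[piv] = m[piv], m[lead]
        m[lead] = [x / m[lead][col] for x in m[lead]]
        for r in range(len(m)):
            if r != lead and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[lead])]
        lead += 1
        if lead == len(m):
            break
    return m

def null_space_basis(A):
    """One basis vector per free column of the reduced echelon form."""
    n, R = len(A[0]), rref(A)
    pivots = {}  # column index -> row index of its pivot
    for i, row in enumerate(R):
        for j, x in enumerate(row):
            if x != 0:
                pivots[j] = i
                break
    basis = []
    for free in (j for j in range(n) if j not in pivots):
        v = [Fraction(0)] * n
        v[free] = Fraction(1)
        for j, i in pivots.items():
            v[j] = -R[i][free]
        basis.append(v)
    return basis

# x - 2z = 0 and y + 3z = 0: the null space is the line through (2, -3, 1).
assert null_space_basis([[1, 0, -2], [0, 1, 3]]) == [[2, -3, 1]]
```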

The subset of *K*^{n} described by a system of homogeneous linear parametric equations is a subspace:

*x*_{*i*} = *a*_{*i*1}*t*_{1} + *a*_{*i*2}*t*_{2} + ⋯ + *a*_{*ik*}*t*_{*k*} (for *i* = 1, ..., *n*),

where *t*_{1}, ..., *t*_{*k*} range over all scalars in *K*.

For example, the set of all vectors (*x*, *y*, *z*) parameterized by the equations

*x* = 2*t*_{1} + 3*t*_{2},  *y* = 5*t*_{1} − 4*t*_{2},  *z* = −*t*_{1} + 2*t*_{2}

is a two-dimensional subspace of *K*^{3}, if *K* is a number field (such as the real or rational numbers).^{[note 2]}

In linear algebra, the system of parametric equations can be written as a single vector equation:

(*x*, *y*, *z*) = *t*_{1}(2, 5, −1) + *t*_{2}(3, −4, 2).

The expression on the right is called a linear combination of the vectors
(2, 5, −1) and (3, −4, 2). These two vectors are said to **span** the resulting subspace.

In general, a **linear combination** of vectors **v**_{1}, **v**_{2}, ... , **v**_{*k*} is any vector of the form

*t*_{1}**v**_{1} + *t*_{2}**v**_{2} + ⋯ + *t*_{*k*}**v**_{*k*}.

The set of all such linear combinations is called the **span** of **v**_{1}, ... , **v**_{*k*}.

If the vectors **v**_{1}, ... , **v**_{k} have *n* components, then their span is a subspace of *K*^{n}. Geometrically, the span is the flat through the origin in *n*-dimensional space determined by the points **v**_{1}, ... , **v**_{k}.
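Membership in a span reduces to a rank computation (a sketch in exact arithmetic; `rank` and `in_span` are illustrative helper names): a vector **w** lies in the span of **v**_{1}, ..., **v**_{*k*} exactly when appending **w** does not increase the rank.

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def in_span(vectors, w):
    """w is in span(vectors) iff appending w leaves the rank unchanged."""
    return rank(list(vectors)) == rank(list(vectors) + [list(w)])

v1, v2 = [2, 5, -1], [3, -4, 2]           # the spanning vectors from the text
assert in_span([v1, v2], [5, 1, 1])        # v1 + v2 lies in the span
assert not in_span([v1, v2], [0, 0, 1])    # but (0, 0, 1) does not
```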

For example, the *xz*-plane in **R**^{3} is spanned by the vectors (1, 0, 0) and (0, 0, 1): every point of the *xz*-plane can be reached from the origin by first moving some distance in the direction of (1, 0, 0) and then moving some distance in the direction of (0, 0, 1).

A system of linear parametric equations in a finite-dimensional space can also be written as a single matrix equation:

**x** = *A***t**,

where *A* is the matrix whose columns are the spanning vectors; in the example above, *A* has columns (2, 5, −1) and (3, −4, 2).

In this case, the subspace consists of all possible values of the vector **x**. In linear algebra, this subspace is known as the column space (or image) of the matrix *A*. It is precisely the subspace of *K*^{n} spanned by the column vectors of *A*.

The row space of a matrix is the subspace spanned by its row vectors. The row space is interesting because it is the orthogonal complement of the null space (see below).
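This orthogonality is easy to check numerically over **R** (a small sketch with a hand-picked matrix and null-space vector):

```python
# A hand-picked matrix and a vector n in its null space (A @ n == 0).
A = [[1, 0, -2], [0, 1, 3]]
n = [2, -3, 1]
assert all(sum(a * x for a, x in zip(row, n)) == 0 for row in A)

# Every row-space vector c1*row1 + c2*row2 is orthogonal to n.
for c1, c2 in [(1, 0), (0, 1), (3, -5)]:
    r = [c1 * a + c2 * b for a, b in zip(A[0], A[1])]
    assert sum(x * y for x, y in zip(r, n)) == 0
```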

In general, a subspace of *K*^{n} determined by *k* parameters (or spanned by *k* vectors) has dimension *k*. However, there are exceptions to this rule. For example, the subspace of *K*^{3} spanned by the three vectors (1, 0, 0), (0, 0, 1), and (2, 0, 3) is just the *xz*-plane, with each point on the plane described by infinitely many different values of *t*_{1}, *t*_{2}, *t*_{3}.
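The rank computation below (a sketch; `rank` is an illustrative helper, not a library call) confirms that these three vectors span only a 2-dimensional subspace:

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

# (2, 0, 3) = 2*(1, 0, 0) + 3*(0, 0, 1), so the three vectors are dependent.
vectors = [[1, 0, 0], [0, 0, 1], [2, 0, 3]]
assert rank(vectors) == 2   # the span is the 2-dimensional xz-plane
```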

In general, the vectors **v**_{1}, ... , **v**_{*k*} are called **linearly independent** if

*t*_{1}**v**_{1} + ⋯ + *t*_{*k*}**v**_{*k*} ≠ *u*_{1}**v**_{1} + ⋯ + *u*_{*k*}**v**_{*k*}

whenever (*t*_{1}, *t*_{2}, ... , *t*_{*k*}) ≠ (*u*_{1}, *u*_{2}, ... , *u*_{*k*}).^{[note 3]} If **v**_{1}, ..., **v**_{*k*} are linearly independent, then the **coordinates** *t*_{1}, ..., *t*_{*k*} for a vector in the span are uniquely determined.

A **basis** for a subspace *S* is a set of linearly independent vectors whose span is *S*. The number of elements in a basis is always equal to the geometric dimension of the subspace. Any spanning set for a subspace can be changed into a basis by removing redundant vectors (see § Algorithms below for more).

For example, let *S* be the subspace of **R**^{4} defined by the equations *x*_{1} = 2*x*_{2} and *x*_{3} = 5*x*_{4}. Then the vectors (2, 1, 0, 0) and (0, 0, 5, 1) are a basis for *S*. In particular, every vector that satisfies the above equations can be written uniquely as a linear combination of the two basis vectors:

(2*t*_{1}, *t*_{1}, 5*t*_{2}, *t*_{2}) = *t*_{1}(2, 1, 0, 0) + *t*_{2}(0, 0, 5, 1).
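The removal of redundant vectors from a spanning set can be sketched as follows (illustrative helper names, exact arithmetic): keep each vector only if it raises the rank of the vectors kept so far.

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def basis_from_spanning(vectors):
    """Drop each vector that is already a combination of the kept ones."""
    basis = []
    for v in vectors:
        if rank(basis + [v]) > rank(basis):
            basis.append(v)
    return basis

# (2, 0, 3) is redundant: it equals 2*(1, 0, 0) + 3*(0, 0, 1).
spanning = [[1, 0, 0], [0, 0, 1], [2, 0, 3]]
assert basis_from_spanning(spanning) == [[1, 0, 0], [0, 0, 1]]
```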

The set-theoretic inclusion relation defines a partial order on the set of all subspaces (of any dimension).

A subspace cannot lie in any subspace of lesser dimension. If dim *U* = *k*, a finite number, and *U* ⊂ *W*, then dim *W* = *k* if and only if *U* = *W*.

For example, the sum of two lines is the plane that contains them both. The dimension of the sum satisfies the inequality

max(dim *U*, dim *W*) ≤ dim(*U* + *W*) ≤ dim *U* + dim *W*.

Here, the minimum only occurs if one subspace is contained in the other, while the maximum is the most general case. The dimension of the intersection and the sum are related by the following equation:^{[15]}

dim(*U* + *W*) = dim *U* + dim *W* − dim(*U* ∩ *W*).
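For instance (a sketch; `rank` is an illustrative helper), taking *U* as the *xy*-plane and *W* as the *yz*-plane in **R**^{3}, the identity dim *U* + dim *W* = dim(*U* + *W*) + dim(*U* ∩ *W*) yields the dimension of the intersection from ranks alone:

```python
from fractions import Fraction

def rank(rows):
    """Rank via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(r + 1, len(m)):
            f = m[i][col] / m[r][col]
            m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

U = [[1, 0, 0], [0, 1, 0]]   # spanning vectors of the xy-plane
W = [[0, 1, 0], [0, 0, 1]]   # spanning vectors of the yz-plane

dim_sum = rank(U + W)                  # dim(U + W): stack all spanning vectors
dim_int = rank(U) + rank(W) - dim_sum  # rearranged dimension identity
assert dim_sum == 3   # U + W is all of R^3
assert dim_int == 1   # U ∩ W is the y-axis
```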

Most algorithms for dealing with subspaces involve row reduction. This is the process of applying elementary row operations to a matrix, until it reaches either row echelon form or reduced row echelon form. Row reduction has the following important properties:

1. The reduced matrix has the same null space as the original.
2. Row reduction does not change the span of the row vectors, i.e. the reduced matrix has the same row space as the original.
3. Row reduction does not affect the linear dependence of the column vectors.

If we instead put the matrix *A* into reduced row echelon form, then the resulting basis for the row space is uniquely determined. This provides an algorithm for checking whether two row spaces are equal and, by extension, whether two subspaces of *K*^{n} are equal.
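A sketch of that equality test (exact arithmetic; the helper names are ours): two matrices have the same row space exactly when their reduced row echelon forms agree after discarding zero rows.

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(lead, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[lead], m[piv] = m[piv], m[lead]
        m[lead] = [x / m[lead][col] for x in m[lead]]
        for r in range(len(m)):
            if r != lead and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[lead])]
        lead += 1
        if lead == len(m):
            break
    return m

def row_space_canonical(A):
    """Nonzero rows of the RREF: a canonical basis for the row space."""
    return [row for row in rref(A) if any(x != 0 for x in row)]

A = [[1, 2, 3], [4, 5, 6]]
B = [[1, 2, 3], [5, 7, 9]]   # second row of B = row1 + row2 of A
assert row_space_canonical(A) == row_space_canonical(B)

C = [[1, 2, 3], [4, 5, 7]]
assert row_space_canonical(A) != row_space_canonical(C)
```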

This produces a basis for the column space that is a subset of the original column vectors. It works because the columns with pivots are a basis for the column space of the echelon form, and row reduction does not change the linear dependence relationships between the columns.
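Reading off the pivot columns can be sketched as follows (illustrative helper names; exact arithmetic). Note that the basis consists of the corresponding columns of the *original* matrix, not of the echelon form:

```python
from fractions import Fraction

def rref(rows):
    """Reduced row echelon form over the rationals."""
    m = [[Fraction(x) for x in row] for row in rows]
    lead = 0
    for col in range(len(m[0]) if m else 0):
        piv = next((r for r in range(lead, len(m)) if m[r][col] != 0), None)
        if piv is None:
            continue
        m[lead], m[piv] = m[piv], m[lead]
        m[lead] = [x / m[lead][col] for x in m[lead]]
        for r in range(len(m)):
            if r != lead and m[r][col] != 0:
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[lead])]
        lead += 1
        if lead == len(m):
            break
    return m

def pivot_columns(A):
    """Indices of the pivot columns of the reduced echelon form."""
    cols = []
    for row in rref(A):
        for j, x in enumerate(row):
            if x != 0:
                cols.append(j)
                break
    return cols

# Columns of A: (1, 0, 0), (0, 0, 1), (2, 0, 3) -- the third is redundant.
A = [[1, 0, 2],
     [0, 0, 0],
     [0, 1, 3]]
piv = pivot_columns(A)
assert piv == [0, 1]
basis = [[row[j] for row in A] for j in piv]  # columns of the ORIGINAL matrix
assert basis == [[1, 0, 0], [0, 0, 1]]
```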

If the final column of the reduced row echelon form contains a pivot, then the input vector **v** does not lie in *S*.

The resulting vectors are a basis for the null space of the corresponding matrix *A*.