Product (mathematics)

The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product usually depends on the order of the factors. Matrix multiplication, for example, is non-commutative, as is multiplication in many other algebras.

There are many different kinds of products in mathematics: besides being able to multiply just numbers, polynomials or matrices, one can also define products on many different algebraic structures.

Integers allow positive and negative numbers. Their product is determined by the product of their absolute values, combined with the sign derived from the following rule:

(-a) \cdot (-b) = a \cdot b, \qquad (-a) \cdot b = a \cdot (-b) = -(a \cdot b).

(This rule is a necessary consequence of demanding distributivity of multiplication over addition, and is not an additional rule.)
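For instance, the rule for two negative factors follows from distributivity alone (a standard derivation):

0 = (-a) \cdot 0 = (-a) \cdot \big(b + (-b)\big) = (-a) \cdot b + (-a) \cdot (-b),

so (-a) · (-b) must be the additive inverse of (-a) · b = -(a · b), that is, (-a) · (-b) = a · b.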

Two fractions can be multiplied by multiplying their numerators and denominators:

\frac{a}{b} \cdot \frac{c}{d} = \frac{a \cdot c}{b \cdot d}.
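As a concrete example (numbers chosen arbitrarily):

\frac{2}{3} \cdot \frac{4}{5} = \frac{2 \cdot 4}{3 \cdot 5} = \frac{8}{15}.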

The rigorous definition of the product of two real numbers is a byproduct of the construction of the real numbers. This construction implies that, for every real number a, there is a set A of rational numbers such that a is the least upper bound of the elements of A:

a = \sup_{x \in A} x.
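A sketch of how the product can then be defined, assuming a and b are positive and the sets A and B consist of positive rationals (negative factors are handled by the rule of signs above):

a \cdot b = \sup \{\, x \cdot y : x \in A,\ y \in B \,\}.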

The geometric meaning of multiplying two complex numbers is that the magnitudes are multiplied and the arguments are added.
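For example, writing each factor in polar form,

r_1 e^{i\varphi_1} \cdot r_2 e^{i\varphi_2} = r_1 r_2 \, e^{i(\varphi_1 + \varphi_2)},

so the product has magnitude r_1 r_2 and argument \varphi_1 + \varphi_2.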

The product of a sequence consisting of only one number is just that number itself; the product of no factors at all is known as the empty product, and is equal to 1.
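In product notation this reads

\prod_{i=1}^{n} a_i = a_1 \cdot a_2 \cdots a_n,

with the convention that the case n = 0, with no factors at all, equals 1.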

Two functions from the reals to themselves can be multiplied in another way, called the convolution.
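A minimal sketch of the definition, assuming the integral below exists (for instance for integrable f and g):

(f * g)(t) = \int_{-\infty}^{\infty} f(\tau)\, g(t - \tau)\, d\tau.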

For example, the convolution of the square wave with itself gives the triangular function.

Under the Fourier transform, convolution becomes point-wise function multiplication.
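In symbols, under suitable integrability assumptions and up to the normalization chosen for the transform,

\mathcal{F}(f * g) = \mathcal{F}(f) \cdot \mathcal{F}(g).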

There are many different kinds of products in linear algebra. Some of these have confusingly similar names (outer product, exterior product) with very different meanings, while others have very different names (outer product, tensor product, Kronecker product) and yet convey essentially the same idea. A brief overview of these is given in the following sections.

The scalar product also allows one to define an angle θ between two nonzero vectors a and b:

\cos\theta = \frac{a \cdot b}{\|a\|\,\|b\|}.

The cross product of two vectors in three dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors.
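In coordinates, with θ the angle between a and b, the standard formulas are

a \times b = (a_2 b_3 - a_3 b_2,\ a_3 b_1 - a_1 b_3,\ a_1 b_2 - a_2 b_1), \qquad \|a \times b\| = \|a\|\,\|b\| \sin\theta.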

A linear mapping can be defined as a function f between two vector spaces V and W with underlying field F, satisfying[3]

f(t_1 x_1 + t_2 x_2) = t_1\, f(x_1) + t_2\, f(x_2) \qquad \text{for all } t_1, t_2 \in F \text{ and } x_1, x_2 \in V.

If one considers only finite-dimensional vector spaces, then

f(v) = f\left(v_i\, b_V^i\right) = v_i\, f\left(b_V^i\right) = v_i\, f_{ij}\, b_W^j,

in which b_V and b_W denote the bases of V and W, v_i denotes the component of v on b_V^i, f_{ij} denotes the components of f with respect to these bases, and the Einstein summation convention is applied.

Now we consider the composition of two linear mappings between finite-dimensional vector spaces. Let the linear mapping f map V to W, and let the linear mapping g map W to U. Then one can get

(g \circ f)(v) = g\left(f(v)\right) = g\left(v_i\, f_{ij}\, b_W^j\right) = v_i\, f_{ij}\, g_{jk}\, b_U^k,

in which b_U denotes the basis of U and g_{jk} denotes the components of g, so that g(b_W^j) = g_{jk} b_U^k. The i-th row, j-th column element of the matrix F, denoted by F_ij, is f_ji, and G_ij = g_ji; in coordinates, the composition g ∘ f is therefore computed by the matrix product G · F.

The composition of more than two linear mappings can similarly be represented by a chain of matrix multiplications.

In other words: the matrix product is the description in coordinates of the composition of linear functions.
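As an illustrative check (the matrices below are arbitrary examples, not taken from the text), applying f and then g to a coordinate vector v gives G(Fv) = (GF)v; for instance

F = \begin{pmatrix} 1 & 2 \\ 0 & 1 \end{pmatrix}, \qquad G = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \qquad GF = \begin{pmatrix} 0 & 1 \\ 1 & 2 \end{pmatrix}.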

Given two finite-dimensional vector spaces V and W, their tensor product can be defined as a (2,0)-tensor satisfying:
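One common formulation (a sketch, for finite-dimensional spaces): for v ∈ V and w ∈ W, the elementary tensor v ⊗ w acts on pairs of linear functionals (m, n) ∈ V* × W* by

(v \otimes w)(m, n) = m(v)\, n(w),

and V ⊗ W is spanned by such elementary tensors.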

The tensor product, outer product and Kronecker product all convey the same general idea. The differences between these are that the Kronecker product is just a tensor product of matrices, with respect to a previously-fixed basis, whereas the tensor product is usually given in its intrinsic definition. The outer product is simply the Kronecker product, limited to vectors (instead of matrices).
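For instance (an illustrative example), the outer product of column vectors u ∈ R^m and v ∈ R^n is the m × n matrix

u\, v^{\mathsf{T}}, \qquad (u\, v^{\mathsf{T}})_{ij} = u_i v_j,

which is the same array as the Kronecker product of u with the row vector v^T.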

In general, whenever one has two mathematical objects that can be combined in a way that behaves like a linear-algebra tensor product, the operation can be understood most generally as the internal product of a monoidal category. That is, the monoidal category captures precisely the meaning of a tensor product: it axiomatizes why tensor products behave the way they do. More precisely, a monoidal category is the class of all things (of a given type) that have a tensor product.

In set theory, a Cartesian product is a mathematical operation which returns a set (or product set) from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b)—where a ∈ A and b ∈ B.[5]
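For example, if A = {1, 2} and B = {x, y}, then

A \times B = \{(1, x),\ (1, y),\ (2, x),\ (2, y)\}.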

The class of all things (of a given type) that have Cartesian products is called a Cartesian category. Many of these are Cartesian closed categories. Sets are an example of such objects.

The empty product on numbers and most algebraic structures has the value of 1 (the identity element of multiplication), just like the empty sum has the value of 0 (the identity element of addition). However, the concept of the empty product is more general, and requires special treatment in logic, set theory, computer programming and category theory.

A few of the above products are examples of the general notion of an internal product in a monoidal category; the rest are describable by the general notion of a product in category theory.

All of the previous examples are special cases or examples of the general notion of a product. For the general treatment of the concept of a product, see product (category theory), which describes how to combine two objects of some kind to create an object, possibly of a different kind. But also, in category theory, one has: