Product (mathematics)

In mathematics, a product is the result of multiplication, or an expression that identifies objects (numbers or variables) to be multiplied, called factors. For example, 21 is the product of 3 and 7 (the result of multiplication), and [math]\displaystyle{ x\cdot (2+x) }[/math] is the product of [math]\displaystyle{ x }[/math] and [math]\displaystyle{ (2+x) }[/math] (indicating that the two factors should be multiplied together). When one factor is an integer, the product is called a multiple.
The order in which real or complex numbers are multiplied has no bearing on the product; this is known as the commutative law of multiplication. When matrices or members of various other associative algebras are multiplied, the product usually depends on the order of the factors. Matrix multiplication, for example, is noncommutative, and the same is true of multiplication in many other algebras.
There are many different kinds of products in mathematics: besides being able to multiply just numbers, polynomials or matrices, one can also define products on many different algebraic structures.
Product of two numbers
Product of a sequence
The product operator for the product of a sequence is denoted by the capital Greek letter pi Π (in analogy to the use of the capital Sigma Σ as summation symbol).^{[1]} For example, the expression [math]\displaystyle{ \textstyle \prod_{i=1}^{6}i^2 }[/math] is another way of writing [math]\displaystyle{ 1 \cdot 4 \cdot 9 \cdot 16 \cdot 25 \cdot 36 }[/math].^{[2]}
The product of a sequence consisting of only one number is just that number itself; the product of no factors at all is known as the empty product, and is equal to 1.
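Both conventions above can be checked directly; as a sketch, Python's `math.prod` evaluates a product of a sequence and returns 1 for the empty product:

```python
from math import prod

# Product notation: the product of i^2 for i = 1..6, i.e. 1*4*9*16*25*36
values = [i**2 for i in range(1, 7)]
print(prod(values))  # 518400

# The empty product is 1, the identity element of multiplication.
print(prod([]))  # 1
```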
Commutative rings
Commutative rings have a product operation.
Residue classes of integers
Residue classes in the rings [math]\displaystyle{ \Z/N\Z }[/math] can be added:
 [math]\displaystyle{ (a + N\Z) + (b + N\Z) = a + b + N\Z }[/math]
and multiplied:
 [math]\displaystyle{ (a + N\Z) \cdot (b + N\Z) = a \cdot b + N\Z }[/math]
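As a sketch of the definitions above: to add or multiply residue classes, one operates on any representatives and reduces modulo N, and the result is independent of the representatives chosen.

```python
N = 7

# Residue class arithmetic in Z/NZ: operate on representatives, reduce mod N.
a, b = 10, 12
print((a + b) % N)  # 1, since 22 = 3*7 + 1
print((a * b) % N)  # 1, since 120 = 17*7 + 1

# Well-definedness: other representatives of the same classes agree.
assert (a * b) % N == ((a + 7 * N) * (b - 3 * N)) % N
```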
Convolution
Two functions from the reals to the reals can be multiplied in another way, called the convolution.
If
 [math]\displaystyle{ \int\limits_{-\infty}^\infty |f(t)|\,\mathrm{d}t \lt \infty\qquad\mbox{and}\qquad \int\limits_{-\infty}^\infty |g(t)|\,\mathrm{d}t \lt \infty, }[/math]
then the integral
 [math]\displaystyle{ (f*g) (t) \;:= \int\limits_{-\infty}^\infty f(\tau)\cdot g(t - \tau)\,\mathrm{d}\tau }[/math]
is well defined and is called the convolution.
Under the Fourier transform, convolution becomes pointwise function multiplication.
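A discrete analogue of the convolution, with sums in place of integrals, makes the definition concrete (a sketch; note that the result coincides with multiplying the coefficient lists of two polynomials):

```python
def convolve(f, g):
    """Discrete convolution: (f*g)[t] = sum over tau of f[tau] * g[t - tau]."""
    out = [0] * (len(f) + len(g) - 1)
    for t in range(len(out)):
        for tau in range(len(f)):
            if 0 <= t - tau < len(g):
                out[t] += f[tau] * g[t - tau]
    return out

print(convolve([1, 2, 3], [4, 5, 6]))  # [4, 13, 28, 27, 18]
```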
Polynomial rings
The product of two polynomials is given by the following:
 [math]\displaystyle{ \left(\sum_{i=0}^n a_i X^i\right) \cdot \left(\sum_{j=0}^m b_j X^j\right) = \sum_{k=0}^{n+m} c_k X^k }[/math]
with
 [math]\displaystyle{ c_k = \sum_{i+j=k} a_i \cdot b_j }[/math]
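The coefficient formula above translates directly into code; as a sketch, representing a polynomial by its list of coefficients:

```python
def poly_mul(a, b):
    """Multiply coefficient lists: c_k = sum over i+j=k of a_i * b_j."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# (1 + 2X) * (3 + X + X^2) = 3 + 7X + 3X^2 + 2X^3
print(poly_mul([1, 2], [3, 1, 1]))  # [3, 7, 3, 2]
```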
Products in linear algebra
There are many different kinds of products in linear algebra. Some of these have confusingly similar names (outer product, exterior product) with very different meanings, while others have very different names (outer product, tensor product, Kronecker product) and yet convey essentially the same idea. A brief overview of these is given in the following sections.
Scalar multiplication
By the very definition of a vector space, one can form the product of any scalar with any vector, giving a map [math]\displaystyle{ \R \times V \rightarrow V }[/math].
Scalar product
A scalar product is a symmetric bilinear map:
 [math]\displaystyle{ \cdot : V \times V \rightarrow \R }[/math]
that is positive definite, meaning that [math]\displaystyle{ v \cdot v \gt 0 }[/math] for all [math]\displaystyle{ 0 \not= v \in V }[/math].
From the scalar product, one can define a norm by letting [math]\displaystyle{ \|v\| := \sqrt{v \cdot v} }[/math].
The scalar product also allows one to define an angle between two vectors:
 [math]\displaystyle{ \cos\angle(v, w) = \frac{v \cdot w}{\|v\| \cdot \|w\|} }[/math]
In [math]\displaystyle{ n }[/math]-dimensional Euclidean space, the standard scalar product (called the dot product) is given by:
 [math]\displaystyle{ \left(\sum_{i=1}^n \alpha_i e_i\right) \cdot \left(\sum_{i=1}^n \beta_i e_i\right) = \sum_{i=1}^n \alpha_i\,\beta_i }[/math]
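As a sketch, the dot product, the induced norm, and the angle formula can all be computed from components:

```python
import math

def dot(v, w):
    """Standard scalar product: sum of componentwise products."""
    return sum(vi * wi for vi, wi in zip(v, w))

def norm(v):
    """Norm induced by the scalar product: sqrt(v . v)."""
    return math.sqrt(dot(v, v))

v, w = [1.0, 0.0], [1.0, 1.0]
cos_angle = dot(v, w) / (norm(v) * norm(w))
print(math.degrees(math.acos(cos_angle)))  # approximately 45 degrees
```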
Cross product in 3-dimensional space
The cross product of two vectors in 3 dimensions is a vector perpendicular to the two factors, with length equal to the area of the parallelogram spanned by the two factors.
The cross product can also be expressed as the formal^{[a]} determinant:
 [math]\displaystyle{ \mathbf{u \times v} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ u_1 & u_2 & u_3 \\ v_1 & v_2 & v_3 \\ \end{vmatrix} }[/math]
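Expanding the formal determinant along its first row gives the componentwise formula; a sketch:

```python
def cross(u, v):
    """Cross product via cofactor expansion of the formal determinant."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

u, v = (1, 0, 0), (0, 1, 0)
w = cross(u, v)
print(w)  # (0, 0, 1)

# Perpendicular to both factors:
assert sum(ui * wi for ui, wi in zip(u, w)) == 0
assert sum(vi * wi for vi, wi in zip(v, w)) == 0
```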
Composition of linear mappings
A linear mapping can be defined as a function f between two vector spaces V and W with underlying field F, satisfying^{[3]}
 [math]\displaystyle{ f(t_1 x_1 + t_2 x_2) = t_1 f(x_1) + t_2 f(x_2), \forall x_1, x_2 \in V, \forall t_1, t_2 \in \mathbb{F}. }[/math]
If one considers only finite-dimensional vector spaces, then
 [math]\displaystyle{ f(\mathbf{v}) = f\left(v_i \mathbf{b_V}^i\right) = v_i f\left(\mathbf{b_V}^i\right) = {f^i}_j v_i \mathbf{b_W}^j, }[/math]
in which b_{V} and b_{W} denote the bases of V and W, and v_{i} denotes the component of v on b_{V}^{i}, and Einstein summation convention is applied.
Now we consider the composition of two linear mappings between finite dimensional vector spaces. Let the linear mapping f map V to W, and let the linear mapping g map W to U. Then one can get
 [math]\displaystyle{ g \circ f(\mathbf{v}) = g\left({f^i}_j v_i \mathbf{b_W}^j\right) = {g^j}_k {f^i}_j v_i \mathbf{b_U}^k. }[/math]
Or in matrix form:
 [math]\displaystyle{ g \circ f(\mathbf{v}) = \mathbf{G} \mathbf{F} \mathbf{v}, }[/math]
in which the element in row i and column j of F, denoted by F_{ij}, is f^{j}_{i}, and G_{ij} = g^{j}_{i}.
The composition of more than two linear mappings can be similarly represented by a chain of matrix multiplication.
Product of two matrices
Given two matrices
 [math]\displaystyle{ A = (a_{i,j})_{i=1\ldots s;j=1\ldots r} \in \R^{s\times r} }[/math] and [math]\displaystyle{ B = (b_{j,k})_{j=1\ldots r;k=1\ldots t}\in \R^{r\times t} }[/math]
their product is given by
 [math]\displaystyle{ A \cdot B = \left( \sum_{j=1}^r a_{i,j} \cdot b_{j,k} \right)_{i=1\ldots s;k=1\ldots t} \;\in\R^{s\times t} }[/math]
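The summation formula above can be sketched directly, taking a matrix as a list of rows:

```python
def matmul(A, B):
    """Entry (i, k) of the product is the sum over j of A[i][j] * B[j][k]."""
    s, r, t = len(A), len(B), len(B[0])
    return [[sum(A[i][j] * B[j][k] for j in range(r)) for k in range(t)]
            for i in range(s)]

A = [[1, 2, 3],
     [4, 5, 6]]        # 2x3
B = [[7, 8],
     [9, 10],
     [11, 12]]         # 3x2
print(matmul(A, B))    # [[58, 64], [139, 154]]
```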
Composition of linear functions as matrix product
There is a relationship between the composition of linear functions and the product of two matrices. To see this, let r = dim(U), s = dim(V) and t = dim(W) be the (finite) dimensions of vector spaces U, V and W. Let [math]\displaystyle{ \mathcal U = \{u_1, \ldots, u_r\} }[/math] be a basis of U, [math]\displaystyle{ \mathcal V = \{v_1, \ldots, v_s\} }[/math] be a basis of V and [math]\displaystyle{ \mathcal W = \{w_1, \ldots, w_t\} }[/math] be a basis of W. In terms of these bases, let [math]\displaystyle{ A = M^{\mathcal U}_{\mathcal V}(f) \in \R^{s\times r} }[/math] be the matrix representing f : U → V and [math]\displaystyle{ B = M^{\mathcal V}_{\mathcal W}(g) \in \R^{t\times s} }[/math] be the matrix representing g : V → W. Then
 [math]\displaystyle{ B\cdot A = M^{\mathcal U}_{\mathcal W} (g \circ f) \in \R^{t\times r} }[/math]
is the matrix representing [math]\displaystyle{ g \circ f : U \rightarrow W }[/math].
In other words: the matrix product is the description in coordinates of the composition of linear functions.
Tensor product of vector spaces
Given two finite-dimensional vector spaces V and W, the tensor product of them can be defined as a (2,0)-tensor satisfying:
 [math]\displaystyle{ V \otimes W(v, w) = V(v)\, W(w), \forall v \in V^*, \forall w \in W^*, }[/math]
where V^{*} and W^{*} denote the dual spaces of V and W.^{[4]}
For infinite-dimensional vector spaces, one also has the:
 Tensor product of Hilbert spaces
 Topological tensor product
The tensor product, outer product and Kronecker product all convey the same general idea. The differences between these are that the Kronecker product is just a tensor product of matrices, with respect to a previously fixed basis, whereas the tensor product is usually given in its intrinsic definition. The outer product is simply the Kronecker product, limited to vectors (instead of matrices).
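As a sketch, the outer product of two coordinate vectors (the vector case of the Kronecker product) is the matrix whose (i, j) entry is the product of the i-th and j-th components:

```python
def outer(v, w):
    """Outer product: entry (i, j) of the result is v[i] * w[j]."""
    return [[vi * wj for wj in w] for vi in v]

print(outer([1, 2], [3, 4, 5]))  # [[3, 4, 5], [6, 8, 10]]
```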
The class of all objects with a tensor product
In general, whenever one has two mathematical objects that can be combined in a way that behaves like a linear algebra tensor product, then this can be most generally understood as the internal product of a monoidal category. That is, the monoidal category captures precisely the meaning of a tensor product; it captures exactly the notion of why it is that tensor products behave the way they do. More precisely, a monoidal category is the class of all things (of a given type) that have a tensor product.
Other products in linear algebra
Other kinds of products in linear algebra include:
 Hadamard product
 Kronecker product
 The product of tensors, including the wedge (exterior) product, the interior product, the outer product and the tensor product
Cartesian product
In set theory, a Cartesian product is a mathematical operation which returns a set (or product set) from multiple sets. That is, for sets A and B, the Cartesian product A × B is the set of all ordered pairs (a, b) where a ∈ A and b ∈ B.^{[5]}
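The definition maps directly onto, for example, Python's `itertools.product`, which enumerates all ordered pairs:

```python
from itertools import product

A = [1, 2]
B = ['x', 'y']
# All ordered pairs (a, b) with a in A and b in B
print(list(product(A, B)))  # [(1, 'x'), (1, 'y'), (2, 'x'), (2, 'y')]
```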
The class of all things (of a given type) that have Cartesian products is called a Cartesian category. Many of these are Cartesian closed categories. Sets are an example of such objects.
Empty product
The empty product on numbers and most algebraic structures has the value of 1 (the identity element of multiplication), just like the empty sum has the value of 0 (the identity element of addition). However, the concept of the empty product is more general, and requires special treatment in logic, set theory, computer programming and category theory.
Products over other algebraic structures
Products over other kinds of algebraic structures include:
 the Cartesian product of sets
 the direct product of groups, and also the semidirect product, knit product and wreath product
 the free product of groups
 the product of rings
 the product of ideals
 the product of topological spaces^{[1]}
 the Wick product of random variables
 the cap, cup, Massey and slant product in algebraic topology
 the smash product and wedge sum (sometimes called the wedge product) in homotopy
A few of the above products are examples of the general notion of an internal product in a monoidal category; the rest are describable by the general notion of a product in category theory.
Products in category theory
All of the previous examples are special cases or examples of the general notion of a product. For the general treatment of the concept of a product, see product (category theory), which describes how to combine two objects of some kind to create an object, possibly of a different kind. But also, in category theory, one has:
 the fiber product or pullback,
 the product category, a category that is the product of categories.
 the ultraproduct, in model theory.
 the internal product of a monoidal category, which captures the essence of a tensor product.
Other products
 A function's product integral, a continuous analogue of the product of a sequence (or the multiplicative version of the normal/standard/additive integral). The product integral is also known as the "continuous product" or "multiplical".
 Complex multiplication, a theory of elliptic curves.
See also
 Indefinite product
 Infinite product
 Iterated binary operation – Repeated application of an operation to a sequence
 Multiplication – Arithmetical operation
Notes
 ↑ Here, "formal" means that this notation has the form of a determinant, but does not strictly adhere to the definition; it is a mnemonic used to remember the expansion of the cross product.
References
 ↑ ^{1.0} ^{1.1} Weisstein, Eric W. "Product". https://mathworld.wolfram.com/Product.html.
 ↑ "Summation and Product Notation". https://math.illinoisstate.edu/day/courses/old/305/contentsummationnotation.html.
 ↑ Clarke, Francis (2013). Functional analysis, calculus of variations and optimal control. Dordrecht: Springer. pp. 9–10. ISBN 9781447148203.
 ↑ Boothby, William M. (1986). An introduction to differentiable manifolds and Riemannian geometry (2nd ed.). Orlando: Academic Press. p. 200. ISBN 0080874398. https://archive.org/details/introductiontodi0000boot.
 ↑ Moschovakis, Yiannis (2006). Notes on set theory (2nd ed.). New York: Springer. p. 13. ISBN 0387316094.
Bibliography
 Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 9783519022244. OCLC 8210342.
Original source: https://en.wikipedia.org/wiki/Product (mathematics).