Multilinear map

From HandWiki
Short description: Vector-valued function of multiple vectors, linear in each argument


In linear algebra, a multilinear map is a function of several variables that is linear separately in each variable. More precisely, a multilinear map is a function

[math]\displaystyle{ f\colon V_1 \times \cdots \times V_n \to W\text{,} }[/math]

where [math]\displaystyle{ V_1,\ldots,V_n }[/math] ([math]\displaystyle{ n\in\mathbb Z_{\ge0} }[/math]) and [math]\displaystyle{ W }[/math] are vector spaces (or modules over a commutative ring), with the following property: for each [math]\displaystyle{ i }[/math], if all of the variables but [math]\displaystyle{ v_i }[/math] are held constant, then [math]\displaystyle{ f(v_1, \ldots, v_i, \ldots, v_n) }[/math] is a linear function of [math]\displaystyle{ v_i }[/math].[1] One way to visualize this is with the cross product of two vectors in [math]\displaystyle{ \mathbb{R}^3 }[/math]: if one of the vectors is scaled by a factor of 2 while the other is held fixed, the cross product is likewise scaled by a factor of 2; if both are scaled by a factor of 2, the cross product is scaled by a factor of [math]\displaystyle{ 2^2 }[/math].
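The scaling behavior described above can be checked numerically; the following is a minimal sketch (not part of the article) using the cross product on [math]\displaystyle{ \mathbb{R}^3 }[/math] as the bilinear map:

```python
# Sketch: the cross product is bilinear, so scaling one argument scales the
# result, and scaling both arguments scales it by the product of the factors.
def cross(u, v):
    """Cross product of two 3-vectors given as tuples."""
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def scale(c, u):
    return tuple(c * x for x in u)

u, v = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
w = cross(u, v)                                         # (0.0, 0.0, 1.0)
assert cross(scale(2, u), v) == scale(2, w)             # linear in the first slot
assert cross(u, scale(2, v)) == scale(2, w)             # linear in the second slot
assert cross(scale(2, u), scale(2, v)) == scale(4, w)   # 2^2 overall
```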

A multilinear map of one variable is a linear map, and of two variables is a bilinear map. More generally, for any nonnegative integer [math]\displaystyle{ k }[/math], a multilinear map of k variables is called a k-linear map. If the codomain of a multilinear map is the field of scalars, it is called a multilinear form. Multilinear maps and multilinear forms are fundamental objects of study in multilinear algebra.

If all variables belong to the same space, one can consider symmetric, antisymmetric and alternating k-linear maps. If the underlying ring (or field) has characteristic different from two, the latter two coincide; in characteristic two, the former two coincide instead.

Examples

  • Any bilinear map is a multilinear map. For example, any inner product on an [math]\displaystyle{ \mathbb R }[/math]-vector space is a multilinear map, as is the cross product of vectors in [math]\displaystyle{ \mathbb{R}^3 }[/math].
  • The determinant of a matrix is an alternating multilinear function of the columns (or rows) of a square matrix.
  • If [math]\displaystyle{ F\colon \mathbb{R}^m \to \mathbb{R}^n }[/math] is a Ck function, then the [math]\displaystyle{ k }[/math]th derivative of [math]\displaystyle{ F }[/math] at each point [math]\displaystyle{ p }[/math] in its domain can be viewed as a symmetric [math]\displaystyle{ k }[/math]-linear function [math]\displaystyle{ D^k\!F\colon \mathbb{R}^m\times\cdots\times\mathbb{R}^m \to \mathbb{R}^n }[/math].

Coordinate representation

Let

[math]\displaystyle{ f\colon V_1 \times \cdots \times V_n \to W\text{,} }[/math]

be a multilinear map between finite-dimensional vector spaces, where [math]\displaystyle{ V_i\! }[/math] has dimension [math]\displaystyle{ d_i\! }[/math], and [math]\displaystyle{ W\! }[/math] has dimension [math]\displaystyle{ d\! }[/math]. If we choose a basis [math]\displaystyle{ \{\textbf{e}_{i1},\ldots,\textbf{e}_{id_i}\} }[/math] for each [math]\displaystyle{ V_i\! }[/math] and a basis [math]\displaystyle{ \{\textbf{b}_1,\ldots,\textbf{b}_d\} }[/math] for [math]\displaystyle{ W\! }[/math] (using bold for vectors), then we can define a collection of scalars [math]\displaystyle{ A_{j_1\cdots j_n}^k }[/math] by

[math]\displaystyle{ f(\textbf{e}_{1j_1},\ldots,\textbf{e}_{nj_n}) = A_{j_1\cdots j_n}^1\,\textbf{b}_1 + \cdots + A_{j_1\cdots j_n}^d\,\textbf{b}_d. }[/math]

Then the scalars [math]\displaystyle{ \{A_{j_1\cdots j_n}^k \mid 1\leq j_i\leq d_i, 1 \leq k \leq d\} }[/math] completely determine the multilinear function [math]\displaystyle{ f\! }[/math]. In particular, if

[math]\displaystyle{ \textbf{v}_i = \sum_{j=1}^{d_i} v_{ij} \textbf{e}_{ij}\! }[/math]

for [math]\displaystyle{ 1 \leq i \leq n\! }[/math], then

[math]\displaystyle{ f(\textbf{v}_1,\ldots,\textbf{v}_n) = \sum_{j_1=1}^{d_1} \cdots \sum_{j_n=1}^{d_n} \sum_{k=1}^{d} A_{j_1\cdots j_n}^k v_{1j_1}\cdots v_{nj_n} \textbf{b}_k. }[/math]
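The evaluation formula above translates directly into code. The following sketch (with an assumed dictionary encoding of the coefficients [math]\displaystyle{ A_{j_1\cdots j_n}^k }[/math], 0-indexed for convenience) evaluates a multilinear map from its values on basis tuples:

```python
from itertools import product

def evaluate(A, vectors, d):
    """Evaluate a multilinear map from its coefficient scalars.

    A: dict mapping (k, j1, ..., jn) -> the k-th coordinate of
       f(e_{1 j1}, ..., e_{n jn}) in the basis b_1, ..., b_d (all 0-indexed).
    vectors: list of coordinate tuples v_i = (v_{i1}, ..., v_{i d_i}).
    d: dimension of the codomain W.
    Returns the coordinates of f(v_1, ..., v_n)."""
    dims = [range(len(v)) for v in vectors]
    out = [0.0] * d
    for js in product(*dims):            # all multi-indices (j1, ..., jn)
        coeff = 1.0
        for i, j in enumerate(js):
            coeff *= vectors[i][j]       # v_{1 j1} * ... * v_{n jn}
        for k in range(d):
            out[k] += A.get((k,) + js, 0.0) * coeff
    return out

# Illustrative usage: the dot product on R^2 as a bilinear map into W = R (d = 1);
# its only nonzero coefficients are on matching basis pairs.
A = {(0, 0, 0): 1.0, (0, 1, 1): 1.0}
print(evaluate(A, [(1.0, 2.0), (3.0, 4.0)], d=1))   # [11.0]
```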

Example

Consider a trilinear function

[math]\displaystyle{ g\colon \mathbb{R}^2 \times \mathbb{R}^2 \times \mathbb{R}^2 \to \mathbb{R}, }[/math]

where [math]\displaystyle{ V_i = \mathbb{R}^2 }[/math] and [math]\displaystyle{ d_i = 2 }[/math] for [math]\displaystyle{ i = 1,2,3 }[/math], and [math]\displaystyle{ W = \mathbb{R} }[/math] with [math]\displaystyle{ d = 1 }[/math].

A basis for each Vi is [math]\displaystyle{ \{\textbf{e}_{i1},\ldots,\textbf{e}_{id_i}\} = \{\textbf{e}_{1}, \textbf{e}_{2}\} = \{(1,0), (0,1)\}. }[/math] Let

[math]\displaystyle{ g(\textbf{e}_{1i},\textbf{e}_{2j},\textbf{e}_{3k}) = g(\textbf{e}_{i},\textbf{e}_{j},\textbf{e}_{k}) = A_{ijk}, }[/math]

where [math]\displaystyle{ i,j,k \in \{1,2\} }[/math]. In other words, the constant [math]\displaystyle{ A_{i j k} }[/math] is a function value at one of the eight possible triples of basis vectors (since there are two choices for each of the three [math]\displaystyle{ V_i }[/math]), namely:

[math]\displaystyle{ (\textbf{e}_1, \textbf{e}_1, \textbf{e}_1), (\textbf{e}_1, \textbf{e}_1, \textbf{e}_2), (\textbf{e}_1, \textbf{e}_2, \textbf{e}_1), (\textbf{e}_1, \textbf{e}_2, \textbf{e}_2), (\textbf{e}_2, \textbf{e}_1, \textbf{e}_1), (\textbf{e}_2, \textbf{e}_1, \textbf{e}_2), (\textbf{e}_2, \textbf{e}_2, \textbf{e}_1), (\textbf{e}_2, \textbf{e}_2, \textbf{e}_2). }[/math]

Each vector [math]\displaystyle{ \textbf{v}_i \in V_i = \mathbb{R}^2 }[/math] can be expressed as a linear combination of the basis vectors

[math]\displaystyle{ \textbf{v}_i = \sum_{j=1}^{2} v_{ij} \textbf{e}_{ij} = v_{i1} \textbf{e}_1 + v_{i2} \textbf{e}_2 = v_{i1} (1, 0) + v_{i2} (0, 1). }[/math]

The function value at an arbitrary collection of three vectors [math]\displaystyle{ \textbf{v}_i \in \mathbb{R}^2 }[/math] can be expressed as

[math]\displaystyle{ g(\textbf{v}_1,\textbf{v}_2, \textbf{v}_3) = \sum_{i=1}^{2} \sum_{j=1}^{2} \sum_{k=1}^{2} A_{i j k} v_{1i} v_{2j} v_{3k}, }[/math]

or in expanded form as

[math]\displaystyle{ \begin{align} g((a,b),(c,d)&, (e,f)) = ace \, g(\textbf{e}_1, \textbf{e}_1, \textbf{e}_1) + acf \, g(\textbf{e}_1, \textbf{e}_1, \textbf{e}_2) \\ &+ ade \, g(\textbf{e}_1, \textbf{e}_2, \textbf{e}_1) + adf \, g(\textbf{e}_1, \textbf{e}_2, \textbf{e}_2) + bce \, g(\textbf{e}_2, \textbf{e}_1, \textbf{e}_1) + bcf \, g(\textbf{e}_2, \textbf{e}_1, \textbf{e}_2) \\ &+ bde \, g(\textbf{e}_2, \textbf{e}_2, \textbf{e}_1) + bdf \, g(\textbf{e}_2, \textbf{e}_2, \textbf{e}_2). \end{align} }[/math]
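The 2×2×2 example above can be checked numerically. In this sketch the eight values [math]\displaystyle{ A_{ijk} }[/math] are chosen arbitrarily for illustration, and the triple sum is compared against the expanded form:

```python
from itertools import product

# Arbitrary illustrative coefficients A[i][j][k] = g(e_{i+1}, e_{j+1}, e_{k+1}).
A = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]

def g(u, v, w):
    """Trilinear map R^2 x R^2 x R^2 -> R determined by the coefficients A."""
    return sum(A[i][j][k] * u[i] * v[j] * w[k]
               for i, j, k in product(range(2), repeat=3))

u, v, w = (1, 2), (3, 4), (5, 6)

# Linearity in the first slot:
assert g((2*u[0], 2*u[1]), v, w) == 2 * g(u, v, w)

# The expanded form with u = (a, b), v = (c, d), w = (e, f):
(a, b), (c, d), (e, f) = u, v, w
expanded = (a*c*e*A[0][0][0] + a*c*f*A[0][0][1] + a*d*e*A[0][1][0] + a*d*f*A[0][1][1]
            + b*c*e*A[1][0][0] + b*c*f*A[1][0][1] + b*d*e*A[1][1][0] + b*d*f*A[1][1][1])
assert g(u, v, w) == expanded
```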

Relation to tensor products

There is a natural one-to-one correspondence between multilinear maps

[math]\displaystyle{ f\colon V_1 \times \cdots \times V_n \to W\text{,} }[/math]

and linear maps

[math]\displaystyle{ F\colon V_1 \otimes \cdots \otimes V_n \to W\text{,} }[/math]

where [math]\displaystyle{ V_1 \otimes \cdots \otimes V_n\! }[/math] denotes the tensor product of [math]\displaystyle{ V_1,\ldots,V_n }[/math]. The relation between the functions [math]\displaystyle{ f }[/math] and [math]\displaystyle{ F }[/math] is given by the formula

[math]\displaystyle{ f(v_1,\ldots,v_n)=F(v_1\otimes \cdots \otimes v_n). }[/math]
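In coordinates, the correspondence can be made concrete by identifying [math]\displaystyle{ V_1 \otimes V_2 }[/math] with [math]\displaystyle{ \mathbb{R}^{d_1 d_2} }[/math] via the Kronecker product of coordinate vectors. The following sketch (with an illustrative bilinear form chosen for the demonstration, not from the article) checks the relation [math]\displaystyle{ f(v_1, v_2) = F(v_1 \otimes v_2) }[/math]:

```python
from itertools import product

def kron(u, v):
    """Coordinates of u (x) v in the basis e_{1 j1} (x) e_{2 j2}, row-major."""
    return [x * y for x, y in product(u, v)]

# Illustrative bilinear form on R^2 x R^2: f(u, v) = u1*v1 - u2*v2.
def f(u, v):
    return u[0]*v[0] - u[1]*v[1]

# The corresponding linear map F on R^4, given by its values on basis tensors
# e_i (x) e_j in row-major order: F(e1(x)e1) = 1, F(e2(x)e2) = -1, mixed -> 0.
F = [1, 0, 0, -1]

def F_of(t):
    return sum(c * x for c, x in zip(F, t))

u, v = (2, 3), (5, 7)
assert f(u, v) == F_of(kron(u, v))   # f(v1, v2) = F(v1 (x) v2)
```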

Multilinear functions on n×n matrices

One can consider multilinear functions on an n×n matrix over a commutative ring K with identity as functions of the rows (or equivalently the columns) of the matrix. Let A be such a matrix and ai, 1 ≤ i ≤ n, be its rows. Then the multilinear function D can be written as

[math]\displaystyle{ D(A) = D(a_{1},\ldots,a_{n}), }[/math]

satisfying

[math]\displaystyle{ D(a_{1},\ldots,c a_{i} + a_{i}',\ldots,a_{n}) = c D(a_{1},\ldots,a_{i},\ldots,a_{n}) + D(a_{1},\ldots,a_{i}',\ldots,a_{n}). }[/math]

If we let [math]\displaystyle{ \hat{e}_j }[/math] represent the jth row of the identity matrix, we can express each row ai as the sum

[math]\displaystyle{ a_{i} = \sum_{j=1}^n A(i,j)\hat{e}_{j}. }[/math]

Using the multilinearity of D we rewrite D(A) as

[math]\displaystyle{ D(A) = D\left(\sum_{j=1}^n A(1,j)\hat{e}_{j}, a_2, \ldots, a_n\right) = \sum_{j=1}^n A(1,j) D(\hat{e}_{j},a_2,\ldots,a_n). }[/math]

Continuing this substitution for each ai, 1 ≤ i ≤ n, we get

[math]\displaystyle{ D(A) = \sum_{1\le k_1 \le n} \ldots \sum_{1\le k_i \le n} \ldots \sum_{1\le k_n \le n} A(1,k_{1})A(2,k_{2})\dots A(n,k_{n}) D(\hat{e}_{k_{1}},\dots,\hat{e}_{k_{n}}). }[/math]

Therefore, D(A) is uniquely determined by how D operates on [math]\displaystyle{ \hat{e}_{k_{1}},\dots,\hat{e}_{k_{n}} }[/math].
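The expansion above can be sketched in code: given D's values on tuples of standard basis rows, D(A) is recovered by the multi-sum. The dictionary encoding and 0-indexing below are assumptions made for the illustration:

```python
from itertools import product

def D_from_basis(values, A):
    """Evaluate a multilinear function D of the rows of A from its basis values.

    values: dict mapping a tuple (k1, ..., kn) of 0-indexed column choices to
            D(e_{k1}, ..., e_{kn}); absent tuples are taken as 0.
    A: an n x n matrix given as a list of rows."""
    n = len(A)
    total = 0
    for ks in product(range(n), repeat=n):
        term = values.get(ks, 0)
        for i, k in enumerate(ks):
            term *= A[i][k]          # A(1,k1) * A(2,k2) * ... * A(n,kn)
        total += term
    return total

# Alternating 2x2 case with D(I) = 1: only the two mixed tuples survive,
# and D_from_basis reproduces the determinant.
det_values = {(0, 1): 1, (1, 0): -1}
assert D_from_basis(det_values, [[1, 2], [3, 4]]) == -2   # 1*4 - 2*3
```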

Example

In the case of 2×2 matrices, we get

[math]\displaystyle{ D(A) = A_{1,1}A_{2,1}D(\hat{e}_1,\hat{e}_1) + A_{1,1}A_{2,2}D(\hat{e}_1,\hat{e}_2) + A_{1,2}A_{2,1}D(\hat{e}_2,\hat{e}_1) + A_{1,2}A_{2,2}D(\hat{e}_2,\hat{e}_2), \, }[/math]

where [math]\displaystyle{ \hat{e}_1 = [1,0] }[/math] and [math]\displaystyle{ \hat{e}_2 = [0,1] }[/math]. If we restrict [math]\displaystyle{ D }[/math] to be an alternating function, then [math]\displaystyle{ D(\hat{e}_1,\hat{e}_1) = D(\hat{e}_2,\hat{e}_2) = 0 }[/math] and [math]\displaystyle{ D(\hat{e}_2,\hat{e}_1) = -D(\hat{e}_1,\hat{e}_2) = -D(I) }[/math]. Letting [math]\displaystyle{ D(I) = 1 }[/math], we get the determinant function on 2×2 matrices:

[math]\displaystyle{ D(A) = A_{1,1}A_{2,2} - A_{1,2}A_{2,1} . }[/math]

Properties

  • A multilinear map has a value of zero whenever one of its arguments is zero, since it is linear in that argument and every linear map sends zero to zero.

References

  1. Lang, Serge (2005) [2002]. "XIII. Matrices and Linear Maps §S Determinants". Algebra. Graduate Texts in Mathematics. 211 (3rd ed.). Springer. pp. 511–. ISBN 978-0-387-95385-4. https://books.google.com/books?id=Fge-BwqhqIYC&pg=PA511.