Commutation matrix

In mathematics, especially in linear algebra and matrix theory, the commutation matrix is used for transforming the vectorized form of a matrix into the vectorized form of its transpose. Specifically, the commutation matrix K^(m,n) is the nm × mn matrix which, for any m × n matrix A, transforms vec(A) into vec(A^T):

[math]\displaystyle{ \mathbf{K}^{(m,n)} \operatorname{vec}(\mathbf{A}) = \operatorname{vec}\left(\mathbf{A}^{\mathrm{T}}\right). }[/math]

Here vec(A) is the mn × 1 column vector obtained by stacking the columns of A on top of one another:

[math]\displaystyle{ \operatorname{vec}(\mathbf{A}) = [\mathbf{A}_{1,1}, \ldots, \mathbf{A}_{m,1}, \mathbf{A}_{1,2}, \ldots, \mathbf{A}_{m,2}, \ldots, \mathbf{A}_{1,n}, \ldots, \mathbf{A}_{m,n}]^{\mathrm{T}} }[/math]

where A = [A_{i,j}]. In other words, vec(A) is the vector obtained by vectorizing A in column-major order. Similarly, vec(A^T) is the vector obtained by vectorizing A in row-major order.
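
For instance, with NumPy the two vectorization orders are just the two ravel conventions. The following is a minimal sketch using the 3 × 2 matrix from the Example section below:

import numpy as np

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])

print(A.ravel(order="F"))  # vec(A), column-major: [1 2 3 4 5 6]
print(A.ravel(order="C"))  # vec(A^T), row-major:  [1 4 2 5 3 6]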

In the context of quantum information theory, the commutation matrix is sometimes referred to as the swap matrix or swap operator.[1]

Properties

  • The commutation matrix is a special type of permutation matrix, and is therefore orthogonal. In particular, K(m,n) is equal to [math]\displaystyle{ \mathbf P_\pi }[/math], where [math]\displaystyle{ \pi }[/math] is the permutation over [math]\displaystyle{ \{1,\dots,mn\} }[/math] for which
[math]\displaystyle{ \pi(i + m(j-1)) = j + n(i-1), \quad i = 1,\dots,m, \quad j = 1,\dots,n. }[/math]
  • Replacing A with A^T in the definition of the commutation matrix shows that K^(m,n) = (K^(n,m))^T. Therefore, in the special case m = n, the commutation matrix is an involution and symmetric.
  • The main use of the commutation matrix, and the source of its name, is to commute the Kronecker product: for every m × n matrix A and every r × q matrix B,
[math]\displaystyle{ \mathbf{K}^{(r, m)} (\mathbf{A} \otimes \mathbf{B}) \mathbf{K}^{(n, q)} = \mathbf{B} \otimes \mathbf{A}. }[/math]
This property is often used in developing the higher order statistics of Wishart covariance matrices.[2] A numerical check of this and the following identities is sketched after this list.
  • Setting n = q = 1 in the above equation shows that, for any column vectors v and w of sizes m and r respectively,
[math]\displaystyle{ \mathbf{K}^{(r,m)}(\mathbf v \otimes \mathbf w) = \mathbf w \otimes \mathbf v. }[/math]
This property is the reason that this matrix is referred to as the "swap operator" in the context of quantum information theory.
  • Two explicit forms for the commutation matrix are as follows: if e_{r,j} denotes the j-th canonical vector of dimension r (i.e. the vector with 1 in the j-th coordinate and 0 elsewhere), then
[math]\displaystyle{ \mathbf{K}^{(r, m)} = \sum_{i=1}^r \sum_{j=1}^m \left(\mathbf{e}_{r,i} {\mathbf{e}_{m,j}}^{\mathrm{T}}\right) \otimes \left(\mathbf{e}_{m,j} {\mathbf{e}_{r,i}}^{\mathrm{T}}\right) = \sum_{i=1}^r \sum_{j=1}^m \left(\mathbf{e}_{r,i} \otimes \mathbf{e}_{m,j}\right) \left( \mathbf{e}_{m,j} \otimes \mathbf{e}_{r,i}\right)^{\mathrm{T}} . }[/math]
  • The commutation matrix may be expressed as the following block matrix:
[math]\displaystyle{ \mathbf{K}^{(m,n)} = \begin{bmatrix} \mathbf{K}_{1,1} & \cdots & \mathbf{K}_{1,n}\\ \vdots & \ddots & \vdots\\ \mathbf{K}_{m,1} & \cdots & \mathbf{K}_{m,n} \end{bmatrix}, }[/math]
where the (p, q) entry of the n × m block matrix K_{i,j} is given by
[math]\displaystyle{ \mathbf K_{ij}(p,q) = \begin{cases} 1 & i=q \text{ and } j = p,\\ 0 & \text{otherwise}. \end{cases} }[/math]
For example,
[math]\displaystyle{ \mathbf K^{(3,4)} = \left[\begin{array}{ccc|ccc|ccc|ccc} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\ \hline 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\ \hline 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1\end{array}\right]. }[/math]
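
These identities are easy to check numerically. The following is a minimal Python sketch (using NumPy, reimplementing the comm_mat function given in the Code section below, with arbitrary illustrative dimensions) that verifies the Kronecker-commuting identity, its vector special case, and the explicit canonical-vector form:

import numpy as np

def comm_mat(m, n):
    # K^(m,n): maps vec(A) to vec(A^T) for any m-by-n matrix A
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n)[w, :]

def e(dim, k):
    # canonical basis vector of the given dimension (0-indexed here)
    v = np.zeros(dim)
    v[k] = 1.0
    return v

rng = np.random.default_rng(0)
m, n, r, q = 2, 3, 4, 5            # arbitrary illustrative sizes
A = rng.standard_normal((m, n))
B = rng.standard_normal((r, q))

# K^(r,m) (A kron B) K^(n,q) = B kron A
assert np.allclose(comm_mat(r, m) @ np.kron(A, B) @ comm_mat(n, q),
                   np.kron(B, A))

# K^(r,m) (v kron w) = w kron v for v of size m, w of size r
v, w = rng.standard_normal(m), rng.standard_normal(r)
assert np.allclose(comm_mat(r, m) @ np.kron(v, w), np.kron(w, v))

# explicit canonical-vector form of K^(r,m)
K_sum = sum(np.kron(np.outer(e(r, i), e(m, j)), np.outer(e(m, j), e(r, i)))
            for i in range(r) for j in range(m))
assert np.allclose(K_sum, comm_mat(r, m))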

Code

For any matrix of m rows and n columns, square or rectangular, the commutation matrix can be generated by the code below.

Python

import numpy as np


def comm_mat(m, n):
    # determine permutation applied by K
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")

    # apply this permutation to the rows (i.e. to each column) of the identity matrix and return the result
    return np.eye(m * n)[w, :]
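
For example, comm_mat(3, 2) returns the 6 × 6 permutation matrix K^(3,2) shown explicitly in the Example section below.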

Alternatively, a version without imports:

# Kronecker delta
def delta(i, j):
    return int(i == j)


def comm_mat(m, n):
    # determine permutation applied by K
    v = [m * j + i for i in range(m) for j in range(n)]

    # apply this permutation to the rows (i.e. to each column) of the identity matrix
    I = [[delta(i, j) for j in range(m * n)] for i in range(m * n)]
    return [I[i] for i in v]

MATLAB

function P = com_mat(m, n)

% determine permutation applied by K
A = reshape(1:m*n, m, n);
v = reshape(A', 1, []);

% apply this permutation to the rows (i.e. to each column) of the identity matrix
P = eye(m*n);
P = P(v,:);

R

# Sparse matrix version
comm_mat = function(m, n){
  i = 1:(m * n)
  j = NULL
  for (k in 1:m) {
    j = c(j, m * 0:(n-1) + k)
  }
  Matrix::sparseMatrix(
    i = i, j = j, x = 1
  )
}

Example

Let [math]\displaystyle{ A }[/math] denote the following [math]\displaystyle{ 3 \times 2 }[/math] matrix:

[math]\displaystyle{ A = \begin{bmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \\ \end{bmatrix}. }[/math]

[math]\displaystyle{ A }[/math] has the following column-major and row-major vectorizations (respectively):

[math]\displaystyle{ \mathbf v_{\text{col}} = \operatorname{vec}(A) = \begin{bmatrix} 1 \\ 2 \\ 3 \\ 4 \\ 5 \\ 6 \\ \end{bmatrix} , \quad \mathbf v_{\text{row}} = \operatorname{vec}(A^{\mathrm T}) = \begin{bmatrix} 1 \\ 4 \\ 2 \\ 5 \\ 3 \\ 6 \\ \end{bmatrix}. }[/math]

The associated commutation matrix is

[math]\displaystyle{ K = \mathbf K^{(3,2)} = \begin{bmatrix} 1 & \cdot & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & 1 & \cdot & \cdot \\ \cdot & 1 & \cdot & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & 1 & \cdot \\ \cdot & \cdot & 1 & \cdot & \cdot & \cdot \\ \cdot & \cdot & \cdot & \cdot & \cdot & 1 \\ \end{bmatrix}, }[/math]

where each [math]\displaystyle{ \cdot }[/math] denotes a zero. As expected, the following identities hold:

[math]\displaystyle{ K^\mathrm{T} K = KK^\mathrm{T} =\mathbf I_6 }[/math]
[math]\displaystyle{ K \mathbf v_{\text{col}} = \mathbf v_{\text{row}} }[/math]
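
This example can be reproduced with the Python comm_mat function from the Code section above; a minimal sketch:

import numpy as np

def comm_mat(m, n):
    # K^(m,n): maps vec(A) to vec(A^T) for any m-by-n matrix A
    w = np.arange(m * n).reshape((m, n), order="F").T.ravel(order="F")
    return np.eye(m * n)[w, :]

A = np.array([[1, 4],
              [2, 5],
              [3, 6]])
K = comm_mat(3, 2)

v_col = A.ravel(order="F")   # vec(A)   = [1 2 3 4 5 6]
v_row = A.ravel(order="C")   # vec(A^T) = [1 4 2 5 3 6]

print(np.array_equal(K @ v_col, v_row))   # True
print(np.allclose(K.T @ K, np.eye(6)))    # True: K is orthogonal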

References

  1. Watrous, John (2018). The Theory of Quantum Information. Cambridge University Press. p. 94.
  2. von Rosen, Dietrich (1988). "Moments for the Inverted Wishart Distribution". Scand. J. Stat. 15: 97–109. 
  • Magnus, Jan R.; Neudecker, Heinz (1988). Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley.