Low-rank approximation

In mathematics, low-rank approximation is a minimization problem, in which the cost function measures the fit between a given matrix (the data) and an approximating matrix (the optimization variable), subject to a constraint that the approximating matrix has reduced rank. The problem is used for mathematical modeling and data compression. The rank constraint is related to a constraint on the complexity of a model that fits the data. In applications, often there are other constraints on the approximating matrix apart from the rank constraint, e.g., non-negativity and Hankel structure.

Low-rank approximation is closely related to numerous other techniques, including principal component analysis, factor analysis, total least squares, latent semantic analysis, orthogonal regression, and dynamic mode decomposition.

Definition

Given

  • structure specification [math]\displaystyle{ \mathcal{S} : \mathbb{R}^{n_p} \to \mathbb{R}^{m\times n} }[/math],
  • vector of structure parameters [math]\displaystyle{ p\in\mathbb{R}^{n_p} }[/math],
  • norm [math]\displaystyle{ \| \cdot \| }[/math], and
  • desired rank [math]\displaystyle{ r }[/math],
[math]\displaystyle{ \text{minimize} \quad \text{over } \widehat p \quad \|p - \widehat p\| \quad\text{subject to}\quad \operatorname{rank}\big(\mathcal{S}(\widehat p)\big) \leq r. }[/math]

Applications

Basic low-rank approximation problem

The unstructured problem with fit measured by the Frobenius norm, i.e.,

[math]\displaystyle{ \text{minimize} \quad \text{over } \widehat D \quad \|D - \widehat D\|_{\text{F}} \quad\text{subject to}\quad \operatorname{rank}\big(\widehat D\big) \leq r }[/math]

has an analytic solution in terms of the singular value decomposition of the data matrix. The result is referred to as the matrix approximation lemma or Eckart–Young–Mirsky theorem. This problem was originally solved by Erhard Schmidt[1] in the infinite dimensional context of integral operators (although his methods easily generalize to arbitrary compact operators on Hilbert spaces) and later rediscovered by C. Eckart and G. Young.[2] L. Mirsky generalized the result to arbitrary unitarily invariant norms.[3] Let

[math]\displaystyle{ D = U\Sigma V^{\top} \in \mathbb{R}^{m\times n}, \quad m \leq n }[/math]

be the singular value decomposition of [math]\displaystyle{ D }[/math], where [math]\displaystyle{ \Sigma=:\operatorname{diag}(\sigma_1,\ldots,\sigma_m) }[/math] is the [math]\displaystyle{ m\times n }[/math] rectangular diagonal matrix with the singular values [math]\displaystyle{ \sigma_1\geq\ldots\geq\sigma_m }[/math]. For a given [math]\displaystyle{ r\in\{1,\dots,m-1\} }[/math], partition [math]\displaystyle{ U }[/math], [math]\displaystyle{ \Sigma }[/math], and [math]\displaystyle{ V }[/math] as follows:

[math]\displaystyle{ U =: \begin{bmatrix} U_1 & U_2\end{bmatrix}, \quad \Sigma =: \begin{bmatrix} \Sigma_1 & 0 \\ 0 & \Sigma_2 \end{bmatrix}, \quad\text{and}\quad V =: \begin{bmatrix} V_1 & V_2 \end{bmatrix}, }[/math]

where [math]\displaystyle{ U_1 }[/math] is [math]\displaystyle{ m\times r }[/math], [math]\displaystyle{ \Sigma_1 }[/math] is [math]\displaystyle{ r\times r }[/math], and [math]\displaystyle{ V_1 }[/math] is [math]\displaystyle{ n\times r }[/math]. Then the rank-[math]\displaystyle{ r }[/math] matrix, obtained from the truncated singular value decomposition

[math]\displaystyle{ \widehat D^* = U_1 \Sigma_1 V_1^{\top}, }[/math]

is such that

[math]\displaystyle{ \|D-\widehat D^*\|_{\text{F}} = \min_{\operatorname{rank}(\widehat D) \leq r} \|D-\widehat D\|_{\text{F}} = \sqrt{\sigma^2_{r+1} + \cdots + \sigma^2_m}. }[/math]

The minimizer [math]\displaystyle{ \widehat D^* }[/math] is unique if and only if [math]\displaystyle{ \sigma_{r+1}\neq\sigma_{r} }[/math].
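
The result can be checked numerically with a few lines of Matlab/Octave. The sketch below (the dimensions and the rank are arbitrary illustrative choices) computes the truncated singular value decomposition and verifies that the Frobenius-norm error equals the square root of the sum of the discarded squared singular values:

% Illustrative check of the Eckart-Young-Mirsky theorem (Frobenius norm).
m = 6; n = 8; r = 2;
D = randn(m, n);                            % random data matrix
[U, S, V] = svd(D);                         % full singular value decomposition
Dh = U(:, 1:r) * S(1:r, 1:r) * V(:, 1:r)';  % truncated SVD, rank at most r
s = diag(S);                                % singular values sigma_1 >= ... >= sigma_m
norm(D - Dh, 'fro')                         % approximation error
sqrt(sum(s(r+1:end).^2))                    % sqrt(sigma_{r+1}^2 + ... + sigma_m^2), same value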

Proof of Eckart–Young–Mirsky theorem (for spectral norm)

Let [math]\displaystyle{ A\in\mathbb{R}^{m\times n} }[/math] be a real (possibly rectangular) matrix with [math]\displaystyle{ m\leq n }[/math]. Suppose that

[math]\displaystyle{ A = U\Sigma V^\top }[/math]

is the singular value decomposition of [math]\displaystyle{ A }[/math]. Recall that [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V }[/math] are orthogonal matrices, and [math]\displaystyle{ \Sigma }[/math] is an [math]\displaystyle{ m\times n }[/math] diagonal matrix with entries [math]\displaystyle{ (\sigma_{1}, \sigma_{2}, \cdots ,\sigma_{m}) }[/math] such that [math]\displaystyle{ \sigma_1 \geq \sigma_2 \geq \cdots \geq \sigma_m\geq 0 }[/math].

We claim that the best rank-[math]\displaystyle{ k }[/math] approximation to [math]\displaystyle{ A }[/math] in the spectral norm, denoted by [math]\displaystyle{ \|\cdot\|_2 }[/math], is given by

[math]\displaystyle{ A_k := \sum_{i=1}^k \sigma_i u_i v_i^\top }[/math]

where [math]\displaystyle{ u_i }[/math] and [math]\displaystyle{ v_i }[/math] denote the [math]\displaystyle{ i }[/math]th column of [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V }[/math], respectively.

First, note that we have

[math]\displaystyle{ \|A-A_k\|_2 = \left\|\sum_{i=1}^{\color{red}{m}} \sigma_i u_i v_i^\top - \sum_{i=1}^{\color{red}{k}} \sigma_i u_i v_i^\top\right\|_2 = \left\|\sum_{i=\color{red}{k+1}}^m \sigma_i u_i v_i^\top \right\|_2 = \sigma_{k+1}. }[/math]

Therefore, we need to show that if [math]\displaystyle{ B_k = XY^\top }[/math] where [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] have [math]\displaystyle{ k }[/math] columns then [math]\displaystyle{ \|A-A_k\|_2=\sigma_{k+1}\leq \|A-B_k\|_2 }[/math].

Since [math]\displaystyle{ Y }[/math] has [math]\displaystyle{ k }[/math] columns, there must be a nontrivial linear combination of the first [math]\displaystyle{ k+1 }[/math] columns of [math]\displaystyle{ V }[/math], i.e.,

[math]\displaystyle{ w = \gamma_1v_1+\cdots+\gamma_{k+1}v_{k+1}, }[/math]

such that [math]\displaystyle{ Y^\top w = 0 }[/math], and hence [math]\displaystyle{ B_k w = XY^\top w = 0 }[/math]. Without loss of generality, we can scale [math]\displaystyle{ w }[/math] so that [math]\displaystyle{ \|w\|_2 = 1 }[/math] or (equivalently) [math]\displaystyle{ \gamma_1^2+\cdots +\gamma_{k+1}^2 = 1 }[/math]. Therefore,

[math]\displaystyle{ \|A-B_k\|_2^2 \geq \|(A-B_k)w\|_2^2 = \|Aw\|_2^2 = \gamma_1^2\sigma_1^2+\cdots+\gamma_{k+1}^2\sigma_{k+1}^2\geq \sigma_{k+1}^2. }[/math]

The result follows by taking the square root of both sides of the above inequality.
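
As a numerical illustration of the claim [math]\displaystyle{ \|A-A_k\|_2 = \sigma_{k+1} }[/math], the following Matlab/Octave sketch (with arbitrary illustrative dimensions) compares the spectral norm of the truncation error with the [math]\displaystyle{ (k+1) }[/math]-st singular value:

% Illustrative check of ||A - A_k||_2 = sigma_{k+1}.
m = 5; n = 7; k = 2;
A = randn(m, n);
[U, S, V] = svd(A);
Ak = U(:, 1:k) * S(1:k, 1:k) * V(:, 1:k)';  % best rank-k approximation
norm(A - Ak, 2)                             % spectral norm of the error
S(k+1, k+1)                                 % sigma_{k+1}, same value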

Proof of Eckart–Young–Mirsky theorem (for Frobenius norm)

Let [math]\displaystyle{ A\in\mathbb{R}^{m\times n} }[/math] be a real (possibly rectangular) matrix with [math]\displaystyle{ m\leq n }[/math]. Suppose that

[math]\displaystyle{ A = U\Sigma V^\top }[/math]

is the singular value decomposition of [math]\displaystyle{ A }[/math].

We claim that the best rank [math]\displaystyle{ k }[/math] approximation to [math]\displaystyle{ A }[/math] in the Frobenius norm, denoted by [math]\displaystyle{ \|\cdot\|_F }[/math], is given by

[math]\displaystyle{ A_k = \sum_{i=1}^k \sigma_i u_i v_i^\top }[/math]

where [math]\displaystyle{ u_i }[/math] and [math]\displaystyle{ v_i }[/math] denote the [math]\displaystyle{ i }[/math]th column of [math]\displaystyle{ U }[/math] and [math]\displaystyle{ V }[/math], respectively.

First, note that we have

[math]\displaystyle{ \|A-A_k\|_F^2 = \left\|\sum_{i=k+1}^m \sigma_i u_iv_i^\top \right\|_F^2 = \sum_{i=k+1}^m \sigma_i^2. }[/math]

Therefore, we need to show that if [math]\displaystyle{ B_k = XY^\top }[/math] where [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] have [math]\displaystyle{ k }[/math] columns then

[math]\displaystyle{ \|A-A_k\|_F^2=\sum_{i=k+1}^m\sigma_{i}^2\leq \|A-B_k\|_F^2. }[/math]

By the triangle inequality for the spectral norm, if [math]\displaystyle{ A = A' + A'' }[/math] then [math]\displaystyle{ \sigma_1(A)\leq \sigma_1(A') + \sigma_1(A'') }[/math]. Suppose [math]\displaystyle{ A'_k }[/math] and [math]\displaystyle{ A''_k }[/math] respectively denote the rank-[math]\displaystyle{ k }[/math] approximations of [math]\displaystyle{ A' }[/math] and [math]\displaystyle{ A'' }[/math] by the SVD method described above. Then, for any [math]\displaystyle{ i,j\geq1 }[/math]

[math]\displaystyle{ \begin{align} \sigma_i(A')+\sigma_j(A'') &= \sigma_1(A'-A'_{i-1}) + \sigma_1(A''-A''_{j-1})\\ &\geq \sigma_1(A-A'_{i-1}-A''_{j-1})\\ &\geq \sigma_1(A-A_{i+j-2})\qquad (\text{since } {\rm rank}(A'_{i-1}+A''_{j-1})\leq {\rm rank\,}(A_{i+j-2}))\\ &=\sigma_{i+j-1}(A). \end{align} }[/math]

Since [math]\displaystyle{ \sigma_{k+1}(B_k) = 0 }[/math], when [math]\displaystyle{ A' = A-B_k }[/math] and [math]\displaystyle{ A'' = B_k }[/math] we conclude that for [math]\displaystyle{ i\geq 1,j=k+1 }[/math]

[math]\displaystyle{ \sigma_i(A-B_k) \geq \sigma_{k+i}(A). }[/math]

Therefore,

[math]\displaystyle{ \|A-B_k\|_F^2 = \sum_{i=1}^m \sigma_i(A-B_k)^2 \geq \sum_{i=k+1}^m\sigma_i(A)^2 = \|A-A_k\|_F^2, }[/math]

as required.
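
The inequality [math]\displaystyle{ \sigma_i(A-B_k) \geq \sigma_{k+i}(A) }[/math] used above can also be observed numerically; the sketch below (an illustration, not a proof) checks it for a randomly generated rank-[math]\displaystyle{ k }[/math] matrix:

% Illustrative check of sigma_i(A - B_k) >= sigma_{k+i}(A) for a random rank-k B_k.
m = 6; n = 9; k = 2;
A  = randn(m, n);
Bk = randn(m, k) * randn(k, n);          % arbitrary matrix of rank at most k
sA = svd(A); sE = svd(A - Bk);           % singular values in decreasing order
all(sE(1:m-k) >= sA(k+1:m) - 1e-12)      % returns 1 (true) up to round-off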

Weighted low-rank approximation problems

The Frobenius norm weights all elements of the approximation error [math]\displaystyle{ D - \widehat D }[/math] uniformly. Prior knowledge about the distribution of the errors can be taken into account by considering the weighted low-rank approximation problem

[math]\displaystyle{ \text{minimize} \quad \text{over } \widehat D \quad \operatorname{vec}(D - \widehat D)^{\top} W \operatorname{vec}(D - \widehat D) \quad\text{subject to}\quad \operatorname{rank}(\widehat D) \leq r, }[/math]

where [math]\displaystyle{ \text{vec}(A) }[/math] vectorizes the matrix [math]\displaystyle{ A }[/math] column wise and [math]\displaystyle{ W }[/math] is a given positive (semi)definite weight matrix.

The general weighted low-rank approximation problem does not admit an analytic solution in terms of the singular value decomposition and is solved by local optimization methods, which provide no guarantee that a globally optimal solution is found.

In the case of uncorrelated weights, the weighted low-rank approximation problem can also be formulated as follows:[4][5] for a non-negative matrix [math]\displaystyle{ W }[/math] and a matrix [math]\displaystyle{ A }[/math], minimize [math]\displaystyle{ \sum_{i,j} ( W_{i,j}( A_{i,j} - B_{i,j} ))^2 }[/math] over matrices [math]\displaystyle{ B }[/math] of rank at most [math]\displaystyle{ r }[/math].
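
The entry-wise formulation is a special case of the general weighted problem with a diagonal weight matrix. A minimal sketch of the correspondence, using the wlra_ap routine listed in the alternating projections section below:

% Entry-wise weights We correspond to the general weight matrix diag(vec(We).^2).
m = 5; n = 6; r = 2;
A  = randn(m, n);
We = rand(m, n);                          % entry-wise (uncorrelated) weights
W  = diag(We(:).^2);                      % equivalent positive semidefinite weight matrix
P0 = randn(m, r);                         % starting value for the P parameter
[B, f] = wlra_ap(A, W, P0, 1e-6, 100);    % rank-r B minimizing sum((We .* (A - B)).^2)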

Entry-wise Lp low-rank approximation problems

Let [math]\displaystyle{ \|A\|_p = \left(\sum_{i,j}|A_{i,j}|^p\right)^{1/p} }[/math]. For [math]\displaystyle{ p = 2 }[/math], the fastest known algorithm runs in [math]\displaystyle{ \operatorname{nnz}(A) + n \cdot \operatorname{poly}(k/\epsilon) }[/math] time.[6][7] One of the important ideas used here is the oblivious subspace embedding (OSE), first proposed by Sarlos.[8]

For [math]\displaystyle{ p = 1 }[/math], the entry-wise L1 norm is known to be more robust than the Frobenius norm in the presence of outliers and is appropriate for models in which Gaussian assumptions on the noise may not apply. It is natural to seek to minimize [math]\displaystyle{ \|B - A\|_1 }[/math].[9] For [math]\displaystyle{ p=0 }[/math] and [math]\displaystyle{ p \geq 1 }[/math], there are algorithms with provable guarantees.[10][11]

Distance low-rank approximation problem

Let [math]\displaystyle{ P = \{ p_1 , \ldots, p_m \} }[/math] and [math]\displaystyle{ Q = \{ q_1, \ldots, q_n \} }[/math] be two point sets in an arbitrary metric space. Let [math]\displaystyle{ A }[/math] represent the [math]\displaystyle{ m \times n }[/math] matrix with [math]\displaystyle{ A_{i,j} = \operatorname{dist}(p_i,q_j) }[/math]. Such distance matrices are commonly computed in software packages and have applications to learning image manifolds, handwriting recognition, and multi-dimensional unfolding. In an attempt to reduce their description size,[12][13] one can study low-rank approximation of such matrices.

Distributed/Streaming low-rank approximation problem

The low-rank approximation problem in the distributed and streaming setting has been considered.[14]

Image and kernel representations of the rank constraints

Using the equivalences

[math]\displaystyle{ \operatorname{rank}(\widehat D) \leq r \quad\iff\quad \text{there are } P\in\mathbb{R}^{m\times r} \text{ and } L\in\mathbb{R}^{r\times n} \text{ such that } \widehat D = PL }[/math]

and

[math]\displaystyle{ \operatorname{rank}(\widehat D) \leq r \quad\iff\quad \text{there is a full row rank } R\in\mathbb{R}^{(m - r)\times m} \text{ such that } R \widehat D = 0 }[/math]

the weighted low-rank approximation problem becomes equivalent to the parameter optimization problems

[math]\displaystyle{ \text{minimize} \quad \text{over } \widehat D, P \text{ and } L \quad \operatorname{vec}^{\top}(D - \widehat D) W \operatorname{vec}(D - \widehat D) \quad\text{subject to}\quad \widehat D = PL }[/math]

and

[math]\displaystyle{ \text{minimize} \quad \text{over } \widehat D \text{ and } R \quad \operatorname{vec}^{\top}(D - \widehat D) W \operatorname{vec}(D - \widehat D) \quad\text{subject to}\quad R \widehat D = 0 \quad\text{and}\quad RR^{\top} = I_{m-r}, }[/math]

where [math]\displaystyle{ I_{m-r} }[/math] is the identity matrix of size [math]\displaystyle{ m-r }[/math].
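
The two representations can be illustrated with a short Matlab/Octave sketch (arbitrary illustrative dimensions): a rank-[math]\displaystyle{ r }[/math] matrix is built from its image representation [math]\displaystyle{ \widehat D = PL }[/math], and a kernel representation [math]\displaystyle{ R }[/math] is recovered from the left null space of [math]\displaystyle{ \widehat D }[/math]:

% Image representation: Dh = P * L has rank at most r by construction.
m = 6; n = 8; r = 2;
P = randn(m, r); L = randn(r, n);
Dh = P * L;
rank(Dh)                   % equals r (generically)
% Kernel representation: the rows of R span the left null space of Dh, so
% R is (m - r) x m with orthonormal rows (R * R' = I) and R * Dh = 0.
R = null(Dh')';
norm(R * Dh)               % approximately zero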

Alternating projections algorithm

The image representation of the rank constraint suggests a parameter optimization method in which the cost function is minimized alternately over one of the variables ([math]\displaystyle{ P }[/math] or [math]\displaystyle{ L }[/math]) with the other one fixed. Although simultaneous minimization over both [math]\displaystyle{ P }[/math] and [math]\displaystyle{ L }[/math] is a difficult biconvex optimization problem, minimization over one of the variables alone is a linear least squares problem and can be solved globally and efficiently.

The resulting optimization algorithm (called alternating projections) is globally convergent with a linear convergence rate to a locally optimal solution of the weighted low-rank approximation problem. A starting value for the [math]\displaystyle{ P }[/math] (or [math]\displaystyle{ L }[/math]) parameter must be given. The iteration is stopped when a user-defined convergence condition is satisfied.

Matlab implementation of the alternating projections algorithm for weighted low-rank approximation:

function [dh, f] = wlra_ap(d, w, p, tol, maxiter)
[m, n] = size(d); r = size(p, 2); f = inf;
for i = 2:maxiter
    % minimization over L
    bp = kron(eye(n), p);
    vl = (bp' * w * bp) \ bp' * w * d(:);
    l  = reshape(vl, r, n);
    % minimization over P
    bl = kron(l', eye(m));
    vp = (bl' * w * bl) \ bl' * w * d(:);
    p  = reshape(vp, m, r);
    % check exit condition
    dh = p * l; dd = d - dh;
    f(i) = dd(:)' * w * dd(:);
    if abs(f(i - 1) - f(i)) < tol, break, end
end
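
A usage sketch (the dimensions, tolerance, and starting value are illustrative): with the identity weight matrix the problem reduces to the unweighted Frobenius-norm problem, so the fit returned by wlra_ap can be compared with the truncated SVD:

% Example call of wlra_ap with identity weights (unweighted case).
m = 8; n = 10; r = 2;
D  = randn(m, n);
W  = eye(m * n);                              % identity weight matrix
P0 = randn(m, r);                             % random starting value for P
[Dh, f] = wlra_ap(D, W, P0, 1e-8, 200);
[U, S, V] = svd(D);
Dsvd = U(:, 1:r) * S(1:r, 1:r) * V(:, 1:r)';  % Eckart-Young-Mirsky solution
[norm(D - Dh, 'fro'), norm(D - Dsvd, 'fro')]  % errors should nearly coincide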

Variable projections algorithm

The alternating projections algorithm exploits the fact that the low rank approximation problem, parameterized in the image form, is bilinear in the variables [math]\displaystyle{ P }[/math] and [math]\displaystyle{ L }[/math]. The bilinear nature of the problem is effectively used in an alternative approach, called variable projections.[15]

Consider again the weighted low rank approximation problem, parameterized in the image form. Minimization with respect to the [math]\displaystyle{ L }[/math] variable (a linear least squares problem) leads to the closed form expression of the approximation error as a function of [math]\displaystyle{ P }[/math]

[math]\displaystyle{ f(P) = \sqrt{\operatorname{vec}^{\top}(D)\Big( W - W (I_n \otimes P) \big( (I_n \otimes P)^{\top} W (I_n \otimes P) \big)^{-1} (I_n \otimes P)^{\top} W \Big) \operatorname{vec}(D)}. }[/math]

The original problem is therefore equivalent to the nonlinear least squares problem of minimizing [math]\displaystyle{ f(P) }[/math] with respect to [math]\displaystyle{ P }[/math]. For this purpose, standard optimization methods, e.g., the Levenberg–Marquardt algorithm, can be used.

Matlab implementation of the variable projections algorithm for weighted low-rank approximation:

function [dh, f] = wlra_varpro(d, w, p, tol, maxiter)
% set up the outer nonlinear least squares problem over the P parameter
prob = optimset(); prob.solver = 'lsqnonlin';
prob.options = optimset('MaxIter', maxiter, 'TolFun', tol);
prob.x0 = p; prob.objective = @(p) cost_fun(p, d, w);
[p, f] = lsqnonlin(prob);
% recover the eliminated L variable and form the approximation dh = P * L
[f, vl] = cost_fun(p, d, w);
dh = p * reshape(vl, size(p, 2), size(d, 2));

function [f, vl] = cost_fun(p, d, w)
% inner problem: minimization over L for fixed P (linear least squares)
bp = kron(eye(size(d, 2)), p);
vl = (bp' * w * bp) \ bp' * w * d(:);
f = d(:)' * w * (d(:) - bp * vl);
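
A usage sketch of the variable projections routine, assuming lsqnonlin is available (it is provided by the Optimization Toolbox); the interface and the illustrative dimensions match the alternating projections example above:

% Example call of wlra_varpro with identity weights.
m = 8; n = 10; r = 2;
D  = randn(m, n);
W  = eye(m * n);                          % identity weight matrix
P0 = randn(m, r);                         % random starting value for P
[Dh, f] = wlra_varpro(D, W, P0, 1e-8, 100);
norm(D - Dh, 'fro')                       % compare with the truncated SVD error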

The variable projections approach can be applied also to low rank approximation problems parameterized in the kernel form. The method is effective when the number of eliminated variables is much larger than the number of optimization variables left at the stage of the nonlinear least squares minimization. Such problems occur in system identification, parameterized in the kernel form, where the eliminated variables are the approximating trajectory and the remaining variables are the model parameters. In the context of linear time-invariant systems, the elimination step is equivalent to Kalman smoothing.

A variant: convex-restricted low-rank approximation

Usually, we want the approximating matrix not only to be of low rank, but also to satisfy other convex constraints imposed by the application. The problem of interest is then

[math]\displaystyle{ \text{minimize} \quad \text{over } \widehat p \quad \|p - \widehat p\| \quad\text{subject to}\quad \operatorname{rank}\big(\mathcal{S}(\widehat p)\big) \leq r \text{ and } g(\widehat p) \leq 0 }[/math]

This problem has many real-world applications, including recovering a good solution from an inexact (semidefinite programming) relaxation. If the additional constraint [math]\displaystyle{ g(\widehat p) \leq 0 }[/math] is linear, for example requiring all elements to be nonnegative, the problem is called structured low rank approximation.[16] The more general form is named convex-restricted low rank approximation.

The problem is challenging because of the combination of the convex and nonconvex (low-rank) constraints. Different techniques have been developed for different realizations of [math]\displaystyle{ g(\widehat p) \leq 0 }[/math]. The alternating direction method of multipliers (ADMM) can be applied to solve the nonconvex problem with a convex objective function, rank constraints and other convex constraints,[17] and is thus suitable for the problem above. Moreover, unlike general nonconvex problems, ADMM guarantees convergence to a feasible solution as long as its dual variable converges in the iterations.
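
As one possible illustration (a heuristic sketch in the spirit of the ADMM approach above, not the method of a specific reference), consider the unstructured problem of fitting a data matrix [math]\displaystyle{ D }[/math] by a matrix that is both of rank at most [math]\displaystyle{ r }[/math] and element-wise nonnegative. The splitting [math]\displaystyle{ X = Z }[/math] lets ADMM alternate a truncated-SVD projection with a nonnegativity projection; the penalty parameter, iteration count, and dimensions below are arbitrary:

% Heuristic ADMM sketch for: minimize ||D - X||_F^2
%                            subject to rank(X) <= r and X >= 0.
m = 6; n = 8; r = 2; rho = 1;
D = abs(randn(m, n));
X = zeros(m, n); Z = X; U = X;
for it = 1:200
    M = (D + (rho / 2) * (Z - U)) / (1 + rho / 2);  % unconstrained minimizer of the X-subproblem
    [Um, Sm, Vm] = svd(M);
    X = Um(:, 1:r) * Sm(1:r, 1:r) * Vm(:, 1:r)';    % project onto rank <= r (truncated SVD)
    Z = max(X + U, 0);                              % project onto the nonnegative orthant
    U = U + X - Z;                                  % dual (running residual) update
end
[norm(X - Z, 'fro'), min(X(:)), norm(D - X, 'fro')] % consensus gap, near-nonnegativity, fit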

References

  1. E. Schmidt, Zur Theorie der linearen und nichtlinearen Integralgleichungen, Math. Annalen 63 (1907), 433-476. doi:10.1007/BF01449770
  2. C. Eckart, G. Young, The approximation of one matrix by another of lower rank. Psychometrika, Volume 1, 1936, Pages 211–8. doi:10.1007/BF02288367
  3. L. Mirsky, Symmetric gauge functions and unitarily invariant norms, Q.J. Math. 11 (1960), 50-59. doi:10.1093/qmath/11.1.50
  4. Srebro, Nathan; Jaakkola, Tommi (2003). "Weighted Low-Rank Approximations". ICML'03. https://www.aaai.org/Papers/ICML/2003/ICML03-094.pdf. 
  5. Razenshteyn, Ilya; Song, Zhao; Woodruff, David P. (2016). "Weighted Low Rank Approximations with Provable Guarantees". STOC '16 Proceedings of the forty-eighth annual ACM symposium on Theory of Computing. https://dl.acm.org/citation.cfm?id=2897639. 
  6. Clarkson, Kenneth L.; Woodruff, David P. (2013). "Low Rank Approximation and Regression in Input Sparsity Time". STOC '13 Proceedings of the forty-fifth annual ACM symposium on Theory of Computing. 
  7. Nelson, Jelani; Nguyen, Huy L. (2013). "OSNAP: Faster numerical linear algebra algorithms via sparser subspace embeddings". FOCS '13. 
  8. Sarlos, Tamas (2006). "Improved approximation algorithms for large matrices via random projections". FOCS'06. 
  9. Song, Zhao; Woodruff, David P.; Zhong, Peilin (2017). "Low Rank Approximation with Entrywise L1-Norm Error". STOC '17 Proceedings of the forty-ninth annual ACM symposium on Theory of Computing. 
  10. Bringmann, Karl; Kolev, Pavel; Woodruff, David P. (2017). "Approximation Algorithms for L0-Low Rank Approximation". NIPS'17. 
  11. Chierichetti, Flavio; Gollapudi, Sreenivas; Kumar, Ravi; Lattanzi, Silvio; Panigrahy, Rina; Woodruff, David P. (2017). "Algorithms for Lp Low-Rank Approximation". ICML'17. 
  12. Bakshi, Ainesh L.; Woodruff, David P. (2018). "Sublinear Time Low-Rank Approximation of Distance Matrices". NeurIPS. 
  13. Indyk, Piotr; Vakilian, Ali; Wagner, Tal; Woodruff, David P. (2019). "Sample-Optimal Low-Rank Approximation of Distance Matrices". COLT. 
  14. Boutsidis, Christos; Woodruff, David P.; Zhong, Peilin (2016). "Optimal Principal Component Analysis in Distributed and Streaming Models". STOC. 
  15. G. Golub and V. Pereyra, Separable nonlinear least squares: the variable projection method and its applications, Institute of Physics, Inverse Problems, Volume 19, 2003, Pages 1-26.
  16. Chu, Moody T.; Funderlic, Robert E.; Plemmons, Robert J. (2003). "structured low-rank approximation". Linear Algebra and Its Applications 366: 157–172. doi:10.1016/S0024-3795(02)00505-0. 
  17. "A General System for Heuristic Solution of Convex Problems over Nonconvex Sets". https://stanford.edu/~boyd/papers/pdf/ncvx.pdf. 
