Non-negative least squares

In mathematical optimization, the problem of non-negative least squares (NNLS) is a type of constrained least squares problem where the coefficients are not allowed to become negative. That is, given a matrix A and a (column) vector of response variables y, the goal is to find[1]

[math]\displaystyle{ \operatorname{arg\,min}\limits_\mathbf{x} \|\mathbf{Ax} - \mathbf{y}\|_2^2 }[/math] subject to x ≥ 0.

Here x ≥ 0 means that each component of the vector x should be non-negative, and ‖·‖2 denotes the Euclidean norm.
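
As a quick illustration, here is a small numerical sketch using NumPy and SciPy (the matrix A and vector y are made-up values): the unconstrained least squares solution can contain negative coefficients, while NNLS keeps every coefficient non-negative.

    import numpy as np
    from scipy.optimize import nnls

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
    y = np.array([1.0, -2.0, 3.0])

    x_ls, *_ = np.linalg.lstsq(A, y, rcond=None)  # unconstrained solution
    x_nnls, rnorm = nnls(A, y)                    # non-negative solution

    print("least squares:", x_ls)    # second coefficient is negative here
    print("NNLS:", x_nnls)           # all entries >= 0
    print("residual norm:", rnorm)   # ||Ax - y||_2 at the NNLS solution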

Non-negative least squares problems turn up as subproblems in matrix decomposition, e.g. in algorithms for PARAFAC[2] and non-negative matrix/tensor factorization.[3][4] The latter can be considered a generalization of NNLS.[1]

Another generalization of NNLS is bounded-variable least squares (BVLS), with simultaneous upper and lower bounds α_i ≤ x_i ≤ β_i.[5]:291[6]
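
For box constraints of this kind, SciPy's lsq_linear routine (whose 'bvls' method follows the Stark–Parker algorithm) can be used; a minimal sketch with made-up bounds:

    import numpy as np
    from scipy.optimize import lsq_linear

    A = np.array([[1.0, 2.0],
                  [3.0, 4.0],
                  [5.0, 6.0]])
    y = np.array([1.0, -2.0, 3.0])

    # alpha_i <= x_i <= beta_i, here alpha = (0, -1) and beta = (2, 0.5)
    res = lsq_linear(A, y, bounds=([0.0, -1.0], [2.0, 0.5]), method='bvls')
    print(res.x)  # minimizer of ||Ax - y||_2 within the box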

Quadratic programming version

The NNLS problem is equivalent to a quadratic programming problem

[math]\displaystyle{ \operatorname{arg\,min}\limits_\mathbf{x \ge 0} \left(\frac{1}{2} \mathbf{x}^\mathsf{T} \mathbf{Q}\mathbf{x} + \mathbf{c}^\mathsf{T} \mathbf{x}\right), }[/math]

where Q = A^T A and c = −A^T y. This problem is convex, as Q is positive semidefinite and the non-negativity constraints form a convex feasible set.[7]
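
This correspondence can be checked numerically: with Q = A^T A and c = −A^T y, the quadratic objective equals ½‖Ax − y‖² minus the constant ½‖y‖², and neither the ½ scaling nor the constant shift changes the minimizer over x ≥ 0. A small sketch with made-up data:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))
    y = rng.standard_normal(5)
    x = rng.random(3)  # an arbitrary feasible (non-negative) test point

    Q = A.T @ A
    c = -A.T @ y

    qp_obj = 0.5 * x @ Q @ x + c @ x
    ls_obj = 0.5 * np.linalg.norm(A @ x - y) ** 2

    # The objectives differ only by 0.5 * ||y||^2, a constant in x,
    # so their minimizers over x >= 0 coincide.
    assert np.isclose(ls_obj, qp_obj + 0.5 * np.linalg.norm(y) ** 2)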

Algorithms

The first widely used algorithm for solving this problem is an active-set method published by Lawson and Hanson in their 1974 book Solving Least Squares Problems.[5]:291 In pseudocode, this algorithm looks as follows:[1][2]

  • Inputs:
    • a real-valued matrix A of dimension m × n,
    • a real-valued vector y of dimension m,
    • a real value ε, the tolerance for the stopping criterion.
  • Initialize:
    • Set P = ∅.
    • Set R = {1, ..., n}.
    • Set x to an all-zero vector of dimension n.
    • Set w = A^T(y − Ax).
    • Let w^R denote the sub-vector of w with indexes from R.
  • Main loop: while R ≠ ∅ and max(w^R) > ε:
    • Let j in R be the index of max(w^R) in w.
    • Add j to P.
    • Remove j from R.
    • Let A^P be A restricted to the variables included in P.
    • Let s be a vector of the same length as x. Let s^P denote the sub-vector with indexes from P, and let s^R denote the sub-vector with indexes from R.
    • Set s^P = ((A^P)^T A^P)^−1 (A^P)^T y.
    • Set s^R to zero.
    • While min(s^P) ≤ 0:
      • Let α = min(x_i / (x_i − s_i)) for i in P where s_i ≤ 0.
      • Set x to x + α(s − x).
      • Move to R all indices j in P such that x_j ≤ 0.
      • Set s^P = ((A^P)^T A^P)^−1 (A^P)^T y.
      • Set s^R to zero.
    • Set x to s.
    • Set w to A^T(y − Ax).
  • Output: x
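
Below is a direct Python transcription of this pseudocode, kept as an expository sketch rather than a production solver (two safeguards not in the pseudocode are added: an iteration cap and a small tolerance when demoting variables; in practice one would call a library routine such as scipy.optimize.nnls instead).

    import numpy as np

    def lawson_hanson_nnls(A, y, eps=1e-10):
        n = A.shape[1]
        P, R = [], list(range(n))   # P: free variables; R: variables held at 0
        x = np.zeros(n)
        w = A.T @ (y - A @ x)       # w = A^T(y - Ax)
        for _ in range(3 * n):      # iteration cap, a safeguard beyond the pseudocode
            if not R or max(w[R]) <= eps:
                break               # stopping criterion of the main loop
            j = max(R, key=lambda i: w[i])   # index of max(w^R) in w
            P.append(j)
            R.remove(j)
            s = np.zeros(n)
            s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]
            # Inner loop: step back toward x and demote variables that hit zero.
            while P and min(s[P]) <= 0:
                alpha = min(x[i] / (x[i] - s[i]) for i in P if s[i] <= 0)
                x = x + alpha * (s - x)
                for i in [k for k in P if x[k] <= eps]:  # eps guards round-off
                    P.remove(i)
                    R.append(i)
                s = np.zeros(n)
                if P:
                    s[P] = np.linalg.lstsq(A[:, P], y, rcond=None)[0]
            x = s
            w = A.T @ (y - A @ x)
        return x

    # Example with made-up data; agrees with scipy.optimize.nnls(A, y)[0].
    A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    y = np.array([1.0, -2.0, 3.0])
    print(lawson_hanson_nnls(A, y))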

This algorithm takes a finite number of steps to reach a solution and smoothly improves its candidate solution as it goes (so it can find good approximate solutions when cut off at a reasonable number of iterations), but is very slow in practice, owing largely to the computation of the pseudoinverse ((A^P)^T A^P)^−1.[1] Variants of this algorithm are available in MATLAB as the routine lsqnonneg[8][1] and in SciPy as optimize.nnls.[9]

Many improved algorithms have been suggested since 1974.[1] Fast NNLS (FNNLS) is an optimized version of the Lawson–Hanson algorithm.[2] Other algorithms include variants of Landweber's gradient descent method[10] and coordinate-wise optimization based on the quadratic programming problem above.[7]
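
As an illustration of the coordinate-wise idea, here is a minimal projected coordinate-descent sketch on the quadratic program above (each update is the exact single-coordinate minimizer projected onto x_j ≥ 0; a fixed number of sweeps stands in for a proper convergence test, and the data are made up):

    import numpy as np

    def nnls_coordinate_descent(A, y, sweeps=200):
        Q = A.T @ A        # as in the quadratic programming form
        c = -A.T @ y
        x = np.zeros(A.shape[1])
        for _ in range(sweeps):
            for j in range(len(x)):
                # Exact minimizer of the objective in coordinate j,
                # projected onto the constraint x_j >= 0.
                x[j] = max(0.0, x[j] - (Q[j] @ x + c[j]) / Q[j, j])
        return x

    A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
    y = np.array([1.0, -2.0, 3.0])
    print(nnls_coordinate_descent(A, y))  # approaches the NNLS solution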

References

  1. Chen, Donghui; Plemmons, Robert J. (2009). "Nonnegativity constraints in numerical analysis". Symposium on the Birth of Numerical Analysis.
  2. Bro, Rasmus; De Jong, Sijmen (1997). "A fast non-negativity-constrained least squares algorithm". Journal of Chemometrics 11 (5): 393. doi:10.1002/(SICI)1099-128X(199709/10)11:5<393::AID-CEM483>3.0.CO;2-L.
  3. Lin, Chih-Jen (2007). "Projected Gradient Methods for Nonnegative Matrix Factorization". Neural Computation 19 (10): 2756–2779. doi:10.1162/neco.2007.19.10.2756. PMID 17716011. http://www.csie.ntu.edu.tw/~cjlin/papers/pgradnmf.pdf.
  4. Boutsidis, Christos; Drineas, Petros (2009). "Random projections for the nonnegative least-squares problem". Linear Algebra and Its Applications 431 (5–7): 760–771. doi:10.1016/j.laa.2009.03.026.
  5. Lawson, Charles L.; Hanson, Richard J. (1995). "23. Linear Least Squares with Linear Inequality Constraints". Solving Least Squares Problems. SIAM. p. 161. doi:10.1137/1.9781611971217.ch23. ISBN 978-0-89871-356-5. https://epubs.siam.org/doi/abs/10.1137/1.9781611971217.ch23.
  6. Stark, Philip B.; Parker, Robert L. (1995). "Bounded-variable least-squares: an algorithm and applications". Computational Statistics 10: 129. http://digitalassets.lib.berkeley.edu/sdtr/ucb/text/394.pdf.
  7. Franc, Vojtěch; Hlaváč, Václav; Navara, Mirko (2005). "Sequential Coordinate-Wise Algorithm for the Non-negative Least Squares Problem". Computer Analysis of Images and Patterns. Lecture Notes in Computer Science. 3691. pp. 407–414. doi:10.1007/11556121_50. ISBN 978-3-540-28969-2.
  8. "lsqnonneg". https://www.mathworks.com/help/matlab/ref/lsqnonneg.html.
  9. "scipy.optimize.nnls". http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.nnls.html.
  10. Johansson, B. R.; Elfving, T.; Kozlov, V.; Censor, Y.; Forssén, P. E.; Granlund, G. S. (2006). "The application of an oblique-projected Landweber method to a model of supervised learning". Mathematical and Computer Modelling 43 (7–8): 892. doi:10.1016/j.mcm.2005.12.010.