Nullspace property
In compressed sensing, the nullspace property gives necessary and sufficient conditions for the reconstruction of sparse signals using the techniques of [math]\displaystyle{ \ell_1 }[/math]-relaxation. The term "nullspace property" originates from Cohen, Dahmen, and DeVore.[1] The nullspace property is often difficult to check in practice, and the restricted isometry property is a more modern condition in the field of compressed sensing.
The technique of [math]\displaystyle{ \ell_1 }[/math]-relaxation
The non-convex [math]\displaystyle{ \ell_0 }[/math]-minimization problem,
[math]\displaystyle{ \min\limits_{x} \|x\|_0 }[/math] subject to [math]\displaystyle{ Ax = b }[/math],
is a standard problem in compressed sensing. However, [math]\displaystyle{ \ell_0 }[/math]-minimization is known to be NP-hard in general.[2] As such, the technique of [math]\displaystyle{ \ell_1 }[/math]-relaxation is sometimes employed to circumvent the difficulties of signal reconstruction using the [math]\displaystyle{ \ell_0 }[/math]-norm. In [math]\displaystyle{ \ell_1 }[/math]-relaxation, the [math]\displaystyle{ \ell_1 }[/math] problem,
[math]\displaystyle{ \min\limits_{x} \|x\|_1 }[/math] subject to [math]\displaystyle{ Ax = b }[/math],
is solved in place of the [math]\displaystyle{ \ell_0 }[/math] problem. Note that this relaxation is convex and hence amenable to the standard techniques of linear programming, a computationally desirable feature. Naturally we wish to know when [math]\displaystyle{ \ell_1 }[/math]-relaxation will give the same answer as the [math]\displaystyle{ \ell_0 }[/math] problem. The nullspace property is one way to guarantee agreement.
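For real-valued data, the [math]\displaystyle{ \ell_1 }[/math] problem can be recast as a linear program by introducing auxiliary variables that bound the magnitudes of the entries of [math]\displaystyle{ x }[/math]. The following is a minimal sketch, not part of the original article: the function name, matrix dimensions, and random seed are illustrative, and SciPy's linprog is used as the LP solver.

```python
# A minimal sketch (not from the article) of the l1-relaxation as a linear
# program, for real-valued A and b.  Auxiliary variables t_i bound |x_i|,
# and the objective minimizes sum(t).  Names here are illustrative.
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b):
    """Minimize ||x||_1 subject to Ax = b (real-valued data only)."""
    m, n = A.shape
    # Decision vector is [x; t]; the objective sums the auxiliary variables t.
    c = np.concatenate([np.zeros(n), np.ones(n)])
    # Inequalities  x - t <= 0  and  -x - t <= 0  encode |x_i| <= t_i.
    A_ub = np.block([[np.eye(n), -np.eye(n)],
                     [-np.eye(n), -np.eye(n)]])
    b_ub = np.zeros(2 * n)
    # Equality constraint Ax = b; the auxiliary variables do not appear in it.
    A_eq = np.hstack([A, np.zeros((m, n))])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b,
                  bounds=(None, None), method="highs")
    return res.x[:n]

# Small random example: with these dimensions the l1 minimizer typically
# (though not always) coincides with the 1-sparse signal x_true.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 6))
x_true = np.zeros(6)
x_true[2] = 1.5
print(np.round(l1_minimize(A, A @ x_true), 4))
```

The reformulation doubles the number of variables but remains a standard linear program, which is what makes the relaxation computationally attractive.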
Definition
An [math]\displaystyle{ m \times n }[/math] complex matrix [math]\displaystyle{ A }[/math] has the nullspace property of order [math]\displaystyle{ s }[/math] if, for all index sets [math]\displaystyle{ S }[/math] with [math]\displaystyle{ |S| = s \leq n }[/math], we have [math]\displaystyle{ \|\eta_S\|_1 \lt \|\eta_{S^C}\|_1 }[/math] for all [math]\displaystyle{ \eta \in \ker{A} \setminus \left\{0\right\} }[/math]. Here [math]\displaystyle{ \eta_S }[/math] denotes the vector that agrees with [math]\displaystyle{ \eta }[/math] on the indices in [math]\displaystyle{ S }[/math] and is zero on the complementary index set [math]\displaystyle{ S^C }[/math].
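A numerical check of this definition illustrates why the property is considered hard to verify: the inequality must hold for every nonzero [math]\displaystyle{ \eta }[/math] in the nullspace and every index set of size [math]\displaystyle{ s }[/math]. The sketch below (not from the original article; the function name is illustrative) handles only the special case of a one-dimensional nullspace, where scale-invariance reduces the check to a single basis vector and the worst index set is the one containing its [math]\displaystyle{ s }[/math] largest-magnitude entries.

```python
# A minimal sketch (not from the article) of a numerical nullspace-property
# check, restricted to the special case dim(ker A) = 1.  By scale-invariance
# it suffices to test one basis vector of the nullspace, and the worst index
# set S of size s consists of its s largest-magnitude entries.
import numpy as np

def has_nsp_of_order(A, s, tol=1e-12):
    """Check the nullspace property of order s when ker A is one-dimensional."""
    _, sing, Vh = np.linalg.svd(A)
    rank = int(np.sum(sing > tol * sing.max()))
    null_dim = A.shape[1] - rank
    if null_dim == 0:
        return True                      # trivial nullspace: the condition is vacuous
    if null_dim > 1:
        raise ValueError("this sketch only handles a one-dimensional nullspace")
    eta = Vh[-1]                         # last right singular vector spans ker A
    mags = np.sort(np.abs(eta))[::-1]    # magnitudes in decreasing order
    # ||eta_S||_1 < ||eta_{S^C}||_1 for the worst-case S of size s
    return mags[:s].sum() < mags[s:].sum()

# Example: a 5x6 random matrix generically has a one-dimensional nullspace.
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 6))
print(has_nsp_of_order(A, s=2))
```

When the nullspace has dimension greater than one, the condition must be checked over an entire subspace rather than a single vector, which is one reason the property is difficult to verify in practice.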
Recovery Condition
The following theorem gives a necessary and sufficient condition for the recovery of every [math]\displaystyle{ s }[/math]-sparse vector in [math]\displaystyle{ \mathbb{C}^n }[/math]. The proof of the theorem is a standard one, and the version supplied here is summarized from Holger Rauhut.[3]
[math]\displaystyle{ \textbf{Theorem:} }[/math] Let [math]\displaystyle{ A }[/math] be an [math]\displaystyle{ m \times n }[/math] complex matrix. Then every [math]\displaystyle{ s }[/math]-sparse signal [math]\displaystyle{ x \in \mathbb{C}^n }[/math] is the unique solution to the [math]\displaystyle{ \ell_1 }[/math]-relaxation problem with [math]\displaystyle{ b = Ax }[/math] if and only if [math]\displaystyle{ A }[/math] satisfies the nullspace property of order [math]\displaystyle{ s }[/math].
[math]\displaystyle{ \textit{Proof:} }[/math] For the forward direction, let [math]\displaystyle{ \eta \in \ker{A} \setminus \{0\} }[/math] and let [math]\displaystyle{ S }[/math] be an index set with [math]\displaystyle{ |S| = s }[/math]. Notice that [math]\displaystyle{ \eta_S }[/math] and [math]\displaystyle{ -\eta_{S^C} }[/math] are distinct vectors (otherwise both would vanish and [math]\displaystyle{ \eta }[/math] would be zero) with [math]\displaystyle{ A(-\eta_{S^C}) = A(\eta_S) }[/math] by the linearity of [math]\displaystyle{ A }[/math] and the fact that [math]\displaystyle{ A\eta = 0 }[/math]. Since [math]\displaystyle{ \eta_S }[/math] is [math]\displaystyle{ s }[/math]-sparse, by uniqueness of the [math]\displaystyle{ \ell_1 }[/math]-minimizer we must have [math]\displaystyle{ \|\eta_S\|_1 \lt \|\eta_{S^C}\|_1 }[/math] as desired. For the backward direction, let [math]\displaystyle{ x }[/math] be [math]\displaystyle{ s }[/math]-sparse and let [math]\displaystyle{ z }[/math] be another (not necessarily [math]\displaystyle{ s }[/math]-sparse) vector such that [math]\displaystyle{ z \neq x }[/math] and [math]\displaystyle{ Az = Ax }[/math]. Define the (non-zero) vector [math]\displaystyle{ \eta = x - z }[/math] and notice that it lies in the nullspace of [math]\displaystyle{ A }[/math]. Let [math]\displaystyle{ S }[/math] be an index set of size [math]\displaystyle{ s }[/math] containing the support of [math]\displaystyle{ x }[/math]; then the result follows from an elementary application of the triangle inequality: [math]\displaystyle{ \|x\|_1 \leq \|x - z_S\|_1 + \|z_S\|_1 = \|\eta_S\|_1+\|z_S\|_1 \lt \|\eta_{S^C}\|_1+\|z_S\|_1 = \|-z_{S^C}\|_1+\|z_S\|_1 = \|z\|_1 }[/math], establishing the minimality of [math]\displaystyle{ x }[/math]. [math]\displaystyle{ \square }[/math]
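As a small illustration of the theorem, consider the [math]\displaystyle{ 1 \times 2 }[/math] matrix [math]\displaystyle{ A = \begin{pmatrix} 1 & 1 \end{pmatrix} }[/math]. Its nullspace is spanned by [math]\displaystyle{ \eta = (1, -1)^T }[/math], and for [math]\displaystyle{ S = \{1\} }[/math] we have [math]\displaystyle{ \|\eta_S\|_1 = 1 = \|\eta_{S^C}\|_1 }[/math], so the strict inequality fails and [math]\displaystyle{ A }[/math] does not have the nullspace property of order [math]\displaystyle{ 1 }[/math]. Consistent with the theorem, the [math]\displaystyle{ 1 }[/math]-sparse vectors [math]\displaystyle{ x = (1, 0)^T }[/math] and [math]\displaystyle{ z = (0, 1)^T }[/math] satisfy [math]\displaystyle{ Ax = Az }[/math] and [math]\displaystyle{ \|x\|_1 = \|z\|_1 = 1 }[/math], so neither is the unique solution of the [math]\displaystyle{ \ell_1 }[/math]-relaxation with [math]\displaystyle{ b = Ax }[/math].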
References
- ↑ Cohen, Albert; Dahmen, Wolfgang; DeVore, Ronald (2009-01-01). "Compressed sensing and best k-term approximation". Journal of the American Mathematical Society 22 (1): 211–231. doi:10.1090/S0894-0347-08-00610-3. ISSN 0894-0347.
- ↑ Natarajan, B. K. (1995-04-01). "Sparse Approximate Solutions to Linear Systems". SIAM J. Comput. 24 (2): 227–234. doi:10.1137/S0097539792240406. ISSN 0097-5397.
- ↑ Rauhut, Holger (2011). Compressive Sensing and Structured Random Matrices.