Johnson–Lindenstrauss lemma
In mathematics, the Johnson–Lindenstrauss lemma is a result named after William B. Johnson and Joram Lindenstrauss concerning low-distortion embeddings of points from high-dimensional into low-dimensional Euclidean space. The lemma states that a set of points in a high-dimensional space can be embedded into a space of much lower dimension in such a way that distances between the points are nearly preserved. In the classical proof of the lemma, the embedding is a random orthogonal projection. The lemma has applications in compressed sensing, manifold learning, dimensionality reduction, and graph embedding.

Much of the data stored and manipulated on computers, including text and images, can be represented as points in a high-dimensional space (see vector space model for the case of text). However, the essential algorithms for working with such data tend to become bogged down very quickly as dimension increases.[1] It is therefore desirable to reduce the dimensionality of the data in a way that preserves its relevant structure. The Johnson–Lindenstrauss lemma is a classic result in this vein.
The lemma is also tight up to a constant factor, i.e. there exists a set of m points that requires dimension
- [math]\displaystyle{ \Omega \left(\frac{\log(m)}{\varepsilon^2}\right) }[/math]
in order to preserve the distances between all pairs of points within a factor of [math]\displaystyle{ (1 \pm \varepsilon) }[/math].[2][3]
Lemma
Given [math]\displaystyle{ 0 \lt \varepsilon \lt 1 }[/math], a set [math]\displaystyle{ X }[/math] of [math]\displaystyle{ m\in\mathbb Z_{\ge1} }[/math] points in [math]\displaystyle{ \mathbb{R}^N }[/math] ([math]\displaystyle{ N\in\mathbb Z_{\ge0} }[/math]), and an integer [math]\displaystyle{ n \gt 8(\ln m)/\varepsilon^2 }[/math], there is a linear map [math]\displaystyle{ f: \mathbb{R}^N \rightarrow \mathbb{R}^n }[/math] such that
- [math]\displaystyle{ (1-\varepsilon)\|u-v\|^2 \leq \|f(u) - f(v)\|^2 \leq (1+\varepsilon)\|u-v\|^2 }[/math]
for all [math]\displaystyle{ u,v \in X }[/math].
The formula can be rearranged:
- [math]\displaystyle{ (1+\varepsilon)^{-1}\|f(u)-f(v)\|^2 \leq \|u-v\|^2 \leq (1-\varepsilon)^{-1}\|f(u)-f(v)\|^2 }[/math]
Alternatively, for any [math]\displaystyle{ \varepsilon\in(0,1) }[/math] and any integer [math]\displaystyle{ n\ge15(\ln m)/\varepsilon^2 }[/math][Note 1] there exists a linear function [math]\displaystyle{ f: \mathbb{R}^N \rightarrow \mathbb{R}^n }[/math] such that the restriction [math]\displaystyle{ f|_X }[/math] is [math]\displaystyle{ (1+\varepsilon) }[/math]-bi-Lipschitz.[Note 2]
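For a sense of scale, the following Python snippet (our own illustration, not part of the original statement) computes the smallest target dimension allowed by the condition [math]\displaystyle{ n \gt 8(\ln m)/\varepsilon^2 }[/math] above; note that the bound depends only on the number of points and the distortion, not on the ambient dimension [math]\displaystyle{ N }[/math].

```python
import math

def jl_target_dimension(m: int, eps: float) -> int:
    """Smallest integer n satisfying n > 8 * ln(m) / eps**2,
    the condition in the lemma as stated above."""
    return math.floor(8 * math.log(m) / eps**2) + 1

# One million points embedded with 10% distortion: about 11,053 dimensions
# suffice, regardless of how large the ambient dimension N is.
print(jl_target_dimension(10**6, 0.10))
```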
The classical proof of the lemma takes [math]\displaystyle{ f }[/math] to be a scalar multiple of an orthogonal projection [math]\displaystyle{ P }[/math] onto a random subspace of dimension [math]\displaystyle{ n }[/math] in [math]\displaystyle{ \mathbb{R}^N }[/math]. An orthogonal projection collapses some dimensions of the space it is applied to, which in general shortens vectors and shrinks the distances between them. Under the conditions of the lemma, concentration of measure ensures there is a nonzero chance that a random orthogonal projection shrinks the pairwise distances between all points in [math]\displaystyle{ X }[/math] by roughly a common constant factor [math]\displaystyle{ c }[/math]. Since this chance is nonzero, such projections must exist, so we can choose one [math]\displaystyle{ P }[/math] and set [math]\displaystyle{ f(v) = Pv/c }[/math].
To obtain the projection algorithmically, it suffices to repeatedly sample orthogonal projection matrices at random and test whether the sampled map satisfies the distortion bound on [math]\displaystyle{ X }[/math]; because each trial succeeds with probability bounded away from zero, only a few trials are needed with high probability, and the procedure runs in randomized polynomial time.
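A minimal NumPy sketch of this argument is given below, assuming a uniformly random subspace is drawn via the QR decomposition of a Gaussian matrix and the scaling factor is taken to be [math]\displaystyle{ c=\sqrt{n/N} }[/math]; the function names and parameter values are illustrative only.

```python
import numpy as np
from itertools import combinations

def random_projection_map(N, n, rng):
    """Orthonormal basis Q (N x n) of a uniformly random n-dimensional
    subspace, drawn via QR of a Gaussian matrix.  The returned map is
    x -> (Q^T x) * sqrt(N/n), i.e. f(v) = Pv/c expressed in an
    orthonormal basis of the random subspace."""
    G = rng.standard_normal((N, n))
    Q, _ = np.linalg.qr(G)                    # columns of Q are orthonormal
    return lambda x: (Q.T @ x) * np.sqrt(N / n)

def max_distortion(points, f):
    """Largest relative error of squared pairwise distances under f."""
    worst = 0.0
    for u, v in combinations(points, 2):
        d2 = np.sum((u - v) ** 2)
        fd2 = np.sum((f(u) - f(v)) ** 2)
        worst = max(worst, abs(fd2 / d2 - 1.0))
    return worst

rng = np.random.default_rng(0)
m, N, eps = 50, 1000, 0.25
n = int(8 * np.log(m) / eps**2) + 1
X = [rng.standard_normal(N) for _ in range(m)]

# Repeatedly sample projections until one satisfies the distortion bound.
trials = 0
while True:
    trials += 1
    f = random_projection_map(N, n, rng)
    if max_distortion(X, f) <= eps:
        break
print(f"n = {n}, succeeded after {trials} trial(s)")
```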
Alternate statement
A related lemma is the distributional JL lemma. This lemma states that for any [math]\displaystyle{ 0 \lt \varepsilon, \delta \lt 1/2 }[/math] and positive integer [math]\displaystyle{ d }[/math], there exists a distribution over [math]\displaystyle{ \mathbb{R}^{k \times d} }[/math] from which the matrix [math]\displaystyle{ A }[/math] is drawn such that for [math]\displaystyle{ k = O(\varepsilon^{-2} \log(1/\delta)) }[/math] and for any unit-length vector [math]\displaystyle{ x \in \mathbb{R}^{d} }[/math], the claim below holds.[4]
- [math]\displaystyle{ P(|\Vert Ax\Vert_2^2-1|\gt \varepsilon)\lt \delta }[/math]
One can obtain the JL lemma from the distributional version by setting [math]\displaystyle{ x = (u-v)/\|u-v\|_2 }[/math] for each pair [math]\displaystyle{ u,v \in X }[/math] and choosing [math]\displaystyle{ \delta \lt 1/m^2 }[/math]. The JL lemma then follows by a union bound over all [math]\displaystyle{ \tbinom{m}{2} \lt m^2/2 }[/math] such pairs.
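As an illustration, the sketch below uses a matrix with i.i.d. Gaussian entries scaled by [math]\displaystyle{ 1/\sqrt{k} }[/math], one standard choice of distribution satisfying the lemma; the constant 8 in the choice of [math]\displaystyle{ k }[/math] and all parameter values are assumptions made for the example, not taken from the statement above.

```python
import numpy as np

rng = np.random.default_rng(1)
d, eps, delta = 200, 0.2, 0.01
# Rows needed for the distributional guarantee; 8 is an illustrative constant.
k = int(np.ceil(8 * np.log(1 / delta) / eps**2))

x = rng.standard_normal(d)
x /= np.linalg.norm(x)                       # unit-length test vector

# Estimate P(| ||Ax||^2 - 1 | > eps) over fresh draws of A with N(0, 1/k) entries.
trials, failures = 500, 0
for _ in range(trials):
    A = rng.standard_normal((k, d)) / np.sqrt(k)
    if abs(np.sum((A @ x) ** 2) - 1.0) > eps:
        failures += 1
print(f"k = {k}, empirical failure rate = {failures / trials}")  # expect < delta
```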
Speeding up the JL transform
Given A, computing the matrix vector product takes [math]\displaystyle{ O(kd) }[/math] time. There has been some work in deriving distributions for which the matrix vector product can be computed in less than [math]\displaystyle{ O(kd) }[/math] time.
There are two major lines of work. The first, the Fast Johnson–Lindenstrauss Transform (FJLT),[5] was introduced by Ailon and Chazelle in 2006. This method allows the computation of the matrix–vector product in [math]\displaystyle{ O(d\log d + k^{2+\gamma}) }[/math] time for any constant [math]\displaystyle{ \gamma\gt 0 }[/math].
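The sketch below illustrates the flavor of this approach with a simplified subsampled randomized Hadamard transform: multiply by random signs, mix with a fast Walsh–Hadamard transform in [math]\displaystyle{ O(d\log d) }[/math] time, then sample [math]\displaystyle{ k }[/math] coordinates. The original FJLT uses a sparse Gaussian matrix for the final step, so this is a related simplification rather than the exact construction of Ailon and Chazelle; names and parameters are illustrative.

```python
import numpy as np

def fwht(x):
    """Fast Walsh–Hadamard transform in O(d log d); len(x) must be a power of two."""
    x = x.copy()
    h = 1
    while h < len(x):
        for i in range(0, len(x), 2 * h):
            a = x[i:i + h].copy()
            b = x[i + h:i + 2 * h].copy()
            x[i:i + h] = a + b
            x[i + h:i + 2 * h] = a - b
        h *= 2
    return x

def fast_jl(x, k, rng):
    """Simplified FJLT-style map: random signs (D), Hadamard mixing (H),
    then uniform subsampling of k coordinates (in place of the sparse
    Gaussian matrix P used by the actual FJLT)."""
    d = len(x)
    signs = rng.choice([-1.0, 1.0], size=d)          # D
    mixed = fwht(signs * x) / np.sqrt(d)             # H D x with orthonormal scaling
    rows = rng.choice(d, size=k, replace=False)      # subsample k coordinates
    return mixed[rows] * np.sqrt(d / k)              # rescale to preserve norms

rng = np.random.default_rng(2)
d, k = 1024, 256
x = rng.standard_normal(d)
y = fast_jl(x, k, rng)
print(np.linalg.norm(x), np.linalg.norm(y))          # the two norms should be close
```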
Another approach is to build a distribution supported over matrices that are sparse.[6] This method allows keeping only an [math]\displaystyle{ \varepsilon }[/math] fraction of the entries in the matrix, which means the computation can be done in [math]\displaystyle{ O(kd\varepsilon) }[/math] time. Furthermore, if the vector has only [math]\displaystyle{ b }[/math] non-zero entries, the sparse JL transform takes [math]\displaystyle{ O(kb\varepsilon) }[/math] time, which may be much less than the [math]\displaystyle{ O(d\log d) }[/math] time used by the fast JL transform.
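A minimal sketch of a sparse embedding in this spirit, assuming SciPy is available: each column gets exactly [math]\displaystyle{ s }[/math] nonzero entries equal to [math]\displaystyle{ \pm1/\sqrt{s} }[/math] in random rows, so [math]\displaystyle{ s/k }[/math] plays the role of the [math]\displaystyle{ \varepsilon }[/math] fraction above. This follows the shape of the Kane–Nelson construction but is not their exact distribution; all names and values are illustrative.

```python
import numpy as np
from scipy.sparse import csc_matrix

def sparse_jl_matrix(k, d, s, rng):
    """k x d matrix with exactly s nonzeros of value +-1/sqrt(s) per column.
    Applying it to a vector with b nonzeros touches only O(s*b) entries."""
    rows, cols, vals = [], [], []
    for j in range(d):
        rows.extend(rng.choice(k, size=s, replace=False))   # s distinct rows
        cols.extend([j] * s)
        vals.extend(rng.choice([-1.0, 1.0], size=s) / np.sqrt(s))
    return csc_matrix((vals, (rows, cols)), shape=(k, d))

rng = np.random.default_rng(3)
d, k, s = 10_000, 400, 8
A = sparse_jl_matrix(k, d, s, rng)
x = rng.standard_normal(d)
print(np.linalg.norm(A @ x) / np.linalg.norm(x))   # ratio should be close to 1
```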
Tensorized random projections
It is possible to combine two JL matrices by taking the so-called face-splitting product, defined as the row-wise tensor (Kronecker) products of the two matrices; it was proposed by V. Slyusar[7] in 1996[8][9][10][11][12] for radar and digital antenna array applications. More concretely, let [math]\displaystyle{ {C}\in\mathbb R^{3\times 3} }[/math] and [math]\displaystyle{ {D}\in\mathbb R^{3\times 3} }[/math] be two matrices. Then the face-splitting product [math]\displaystyle{ {C}\bullet {D} }[/math] is[8][9][10][11][12]
- [math]\displaystyle{ {C} \bullet {D} = \left[ \begin{array} { c } {C}_1 \otimes {D}_1\\\hline {C}_2 \otimes {D}_2\\\hline {C}_3 \otimes {D}_3\\ \end{array} \right]. }[/math]
This idea of tensorization was used by Kasiviswanathan et al. for differential privacy.[13]
JL matrices defined like this use fewer random bits, and can be applied quickly to vectors that have tensor structure, due to the following identity:[10]
- [math]\displaystyle{ (\mathbf{C} \bull \mathbf{D})(x\otimes y) = \mathbf{C}x \circ \mathbf{D} y = \left[ \begin{array} { c } (\mathbf{C}x)_1 (\mathbf{D} y)_1 \\ (\mathbf{C}x)_2 (\mathbf{D} y)_2 \\ \vdots \end{array}\right] }[/math],
where [math]\displaystyle{ \circ }[/math] is the element-wise (Hadamard) product. Such computations have been used to efficiently compute polynomial kernels and in many other linear-algebra algorithms.[14]
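The identity can be checked numerically; the short NumPy sketch below (function name ours) builds the face-splitting product row by row and verifies that applying it to [math]\displaystyle{ x\otimes y }[/math] matches the Hadamard product [math]\displaystyle{ (Cx)\circ(Dy) }[/math].

```python
import numpy as np

def face_splitting(C, D):
    """Face-splitting product: row i of the result is the Kronecker
    product of row i of C with row i of D."""
    return np.vstack([np.kron(C[i], D[i]) for i in range(C.shape[0])])

rng = np.random.default_rng(4)
C = rng.standard_normal((3, 3))
D = rng.standard_normal((3, 3))
x = rng.standard_normal(3)
y = rng.standard_normal(3)

lhs = face_splitting(C, D) @ np.kron(x, y)   # (C • D)(x ⊗ y)
rhs = (C @ x) * (D @ y)                      # (C x) ∘ (D y): element-wise product
print(np.allclose(lhs, rhs))                 # True
```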
In 2020[15] it was shown that if the matrices [math]\displaystyle{ C_1, C_2, \dots, C_c }[/math] are independent [math]\displaystyle{ \pm1 }[/math] or Gaussian matrices, the combined matrix [math]\displaystyle{ C_1 \bullet \dots \bullet C_c }[/math] satisfies the distributional JL lemma if the number of rows is at least
- [math]\displaystyle{ O(\varepsilon^{-2}\log1/\delta + \varepsilon^{-1}(\tfrac1c\log1/\delta)^c) }[/math].
For large [math]\displaystyle{ \varepsilon }[/math] this is as good as the completely random Johnson–Lindenstrauss transform, but a matching lower bound in the same paper shows that this exponential dependency on [math]\displaystyle{ (\log1/\delta)^c }[/math] is necessary. Alternative JL constructions are suggested to circumvent this.
Notes
- ↑ Or any integer [math]\displaystyle{ n\gt 128(\ln m)/(9\varepsilon^2). }[/math]
- ↑ This result follows from the above result. Sketch of proof: Note [math]\displaystyle{ 1/(1+\varepsilon)\lt \sqrt{1-3\varepsilon/4} }[/math] and [math]\displaystyle{ \sqrt{1+3\varepsilon/4}\lt \sqrt{1+\varepsilon}\lt 1+\varepsilon }[/math] for all [math]\displaystyle{ \varepsilon\in(0,1) }[/math]. Do casework for [math]\displaystyle{ m=1 }[/math] and [math]\displaystyle{ m\gt 1 }[/math], applying the above result to [math]\displaystyle{ 3\varepsilon/4 }[/math] in the latter case, noting [math]\displaystyle{ 128/9\lt 15. }[/math]
References
- ↑ For instance, writing about nearest neighbor search in high-dimensional data sets, Jon Kleinberg writes: "The more sophisticated algorithms typically achieve a query time that is logarithmic in n at the expense of an exponential dependence on the dimension d; indeed, even the average case analysis of heuristics such as k-d trees reveal an exponential dependence on d in the query time." Kleinberg, Jon M. (1997), "Two Algorithms for Nearest-neighbor Search in High Dimensions", Proceedings of the Twenty-Ninth Annual ACM Symposium on Theory of Computing, STOC '97, New York, NY, USA: ACM, pp. 599–608, doi:10.1145/258533.258653, ISBN 0-89791-888-6.
- ↑ Larsen, Kasper Green; Nelson, Jelani (2017), "Optimality of the Johnson-Lindenstrauss Lemma", Proceedings of the 58th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pp. 633–638, doi:10.1109/FOCS.2017.64
- ↑ Nielsen, Frank (2016), "10. Fast approximate optimization in high dimensions with core-sets and fast dimension reduction", Introduction to HPC with MPI for Data Science, Springer, pp. 259–272, ISBN 978-3-319-21903-5, https://www.researchgate.net/publication/313162957
- ↑ Johnson, William B.; Lindenstrauss, Joram (1984), "Extensions of Lipschitz mappings into a Hilbert space", in Beals, Richard; Beck, Anatole; Bellow, Alexandra et al., Conference in modern analysis and probability (New Haven, Conn., 1982), Contemporary Mathematics, 26, Providence, RI: American Mathematical Society, pp. 189–206, doi:10.1090/conm/026/737400, ISBN 0-8218-5030-X, https://archive.org/details/conferenceinmode0000conf/page/189
- ↑ Ailon, Nir; Chazelle, Bernard (2006), "Approximate nearest neighbors and the fast Johnson–Lindenstrauss transform", Proceedings of the 38th Annual ACM Symposium on Theory of Computing, New York: ACM Press, pp. 557–563, doi:10.1145/1132516.1132597, ISBN 1-59593-134-1
- ↑ Kane, Daniel M.; Nelson, Jelani (2014), "Sparser Johnson-Lindenstrauss Transforms", Journal of the ACM 61 (1): 1, doi:10.1145/2559902. A preliminary version of this paper was published in the Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, 2012.
- ↑ Esteve, Anna; Boj, Eva; Fortiana, Josep (2009), "Interaction terms in distance-based regression", Communications in Statistics 38 (18–20): 3498–3509, doi:10.1080/03610920802592860
- ↑ Slyusar, V. I. (December 27, 1996), "End products in matrices in radar applications", Radioelectronics and Communications Systems 41 (3): 50–53, http://slyusar.kiev.ua/en/IZV_1998_3.pdf
- ↑ Slyusar, V. I. (1997-05-20), "Analytical model of the digital antenna array on a basis of face-splitting matrix products", Proc. ICATT-97, Kyiv: 108–109, http://slyusar.kiev.ua/ICATT97.pdf
- ↑ Slyusar, V. I. (1997-09-15), "New operations of matrices product for applications of radars", Proc. Direct and Inverse Problems of Electromagnetic and Acoustic Wave Theory (DIPED-97), Lviv: 73–74, http://slyusar.kiev.ua/DIPED_1997.pdf
- ↑ Slyusar, V. I. (March 13, 1998), "A Family of Face Products of Matrices and its Properties", Cybernetics and Systems Analysis 35 (3): 379–384 (1999), doi:10.1007/BF02733426, http://slyusar.kiev.ua/FACE.pdf
- ↑ Slyusar, V. I. (2003), "Generalized face-products of matrices in models of digital antenna arrays with nonidentical channels", Radioelectronics and Communications Systems 46 (10): 9–17, http://slyusar.kiev.ua/en/IZV_2003_10.pdf
- ↑ Kasiviswanathan, Shiva Prasad; Rudelson, Mark; Smith, Adam D.; Ullman, Jonathan R. (2010), "The price of privately releasing contingency tables and the spectra of random matrices with correlated rows", in Schulman, Leonard J., Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5–8 June 2010, Association for Computing Machinery, pp. 775–784, doi:10.1145/1806689.1806795
- ↑ Woodruff, David P. (2014), Sketching as a Tool for Numerical Linear Algebra, Foundations and Trends in Theoretical Computer Science, 10, doi:10.1561/0400000060
- ↑ Ahle, Thomas; Kapralov, Michael; Knudsen, Jakob (2020), "Oblivious Sketching of High-Degree Polynomial Kernels", ACM-SIAM Symposium on Discrete Algorithms, Association for Computing Machinery, doi:10.1137/1.9781611975994.9
Further reading
- Achlioptas, Dimitris (2003), "Database-friendly random projections: Johnson–Lindenstrauss with binary coins", Journal of Computer and System Sciences 66 (4): 671–687, doi:10.1016/S0022-0000(03)00025-4. Journal version of a paper previously appearing at PODC 2001.
- Baraniuk, Richard; Davenport, Mark; DeVore, Ronald; Wakin, Michael (2008), "A simple proof of the restricted isometry property for random matrices", Constructive Approximation 28 (3): 253–263, doi:10.1007/s00365-007-9003-x.
- Dasgupta, Sanjoy; Gupta, Anupam (2003), "An elementary proof of a theorem of Johnson and Lindenstrauss", Random Structures & Algorithms 22 (1): 60–65, doi:10.1002/rsa.10073, http://cseweb.ucsd.edu/~dasgupta/papers/jl.pdf.
- "On fiber diameters of continuous maps", American Mathematical Monthly 123 (4): 392–397, 2016, doi:10.4169/amer.math.monthly.123.4.392
- The Modern Algorithmic Toolbox Lecture #4: Dimensionality Reduction, 2023, https://web.stanford.edu/class/cs168/l/l4.pdf
Original source: https://en.wikipedia.org/wiki/Johnson–Lindenstrauss_lemma