Manifold alignment

Manifold alignment is a class of machine learning algorithms that produce projections between sets of data, given that the original data sets lie on a common manifold. The concept was first introduced as such by Ham, Lee, and Saul in 2003,[1] adding a manifold constraint to the general problem of correlating sets of high-dimensional vectors.[2]

Overview

Manifold alignment assumes that disparate data sets produced by similar generating processes will share a similar underlying manifold representation. By learning projections from each original space to the shared manifold, correspondences are recovered and knowledge from one domain can be transferred to another. Most manifold alignment techniques consider only two data sets, but the concept extends to arbitrarily many initial data sets.

Consider the case of aligning two data sets, [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math], with [math]\displaystyle{ X_i \in \mathbb{R}^m }[/math] and [math]\displaystyle{ Y_i \in \mathbb{R}^n }[/math].

Manifold alignment algorithms attempt to project both [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math] into a new d-dimensional space such that the projections both minimize distance between corresponding points and preserve the local manifold structure of the original data. The projection functions are denoted:

[math]\displaystyle{ \phi_{X}:\,\mathbb{R}^{m}\rightarrow\mathbb{R}^{d} }[/math]

[math]\displaystyle{ \phi_{Y}:\,\mathbb{R}^{n}\rightarrow\mathbb{R}^{d} }[/math]

Let [math]\displaystyle{ W }[/math] represent the binary correspondence matrix between points in [math]\displaystyle{ X }[/math] and [math]\displaystyle{ Y }[/math]:

[math]\displaystyle{ W_{i,j}=\begin{cases} 1 & \text{if } X_{i}\leftrightarrow Y_{j}\\ 0 & \text{otherwise} \end{cases} }[/math]

Let [math]\displaystyle{ S_X }[/math] and [math]\displaystyle{ S_Y }[/math] represent pointwise similarities within data sets. This is usually encoded as the heat kernel of the adjacency matrix of a k-nearest neighbor graph.
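As a concrete illustration, the following sketch builds such a similarity matrix for one data set: a k-nearest-neighbor graph whose edges are weighted by the heat kernel. The function and parameter names (heat_kernel_similarity, k, sigma) are illustrative choices, not part of the original formulation.

    import numpy as np
    from sklearn.neighbors import kneighbors_graph

    def heat_kernel_similarity(X, k=10, sigma=1.0):
        # Binary adjacency of the k-nearest-neighbor graph, symmetrized
        # so that the similarity matrix is well-defined.
        A = kneighbors_graph(X, n_neighbors=k, mode='connectivity').toarray()
        A = np.maximum(A, A.T)
        # Squared Euclidean distances between all pairs of points.
        sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
        # Heat-kernel weights on graph edges; non-edges get similarity 0.
        return A * np.exp(-sq_dists / (2 * sigma ** 2))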

Finally, introduce a coefficient [math]\displaystyle{ 0 \le \mu \le 1 }[/math], which can be tuned to trade off the 'preserve manifold structure' goal against the 'minimize corresponding point distances' goal.

With these definitions in place, the loss function for manifold alignment can be written:

[math]\displaystyle{ \arg\min_{\phi_{X},\phi_{Y}}\mu\sum_{i,j}\left\Vert \phi_{X}\left(X_{i}\right)-\phi_{X}\left(X_{j}\right)\right\Vert ^{2}S_{X,i,j}+\mu\sum_{i,j}\left\Vert \phi_{Y}\left(Y_{i}\right)-\phi_{Y}\left(Y_{j}\right)\right\Vert ^{2}S_{Y,i,j}+\left(1-\mu\right)\sum_{i,j}\Vert\phi_{X}\left(X_{i}\right)-\phi_{Y}\left(Y_{j}\right)\Vert^{2}W_{i,j} }[/math]
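The loss is straightforward to evaluate for candidate embeddings. The sketch below computes it directly from the definitions above, given embedded coordinates fx = φ_X(X) and fy = φ_Y(Y); all names here are illustrative.

    import numpy as np

    def alignment_loss(fx, fy, Sx, Sy, W, mu):
        # Pairwise squared distances within each embedded data set.
        dx = np.sum((fx[:, None, :] - fx[None, :, :]) ** 2, axis=2)
        dy = np.sum((fy[:, None, :] - fy[None, :, :]) ** 2, axis=2)
        # Squared distances between embedded points across the two sets.
        dxy = np.sum((fx[:, None, :] - fy[None, :, :]) ** 2, axis=2)
        # Structure-preservation terms plus the correspondence term.
        return (mu * np.sum(dx * Sx) + mu * np.sum(dy * Sy)
                + (1 - mu) * np.sum(dxy * W))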

Solving this optimization problem is equivalent to solving a generalized eigenvalue problem using the graph Laplacian[3] of the joint matrix [math]\displaystyle{ G }[/math]:

[math]\displaystyle{ G=\left[\begin{array}{cc} \mu S_X & \left(1-\mu\right)W\\ \left(1-\mu\right)W^T & \mu S_Y \end{array}\right] }[/math]
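A minimal sketch of this instance-level solution, assuming dense matrices and a nonsingular degree matrix: build G, form its graph Laplacian, and take the bottom nontrivial generalized eigenvectors, as in Laplacian eigenmaps.[3] The function name is illustrative.

    import numpy as np
    from scipy.linalg import eigh

    def manifold_align(Sx, Sy, W, mu, d):
        # Joint similarity matrix G over the concatenated data sets.
        G = np.block([[mu * Sx, (1 - mu) * W],
                      [(1 - mu) * W.T, mu * Sy]])
        # Graph Laplacian L = D - G, with D the diagonal degree matrix.
        D = np.diag(G.sum(axis=1))
        L = D - G
        # Generalized eigenproblem L v = lambda D v; eigh returns eigenvalues
        # in ascending order, so skip the trivial constant eigenvector.
        vals, vecs = eigh(L, D)
        embedding = vecs[:, 1:d + 1]
        # The first |X| rows embed X; the remaining rows embed Y.
        return embedding[:len(Sx)], embedding[len(Sx):]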

Inter-data correspondences

The algorithm described above requires full pairwise correspondence information between the input data sets, i.e. a supervised learning paradigm. However, this information is usually difficult or impossible to obtain in real-world applications. Recent work has extended the core manifold alignment algorithm to semi-supervised,[4] unsupervised,[5] and multiple-instance[6] settings.

One-step vs. two-step alignment

The algorithm described above performs a "one-step" alignment, finding embeddings for both data sets at the same time. A similar effect can also be achieved with "two-step" alignments,[7][8] following a slightly modified procedure:

  1. Project each input data set to a lower-dimensional space independently, using any of a variety of dimension reduction algorithms.
  2. Perform linear manifold alignment on the embedded data, holding the first data set fixed, mapping each additional data set onto the first's manifold (a sketch of this step follows the list).

This approach has the benefit of decomposing the required computation, which lowers memory overhead and allows parallel implementations.
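The second step is often a Procrustes-style alignment, as in [8]. A minimal sketch under that assumption: given independently computed d-dimensional embeddings fx and fy of corresponding points, find the rotation and isotropic scaling that best map fy onto fx, which is held fixed. The names are illustrative.

    import numpy as np

    def procrustes_align(fx, fy):
        # Center both embeddings on their means.
        fx_c = fx - fx.mean(axis=0)
        fy_c = fy - fy.mean(axis=0)
        # Optimal rotation from the SVD of the cross-covariance matrix.
        U, s, Vt = np.linalg.svd(fy_c.T @ fx_c)
        R = U @ Vt
        # Optimal isotropic scaling.
        scale = s.sum() / np.sum(fy_c ** 2)
        # Map Y's embedding onto X's manifold; X stays fixed.
        return scale * fy_c @ R + fx.mean(axis=0)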

Instance-level vs. feature-level projections

Manifold alignment can be used to find linear (feature-level) projections, or nonlinear (instance-level) embeddings. While the instance-level version generally produces more accurate alignments, it sacrifices a great degree of flexibility, as the learned embedding is often difficult to parameterize. Feature-level projections allow any new instances to be easily embedded in the manifold space, and projections may be combined to form direct mappings between the original data representations. These properties are especially important for knowledge-transfer applications.
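As a brief illustration of this flexibility, suppose linear projection matrices Fx (m × d) and Fy (n × d) have been learned; the matrix names below are hypothetical. New instances embed by a single matrix product, and composing Fx with a pseudoinverse of Fy yields an approximate direct map between the original representations.

    import numpy as np

    def embed_new_instance(x, Fx):
        # A new point in the original space maps directly into manifold space.
        return x @ Fx

    def direct_map(x, Fx, Fy):
        # Approximate map from X's original space to Y's original space,
        # passing through the shared d-dimensional manifold space.
        return x @ Fx @ np.linalg.pinv(Fy)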

Applications

Manifold alignment is suited to problems with several corpora that lie on a shared manifold, even when each corpus is of a different dimensionality. Many real-world problems fit this description, but traditional techniques are not able to take advantage of all corpora at the same time. Manifold alignment also facilitates transfer learning, in which knowledge of one domain is used to jump-start learning in correlated domains.

Applications of manifold alignment include:

  • Cross-language information retrieval / automatic translation[8]
    • By representing documents as vectors of word counts, manifold alignment can recover the mapping between documents in different languages.
    • Cross-language document correspondence is relatively easy to obtain, especially from multi-lingual organizations like the European Union.
  • Transfer learning of policy and state representations for reinforcement learning[8]
  • Alignment of protein NMR structures[8]
  • Accelerating model learning in robotics by sharing data generated by other robots[9]

References

  1. Ham, Ji Hun; Daniel D. Lee; Lawrence K. Saul (2003). "Learning high dimensional correspondences from low dimensional manifolds". Proceedings of the Twentieth International Conference on Machine Learning (ICML-2003). ftp://ftp.cis.upenn.edu/pub/datamining/public_html/ReadingGroup/papers/corr_icmlws03.pdf. 
  2. Hotelling, H (1936). "Relations between two sets of variates". Biometrika 28 (3–4): 321–377. doi:10.2307/2333955. http://cbio.ensmp.fr/~jvert/svn/bibli/local/Hotelling1936Relation.pdf. 
  3. Belkin, M; P Niyogi (2003). "Laplacian eigenmaps for dimensionality reduction and data representation". Neural Computation 15 (6): 1373–1396. doi:10.1162/089976603321780317. http://www.cse.ohio-state.edu/~mbelkin/papers/LEM_NC_03.pdf. 
  4. Ham, Ji Hun; Daniel D. Lee; Lawrence K. Saul (2005). "Semisupervised alignment of manifolds". Proceedings of the Annual Conference on Uncertainty in Artificial Intelligence. http://cseweb.ucsd.edu/~saul/papers/semi_aistats05.pdf. 
  5. Wang, Chang; Sridhar Mahadevan (2009). "Manifold Alignment without Correspondence". The 21st International Joint Conference on Artificial Intelligence. http://www.cs.umass.edu/~chwang/papers/IJCAI-2009-MA.pdf. 
  6. Wang, Chang; Sridhar Mahadevan (2011). "Heterogeneous Domain Adaptation using Manifold Alignment". The 22nd International Joint Conference on Artificial Intelligence. http://www-all.cs.umass.edu/~chwang/IJCAI2011-DA.pdf. Retrieved 2011-12-14. 
  7. Lafon, Stephane; Yosi Keller; Ronald R. Coifman (2006). "Data fusion and multicue data matching by diffusion maps". IEEE Transactions on Pattern Analysis and Machine Intelligence 28 (11): 1784–1797. doi:10.1109/tpami.2006.223. PMID 17063683. http://www.eng.biu.ac.il/~kellery1/share/dataanalysis/kernel%20methods/Data%20Fusion%20and%20Multicue%20Data%20Matching%20by%20Diffusion%20Maps.pdf. 
  8. Wang, Chang; Sridhar Mahadevan (2008). "Manifold Alignment using Procrustes Analysis". The 25th International Conference on Machine Learning. http://www.cs.umass.edu/~chwang/papers/ICML-2008.pdf. 
  9. Makondo, Ndivhuwo; Benjamin Rosman; Osamu Hasegawa (2015). "Knowledge Transfer for Learning Robot Models via Local Procrustes Analysis". The 15th IEEE-RAS International Conference on Humanoid Robots (Humanoids). doi:10.1109/HUMANOIDS.2015.7363502. 
