Latent space
A latent space, also known as a latent feature space or embedding space, is an embedding of a set of items within a manifold in which items resembling each other are positioned closer to one another. Position within the latent space can be viewed as being defined by a set of latent variables that emerge from the resemblances among the items.
In most cases, the dimensionality of the latent space is chosen to be lower than the dimensionality of the feature space from which the data points are drawn, making the construction of a latent space an example of dimensionality reduction, which can also be viewed as a form of data compression.[1] Latent spaces are usually fit via machine learning, and they can then be used as feature spaces in machine learning models, including classifiers and other supervised predictors.
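As a simple concrete illustration, a linear latent space can be fit with principal component analysis and then used as the feature space for a classifier. The sketch below uses scikit-learn; the digits dataset and the choice of 10 latent dimensions are illustrative assumptions only.

```python
# A minimal sketch: fit a linear latent space with PCA and use it as the
# feature space for a downstream classifier (scikit-learn assumed installed).
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)          # 64-dimensional feature space
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Compress each item into a 10-dimensional latent representation,
# then classify in that lower-dimensional space.
model = make_pipeline(PCA(n_components=10), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print("accuracy on held-out data:", model.score(X_test, y_test))
```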
The interpretation of the latent spaces of machine learning models is an active field of study, but such interpretation is difficult to achieve. Because of the black-box nature of machine learning models, the latent space may be completely unintuitive, and it may also be high-dimensional, complex, and nonlinear, which adds to the difficulty of interpretation.[2] Some visualization techniques have been developed to connect the latent space to the visual world, such as t-distributed stochastic neighbor embedding (t-SNE), which maps the latent space to two dimensions for visualization, but there is often no direct connection between such a visualization and the model itself. Latent space distances also lack physical units, so their interpretation may depend on the application.[3]
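As an illustration, the sketch below uses the scikit-learn implementation of t-SNE to project a set of latent vectors to two dimensions for plotting; the random vectors stand in for embeddings produced by a real model, and the perplexity value is an illustrative choice.

```python
# A minimal sketch of visualizing a latent space with t-SNE (scikit-learn and
# matplotlib assumed available); random vectors stand in for real embeddings.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
latent_vectors = rng.normal(size=(200, 64))   # placeholder for model embeddings

# Map the 64-dimensional latent points to 2-D for visual inspection.
coords = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(latent_vectors)

plt.scatter(coords[:, 0], coords[:, 1], s=10)
plt.title("t-SNE projection of a latent space")
plt.show()
```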
Embedding models
Several embedding models have been developed to construct latent space embeddings given a set of data items and a similarity function. These models learn the embeddings by leveraging statistical techniques and machine learning algorithms. Some commonly used embedding models include:
- Word2Vec:[4] Word2Vec is a popular embedding model used in natural language processing (NLP). It learns word embeddings by training a neural network on a large corpus of text. Word2Vec captures semantic and syntactic relationships between words, allowing for meaningful computations like word analogies (a minimal usage sketch appears after this list).
- GloVe:[5] GloVe (Global Vectors for Word Representation) is another widely used embedding model for NLP. It combines global statistical information from a corpus with local context information to learn word embeddings. GloVe embeddings are known for capturing both semantic and relational similarities between words.
- Siamese Networks:[6] Siamese networks are a type of neural network architecture commonly used for similarity-based embedding. They consist of two identical subnetworks that process two input samples and produce their respective embeddings. Siamese networks are often used for tasks like image similarity, recommendation systems, and face recognition.
- Variational Autoencoders (VAEs):[7] VAEs are generative models that simultaneously learn to encode and decode data. The latent space in VAEs acts as an embedding space. By training VAEs on high-dimensional data, such as images or audio, the model learns to encode the data into a compact latent representation. VAEs are known for their ability to generate new data samples from the learned latent space.
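As an illustration of the Word2Vec model listed above, the following is a minimal usage sketch built on the Gensim library. The toy corpus and all hyperparameter values are illustrative assumptions; a corpus of this size cannot produce meaningful embeddings, whereas training on millions of sentences can recover analogies such as king − man + woman ≈ queen.

```python
# A minimal sketch of training Word2Vec embeddings with Gensim (assumed installed).
# The toy corpus is far too small to yield meaningful vectors; it only shows the API.
from gensim.models import Word2Vec

corpus = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["a", "man", "walks", "in", "the", "city"],
    ["a", "woman", "walks", "in", "the", "city"],
]

model = Word2Vec(corpus, vector_size=50, window=3, min_count=1, sg=1, epochs=100)

vector = model.wv["queen"]   # the latent embedding of a single word
# Word-analogy query: vectors near (king - man + woman).
print(model.wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```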
Multimodality
Multimodality refers to the integration and analysis of multiple modes or types of data within a single model or framework. Embedding multimodal data involves capturing relationships and interactions between different data types, such as images, text, audio, and structured data.
Multimodal embedding models aim to learn joint representations that fuse information from multiple modalities, allowing for cross-modal analysis and tasks. These models enable applications like image captioning, visual question answering, and multimodal sentiment analysis.
To embed multimodal data, specialized architectures such as deep multimodal networks or multimodal transformers are employed. These architectures combine different types of neural network modules to process and integrate information from various modalities. The resulting embeddings capture the complex relationships between different data types, facilitating multimodal analysis and understanding.
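As a rough sketch of such an architecture, the PyTorch code below implements a tiny dual-encoder model in the spirit of CLIP-style contrastive learning: an image branch and a text branch each project into a shared latent space, and a contrastive loss pulls matching image-text pairs together. The layer sizes, temperature, and dummy inputs are illustrative assumptions rather than a reference implementation.

```python
# A minimal sketch of a dual-encoder multimodal embedding model in PyTorch.
# Both encoders project into the same latent space; an InfoNCE-style contrastive
# loss aligns matching image-text pairs. Sizes and dummy data are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, image_dim=2048, text_dim=768, latent_dim=256):
        super().__init__()
        self.image_proj = nn.Sequential(nn.Linear(image_dim, 512), nn.ReLU(),
                                        nn.Linear(512, latent_dim))
        self.text_proj = nn.Sequential(nn.Linear(text_dim, 512), nn.ReLU(),
                                       nn.Linear(512, latent_dim))

    def forward(self, image_features, text_features):
        # L2-normalize so that dot products are cosine similarities.
        img = F.normalize(self.image_proj(image_features), dim=-1)
        txt = F.normalize(self.text_proj(text_features), dim=-1)
        return img, txt

def contrastive_loss(img, txt, temperature=0.07):
    # Similarity matrix between every image and every text in the batch;
    # the diagonal entries correspond to the true (matching) pairs.
    logits = img @ txt.t() / temperature
    targets = torch.arange(img.size(0))
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Dummy batch: pretend these came from pretrained image and text backbones.
model = DualEncoder()
img_emb, txt_emb = model(torch.randn(8, 2048), torch.randn(8, 768))
loss = contrastive_loss(img_emb, txt_emb)
loss.backward()
```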
Applications
Embedding latent space and multimodal embedding models have found numerous applications across various domains:
- Information Retrieval: Embedding techniques enable efficient similarity search and recommendation systems by representing data points in a compact space (a minimal retrieval sketch appears after this list).
- Natural Language Processing: Word embeddings have revolutionized NLP tasks like sentiment analysis, machine translation, and document classification.
- Computer Vision: Image and video embeddings enable tasks like object recognition, image retrieval, and video summarization.
- Recommendation Systems: Embeddings help capture user preferences and item characteristics, enabling personalized recommendations.
- Healthcare: Embedding techniques have been applied to electronic health records, medical imaging, and genomic data for disease prediction, diagnosis, and treatment.
- Social Systems: Embedding techniques can be used to learn latent representations of social systems such as internal migration systems,[8] academic citation networks,[9] and world trade networks.[10]
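As a minimal illustration of the information-retrieval use case above, the sketch below performs cosine-similarity nearest-neighbor search over a set of embeddings; the random vectors are placeholders for embeddings produced by a trained model, and the dimensions are arbitrary.

```python
# A minimal sketch of similarity search in an embedding space (NumPy assumed).
# Random vectors stand in for item embeddings produced by a trained model.
import numpy as np

rng = np.random.default_rng(0)
item_embeddings = rng.normal(size=(1000, 128))   # 1000 items in a 128-D latent space
query = rng.normal(size=128)                     # embedding of the query

# Cosine similarity between the query and every item.
item_norm = item_embeddings / np.linalg.norm(item_embeddings, axis=1, keepdims=True)
query_norm = query / np.linalg.norm(query)
scores = item_norm @ query_norm

top_k = np.argsort(scores)[::-1][:5]             # indices of the 5 most similar items
print("most similar items:", top_k, "scores:", scores[top_k])
```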
See also
- Induced topology
- Clustering algorithm
- Intrinsic dimension
- Latent semantic analysis
- Manifold hypothesis
- Nonlinear dimensionality reduction
- Self-organizing map
References
- ↑ Liu, Yang; Jun, Eunice; Li, Qisheng; Heer, Jeffrey (June 2019). "Latent Space Cartography: Visual Analysis of Vector Space Embeddings" (in en). Computer Graphics Forum 38 (3): 67–78. doi:10.1111/cgf.13672. ISSN 0167-7055. https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.13672.
- ↑ Li, Ziqiang; Tao, Rentuo; Wang, Jie; Li, Fu; Niu, Hongjing; Yue, Mingdao; Li, Bin (February 2021). "Interpreting the Latent Space of GANs via Measuring Decoupling". IEEE Transactions on Artificial Intelligence 2 (1): 58–70. doi:10.1109/TAI.2021.3071642. ISSN 2691-4581. https://ieeexplore.ieee.org/document/9399843.
- ↑ Arvanitidis, Georgios; Hansen, Lars Kai; Hauberg, Søren (13 December 2021). "Latent Space Oddity: on the Curvature of Deep Generative Models". arXiv:1710.11379 [stat.ML].
- ↑ Mikolov, Tomas; Sutskever, Ilya; Chen, Kai; Corrado, Greg S; Dean, Jeff (2013). "Distributed Representations of Words and Phrases and their Compositionality". Advances in Neural Information Processing Systems (Curran Associates, Inc.) 26. https://proceedings.neurips.cc/paper/2013/hash/9aa42b31882ec039965f3c4923ce901b-Abstract.html.
- ↑ Pennington, Jeffrey; Socher, Richard; Manning, Christopher (October 2014). "Glove: Global Vectors for Word Representation". Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar: Association for Computational Linguistics. pp. 1532–1543. doi:10.3115/v1/D14-1162. https://aclanthology.org/D14-1162.
- ↑ Chicco, Davide (2021), Cartwright, Hugh, ed., "Siamese Neural Networks: An Overview" (in en), Artificial Neural Networks, Methods in Molecular Biology (New York, NY: Springer US) 2190: pp. 73–94, doi:10.1007/978-1-0716-0826-5_3, ISBN 978-1-0716-0826-5, PMID 32804361, https://doi.org/10.1007/978-1-0716-0826-5_3, retrieved 2023-06-26
- ↑ Kingma, Diederik P.; Welling, Max (2019-11-27). "An Introduction to Variational Autoencoders" (in English). Foundations and Trends in Machine Learning 12 (4): 307–392. doi:10.1561/2200000056. ISSN 1935-8237. https://www.nowpublishers.com/article/Details/MAL-056.
- ↑ Gürsoy, Furkan; Badur, Bertan (2022-10-06). "Investigating internal migration with network analysis and latent space representations: an application to Turkey" (in en). Social Network Analysis and Mining 12 (1): 150. doi:10.1007/s13278-022-00974-w. ISSN 1869-5469. PMID 36246429. PMC 9540093. https://doi.org/10.1007/s13278-022-00974-w.
- ↑ Asatani, Kimitaka; Mori, Junichiro; Ochi, Masanao; Sakata, Ichiro (2018-05-21). "Detecting trends in academic research from a citation network using network representation learning" (in en). PLOS ONE 13 (5): e0197260. doi:10.1371/journal.pone.0197260. ISSN 1932-6203. PMID 29782521.
- ↑ García-Pérez, Guillermo; Boguñá, Marián; Allard, Antoine; Serrano, M. Ángeles (2016-09-16). "The hidden hyperbolic geometry of international trade: World Trade Atlas 1870–2013" (in en). Scientific Reports 6 (1): 33441. doi:10.1038/srep33441. ISSN 2045-2322. PMID 27633649.