Topic model

From HandWiki
Short description: Statistical model

In statistics and natural language processing, a topic model is a type of statistical model for discovering the abstract "topics" that occur in a collection of documents. Topic modeling is a frequently used text-mining tool for discovery of hidden semantic structures in a text body. Intuitively, given that a document is about a particular topic, one would expect particular words to appear in the document more or less frequently: "dog" and "bone" will appear more often in documents about dogs, "cat" and "meow" will appear in documents about cats, and "the" and "is" will appear approximately equally in both. A document typically concerns multiple topics in different proportions; thus, in a document that is 10% about cats and 90% about dogs, there would probably be about 9 times more dog words than cat words. The "topics" produced by topic modeling techniques are clusters of similar words. A topic model captures this intuition in a mathematical framework, which allows examining a set of documents and discovering, based on the statistics of the words in each, what the topics might be and what each document's balance of topics is.
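This generative intuition can be sketched in a few lines of Python. The word lists and topic weights below are invented purely for illustration: each word of a document is drawn by first picking a topic according to the document's topic proportions, then picking a word from that topic.

```python
import random

# Invented word lists for two "topics" (illustration only).
topics = {
    "dogs": ["dog", "bone", "bark", "leash", "the", "is"],
    "cats": ["cat", "meow", "purr", "whisker", "the", "is"],
}

def generate_document(topic_weights, n_words=100, seed=0):
    """Draw each word by first sampling a topic according to the
    document's topic proportions, then a word from that topic."""
    rng = random.Random(seed)
    names = list(topic_weights)
    weights = [topic_weights[t] for t in names]
    words = []
    for _ in range(n_words):
        topic = rng.choices(names, weights)[0]
        words.append(rng.choice(topics[topic]))
    return words

# A document that is 90% about dogs and 10% about cats:
doc = generate_document({"dogs": 0.9, "cats": 0.1})
dog_words = sum(w in ("dog", "bone", "bark", "leash") for w in doc)
cat_words = sum(w in ("cat", "meow", "purr", "whisker") for w in doc)
```

Topic modeling runs this process in reverse: given only the documents, it infers the topics and each document's topic proportions.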

Topic models are also referred to as probabilistic topic models, a term for statistical algorithms that discover the latent semantic structures of an extensive text body. In the age of information, the amount of written material we encounter each day is simply beyond our processing capacity. Topic models can help to organize large collections of unstructured text bodies and offer insights for understanding them. Originally developed as a text-mining tool, topic models have been used to detect instructive structures in data such as genetic information, images, and networks. They also have applications in other fields such as bioinformatics[1] and computer vision.[2]

History

An early topic model was described by Papadimitriou, Raghavan, Tamaki and Vempala in 1998.[3] Another, called probabilistic latent semantic analysis (PLSA), was created by Thomas Hofmann in 1999.[4] Latent Dirichlet allocation (LDA), perhaps the most common topic model currently in use, is a generalization of PLSA. Developed by David Blei, Andrew Ng, and Michael I. Jordan in 2003, LDA introduces sparse Dirichlet prior distributions over document-topic and topic-word distributions, encoding the intuition that documents cover a small number of topics and that topics often use a small number of words.[5] Other topic models are generally extensions of LDA, such as Pachinko allocation, which improves on LDA by modeling correlations between topics in addition to the word correlations which constitute topics. Hierarchical latent tree analysis (HLTA) is an alternative to LDA that models word co-occurrence using a tree of latent variables; the states of the latent variables, which correspond to soft clusters of documents, are interpreted as topics.
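As a concrete illustration of LDA's Dirichlet priors (using scikit-learn's implementation, not the original Blei–Ng–Jordan code, and a toy corpus invented for the example), `doc_topic_prior` and `topic_word_prior` are the two Dirichlet hyperparameters; small values encode the sparsity intuition described above:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Toy corpus (invented for illustration).
docs = [
    "the dog chased the bone",
    "dog and bone in the yard",
    "the cat heard a meow",
    "cat and meow at night",
]

# LDA operates on bag-of-words counts, not raw text.
counts = CountVectorizer().fit_transform(docs)

# Small Dirichlet priors push both the document-topic and the
# topic-word distributions toward sparsity.
lda = LatentDirichletAllocation(n_components=2,
                                doc_topic_prior=0.1,
                                topic_word_prior=0.1,
                                random_state=0)
doc_topics = lda.fit_transform(counts)  # each row is a document's topic mix
```

Each row of `doc_topics` sums to 1, giving the inferred balance of topics for that document.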


Topic models for context information

Approaches for temporal information include Block and Newman's determination of the temporal dynamics of topics in the Pennsylvania Gazette during 1728–1800. Griffiths & Steyvers used topic modeling on abstracts from the journal PNAS to identify topics that rose or fell in popularity from 1991 to 2001, whereas Lamba & Madhusudhan[7] used topic modeling on full-text research articles retrieved from the DJLIT journal from 1981–2018. In the field of library and information science, Lamba & Madhusudhan[7][8][9][10] applied topic modeling to different Indian resources like journal articles and electronic theses and dissertations (ETDs). Nelson[11] has been analyzing change in topics over time in the Richmond Times-Dispatch to understand social and political changes and continuities in Richmond during the American Civil War. Yang, Torget and Mihalcea applied topic modeling methods to newspapers from 1829–2008. Mimno used topic modeling with 24 journals on classical philology and archaeology spanning 150 years to look at how topics in the journals change over time and how the journals become more different or similar over time.
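Studies like these typically fit a topic model once over the whole corpus and then aggregate each document's topic proportions by publication year to see topics rise or fall. A minimal sketch, with invented proportions standing in for a fitted model's output:

```python
import numpy as np

# Hypothetical per-document topic proportions (as produced by a fitted
# topic model) alongside each document's publication year.
years = np.array([1991, 1991, 1995, 1995, 2001, 2001])
doc_topics = np.array([
    [0.8, 0.2], [0.7, 0.3],
    [0.5, 0.5], [0.4, 0.6],
    [0.2, 0.8], [0.1, 0.9],
])

# Mean topic proportion per year: topic 0 falls while topic 1 rises.
trend = {y: doc_topics[years == y].mean(axis=0) for y in np.unique(years)}
```

Plotting each topic's yearly mean proportion then gives the kind of popularity curves reported by Griffiths & Steyvers.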

Yin et al.[12] introduced a topic model for geographically distributed documents, where document positions are explained by latent regions which are detected during inference.

Chang and Blei[13] included network information between linked documents in the relational topic model, to model the links between websites.

The author-topic model by Rosen-Zvi et al.[14] models the topics associated with authors of documents to improve the topic detection for documents with authorship information.

HLTA was applied to a collection of recent research papers published at major AI and machine learning venues. The resulting model is called the AI Tree. The resulting topics are used to index the papers, helping researchers track research trends and identify papers to read, and helping conference organizers and journal editors identify reviewers for submissions.

Algorithms

In practice, researchers attempt to fit appropriate model parameters to the data corpus using one of several heuristics for a maximum likelihood fit. A survey by Blei describes this suite of algorithms.[15] Several groups of researchers, starting with Papadimitriou et al.,[3] have attempted to design algorithms with provable guarantees. Assuming that the data were actually generated by the model in question, they try to design algorithms that provably find the model that was used to create the data. Techniques used here include singular value decomposition (SVD) and the method of moments. In 2012, an algorithm based upon non-negative matrix factorization (NMF) was introduced that also generalizes to topic models with correlations among topics.[16]

In 2018, a new approach to topic models was proposed: it is based on stochastic block models.[17]

Topic models for quantitative biomedicine

Topic models are also used in other contexts. For example, uses of topic models in biology and bioinformatics research have emerged.[18] Recently, topic models have been used to extract information from datasets of cancer genomic samples.[19] In this case, topics are biological latent variables to be inferred.

See also

References

  1. Blei, David (April 2012). "Probabilistic Topic Models". Communications of the ACM 55 (4): 77–84. doi:10.1145/2133806.2133826. 
  2. Cao, Liangliang, and Li Fei-Fei. "Spatially coherent latent topic model for concurrent segmentation and classification of objects and scenes." 2007 IEEE 11th International Conference on Computer Vision. IEEE, 2007.
  3. Papadimitriou, Christos; Raghavan, Prabhakar; Tamaki, Hisao; Vempala, Santosh (1998). "Latent Semantic Indexing: A probabilistic analysis". Proceedings of ACM PODS: 159–168. doi:10.1145/275487.275505. ISBN 978-0897919968. 
  4. Hofmann, Thomas (1999). "Probabilistic Latent Semantic Indexing". Proceedings of the Twenty-Second Annual International SIGIR Conference on Research and Development in Information Retrieval. 
  5. Blei, David M.; Ng, Andrew Y.; Jordan, Michael I. (January 2003). "Latent Dirichlet allocation". Journal of Machine Learning Research 3: 993–1022. doi:10.1162/jmlr.2003.3.4-5.993. 
  7. Lamba, Manika; Madhusudhan, Margam (June 2019). "Mapping of topics in DESIDOC Journal of Library and Information Technology, India: a study". Scientometrics 120 (2): 477–505. doi:10.1007/s11192-019-03137-5. ISSN 0138-9130. 
  8. Lamba, Manika; Madhusudhan, Margam (June 2019). "Metadata Tagging and Prediction Modeling: Case Study of DESIDOC Journal of Library and Information Technology (2008-2017)". World Digital Libraries 12: 33–89. doi:10.18329/09757597/2019/12103. ISSN 0975-7597. 
  9. Lamba, Manika; Madhusudhan, Margam (May 2019). "Author-Topic Modeling of DESIDOC Journal of Library and Information Technology (2008-2017), India". Library Philosophy and Practice. 
  10. Lamba, Manika; Madhusudhan, Margam (September 2018). "Metadata Tagging of Library and Information Science Theses: Shodhganga (2013-2017)". ETD2018: Beyond the Boundaries of Rims and Oceans. Taipei, Taiwan. 
  11. Nelson, Rob. "Mining the Dispatch". Digital Scholarship Lab, University of Richmond. 
  12. Yin, Zhijun (2011). "Geographical topic discovery and comparison". Proceedings of the 20th International Conference on World Wide Web: 247–256. doi:10.1145/1963405.1963443. ISBN 9781450306324. 
  13. Chang, Jonathan; Blei, David (2009). "Relational Topic Models for Document Networks". AISTATS 9: 81–88. 
  14. Rosen-Zvi, Michal (2004). "The author-topic model for authors and documents". Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence: 487–494. 
  15. Blei, David M. (April 2012). "Introduction to Probabilistic Topic Models" (PDF). Comm. ACM 55 (4): 77–84. doi:10.1145/2133806.2133826. 
  16. Sanjeev Arora; Rong Ge; Ankur Moitra (April 2012). "Learning Topic Models—Going beyond SVD". arXiv:1204.1956 [cs.LG].
  17. Martin Gerlach; Tiago Peixoto; Eduardo Altmann (2018). "A network approach to topic models". Science Advances 4 (7): eaaq1360. doi:10.1126/sciadv.aaq1360. PMID 30035215. Bibcode: 2018SciA....4.1360G. 
  18. Liu, L. et al. (2016). "An overview of topic modeling and its current applications in bioinformatics". SpringerPlus 5 (1): 1608. doi:10.1186/s40064-016-3252-8. PMID 27652181. 
  19. Valle, F.; Osella, M.; Caselle, M. (2020). "A Topic Modeling Analysis of TCGA Breast and Lung Cancer Transcriptomic Data". Cancers 12 (12): 3799. doi:10.3390/cancers12123799. PMID 33339347. 
