Word-sense induction
In computational linguistics, word-sense induction (WSI) or discrimination is an open problem of natural language processing, which concerns the automatic identification of the senses of a word (i.e. meanings). Given that the output of word-sense induction is a set of senses for the target word (sense inventory), this task is closely related to that of word-sense disambiguation (WSD), which relies on a predefined sense inventory and aims to resolve the ambiguity of words in context.
Approaches and methods
The output of a word-sense induction algorithm is a clustering of contexts in which the target word occurs or a clustering of words related to the target word. Three main methods have been proposed in the literature:[1][2]
- Context clustering
- Word clustering
- Co-occurrence graphs
Context clustering
The underlying hypothesis of this approach is that words are semantically similar if they appear in similar documents, within similar context windows, or in similar syntactic contexts.[3] Each occurrence of a target word in a corpus is represented as a context vector. These context vectors can be either first-order vectors, which directly represent the context at hand, or second-order vectors, in which two contexts of the target word are considered similar if their words tend to co-occur together. The vectors are then clustered into groups, each identifying a sense of the target word. A well-known approach to context clustering is the Context-group Discrimination algorithm,[4] based on large matrix computation methods.
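As a minimal, hedged illustration of the idea (not of the Context-group Discrimination algorithm itself), the sketch below builds first-order bag-of-words context vectors for a few toy occurrences of the ambiguous word "bank" and clusters them with k-means using scikit-learn. The toy contexts and the choice of two senses are assumptions made purely for illustration.

```python
# A minimal sketch of first-order context clustering with k-means.
# The toy corpus and the number of senses are illustrative assumptions.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

def induce_senses(contexts, n_senses=2):
    """Cluster the contexts of a target word into hypothesised senses."""
    vectorizer = CountVectorizer()            # first-order bag-of-words vectors
    X = vectorizer.fit_transform(contexts)    # one row per occurrence of the target word
    labels = KMeans(n_clusters=n_senses, n_init=10, random_state=0).fit_predict(X)
    return labels

# Each string is the context window around one occurrence of "bank".
contexts = [
    "deposit money savings account interest",
    "loan mortgage credit branch teller",
    "river shore water fishing mud",
    "flood erosion river steep grass",
]
print(induce_senses(contexts, n_senses=2))    # e.g. [0 0 1 1]: two induced senses
```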
Word clustering
Word clustering is a different approach to the induction of word senses. It consists of clustering words which are semantically similar and can thus bear a specific meaning. Lin's algorithm [5] is a prototypical example of word clustering; it uses statistics on the syntactic dependencies occurring in a corpus to produce sets of words for each discovered sense of a target word.[6] Clustering By Committee (CBC) [7] also uses syntactic contexts, but exploits a similarity matrix to encode the similarities between words and relies on the notion of committees to output different senses of the word of interest. These approaches are hard to apply on a large scale and across many domains and languages.
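The following sketch conveys the word-clustering idea in its simplest form: words are represented by rows of a word-by-feature co-occurrence matrix, pairwise cosine similarities are computed, and the words are grouped by agglomerative clustering. It is not an implementation of Lin's algorithm or of CBC; the words, feature counts, and number of clusters are invented for illustration.

```python
# A rough illustration of the word-clustering idea: group words by the
# similarity of their (here, toy) context features. Not Lin's algorithm
# or CBC; the feature matrix and cluster count are made-up assumptions.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.cluster import AgglomerativeClustering

words = ["bank", "lender", "broker", "river", "shore", "lake"]
# Rows: words; columns: counts of co-occurring features (e.g. dependency contexts).
features = np.array([
    [5, 4, 0, 1],   # bank
    [6, 5, 0, 0],   # lender
    [4, 6, 1, 0],   # broker
    [0, 1, 7, 5],   # river
    [0, 0, 6, 6],   # shore
    [1, 0, 5, 7],   # lake
])
sim = cosine_similarity(features)                 # word-by-word similarity matrix
clustering = AgglomerativeClustering(
    n_clusters=2, metric="precomputed", linkage="average"
).fit(1 - sim)                                    # convert similarity to a distance
for word, label in zip(words, clustering.labels_):
    print(word, label)                            # words with the same label form one sense cluster
```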
Co-occurrence graphs
The main hypothesis of co-occurrence graph approaches is that the semantics of a word can be represented by means of a co-occurrence graph, whose vertices are co-occurrences and whose edges are co-occurrence relations. These approaches are related to word-clustering methods, in that co-occurrences between words can be obtained on the basis of grammatical [8] or collocational relations.[9] HyperLex, a successful graph-based algorithm based on the identification of hubs in co-occurrence graphs, has to cope with the need to tune a large number of parameters.[10] To deal with this issue, several graph-based algorithms have been proposed that rely on simple graph patterns, namely Curvature Clustering, Squares, Triangles and Diamonds (SquaT++), and Balanced Maximum Spanning Tree Clustering (B-MST).[11] The patterns aim at identifying meanings using the local structural properties of the co-occurrence graph. Chinese Whispers is a randomized algorithm which partitions the graph vertices by iteratively transferring the mainstream message (i.e., word sense) to neighboring vertices.[12] Co-occurrence graph approaches have been shown to achieve state-of-the-art performance in standard evaluation tasks.
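The sketch below shows a simplified variant of the Chinese Whispers idea on a toy co-occurrence graph built with networkx: every vertex starts with its own label, and each vertex repeatedly adopts the label with the largest total edge weight among its neighbors; vertices that end up sharing a label approximate one induced sense. The graph, edge weights, and iteration count are invented for illustration, and this is not the reference implementation described in [12].

```python
# A simplified sketch of the Chinese Whispers label-propagation idea on a
# toy co-occurrence graph. Vertices are words co-occurring with an ambiguous
# target; weighted edges are (invented) co-occurrence counts.
import random
from collections import defaultdict
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("money", "loan", 4), ("money", "deposit", 5), ("loan", "deposit", 3),
    ("river", "water", 6), ("water", "shore", 4), ("river", "shore", 5),
    ("deposit", "water", 1),   # weak cross-sense link
])

def chinese_whispers(graph, iterations=20, seed=0):
    rng = random.Random(seed)
    labels = {node: i for i, node in enumerate(graph)}   # each node starts in its own class
    for _ in range(iterations):
        nodes = list(graph)
        rng.shuffle(nodes)                               # visit nodes in random order
        for node in nodes:
            scores = defaultdict(float)
            for nb in graph[node]:                       # score each neighboring label by edge weight
                scores[labels[nb]] += graph[node][nb].get("weight", 1.0)
            if scores:
                labels[node] = max(scores, key=scores.get)
    return labels

print(chinese_whispers(G))   # nodes sharing a label approximate one induced sense
```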
Applications
- Word-sense induction has been shown to benefit Web Information Retrieval when highly ambiguous queries are employed.[9]
- Simple word-sense induction algorithms boost Web search result clustering considerably and improve the diversification of search results returned by search engines such as Yahoo![13]
- Word-sense induction has been applied to enrich lexical resources such as WordNet.[14]
Software
- SenseClusters is a freely available open source software package that performs both context clustering and word clustering.
See also
- Word Sense Disambiguation
- Grammar induction
- Polysemy
References
- ↑ Navigli, R. (2009). "Word Sense Disambiguation: A Survey". ACM Computing Surveys 41 (2): 1–69. doi:10.1145/1459352.1459355. http://www.dsi.uniroma1.it/~navigli/pubs/ACM_Survey_2009_Navigli.pdf.
- ↑ Nasiruddin, M. (2013). "A State of the Art of Word Sense Induction: A Way Towards Word Sense Disambiguation for Under-Resourced Languages". TALN-RÉCITAL 2013. Les Sables d'Olonne, France. pp. 192–205. http://www.taln2013.org/actes/www/RECITAL-2013/actes/recital-2013-long-015.pdf.
- ↑ Van de Cruys, T. (2010). "Mining for Meaning. The Extraction of Lexico-Semantic Knowledge from Text". http://www.timvandecruys.be/media/papers/VandeCruys2010Mining.pdf.
- ↑ Schütze, H. (1992). "Dimensions of meaning". 1992 ACM/IEEE Conference on Supercomputing. Los Alamitos, CA: IEEE Computer Society Press. pp. 787–796. doi:10.1109/SUPERC.1992.236684.
- ↑ Lin, D. (1998). "Automatic retrieval and clustering of similar words". 17th International Conference on Computational linguistics (COLING). Montreal, Canada. pp. 768–774. http://www.aclweb.org/anthology-new/P/P98/P98-2127.pdf.
- ↑ Van de Cruys, Tim; Apidianaki, Marianna (2011). "Latent Semantic Word Sense Induction and Disambiguation". http://www.timvandecruys.be/media/papers/VandeCruysApidianaki2011Latent.pdf.
- ↑ Lin, D.; Pantel, P. (2002). "Discovering word senses from text". 8th International Conference on Knowledge Discovery and Data Mining (KDD). Edmonton, Canada. pp. 613–619.
- ↑ Widdows, D.; Dorow, B. (2002). "A graph model for unsupervised lexical acquisition". 19th International Conference on Computational Linguistics (COLING). Taipei, Taiwan. pp. 1–7. http://www.aclweb.org/anthology-new/C/C02/C02-1114.pdf.
- ↑ Véronis, J. (2004). "Hyperlex: Lexical cartography for information retrieval". Computer Speech and Language 18 (3): 223–252. doi:10.1016/j.csl.2004.05.002. http://sites.univ-provence.fr/veronis/pdf/2004-hyperlex-CSL.pdf.
- ↑ Agirre, E.; Martinez, D.; De Lacalle, O. Lopez; Soroa, A. "Two graph-based algorithms for state-of-the-art WSD". 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP). Sydney, Australia. pp. 585–593. http://ixa.si.ehu.es/Ixa/Argitalpenak/Artikuluak/1149260582/publikoak/emnlp.pdf.
- ↑ Di Marco, A.; Navigli, R. (2013). "Clustering and Diversifying Web Search Results with Graph-Based Word Sense Induction". Computational Linguistics 39 (3): 709–754. doi:10.1162/coli_a_00148. http://www.dsi.uniroma1.it/%7Enavigli/pubs/CL_2013_DiMarco_Navigli.pdf.
- ↑ Biemann, C. (2006). "Chinese Whispers - an Efficient Graph Clustering Algorithm and its Application to Natural Language Processing Problems". http://wortschatz.uni-leipzig.de/~cbiemann/pub/2006/BiemannTextGraph06.pdf.
- ↑ Navigli, R.; Crisafulli, G. "Inducing Word Senses to Improve Web Search Result Clustering". 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP 2010). MIT Stata Center, Massachusetts, USA. pp. 116–126. http://www.dsi.uniroma1.it/~navigli/pubs/EMNLP_2010_Navigli_Crisafulli.pdf.
- ↑ Nasiruddin, M.; Schwab, D.; Tchechmedjiev, A.; Sérasset, G.; Blanchon, H. "Induction de sens pour enrichir des ressources lexicales (Word Sense Induction for the Enrichment of Lexical Resources)". 21ème conférence sur le Traitement Automatique des Langues Naturelles (TALN 2014). Marseille, France. pp. 598–603. http://www.taln2014.org/proceedings/TALN/ArticlesCourts/Paper_P-L2.3.pdf.
Original source: https://en.wikipedia.org/wiki/Word-sense_induction