Semantic similarity
Semantic similarity is a metric defined over a set of documents or terms, where the idea of distance between items is based on the likeness of their meaning or semantic content as opposed to lexicographical similarity. Semantic similarity measures are mathematical tools used to estimate the strength of the semantic relationship between units of language, concepts or instances, through a numerical description obtained according to the comparison of information supporting their meaning or describing their nature.[1][2] The term semantic similarity is often confused with semantic relatedness. Semantic relatedness includes any relation between two terms, while semantic similarity only includes "is a" relations.[3] For example, "car" is similar to "bus", but is also related to "road" and "driving".
Computationally, semantic similarity can be estimated by defining a topological similarity, using ontologies to define the distance between terms/concepts. For example, a naive metric for comparing concepts ordered in a partially ordered set and represented as nodes of a directed acyclic graph (e.g., a taxonomy) is the length of the shortest path linking the two concept nodes. Based on text analyses, semantic relatedness between units of language (e.g., words, sentences) can also be estimated using statistical means such as a vector space model to correlate words and textual contexts from a suitable text corpus. Proposed semantic similarity/relatedness measures are evaluated in two main ways. The first is based on datasets designed by experts and composed of word pairs with semantic similarity/relatedness degree estimations. The second is based on integrating the measures into specific applications such as information retrieval, recommender systems, and natural language processing.
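As an illustration of this naive shortest-path metric, the sketch below builds a tiny hand-made taxonomy and counts the number of edges separating two concepts; the concepts and edges are invented for illustration, and the search ignores edge direction:

```python
from collections import deque

# A tiny hand-made taxonomy: each concept points to its parent(s).
parents = {
    "cat": ["mammal"], "dog": ["mammal"],
    "mammal": ["animal"], "bird": ["animal"],
    "animal": ["entity"], "vehicle": ["entity"],
    "car": ["vehicle"], "bus": ["vehicle"],
    "entity": [],
}

def shortest_path_length(a: str, b: str) -> int:
    """Breadth-first search over the taxonomy, ignoring edge direction."""
    # Build an undirected adjacency list from the parent links.
    adj = {c: set(ps) for c, ps in parents.items()}
    for c, ps in parents.items():
        for p in ps:
            adj.setdefault(p, set()).add(c)
    queue, seen = deque([(a, 0)]), {a}
    while queue:
        node, dist = queue.popleft()
        if node == b:
            return dist
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, dist + 1))
    return -1  # no path found

print(shortest_path_length("cat", "dog"))  # 2 (cat - mammal - dog)
print(shortest_path_length("cat", "bus"))  # 5 (via animal, entity, vehicle)
```

Under this metric, a shorter path indicates a closer "is a" relationship; more refined edge-based measures also weight edges by their depth in the taxonomy.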
Terminology
The concept of semantic similarity is more specific than semantic relatedness, as the latter includes concepts such as antonymy and meronymy, while similarity does not.[4] However, much of the literature uses these terms interchangeably, along with terms like semantic distance. In essence, semantic similarity, semantic distance, and semantic relatedness all ask, "How much does term A have to do with term B?" The answer to this question is usually a number between −1 and 1, or between 0 and 1, where 1 signifies extremely high similarity.
Visualization
An intuitive way of visualizing the semantic similarity of terms is by grouping together terms which are closely related and spacing wider apart the ones which are distantly related. This is also common in practice for mind maps and concept maps.
A more direct way of visualizing the semantic similarity of two linguistic items can be seen with the Semantic Folding approach. In this approach, a linguistic item such as a term or a text can be represented by generating a pixel for each of its active semantic features on, for example, a 128 × 128 grid. This allows for a direct visual comparison of the semantics of two items by comparing the image representations of their respective feature sets.
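Comparing two such representations amounts to measuring the overlap of two sparse sets of active grid positions. A minimal sketch of that idea follows; the grid positions and the use of Jaccard overlap are illustrative assumptions, not the Semantic Folding algorithm itself:

```python
# Two items represented by the grid positions of their active semantic
# features (the positions here are invented for illustration).
features_a = {(3, 7), (12, 40), (55, 2), (90, 90), (100, 17)}
features_b = {(3, 7), (12, 40), (61, 5), (90, 90), (111, 64)}

overlap = len(features_a & features_b)                 # shared active pixels
jaccard = overlap / len(features_a | features_b)       # 0 = disjoint, 1 = identical
print(overlap, jaccard)
```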
Applications
In biomedical informatics
Semantic similarity measures have been applied and developed in biomedical ontologies.[5][6] They are mainly used to compare genes and proteins based on the similarity of their functions[7] rather than on their sequence similarity, but they are also being extended to other bioentities, such as diseases.[8]
These comparisons can be done using tools freely available on the web:
- ProteInOn can be used to find interacting proteins, find assigned GO terms and calculate the functional semantic similarity of UniProt proteins and to get the information content and calculate the functional semantic similarity of GO terms.[9]
- CMPSim provides a functional similarity measure between chemical compounds and metabolic pathways using ChEBI based semantic similarity measures.[10]
- CESSM provides a tool for the automated evaluation of GO-based semantic similarity measures.[11]
In geoinformatics
Similarity is also applied in geoinformatics to find similar geographic features or feature types:[12]
- SIM-DL similarity server[13] can be used to compute similarities between concepts stored in geographic feature type ontologies.
- Similarity Calculator can be used to compute how well related two geographic concepts are in the Geo-Net-PT ontology.[14][15]
- The OSM[16] semantic network can be used to compute the semantic similarity of tags in OpenStreetMap.[17]
In computational linguistics
Several metrics use WordNet, a manually constructed lexical database of English words. Despite the advantages of human supervision in constructing the database, since the words are not automatically learned the vocabulary is non-incremental, and the database cannot measure relatedness between multi-word terms.[4][18]
In natural language processing
Natural language processing (NLP) is a field of computer science and linguistics. Sentiment analysis, natural language understanding, and machine translation (automatically translating text from one human language to another) are a few of the major areas where it is applied. For example, given one information resource on the internet, it is often of immediate interest to find similar resources. The Semantic Web provides semantic extensions to find similar data by content and not just by arbitrary descriptors.[19][20][21][22][23][24][25][26][27] Deep learning methods have become an accurate way to gauge semantic similarity between two text passages, in which each passage is first embedded into a continuous vector representation and the vectors are then compared, e.g., by cosine similarity.[28][29][30]
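As a hedged illustration, the sketch below uses the sentence-transformers library that accompanies the Sentence-BERT work cited above to embed two passages and compare them by cosine similarity; the checkpoint name is simply one commonly available model chosen for the example:

```python
from sentence_transformers import SentenceTransformer, util

# Load a small pretrained sentence-embedding model (illustrative choice).
model = SentenceTransformer("all-MiniLM-L6-v2")

passages = ["A car is driving on the road.",
            "A bus is travelling along the street."]
embeddings = model.encode(passages, convert_to_tensor=True)

# Cosine similarity of the two passage embeddings, roughly in [-1, 1];
# higher values mean the passages are more semantically similar.
score = util.cos_sim(embeddings[0], embeddings[1])
print(float(score))
```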
In ontology matching
Semantic similarity plays a crucial role in ontology alignment, which aims to establish correspondences between entities from different ontologies. It involves quantifying the degree of similarity between concepts or terms using the information present in the ontology for each entity, such as labels, descriptions, and hierarchical relations to other entities. Traditional metrics used in ontology matching are based on a lexical similarity between features of the entities, such as using the Levenshtein distance to measure the edit distance between entity labels.[31] However, it is difficult to capture the semantic similarity between entities using these metrics. For example, when comparing two ontologies describing conferences, the entities "Contribution" and "Paper" may have high semantic similarity since they share the same meaning. Nonetheless, due to their lexical differences, lexicographical similarity alone cannot establish this alignment. To capture these semantic similarities, embeddings are being adopted in ontology matching.[32] By encoding semantic relationships and contextual information, embeddings enable the calculation of similarity scores between entities based on the proximity of their vector representations in the embedding space. This approach allows for efficient and accurate matching of ontologies since embeddings can model semantic differences in entity naming, such as homonymy, by assigning different embeddings to the same word based on different contexts.[32]
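The contrast can be made concrete with a small sketch: a plain Levenshtein edit distance between the labels from the example above is large even though their meanings coincide, which is exactly the gap an embedding-based comparison is meant to close. The labels and the pure-Python implementation below are for illustration only:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                   # deletion
                            curr[j - 1] + 1,               # insertion
                            prev[j - 1] + (ca != cb)))     # substitution
        prev = curr
    return prev[-1]

# "contribution" and "paper" denote the same concept in the conference
# ontologies mentioned above, yet their edit distance is large, so a purely
# lexical matcher would miss the correspondence; an embedding model would
# instead place the two labels close together in vector space.
print(levenshtein("contribution", "paper"))  # large despite identical meaning
```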
Measures
Topological similarity
There are essentially two types of approaches that calculate topological similarity between ontological concepts:
- Edge-based: use the edges and their types as the data source;
- Node-based: the main data sources are the nodes and their properties.
Other measures calculate the similarity between ontological instances:
- Pairwise: measure functional similarity between two instances by combining the semantic similarities of the concepts they represent
- Groupwise: calculate the similarity directly, without combining the semantic similarities of the concepts they represent
Some examples:
Edge-based
- Pekar et al.[33]
- Cheng and Cline[34]
- Wu et al.[35]
- Del Pozo et al.[36]
- IntelliGO: Benabderrahmane et al.[6]
Node-based
- Resnik[37]
- based on the notion of information content. The information content of a concept (term or word) is the negative logarithm of the probability of finding the concept in a given corpus.
- considers only the information content of the lowest common subsumer (LCS). A lowest common subsumer is the concept in a lexical taxonomy (e.g., WordNet) that has the shortest distance from the two concepts compared. For example, animal and mammal are both subsumers of cat and dog, but mammal is a lower (more specific) subsumer than animal for them.
- Lin[38]
- based on Resnik's similarity.
- considers the information content of the lowest common subsumer (LCS) and of the two compared concepts.
- Maguitman, Menczer, Roinestad and Vespignani[39]
- Generalizes Lin's similarity to arbitrary ontologies (graphs).
- Jiang and Conrath[40]
- based on Resnik's similarity.
- considers the information content of the lowest common subsumer (LCS) and of the two compared concepts to calculate the distance between the two concepts; the distance is later used in computing the similarity measure (a sketch of these information-content measures follows this list).
- Align, Disambiguate, and Walk: Random walks on Semantic Networks[41]
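The information-content measures above (Resnik, Lin, and Jiang-Conrath) can be sketched compactly once the information content of each concept and the lowest common subsumer are known; the corpus probabilities below are invented for illustration:

```python
import math

def information_content(prob: float) -> float:
    """IC(c) = -log p(c), where p(c) is the probability of encountering
    concept c (or any of its descendants) in a corpus."""
    return -math.log(prob)

# Hypothetical corpus probabilities for a tiny taxonomy fragment.
prob = {"entity": 1.0, "animal": 0.2, "mammal": 0.05, "cat": 0.01, "dog": 0.01}
ic = {c: information_content(p) for c, p in prob.items()}

def resnik(c1, c2, lcs):
    # Similarity is the information content of the lowest common subsumer.
    return ic[lcs]

def lin(c1, c2, lcs):
    # Shared information content, scaled by the concepts' own IC (in [0, 1]).
    return 2 * ic[lcs] / (ic[c1] + ic[c2])

def jiang_conrath_distance(c1, c2, lcs):
    # A distance rather than a similarity; smaller means more similar.
    return ic[c1] + ic[c2] - 2 * ic[lcs]

print(resnik("cat", "dog", "mammal"))
print(lin("cat", "dog", "mammal"))
print(jiang_conrath_distance("cat", "dog", "mammal"))
```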
Node-and-relation-content-based
- applicable to ontologies
- consider properties (content) of nodes
- consider types (content) of relations
- based on eTVSM[42]
- based on Resnik's similarity[43]
Pairwise
- maximum of the pairwise similarities
- composite average in which only the best-matching pairs are considered (best-match average); see the sketch below
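A minimal sketch of these two pairwise combination strategies, applied to a made-up matrix of concept-to-concept similarities:

```python
def pairwise_max(sim_matrix):
    """Maximum over all pairwise concept similarities."""
    return max(max(row) for row in sim_matrix)

def best_match_average(sim_matrix):
    """Average of each concept's best-matching pair, taken in both directions."""
    row_best = [max(row) for row in sim_matrix]
    col_best = [max(col) for col in zip(*sim_matrix)]
    return (sum(row_best) / len(row_best) + sum(col_best) / len(col_best)) / 2

# sim_matrix[i][j]: similarity between the i-th concept annotating instance A
# and the j-th concept annotating instance B (values are made up).
sim_matrix = [[0.9, 0.2, 0.1],
              [0.3, 0.7, 0.4]]
print(pairwise_max(sim_matrix))        # 0.9
print(best_match_average(sim_matrix))  # ~0.73
```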
Groupwise
Statistical similarity
Statistical similarity approaches can be learned from data, or predefined. Similarity learning can often outperform predefined similarity measures. Broadly speaking, these approaches build a statistical model of documents, and use it to estimate similarity.
- LSA (latent semantic analysis):[44][45] (+) vector-based, adds vectors to measure multi-word terms; (−) non-incremental vocabulary, long pre-processing times
- PMI (pointwise mutual information): (+) large vocab, because it uses any search engine (like Google); (−) cannot measure relatedness between whole sentences or documents
- SOC-PMI (second-order co-occurrence pointwise mutual information): (+) sorts lists of important neighbor words from a large corpus; (−) cannot measure relatedness between whole sentences or documents
- GLSA (generalized latent semantic analysis): (+) vector-based, adds vectors to measure multi-word terms; (−) non-incremental vocabulary, long pre-processing times
- ICAN (incremental construction of an associative network): (+) incremental, network-based measure, good for spreading activation, accounts for second-order relatedness; (−) cannot measure relatedness between multi-word terms, long pre-processing times
- NGD (normalized Google distance): (+) large vocab, because it uses any search engine (like Google); (−) can measure relatedness between whole sentences or documents, but the larger the sentence or document, the more ingenuity is required (Cilibrasi & Vitanyi, 2007);[46] a count-based sketch of PMI and NGD follows this list
- TSS (Twitter semantic similarity):[47] large vocab, because it uses online tweets from Twitter to compute the similarity. It has high temporal resolution, which allows it to capture high-frequency events. Open source.
- NCD (normalized compression distance)
- ESA (explicit semantic analysis) based on Wikipedia and the ODP
- SSA (salient semantic analysis)[48] which indexes terms using salient concepts found in their immediate context.
- n° of Wikipedia (noW),[49] inspired by the game Six Degrees of Wikipedia,[50] is a distance metric based on the hierarchical structure of Wikipedia. A directed acyclic graph is first constructed and later Dijkstra's shortest-path algorithm is employed to determine the noW value between two terms as the geodesic distance between the corresponding topics (i.e., nodes) in the graph.
- VGEM (vector generation of an explicitly-defined multidimensional semantic space):[51] (+) incremental vocab, can compare multi-word terms; (−) performance depends on choosing specific dimensions
- SimRank
- NASARI:[52] Sparse vector representations constructed by applying the hypergeometric distribution over the Wikipedia corpus in combination with BabelNet taxonomy. Cross-lingual similarity is currently also possible thanks to the multilingual and unified extension.[53]
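As a count-based sketch of PMI and NGD (referenced in the list above), the example below computes both from hypothetical search-engine hit counts; the counts and the total number of indexed pages are made-up values:

```python
import math

# Hypothetical page-hit counts from a search engine with N indexed pages.
N = 1e10              # assumed total number of indexed pages
f_x, f_y = 2e6, 1e6   # hits for "car" and "bus" (made-up numbers)
f_xy = 4e5            # hits for pages containing both terms

# Pointwise mutual information: log of how much more often the two terms
# co-occur than they would if they occurred independently.
pmi = math.log((f_xy / N) / ((f_x / N) * (f_y / N)), 2)

# Normalized Google distance (Cilibrasi & Vitanyi, 2007): 0 means the terms
# always co-occur; larger values mean they are less related.
ngd = (max(math.log(f_x), math.log(f_y)) - math.log(f_xy)) / \
      (math.log(N) - min(math.log(f_x), math.log(f_y)))

print(pmi, ngd)
```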
Semantics-based similarity
- Marker passing: Combining lexical decomposition for automated ontology creation and marker passing, the approach of Fähndrich et al. introduces a new type of semantic similarity measure.[54] Here markers are passed from the two target concepts, carrying an amount of activation. This activation might increase or decrease depending on the weight of the relations through which the concepts are connected. This combines edge- and node-based approaches and includes connectionist reasoning with symbolic information.
- Good common subsumer (GCS)-based semantic similarity measure[55]
Semantics similarity networks
- A semantic similarity network (SSN) is a special form of semantic network designed to represent concepts and their semantic similarity. Its main contribution is reducing the complexity of calculating semantic distances. Bendeck (2004, 2008) introduced the concept of semantic similarity networks (SSN) as the specialization of a semantic network to measure semantic similarity from ontological representations.[56] Implementations include genetic information handling.
Gold standards
Researchers have collected datasets with similarity judgements on pairs of words, which are used to evaluate the cognitive plausibility of computational measures. The gold standard up to today is a list of 65 word pairs compiled in 1965, for which humans judged the similarity of each pair.[57][58]
See also
- Analogy
- Componential analysis
- Coherence (linguistics)
- Levenshtein distance
- Semantic differential
- Semantic similarity network
- Terminology extraction
- Word2vec
References
- ↑ Harispe S.; Ranwez S.; Janaqi S.; Montmain J. (2015). "Semantic Similarity from Natural Language and Ontology Analysis". Synthesis Lectures on Human Language Technologies 8 (1): 1–254. doi:10.2200/S00639ED1V01Y201504HLT027.
- ↑ Feng Y.; Bagheri E.; Ensan F.; Jovanovic J. (2017). "The state of the art in semantic relatedness: a framework for comparison". Knowledge Engineering Review 32: 1–30. doi:10.1017/S0269888917000029.
- ↑ A. Ballatore; M. Bertolotto; D.C. Wilson (2014). "An evaluative baseline for geo-semantic relatedness and similarity". GeoInformatica 18 (4): 747–767. doi:10.1007/s10707-013-0197-8. Bibcode: 2014arXiv1402.3371B.
- ↑ 4.0 4.1 Budanitsky, Alexander; Hirst, Graeme (2001). "Semantic distance in WordNet: An experimental, application-oriented evaluation of five measures". Workshop on WordNet and Other Lexical Resources, Second Meeting of the North American Chapter of the Association for Computational Linguistics (Pittsburgh). https://ftp.cs.toronto.edu/pub/gh/Budanitsky+Hirst-2001.pdf.
- ↑ Guzzi, Pietro Hiram; Mina, Marco; Cannataro, Mario; Guerra, Concettina (2012). "Semantic similarity analysis of protein data: assessment with biological features and issues". Briefings in Bioinformatics 13 (5): 569–585. doi:10.1093/bib/bbr066. PMID 22138322.
- ↑ 6.0 6.1 Benabderrahmane, Sidahmed; Smail Tabbone, Malika; Poch, Olivier; Napoli, Amedeo; Devignes, Marie-Domonique. (2010). "IntelliGO: a new vector-based semantic similarity measure including annotation origin". BMC Bioinformatics 11: 588. doi:10.1186/1471-2105-11-588. PMID 21122125.
- ↑ Chicco, D; Masseroli, M (2015). "Software suite for gene and protein annotation prediction and similarity search". IEEE/ACM Transactions on Computational Biology and Bioinformatics 12 (4): 837–843. doi:10.1109/TCBB.2014.2382127. PMID 26357324. https://doi.org/10.1109/TCBB.2014.2382127.
- ↑ Köhler, S; Schulz, MH; Krawitz, P; Bauer, S; Dolken, S; Ott, CE; Mundlos, C; Horn, D et al. (2009). "Clinical diagnostics in human genetics with semantic similarity searches in ontologies". American Journal of Human Genetics 85 (4): 457–64. doi:10.1016/j.ajhg.2009.09.003. PMID 19800049.
- ↑ "ProteInOn". http://xldb.fc.ul.pt/biotools/proteinon/.
- ↑ "CMPSim". http://xldb.di.fc.ul.pt/biotools/cmpsim/.
- ↑ "CESSM". http://xldb.fc.ul.pt/biotools/cessm/.
- ↑ Janowicz, K.; Raubal, M.; Kuhn, W. (2011). "The semantics of similarity in geographic information retrieval". Journal of Spatial Information Science 2 (2): 29–57. doi:10.5311/josis.2011.2.3. http://www.josis.org/index.php/josis/article/view/26/23.
- ↑ "Algorithm, implementation and application of the SIM-DL similarity server". Second International Conference on Geospatial Semantics (GEOS 2007). 2007. pp. 128–145.
- ↑ "Geo-Net-PT Similarity Calculator". http://xldb.fc.ul.pt/wiki/Geographic_Similarity_calculator_GeoSSM.
- ↑ "Geo-Net-PT". http://xldb.fc.ul.pt/wiki/Geo-Net-PT_02_in_English.
- ↑ "OSM Semantic Network". OSM Wiki.
- ↑ A. Ballatore; D.C. Wilson; M. Bertolotto. "Geographic Knowledge Extraction and Semantic Similarity in OpenStreetMap". Knowledge and Information Systems: 61–81. http://irserver.ucd.ie/bitstream/handle/10197/3973/2012_-_Geographic_Knowledge_Extraction_and_Semantic_Similarity_in_OpenStreetMap_-_Ballatore_et_al.pdf?sequence=1.
- ↑ Kaur, I.; Hornof, A.J. (2005). "A comparison of LSA, wordNet and PMI-IR for predicting user click behavior". Proceedings of the SIGCHI Conference on Human Factors in Computing Systems. pp. 51–60. doi:10.1145/1054972.1054980. ISBN 978-1-58113-998-3.
- ↑ Similarity-based Learning Methods for the Semantic Web (C. d'Amato, PhD Thesis)
- ↑ Gracia, J.; Mena, E. (2008). "Web-Based Measure of Semantic Relatedness". Proceedings of the 9th International Conference on Web Information Systems Engineering (WISE '08): 136–150. http://disi.unitn.it/~p2p/RelatedWork/Matching/Gracia_wise08.pdf.
- ↑ Raveendranathan, P. (2005). Identifying Sets of Related Words from the World Wide Web. Master of Science Thesis, University of Minnesota Duluth.
- ↑ Wubben, S. (2008). Using free link structure to calculate semantic relatedness. In ILK Research Group Technical Report Series, nr. 08-01, 2008.
- ↑ Juvina, I., van Oostendorp, H., Karbor, P., & Pauw, B. (2005). Towards modeling contextual information in web navigation. In B. G. Bara & L. Barsalou & M. Bucciarelli (Eds.), 27th Annual Meeting of the Cognitive Science Society, CogSci2005 (pp. 1078–1083). Austin, Tx: The Cognitive Science Society, Inc.
- ↑ Navigli, R., Lapata, M. (2007). Graph Connectivity Measures for Unsupervised Word Sense Disambiguation, Proc. of the 20th International Joint Conference on Artificial Intelligence (IJCAI 2007), Hyderabad, India, January 6–12th, 2007, pp. 1683–1688.
- ↑ Pirolli, P. (2005). "Rational analyses of information foraging on the Web". Cognitive Science 29 (3): 343–373. doi:10.1207/s15516709cog0000_20. PMID 21702778.
- ↑ Pirolli, P.; Fu, W.-T. (2003). "SNIF-ACT: A model of information foraging on the World Wide Web". Lecture Notes in Computer Science. 2702. pp. 45–54. doi:10.1007/3-540-44963-9_8. ISBN 978-3-540-40381-4.
- ↑ Turney, P. (2001). Mining the Web for Synonyms: PMI versus LSA on TOEFL. In L. De Raedt & P. Flach (Eds.), Proceedings of the Twelfth European Conference on Machine Learning (ECML-2001) (pp. 491–502). Freiburg, Germany.
- ↑ Reimers, Nils; Gurevych, Iryna (November 2019). "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks". Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). Hong Kong, China: Association for Computational Linguistics. pp. 3982–3992. doi:10.18653/v1/D19-1410. https://www.aclweb.org/anthology/D19-1410.
- ↑ Mueller, Jonas; Thyagarajan, Aditya (2016-03-05). "Siamese Recurrent Architectures for Learning Sentence Similarity" (in en). Thirtieth AAAI Conference on Artificial Intelligence 30. doi:10.1609/aaai.v30i1.10350. https://www.aaai.org/ocs/index.php/AAAI/AAAI16/paper/view/12195.
- ↑ Kiros, Ryan; Zhu, Yukun; Salakhutdinov, Russ R; Zemel, Richard; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (2015), Cortes, C.; Lawrence, N. D.; Lee, D. D. et al., eds., "Skip-Thought Vectors", Advances in Neural Information Processing Systems 28 (Curran Associates, Inc.): pp. 3294–3302, http://papers.nips.cc/paper/5950-skip-thought-vectors.pdf, retrieved 2020-03-13
- ↑ Cheatham, Michelle; Hitzler, Pascal (2013). "Advanced Information Systems Engineering". in Alani, Harith; Kagal, Lalana; Fokoue, Achille et al. (in en). The Semantic Web – ISWC 2013. 7908. Berlin, Heidelberg: Springer. pp. 294–309. doi:10.1007/978-3-642-41338-4_19. ISBN 978-3-642-41338-4.
- ↑ 32.0 32.1 Sousa, G., Lima, R., & Trojahn, C. (2022). An eye on representation learning in ontology matching. OM@ISWC.
- ↑ Pekar, Viktor; Staab, Steffen (2002). "Taxonomy learning". Proceedings of the 19th international conference on Computational linguistics –. 1. pp. 1–7. doi:10.3115/1072228.1072318.
- ↑ Cheng, J; Cline, M; Martin, J; Finkelstein, D; Awad, T; Kulp, D; Siani-Rose, MA (2004). "A knowledge-based clustering algorithm driven by Gene Ontology". Journal of Biopharmaceutical Statistics 14 (3): 687–700. doi:10.1081/BIP-200025659. PMID 15468759.
- ↑ Wu, H; Su, Z; Mao, F; Olman, V; Xu, Y (2005). "Prediction of functional modules based on comparative genome analysis and Gene Ontology application". Nucleic Acids Research 33 (9): 2822–37. doi:10.1093/nar/gki573. PMID 15901854.
- ↑ Del Pozo, Angela; Pazos, Florencio; Valencia, Alfonso (2008). "Defining functional distances over Gene Ontology". BMC Bioinformatics 9: 50. doi:10.1186/1471-2105-9-50. PMID 18221506.
- ↑ Philip Resnik (1995). Chris S. Mellish. ed. "Using information content to evaluate semantic similarity in a taxonomy". Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI'95) 1: 448–453. Bibcode: 1995cmp.lg...11007R.
- ↑ Dekang Lin. 1998. An Information-Theoretic Definition of Similarity. In Proceedings of the Fifteenth International Conference on Machine Learning (ICML '98), Jude W. Shavlik (Ed.). Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 296–304
- ↑ Ana Gabriela Maguitman, Filippo Menczer, Heather Roinestad, Alessandro Vespignani: Algorithmic detection of semantic similarity. WWW 2005: 107–116
- ↑ J. J. Jiang and D. W. Conrath. Semantic Similarity Based on Corpus Statistics and Lexical Taxonomy. In International Conference on Research on Computational Linguistics (ROCLING X), pages 9008+, September 1997
- ↑ M. T. Pilehvar, D. Jurgens and R. Navigli. Align, Disambiguate and Walk: A Unified Approach for Measuring Semantic Similarity.. Proc. of the 51st Annual Meeting of the Association for Computational Linguistics (ACL 2013), Sofia, Bulgaria, August 4–9, 2013, pp. 1341–1351.
- ↑ Dong, Hai (2009). "A Hybrid Concept Similarity Measure Model for Ontology Environment". On the Move to Meaningful Internet Systems: OTM 2009 Workshops. Lecture Notes in Computer Science. 5872. pp. 848–857. doi:10.1007/978-3-642-05290-3_103. ISBN 978-3-642-05289-7. Bibcode: 2009LNCS.5872..848D. https://www.researchgate.net/publication/44241193.
- ↑ Dong, Hai (2011). "A context-aware semantic similarity model for ontology environments". Concurrency and Computation: Practice and Experience 23 (2): 505–524. doi:10.1002/cpe.1652. https://www.researchgate.net/publication/220105255.
- ↑ Landauer, T. K.; Dumais, S. T. (1997). "A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge". Psychological Review 104 (2): 211–240. doi:10.1037/0033-295x.104.2.211. http://www.stat.cmu.edu/%7Ecshalizi/350/2008/readings/Landauer-Dumais.pdf.
- ↑ Landauer, T. K.; Foltz, P. W.; Laham, D. (1998). "Introduction to Latent Semantic Analysis". Discourse Processes 25 (2–3): 259–284. doi:10.1080/01638539809545028. http://lsa.colorado.edu/papers/dp1.LSAintro.pdf.
- ↑ "Google Similarity Distance". http://iknowate.blogspot.com/2011/10/google-similarity-distance.html.
- ↑ Carrillo, F.; Cecchi, G. A.; Sigman, M.; Slezak, D. F. (2015). "Fast Distributed Dynamics of Semantic Networks via Social Media". Computational Intelligence and Neuroscience 2015: 712835. doi:10.1155/2015/712835. PMID 26074953. PMC 4449913. http://downloads.hindawi.com/journals/cin/2015/712835.pdf.
- ↑ "Samer Hassan". http://www.samerhassan.com/images/4/48/Hassan.pdf.[|permanent dead link|dead link}}]
- ↑ Wilson Wong; Wei Liu; Mohammed Bennamoun (November 2006). "Featureless similarities for terms clustering using tree-traversing ants". PCAR '06: Proceedings of the 2006 international symposium on Practical cognitive agents and robots. pp. 177–191. doi:10.1145/1232425.1232448. http://doi.acm.org/10.1145/1232425.1232448.
- ↑ "6 Degrees of Wikipedia". May 28, 2008. http://chronicle.com/wiredcampus/article/3041/six-degrees-of-wikipedia.
- ↑ "Defining the Dimensions of the Human Semantic Space". 2008. https://raw.githubusercontent.com/lyoshenka/papers/master/pp718-veksler.pdf.
- ↑ J. Camacho-Collados; M. T. Pilehvar; R. Navigli (2015). "NASARI: a Novel Approach to a Semantically-Aware Representation of Items". Proceedings of the North American Chapter of the Association of Computational Linguistics (NAACL 2015). Denver, US. pp. 567–577. http://aclweb.org/anthology/N/N15/N15-1059.pdf.
- ↑ J. Camacho-Collados; M. T. Pilehvar; R. Navigli (July 27–29, 2015). "A Unified Multilingual Semantic Representation of Concepts". Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (ACL 2015). Beijing, China. pp. 741–751. http://aclweb.org/anthology/P/P15/P15-1072.pdf.
- ↑ Fähndrich J.; Weber S.; Ahrndt S. (2016). "Multiagent System Technologies". MATES 2016. 9872. Springer.
- ↑ C. d'Amato; S. Staab; N. Fanizzi (2008). "Knowledge Engineering: Practice and Patterns". pp. 48–63. doi:10.1007/978-3-540-87696-0_7.
- ↑ Bendeck, F. (2008). WSM-P Workflow Semantic Matching Platform, PhD dissertation, University of Trier, Germany. Verlag Dr. Hut. ASIN 3899638549.
- ↑ Rubenstein, Herbert, and John B. Goodenough. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633, 1965.
- ↑ For a list of datasets and an overview of the state of the art, see https://www.aclweb.org/.
- ↑ Rubenstein, Herbert; Goodenough, John B. (1965-10-01). "Contextual correlates of synonymy". Communications of the ACM 8 (10): 627–633. doi:10.1145/365628.365657.
- ↑ Miller, George A.; Charles, Walter G. (1991-01-01). "Contextual correlates of semantic similarity". Language and Cognitive Processes 6 (1): 1–28. doi:10.1080/01690969108406936. ISSN 0169-0965.
- ↑ "Placing search in context" (in EN). ACM Transactions on Information Systems 20: 116–131. 2002-01-01. doi:10.1145/503104.503110.
Sources
- Chicco, D; Masseroli, M (2015). "Software suite for gene and protein annotation prediction and similarity search". IEEE/ACM Transactions on Computational Biology and Bioinformatics 12 (4): 837–843. doi:10.1109/TCBB.2014.2382127. PMID 26357324. https://doi.org/10.1109/TCBB.2014.2382127.
- Cilibrasi, R.L.; Vitanyi, P.M.B. (2007). "The Google Similarity Distance". IEEE Trans. Knowledge and Data Engineering 19 (3): 370–383. doi:10.1109/TKDE.2007.48.
- Dumais, S (2003). "Data-driven approaches to information access". Cognitive Science 27 (3): 491–524. doi:10.1207/s15516709cog2703_7.
- Gabrilovich, E. and Markovitch, S. (2007). Computing Semantic Relatedness using Wikipedia-based Explicit Semantic Analysis, Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI), Hyderabad, India, January 2007.
- Lee, M. D., Pincombe, B., & Welsh, M. (2005). An empirical evaluation of models of text document similarity. In B. G. Bara & L. Barsalou & M. Bucciarelli (Eds.), 27th Annual Meeting of the Cognitive Science Society, CogSci2005 (pp. 1254–1259). Austin, Tx: The Cognitive Science Society, Inc.
- Lemaire, B., & Denhiére, G. (2004). Incremental construction of an associative network from a corpus. In K. D. Forbus & D. Gentner & T. Regier (Eds.), 26th Annual Meeting of the Cognitive Science Society, CogSci2004. Hillsdale, NJ: Lawrence Erlbaum Publisher.
- Lindsey, R.; Veksler, V.D.; Grintsvayg, A.; Gray, W.D. (2007). "The Effects of Corpus Selection on Measuring Semantic Relatedness". Proceedings of the 8th International Conference on Cognitive Modeling, Ann Arbor, MI. http://sitemaker.umich.edu/iccm2007.org/files/lindsey__veksler__grintsvayg____gray.pdf.
- Navigli, R., Lapata, M. (2010). "An Experimental Study of Graph Connectivity for Unsupervised Word Sense Disambiguation". IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 32(4), IEEE Press, 2010, pp. 678–692.
- Veksler, V.D.; Gray, W.D. (2006). "Test Case Selection for Evaluating Measures of Semantic Distance". Proceedings of the 28th Annual Meeting of the Cognitive Science Society, CogSci2006. http://csjarchive.cogsci.rpi.edu/Proceedings/2006/docs/p2624.pdf.
- Wong, W., Liu, W. & Bennamoun, M. (2008) Featureless Data Clustering. In: M. Song and Y. Wu; Handbook of Research on Text and Web Mining Technologies; IGI Global. ISBN:978-1-59904-990-8 (the use of NGD and noW for term and URI clustering)
External links
Survey articles
- Conference article: C. d'Amato, S. Staab, N. Fanizzi. 2008. On the Influence of Description Logics Ontologies on Conceptual Similarity. In Proceedings of the 16th international conference on Knowledge Engineering: Practice and Patterns Pages 48 – 63. Acitrezza, Italy, Springer-Verlag
- Journal article on the more general topic of relatedness, also including similarity: Z. Zhang, A. Gentile, F. Ciravegna. 2013. Recent advances in methods of lexical semantic relatedness – a survey. Natural Language Engineering 19 (4), 411–479, Cambridge University Press
- Book: S. Harispe, S. Ranwez, S. Janaqi, J. Montmain. 2015. Semantic Similarity from Natural Language and Ontology Analysis, Morgan & Claypool Publishers.
Original source: https://en.wikipedia.org/wiki/Semantic similarity.