Semantic analytics
Semantic analytics, also termed semantic relatedness, is the use of ontologies to analyze content in web resources. This field of research combines text analytics and Semantic Web technologies like RDF. Semantic analytics measures the relatedness of different ontological concepts.
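As an illustration of measuring relatedness between ontological concepts, the following minimal Python sketch uses the WordNet ontology through the NLTK library (an assumption of this example, not a method prescribed by the field); it requires NLTK and the WordNet corpus to be installed, and Wu–Palmer similarity is only one of many possible relatedness measures.

```python
# Minimal sketch: relatedness of ontological concepts via WordNet.
# Assumes `pip install nltk` and nltk.download("wordnet") have been run.
from nltk.corpus import wordnet as wn

dog = wn.synset("dog.n.01")   # concept "dog"
cat = wn.synset("cat.n.01")   # concept "cat"
car = wn.synset("car.n.01")   # concept "car"

# Wu-Palmer similarity: higher scores indicate more closely related concepts.
print(dog.wup_similarity(cat))  # relatively high: both are animals
print(dog.wup_similarity(car))  # lower: unrelated concepts
```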
Academic research groups with active projects in this area include the Kno.e.sis Center at Wright State University.
History
An important early milestone occurred in 1996, although the historical progression of these algorithms is largely a matter of interpretation. In a seminal publication, Philip Resnik established that computers have the capacity to emulate human judgements of relatedness. Across multiple journals, subsequent papers reporting improvements to the accuracy of semantic relatedness computations each claimed to revolutionize the field. However, the lack of standard terminology throughout the late 1990s caused much miscommunication, prompting Budanitsky & Hirst to standardize the subject in 2006 with a survey that also set a framework for modern spelling and grammar analysis.[1]
In the early days of semantic analytics, obtaining a sufficiently large and reliable knowledge base was difficult. In 2006, Strube & Ponzetto demonstrated that Wikipedia could be used in semantic relatedness calculations.[2] Using a large knowledge base such as Wikipedia increases both the accuracy and the applicability of semantic analytics.
Methods
Given the subjective nature of the field, the methods used in semantic analytics depend on the domain of application. No single method is considered correct; however, one of the most generally effective and widely applicable methods is explicit semantic analysis (ESA),[3] developed by Evgeniy Gabrilovich and Shaul Markovitch in the late 2000s.[4] ESA uses machine learning techniques to build a semantic interpreter, which maps fragments of input text to a list of knowledge-base concepts, sorted by how related each concept is to that text.
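The Python sketch below illustrates the general idea behind ESA rather than Gabrilovich and Markovitch's actual implementation: a hypothetical toy corpus of three concept descriptions stands in for Wikipedia articles, an input text is interpreted as a vector of its similarities to those concepts, and relatedness between two texts is the cosine similarity of their concept vectors.

```python
# Simplified ESA-style sketch (illustrative only, not the published method).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical concept corpus standing in for Wikipedia articles.
concepts = {
    "Computer science": "algorithms computation programming software data",
    "Biology": "cells organisms evolution genetics species",
    "Music": "melody rhythm harmony instruments composition",
}

vectorizer = TfidfVectorizer()
concept_matrix = vectorizer.fit_transform(concepts.values())

def interpret(text):
    """Map a text to a concept vector: its similarity to every concept."""
    vec = vectorizer.transform([text])
    return cosine_similarity(vec, concept_matrix)[0]

def relatedness(text_a, text_b):
    """Relatedness of two texts = cosine similarity of their concept vectors."""
    return cosine_similarity([interpret(text_a)], [interpret(text_b)])[0, 0]

print(relatedness("genetic code of organisms", "evolution of species"))  # high
print(relatedness("genetic code of organisms", "rhythm and melody"))     # low
```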
Latent semantic analysis (LSA) is another common method; it does not use ontologies and considers only the text in the input space.
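A minimal LSA sketch, again illustrative rather than canonical: the latent dimensions are derived purely from the input documents by a truncated SVD of their TF-IDF matrix, with no external ontology or knowledge base involved.

```python
# Minimal LSA sketch: latent topics come only from the input documents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

documents = [
    "the cat sat on the mat",
    "dogs and cats are common pets",
    "stock markets fell sharply today",
    "investors worried about the markets",
]

tfidf = TfidfVectorizer().fit_transform(documents)
lsa = TruncatedSVD(n_components=2, random_state=0)  # two latent dimensions
doc_vectors = lsa.fit_transform(tfidf)

# Documents about similar subjects end up close together in the latent space.
print(doc_vectors)
```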
Applications
- Entity linking
- Ontology building / knowledge base population
- Search and query tasks
- Natural language processing
- Spoken dialog systems (e.g., Amazon Alexa, Google Assistant, Microsoft Cortana)
- Artificial intelligence
- Knowledge management
Applying semantic analysis methods can streamline the organizational processes of a knowledge management system. Academic libraries, for example, often use domain-specific applications to build more efficient organizational systems. By classifying scientific publications using semantic relatedness and Wikipedia, researchers help users find resources faster; search engines such as Semantic Scholar provide organized access to millions of articles.
See also
- Relationship extraction
- Semantic similarity
- Text analytics
References
- ↑ Budanitsky, Alexander, and Graeme Hirst. "Evaluating WordNet-Based Measures of Lexical Semantic Relatedness." Computational Linguistics 32, no. 1 (March 2006): 13–47. doi:10.1162/coli.2006.32.1.13.
- ↑ Strube, Michael, and Simone Paolo Ponzetto. "WikiRelate! Computing Semantic Relatedness Using Wikipedia." In Proceedings of the 21st National Conference on Artificial Intelligence (AAAI'06), Volume 2, 1419–1424. Boston, Massachusetts: AAAI Press, 2006.
- ↑ Zhang, Z., A. L. Gentile, and F. Ciravegna. "Recent Advances in Methods of Lexical Semantic Relatedness – A Survey." Natural Language Engineering 19, no. 4 (October 2013): 411–479.
- ↑ Gabrilovich, Evgeniy, and Shaul Markovitch. "Computing Semantic Relatedness Using Wikipedia-Based Explicit Semantic Analysis." In Proceedings of IJCAI 2007, 1606–1611. Retrieved October 9, 2016.
External links
Original source: https://en.wikipedia.org/wiki/Semantic_analytics