Concept mining

Concept mining is the activity of extracting concepts from artifacts. Solutions to the task typically involve aspects of artificial intelligence and statistics, such as data mining and text mining.[1][2] Because artifacts are typically loosely structured sequences of words and other symbols (rather than concepts), the problem is nontrivial, but it can provide powerful insights into the meaning, provenance and similarity of documents.

Methods

Traditionally, the conversion of words to concepts has been performed using a thesaurus,[3] and computational techniques tend to follow the same approach. The thesauri used are either created specifically for the task or derived from a pre-existing language model, usually related to Princeton's WordNet.
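
As an illustrative sketch (not part of the original article), the word-to-concept lookup can be demonstrated with NLTK's WordNet interface; the word "bank" and the choice of NLTK here are assumptions for the example, not a prescribed tool:

```python
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

# Map a surface word to its candidate concepts (WordNet synsets).
# Each synset is one possible concept the word may denote.
for synset in wn.synsets("bank"):
    print(synset.name(), "-", synset.definition())
```

The output lists several senses (the river bank, the financial institution, and so on), illustrating the ambiguity discussed next.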

The mappings of words to concepts[4] are often ambiguous: typically, each word in a given language relates to several possible concepts. Humans use context to disambiguate the various meanings of a given piece of text, whereas available machine translation systems cannot easily infer context.

For the purposes of concept mining, however, these ambiguities tend to matter less than they do in machine translation, because in large documents the ambiguities tend to even out, much as is the case with text mining.

Many disambiguation techniques may be used. Examples include linguistic analysis of the text and the use of word and concept association frequency information that may be inferred from large text corpora. More recently, techniques based on the semantic similarity between the possible concepts and the context have appeared and gained interest in the scientific community.
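
As a minimal sketch of one such disambiguation technique (the article does not prescribe a specific one), the simplified Lesk algorithm shipped with NLTK picks the sense whose dictionary gloss overlaps most with the surrounding context; the example sentence is an assumption:

```python
from nltk.wsd import lesk  # requires: nltk.download('wordnet')

# Simplified Lesk: choose the WordNet sense whose gloss shares the most
# words with the context surrounding the ambiguous term.
context = "I went to the bank to deposit my paycheck".split()
sense = lesk(context, "bank")
if sense is not None:
    print(sense.name(), "-", sense.definition())
```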

Applications

Detecting and indexing similar documents in large corpora

One of the spin-offs of calculating document statistics in the concept domain, rather than the word domain, is that concepts form natural tree structures based on hypernymy and meronymy. These structures can be used to generate simple tree-membership statistics that locate any document in a Euclidean concept space. If the size of a document is also treated as another dimension of this space, an extremely efficient indexing system can be created. This technique is in commercial use for locating similar legal documents in a corpus of 2.5 million documents.
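
A rough sketch of the idea, assuming WordNet hypernym trees as the concept hierarchy and simple counts as the tree-membership statistics (the function name, root concepts and sample document below are hypothetical):

```python
from collections import Counter
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def concept_vector(tokens, root_concepts):
    """Count how many tokens fall under each root concept ("tree
    membership"), then append document size as an extra dimension
    of the concept space."""
    counts = Counter()
    for token in tokens:
        synsets = wn.synsets(token, pos=wn.NOUN)
        if not synsets:
            continue
        # Collect every hypernym on every path of the first sense (a naive choice).
        hypernyms = {h.name() for path in synsets[0].hypernym_paths() for h in path}
        for root in root_concepts:
            if root in hypernyms:
                counts[root] += 1
    return [counts[root] for root in root_concepts] + [len(tokens)]

roots = ["physical_entity.n.01", "abstraction.n.06", "person.n.01"]
document = "the court ruled that the contract between the parties was void".split()
print(concept_vector(document, roots))
```

Documents whose vectors lie close together in this space are candidates for being similar, which is the basis of the indexing described above.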

Clustering documents by topic

Standard numerical clustering techniques may be used in the "concept space" described above to locate and index documents by inferred topic. These are numerically far more efficient than their text-mining cousins, and tend to behave more intuitively, in that they map better to the similarity measures a human would generate.
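
For illustration only, a standard clusterer such as scikit-learn's k-means can be applied directly to concept-space vectors; the vectors below are made-up stand-ins for the output of a pipeline like the earlier sketch:

```python
from sklearn.cluster import KMeans

# Hypothetical concept-space vectors: tree-membership counts plus document size.
concept_vectors = [
    [3, 1, 0, 120],
    [2, 0, 1, 110],
    [0, 4, 5, 300],
    [1, 5, 4, 280],
]
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(concept_vectors)
print(kmeans.labels_)  # inferred topic (cluster) for each document
```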

References

  1. Yuen-Hsien Tseng, Chun-Yen Chang, Shu-Nu Chang Rundgren, and Carl-Johan Rundgren, "Mining Concept Maps from News Stories for Measuring Civic Scientific Literacy in Media", Computers and Education, Vol. 55, No. 1, August 2010, pp. 165–177.
  2. Li, Keqian; Zha, Hanwen; Su, Yu; Yan, Xifeng (November 2018). "Concept Mining via Embedding". 2018 IEEE International Conference on Data Mining (ICDM). IEEE. pp. 267–276. doi:10.1109/icdm.2018.00042. ISBN 978-1-5386-9159-5.
  3. Yuen-Hsien Tseng, "Automatic Thesaurus Generation for Chinese Documents", Journal of the American Society for Information Science and Technology, Vol. 53, No. 13, Nov. 2002, pp. 1130–1138.
  4. Yuen-Hsien Tseng, "Generic Title Labeling for Clustered Documents", Expert Systems With Applications, Vol. 37, No. 3, 15 March 2010, pp. 2247–2254.