Collective classification
In network theory, collective classification is the simultaneous prediction of the labels for multiple objects, where each label is predicted using information about the object's observed features, the observed features and labels of its neighbors, and the unobserved labels of its neighbors.[1] Collective classification problems are defined in terms of networks of random variables, where the network structure determines the relationship between the random variables. Inference is performed on multiple random variables simultaneously, typically by propagating information between nodes in the network to perform approximate inference. Approaches that use collective classification can make use of relational information when performing inference. Examples of collective classification include predicting attributes (e.g., gender, age, or political affiliation) of individuals in a social network, classifying webpages in the World Wide Web, and inferring the research area of a paper in a scientific publication dataset.
Motivation and background
Traditionally, a major focus of machine learning has been solving classification problems. (For example, given a collection of e-mails, we wish to determine which are spam and which are not.) Many machine learning models for this task categorize each item independently, predicting the class labels separately. However, prediction accuracy for the labels whose values must be inferred can be improved with knowledge of the correct class labels for related items. For example, it is easier to predict the topic of a webpage if we know the topics of the webpages that link to it. Similarly, the chance of a particular word being a verb increases if we know that the previous word in the sentence is a noun; knowing the first few characters in a word can make it much easier to identify the remaining characters. Many researchers have proposed techniques that classify samples in a joint or collective manner, instead of treating each sample in isolation; these techniques have enabled significant gains in classification accuracy.[1][2]
Example
Consider the task of inferring the political affiliation of users in a social network, where some portion of these affiliations are observed, and the remainder are unobserved. Each user has local features, such as their profile information, and links exist between users who are friends in this social network. An approach that does not collectively classify users will consider each user in the network independently and use their local features to infer party affiliations. An approach which performs collective classification might assume that users who are friends tend to have similar political views, and could then jointly infer all unobserved party affiliations while making use of the rich relational structure of the social network.
Definition
Consider the semi-supervised learning problem of assigning labels to nodes in a network by using knowledge of a subset of the nodes' labels. Specifically, we are given a network represented by a graph [math]\displaystyle{ G }[/math] with a set of nodes [math]\displaystyle{ V }[/math] and an edge set [math]\displaystyle{ E }[/math] representing relationships among nodes. Each node [math]\displaystyle{ v_i\in V }[/math] is described by its attributes: a feature vector [math]\displaystyle{ x_i \in X }[/math] and its label (or class) [math]\displaystyle{ y_i\in Y }[/math].
[math]\displaystyle{ V }[/math] can further be divided into two sets of nodes: [math]\displaystyle{ L }[/math], the set of nodes for which we know the correct label values (observed variables), and [math]\displaystyle{ U }[/math], the set of nodes whose labels must be inferred. The collective classification task is to label the nodes in [math]\displaystyle{ U }[/math] with a label from the label set [math]\displaystyle{ Y=\{l_1,\cdots, l_q\} }[/math] introduced above.
In such settings, traditional classification algorithms assume that the data are drawn independently and identically from some distribution (i.i.d.), which means that the labels inferred for unobserved nodes are independent of each other. One does not make this assumption when performing collective classification. Instead, there are three distinct types of correlations that can be utilized to determine the classification, or label, of a given node [math]\displaystyle{ v }[/math]:
- The correlations between the label of [math]\displaystyle{ v }[/math] and the observed attributes of [math]\displaystyle{ v }[/math]. Traditional iid classifiers which make use of feature vectors are an example of approaches that use this correlation.
- The correlations between the label of [math]\displaystyle{ v }[/math] and the observed attributes (including observed labels) of nodes in the neighborhood of [math]\displaystyle{ v }[/math].
- The correlations between the label of [math]\displaystyle{ v }[/math] and the unobserved labels of objects in the neighborhood of [math]\displaystyle{ v }[/math].
Collective classification refers to the combined classification of a set of interlinked objects using the three types of correlations listed above.
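As a minimal illustration of this setup, the following sketch represents a small network with observed and unobserved labels. The node names, features, and labels are hypothetical toy data invented for illustration:

```python
# Minimal sketch of the collective classification setup described above.
# All data here is hypothetical toy data for illustration.

# Graph G = (V, E) as an adjacency list.
neighbors = {
    "a": ["b", "c"],
    "b": ["a", "c"],
    "c": ["a", "b", "d"],
    "d": ["c"],
}

# Local feature vectors x_i for each node.
features = {
    "a": [1.0, 0.0],
    "b": [0.9, 0.1],
    "c": [0.2, 0.8],
    "d": [0.1, 0.9],
}

# Observed labels (the set L); nodes absent from this dict form the set U.
observed_labels = {"a": "spam", "d": "not_spam"}
unobserved = [v for v in neighbors if v not in observed_labels]
print(unobserved)  # ['b', 'c']
```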
Methods
Several approaches to collective classification exist. The two major families are iterative methods and methods based on probabilistic graphical models.[3]
Iterative methods
The general idea behind iterative methods is to iteratively combine and revise individual node predictions until an equilibrium is reached. When updating the prediction for an individual node is a fast operation, the running time of these methods is dominated by the number of iterations needed for convergence. Although convergence and optimality are not always mathematically guaranteed, in practice these approaches typically converge quickly to a good solution, depending on the graph structure and problem complexity. The methods presented in this section are representative of this iterative approach.
Label propagation
A natural assumption in network classification is that adjacent nodes are likely to have the same label (i.e., contagion or homophily). Under the label propagation method, the predictor for node [math]\displaystyle{ v_i }[/math] is a weighted average of the labels [math]\displaystyle{ Y_{N_i} }[/math] of its neighbors.[4]
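As a rough sketch of this idea, label propagation can be implemented by clamping the observed labels and repeatedly averaging neighbor label distributions. The toy graph and labels below are invented, and uniform edge weights stand in for the general weighted average:

```python
# A minimal sketch of label propagation on a toy graph (hypothetical data).
# Each unobserved node repeatedly takes the (uniformly weighted) average of
# its neighbors' label distributions; observed nodes are clamped.

neighbors = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"],
}
observed = {"a": [1.0, 0.0], "d": [0.0, 1.0]}  # one-hot label distributions

# Initialize every unobserved node with a uniform distribution.
beliefs = {v: observed.get(v, [0.5, 0.5]) for v in neighbors}

for _ in range(50):  # iterate until (approximate) convergence
    new_beliefs = {}
    for v, nbrs in neighbors.items():
        if v in observed:               # clamp known labels
            new_beliefs[v] = observed[v]
            continue
        # Average of neighbor distributions (uniform weights here).
        avg = [sum(beliefs[u][k] for u in nbrs) / len(nbrs) for k in range(2)]
        new_beliefs[v] = avg
    beliefs = new_beliefs

predictions = {v: max(range(2), key=lambda k: beliefs[v][k])
               for v in neighbors if v not in observed}
print(predictions)  # {'b': 0, 'c': 1}
```

In this toy chain, each unobserved node settles on the label of the nearer observed node, which is the homophily assumption at work.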
Iterative Classification Algorithms (ICA)
While label propagation is surprisingly effective, it may sometimes fail to capture complex relational dynamics. More sophisticated approaches can use richer predictors. Suppose we have a classifier [math]\displaystyle{ h }[/math] that has been trained to classify a node [math]\displaystyle{ v_i }[/math] given its features [math]\displaystyle{ X_i }[/math] and the features [math]\displaystyle{ X_{N_i} }[/math] and labels [math]\displaystyle{ Y_{N_i} }[/math] of its neighbors [math]\displaystyle{ N_i }[/math]. Iterative classification applies such a local classifier to each node, using current predictions and ground-truth information about the node's neighbors, and iterates until the local predictions converge to a global solution. Iterative classification is an "algorithmic framework," in that it is agnostic to the choice of predictor; this makes it a very versatile tool for collective classification.[5][6][7]
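The following sketch illustrates the framework, assuming scikit-learn's logistic regression as the pluggable local classifier [math]\displaystyle{ h }[/math]. The graph, node features, choice of relational feature (mean label of labeled neighbors), and labels are all hypothetical choices made for illustration:

```python
# A minimal sketch of the iterative classification framework, using
# scikit-learn's logistic regression as the pluggable local classifier h.
from sklearn.linear_model import LogisticRegression

# Hypothetical toy graph: two loosely connected clusters.
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
features = {0: [0.9], 1: [0.8], 2: [0.6], 3: [0.4], 4: [0.2], 5: [0.1]}
observed = {0: 1, 1: 1, 4: 0, 5: 0}              # known labels (set L)
unknown = [v for v in neighbors if v not in observed]

def relational_vector(v, labels):
    # Local features plus one relational feature: the mean label of the
    # node's currently labeled neighbors (0.5 if none are labeled yet).
    nbrs = [u for u in neighbors[v] if u in labels]
    rel = sum(labels[u] for u in nbrs) / len(nbrs) if nbrs else 0.5
    return features[v] + [rel]

# Train the local classifier h on the observed (ground-truth) nodes.
h = LogisticRegression()
X = [relational_vector(v, observed) for v in observed]
y = [observed[v] for v in observed]
h.fit(X, y)

# Iterate: re-predict every unknown node from the current label estimates
# until the predictions stop changing.
labels = dict(observed)
for _ in range(10):
    new = {v: int(h.predict([relational_vector(v, labels)])[0]) for v in unknown}
    if all(labels.get(v) == new[v] for v in unknown):
        break
    labels.update(new)
print({v: labels[v] for v in unknown})   # e.g. {2: 1, 3: 0}
```

Because the framework only calls `fit` and `predict`, the logistic regression here could be swapped for any other classifier, which is the sense in which iterative classification is agnostic to the choice of predictor.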
Collective classification with graphical models
Another approach to collective classification is to represent the problem with a graphical model and apply the learning and inference techniques developed for graphical models to arrive at the correct classifications. Graphical models are tools for joint, probabilistic inference, making them well suited to collective classification. They are characterized by a graphical representation of a probability distribution [math]\displaystyle{ P }[/math], in which random variables are nodes in a graph [math]\displaystyle{ G }[/math]. Graphical models can be broadly categorized by whether the underlying graph is directed (e.g., Bayesian networks or collections of local classifiers) or undirected (e.g., Markov random fields (MRFs)).
Gibbs sampling
Gibbs sampling is a general framework for approximating a distribution. It is a Markov chain Monte Carlo algorithm: it iteratively samples from the current estimate of the distribution, constructing a Markov chain whose stationary distribution is the target. In collective classification, the basic idea is to first run a burn-in phase for a fixed number of iterations, repeatedly sampling a label for each unobserved node [math]\displaystyle{ y_i }[/math] given the current values of the nodes in [math]\displaystyle{ N_i }[/math] using the local classifier [math]\displaystyle{ f }[/math]. After burn-in, sampling continues in the same way, but now count statistics are maintained for the number of times each label [math]\displaystyle{ l }[/math] was sampled for node [math]\displaystyle{ y_i }[/math]. After a predefined number of such samples has been collected, the final label assignment for each node is the label that was sampled for it most often.[8][9]
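A minimal sketch of this procedure follows. Here a hand-written homophily conditional stands in for the trained local classifier [math]\displaystyle{ f }[/math], and the graph, labels, and iteration counts are invented for illustration:

```python
# A minimal sketch of Gibbs sampling for collective classification
# on a hypothetical toy graph with binary labels.
import random
from collections import Counter

random.seed(0)

neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4], 4: [3]}
observed = {0: 1, 4: 0}                     # clamped, known labels
unknown = [v for v in neighbors if v not in observed]

def conditional(v, labels):
    # Local conditional P(y_v = 1 | y_{N_v}): a simple homophily model in
    # which the probability of label 1 grows with the fraction of neighbors
    # labeled 1. (A trained local classifier f would normally play this role.)
    frac = sum(labels[u] for u in neighbors[v]) / len(neighbors[v])
    return 0.9 * frac + 0.05                # smoothed away from 0 and 1

# Random initialization for unknown nodes; observed nodes keep their labels.
labels = {v: observed.get(v, random.randint(0, 1)) for v in neighbors}
counts = {v: Counter() for v in unknown}

for it in range(1100):
    for v in unknown:
        labels[v] = 1 if random.random() < conditional(v, labels) else 0
    if it >= 100:                           # discard the burn-in samples
        for v in unknown:
            counts[v][labels[v]] += 1

# Output the most frequently sampled label for each unknown node.
print({v: counts[v].most_common(1)[0][0] for v in unknown})
```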
Loopy belief propagation
For certain undirected graphical models, exact inference can be performed efficiently via message passing, or belief propagation, algorithms.[10] These algorithms follow a simple iterative pattern: each variable passes messages to its neighbors expressing its "beliefs" about their marginal distributions, and then uses the incoming messages about its own value to update its belief. Convergence to the true marginals is guaranteed for tree-structured MRFs but not for MRFs with cycles; applied to cyclic graphs, the procedure is known as loopy belief propagation and yields approximate marginals.
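For concreteness, in a pairwise MRF with node potentials [math]\displaystyle{ \phi_i }[/math] and edge potentials [math]\displaystyle{ \psi_{ij} }[/math] (notation assumed here rather than taken from the cited sources), a standard form of the sum-product message that node [math]\displaystyle{ i }[/math] sends to a neighbor [math]\displaystyle{ j }[/math] about [math]\displaystyle{ j }[/math]'s value is

[math]\displaystyle{ m_{i\rightarrow j}(y_j) \propto \sum_{y_i} \phi_i(y_i)\,\psi_{ij}(y_i,y_j) \prod_{k\in N_i\setminus \{j\}} m_{k\rightarrow i}(y_i), }[/math]

and each node's belief about its own label combines its local potential with all incoming messages:

[math]\displaystyle{ b_i(y_i) \propto \phi_i(y_i) \prod_{k\in N_i} m_{k\rightarrow i}(y_i). }[/math]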
Statistical relational learning (SRL) is often used to address collective classification problems. A variety of SRL methods have been applied to the collective classification setting, including direct methods such as probabilistic relational models (PRMs),[11] coupled conditional models such as link-based classification,[12] and indirect methods such as Markov logic networks (MLNs)[13] and Probabilistic Soft Logic (PSL).[14]
Applications
Collective classification is applied in many domains which exhibit relational structure, such as:
- Social network analysis, where collective approaches to node classification tasks such as detecting malicious users can utilize information about relationships between nodes.[15][16]
- Entity resolution, where one can make use of co-authorship relationships to identify authors of papers.[17]
- Named entity recognition, where some approaches treat the task as a sequence labeling problem and jointly infer the labels of every word in a sentence, typically using a conditional random field that models a linear chain of dependencies between the labels of adjacent words.[18]
- Document classification, where for example inter-document semantic similarities can be collectively utilized as signals that certain documents belong to the same class.[19]
- Computational biology, where graphical models such as Markov random fields are utilized to jointly infer relations between biological entities such as genes.[20]
- Computer vision, where for example collective classification can be applied to recognizing multiple objects simultaneously.[21]
See also
- Machine learning
- Classification
- Similarity (network science)
- Graph (discrete mathematics)
- Statistical relational learning
- Bayesian network
- Markov random field
References
- ↑ 1.0 1.1 Sen, Prithviraj; Namata, Galileo; Bilgic, Mustafa; Getoor, Lise; Galligher, Brian; Eliassi-Rad, Tina (2008). "Collective Classification in Network Data". AI Magazine 29 (3): 93. doi:10.1609/aimag.v29i3.2157. ISSN 0738-4602.
- ↑ Kajdanowicz, Tomasz; Kazienko, Przemysław (2018). "Collective Classification". Encyclopedia of Social Network Analysis and Mining. pp. 253–265. doi:10.1007/978-1-4939-7131-2_45. ISBN 978-1-4939-7130-5.
- ↑ London, Ben; Getoor, Lise (2014). "Collective Classification of Network Data". Data Classification: Algorithms and Applications 29: 399–416.
- ↑ Zhu, Xiaojin; Ghahramani, Zoubin (2002). Learning From Labeled and Unlabeled Data With Label Propagation. Technical Report CMU-CALD-02-107, Carnegie Mellon University.
- ↑ Neville, Jennifer; Jensen, David (2000). "Iterative classification in relational data". AAAI Workshop on Learning Statistical Models From Relational Data (SRL). AAAI. p. 8. https://www.cs.purdue.edu/homes/neville/papers/neville-jensen-srl2000.pdf.
- ↑ Chakrabarti, Soumen; Dom, Byron; Indyk, Piotr (1998). "Enhanced Hypertext Categorization Using Hyperlinks". Proceedings of the 1998 ACM SIGMOD International Conference on Management of Data. Association for Computing Machinery (ACM). pp. 307–318. doi:10.1145/276304.276332.
- ↑ Jensen, David; Neville, Jennifer; Gallagher, Brian (2004). "Why collective inference improves relational classification". Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Association for Computing Machinery (ACM). doi:10.1145/1014052.1014125.
- ↑ Macskassy, Sofus; Provost, Foster (2007). "Classification in Networked Data: A Toolkit and a Univariate Case Study". Journal of Machine Learning Research 8: 935–983. http://www.jmlr.org/papers/volume8/macskassy07a/macskassy07a.pdf.
- ↑ Geman, Stuart; Geman, Donald (1990). "Stochastic Relaxation, Gibbs Distributions, and the Bayesian Restoration of Images". Readings in Uncertain Reasoning. Morgan Kaufmann Publishers Inc.. pp. 452–472.
- ↑ Yedidia, J.S.; Freeman, W.T.; Weiss, Y. (January 2003). "Understanding Belief Propagation and Its Generalizations". in Lakemeyer, Gerhard; Nebel, Bernhard. Exploring Artificial Intelligence in the New Millennium. Morgan Kaufmann. pp. 239–269. ISBN 978-1-55860-811-5. http://www.merl.com/publications/TR2001-022/. Retrieved 2009-03-30.
- ↑ Getoor, Lise; Friedman, Nir; Koller, Daphne; Taskar, Benjamin (2002). "Learning Probabilistic Models of Link Structure". Journal of Machine Learning Research 3: 679–707. https://users.umiacs.umd.edu/~getoor/Publications/jmlr02.pdf.
- ↑ Lu, Qing; Getoor, Lise (2003). "Link-based Classification". International Conference on Machine Learning (ICML). https://users.umiacs.umd.edu/~getoor/Publications/icml03.pdf.
- ↑ Richardson, Matthew; Domingos, Pedro (2006). "Markov Logic Networks". Machine Learning 62 (1–2): 107–136. http://nlp.jbnu.ac.kr/PGM/slides_other/mln.pdf.
- ↑ Bach, Stephen; Broecheler, Matthias; Huang, Bert; Getoor, Lise (2017). "Hinge-Loss Markov Random Fields and Probabilistic Soft Logic". Journal of Machine Learning Research 18: 1–67.
- ↑ Jaafor, Omar; Birregah, Babiga (2017-07-31). "Proceedings of the 2017 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining 2017". New York, NY, USA: ACM. pp. 827–835. doi:10.1145/3110025.3110128. ISBN 978-1-4503-4993-2.
- ↑ Fakhraei, Shobeir; Foulds, James; Shashanka, Madhusudana; Getoor, Lise (2015). "Collective Spammer Detection in Evolving Multi-Relational Social Networks". Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. New York, New York, USA: ACM Press. pp. 1769–1778. doi:10.1145/2783258.2788606. ISBN 978-1-4503-3664-2.
- ↑ Bhattacharya, Indrajit; Getoor, Lise (2007). "Collective entity resolution in relational data". ACM Transactions on Knowledge Discovery from Data (Association for Computing Machinery (ACM)) 1 (1): 5. doi:10.1145/1217299.1217304. ISSN 1556-4681.
- ↑ Luo, Ling; Yang, Zhihao; Yang, Pei; Zhang, Yin; Wang, Lei; Lin, Hongfei; Wang, Jian (2017-11-24). Wren, Jonathan. ed. "An attention-based BiLSTM-CRF approach to document-level chemical named entity recognition". Bioinformatics (Oxford University Press (OUP)) 34 (8): 1381–1388. doi:10.1093/bioinformatics/btx761. ISSN 1367-4803. PMID 29186323.
- ↑ Burford, Clint; Bird, Steven; Baldwin, Timothy (2015). "Proceedings of the Fourth Joint Conference on Lexical and Computational Semantics". Stroudsburg, PA, USA: Association for Computational Linguistics. pp. 106–116. doi:10.18653/v1/s15-1012.
- ↑ Žitnik, Marinka; Zupan, Blaž (2015). "Gene network inference by fusing data from diverse distributions". Bioinformatics 31 (12): i230–i239. doi:10.1093/bioinformatics/btv258. PMID 26072487.
- ↑ Triebel, Rudolph; Mozos, Óscar Martínez; Burgard, Wolfram (2008). "Collective Classification for Labeling of Places and Objects in 2D and 3D Range Data". Data Analysis, Machine Learning and Applications. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 293–300. doi:10.1007/978-3-540-78246-9_35. ISBN 978-3-540-78239-1. http://eprints.lincoln.ac.uk/id/eprint/9565/1/triebel2007gfkl_book.pdf.
Original source: https://en.wikipedia.org/wiki/Collective_classification.