Structured prediction
Structured prediction or structured (output) learning is an umbrella term for supervised machine learning techniques that involve predicting structured objects, rather than discrete or real scalar values.[1]
Similar to commonly used supervised learning techniques, structured prediction models are typically trained by means of observed data in which the true prediction value is used to adjust model parameters. Due to the complexity of the model and the interrelations among predicted variables, both inference with a trained model and training itself are often computationally infeasible, so approximate inference and learning methods are used.
Applications
The problem of translating a natural language sentence into a syntactic representation such as a parse tree, for example, can be seen as a structured prediction problem[2] in which the structured output domain is the set of all possible parse trees. Structured prediction is used in a wide variety of application domains, including bioinformatics, natural language processing (NLP), speech recognition, and computer vision.
Example: sequence tagging
Sequence tagging is a class of problems prevalent in natural language processing, where input data are often sequences (e.g. sentences of text). The sequence tagging problem appears in several guises, e.g. part-of-speech (POS) tagging and named entity recognition. In POS tagging, for example, each word in a sequence must receive a "tag" (class label) that expresses its "type" of word, as in:

This/DT sentence/NN would/MD be/VB tagged/VBN as/IN such/JJ ./.
The main challenge of this problem is to resolve ambiguity: the word "sentence" can also be a verb in English, and so can "tagged".
While this problem can be solved by simply performing classification of individual tokens, that approach does not take into account the empirical fact that tags do not occur independently; instead, each tag displays a strong conditional dependence on the tag of the previous word. This fact can be exploited in a sequence model such as a hidden Markov model or conditional random field[2] that predicts the entire tag sequence for a sentence, rather than just individual tags, by means of the Viterbi algorithm.
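As a minimal sketch of such sequence decoding (not taken from the source), the Python code below runs the Viterbi algorithm over a toy HMM-style model in log space. The three-tag set, the sentence, and all scores are invented for illustration; a real tagger would estimate these quantities from data.

```python
import numpy as np

TAGS = ["DT", "NN", "VB"]  # hypothetical tag set for the example

def viterbi(emission, transition, start):
    """Return the highest-scoring tag sequence under an HMM-style model.

    emission:   (n_words, n_tags) log-score of each tag for each word
    transition: (n_tags, n_tags) log-score of tag j following tag i
    start:      (n_tags,) log-score of each tag starting the sentence
    """
    n_words, n_tags = emission.shape
    score = np.full((n_words, n_tags), -np.inf)   # best score ending in each tag
    back = np.zeros((n_words, n_tags), dtype=int)  # back-pointers
    score[0] = start + emission[0]
    for w in range(1, n_words):
        for t in range(n_tags):
            cand = score[w - 1] + transition[:, t] + emission[w, t]
            back[w, t] = int(np.argmax(cand))
            score[w, t] = cand[back[w, t]]
    # Recover the best path by following back-pointers from the best final tag.
    path = [int(np.argmax(score[-1]))]
    for w in range(n_words - 1, 0, -1):
        path.append(int(back[w, path[-1]]))
    return [TAGS[t] for t in reversed(path)]

# Toy scores for "the dog barks"; rows are words, columns are DT, NN, VB.
emission = np.log([[0.8, 0.1, 0.1],
                   [0.1, 0.7, 0.2],
                   [0.1, 0.3, 0.6]])
transition = np.log([[0.1, 0.8, 0.1],
                     [0.2, 0.3, 0.5],
                     [0.4, 0.4, 0.2]])
start = np.log([0.7, 0.2, 0.1])
print(viterbi(emission, transition, start))  # ['DT', 'NN', 'VB']
```

Each cell of `score` holds the best log-score of any tag sequence ending in a given tag at a given word, so the maximum is recovered in time linear in sentence length rather than by enumerating all possible tag sequences.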
Techniques
Probabilistic graphical models form a large class of structured prediction models. In particular, Bayesian networks and random fields are popular. Other algorithms and models for structured prediction include inductive logic programming, case-based reasoning, structured SVMs, Markov logic networks, Probabilistic Soft Logic, and constrained conditional models. Main techniques:
- Conditional random field
- Structured support vector machine
- Structured k-Nearest Neighbours
- Recurrent neural network, in particular Elman network
Structured perceptron
One of the easiest ways to understand algorithms for general structured prediction is the structured perceptron of Collins.[3] This algorithm combines the perceptron algorithm for learning linear classifiers with an inference algorithm (classically the Viterbi algorithm when used on sequence data) and can be described abstractly as follows. First define a "joint feature function" $\Phi(x, y)$ that maps a training sample $x$ and a candidate prediction $y$ to a vector of length $n$ ($x$ and $y$ may have any structure; $n$ is problem-dependent, but must be fixed for each model). Let $\mathrm{GEN}$ be a function that generates candidate predictions. Then:
- Let $w$ be a weight vector of length $n$.
- For a pre-determined number of iterations:
  - For each sample $x$ in the training set with true output $t$:
    - Make a prediction: $\hat{y} = \operatorname{arg\,max}_{y \in \mathrm{GEN}(x)} \; w^{\top} \Phi(x, y)$
    - Update $w$, moving from $\hat{y}$ toward $t$: $w = w + c \, (\Phi(x, t) - \Phi(x, \hat{y}))$, where $c$ is the learning rate.
In practice, the argmax over $\mathrm{GEN}(x)$ is found using an algorithm such as Viterbi or max-sum, rather than an exhaustive search through an exponentially large set of candidates.
The idea of learning is similar to that of the multiclass perceptron.
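As a concrete sketch of the loop above (an illustration, not Collins's original implementation), the Python code below trains a structured perceptron for sequence tagging with a joint feature function $\Phi(x, y)$ that counts word-tag and tag-bigram events. Here $\mathrm{GEN}(x)$ is taken to be all tag sequences over a tiny invented tag set, and the argmax is computed by brute-force enumeration for clarity; a practical implementation would use Viterbi as noted above. All data and names are hypothetical.

```python
from collections import defaultdict
from itertools import product

TAGS = ["DT", "NN", "VB"]  # invented tag set for the example

def features(words, tags):
    """Joint feature map Phi(x, y): counts of word-tag and tag-bigram events."""
    phi = defaultdict(float)
    prev = "<s>"
    for word, tag in zip(words, tags):
        phi[("emit", word, tag)] += 1.0
        phi[("trans", prev, tag)] += 1.0
        prev = tag
    return phi

def predict(words, w):
    """argmax over GEN(x) = all tag sequences (brute force; Viterbi in practice)."""
    best, best_score = None, float("-inf")
    for tags in product(TAGS, repeat=len(words)):
        score = sum(w.get(f, 0.0) * v for f, v in features(words, tags).items())
        if score > best_score:
            best, best_score = list(tags), score
    return best

def train(data, epochs=5, c=1.0):
    """Structured perceptron updates: w = w + c * (Phi(x, t) - Phi(x, y_hat))."""
    w = defaultdict(float)
    for _ in range(epochs):
        for words, t in data:
            y_hat = predict(words, w)
            if y_hat != t:  # update only on mistakes
                for f, v in features(words, t).items():
                    w[f] += c * v
                for f, v in features(words, y_hat).items():
                    w[f] -= c * v
    return w

# Toy training data (invented).
data = [(["the", "dog", "barks"], ["DT", "NN", "VB"]),
        (["a", "cat", "sleeps"], ["DT", "NN", "VB"])]
w = train(data)
print(predict(["the", "cat", "barks"], w))  # ['DT', 'NN', 'VB']
```

Because the update fires only when $\hat{y} \neq t$, the weights move toward the features of the true structure and away from those of the mistaken prediction, mirroring the multiclass perceptron update.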
References
- [1] Gökhan Bakır, Ben Taskar, Thomas Hofmann, Bernhard Schölkopf, Alex Smola and S. V. N. Vishwanathan (2007). Predicting Structured Data. MIT Press.
- [2] Lafferty, J.; McCallum, A.; Pereira, F. (2001). "Conditional random fields: Probabilistic models for segmenting and labeling sequence data". Proc. ICML. pp. 282–289. http://www.cis.upenn.edu/~pereira/papers/crf.pdf.
- [3] Collins, Michael (2002). "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms". Proc. EMNLP 10. http://acl.ldc.upenn.edu/W/W02/W02-1001.pdf.
- Noah A. Smith, Linguistic Structure Prediction, Morgan & Claypool (Synthesis Lectures on Human Language Technologies), 2011.
External links
Original source: https://en.wikipedia.org/wiki/Structured_prediction