Knowledge distillation
In machine learning, knowledge distillation or model distillation is the process of transferring knowledge from a large model to a smaller one. While large models (such as very deep neural networks or ensembles of many models) have higher knowledge capacity than small models, this capacity might not be fully utilized, and evaluating a model can be just as computationally expensive whether or not its knowledge capacity is fully used. Knowledge distillation transfers knowledge from a large model to a smaller model without loss of validity. As smaller models are less expensive to evaluate, they can be deployed on less powerful hardware (such as a mobile device).[1]
Knowledge distillation has been successfully used in several applications of machine learning such as object detection,[2] acoustic models,[3] and natural language processing.[4] Recently, it has also been introduced to graph neural networks applicable to non-grid data.[5]
Concept of distillation
Transferring knowledge from a large model to a small one requires somehow teaching the latter without loss of validity. If both models are trained on the same data, the small model may have insufficient capacity to learn a concise knowledge representation given the same computational resources and the same data as the large model. However, some information about a concise knowledge representation is encoded in the pseudolikelihoods assigned to the large model's output: when a model correctly predicts a class, it assigns a large value to the output variable corresponding to that class, and smaller values to the other output variables. The distribution of values among the outputs for a record provides information on how the large model represents knowledge. Therefore, the goal of economical deployment of a valid model can be achieved by training only the large model on the data, exploiting its better ability to learn concise knowledge representations, and then distilling that knowledge into the smaller model, which would not be able to learn it on its own, by training it to reproduce the soft output of the large model.[1]
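As an illustration, the following minimal sketch (the class set and probability values are hypothetical, chosen only for this example) contrasts a hard one-hot label with a teacher model's soft output: the small but non-zero probabilities assigned to the incorrect classes encode which classes the model considers similar, and this is precisely the information the distilled model is trained to reproduce.

```python
import numpy as np

# Hypothetical output of a large digit classifier for a picture of a "2".
classes = ["0", "1", "2", "3", "7"]
hard_label = np.array([0.00, 0.00, 1.00, 0.00, 0.00])   # one-hot ground truth
soft_output = np.array([0.01, 0.01, 0.90, 0.05, 0.03])  # teacher pseudo-probabilities

# The hard label only says "this is a 2"; the soft output additionally says
# that the model finds "3" and "7" more plausible confusions than "0" or "1".
for c, h, s in zip(classes, hard_label, soft_output):
    print(f"class {c}: hard={h:.2f}  soft={s:.2f}")
```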
A first example of distilling an artificial neural network into another network dates back to 1992, when Juergen Schmidhuber compressed or collapsed a hierarchy of recurrent neural networks (RNNs) into a single RNN, by distilling a higher level chunker network into a lower level automatizer network.[6][7] This facilitated downstream deep learning.
A related methodology for compressing the knowledge of multiple models into a single neural network was called model compression in 2006. Compression was achieved by training a smaller model on large amounts of pseudo-data labelled by a higher-performing ensemble, optimizing the smaller model so that its logits match the logits of the ensemble.[8] Knowledge distillation is a generalization of this approach, introduced by Geoffrey Hinton et al. in 2015,[1] in a preprint that formulated the concept and showed some results achieved in the task of image classification.
Knowledge distillation is also related to the concept of behavioral cloning discussed by Faraz Torabi et al.[9]
Formulation
Given a large model as a function of the vector variable [math]\displaystyle{ \mathbf{x} }[/math], trained for a specific classification task, the final layer of the network is typically a softmax of the form
- [math]\displaystyle{ y_i(\mathbf{x}|t) = \frac{e^{\frac{z_i(\mathbf{x})}{t}}}{\sum_j e^{\frac{z_j(\mathbf{x})}{t}}} }[/math]
where [math]\displaystyle{ t }[/math] is a parameter called temperature, which is normally set to 1 for a standard softmax. The softmax operator converts the logit values [math]\displaystyle{ z_i(\mathbf{x}) }[/math] into pseudo-probabilities, and higher values of the temperature generate softer distributions of pseudo-probabilities among the output classes. Knowledge distillation consists of training a smaller network, called the distilled model, on a dataset called the transfer set (which may be different from the dataset used to train the large model), using as the loss function the cross-entropy between the output of the distilled model [math]\displaystyle{ \mathbf{y}(\mathbf{x}|t) }[/math] and the output [math]\displaystyle{ \hat{\mathbf{y}}(\mathbf{x}|t) }[/math] produced by the large model on the same record (or the average of the individual outputs, if the large model is an ensemble), using a high value of the softmax temperature [math]\displaystyle{ t }[/math] for both models[1]
- [math]\displaystyle{ E(\mathbf{x}|t) = -\sum_i \hat{y}_i(\mathbf{x}|t) \log y_i(\mathbf{x}|t) . }[/math]
In this context, a high temperature increases the entropy of the output and therefore provides more information for the distilled model to learn from, compared to hard targets, while at the same time reducing the variance of the gradient between different records and thus allowing higher learning rates.[1]
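A minimal sketch of the temperature-scaled softmax and of the distillation loss above, written in Python/NumPy (the function names and the toy logit values are assumptions made for this example, not part of the original formulation):

```python
import numpy as np

def softmax_with_temperature(logits, t=1.0):
    """Temperature-scaled softmax: y_i(x|t) = exp(z_i/t) / sum_j exp(z_j/t)."""
    scaled = np.asarray(logits, dtype=float) / t
    scaled -= scaled.max()            # shift for numerical stability (does not change the result)
    exp = np.exp(scaled)
    return exp / exp.sum()

def distillation_loss(student_logits, teacher_logits, t):
    """Cross-entropy E(x|t) between the teacher's and the student's soft outputs."""
    y_student = softmax_with_temperature(student_logits, t)
    y_teacher = softmax_with_temperature(teacher_logits, t)
    return -np.sum(y_teacher * np.log(y_student))

# Toy example: a higher temperature softens both distributions.
z_teacher = np.array([4.0, 1.0, -1.0])
z_student = np.array([3.0, 0.5, -0.5])
for t in (1.0, 2.0, 5.0):
    y_t = softmax_with_temperature(z_teacher, t)
    print(f"t={t}: teacher soft output={np.round(y_t, 3)}, "
          f"loss={distillation_loss(z_student, z_teacher, t):.4f}")
```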
If ground truth is available for the transfer set, the process can be strengthened by adding to the loss the cross-entropy between the output of the distilled model (computed with [math]\displaystyle{ t = 1 }[/math]) and the known label [math]\displaystyle{ \bar{y} }[/math]
- [math]\displaystyle{ E(\mathbf{x}|t) = -t^2 \sum_i \hat{y}_i(\mathbf{x}|t) \log y_i(\mathbf{x}|t) - \sum_i \bar{y}_i \log y_i(\mathbf{x}|1) }[/math]
where the component of the loss derived from the soft targets of the large model is weighted by a factor of [math]\displaystyle{ t^2 }[/math] since, as the temperature increases, the gradient of that component with respect to the model weights scales by a factor of [math]\displaystyle{ \frac{1}{t^2} }[/math]; this weighting keeps the relative contributions of the soft and hard targets roughly unchanged when the temperature is varied.[1]
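The combined objective can be sketched in the same way (again, the function names and toy values are illustrative assumptions):

```python
import numpy as np

def softmax_with_temperature(logits, t=1.0):
    scaled = np.asarray(logits, dtype=float) / t
    scaled -= scaled.max()
    exp = np.exp(scaled)
    return exp / exp.sum()

def combined_distillation_loss(student_logits, teacher_logits, hard_label, t):
    """Soft-target cross-entropy weighted by t^2, plus hard-label cross-entropy at t = 1."""
    y_student_soft = softmax_with_temperature(student_logits, t)
    y_teacher_soft = softmax_with_temperature(teacher_logits, t)
    soft_term = -t**2 * np.sum(y_teacher_soft * np.log(y_student_soft))

    y_student_hard = softmax_with_temperature(student_logits, 1.0)
    hard_term = -np.sum(np.asarray(hard_label) * np.log(y_student_hard))
    return soft_term + hard_term

# Toy example with a one-hot ground-truth label for the first class.
print(combined_distillation_loss([3.0, 0.5, -0.5], [4.0, 1.0, -1.0],
                                 [1.0, 0.0, 0.0], t=2.0))
```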
Relationship with model compression
Under the assumption that the logits have zero mean, it is possible to show that model compression is a special case of knowledge distillation. The gradient of the knowledge distillation loss [math]\displaystyle{ E }[/math] with respect to the logit of the distilled model [math]\displaystyle{ z_i }[/math] is given by
- [math]\displaystyle{ \begin{align} \frac{\partial}{\partial z_i} E &= -\frac{\partial}{\partial z_i} \sum_j \hat{y}_j \log y_j \\ &= -\frac{\partial}{\partial z_i} \hat{y}_i \log y_i + \left( -\frac{\partial}{\partial z_i} \sum_{k\neq i} \hat{y}_k \log y_k \right)\\ &= -\hat{y}_i \frac{1}{y_i} \frac{\partial}{\partial z_i} y_i + \sum_{k\neq i} \left( -\hat{y}_k \cdot \frac{1}{y_k} \cdot e^{\frac{z_k}{t}} \cdot \left( -\frac{1}{\left(\sum_j e^{\frac{z_j}{t}} \right)^2 }\right) \cdot e^{\frac{z_i}{t}} \cdot \frac{1}{t} \right)\\ &= -\hat{y}_i \frac{1}{y_i} \frac{\partial}{\partial z_i} \frac{e^{\frac{z_i}{t}}}{\sum_j e^{\frac{z_j}{t}}} + \sum_{k\neq i} \left( \hat{y}_k \cdot \frac{1}{y_k} \cdot y_k \cdot y_i \cdot \frac{1}{t} \right)\\ &= -\hat{y}_i \frac{1}{y_i} \left( \frac{\frac{1}{t} e^{\frac{z_i}{t}} \sum_j e^{\frac{z_j}{t}} - \frac{1}{t} \left( e^{\frac{z_i}{t}} \right)^2} {\left( \sum_j e^{\frac{z_j}{t}} \right)^2} \right) + \frac{y_i\sum_{k\neq i}\hat{y}_k}{t}\\ &= -\hat{y}_i \frac{1}{y_i} \left( \frac{y_i}{t} - \frac{y_i^2}{t} \right) + \frac{y_i(1-\hat{y}_i)}{t}\\ &= \frac{1}{t} \left( y_i - \hat{y}_i \right) \\ &= \frac{1}{t} \left( \frac{e^{\frac{z_i}{t}}}{\sum_j e^{\frac{z_j}{t}}} - \frac{e^{\frac{\hat{z}_i}{t}}}{\sum_j e^{\frac{\hat{z}_j}{t}}} \right) \\ \end{align} }[/math]
where [math]\displaystyle{ \hat{z}_i }[/math] are the logits of the large model. For large values of [math]\displaystyle{ t }[/math], and denoting by [math]\displaystyle{ N }[/math] the number of output classes, this can be approximated as
- [math]\displaystyle{ \frac{1}{t} \left( \frac{1 + \frac{z_i}{t}}{N + \sum_j \frac{z_j}{t}} - \frac{1 + \frac{\hat{z}_i}{t}}{N + \sum_j \frac{\hat{z}_j}{t}} \right) }[/math]
and under the zero-mean hypothesis [math]\displaystyle{ \sum_j z_j = \sum_j \hat{z}_j = 0 }[/math] it becomes [math]\displaystyle{ \frac{z_i - \hat{z}_i}{Nt^2} }[/math], which is proportional to the derivative of [math]\displaystyle{ \frac{1}{2} \left( z_i - \hat{z}_i \right)^2 }[/math]; i.e., in the high-temperature limit, minimizing the distillation loss is equivalent to matching the logits of the two models, as done in model compression.[1]
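This high-temperature limit can be checked numerically. The sketch below (an illustrative verification, not part of the original derivation) compares the exact gradient [math]\displaystyle{ \frac{1}{t}(y_i - \hat{y}_i) }[/math] with the logit-matching approximation [math]\displaystyle{ \frac{z_i - \hat{z}_i}{Nt^2} }[/math] for zero-mean logits and a large temperature:

```python
import numpy as np

def softmax_with_temperature(z, t):
    e = np.exp((z - z.max()) / t)    # shifting by max(z) does not change the softmax
    return e / e.sum()

rng = np.random.default_rng(0)
N = 5                                # number of output classes
z = rng.normal(size=N)               # distilled-model logits
z -= z.mean()                        # enforce the zero-mean hypothesis
z_hat = rng.normal(size=N)           # large-model logits
z_hat -= z_hat.mean()

t = 1000.0                           # large temperature
exact_grad = (softmax_with_temperature(z, t) - softmax_with_temperature(z_hat, t)) / t
approx_grad = (z - z_hat) / (N * t**2)   # gradient of 1/2 (z_i - z_hat_i)^2, scaled by 1/(N t^2)

print("exact :", exact_grad)
print("approx:", approx_grad)
print("max abs difference:", np.max(np.abs(exact_grad - approx_grad)))
```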
References
- ↑ Hinton, Geoffrey; Vinyals, Oriol; Dean, Jeff (2015). "Distilling the knowledge in a neural network". arXiv:1503.02531 [stat.ML].
- ↑ Chen, Guobin; Choi, Wongun; Yu, Xiang; Han, Tony; Chandraker, Manmohan (2017). "Learning efficient object detection models with knowledge distillation". Advances in Neural Information Processing Systems: 742–751.
- ↑ Asami, Taichi; Masumura, Ryo; Yamaguchi, Yoshikazu; Masataki, Hirokazu; Aono, Yushi (2017). "Domain adaptation of DNN acoustic models using knowledge distillation". IEEE International Conference on Acoustics, Speech and Signal Processing. pp. 5185–5189.
- ↑ Cui, Jia; Kingsbury, Brian; Ramabhadran, Bhuvana; Saon, George; Sercu, Tom; Audhkhasi, Kartik; Sethy, Abhinav; Nussbaum-Thom, Markus et al. (2017). "Knowledge distillation across ensembles of multilingual models for low-resource languages". IEEE International Conference on Acoustics, Speech and Signal Processing. pp. 4825–4829.
- ↑ Yang, Yiding; Qiu, Jiayan; Song, Mingli; Tao, Dacheng; Wang, Xinchao (2020). "Distilling Knowledge from Graph Convolutional Networks". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition: 7072–7081. Bibcode: 2020arXiv200310477Y. https://openaccess.thecvf.com/content_CVPR_2020/papers/Yang_Distilling_Knowledge_From_Graph_Convolutional_Networks_CVPR_2020_paper.pdf.
- ↑ Schmidhuber, Jürgen (1992). "Learning complex, extended sequences using the principle of history compression". Neural Computation 4 (2): 234–242. doi:10.1162/neco.1992.4.2.234. ftp://ftp.idsia.ch/pub/juergen/chunker.pdf.
- ↑ Schmidhuber, Juergen (2022). "Annotated History of Modern AI and Deep Learning". arXiv:2212.11279 [cs.NE].
- ↑ Buciluǎ, Cristian; Caruana, Rich; Niculescu-Mizil, Alexandru (2006). "Model compression". Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
- ↑ Torabi, Faraz; Warnell, Garrett; Stone, Peter (2018). "Behavioral Cloning from Observation". arXiv:1805.01954 [cs.AI].
External links
Original source: https://en.wikipedia.org/wiki/Knowledge_distillation.