Multitask learning
Multitask learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately.^{[1]}^{[2]}^{[3]} Early versions of MTL were called "hints".^{[4]}^{[5]}
In a widely cited 1997 paper, Rich Caruana gave the following characterization:
Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better.^{[3]}
In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features which distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, but not so for Russian speakers. Yet there is a definite commonality in this classification task across users, for example one common feature might be text related to money transfer. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance.^{[6]} Further examples of settings for MTL include multiclass classification and multilabel classification.^{[7]}
Multitask learning works because regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is when the tasks share significant commonalities and are generally slightly undersampled.^{[8]}^{[6]} However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks.^{[8]}^{[9]}
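To make the regularization view concrete, the following is a minimal sketch: several under-sampled linear regression tasks are fit jointly, with a penalty that pulls each task's weight vector toward the mean weight vector across tasks. The function name, data, and hyperparameter values are illustrative placeholders, not taken from the cited sources.

```python
import numpy as np

def mean_regularized_mtl(Xs, ys, lam=0.1, mu=1.0, iters=500, lr=0.01):
    """Jointly fit one linear model per task, pulling each weight vector
    toward the across-task mean (a simple multitask regularizer).

    Per-task objective: ||X_t w_t - y_t||^2 + lam*||w_t||^2 + mu*||w_t - w_bar||^2
    Xs, ys: lists of (n_t, d) design matrices and (n_t,) targets, one per task.
    """
    d = Xs[0].shape[1]
    W = np.zeros((len(Xs), d))
    for _ in range(iters):
        w_bar = W.mean(axis=0)                      # shared "mean task"
        for t, (X, y) in enumerate(zip(Xs, ys)):
            grad = 2 * X.T @ (X @ W[t] - y) + 2 * lam * W[t] + 2 * mu * (W[t] - w_bar)
            W[t] -= lr * grad / len(y)
    return W

# Toy data: 5 related, slightly under-sampled tasks.
rng = np.random.default_rng(0)
w_common = rng.normal(size=8)
Xs, ys = [], []
for _ in range(5):
    X = rng.normal(size=(10, 8))                    # only 10 samples per task
    w_t = w_common + 0.1 * rng.normal(size=8)       # small task-specific deviation
    Xs.append(X)
    ys.append(X @ w_t + 0.1 * rng.normal(size=10))
W = mean_regularized_mtl(Xs, ys)                    # mu > 0 shares strength across tasks
```

Setting mu to zero recovers independent ridge regressions, which is the natural baseline for judging whether joint training helps on a given problem.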
Methods
Task grouping and overlap
Within the MTL paradigm, information can be shared across some or all of the tasks. Depending on the structure of task relatedness, one may want to share information selectively across the tasks. For example, tasks may be grouped or exist in a hierarchy, or be related according to some general metric. Suppose, as developed more formally below, that the parameter vector modeling each task is a linear combination of some underlying basis. Similarity in terms of this basis can indicate the relatedness of the tasks. For example, with sparsity, overlap of nonzero coefficients across tasks indicates commonality. A task grouping then corresponds to those tasks lying in a subspace generated by some subset of basis elements, where tasks in different groups may be disjoint or overlap arbitrarily in terms of their bases.^{[10]} Task relatedness can be imposed a priori or learned from the data.^{[7]}^{[11]} Hierarchical task relatedness can also be exploited implicitly without assuming a priori knowledge or learning relations explicitly.^{[8]}^{[12]} For example, the explicit learning of sample relevance across tasks can be done to guarantee the effectiveness of joint learning across multiple domains.^{[8]}
One can attempt learning a group of principal tasks using a group of auxiliary tasks, unrelated to the principal ones. In many applications, joint learning of unrelated tasks which use the same input data can be beneficial. The reason is that prior knowledge about task relatedness can lead to sparser and more informative representations for each task grouping, essentially by screening out idiosyncrasies of the data distribution. Novel methods which build on a prior multitask methodology by favoring a shared low-dimensional representation within each task grouping have been proposed. The programmer can impose a penalty on tasks from different groups which encourages the two representations to be orthogonal. Experiments on synthetic and real data have indicated that incorporating unrelated tasks can result in significant improvements over standard multitask learning methods.^{[9]}
Transfer of knowledge
Related to multitask learning is the concept of knowledge transfer. Whereas traditional multitask learning implies that a shared representation is developed concurrently across tasks, transfer of knowledge implies a sequentially shared representation. Large-scale machine learning projects such as the deep convolutional neural network GoogLeNet,^{[13]} an image-based object classifier, can develop robust representations which may be useful to further algorithms learning related tasks. For example, the pretrained model can be used as a feature extractor to perform preprocessing for another learning algorithm. Or the pretrained model can be used to initialize a model with similar architecture which is then fine-tuned to learn a different classification task.^{[14]}
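As a minimal illustration of these two reuse patterns (a sketch under assumptions, not a prescribed recipe), the PyTorch snippet below uses a small fully connected network as a hypothetical stand-in for a large pretrained backbone such as GoogLeNet:

```python
import torch
import torch.nn as nn

# Hypothetical pretrained backbone (in practice, load a real pretrained model here).
backbone = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
)

# Pattern 1: frozen feature extractor. Only the new task head is trained.
for p in backbone.parameters():
    p.requires_grad = False
head = nn.Linear(32, 5)                 # new head for the target task (5 classes)
model = nn.Sequential(backbone, head)
optimizer = torch.optim.SGD(head.parameters(), lr=1e-2)

# Pattern 2: fine-tuning. Unfreeze the backbone and train everything,
# typically with a smaller learning rate to preserve the learned representation.
for p in backbone.parameters():
    p.requires_grad = True
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
```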
Group online adaptive learning
Traditionally, multitask learning and transfer of knowledge are applied to stationary learning settings. Their extension to non-stationary environments is termed Group online adaptive learning (GOAL).^{[15]} Sharing information could be particularly useful if learners operate in continuously changing environments, because a learner could benefit from previous experience of another learner to quickly adapt to their new environment. Such group-adaptive learning has numerous applications, from predicting financial time series, through content recommendation systems, to visual understanding for adaptive autonomous agents.
Mathematics
Reproducing kernel Hilbert space of vector-valued functions (RKHSvv)
The MTL problem can be cast within the context of RKHSvv (a complete inner product space of vector-valued functions equipped with a reproducing kernel). In particular, recent focus has been on cases where task structure can be identified via a separable kernel, described below. The presentation here derives from Ciliberto et al., 2015.^{[7]}
RKHSvv concepts
Suppose the training data set is [math]\displaystyle{ \mathcal{S}_t =\{(x_i^t,y_i^t)\}_{i=1}^{n_t} }[/math], with [math]\displaystyle{ x_i^t\in\mathcal{X} }[/math], [math]\displaystyle{ y_i^t\in\mathcal{Y} }[/math], where t indexes the task and [math]\displaystyle{ t \in \{1,...,T\} }[/math]. Let [math]\displaystyle{ n=\sum_{t=1}^Tn_t }[/math]. In this setting there is a consistent input and output space and the same loss function [math]\displaystyle{ \mathcal{L}:\mathbb{R}\times\mathbb{R}\rightarrow \mathbb{R}_+ }[/math] for each task. This results in the regularized machine learning problem:

[math]\displaystyle{ \min_{f \in \mathcal{H}}\sum _{t=1} ^T \frac{1}{n_t} \sum _{i=1} ^{n_t} \mathcal{L}(y_i^t, f_t(x_i^t))+\lambda \|f\|_\mathcal{H} ^2 }[/math]
(1)
where [math]\displaystyle{ \mathcal{H} }[/math] is a vector-valued reproducing kernel Hilbert space with functions [math]\displaystyle{ f:\mathcal X \rightarrow \mathcal{Y}^T }[/math] having components [math]\displaystyle{ f_t:\mathcal{X}\rightarrow \mathcal {Y} }[/math].
The reproducing kernel for the space [math]\displaystyle{ \mathcal{H} }[/math] of functions [math]\displaystyle{ f:\mathcal X \rightarrow \mathbb{R}^T }[/math] is a symmetric matrix-valued function [math]\displaystyle{ \Gamma :\mathcal X\times \mathcal X \rightarrow \mathbb{R}^{T \times T} }[/math], such that [math]\displaystyle{ \Gamma (\cdot ,x)c\in \mathcal{H} }[/math] and the following reproducing property holds:

[math]\displaystyle{ \langle f(x),c \rangle _ {\mathbb{R}^T} = \langle f,\Gamma (x,\cdot ) c \rangle _ {\mathcal {H}} }[/math]
(2)
The reproducing kernel gives rise to a representer theorem showing that any solution to equation 1 has the form:

[math]\displaystyle{ f(x)=\sum _{t=1}^T \sum _{i=1}^{n_t} \Gamma(x,x_i^t)c_i^t }[/math]
(3)
Separable kernels
The form of the kernel Γ both induces the representation of the feature space and structures the output across tasks. A natural simplification is to choose a separable kernel, which factors into separate kernels on the input space X and on the tasks [math]\displaystyle{ \{1,...,T\} }[/math]. In this case the kernel relating scalar components [math]\displaystyle{ f_t }[/math] and [math]\displaystyle{ f_s }[/math] is given by [math]\displaystyle{ \gamma((x_i,t),(x_j,s )) = k(x_i,x_j)k_T(s,t)=k(x_i,x_j)A_{s,t} }[/math]. For vector-valued functions [math]\displaystyle{ f\in \mathcal H }[/math] we can write [math]\displaystyle{ \Gamma(x_i,x_j)=k(x_i,x_j)A }[/math], where k is a scalar reproducing kernel, and A is a symmetric positive semidefinite [math]\displaystyle{ T\times T }[/math] matrix. Henceforth denote [math]\displaystyle{ S_+^T=\{\text{PSD matrices} \} \subset \mathbb R^{T \times T} }[/math].
This factorization property, separability, implies that the input feature space representation does not vary by task. That is, there is no interaction between the input kernel and the task kernel. The structure on tasks is represented solely by A. Methods for non-separable kernels Γ are a current field of research.
For the separable case, the representer theorem reduces to [math]\displaystyle{ f(x)=\sum _{i=1} ^n k(x,x_i)Ac_i }[/math]. The model output on the training data is then KCA, where K is the [math]\displaystyle{ n \times n }[/math] empirical kernel matrix with entries [math]\displaystyle{ K_{i,j}=k(x_i,x_j) }[/math], and C is the [math]\displaystyle{ n \times T }[/math] matrix whose rows are [math]\displaystyle{ c_i }[/math].
With the separable kernel, equation 1 can be rewritten as

[math]\displaystyle{ \min _{C\in \mathbb{R}^{n\times T}} V(Y,KCA) + \lambda tr(KCAC^{\top}) }[/math]
(P)
where V is a (weighted) average of the loss [math]\displaystyle{ \mathcal L }[/math] applied entry-wise to Y and KCA. (The weight is zero if [math]\displaystyle{ Y_i^t }[/math] is a missing observation.)
Note the second term in P can be derived as follows:
 [math]\displaystyle{ \begin{align} \|f\|^2_\mathcal{H} &= \left\langle \sum _{i=1} ^n k(\cdot,x_i)Ac_i, \sum _{j=1} ^n k(\cdot ,x_j)Ac_j \right\rangle_{\mathcal H } \\ &= \sum _{i,j=1} ^n \langle k(\cdot,x_i)A c_i, k(\cdot ,x_j)Ac_j\rangle_{\mathcal H } & \text{(bilinearity)} \\ &= \sum _{i,j=1} ^n \langle k(x_i,x_j)A c_i, c_j\rangle_{\mathbb R^T } & \text{(reproducing property)} \\ &= \sum _{i,j=1} ^n k(x_i,x_j) c_i^\top A c_j=tr(KCAC^\top ) \end{align} }[/math]
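For intuition, here is a small numerical sketch of problem P under simplifying assumptions (plain mean squared error for V, every entry of Y observed, K and A invertible); it is illustrative rather than part of the cited presentation. Under these assumptions, setting the gradient with respect to C to zero gives the linear system [math]\displaystyle{ KCA + n\lambda C = Y }[/math], which can be solved by vectorization. The helper name and toy data are made up.

```python
import numpy as np

def separable_kernel_mtl_fit(K, Y, A, lam):
    """Closed-form C for separable-kernel MTL with squared loss and full data.

    Solves K C A + n*lam*C = Y, i.e. (A kron K + n*lam*I) vec(C) = vec(Y),
    where K is the (n, n) scalar kernel matrix, A the (T, T) PSD task matrix,
    and Y the (n, T) matrix of observed outputs.
    """
    n, T = Y.shape
    lhs = np.kron(A, K) + n * lam * np.eye(n * T)
    vec_c = np.linalg.solve(lhs, Y.reshape(-1, order="F"))   # column-stacking vec()
    return vec_c.reshape(n, T, order="F")

# Toy usage: three sine-like tasks sharing a Gaussian input kernel.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(40, 1))
K = np.exp(-((X - X.T) ** 2) / 0.5)                      # scalar kernel k(x_i, x_j)
Y = np.hstack([np.sin(3 * X + s) for s in (0.0, 0.1, 0.2)])
A = 0.5 * np.eye(3) + 0.5 * np.ones((3, 3)) / 3          # encourages similar tasks
C = separable_kernel_mtl_fit(K, Y, A, lam=1e-2)
Y_hat = K @ C @ A                                        # model output KCA
```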
Known task structure
Task structure representations
There are three largely equivalent ways to represent task structure: through a regularizer, through an output metric, and through an output mapping.
Regularizer — With the separable kernel, it can be shown (below) that [math]\displaystyle{ \|f\|^2_\mathcal{H} = \sum_{s,t=1}^T A^\dagger _{t,s} \langle f_s, f_t \rangle _{\mathcal H_k} }[/math], where [math]\displaystyle{ A^\dagger _{t,s} }[/math] is the [math]\displaystyle{ t,s }[/math] element of the pseudoinverse of [math]\displaystyle{ A }[/math], and [math]\displaystyle{ \mathcal H_k }[/math] is the RKHS based on the scalar kernel [math]\displaystyle{ k }[/math], and [math]\displaystyle{ f_t(x)=\sum _{i=1} ^n k(x,x_i)A_t^\top c_i }[/math]. This formulation shows that [math]\displaystyle{ A^\dagger _{t,s} }[/math] controls the weight of the penalty associated with [math]\displaystyle{ \langle f_s, f_t \rangle _{\mathcal H_k} }[/math]. (Note that [math]\displaystyle{ \langle f_s, f_t \rangle _{\mathcal H_k} }[/math] arises from [math]\displaystyle{ \|f_t\|^2_{\mathcal H_k} = \langle f_t, f_t \rangle _{\mathcal H_k} }[/math].)
[math]\displaystyle{ \begin{align} \|f\|^2_\mathcal{H} &= \left\langle \sum _{i=1} ^n \gamma ((x_i,t_i),\cdot )c_i^{t_i}, \sum _{j=1} ^n \gamma ((x_j,t_j), \cdot )c_j^{t_j}\right\rangle_{\mathcal H } \\ &=\sum _{i,j=1} ^n c_i^{t_i} c_j^{t_j} \gamma ((x_i,t_i),(x_j,t_j)) \\ &=\sum _{i,j=1} ^n \sum _{s,t=1} ^T c_i^{t} c_j^{s} k(x_i,x_j)A_{s,t} \\ &=\sum _{i,j=1} ^n k(x_i,x_j) \langle c_i, A c_j\rangle_{\mathbb R^T} \\ &=\sum _{i,j=1} ^n k(x_i,x_j) \langle c_i, A A^\dagger A c_j\rangle_{\mathbb R^T} \\ &=\sum _{i,j=1} ^n k(x_i,x_j) \langle Ac_i, A^\dagger A c_j\rangle_{\mathbb R^T} \\ &=\sum _{i,j=1} ^n \sum _{s,t=1} ^T (Ac_i)^t (A c_j)^s k(x_i,x_j) A^\dagger_{s,t} \\ &= \sum _{s,t=1} ^T A^\dagger_{s,t} \langle \sum _{i=1} ^n k(x_i,\cdot )(Ac_i)^t, \sum _{j=1} ^n k(x_j,\cdot )(A c_j)^s \rangle _{\mathcal H_k} \\ &= \sum _{s,t=1} ^T A^\dagger_{s,t} \langle f_t, f_s \rangle _{\mathcal H_k} \end{align} }[/math]
Output metric — An alternative output metric on [math]\displaystyle{ \mathcal Y^T }[/math] can be induced by the inner product [math]\displaystyle{ \langle y_1,y_2 \rangle _\Theta=\langle y_1,\Theta y_2 \rangle_{\mathbb R^T} }[/math]. With the squared loss there is an equivalence between the separable kernels [math]\displaystyle{ k(\cdot,\cdot)I_T }[/math] under the alternative metric and [math]\displaystyle{ k(\cdot,\cdot)\Theta }[/math] under the canonical metric.
Output mapping — Outputs can be mapped as [math]\displaystyle{ L:\mathcal Y^T \rightarrow \tilde{\mathcal Y} }[/math] to a higher-dimensional space to encode complex structures such as trees, graphs and strings. For linear maps L, with appropriate choice of separable kernel, it can be shown that [math]\displaystyle{ A=L^\top L }[/math].
Task structure examples
Via the regularizer formulation, one can represent a variety of task structures easily, as the examples below and the code sketch that follows them illustrate.
 Letting [math]\displaystyle{ A^\dagger = \gamma I_T + (\gamma - \lambda)\frac {1} T \mathbf{1}\mathbf{1}^\top }[/math] (where [math]\displaystyle{ I_T }[/math] is the T×T identity matrix, and [math]\displaystyle{ \mathbf{1}\mathbf{1}^\top }[/math] is the T×T matrix of ones) is equivalent to letting [math]\displaystyle{ \gamma }[/math] control the variance [math]\displaystyle{ \sum_t \| f_t - \bar f \|_{\mathcal H_k} }[/math] of tasks from their mean [math]\displaystyle{ \frac 1 T \sum_t f_t }[/math]. For example, blood levels of some biomarker may be taken on T patients at [math]\displaystyle{ n_t }[/math] time points during the course of a day and interest may lie in regularizing the variance of the predictions across patients.
 Letting [math]\displaystyle{ A^\dagger = \alpha I_T +(\alpha - \lambda )M }[/math], where [math]\displaystyle{ M_{t,s} = \frac 1 {|G_r|} \mathbb I(t,s\in G_r) }[/math], is equivalent to letting [math]\displaystyle{ \alpha }[/math] control the variance measured with respect to a group mean: [math]\displaystyle{ \sum _{r} \sum _{t \in G_r } \left\| f_t - \frac 1 {|G_r|} \sum _{s\in G_r} f_s \right\| }[/math]. (Here [math]\displaystyle{ |G_r| }[/math] is the cardinality of group r, and [math]\displaystyle{ \mathbb I }[/math] is the indicator function). For example, people in different political parties (groups) might be regularized together with respect to predicting the favorability rating of a politician. Note that this penalty reduces to the first when all tasks are in the same group.
 Letting [math]\displaystyle{ A^\dagger = \delta I_T + (\delta -\lambda)L }[/math], where [math]\displaystyle{ L=D-M }[/math] is the Laplacian for the graph with adjacency matrix M giving pairwise similarities of tasks, is equivalent to giving a larger penalty to the distance separating tasks t and s when they are more similar (according to the weight [math]\displaystyle{ M_{t,s} }[/math]), i.e. [math]\displaystyle{ \delta }[/math] regularizes [math]\displaystyle{ \sum _{t,s}\| f_t - f_s \|_{\mathcal H _k }^2 M_{t,s} }[/math].
 All of the above choices of A also induce the additional regularization term [math]\displaystyle{ \lambda \sum_t \| f_t \|_{\mathcal H_k} ^2 }[/math] which penalizes complexity in f more broadly.
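The sketch below (illustrative values only, not from the cited sources) shows how these three choices of [math]\displaystyle{ A^\dagger }[/math] can be constructed numerically; the corresponding task matrix [math]\displaystyle{ A = (A^\dagger)^\dagger }[/math] can then be used with the separable-kernel solver sketched earlier.

```python
import numpy as np

T, lam = 4, 0.1

# 1) Mean-regularized: penalize the variance of tasks around their common mean.
gamma = 1.0
A_dag_mean = gamma * np.eye(T) + (gamma - lam) * np.ones((T, T)) / T

# 2) Group-regularized: variance around each group's mean (groups {0,1} and {2,3}).
alpha, groups = 1.0, [[0, 1], [2, 3]]
M = np.zeros((T, T))
for g in groups:
    for t in g:
        for s in g:
            M[t, s] = 1.0 / len(g)
A_dag_group = alpha * np.eye(T) + (alpha - lam) * M

# 3) Graph-regularized: Laplacian L = D - M of a task-similarity graph.
delta = 1.0
W = np.array([[0, 1, 0.5, 0], [1, 0, 0, 0.5], [0.5, 0, 0, 1], [0, 0.5, 1, 0]])
laplacian = np.diag(W.sum(axis=1)) - W
A_dag_graph = delta * np.eye(T) + (delta - lam) * laplacian

# The task matrix used by the separable kernel is the pseudoinverse of A^dagger.
A_mean = np.linalg.pinv(A_dag_mean)
```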
Learning tasks together with their structure
Learning problem P can be generalized to admit learning the task matrix A as follows:

[math]\displaystyle{ \min _{C \in \mathbb{R}^{n\times T},A \in S_+^T} V(Y,KCA) + \lambda tr(KCAC^{\top})+F(A) }[/math]
(Q)
The choice of [math]\displaystyle{ F:S_+^T\rightarrow \mathbb R_+ }[/math] must be designed to learn matrices A of a given type. See "Special cases" below.
Optimization of Q
Restricting to the case of convex losses and coercive penalties, Ciliberto et al. have shown that although Q is not convex jointly in C and A, a related problem is jointly convex.
Specifically, on the convex set [math]\displaystyle{ \mathcal C=\{(C,A)\in \mathbb R^{n \times T}\times S_+^T \mid Range(C^\top KC)\subseteq Range(A)\} }[/math], the equivalent problem

[math]\displaystyle{ \min _{C ,A \in \mathcal C } V(Y,KC) + \lambda tr(A^\dagger C^{\top}KC)+F(A) }[/math]
(R)
is convex with the same minimum value. Moreover, if [math]\displaystyle{ (C_R, A_R) }[/math] is a minimizer for R, then [math]\displaystyle{ (C_R A^\dagger _R, A_R) }[/math] is a minimizer for Q.
R may be solved by a barrier method on a closed set by introducing the following perturbation:

[math]\displaystyle{ \min _{C \in \mathbb{R}^{n\times T},A \in S_+^T} V(Y,KC) + \lambda tr(A^\dagger (C^{\top}KC+\delta^2I_T))+F(A) }[/math]
(S)
The perturbation via the barrier [math]\displaystyle{ \delta ^2 tr(A^\dagger) }[/math] forces the objective function to be equal to [math]\displaystyle{ +\infty }[/math] on the boundary of [math]\displaystyle{ \mathbb R^{n \times T}\times S_+^T }[/math].
S can be solved with a block coordinate descent method, alternating in C and A. This results in a sequence of minimizers [math]\displaystyle{ (C_m,A_m) }[/math] in S that converges to the solution in R as [math]\displaystyle{ \delta_m \rightarrow 0 }[/math], and hence gives the solution to Q.
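A rough sketch of this alternating scheme is shown below. It is an assumption-heavy illustration rather than the authors' algorithm: it uses the squared loss, takes [math]\displaystyle{ F(A)=\mu\, tr(A) }[/math] (a simple spectral penalty for which the A-step has the closed form [math]\displaystyle{ A = (\lambda B/\mu)^{1/2} }[/math] with [math]\displaystyle{ B = C^\top KC+\delta^2I_T }[/math]), and keeps [math]\displaystyle{ \delta }[/math] fixed instead of shrinking it.

```python
import numpy as np

def learn_tasks_and_structure(K, Y, lam=1e-2, mu=1e-2, delta=1e-3, iters=20):
    """Block coordinate descent on a perturbed problem of the form S (sketch).

    Assumptions: squared loss, F(A) = mu*tr(A), fixed delta (no barrier schedule).
    C-step: stationarity gives K C + n*lam*C*inv(A) = Y, solved by vectorization.
    A-step: closed-form square root A = (lam*(C^T K C + delta^2 I)/mu)^{1/2}.
    """
    n, T = Y.shape
    A = np.eye(T)
    C = np.zeros((n, T))
    for _ in range(iters):
        # C-step for fixed A
        A_inv = np.linalg.inv(A)
        lhs = np.kron(np.eye(T), K) + n * lam * np.kron(A_inv, np.eye(n))
        C = np.linalg.solve(lhs, Y.reshape(-1, order="F")).reshape(n, T, order="F")
        # A-step for fixed C (symmetric PSD square root via eigendecomposition)
        B = C.T @ K @ C + delta**2 * np.eye(T)
        w, V = np.linalg.eigh(lam * B / mu)
        A = (V * np.sqrt(np.clip(w, 0.0, None))) @ V.T
    return C, A
```

Other choices of F lead to the special cases discussed next.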
Special cases
 Spectral penalties — Dinuzzo et al.^{[16]} suggested setting F as the Frobenius norm [math]\displaystyle{ \sqrt{tr(A^\top A)} }[/math]. They optimized Q directly using block coordinate descent, not accounting for difficulties at the boundary of [math]\displaystyle{ \mathbb R^{n\times T} \times S_+^T }[/math].
 Clustered tasks learning — Jacob et al.^{[17]} suggested learning A in the setting where T tasks are organized in R disjoint clusters. In this case let [math]\displaystyle{ E\in \{0,1\}^{T\times R} }[/math] be the matrix with [math]\displaystyle{ E_{t,r}=\mathbb I (\text{task }t\in \text{group }r) }[/math]. Setting [math]\displaystyle{ M = E E^\dagger }[/math], and [math]\displaystyle{ U = \frac 1 T \mathbf{11}^\top }[/math], the task matrix [math]\displaystyle{ A^\dagger }[/math] can be parameterized as a function of [math]\displaystyle{ M }[/math]: [math]\displaystyle{ A^\dagger(M) = \epsilon _M U+\epsilon_B (M-U)+\epsilon (I-M) }[/math], with terms that penalize the average, the between-clusters variance and the within-clusters variance of the task predictions, respectively. The set of admissible M is not convex, but there is a convex relaxation [math]\displaystyle{ \mathcal S_c = \{M\in S_+^T:I-M\in S_+^T \land tr(M) = r \} }[/math]. In this formulation, [math]\displaystyle{ F(A)=\mathbb I(A\in \{A(M):M\in \mathcal S_c\}) }[/math].
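A small illustrative construction (assuming M is the projection [math]\displaystyle{ EE^\dagger }[/math] onto the cluster-indicator subspace; the [math]\displaystyle{ \epsilon }[/math] values are placeholders):

```python
import numpy as np

T, R = 6, 2
E = np.zeros((T, R))          # hard assignment of 6 tasks to 2 clusters
E[:3, 0] = 1.0
E[3:, 1] = 1.0

M = E @ np.linalg.pinv(E)     # projection onto the cluster-indicator subspace
U = np.ones((T, T)) / T       # projection onto the all-tasks mean
eps_M, eps_B, eps = 0.1, 1.0, 10.0   # placeholder penalty weights
A_dag = eps_M * U + eps_B * (M - U) + eps * (np.eye(T) - M)
A = np.linalg.pinv(A_dag)     # task matrix for the separable kernel
```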
Generalizations
 Non-convex penalties — Penalties can be constructed such that A is constrained to be a graph Laplacian, or such that A has a low-rank factorization. However, these penalties are not convex, and the analysis of the barrier method proposed by Ciliberto et al. does not go through in these cases.
 Non-separable kernels — Separable kernels are limited; in particular, they do not account for structures in the interaction space between the input and output domains jointly. Future work is needed to develop models for non-separable kernels.
Applications
Spam filtering
Using the principles of MTL, techniques for collaborative spam filtering that facilitate personalization have been proposed. In large-scale open-membership email systems, most users do not label enough messages for an individual local classifier to be effective, while the data is too noisy to be used for a global filter across all users. A hybrid global/individual classifier can be effective at absorbing the influence of users who label emails very diligently for the benefit of the general public, while still providing sufficient quality to users with few labeled instances.^{[18]}
Web search
Using boosted decision trees, one can enable implicit data sharing and regularization. This learning method can be used on web-search ranking data sets. One example is to use ranking data sets from several countries. Here, multitask learning is particularly helpful as data sets from different countries vary widely in size because of the cost of editorial judgments. It has been demonstrated that learning various tasks jointly can lead to significant improvements in performance with surprising reliability.^{[19]}
Software package
The Multi-tAsk Learning via StructurAl Regularization (MALSAR) Matlab package^{[20]} implements the following multitask learning algorithms:
 Mean-Regularized Multi-Task Learning^{[21]}^{[22]}
 Multi-Task Learning with Joint Feature Selection^{[23]}
 Robust Multi-Task Feature Learning^{[24]}
 Trace-Norm Regularized Multi-Task Learning^{[25]}
 Alternating Structural Optimization^{[26]}^{[27]}
 Incoherent Low-Rank and Sparse Learning^{[28]}
 Robust Low-Rank Multi-Task Learning
 Clustered Multi-Task Learning^{[29]}^{[30]}
 Multi-Task Learning with Graph Structures
See also
 Artificial intelligence
 Artificial neural network
 Automated machine learning (AutoML)
 Evolutionary computation
 General game playing
 Human-based genetic algorithm
 Kernel methods for vector output
 Multitask optimization
 Robot learning
 Transfer learning
References
 ↑ Baxter, J. (2000). "A model of inductive bias learning". Journal of Artificial Intelligence Research 12:149–198. Online paper
 ↑ Thrun, S. (1996). Is learning the n-th thing any easier than learning the first? In Advances in Neural Information Processing Systems 8, pp. 640–646. MIT Press. Paper at Citeseer
 ↑ ^{3.0} ^{3.1} Caruana, R. (1997). "Multitask learning". Machine Learning 28: 41–75. doi:10.1023/A:1007379606734. http://www.cs.cornell.edu/~caruana/mlj97.pdf.
 ↑ Suddarth, S., Kergosien, Y. (1990). Rule-injection hints as a means of improving network performance and learning time. EURASIP Workshop. Neural Networks pp. 120–129. Lecture Notes in Computer Science. Springer.
 ↑ Abu-Mostafa, Y. S. (1990). "Learning from hints in neural networks". Journal of Complexity 6 (2): 192–198. doi:10.1016/0885-064x(90)90006-y.
 ↑ ^{6.0} ^{6.1} Weinberger, Kilian. "Multitask Learning". http://www.cs.cornell.edu/~kilian/research/multitasklearning/multitasklearning.html.
 ↑ ^{7.0} ^{7.1} ^{7.2} Ciliberto, C. (2015). "Convex Learning of Multiple Tasks and their Structure". arXiv:1504.03101 [cs.LG].
 ↑ ^{8.0} ^{8.1} ^{8.2} ^{8.3} Hajiramezanali, E. & Dadaneh, S. Z. & Karbalayghareh, A. & Zhou, Z. & Qian, X. Bayesian multi-domain learning for cancer subtype discovery from next-generation sequencing count data. 32nd Conference on Neural Information Processing Systems (NIPS 2018), Montréal, Canada. arXiv:1810.09433
 ↑ ^{9.0} ^{9.1} Romera-Paredes, B., Argyriou, A., Bianchi-Berthouze, N., & Pontil, M. (2012). Exploiting Unrelated Tasks in Multi-Task Learning. http://jmlr.csail.mit.edu/proceedings/papers/v22/romera12/romera12.pdf
 ↑ Kumar, A., & Daume III, H. (2012). Learning Task Grouping and Overlap in Multi-Task Learning. http://icml.cc/2012/papers/690.pdf
 ↑ Jawanpuria, P., & Saketha Nath, J., (2012) A Convex Feature Learning Formulation for Latent Task Structure Discovery. http://icml.cc/2012/papers/90.pdf
 ↑ Zweig, A. & Weinshall, D. Hierarchical Regularization Cascade for Joint Learning. Proceedings of the 30th International Conference on Machine Learning (ICML), Atlanta GA, June 2013. http://www.cs.huji.ac.il/~daphna/papers/Zweig_ICML2013.pdf
 ↑ Szegedy, Christian; Liu, Wei; Jia, Yangqing; Sermanet, Pierre; Reed, Scott; Anguelov, Dragomir; Erhan, Dumitru; Vanhoucke, Vincent et al. (2015). "Going deeper with convolutions". 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). pp. 1–9. doi:10.1109/CVPR.2015.7298594. ISBN 9781467369640.
 ↑ Roig, Gemma. "Deep Learning Overview". https://www.mit.edu/~9.520/fall15/slides/class24/deep_learning_overview.pdf.
 ↑ Zweig, A. & Chechik, G. Group online adaptive learning. Machine Learning, doi:10.1007/s10994-017-5661-5, August 2017. http://rdcu.be/uFSv
 ↑ Dinuzzo, Francesco (2011). "Learning output kernels with block coordinate descent.". Proceedings of the 28th International Conference on Machine Learning (ICML11). http://machinelearning.wustl.edu/mlpapers/paper_files/ICML2011Dinuzzo_54.pdf.
 ↑ Jacob, Laurent (2009). "Clustered multitask learning: A convex formulation". Advances in Neural Information Processing Systems. Bibcode: 2008arXiv0809.2085J.
 ↑ Attenberg, J., Weinberger, K., & Dasgupta, A. Collaborative Email-Spam Filtering with the Hashing-Trick. http://www.cse.wustl.edu/~kilian/papers/ceas2009paper11.pdf
 ↑ Chapelle, O., Shivaswamy, P., & Vadrevu, S. Multi-Task Learning for Boosting with Application to Web Search Ranking. http://www.cse.wustl.edu/~kilian/papers/multiboost2010.pdf
 ↑ Zhou, J., Chen, J. and Ye, J. MALSAR: Multi-tAsk Learning via StructurAl Regularization. Arizona State University, 2012. http://www.public.asu.edu/~jye02/Software/MALSAR. Online manual
 ↑ Evgeniou, T., & Pontil, M. (2004). Regularized multi–task learning. Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 109–117).
 ↑ Evgeniou, T.; Micchelli, C.; Pontil, M. (2005). "Learning multiple tasks with kernel methods". Journal of Machine Learning Research 6: 615. http://jmlr.org/papers/volume6/evgeniou05a/evgeniou05a.pdf.
 ↑ Argyriou, A.; Evgeniou, T.; Pontil, M. (2008a). "Convex multi-task feature learning". Machine Learning 73 (3): 243–272. doi:10.1007/s10994-007-5040-8.
 ↑ Chen, J., Zhou, J., & Ye, J. (2011). Integrating lowrank and groupsparse structures for robust multitask learning. Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining.
 ↑ Ji, S., & Ye, J. (2009). An accelerated gradient method for trace norm minimization. Proceedings of the 26th Annual International Conference on Machine Learning (pp. 457–464).
 ↑ Ando, R.; Zhang, T. (2005). "A framework for learning predictive structures from multiple tasks and unlabeled data". The Journal of Machine Learning Research 6: 1817–1853. http://www.jmlr.org/papers/volume6/ando05a/ando05a.pdf.
 ↑ Chen, J., Tang, L., Liu, J., & Ye, J. (2009). A convex formulation for learning shared structures from multiple tasks. Proceedings of the 26th Annual International Conference on Machine Learning (pp. 137–144).
 ↑ Chen, J., Liu, J., & Ye, J. (2010). Learning incoherent sparse and lowrank patterns from multiple tasks. Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 1179–1188).
 ↑ Jacob, L., Bach, F., & Vert, J. (2008). Clustered multi-task learning: A convex formulation. Advances in Neural Information Processing Systems, 2008.
 ↑ Zhou, J., Chen, J., & Ye, J. (2011). Clustered multitask learning via alternating structure optimization. Advances in Neural Information Processing Systems.
External links
 The Biosignals Intelligence Group at UIUC
 Washington University in St. Louis Department of Computer Science
Software
 The Multi-Task Learning via Structural Regularization Package
 Online Multi-Task Learning Toolkit (OMT) — A general-purpose online multi-task learning toolkit based on conditional random field models and stochastic gradient descent training (C#, .NET)
Original source: https://en.wikipedia.org/wiki/Multitask learning.