This HTML5 document contains 66 embedded RDF statements represented using HTML+Microdata notation.

The embedded RDF content will be recognized by any HTML5 Microdata processor.
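As a rough illustration of how such embedded Microdata might be consumed, the Python sketch below extracts the Microdata items from a fetched copy of this page. The `extruct` library and the fetch URL are assumptions chosen for the example, not something this page prescribes.

```python
# A minimal sketch, assuming the `extruct` package (pip install extruct)
# and a hypothetical fetch of this page's HTML from DBpedia.
import urllib.request

import extruct

URL = "https://dbpedia.org/page/Multi-task_learning"  # assumed fetch target

with urllib.request.urlopen(URL) as resp:
    html = resp.read().decode("utf-8")

# extruct returns a dict keyed by syntax; we only request Microdata here.
data = extruct.extract(html, base_url=URL, syntaxes=["microdata"])

for item in data["microdata"]:
    # Each item carries its type IRI and a dict of property/value pairs.
    print(item.get("type"), list(item.get("properties", {})))
```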

Namespace Prefixes

Prefix     IRI
dct        http://purl.org/dc/terms/
dbo        http://dbpedia.org/ontology/
foaf       http://xmlns.com/foaf/0.1/
dbt        http://dbpedia.org/resource/Template:
rdfs       http://www.w3.org/2000/01/rdf-schema#
freebase   http://rdf.freebase.com/ns/
rdf        http://www.w3.org/1999/02/22-rdf-syntax-ns#
owl        http://www.w3.org/2002/07/owl#
n5         http://en.wikipedia.org/wiki/
dbp        http://dbpedia.org/property/
dbc        http://dbpedia.org/resource/Category:
prov       http://www.w3.org/ns/prov#
xsdh       http://www.w3.org/2001/XMLSchema#
gold       http://purl.org/linguistics/gold/
dbr        http://dbpedia.org/resource/
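These prefixes abbreviate the IRIs used in the statements below. As a hedged illustration, the following Python sketch shows how a few of them could appear in a SPARQL query against the public DBpedia endpoint; the endpoint URL, the SPARQLWrapper package, and the query shape are assumptions for the example, not part of this page.

```python
# A minimal sketch, assuming the SPARQLWrapper package
# (pip install sparqlwrapper) and the public DBpedia SPARQL endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")  # assumed endpoint
sparql.setQuery("""
    PREFIX dbr:  <http://dbpedia.org/resource/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

    SELECT ?p ?o
    WHERE { dbr:Multi-task_learning ?p ?o . }
    LIMIT 20
""")
sparql.setReturnFormat(JSON)

# Each binding maps a variable name to a dict with "type" and "value".
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["p"]["value"], "->", row["o"]["value"])
```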

Statements

Subject Item
dbr:Multi-task_learning
rdf:type
dbo:ProgrammingLanguage
rdfs:label
Multi-task learning
rdfs:comment
Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, when compared to training the models separately. Early versions of MTL were called "hints". In a widely cited 1997 paper, Rich Caruana gave the following characterization:
owl:sameAs
freebase:m.03rsv7
dbp:wikiPageUsesTemplate
dbt:EquationRef dbt:Reflist dbt:EquationNote dbt:NumBlk dbt:Mathcal dbt:Math_theorem dbt:Proof dbt:Short_description dbt:Mvar dbt:Div_col dbt:Div_col_end dbt:Math
dct:subject
dbc:Machine_learning
gold:hypernym
dbr:Approach
prov:wasDerivedFrom
n5:Multi-task_learning?oldid=1068794703&ns=0
dbo:wikiPageID
938663
dbo:wikiPageLength
31172
dbo:wikiPageRevisionID
1068794703
dbo:wikiPageWikiLink
dbr:Vector-valued_function dbr:GoogLeNet dbr:Generalization_error dbr:Adjacency_matrix dbr:Artificial_intelligence dbr:Human-based_genetic_algorithm dbr:Overfitting dbr:Anti-spam_techniques dbr:Loss_function dbr:Artificial_neural_network dbr:Regularization_(mathematics) dbr:Linear_combination dbr:Coercive_function dbr:Inner_product_space dbr:Automated_machine_learning dbr:Transfer_learning dbr:Conditional_random_field dbc:Machine_learning dbr:Decision_tree dbr:Robot_learning dbr:C_Sharp_(programming_language) dbr:Feature_learning dbr:Stochastic_gradient_descent dbr:Complete_metric_space dbr:Inductive_bias dbr:Convex_optimization dbr:Regularization_by_spectral_filtering dbr:Sparse_matrix dbr:.NET_Framework dbr:General_game_playing dbr:Reproducing_kernel_Hilbert_space dbr:Machine_learning dbr:Feature_(machine_learning) dbr:Kernel_methods_for_vector_output dbr:Statistical_classification dbr:Convolutional_neural_network dbr:Multitask_optimization dbr:Multi-label_classification dbr:Orthogonality dbr:Laplacian_matrix dbr:Multiclass_classification dbr:Evolutionary_computation
dbo:abstract
Multi-task learning (MTL) is a subfield of machine learning in which multiple learning tasks are solved at the same time, while exploiting commonalities and differences across tasks. This can result in improved learning efficiency and prediction accuracy for the task-specific models, compared to training the models separately. Early versions of MTL were called "hints". In a widely cited 1997 paper, Rich Caruana gave the following characterization: Multitask Learning is an approach to inductive transfer that improves generalization by using the domain information contained in the training signals of related tasks as an inductive bias. It does this by learning tasks in parallel while using a shared representation; what is learned for each task can help other tasks be learned better.

In the classification context, MTL aims to improve the performance of multiple classification tasks by learning them jointly. One example is a spam filter, which can be treated as a set of distinct but related classification tasks across different users. To make this more concrete, consider that different people have different distributions of features that distinguish spam emails from legitimate ones; for example, an English speaker may find that all emails in Russian are spam, while a Russian speaker would not. Yet there is a definite commonality in this classification task across users; for example, one common feature might be text related to money transfers. Solving each user's spam classification problem jointly via MTL can let the solutions inform each other and improve performance. Further examples of settings for MTL include multiclass classification and multi-label classification.

Multi-task learning works because the regularization induced by requiring an algorithm to perform well on a related task can be superior to regularization that prevents overfitting by penalizing all complexity uniformly. One situation where MTL may be particularly helpful is when the tasks share significant commonalities and are generally slightly undersampled. However, as discussed below, MTL has also been shown to be beneficial for learning unrelated tasks.
foaf:isPrimaryTopicOf
n5:Multi-task_learning
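The "shared representation" idea in the dbo:abstract above can be made concrete with a small code sketch. The following is a minimal, hypothetical illustration of hard parameter sharing in PyTorch (one shared trunk, per-task heads, a joint loss); the architecture sizes, the two synthetic tasks, and all hyperparameters are assumptions for illustration and are not drawn from this page.

```python
# A minimal sketch of hard parameter sharing, assuming PyTorch:
# two tasks train one common trunk plus task-specific heads.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Shared trunk learns features that both tasks reuse.
shared = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
# Task-specific heads: e.g. two related spam classifiers, one per user.
head_a = nn.Linear(32, 1)
head_b = nn.Linear(32, 1)

params = list(shared.parameters()) + list(head_a.parameters()) + list(head_b.parameters())
opt = torch.optim.Adam(params, lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

# Synthetic data: both tasks depend on overlapping input features.
x = torch.randn(64, 10)
y_a = (x[:, 0] + x[:, 1] > 0).float().unsqueeze(1)
y_b = (x[:, 0] - x[:, 2] > 0).float().unsqueeze(1)

for step in range(200):
    opt.zero_grad()
    z = shared(x)
    # Joint objective: the shared trunk receives gradients from both tasks,
    # which is the regularizing effect the abstract describes.
    loss = loss_fn(head_a(z), y_a) + loss_fn(head_b(z), y_b)
    loss.backward()
    opt.step()

print(f"final joint loss: {loss.item():.3f}")
```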