A Brief Review of Domain Adaptation

A survey of domain adaptation with ~80 references

Summary

Classical machine learning assumes that the training and test sets are drawn from the same distribution, so a model learned from labeled training data is expected to perform well on the test data. However, this assumption may not hold in real-world applications, where the training and test data can come from different distributions for many reasons, e.g., the two sets were collected from different sources, or the training set became outdated as the data changed over time. In such cases there is a discrepancy between the domain distributions, and naively applying the trained model to the new dataset may degrade performance. Domain adaptation is a sub-field of machine learning that aims to cope with these problems by aligning the disparity between domains so that the trained model generalizes to the domain of interest. This paper focuses on unsupervised domain adaptation, where labels are available only in the source domain. It categorizes domain adaptation approaches from different viewpoints and presents several successful shallow and deep methods that address domain adaptation problems.
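To make the idea of "aligning the disparity between domains" concrete, here is a minimal sketch of correlation alignment (CORAL), one of the classic shallow domain adaptation methods this kind of survey typically covers: it whitens the source features and re-colors them with the target covariance, so the source second-order statistics match the target's. The function and variable names below are illustrative, not from the paper.

```python
import numpy as np

def coral(Xs, Xt, eps=1e-5):
    """Align source features Xs to target features Xt (CORAL sketch).
    Xs, Xt: (n_samples, n_features) arrays; labels are not needed,
    matching the unsupervised setting described above."""
    def mat_pow(C, p):
        # fractional power of a symmetric PSD matrix via eigendecomposition
        w, V = np.linalg.eigh(C)
        w = np.clip(w, eps, None)
        return (V * w**p) @ V.T

    d = Xs.shape[1]
    Cs = np.cov(Xs, rowvar=False) + eps * np.eye(d)  # source covariance
    Ct = np.cov(Xt, rowvar=False) + eps * np.eye(d)  # target covariance
    # whiten source, then re-color with the target covariance
    return Xs @ mat_pow(Cs, -0.5) @ mat_pow(Ct, 0.5)

# toy example: same Gaussian data, but per-feature scales differ by domain
rng = np.random.default_rng(0)
Xs = rng.normal(size=(200, 3)) * [1.0, 5.0, 0.5]   # "source" scales
Xt = rng.normal(size=(200, 3)) * [2.0, 1.0, 3.0]   # "target" scales
Xs_aligned = coral(Xs, Xt)
# after alignment, the source covariance approximately matches the target's
print(np.allclose(np.cov(Xs_aligned, rowvar=False),
                  np.cov(Xt, rowvar=False), atol=1e-2))
```

A classifier trained on `Xs_aligned` (with the original source labels) can then be applied to the unlabeled target data; deep variants of the same idea instead minimize a distribution-discrepancy loss inside a neural network.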

Year Released

2020

Key Links & Stats

ML Tasks

  1. Domain Adaptation

ML Platform

  1. Not Applicable

Modalities

  1. General

Verticals

  1. General

CG Platform

  1. Not Applicable

Related organizations

University of Georgia