Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks. However, in many applications it is prohibitively expensive and time-consuming to obtain large quantities of labeled data. To cope with limited labeled training data, a common strategy is to apply a model trained on a large-scale labeled source domain directly to a sparsely labeled or unlabeled target domain. Unfortunately, such direct transfer often performs poorly due to domain shift, also known as dataset bias. Domain adaptation is a machine learning paradigm that aims to learn a model from a source domain that performs well on a different but related target domain. In this paper, the authors review recent single-source deep unsupervised domain adaptation methods for visual tasks and discuss new perspectives for future research. The paper begins with definitions of the different domain adaptation strategies and descriptions of existing benchmark datasets. The authors then summarize and compare the main categories of single-source unsupervised domain adaptation methods: discrepancy-based methods, adversarial discriminative methods, adversarial generative methods, and self-supervision-based methods. Finally, they discuss future research directions, along with open challenges and possible solutions.
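To make the discrepancy-based family concrete, here is a minimal NumPy sketch (not taken from the paper; all names and parameter choices are illustrative) of the Maximum Mean Discrepancy (MMD), a statistic that discrepancy-based methods commonly minimize between source and target feature distributions:

```python
import numpy as np

def mmd2(source, target, sigma):
    """Biased estimate of squared Maximum Mean Discrepancy between two
    feature batches of shape (n, d), using a Gaussian (RBF) kernel."""
    def mean_kernel(a, b):
        # pairwise squared Euclidean distances, then RBF kernel, averaged
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2)).mean()
    return (mean_kernel(source, source)
            + mean_kernel(target, target)
            - 2 * mean_kernel(source, target))

rng = np.random.default_rng(0)
dim = 8
src = rng.normal(0.0, 1.0, size=(64, dim))        # "source" features
tgt_same = rng.normal(0.0, 1.0, size=(64, dim))   # same distribution
tgt_shift = rng.normal(2.0, 1.0, size=(64, dim))  # shifted distribution

sigma = dim ** 0.5  # bandwidth on the scale of typical pairwise distances
print(mmd2(src, tgt_same, sigma))   # close to 0
print(mmd2(src, tgt_shift, sigma))  # substantially larger
```

In a discrepancy-based adaptation method, a differentiable version of this term would be added to the task loss so that the feature extractor learns representations on which source and target are hard to tell apart.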
A Review of Single-Source Deep Unsupervised Visual Domain Adaptation