Unsupervised Domain Adaptation via Calibrating Uncertainties

Abstract

Unsupervised domain adaptation (UDA) aims at inferring class labels for an unlabeled target domain given a related labeled source dataset. Intuitively, a model trained on labeled data will produce high uncertainty estimates for unseen data. Under this assumption, models trained on the source domain would produce high uncertainties when tested on the target domain. In this work, we build on this assumption and propose to adapt between the source and target domains by calibrating their predictive uncertainties. We employ variational Bayes learning for uncertainty estimation, quantified as the predicted Rényi entropy on the target domain. We discuss the theoretical properties of our proposed framework and demonstrate its effectiveness on three domain-adaptation tasks.
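
As a point of reference (not the paper's implementation), the sketch below computes the standard Rényi entropy of a predicted class distribution; the choice of alpha = 2 and the example probability vectors are assumptions for illustration only.

```python
import numpy as np

def renyi_entropy(probs, alpha=2.0, eps=1e-12):
    """Rényi entropy of a discrete distribution:
    H_alpha(p) = 1/(1 - alpha) * log(sum_k p_k^alpha), for alpha != 1."""
    probs = np.clip(probs, eps, 1.0)
    return np.log(np.sum(probs ** alpha, axis=-1)) / (1.0 - alpha)

# A confident prediction yields low entropy; a near-uniform
# (uncertain) prediction yields high entropy.
confident = np.array([0.95, 0.03, 0.02])
uncertain = np.array([0.34, 0.33, 0.33])
print(renyi_entropy(confident))  # ~0.10
print(renyi_entropy(uncertain))  # ~1.10
```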

Publication
Computer Vision and Pattern Recognition 2019, Uncertainty and Robustness in Deep Visual Learning Workshop