Open Access
Deep cycle autoencoder for unsupervised domain adaptation with generative adversarial networks
Author(s) - Zhou Qiang, Zhou Wen'an, Yang Bin, Huan Jun
Publication year - 2019
Publication title - IET Computer Vision
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.38
H-Index - 37
eISSN - 1751-9640
pISSN - 1751-9632
DOI - 10.1049/iet-cvi.2019.0304
Subject(s) - discriminator , autoencoder , computer science , artificial intelligence , classifier (uml) , pattern recognition (psychology) , encoder , adversarial system , deep learning , feature learning , machine learning , telecommunications , detector , operating system
Deep learning is a powerful tool for domain adaptation because it can learn robust, high‐level domain‐invariant representations. Recently, adversarial domain adaptation models have been applied to learn such representations in an adversarial training manner in the feature space. However, existing models often ignore the generation process for domain adaptation. To tackle this problem, a deep cycle autoencoder (DCA) is proposed that integrates a generation procedure into adversarial adaptation methods. The proposed DCA consists of four parts: a shared encoder, two separate decoders, a discriminator and a linear classifier. With labelled source images and unlabelled target images as inputs, the encoder extracts high‐level representations for both the source and target domains, and the two decoders separately reconstruct the inputs from the latent representations. The shared encoder is pitted against the discriminator: the encoder tries to confuse the discriminator, while the discriminator aims to distinguish which domain the latent representations come from. DCA adopts both an adversarial loss and a maximum mean discrepancy (MMD) loss in the latent space for distribution alignment. The classifier is trained on both the original and reconstructed source image representations. Extensive experimental results demonstrate the effectiveness and reliability of the proposed method.
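To make the described architecture concrete, below is a minimal PyTorch sketch of the four components and the loss terms named in the abstract (reconstruction, adversarial, MMD alignment, and classification). Everything beyond what the abstract states is an assumption: the MLP layer sizes, the linear‐kernel MMD, the equal loss weighting, and the `DCA`, `mmd_loss` and `losses` names are illustrative only and do not reproduce the authors' implementation.

```python
import torch
import torch.nn as nn

class DCA(nn.Module):
    """Sketch of the deep cycle autoencoder components described in the abstract.
    Layer sizes and the MLP design are assumptions, not the paper's architecture."""
    def __init__(self, in_dim=784, latent_dim=128, num_classes=10):
        super().__init__()
        # Shared encoder: maps source and target images to latent representations.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, latent_dim), nn.ReLU(),
        )
        # Two separate decoders reconstruct the source and target inputs.
        self.decoder_src = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, in_dim),
        )
        self.decoder_tgt = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(), nn.Linear(512, in_dim),
        )
        # Domain discriminator: predicts which domain a latent code comes from.
        self.discriminator = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, 1),
        )
        # Linear classifier trained on original and reconstructed source representations.
        self.classifier = nn.Linear(latent_dim, num_classes)


def mmd_loss(zs, zt):
    """Linear-kernel MMD between source and target latent batches
    (a simplification; the paper's exact kernel choice is not given here)."""
    delta = zs.mean(dim=0) - zt.mean(dim=0)
    return (delta * delta).sum()


def losses(model, xs, ys, xt):
    """One illustrative set of loss terms; equal weights are an assumption."""
    bce, ce, mse = nn.BCEWithLogitsLoss(), nn.CrossEntropyLoss(), nn.MSELoss()

    zs, zt = model.encoder(xs), model.encoder(xt)

    # Generation: each decoder reconstructs its own domain's inputs.
    rec = mse(model.decoder_src(zs), xs) + mse(model.decoder_tgt(zt), xt)

    # Discriminator objective: label source latents 1 and target latents 0.
    d_src = model.discriminator(zs.detach())
    d_tgt = model.discriminator(zt.detach())
    disc = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))

    # Encoder's adversarial term: make target latents look like source latents.
    d_tgt_enc = model.discriminator(zt)
    adv = bce(d_tgt_enc, torch.ones_like(d_tgt_enc))

    # MMD alignment of the two latent distributions.
    align = mmd_loss(zs, zt)

    # Classifier on original and reconstructed source representations.
    zs_rec = model.encoder(model.decoder_src(zs))
    cls = ce(model.classifier(zs), ys) + ce(model.classifier(zs_rec), ys)

    return rec + adv + align + cls, disc
```

In this sketch the discriminator loss `disc` is computed on detached latents so that, in an alternating update scheme, the discriminator is trained separately from the encoder's adversarial term; how the paper actually schedules the minimax updates is not specified in the abstract.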
