Open Access
Cross-domain Generative Learning for Fine-Grained Sketch-Based Image Retrieval
Author(s) -
Kaiyue Pang,
Yi-Zhe Song,
Tony Xiang,
Timothy M. Hospedales
Publication year - 2017
Language(s) - English
Resource type - Conference proceedings
DOI - 10.5244/c.31.46
Subject(s) - sketch, computer science, generative grammar, domain (mathematical analysis), artificial intelligence, image retrieval, image (mathematics), computer vision, algorithm, mathematics, mathematical analysis
The key challenge in learning a fine-grained sketch-based image retrieval (FG-SBIR) model is bridging the domain gap between photo and sketch. Existing models learn a deep joint embedding space with discriminative losses, in which a photo and a sketch can be compared. In this paper, we propose a novel discriminative-generative hybrid model by introducing a generative task of cross-domain image synthesis. This task forces the learned embedding space to preserve all the domain-invariant information useful for cross-domain reconstruction, thus explicitly reducing the domain gap, in contrast to existing models. Extensive experiments on the largest FG-SBIR dataset, Sketchy [19], show that the proposed model significantly outperforms state-of-the-art discriminative FG-SBIR models.
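The following is a minimal, hypothetical sketch of the kind of discriminative-generative hybrid training the abstract describes, not the authors' implementation. All module names, network sizes, and loss weights are illustrative assumptions: a shared embedding is trained with a triplet loss (discriminative) alongside a decoder that reconstructs the photo from the sketch embedding (generative cross-domain synthesis).

# Hypothetical PyTorch sketch of a discriminative-generative hybrid for FG-SBIR.
# Architectures, sizes and weights are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Maps a 64x64 image to a D-dimensional, L2-normalised embedding (toy CNN)."""
    def __init__(self, in_ch=3, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 4, 2, 1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),      # 32 -> 16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),     # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, dim),
        )
    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

class Decoder(nn.Module):
    """Reconstructs a 64x64 photo from an embedding (cross-domain synthesis)."""
    def __init__(self, dim=128, out_ch=3):
        super().__init__()
        self.fc = nn.Linear(dim, 128 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),       # 8 -> 16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),        # 16 -> 32
            nn.ConvTranspose2d(32, out_ch, 4, 2, 1), nn.Sigmoid(),  # 32 -> 64
        )
    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 8, 8))

sketch_enc, photo_enc, decoder = Encoder(in_ch=1), Encoder(in_ch=3), Decoder()
triplet = nn.TripletMarginLoss(margin=0.2)
params = list(sketch_enc.parameters()) + list(photo_enc.parameters()) + list(decoder.parameters())
opt = torch.optim.Adam(params, lr=1e-4)

def training_step(sketch, photo_pos, photo_neg, lam=1.0):
    """One hybrid update: triplet loss (discriminative) + photo reconstruction
    from the sketch embedding (generative), weighted by an assumed lambda."""
    z_s = sketch_enc(sketch)
    z_p, z_n = photo_enc(photo_pos), photo_enc(photo_neg)
    loss = triplet(z_s, z_p, z_n) + lam * F.l1_loss(decoder(z_s), photo_pos)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Dummy usage with random tensors standing in for a batch of sketches and photos.
loss = training_step(torch.rand(4, 1, 64, 64), torch.rand(4, 3, 64, 64), torch.rand(4, 3, 64, 64))

The key design point illustrated here is that the reconstruction term only lets the decoder succeed if the sketch embedding retains the domain-invariant content of the paired photo, which is the mechanism the paper uses to shrink the domain gap beyond what discriminative losses alone achieve.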
