
Semantic combined network for zero‐shot scene parsing
Author(s) -
Wang Yinduo,
Zhang Haofeng,
Wang Shidong,
Long Yang,
Yang Longzhi
Publication year - 2020
Publication title -
IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2019.0870
Subject(s) - parsing , computer science , artificial intelligence , machine learning , pattern recognition (psychology) , semantics (computer science) , natural language processing
Recently, image‐based scene parsing has attracted increasing attention due to its wide range of applications. However, conventional models are only valid on images from the same domain as the training set and are typically trained using discrete, meaningless labels. Inspired by traditional zero‐shot learning methods, which employ auxiliary side information to bridge the source and target domains, the authors propose a novel framework called semantic combined network (SCN), which learns a scene parsing model from images of the seen classes alone while targeting the unseen ones. In addition, with the assistance of semantic embeddings of classes, the proposed SCN can further improve the performance of traditional fully supervised scene parsing methods. Extensive experiments are conducted on the Cityscapes dataset, and the results show that the proposed SCN performs well in both the zero‐shot scene parsing (ZSSP) and generalised ZSSP settings across several state‐of‐the‐art scene parsing architectures. Furthermore, the authors test the proposed model under the traditional fully supervised setting, and the results show that the proposed SCN also significantly improves the performance of the original network models.
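The zero‐shot mechanism the abstract alludes to, scoring visual features against semantic class embeddings so that classes never seen during training can still be predicted, can be sketched roughly as follows. This is a minimal illustration only, not the authors' actual SCN: the projection matrix `W`, the embedding dimensions, and the class vectors are all hypothetical placeholders.

```python
import numpy as np

def zero_shot_labels(features, class_embeddings, W):
    """Assign each pixel feature the class whose semantic embedding is
    most similar (by cosine similarity) to the projected feature.

    features:          (N, d) visual features, one row per pixel
    class_embeddings:  (C, k) semantic vectors (e.g. word embeddings),
                       including vectors for unseen classes
    W:                 (d, k) visual-to-semantic projection, assumed to
                       have been learned on seen classes (hypothetical)
    """
    proj = features @ W                                    # (N, k)
    proj = proj / (np.linalg.norm(proj, axis=1, keepdims=True) + 1e-12)
    emb = class_embeddings / (
        np.linalg.norm(class_embeddings, axis=1, keepdims=True) + 1e-12
    )
    scores = proj @ emb.T                                  # (N, C) cosine scores
    return scores.argmax(axis=1)                           # class index per pixel
```

Because classification happens in the shared semantic space rather than over a fixed label set, adding an unseen class at test time only requires appending its embedding vector to `class_embeddings`.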