
Widening residual skipped network for semantic segmentation
Author(s) - Su Wen, Wang Zengfu
Publication year - 2017
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2017.0070
Subject(s) - residual , segmentation , computer science , artificial intelligence , natural language processing , information retrieval , computer vision , algorithm
Over the past two years, deep convolutional neural networks have pushed the performance of computer vision systems to soaring heights on semantic segmentation. In this study, the authors present a novel semantic segmentation method using a deep fully convolutional neural network to achieve segmentation results with more precise boundary localisation. The segmentation engine is trainable and consists of an encoder network with widening residual skipped connections and a decoder network with a pixel‐wise classification layer. The encoder network with widening residual skipped connections allows the combination of shallow-layer features and deep-layer semantic features, while the decoder network with the classification layer maps the low‐resolution encoder features to a full-resolution image with pixel‐wise classification. Experimental results on the PASCAL VOC 2012 semantic segmentation dataset and the Cityscapes dataset show that the proposed method is effective and competitive.
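To make the described architecture concrete, the following is a minimal PyTorch sketch of an encoder-decoder segmentation network in which shallow encoder features are fused into the decoder through residual (element-wise addition) skip connections and a 1x1 convolution performs the pixel-wise classification. The class and function names (SkipSegNet, conv_block), the layer widths, the network depth, and the use of addition with 1x1 projections for the skips are illustrative assumptions, not the authors' published configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class SkipSegNet(nn.Module):
    """Encoder-decoder with residual skip connections and a
    pixel-wise classification layer (illustrative widths/depth)."""

    def __init__(self, num_classes=21, width=64):
        super().__init__()
        # Encoder: progressively wider feature maps ("widening").
        self.enc1 = conv_block(3, width)
        self.enc2 = conv_block(width, width * 2)
        self.enc3 = conv_block(width * 2, width * 4)
        self.pool = nn.MaxPool2d(2)
        # Decoder: upsample and fuse with encoder features via
        # 1x1 projections and element-wise addition (residual skips).
        self.proj2 = nn.Conv2d(width * 2, width * 2, 1)
        self.dec2 = conv_block(width * 4, width * 2)
        self.proj1 = nn.Conv2d(width, width, 1)
        self.dec1 = conv_block(width * 2, width)
        # Pixel-wise classification: 1x1 conv to per-class scores.
        self.classifier = nn.Conv2d(width, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)              # full resolution
        e2 = self.enc2(self.pool(e1))  # 1/2 resolution
        e3 = self.enc3(self.pool(e2))  # 1/4 resolution

        d2 = self.dec2(F.interpolate(e3, scale_factor=2,
                                     mode="bilinear", align_corners=False))
        d2 = d2 + self.proj2(e2)       # skip from a shallower encoder layer

        d1 = self.dec1(F.interpolate(d2, scale_factor=2,
                                     mode="bilinear", align_corners=False))
        d1 = d1 + self.proj1(e1)       # skip from the shallowest encoder layer

        return self.classifier(d1)     # per-pixel class scores at full resolution


if __name__ == "__main__":
    net = SkipSegNet(num_classes=21)  # 21 classes as in PASCAL VOC 2012
    logits = net(torch.randn(1, 3, 128, 128))
    print(logits.shape)  # torch.Size([1, 21, 128, 128])
```

The additive skips here illustrate how shallow, high-resolution encoder features can be combined with deep semantic decoder features so that boundary detail is preserved in the final pixel-wise predictions; the paper's exact skip formulation may differ.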