Exploring New Backbone and Attention Module for Semantic Segmentation in Street Scenes
Author(s) -
Lei Fan,
Wei-Chien Wang,
Fuyuan Zha,
Jiapeng Yan
Publication year - 2018
Publication title -
IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2018.2880877
Subject(s) - aerospace, bioengineering, communication, networking and broadcast technologies, components, circuits, devices and systems, computing and processing, engineered materials, dielectrics and plasmas, engineering profession, fields, waves and electromagnetics, general topics for engineers, geoscience, nuclear engineering, photonics and electrooptics, power, energy and industry applications, robotics and control systems, signal processing and analysis, transportation
Semantic segmentation, as a dense pixel-wise classification task, plays an important role in scene understanding. Many state-of-the-art works face two main challenges: 1) the backbones of most segmentation models are taken from pretrained classification models and perform poorly on small categories because they lack spatial information, and 2) the gap in combining high-level and low-level features in segmentation models leads to inaccurate predictions. To handle these challenges, in this paper we propose a new tailored backbone and an attention select module for segmentation tasks. Specifically, our new backbone is modified from the original ResNet and yields better segmentation performance. The attention select module employs spatial and channel self-attention mechanisms to reinforce the propagation of contextual features, aggregating semantic and spatial information simultaneously. In addition, based on our new backbone and attention select module, we further propose a segmentation model for street-scene understanding. We conducted a series of ablation studies on two public benchmarks, Cityscapes and CamVid, to demonstrate the effectiveness of our proposals. Our model achieves a mIoU score of 71.5% on the Cityscapes test set using only the fine annotation data and 60.1% on the CamVid test set.
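The spatial and channel self-attention described in the abstract can be illustrated with a minimal NumPy sketch. This is not the paper's exact attention select module (the learned query/key/value projections and any fusion layers are omitted); it only shows the two affinity computations the abstract refers to: position-to-position attention over a flattened feature map, and channel-to-channel attention, each with a residual connection.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax along the given axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def spatial_self_attention(x):
    # x: (C, H, W) feature map; attention over the N = H*W positions.
    C, H, W = x.shape
    f = x.reshape(C, H * W)            # (C, N)
    attn = softmax(f.T @ f, axis=-1)   # (N, N) affinity between positions
    out = f @ attn.T                   # each position aggregates all positions
    return out.reshape(C, H, W) + x    # residual connection

def channel_self_attention(x):
    # x: (C, H, W) feature map; attention over the C channels.
    C, H, W = x.shape
    f = x.reshape(C, H * W)            # (C, N)
    attn = softmax(f @ f.T, axis=-1)   # (C, C) affinity between channels
    out = attn @ f                     # each channel aggregates all channels
    return out.reshape(C, H, W) + x    # residual connection
```

In a real network both maps would be computed from learned 1x1-convolution projections and combined with the backbone features; this sketch only demonstrates how the two affinity matrices differ in shape and what each one attends over.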