Open Access
Parallel global convolutional network for semantic image segmentation
Author(s) - Bai Xing, Zhou Jun
Publication year - 2021
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/ipr2.12025
Subject(s) - computer science , artificial intelligence , computer vision , convolutional neural network , deep learning , image segmentation , segmentation , encoder , pixel , pattern recognition
In this paper, a novel convolutional neural network for fast semantic segmentation is presented. Deep convolutional neural networks have achieved great progress in visual scene understanding, but their accuracy gains come mainly from increased depth and width, which makes large networks slow and power-hungry. A fast and efficient convolutional neural network, PGCNet, aimed at segmenting high-resolution images at high speed, is introduced. Compared with competitive methods, the resulting model achieves high performance with fewer parameters and floating-point operations. First, a lightweight general-purpose architecture pre-trained on ImageNet serves as the main encoder. Second, a novel lateral connection module is proposed to better transmit features from the encoder to the decoder. Third, a powerful module termed the PGCN block is proposed to extract features from each encoder stage, and an edge decoder supervises pixels on the boundaries of stuff and things during training. Experiments show that this method has clear advantages: based on the proposed PGCNet, 75.8% mean IoU is achieved on the Cityscapes test set at 35.4 Hz on a standard Cityscapes image on a GTX 1080 Ti.
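The abstract does not describe the internal structure of the PGCN block, but the name points to the global convolutional network idea of approximating a large k x k kernel with cheap separable convolutions run in parallel. The PyTorch sketch below illustrates that general pattern only; the class name PGCBlock, the kernel size k = 7, and the two-branch layout are illustrative assumptions, not the paper's exact design.

    import torch
    import torch.nn as nn

    class PGCBlock(nn.Module):
        """Sketch of a parallel global-convolution block.

        Two parallel separable branches approximate a dense k x k kernel:
        one applies (k x 1) then (1 x k), the other (1 x k) then (k x 1).
        Summing them yields a large effective receptive field at a
        fraction of the parameters and FLOPs of a dense k x k kernel.
        """
        def __init__(self, in_ch: int, out_ch: int, k: int = 7):
            super().__init__()
            p = k // 2  # padding that preserves spatial size
            self.branch_a = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=(k, 1), padding=(p, 0)),
                nn.Conv2d(out_ch, out_ch, kernel_size=(1, k), padding=(0, p)),
            )
            self.branch_b = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=(1, k), padding=(0, p)),
                nn.Conv2d(out_ch, out_ch, kernel_size=(k, 1), padding=(p, 0)),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # Element-wise sum of the two separable branches.
            return self.branch_a(x) + self.branch_b(x)

    # Usage on a hypothetical encoder-stage feature map.
    block = PGCBlock(256, 19)            # 19 = Cityscapes class count
    feat = torch.randn(1, 256, 64, 128)  # (batch, channels, H, W)
    out = block(feat)                    # shape: (1, 19, 64, 128)

The separable factorization is the standard way such blocks keep a model fast on high-resolution inputs: two parallel branches cost roughly 4k channel-wise multiplications per output position instead of the k * k of a dense kernel.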
