Open Access
Semantic Segmentation of Urban Street Scenes Using Deep Learning
Author(s) -
Amani Y. Noori,
Shaimaa H. Shaker,
Raghad Abdulaali Azeez
Publication year - 2022
Publication title -
Webology
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.259
H-Index - 18
ISSN - 1735-188X
DOI - 10.14704/web/v19i1/web19156
Subject(s) - computer science, artificial intelligence, python (programming language), segmentation, computer vision, inference, object detection, pixel, robotics, task (project management), image segmentation, deep learning, pattern recognition (psychology), robot, programming language, management, economics
Scene classification is an essential perception task used in robotics for understanding the environment. Outdoor scenes such as urban street scenes consist of images with depth that exhibit far greater variety than iconic object images. Semantic segmentation is an important task for autonomous driving and mobile robotics applications because it provides the rich information needed for safe navigation and complex reasoning. This paper introduces a model that classifies every pixel in an image and predicts the object to which each pixel belongs. The model adapts the well-known VGG16 image classification network into a fully convolutional network (FCN-8) and transfers the learned representation by fine-tuning it for segmentation. A skip architecture is added between layers to combine coarse semantic information with local appearance information and produce accurate segmentations. The model is robust and efficient, consuming little memory and achieving fast inference times during training and testing on the CamVid dataset. The system was implemented on a computer equipped with an NVIDIA GeForce RTX 2060 GPU with 6 GB of memory and programmed in Python 3.7. The proposed system reached an accuracy of 0.8804 and a mean intersection over union (mIoU) of 73% on the CamVid dataset.
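
As a rough illustration of the architecture the abstract describes (VGG16 converted into a fully convolutional FCN-8 network with a skip architecture), here is a minimal PyTorch sketch. The framework choice, the class name FCN8s, the 11-class CamVid label set, and the exact torchvision layer split are assumptions made for illustration, not details taken from the paper.

import torch
import torch.nn as nn
from torchvision import models

class FCN8s(nn.Module):
    def __init__(self, num_classes=11):  # CamVid is often used with 11 classes (assumed)
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)
        feats = vgg.features
        self.to_pool3 = feats[:17]    # stride-8 features, 256 channels
        self.to_pool4 = feats[17:24]  # stride-16 features, 512 channels
        self.to_pool5 = feats[24:]    # stride-32 features, 512 channels
        # 1x1 "score" convolutions take the place of VGG's fully connected layers
        self.score5 = nn.Conv2d(512, num_classes, 1)
        self.score4 = nn.Conv2d(512, num_classes, 1)
        self.score3 = nn.Conv2d(256, num_classes, 1)
        # learned upsampling via transposed convolutions
        self.up2a = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up2b = nn.ConvTranspose2d(num_classes, num_classes, 4, stride=2, padding=1)
        self.up8 = nn.ConvTranspose2d(num_classes, num_classes, 16, stride=8, padding=4)

    def forward(self, x):  # x: (N, 3, H, W) with H, W divisible by 32
        p3 = self.to_pool3(x)
        p4 = self.to_pool4(p3)
        p5 = self.to_pool5(p4)
        # skip architecture: fuse coarse semantics with finer localization
        s = self.up2a(self.score5(p5)) + self.score4(p4)
        s = self.up2b(s) + self.score3(p3)
        return self.up8(s)  # per-pixel class scores at input resolution

The skip architecture is visible in forward(): coarse stride-32 predictions are upsampled and summed with finer stride-16 and stride-8 predictions before the final 8x upsampling, which is what lets the network combine coarse, semantic, and local appearance information into sharp segmentations.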
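The abstract reports pixel accuracy and mean intersection over union (mIoU), which averages the per-class IoU over the label set. The NumPy sketch below shows one common way to compute it from integer label maps; the function name and the ignore_index handling are illustrative assumptions, not taken from the paper.

import numpy as np

def mean_iou(pred, target, num_classes, ignore_index=None):
    """Mean intersection over union across classes, from integer label maps."""
    ious = []
    for c in range(num_classes):
        if c == ignore_index:
            continue
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both prediction and ground truth
            ious.append(inter / union)
    return float(np.mean(ious))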
