Open Access
Multi‐modal deep network for RGB‐D segmentation of clothes
Author(s) - Joukovsky B., Hu P., Munteanu A.
Publication year - 2020
Publication title - Electronics Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.375
H-Index - 146
eISSN - 1350-911X
pISSN - 0013-5194
DOI - 10.1049/el.2019.4150
Subject(s) - rgb color model, artificial intelligence, segmentation, computer science, ground truth, computer vision, pipeline (software), modal, convolutional neural network, deep learning, pattern recognition (psychology), process (computing), polymer chemistry, chemistry, programming language, operating system
In this Letter, the authors propose a deep-learning-based method to perform semantic segmentation of clothes from RGB‐D images of people. First, they present a synthetic dataset containing more than 50,000 RGB‐D samples of characters in different clothing styles, featuring various poses and environments, for a total of nine semantic classes. The proposed data generation pipeline allows for fast production of RGB images, depth images, and ground‐truth label maps. Second, a novel multi‐modal encoder–decoder convolutional network is proposed which operates on the RGB and depth modalities. Multi‐modal features are merged using trained fusion modules that apply multi‐scale atrous convolutions in the fusion process. The method is evaluated numerically on synthetic data and assessed visually on real‐world data. The experiments demonstrate the effectiveness of the proposed model compared with existing methods.
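The abstract describes trained fusion modules that merge RGB and depth features using multi-scale atrous convolutions. Below is a minimal PyTorch sketch of what such a fusion module could look like; the class name, the dilation rates (1, 2, 4), and the channel layout are illustrative assumptions for exposition, not the authors' published design.

```python
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    """Illustrative RGB-D fusion block: concatenate the two modalities,
    process them with parallel atrous (dilated) convolution branches at
    several rates, then project back to a single feature map."""

    def __init__(self, channels: int, dilations=(1, 2, 4)):
        super().__init__()
        # One atrous branch per dilation rate; each branch sees the
        # concatenated RGB + depth features (2 * channels inputs).
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=3,
                          padding=d, dilation=d, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for d in dilations
        )
        # 1x1 convolution merging the multi-scale branch outputs.
        self.project = nn.Conv2d(len(dilations) * channels, channels,
                                 kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor,
                depth_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb_feat, depth_feat], dim=1)
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.project(multi_scale)

# Usage: fuse 64-channel RGB and depth features at one encoder stage.
fuse = FusionModule(channels=64)
fused = fuse(torch.randn(1, 64, 60, 80), torch.randn(1, 64, 60, 80))
print(fused.shape)  # torch.Size([1, 64, 60, 80])
```

Matching padding to the dilation rate keeps each branch's output at the input resolution, so the branches enlarge the receptive field at multiple scales without any downsampling before the 1x1 merge.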
