Weakly Supervised Part‐wise 3D Shape Reconstruction from Single‐View RGB Images
Author(s) - Niu Chengjie, Yu Yang, Bian Zhenwei, Li Jun, Xu Kai
Publication year - 2020
Publication title - Computer Graphics Forum
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.578
H-Index - 120
eISSN - 1467-8659
pISSN - 0167-7055
DOI - 10.1111/cgf.14158
Subject(s) - artificial intelligence, computer science, projection (relational algebra), computer vision, point cloud, 3D reconstruction, artificial neural network, deep learning, iterative reconstruction, RGB color model, point (geometry), image (mathematics), differentiable function, pattern recognition (psychology), algorithm, mathematics, geometry, mathematical analysis
Abstract - In order for deep learning models to truly understand 2D images for 3D geometry recovery, we argue that single-view reconstruction should be learned in a part-aware and weakly supervised manner. Such models lead to a more profound interpretation of 2D images, in which part-based parsing and assembling are involved. To this end, we learn a deep neural network that takes a single-view RGB image as input and outputs a 3D shape in parts, represented as 3D point clouds produced by an array of 3D part generators. In particular, we devise two levels of generative adversarial networks (GANs) to generate shapes with both correct part geometry and a reasonable overall structure. To enable self-taught network training, we devise a differentiable projection module along with a self-projection loss that measures the error between the shape projection and the input image. The training data in our method is unpaired between the 2D images and the part-decomposed 3D shapes. Through qualitative and quantitative evaluations on public datasets, we show that our method achieves good performance in part-wise single-view reconstruction.
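The differentiable projection module and self-projection loss described in the abstract can be illustrated with a minimal sketch: the generated point cloud is softly rasterized into a 2D silhouette and compared against the silhouette of the input image. The orthographic camera, the Gaussian splatting kernel, and the names project_points and self_projection_loss below are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch of a differentiable point-cloud projection with a
# self-projection loss. Assumes an orthographic camera and Gaussian
# splatting; these choices are illustrative, not the authors' method.
import torch
import torch.nn.functional as F


def project_points(points, image_size=64, sigma=0.05):
    """Softly rasterize an (N, 3) point cloud into a 2D silhouette.

    points: tensor of shape (N, 3), coordinates roughly in [-1, 1].
    Returns an (image_size, image_size) occupancy map in (0, 1).
    """
    # Orthographic projection: keep (x, y), drop the depth coordinate.
    xy = points[:, :2]                                    # (N, 2)

    # Pixel-center grid expressed in the same [-1, 1] frame.
    coords = torch.linspace(-1.0, 1.0, image_size, device=points.device)
    gy, gx = torch.meshgrid(coords, coords, indexing="ij")
    grid = torch.stack([gx, gy], dim=-1)                  # (H, W, 2)

    # Squared distance from every pixel to every projected point.
    diff = grid.unsqueeze(2) - xy.view(1, 1, -1, 2)       # (H, W, N, 2)
    dist2 = (diff ** 2).sum(dim=-1)                       # (H, W, N)

    # Gaussian splatting: a pixel is occupied if any point lies nearby.
    weights = torch.exp(-dist2 / (2.0 * sigma ** 2))
    silhouette = 1.0 - torch.prod(1.0 - weights, dim=-1)  # (H, W)
    return silhouette.clamp(1e-6, 1.0 - 1e-6)


def self_projection_loss(points, target_mask, sigma=0.05):
    """Error between the projected shape and the input-image silhouette."""
    pred = project_points(points, image_size=target_mask.shape[-1], sigma=sigma)
    return F.binary_cross_entropy(pred, target_mask)


if __name__ == "__main__":
    pts = torch.rand(1024, 3) * 2.0 - 1.0         # toy point cloud
    mask = (torch.rand(64, 64) > 0.5).float()     # toy input silhouette
    print(self_projection_loss(pts, mask).item())
```

The soft Gaussian rasterization is what keeps the projection differentiable with respect to the point coordinates, so a loss of this kind can be back-propagated into the part generators without paired 2D-3D supervision.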
