Open Access
Trilateral convolutional neural network for 3D shape reconstruction of objects from a single depth view
Author(s) - Rivera Patricio, Valarezo Añazco Edwin, Choi MunTaek, Kim TaeSeong
Publication year - 2019
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/iet-ipr.2019.0532
Subject(s) - computer science, convolutional neural network, artificial intelligence, computer vision, artificial neural network, pattern recognition (psychology)
In this study, the authors propose a novel three‐dimensional (3D) convolutional neural network for shape reconstruction, a trilateral convolutional neural network (Tri‐CNN), which operates on a single depth view. The proposed approach produces a 3D voxel representation of an object from the partial object surface captured in a single depth image. Tri‐CNN combines three dilated convolutions in 3D to expand the convolutional receptive field more efficiently and thereby learn shape reconstructions. To evaluate the reconstruction performance of Tri‐CNN, the publicly available ShapeNet and Big Data for Grasp Planning data sets are utilised. Reconstruction performance is compared against four conventional deep learning approaches, namely a fully connected convolutional neural network, a baseline CNN, an autoencoder CNN, and a generative adversarial reconstruction network. The experimental results show that Tri‐CNN produces superior reconstruction results in terms of intersection over union values and Brier scores, with a significantly smaller number of model parameters and lower memory usage.
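
The abstract describes the core idea of Tri‐CNN as three parallel dilated 3D convolutions used to widen the receptive field over a voxel grid. The sketch below illustrates that idea in PyTorch under stated assumptions: the dilation rates (1, 2, 4), the channel widths, the ReLU activations, and the 1×1×1 fusion convolution are illustrative choices of mine, not the published Tri‐CNN configuration.

```python
import torch
import torch.nn as nn


class TriDilatedBlock3D(nn.Module):
    """Hypothetical trilateral block: three parallel 3D dilated convolutions
    whose outputs are concatenated and fused by a 1x1x1 convolution.
    Dilation rates, channel widths, and the fusion step are assumptions for
    illustration, not the authors' exact architecture."""

    def __init__(self, in_ch, out_ch, dilations=(1, 2, 4)):
        super().__init__()
        # One 3x3x3 dilated convolution per branch; setting padding equal to
        # the dilation rate keeps the voxel grid size unchanged, so the three
        # branch outputs can be concatenated along the channel axis.
        self.branches = nn.ModuleList([
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=d, dilation=d)
            for d in dilations
        ])
        self.fuse = nn.Conv3d(out_ch * len(dilations), out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        feats = [self.act(branch(x)) for branch in self.branches]
        return self.act(self.fuse(torch.cat(feats, dim=1)))


if __name__ == "__main__":
    # A single-channel 32^3 occupancy grid standing in for the partial
    # surface voxelised from one depth view (grid size is an assumption).
    depth_voxels = torch.rand(1, 1, 32, 32, 32)
    block = TriDilatedBlock3D(in_ch=1, out_ch=16)
    out = block(depth_voxels)
    print(out.shape)  # torch.Size([1, 16, 32, 32, 32])
```

Using parallel branches with increasing dilation lets each output voxel aggregate context at several scales without adding the parameters a single large dense kernel would require, which is consistent with the abstract's claim of a smaller model footprint.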
