
Grasping points detection of garments based on deep learning
Author(s) - Chunhua Shan, Yuantao Xie
Publication year - 2021
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1871/1/012100
Subject(s) - artificial intelligence , computer science , visibility , task (project management) , computer vision , robot , deep learning , artificial neural network , clothing , engineering , archaeology , physics , systems engineering , optics , history
Robotic manipulation of rigid objects is a relatively easy task, while grasping highly deformable objects such as garments is still a big challenge for robots. This research detects the grasping points of a hanging garment using deep learning to facilitate robot manipulation. A neural network is proposed to predict the Cartesian coordinates and visibility of predefined grasping points. To reduce the impact of different clothing colors, depth images are used as the input to the model. Labeling data manually is inaccurate and unrealistic because of the complex dynamics of clothes; therefore, a synthetic dataset is leveraged to train the neural network. This paper uses a generative adversarial network (GAN) to translate synthetic data to real data. A real dataset captured by an Azure Kinect sensor is used as the test dataset. The experimental results indicate that our method provides accurate predictions of the grasping points and can be applied to real scenarios.
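The abstract describes a network that takes a depth image and outputs, for each predefined grasping point, its Cartesian coordinates and a visibility score. Below is a minimal sketch of such a model in PyTorch; the backbone, layer sizes, number of grasping points, and input resolution are illustrative assumptions, not the authors' exact architecture.

```python
# Minimal sketch (PyTorch) of a depth-image network that regresses the
# Cartesian coordinates and predicts the visibility of K predefined
# grasping points. K, the backbone, and all layer sizes are assumptions.
import torch
import torch.nn as nn

class GraspPointNet(nn.Module):
    def __init__(self, num_points: int = 4):  # 4 grasping points assumed
        super().__init__()
        self.num_points = num_points
        # Small convolutional backbone over the single-channel depth image.
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Regression head: (x, y, z) coordinates per grasping point.
        self.coords = nn.Linear(128, num_points * 3)
        # Classification head: visibility probability per grasping point.
        self.visibility = nn.Linear(128, num_points)

    def forward(self, depth):
        feat = self.backbone(depth).flatten(1)
        xyz = self.coords(feat).view(-1, self.num_points, 3)
        vis = torch.sigmoid(self.visibility(feat))
        return xyz, vis

# Example: one depth frame at 640x576 (the Azure Kinect NFOV depth resolution).
model = GraspPointNet()
xyz, vis = model(torch.randn(1, 1, 576, 640))
print(xyz.shape, vis.shape)  # torch.Size([1, 4, 3]) torch.Size([1, 4])
```

In practice such a model would be trained on the synthetic depth dataset and evaluated on the GAN-translated or real Azure Kinect frames, as the abstract outlines.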