Open Access
Detection of material on a tray in automatic assembly line based on convolutional neural network
Author(s) - Hu Dunli, Zhang Yuting, Xufeng Li, Zhang Xiaoping
Publication year - 2021
Publication title - IET Image Processing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.401
H-Index - 45
eISSN - 1751-9667
pISSN - 1751-9659
DOI - 10.1049/ipr2.12302
Subject(s) - tray, convolutional neural network, computer science, artificial intelligence, segmentation, pattern recognition (psychology), intersection (aeronautics), blank, process (computing), computer vision, feature extraction, engineering, mechanical engineering, aerospace engineering, operating system
Abstract - In the process of detecting materials inside a tray on an automated production line, it is necessary to detect not only the known materials and blank spaces in the designated areas, but also unknown materials misplaced inside the tray. However, supervised detection algorithms based on deep learning can only detect known material and blank areas. Therefore, this paper proposes a two-stage material detection method: the first stage detects the tray, and the second stage identifies the material areas. To improve tray detection accuracy in the first stage under a high intersection-over-union (IoU) ratio, an improved YOLOv5s tray detection method is proposed. The YOLOv5s structure is augmented with SENet, and the rich geometric information of the shallow layers is fused with high-level semantic information through bypass feature connections. The mAP@0.5:0.95 of the improved model increased from 95.7% to 96.6%, and the mAP@0.95 increased from 78.5% to 90.8%. The challenge of detecting unknown, misplaced materials on a tray is addressed by recognizing material-area segmentation images processed with the improved pre-detection algorithm, together with the relative position of the material area with respect to the tray. The experimental results show that the proposed method meets industrial detection requirements, with an overall recognition accuracy of 91% within a 250 ms detection interval.
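
The abstract states that the YOLOv5s structure is improved using SENet. As a rough illustration of what such a channel-attention unit looks like, the sketch below implements a standard squeeze-and-excitation block in PyTorch; the reduction ratio, class name, and the point at which it would be inserted into YOLOv5s are assumptions for illustration, not details taken from the paper.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation channel attention.

    Assumption: reduction ratio of 16 and use as a drop-in module after a
    convolutional block; the paper's exact configuration may differ.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average over H x W
        self.fc = nn.Sequential(             # excitation: bottleneck MLP + gating
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.pool(x).view(b, c)           # (B, C) channel descriptors
        w = self.fc(w).view(b, c, 1, 1)       # per-channel weights in (0, 1)
        return x * w                          # reweight feature maps channel-wise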
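
The abstract also describes using the relative position between a material area and the detected tray to flag unknown, misplaced material. The minimal sketch below shows one way such a relative-position check could be expressed: the centre of a material detection is normalised to the tray's bounding box and mapped to a slot of an assumed grid layout. The grid size, tolerance-free mapping, and function names are hypothetical and serve only to illustrate the idea.

from typing import Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image pixels

def to_tray_coords(material_box: Box, tray_box: Box) -> Tuple[float, float]:
    """Return the material-box centre normalised to the tray box (0..1 per axis)."""
    tx1, ty1, tx2, ty2 = tray_box
    mx = (material_box[0] + material_box[2]) / 2.0
    my = (material_box[1] + material_box[3]) / 2.0
    return (mx - tx1) / (tx2 - tx1), (my - ty1) / (ty2 - ty1)

def grid_cell(material_box: Box, tray_box: Box,
              rows: int = 2, cols: int = 3) -> Tuple[int, int]:
    """Map a material region to a (row, col) slot of an assumed rows x cols tray layout."""
    u, v = to_tray_coords(material_box, tray_box)
    col = min(max(int(u * cols), 0), cols - 1)
    row = min(max(int(v * rows), 0), rows - 1)
    return row, col

# A detection whose centre falls outside the tray box, or into a slot whose
# predicted class disagrees with the slot's expected material, would be
# flagged as misplaced/unknown under this illustrative scheme.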