Open Access
Multi-View Gait Recognition Based on a Spatial-Temporal Deep Neural Network
Author(s) - Suibing Tong, Yuzhuo Fu, Xinwei Yue, Hefei Ling
Publication year - 2018
Publication title - IEEE Access
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.587
H-Index - 127
ISSN - 2169-3536
DOI - 10.1109/access.2018.2874073
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
This paper proposes a novel spatial-temporal deep neural network (STDNN) for multi-view gait recognition. The STDNN comprises a temporal feature network (TFN) and a spatial feature network (SFN). In the TFN, a feature sub-network extracts low-level edge features from gait silhouettes; these features feed a spatial-temporal gradient (STG) network, which combines an STG unit with a long short-term memory (LSTM) unit to extract STG features. In the SFN, the spatial features of gait sequences are extracted from a gait energy image by multilayer convolutional neural networks. The SFN is optimized jointly by a classification loss and a verification loss, which makes inter-class variations larger than intra-class variations. After training, the TFN and the SFN extract temporal and spatial features, respectively, for multi-view gait recognition, and the combined predicted probability is used to identify individuals by the differences in their gaits. Extensive evaluations on the CASIA-B, OU-ISIR, and CMU MoBo data sets show that the STDNN achieves best recognition scores of 95.67% under an identical view, 93.64% under a cross-view setting, and 92.54% under a multi-view setting. Compared with state-of-the-art approaches in various situations, the STDNN outperforms the other methods, demonstrating its potential for practical applications.
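The abstract describes combining the two branches at the decision level: the TFN and SFN each produce class scores, and a "combined predicted probability" selects the subject. The exact fusion rule is not given in the abstract, so the following is a minimal sketch assuming a simple weighted average of per-branch softmax probabilities; the function names, the `alpha` weight, and the equal-weight default are illustrative assumptions, not details from the paper.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def fuse_predictions(tfn_logits, sfn_logits, alpha=0.5):
    """Score-level fusion of the temporal (TFN) and spatial (SFN)
    branch outputs. `alpha` weights the temporal branch; the 0.5
    default is an assumption, not taken from the paper."""
    p_temporal = softmax(tfn_logits)
    p_spatial = softmax(sfn_logits)
    return alpha * p_temporal + (1.0 - alpha) * p_spatial

def identify(tfn_logits, sfn_logits, alpha=0.5):
    # Predicted subject ID = argmax of the combined probability.
    return int(np.argmax(fuse_predictions(tfn_logits, sfn_logits, alpha)))

# Toy usage: per-class logits for a 3-subject gallery from each branch.
tfn_scores = np.array([1.0, 3.0, 0.5])
sfn_scores = np.array([0.8, 2.5, 0.2])
subject = identify(tfn_scores, sfn_scores)
```

Because each branch's softmax output sums to one, any `alpha` in [0, 1] keeps the fused scores a valid probability distribution, so the argmax decision remains well defined regardless of the weighting.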
