
Explainable gait recognition with prototyping encoder–decoder
Author(s) - Jucheol Moon, Yong-Min Shin, Jin-Duk Park, Nelson Hebert Minaya, Won-Yong Shin, SangIl Choi
Publication year - 2022
Publication title - PLOS ONE
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.99
H-Index - 332
ISSN - 1932-6203
DOI - 10.1371/journal.pone.0264783
Subject(s) - interpretability , computer science , gait , wearable computer , wearable technology , artificial intelligence , artificial neural network , encoder , transparency (behavior) , sensitivity (control systems) , inertial measurement unit , machine learning , data mining , pattern recognition (psychology) , embedded system , engineering , medicine , physical medicine and rehabilitation
Human gait is a behavioral characteristic distinctive enough to identify individuals. Collecting gait information at scale with wearable devices and recognizing people from those data has become an active research topic. While most prior studies collected gait information using inertial measurement units, we gather data from 40 people using insoles equipped with pressure sensors and use the pressure data to precisely identify the gait phases within long time series. For the recognition task, several recent studies have proposed neural network-based approaches to the open-set gait recognition problem with wearable devices. These approaches typically determine decision boundaries in the latent space from a limited number of samples and are therefore sensitive to the choice of hyper-parameter values. Motivated by this, as our first contribution we propose a new prototyping encoder–decoder network architecture that is less sensitive to such changes. As our second contribution, to overcome the inherent lack of transparency and interpretability in neural networks, we propose a module that analyzes which parts of the input are relevant to the overall recognition performance, using explainability tools such as sensitivity analysis (SA) and layer-wise relevance propagation (LRP).
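
As a rough illustration of the prototyping idea summarized above, the sketch below (a PyTorch sketch with illustrative layer sizes and a hypothetical distance threshold, not the authors' actual architecture) pairs an encoder–decoder with one learnable prototype per enrolled subject; open-set decisions assign a sample to the nearest prototype, or to "unknown" when even the nearest prototype is too far away.

import torch
import torch.nn as nn

class PrototypingAutoencoder(nn.Module):
    # Minimal sketch; input_dim/latent_dim are assumptions, num_subjects
    # matches the 40 enrolled participants mentioned in the abstract.
    def __init__(self, input_dim=64, latent_dim=16, num_subjects=40):
        super().__init__()
        # Encoder: maps a flattened gait-cycle pressure vector to a latent code.
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )
        # Decoder: reconstructs the input from the latent code, which
        # regularizes the latent space during training.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )
        # One learnable prototype per enrolled subject.
        self.prototypes = nn.Parameter(torch.randn(num_subjects, latent_dim))

    def forward(self, x):
        z = self.encoder(x)
        x_hat = self.decoder(z)
        # Squared Euclidean distance from each sample to each prototype.
        dists = torch.cdist(z, self.prototypes) ** 2
        return z, x_hat, dists

def predict_open_set(dists, threshold=4.0):
    # Nearest prototype's identity, or -1 ("unknown") when the nearest
    # prototype is farther than the (hypothetical) threshold.
    min_dist, idx = dists.min(dim=1)
    idx[min_dist > threshold] = -1
    return idx

Anchoring decisions to prototypes rather than to per-class decision boundaries fitted on few samples is one plausible reading of why such a design would be less sensitive to hyper-parameter changes.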
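For the explainability module, the following is a minimal sketch of the epsilon rule of layer-wise relevance propagation (LRP) for a stack of linear + ReLU layers; the network shape is illustrative and not taken from the paper. Sensitivity analysis (SA), by contrast, would score each input dimension by the squared gradient of the output with respect to that input.

import numpy as np

def lrp_epsilon(weights, biases, x, eps=1e-6):
    # Propagate relevance from the output back to the input features
    # of a ReLU network with per-layer parameters `weights`/`biases`.
    # Returns one relevance score per input dimension.
    activations = [x]
    for W, b in zip(weights, biases):
        x = np.maximum(0.0, W @ x + b)   # forward pass, storing activations
        activations.append(x)

    # Start from the output activations as the initial relevance.
    relevance = activations[-1]
    for W, b, a in zip(reversed(weights), reversed(biases),
                       reversed(activations[:-1])):
        z = W @ a + b + eps              # stabilized pre-activations
        s = relevance / z                # relevance per unit of activation
        relevance = a * (W.T @ s)        # redistribute to the layer below
    return relevance

Applied to the pressure inputs, such a backward pass yields a per-sensor, per-time-step relevance map, which is the kind of "which part of the input matters" analysis the proposed module is described as providing.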