Open Access
Mobile sensor based human activity recognition: distinguishing of challenging activities by applying long short-term memory deep learning modified by residual network concept
Author(s) -
Seyed Vahab Shojaedini,
Mohamad Javad Beirami
Publication year - 2020
Publication title -
Biomedical Engineering Letters
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.709
H-Index - 26
eISSN - 2093-985X
pISSN - 2093-9868
DOI - 10.1007/s13534-020-00160-x
Subject(s) - artificial intelligence , activity recognition , computer science , convolutional neural network , deep learning , smartwatch , residual , artificial neural network , machine learning , pattern recognition (psychology) , wearable computer , embedded system , algorithm
Automated recognition of daily human tasks is an emerging approach for continuous health monitoring of elderly people. Mobile devices (i.e. smartphones and smartwatches) are now equipped with a variety of sensors, so activity classification algorithms can be implemented as mobile software and serve as a useful, low-cost, and non-invasive diagnostic modality. The aim of this article is to introduce a new deep learning structure for recognizing challenging (i.e. similar) human activities from signals recorded by sensors mounted on mobile devices. In the proposed structure, the residual network concept is incorporated as a new substructure inside the main architecture. This component addresses the problem of accuracy saturation in convolutional neural networks: its skip connections allow information to jump over some layers, which reduces the vanishing-gradient effect. As a result, the proposed structure increases the classification accuracy for several activities. The performance of the proposed method is evaluated on real-life recorded signals and compared with existing techniques in two different scenarios. The proposed structure is applied to two well-known human activity datasets prepared at Fordham University. The first dataset contains signals recorded during six different activities: walking, jogging, upstairs, downstairs, sitting, and standing. The second dataset contains walking, jogging, stairs, sitting, standing, eating soup, eating sandwich, and eating chips. In the first scenario, the performance of the proposed structure is compared with other deep learning schemes. The results show that, on the first dataset, the proposed method improves the recognition rate by at least 5% over alternatives from its own family in distinguishing challenging activities (i.e. downstairs and upstairs). For the second dataset, similar improvements are obtained for some challenging activities (i.e. eating sandwich and eating chips). This advantage reaches at least 28% when the proposed method's ability to recognize downstairs and upstairs on the first dataset is compared with non-family methods. The increased recognition rate for challenging activities (i.e. downstairs and upstairs, eating sandwich and eating chips), together with acceptable performance on the other, non-challenging activities, demonstrates the method's effectiveness for mobile sensor-based health monitoring systems.
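As a rough illustration of the residual idea described in the abstract, the sketch below (in PyTorch) wraps an LSTM layer with an identity skip connection and stacks several such blocks before a classification head. The class names, hidden size, number of blocks, and input dimensions are illustrative assumptions for a tri-axial accelerometer window, not the authors' exact configuration.

import torch
import torch.nn as nn

class ResidualLSTMBlock(nn.Module):
    """One LSTM layer with an identity skip connection (residual-network concept)."""
    def __init__(self, width: int):
        super().__init__()
        self.lstm = nn.LSTM(width, width, batch_first=True)

    def forward(self, x):
        out, _ = self.lstm(x)   # (batch, time, width)
        return out + x          # skip connection eases gradient flow through deep stacks

class ResidualLSTMClassifier(nn.Module):
    """Stack of residual LSTM blocks followed by a classification head."""
    def __init__(self, num_axes: int = 3, hidden: int = 64,
                 num_blocks: int = 2, num_classes: int = 6):
        super().__init__()
        self.proj = nn.Linear(num_axes, hidden)  # lift raw sensor axes to the hidden width
        self.blocks = nn.Sequential(*[ResidualLSTMBlock(hidden) for _ in range(num_blocks)])
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        x = self.proj(x)                # (batch, time, hidden)
        x = self.blocks(x)
        return self.head(x[:, -1, :])   # classify from the last time step

# Example: a batch of 8 windows, 200 time steps each, 3 accelerometer axes, 6 activity classes
if __name__ == "__main__":
    model = ResidualLSTMClassifier()
    logits = model(torch.randn(8, 200, 3))
    print(logits.shape)  # torch.Size([8, 6])

The key design choice is that the skip connection lets each block learn only a residual correction to its input, which is what mitigates the vanishing-gradient and accuracy-saturation effects the abstract refers to.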