Open Access
Performance Characterization of Deep Learning Models for Breathing-based Authentication on Resource-Constrained Devices
Author(s) -
Jagmohan Chauhan,
Jathushan Rajasegaran,
Suranga Seneviratne,
Archan Misra,
Aruna Seneviratne,
Youngki Lee
Publication year - 2018
Publication title -
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.199
H-Index - 5
ISSN - 2474-9567
DOI - 10.1145/3287036
Subject(s) - computer science, smartwatch, machine learning, artificial intelligence, wearable computer, inference, embedded system
Providing secure access to smart devices such as smartphones, wearables and various other IoT devices is becoming increasingly important, especially as these devices store a range of sensitive personal information. Breathing acoustics-based authentication offers a highly usable, and potentially secure, secondary authentication mechanism for such access. However, executing sophisticated machine learning pipelines for such authentication on these devices remains an open problem, given their limited storage, memory and computational power. To investigate this challenge, we compare the performance of an end-to-end system for both user identification and user verification tasks based on breathing acoustics on three types of smart devices (smartphone, smartwatch and Raspberry Pi), using both shallow classifiers (SVM, GMM, logistic regression) and deep learning classifiers (LSTM, MLP). Via detailed analysis, we conclude that LSTM models for acoustic classification are the smallest in size, have the lowest inference time and are more accurate than all the other classifiers compared. An uncompressed LSTM model achieves an average F-score of 80%-94% while requiring only 50-180 KB of storage, depending on the breathing gesture. Inference completes within approximately 7-10 ms on a smartphone and 18-66 ms on a smartwatch, making LSTM models suitable for resource-constrained devices. Further memory and computational savings can be achieved using model compression methods such as weight quantization and fully connected layer factorization: in particular, a combination of quantization and factorization achieves a 25%-55% reduction in LSTM model size with almost no loss in performance. We also compare performance on GPUs and show that a GPU can reduce the inference time of LSTM models by roughly a factor of three. These results provide a practical way to deploy breathing-based biometrics, and more broadly LSTM-based classifiers, in future ubiquitous computing applications.
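As a concrete illustration of the kind of model the abstract describes, the sketch below (Python/Keras, not the authors' code) builds a small LSTM classifier for user identification over per-frame acoustic features. The input shape (MFCC-style frames), layer sizes and ten-user setup are assumptions for illustration only.

from tensorflow.keras import layers, models

NUM_USERS = 10       # hypothetical enrollment pool size
TIME_STEPS = 100     # assumed number of acoustic frames per breathing gesture
NUM_FEATURES = 13    # assumed MFCC coefficients per frame

model = models.Sequential([
    layers.Input(shape=(TIME_STEPS, NUM_FEATURES)),
    layers.LSTM(64),                                # narrow recurrent layer keeps the model small
    layers.Dense(32, activation="relu"),            # fully connected layer (a factorization target)
    layers.Dense(NUM_USERS, activation="softmax"),  # user-identification output
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()   # parameter count indicates the on-device storage footprint

A model of roughly this shape, stored as 32-bit floats, has on the order of 20,000 parameters (about 90 KB), which is within the 50-180 KB range the abstract reports; this is why the recurrent layer is kept narrow.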

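The two compression steps named in the abstract, weight quantization and fully connected layer factorization, can be illustrated generically. The NumPy sketch below applies post-training 8-bit quantization to a weight matrix and a low-rank (SVD-based) factorization of a dense layer; the matrix sizes and rank are arbitrary choices, not the paper's pipeline.

import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 32)).astype(np.float32)   # stand-in dense-layer weights

# Weight quantization: map float32 weights to uint8 plus a scale and offset.
w_min, w_max = W.min(), W.max()
scale = (w_max - w_min) / 255.0
W_q = np.round((W - w_min) / scale).astype(np.uint8)   # 4x smaller storage
W_deq = W_q.astype(np.float32) * scale + w_min         # dequantized for inference

# Fully connected layer factorization: approximate W by U @ V with rank r,
# replacing one dense layer with two thinner ones.
r = 8
U_full, s, Vt = np.linalg.svd(W, full_matrices=False)
U = U_full[:, :r] * s[:r]    # shape (64, r)
V = Vt[:r, :]                # shape (r, 32)

print("quantization max error:", float(np.abs(W - W_deq).max()))
print("factorization rel. error:", float(np.linalg.norm(W - U @ V) / np.linalg.norm(W)))
print("params:", W.size, "->", U.size + V.size)   # 2048 -> 768 weights

Quantization trades precision for a roughly 4x storage cut, while the benefit of factorization depends on how small a rank the layer tolerates; the abstract reports that combining the two shrinks the LSTM models by 25%-55% with almost no loss in performance.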