Open Access
AlertNet: Deep convolutional-recurrent neural network model for driving alertness detection
Author(s) -
P. C. Nissimagoudar,
Ashis Kumar Nandi,
Aakanksha Patil,
H. M. Gireesha
Publication year - 2021
Publication title -
International Journal of Power Electronics and Drive Systems / International Journal of Electrical and Computer Engineering
Language(s) - English
Resource type - Journals
eISSN - 2722-2578
pISSN - 2722-256X
DOI - 10.11591/ijece.v11i4.pp3529-3538
Subject(s) - computer science , alertness , convolutional neural network , encoder , deep learning , artificial intelligence , sequence (biology) , pattern recognition (psychology) , artificial neural network , recurrent neural network , speech recognition , medicine , biology , pharmacology , genetics , operating system
Drowsy driving is one of the major problems that has led to many road accidents. Electroencephalography (EEG) is one of the most reliable sources for detecting sleep onset while driving, as it directly captures biological signals. The present work focuses on detecting a driver's alertness using a deep neural network architecture built from ResNets and an encoder-decoder-based sequence-to-sequence model with an attention decoder. The skip connections in the ResNets allow the network to be trained deeper with a reduced loss function and training error. The model is built to reduce the complex computations required for feature extraction. The ResNets also help retain features from previous layers and do not require separate filters for frequency- and time-invariant features. The features output by the ResNets are fed into the encoder-decoder-based sequence-to-sequence model, built using bidirectional long short-term memory (Bi-LSTM) networks. The sequence-to-sequence model learns the complex features of the signal and analyzes past and future states simultaneously to classify drowsy/sleep stage-1 and alert stages. In addition, to overcome the unequal class distribution (class imbalance) present in the datasets, the proposed loss functions help achieve comparable error for the majority and minority classes during training for each sleep stage. The model provides overall accuracies of 87.92% and 87.05%, macro F1-scores of 78.06% and 79.66%, and Cohen's kappa scores of 0.78 and 0.79 on the Sleep-EDF 2013 and 2018 datasets, respectively.
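As a rough illustration of the pipeline the abstract describes, the sketch below lays out, in PyTorch, a 1-D residual convolutional front end over EEG epochs feeding a bidirectional LSTM encoder with an additive attention step and a two-class (alert vs. drowsy/stage-1) head, plus a class-weighted cross-entropy as one common way to counter class imbalance. This is not the authors' released implementation; all layer sizes, names (e.g. `AlertNetSketch`, `ResidualBlock1D`), and the class weights are assumptions chosen for readability.

```python
# Minimal sketch, assuming raw single-channel EEG epochs as input.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResidualBlock1D(nn.Module):
    """Conv1d -> BN -> ReLU -> Conv1d -> BN with a skip connection."""
    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + x)  # skip connection retains earlier-layer features


class AlertNetSketch(nn.Module):
    def __init__(self, in_channels: int = 1, hidden: int = 64, num_classes: int = 2):
        super().__init__()
        self.stem = nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3)
        self.res_blocks = nn.Sequential(ResidualBlock1D(hidden), ResidualBlock1D(hidden))
        self.pool = nn.MaxPool1d(4)
        # Bidirectional LSTM encoder over the convolutional feature sequence
        self.encoder = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # Additive attention scores over the encoder states
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                                   # x: (batch, channels, time)
        feats = self.pool(self.res_blocks(self.stem(x)))    # (batch, hidden, T')
        feats = feats.transpose(1, 2)                       # (batch, T', hidden)
        enc_out, _ = self.encoder(feats)                    # (batch, T', 2*hidden)
        scores = torch.softmax(self.attn(enc_out), dim=1)   # attention weights over time
        context = (scores * enc_out).sum(dim=1)             # attended summary vector
        return self.classifier(context)                     # class logits


if __name__ == "__main__":
    model = AlertNetSketch()
    eeg = torch.randn(8, 1, 3000)                # e.g. 30-s epochs sampled at 100 Hz (assumed)
    logits = model(eeg)
    weights = torch.tensor([1.0, 4.0])           # assumed up-weighting of the minority class
    loss = F.cross_entropy(logits, torch.randint(0, 2, (8,)), weight=weights)
    print(logits.shape, loss.item())
```

The attention step here is a single-vector additive pooling rather than a full attention decoder, and the weighted cross-entropy is a stand-in for the paper's proposed balanced loss; both are named substitutions used only to make the sketch self-contained.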
