
Dimensionally improved residual neural network to detect driver distraction in real time
Author(s) -
M. Balamurugan,
R. Kalaiarasi
Publication year - 2021
Publication title -
Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1964/4/042037
Subject(s) - distraction , distracted driving , computer science , convolutional neural network , residual neural network , grasp , visibility , artificial neural network , artificial intelligence , computer security , simulation , physics , neuroscience , optics , programming language , biology
There are many causes of road accidents, and most of them stem from human error. Among these, distracted driving is the leading one. Drivers become distracted by many factors, such as texting on a phone or looking outside the car. Governments can curb road accidents by enforcing traffic regulations, and they also seek assistance from technology to reduce crashes caused by driver distraction. Considerable research is under way to reduce the chance of a driver becoming distracted by monitoring the driver's physical activity while operating the vehicle. Automatic detection of driver distraction can support the development of an alert system that observes the driver's activities, delivering better results and helping to avoid car accidents. Deep learning is a leading technology in the automotive industry for detecting traffic signs and pedestrians. The proposed model, ResNeXt-101, is derived from its parent model, the Residual Network (ResNet), and is used to train and test the system. The "Next" in ResNeXt refers to an additional dimension of ResNet called "cardinality". In this paper, with the help of a Convolutional Neural Network (CNN), the system automatically detects and distinguishes the driver's distracted moments from proper driving gestures in 2D images. The test results show that the model outperforms existing driver-distraction detection algorithms, achieving 97.6% accuracy in classifying live driver posture.
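As a rough illustration of the approach the abstract describes, the sketch below fine-tunes a ResNeXt-101 backbone (32 parallel paths per bottleneck block, i.e. cardinality 32) for driver-posture classification. The number of posture classes, the dataset directory layout, and all hyperparameters are assumptions for illustration only; they are not taken from the paper.

```python
# Minimal sketch, assuming PyTorch/torchvision, a folder of driver-posture
# images arranged one class per sub-directory, and 10 posture classes
# (all of these are assumptions, not details given in the abstract).
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

NUM_CLASSES = 10   # assumed number of driver-posture classes
BATCH_SIZE = 32

# ResNeXt-101: a ResNet whose bottleneck blocks are split into 32 parallel
# paths ("cardinality"), the extra dimension the abstract refers to.
model = models.resnext101_32x8d(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # new classifier head

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset path; replace with the actual driver-image directory.
train_set = datasets.ImageFolder("data/driver_postures/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one training pass shown for brevity
    images, labels = images.to(device), labels.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

At inference time, frames from an in-cabin camera would be passed through the same preprocessing and the trained network, with the predicted class used to trigger a distraction alert.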