Open Access
Automatic Traffic State Recognition Based on Video Features Extracted by an Autoencoder
Author(s) -
Xiaoyu Cai,
Qiongli Jing,
Bo Peng,
Yuanyuan Zhang,
Yuting Wang,
Ju Tang
Publication year - 2022
Publication title -
Mathematical Problems in Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.262
H-Index - 62
eISSN - 1026-7077
pISSN - 1024-123X
DOI - 10.1155/2022/2850111
Subject(s) - autoencoder , artificial intelligence , artificial neural network , support vector machine , classifier (uml) , machine learning , cluster analysis , computer science , algorithm , pattern recognition (psychology)
Video surveillance has become an important means of urban traffic monitoring and control. However, because video scenes are complex and diverse, extracting traffic data from raw video is a difficult task, and the corresponding algorithms have high complexity and computational cost. To reduce algorithm complexity and subsequent computation cost, this study proposes an autoencoder model that effectively reduces video dimensionality by optimizing its structural parameters, so that several traffic recognition models can perform image processing on the dimension-reduced videos. First, an optimal autoencoder model A* with five hidden layers was constructed. It was then combined with a linear classifier, a support vector machine, a deep neural network, a DNN linear classification method, and the k-means clustering method, yielding five traffic state recognition models: A*-Linear, A*-SVM, A*-DNN, A*-DNN_Linear, and A*-k-means. Training and test results show that the accuracy and recall of A*-Linear, A*-SVM, A*-DNN, and A*-DNN_Linear were 94.5%–97.1%, with F1 scores of 94.4%–97.1%; the accuracy, recall, and F1 score of A*-k-means were all approximately 95%. This suggests that combining the autoencoder A* with common classification or clustering methods achieves good recognition performance. The five proposed models were also compared with four CNN-based models (AlexNet, LeNet, GoogLeNet, and VGG16): the five proposed models achieved F1 scores of 94.4%–97.1%, while the four CNN-based models achieved F1 scores of 16.7%–94%, indicating that the proposed lightweight design outperforms more complex CNN-based models in traffic state recognition.
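The pipeline the abstract describes — compress each frame with an autoencoder, then hand the low-dimensional codes to an ordinary classifier or to k-means — can be sketched as follows. This is a minimal NumPy illustration, not the paper's A* model: it uses a single hidden layer instead of five, random data in place of video frames, and hypothetical dimensions (64-d inputs compressed to 8-d codes).

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for flattened video frames: 200 samples, 64 "pixels" each.
X = rng.random((200, 64))

def relu(z):
    return np.maximum(z, 0.0)

# Single-hidden-layer autoencoder, 64 -> 8 -> 64. (The paper's A* uses
# five hidden layers; one layer keeps the sketch short.)
d_in, d_code = X.shape[1], 8
W1 = rng.normal(0.0, 0.1, (d_in, d_code)); b1 = np.zeros(d_code)
W2 = rng.normal(0.0, 0.1, (d_code, d_in)); b2 = np.zeros(d_in)

lr = 0.05
losses = []
for _ in range(300):
    H = relu(X @ W1 + b1)              # encoder: compressed representation
    X_hat = H @ W2 + b2                # decoder: linear reconstruction
    err = X_hat - X
    losses.append(float((err ** 2).mean()))
    # Gradient descent on the mean-squared reconstruction error.
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (H > 0)        # backprop through the ReLU
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Dimension-reduced features: each 64-d input becomes an 8-d code that a
# downstream classifier (linear, SVM, DNN) or k-means can consume.
codes = relu(X @ W1 + b1)
print(codes.shape)
```

Downstream, `codes` plays the role of the dimension-reduced video: fitting, say, an SVM or running k-means on `codes` instead of on the raw frames is what gives the A*-SVM and A*-k-means analogues, at a fraction of the input dimensionality.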
