Open Access
Pixel‐Level Recognition of Pavement Distresses Based on U‐Net
Author(s) -
Deru Li,
Zhongdong Duan,
Xiaoyang Hu,
Dongchang Zhang
Publication year - 2021
Publication title -
Advances in Materials Science and Engineering
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.356
H-Index - 42
eISSN - 1687-8442
pISSN - 1687-8434
DOI - 10.1155/2021/5586615
Subject(s) - materials science, pixel, pattern recognition (psychology), artificial intelligence, computer science
This study develops and tests an automatic pixel-level image recognition model to reduce the amount of manual labor required to collect data for road maintenance. Firstly, images of six kinds of pavement distresses, namely, transverse cracks, longitudinal cracks, alligator cracks, block cracks, potholes, and patches, are collected from four asphalt highways in three provinces in China to build a labeled pixel-level dataset containing 10,097 images. Secondly, the U-Net model, one of the most advanced deep neural networks for image segmentation, is combined with the ResNet neural network as the basic classification network to recognize distressed areas in the images. Data augmentation, batch normalization, momentum, transfer learning, and discriminative learning rates are used to train the model. Thirdly, the trained models are validated on the test dataset, and the experimental results show the following: if the types of pavement distresses are not distinguished, the pixel accuracy (PA) values of the recognition models using ResNet-34 and ResNet-50 as basic classification networks are 97.336% and 95.772%, respectively, on the validation set. When the types of distresses are distinguished, the PA values of the models using the two classification networks are 66.103% and 44.953%, respectively. For the model using ResNet-34, the category pixel accuracy (CPA) and intersection over union (IoU) for areas with no distress are 99.276% and 99.059%, respectively. For distressed areas in the images, the CPA and IoU of the model are highest for the identification of patches, at 82.774% and 73.778%, and lowest for alligator cracks, at 14.077% and 12.581%, respectively.
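The metrics reported in the abstract (PA, CPA, and IoU) are all derivable from a per-pixel confusion matrix between the predicted and ground-truth label maps. The sketch below shows one common way to compute them with NumPy; it is an illustration of the standard metric definitions, not the paper's own evaluation code, and it assumes CPA means per-class recall (diagonal over ground-truth row sum), which is the usual convention.

```python
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """Compute pixel accuracy (PA), per-class pixel accuracy (CPA),
    and per-class intersection over union (IoU) from integer label maps.

    pred, gt : arrays of the same shape with values in [0, num_classes).
    Returns (pa, cpa, iou) where cpa and iou are length-num_classes arrays.
    """
    pred = np.asarray(pred).ravel()
    gt = np.asarray(gt).ravel()
    # Confusion matrix: rows = ground truth class, columns = predicted class.
    cm = np.bincount(gt * num_classes + pred,
                     minlength=num_classes ** 2).reshape(num_classes, num_classes)
    diag = np.diag(cm).astype(float)
    pa = diag.sum() / cm.sum()               # correct pixels / all pixels
    cpa = diag / cm.sum(axis=1)              # per-class recall ("CPA" here)
    iou = diag / (cm.sum(axis=1) + cm.sum(axis=0) - diag)  # TP / (TP+FP+FN)
    return pa, cpa, iou
```

For example, with a 2x2 image where one pixel of class 1 is mislabeled as class 0, PA is 0.75 and the IoU of class 1 is 2/3.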
