
Disaster Detection on the Fly: Optimized Transformers for UAVs
Author(s) -
Branislava Jankovic,
Sabina Jangirova,
Waseem Ullah,
Latif U. Khan,
Mohsen Guizani
Publication year - 2025
Publication title -
IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 1.246
H-Index - 88
eISSN - 2151-1535
pISSN - 1939-1404
DOI - 10.1109/JSTARS.2025.3596681
Subject(s) - Geoscience; Signal Processing and Analysis; Power, Energy and Industry Applications
Disaster management and recovery are among the most crucial tasks in today's world. However, in many countries, disaster management still relies on human intervention, which can present a significant challenge, particularly in remote or inaccessible regions where timely intervention is required. To mitigate these problems, advances in photogrammetry and remote sensing such as unmanned aerial vehicles (UAVs), which incorporate embedded platforms and optical sensors, need to be employed. The proposed approach allows onboard aerial image processing and avoids issues related to network reliability, data security, and response time. However, problems caused by the limited hardware resources of UAVs must be addressed. Many existing real-time disaster detection solutions rely on lightweight convolutional neural networks (CNNs) specifically tailored to classify a limited set of disaster scenarios. However, such frameworks often struggle in real-world situations, where the diversity of disaster cases and the limited capacity of low-complexity models hinder accurate differentiation. This work presents a UAV-powered edge computing framework for disaster detection, utilizing our proposed transformer-based deep learning model optimized for real-time aerial image classification. The optimization was done using post-training quantization techniques. Moreover, we employ Explainable AI (XAI) techniques to enhance interpretability and visually highlight the regions the model focuses on when making predictions. To address the limited number of disaster cases in existing benchmark datasets and ensure real-world adoption of our model, we create a novel dataset, DisasterEye, containing various disaster scenes captured by UAVs and ground-level cameras. Our practical results reveal the efficacy of the proposed solution on both traditional and resource-limited devices. We reduce inference time and memory usage without compromising the model's accuracy across all benchmark datasets.
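The core idea behind the post-training quantization mentioned above is mapping trained float32 weights onto a low-precision integer grid with a scale and zero point, without retraining. The sketch below is a minimal illustration of that affine int8 scheme in NumPy; the function names are assumptions for illustration, not the authors' actual TensorRT pipeline:

```python
import numpy as np

def quantize_int8(w):
    """Affine post-training quantization of a float tensor to int8.

    Returns (q, scale, zero_point) such that w ~= (q - zero_point) * scale.
    Illustrative only; production PTQ also calibrates activation ranges.
    """
    w_min = min(float(w.min()), 0.0)  # make sure 0.0 is representable
    w_max = max(float(w.max()), 0.0)
    scale = (w_max - w_min) / 255.0          # 256 int8 levels span the range
    zero_point = int(round(-w_min / scale)) - 128
    q = np.clip(np.round(w / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover an approximate float32 tensor from its int8 encoding."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.05, size=(4, 4)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
# int8 storage is 4x smaller than float32, and the reconstruction
# error stays within roughly one quantization step.
assert np.max(np.abs(w - w_hat)) <= 1.5 * s
```

Memory savings come directly from the dtype (1 byte vs. 4 per weight); inference speedups additionally require integer kernels on the target hardware, which is what engines such as TensorRT provide.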
Finally, the effectiveness of the presented system highlights that it can serve as a powerful solution for many real-time remote sensing applications on resource-constrained UAV platforms. The code and DisasterEye dataset are available at: https://github.com/Branislava98/TensorRT.
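For the XAI component, one common way to highlight which image regions a vision transformer attends to is attention rollout (Abnar & Zuidema, 2020), which propagates attention through the layers while accounting for residual connections. Whether this is the exact technique used in the paper is an assumption; the NumPy sketch below is purely illustrative:

```python
import numpy as np

def attention_rollout(attn_layers):
    """Attention rollout: multiply per-layer attention maps to estimate
    each input token's influence on the output representation.

    attn_layers: list of (tokens x tokens) head-averaged, row-stochastic
    attention matrices, ordered from the first layer to the last.
    """
    n = attn_layers[0].shape[0]
    rollout = np.eye(n)
    for a in attn_layers:
        a = a + np.eye(n)                       # model the residual connection
        a = a / a.sum(axis=-1, keepdims=True)   # re-normalize rows to sum to 1
        rollout = a @ rollout                   # compose with earlier layers
    return rollout

# Toy example: 3 layers, 5 tokens (token 0 plays the role of [CLS]).
rng = np.random.default_rng(1)
layers = [rng.random((5, 5)) for _ in range(3)]
layers = [a / a.sum(axis=-1, keepdims=True) for a in layers]  # row-stochastic
r = attention_rollout(layers)
saliency = r[0, 1:]  # [CLS] influence over patch tokens -> heat map over patches
assert np.allclose(r.sum(axis=-1), 1.0)  # each row remains a distribution
```

Reshaping `saliency` back to the patch grid and upsampling it over the input image yields the kind of visual highlight map the abstract describes.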