Open Access
Back and Forward Incremental Learning Through Knowledge Distillation for Object Detection Unmanned Aerial Vehicles
Author(s) -
Qazi Mazhar Ul Haq,
Shahrul Amin,
Nandhagopal Chandrasekaran,
Chen-Hao Liao,
Yi-Jheng Huang
Publication year - 2025
Publication title -
IEEE Access
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 0.587
H-Index - 127
eISSN - 2169-3536
DOI - 10.1109/access.2025.3613768
Subject(s) - aerospace , bioengineering , communication, networking and broadcast technologies , components, circuits, devices and systems , computing and processing , engineered materials, dielectrics and plasmas , engineering profession , fields, waves and electromagnetics , general topics for engineers , geoscience , nuclear engineering , photonics and electrooptics , power, energy and industry applications , robotics and control systems , signal processing and analysis , transportation
Incremental object detection is essential for real-world applications in which models must continuously learn new object categories without forgetting previously acquired knowledge. Conventional training methods, however, often suffer from catastrophic forgetting, degrading performance on earlier tasks as new ones are learned. To address this, we propose a novel framework that integrates an attention-based feature enhancement module (AFEM) into a knowledge distillation-based incremental learning pipeline. The AFEM combines self-attention and cross-attention mechanisms to strengthen feature representations, improving spatial localization and classification across tasks. Our approach employs a teacher-student architecture in which the teacher model preserves prior knowledge while the student model learns new classes through multi-level distillation losses applied to features, region proposals, and final predictions. Experiments on the Pascal VOC dataset demonstrate the effectiveness of our method: it achieves 60.802% AP on the base classes and maintains 56.969% AP after one incremental phase, with a forgetting rate of just 3.833% using Huber loss, while the MSE loss variant yields improved detection accuracy. These results indicate that our method effectively retains performance on old classes while adapting to new ones.
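The abstract does not give implementation details, but the core idea of the distillation objective can be illustrated with a minimal sketch. The sketch below is an assumption about the general shape of such a loss (the function names, the weighting factor `lam`, and the scalar feature representation are all hypothetical, not taken from the paper): the teacher's frozen activations act as soft targets, each distillation term is a Huber penalty on the teacher-student residual, and the student's total objective sums the new-task detection loss with the weighted distillation terms.

```python
import math

def huber(diff, delta=1.0):
    """Huber penalty on one residual: quadratic near zero, linear in the tails."""
    a = abs(diff)
    return 0.5 * a * a if a <= delta else delta * (a - 0.5 * delta)

def feature_distill_loss(teacher_feats, student_feats, delta=1.0):
    """Mean Huber distance between teacher and student feature activations.

    Features are flattened to scalars here for illustration; a real detector
    would compare feature maps, region proposals, and prediction logits.
    """
    assert len(teacher_feats) == len(student_feats)
    n = len(teacher_feats)
    return sum(huber(t - s, delta) for t, s in zip(teacher_feats, student_feats)) / n

def student_total_loss(detection_loss, distill_losses, lam=1.0):
    """Student objective: new-class detection loss plus weighted distillation terms
    (e.g. one term each for features, proposals, and final predictions)."""
    return detection_loss + lam * sum(distill_losses)
```

For example, `feature_distill_loss([1.0, 2.0], [1.2, 4.0])` averages a small quadratic penalty (residual 0.2) with a clipped linear penalty (residual 2.0), giving 0.76. The Huber choice matters for the reported forgetting rate: unlike MSE, it bounds the gradient from outlier residuals, which plausibly trades a little detection accuracy for more stable retention of old-class behavior.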
