Open Access
Visible to infrared transfer learning as a paradigm for accessible real-time object detection and classification in infrared imagery
Author(s) -
Yona Falinie A. Gaus,
Neelanjan Bhowmik,
Brian K. S. Isaac-Medina,
Toby P. Breckon
Publication year - 2020
Publication title -
Durham Research Online (Durham University)
Language(s) - English
Resource type - Conference proceedings
DOI - 10.1117/12.2573968
Subject(s) - computer science, artificial intelligence, convolutional neural network, object detection, computer vision, RGB color model, deep learning, infrared, transfer of learning, contextual image classification, detector, pattern recognition (psychology), object (grammar), image (mathematics), telecommunications, optics, physics
Object detection from infrared-band (thermal) imagery has been a challenging problem for many years. With the advent of deep Convolutional Neural Networks (CNN), the automated detection and classification of objects of interest within the scene has become popularised due to the notable increases in performance over earlier approaches in the field. These advances in CNN approaches are underpinned by the availability of large-scale, annotated image datasets that are typically available for visible-band (RGB) imagery. By contrast, there is a lack of prior work that specifically targets object detection in infrared-band images, owing to limited dataset availability, which in turn stems from the more limited availability of, and access to, infrared-band imagery and associated hardware in general. A viable solution to this problem is transfer learning, which can enable the use of such CNN techniques within infrared-band (thermal) imagery by leveraging prior training on visible-band (RGB) image datasets, subsequently requiring only a secondary, smaller volume of infrared-band (thermal) imagery for CNN model fine-tuning. This is performed by adopting an existing pre-trained CNN, pre-optimized for generalized object recognition in visible-band (RGB) imagery, and subsequently fine-tuning the resultant model weights towards our specific infrared-band (thermal) imagery domain task. We use two state-of-the-art object detectors, Single Shot Detector (SSD) with a VGG-16 CNN backbone pre-trained on the ImageNet dataset, and You-Only-Look-Once (YOLOv3) with a DarkNet-53 CNN backbone pre-trained on the MS-COCO dataset, to illustrate our visible-band to infrared-band transfer learning paradigm.
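The fine-tuning pattern the abstract describes (adopt a visible-band pre-trained CNN, then adapt its weights to the thermal domain) can be sketched as follows. This is a minimal illustration only: the toy backbone stands in for VGG-16/DarkNet-53, and the four thermal classes, layer sizes, and channel replication of single-channel thermal frames are all hypothetical choices, not details from the paper.

```python
import torch
import torch.nn as nn

# Toy stand-in for a backbone pre-trained on visible-band (RGB) imagery,
# e.g. VGG-16 on ImageNet (hypothetical, simplified architecture).
class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(8, 1000)  # original RGB-domain classes

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = TinyBackbone()  # in practice: load visible-band pre-trained weights

# 1) Freeze the visible-band feature extractor so its learned filters
#    are reused rather than retrained from scratch.
for p in model.features.parameters():
    p.requires_grad = False

# 2) Replace the prediction head for the thermal-domain task
#    (4 classes is a hypothetical count).
model.classifier = nn.Linear(8, 4)

# 3) Fine-tune only the trainable parameters on the smaller volume
#    of infrared-band (thermal) imagery.
optimizer = torch.optim.SGD(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)
x = torch.randn(2, 3, 32, 32)  # thermal frames replicated to 3 channels
y = torch.tensor([0, 1])
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
```

The same freeze-then-replace-head recipe applies when the model is a full detector such as SSD or YOLOv3, except that the replaced head predicts boxes and class scores rather than a single label.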
Exemplar results reported over the FLIR Thermal and MultispectralFIR benchmark datasets show significant improvements in mAP detection performance via the use of transfer learning from initial visible-band CNN training, reaching {0.804 (MultispectralFIR), 0.710 (FLIR)} for SSD and {0.520 (MultispectralFIR), 0.308 (FLIR)} for YOLOv3.
