Open Access
Convolutional Neural Network Training for RGBN Camera Color Restoration Using Generated Image Pairs
Author(s) -
Zhenghao Han,
Li Li,
Weiqi Jin,
Xia Wang,
Gangcheng Jiao,
Hailin Wang
Publication year - 2020
Publication title -
IEEE Photonics Journal
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.725
H-Index - 73
eISSN - 1943-0655
pISSN - 1943-0647
DOI - 10.1109/jphot.2020.3025088
Subject(s) - engineered materials, dielectrics and plasmas; photonics and electrooptics
RGBN cameras, which capture visible light and near-infrared (NIR) light simultaneously, produce better color image quality in low-light-level conditions. However, they also introduce additional color bias caused by the mixing of visible and NIR information. The color correction matrix model widely used in current commercial color digital cameras cannot handle the complicated mapping between the biased color and the ground-truth color. Convolutional neural networks (CNNs) are good at fitting such complicated relationships, but they require a large quantity of training image pairs covering different scenes. To achieve satisfactory training results, large amounts of data must be captured manually, even when data augmentation techniques are applied, which demands significant time and effort. Hence, a method is proposed for generating training pairs consistent with the parameters of a target RGBN camera, based on an open-access RGB-NIR dataset. The proposed method is verified by training an RGBN camera color restoration CNN model with the generated data. The results show that a CNN model trained with the generated data achieves satisfactory RGBN color restoration performance across different RGBN sensors.
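The abstract contrasts the linear color correction matrix (CCM) baseline with a CNN. The sketch below, using entirely synthetic data and assumed NIR-leakage coefficients (none of the numbers come from the paper), illustrates why a fitted 3x3 matrix reduces but cannot eliminate the color bias when the visible/NIR mixing is even mildly nonlinear:

```python
import numpy as np

# Hypothetical illustration of the linear CCM baseline the abstract refers to.
# All data and coefficients below are assumptions for demonstration only.

rng = np.random.default_rng(0)

# Synthetic "ground truth" RGB pixels and an NIR channel, values in [0, 1].
rgb_true = rng.random((1000, 3))
nir = rng.random((1000, 1))

# Toy RGBN sensor: each color channel picks up a fraction of the NIR energy
# (the color bias described in the abstract), plus a mild nonlinearity
# that a purely linear correction cannot model.
leak = np.array([0.30, 0.20, 0.40])   # assumed per-channel NIR leakage
rgb_biased = (rgb_true + nir * leak) ** 1.1

# Fit a 3x3 CCM by least squares so that rgb_biased @ M approximates rgb_true.
M, *_ = np.linalg.lstsq(rgb_biased, rgb_true, rcond=None)
rgb_corrected = rgb_biased @ M

mse_before = np.mean((rgb_biased - rgb_true) ** 2)
mse_after = np.mean((rgb_corrected - rgb_true) ** 2)
print(f"MSE before CCM: {mse_before:.4f}, after CCM: {mse_after:.4f}")
```

The fitted matrix lowers the error, but a nonzero residual remains because the NIR contribution and the nonlinearity cannot be inverted by a single linear map per pixel, which is the gap the paper's CNN is trained to close.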