Remote Sensing Image Scene Classification Based on Fusion Method
Author(s) -
Liancheng Yin,
Peiyi Yang,
Keming Mao,
Qian Liu
Publication year - 2021
Publication title -
Journal of Sensors
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.399
H-Index - 43
eISSN - 1687-7268
pISSN - 1687-725X
DOI - 10.1155/2021/6659831
Subject(s) - fusion , artificial intelligence , computer vision , image (mathematics) , image fusion , computer science , remote sensing , pattern recognition (psychology) , geography , philosophy , linguistics
Remote sensing image scene classification is an active research area because of its wide range of applications. More recently, fusion-based methods have attracted much attention, since they are considered a useful way to represent scene features. This paper explores fusion-based methods for remote sensing image scene classification from another viewpoint. First, fusion is categorized into front side fusion mode, middle side fusion mode, and back side fusion mode. For each fusion mode, the related methods are introduced and described. Then, the classification performance of the single side fusion modes and hybrid side fusion modes (combinations of single side fusions) is evaluated. Comprehensive experiments on the UC Merced, WHU-RS19, and NWPU-RESISC45 datasets provide comparisons among the various fusion methods. The performance of the individual modes and the interactions among different fusion modes are also discussed. It is concluded that (1) fusion is an effective way to improve model performance, (2) back side fusion is the most powerful fusion mode, and (3) the method with random crop + multiple backbone + average achieves the best performance.
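As a minimal sketch (not the authors' code), the best-performing combination reported in the abstract, random crop at the input (front side), multiple backbones, and averaging of class scores (back side fusion), can be illustrated as follows. The backbone choices (ResNet-50, DenseNet-121), crop size, and class count (45, as in NWPU-RESISC45) are assumptions for illustration only.

```python
# Hypothetical sketch of "random crop + multiple backbone + average" fusion.
import torch
import torch.nn as nn
from torchvision import models, transforms

NUM_CLASSES = 45  # assumed, e.g., NWPU-RESISC45

# Front side: random-crop augmentation applied to the input image.
train_transform = transforms.Compose([
    transforms.Resize(256),
    transforms.RandomCrop(224),
    transforms.ToTensor(),
])

class BackSideFusion(nn.Module):
    """Averages the class probabilities of several independent backbones."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        resnet = models.resnet50(weights=None)
        resnet.fc = nn.Linear(resnet.fc.in_features, num_classes)
        densenet = models.densenet121(weights=None)
        densenet.classifier = nn.Linear(densenet.classifier.in_features, num_classes)
        self.backbones = nn.ModuleList([resnet, densenet])

    def forward(self, x):
        # Back side fusion: average the per-backbone softmax outputs.
        probs = [torch.softmax(m(x), dim=1) for m in self.backbones]
        return torch.stack(probs, dim=0).mean(dim=0)

if __name__ == "__main__":
    model = BackSideFusion().eval()
    dummy = torch.randn(2, 3, 224, 224)  # a batch of two cropped RGB scenes
    with torch.no_grad():
        print(model(dummy).shape)  # torch.Size([2, 45])
```

Middle side fusion would instead concatenate or merge intermediate feature maps before a shared classifier; the sketch above only shows the output-level (back side) variant named in the conclusion.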