FloodNet: A Multi-level Multi-modal Fusion Network with Semantic Consistency Constraint Strategy for Flood Segmentation
Author(s) -
Qifeng Ge,
Teng Zhao,
Yihang Lin,
Zhenzhen Yan,
Chen Xu,
Xiaoping Du,
Xiangtao Fan
Publication year - 2025
Publication title -
IEEE Geoscience and Remote Sensing Letters
Language(s) - English
Resource type - Magazines
SCImago Journal Rank - 1.372
H-Index - 114
eISSN - 1558-0571
pISSN - 1545-598X
DOI - 10.1109/lgrs.2025.3610188
Subject(s) - geoscience; power, energy and industry applications; communication, networking and broadcast technologies; components, circuits, devices and systems; computing and processing; signal processing and analysis
Flood segmentation using synthetic aperture radar (SAR) images is essential for determining the extent of inundated areas and informing subsequent management recommendations. However, existing networks that segment floods from single-modality SAR images face inherent challenges, including interference from terrain shadows and water-like surfaces, which degrades segmentation performance. In this study, we introduce a multi-level multi-modal fusion network (FloodNet), in which an Adaptive Gated Feature Fusion Module (AGFFM) is designed to integrate multi-modal features from Sentinel-1 SAR images, a Digital Elevation Model (DEM), and the Joint Research Centre Global Surface Water (JRC-GSW) dataset. Furthermore, we propose a semantic consistency constraint strategy to alleviate the blurring of water edges during prediction. Experiments on two publicly available flood datasets, C2S-Flood and ETCI-Flood, demonstrate the competitive performance of the proposed FloodNet compared with other state-of-the-art single- and multi-modal networks. The code is available at https://github.com/SuperPixelPioneer/Flood-Net.
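The abstract describes an Adaptive Gated Feature Fusion Module (AGFFM) that integrates SAR, DEM, and JRC-GSW features. The paper's exact architecture is not reproduced here, but a common form of gated fusion computes a sigmoid gate from the concatenated modality features and blends the two streams with it. The sketch below illustrates that generic pattern only; the function names, shapes, and weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(feat_sar, feat_aux, w, b):
    """Blend two modality feature vectors with a learned gate.

    gate  = sigmoid(W @ [feat_sar; feat_aux] + b)
    fused = gate * feat_sar + (1 - gate) * feat_aux
    """
    concat = np.concatenate([feat_sar, feat_aux])
    gate = sigmoid(w @ concat + b)          # per-channel gate in (0, 1)
    return gate * feat_sar + (1.0 - gate) * feat_aux

# Toy example: 4-dim features from a SAR branch and an auxiliary
# (DEM / surface-water) branch; weights are random stand-ins.
rng = np.random.default_rng(0)
f_sar = rng.standard_normal(4)
f_aux = rng.standard_normal(4)
w = rng.standard_normal((4, 8)) * 0.1       # gate weights over the concatenation
b = np.zeros(4)
fused = gated_fusion(f_sar, f_aux, w, b)
print(fused.shape)  # (4,)
```

Because the gate lies in (0, 1), each fused channel is a convex combination of the corresponding SAR and auxiliary channels, so neither modality is discarded outright; the gate learns how much to trust each one per channel.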