Open Access
Semantic Segmentation of Remote Sensing Images Based on Multi-Model Fusion
Author(s) - Miao Zhang, Likun Liu, Lei Ren, Yang Tang, Yong Chen, Jun Li
Publication year - 2020
Publication title - Journal of Physics: Conference Series
Language(s) - English
Resource type - Journals
SCImago Journal Rank - 0.21
H-Index - 85
eISSN - 1742-6596
pISSN - 1742-6588
DOI - 10.1088/1742-6596/1575/1/012119
Subject(s) - computer science , conditional random field , segmentation , artificial intelligence , image stitching , convolutional neural network , pattern recognition (psychology) , image segmentation , feature (linguistics) , field (mathematics) , scale space segmentation , scale (ratio) , computer vision , geography , mathematics , cartography , philosophy , linguistics , pure mathematics
Convolutional neural networks have opened a new field in research on the semantic segmentation of remote sensing images. However, different network structures perform differently when segmenting different land-cover types. In this paper, the original data set is expanded, and an improved U-Net model is trained separately for each type of ground-feature target. The predictions are then optimized with a conditional random field (CRF) and an image-overlapping strategy. Finally, the two binary classification models obtained from training are fused to produce a multi-class semantic segmentation image, which addresses the pronounced edge-stitching problem of large-scale remote sensing images. The experimental results show that this method achieves higher accuracy in segmenting large-scale remote sensing images.
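The sketch below illustrates, in a minimal form, the two ideas the abstract describes: running each per-class binary model over a large image with overlapping tiles (averaging predictions in the overlap regions to suppress stitching seams) and then fusing the binary probability maps into a single multi-class label map. It is not the authors' code; the model callables, tile size, overlap, and background threshold are hypothetical placeholders, and the CRF refinement step is omitted.

```python
import numpy as np


def tile_starts(length, tile, step):
    """Start offsets whose tiles of size `tile` cover the interval [0, length)."""
    starts = list(range(0, max(length - tile, 0) + 1, step))
    if starts[-1] + tile < length:
        starts.append(length - tile)  # make sure the far edge is covered
    return starts


def predict_with_overlap(image, model, tile=512, overlap=128):
    """Average one binary model's probabilities over overlapping tiles so that
    seams at tile borders (the edge-stitching problem) are smoothed out.
    `model` is assumed to map an image patch to a per-pixel foreground probability."""
    h, w = image.shape[:2]
    prob = np.zeros((h, w), dtype=np.float32)
    weight = np.zeros((h, w), dtype=np.float32)
    step = tile - overlap
    for y in tile_starts(h, tile, step):
        for x in tile_starts(w, tile, step):
            patch = image[y:y + tile, x:x + tile]
            p = model(patch)                      # probability map, same H x W as patch
            prob[y:y + tile, x:x + tile] += p
            weight[y:y + tile, x:x + tile] += 1.0
    return prob / np.maximum(weight, 1e-6)       # average where tiles overlap


def fuse_binary_models(image, models, bg_threshold=0.5):
    """Fuse per-class binary segmentations into one multi-class label map by
    assigning each pixel the class whose model is most confident; pixels where
    no model is confident fall back to background (label 0)."""
    probs = np.stack([predict_with_overlap(image, m) for m in models], axis=0)
    labels = probs.argmax(axis=0) + 1             # classes numbered 1..K
    labels[probs.max(axis=0) < bg_threshold] = 0
    return labels
```

In the paper's pipeline, a CRF refinement would typically be applied to each binary probability map (or to the fused result) before the final label map is produced; the averaging in the overlap regions is what removes the visible seams when large images are cut into tiles for inference.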
