Top-down pyramid fusion network for high-resolution remote sensing semantic segmentation

Gu, Y., Hao, J., Chen, B., & Deng, H. (2021). Top-down pyramid fusion network for high-resolution remote sensing semantic segmentation. Remote Sensing, 13(20), 4159. https://doi.org/10.3390/rs13204159

cited authors

  • Gu, Y.; Hao, J.; Chen, B.; Deng, H.

abstract

  • In recent years, high-resolution remote sensing semantic segmentation based on data fusion has gradually become a research focus in land classification, an indispensable task for smart cities. However, existing feature fusion methods with bottom-up structures achieve only limited fusion results, while the various auxiliary fusion modules proposed as alternatives significantly increase model complexity and make training intolerably expensive. In this paper, we propose a new lightweight model called the top-down pyramid fusion network (TdPFNet), consisting of a multi-source feature extractor, a top-down pyramid fusion module, and a decoder. It deeply fuses features from different sources in a top-down structure, using high-level semantic knowledge to guide the fusion of low-level texture information. Digital surface model (DSM) data and OpenStreetMap (OSM) data are used as auxiliary inputs to the Potsdam dataset to evaluate the proposed model. Experimental results show that the proposed network not only notably improves segmentation accuracy but also reduces the complexity of the multi-source semantic segmentation model.
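
A minimal PyTorch sketch of the top-down guidance idea the abstract describes: a coarse, semantically rich feature map is upsampled and used to gate the fusion of a finer, texture-rich map. The abstract gives only the overall structure (multi-source extractor, top-down pyramid fusion module, decoder), so the class name, channel sizes, and sigmoid gating below are illustrative assumptions, not the published TdPFNet layer definitions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopDownFusion(nn.Module):
    """One hypothetical top-down pyramid fusion step: high-level
    semantics guide the fusion of low-level texture features."""

    def __init__(self, high_ch, low_ch, out_ch):
        super().__init__()
        self.reduce_high = nn.Conv2d(high_ch, out_ch, kernel_size=1)
        self.reduce_low = nn.Conv2d(low_ch, out_ch, kernel_size=1)
        # Gate derived from high-level semantics modulates low-level texture
        # (an assumed mechanism, chosen only to illustrate "guidance").
        self.gate = nn.Sequential(nn.Conv2d(out_ch, out_ch, 1), nn.Sigmoid())
        self.fuse = nn.Sequential(
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, high, low):
        # Upsample coarse semantics to the resolution of the fine features.
        high = self.reduce_high(high)
        high = F.interpolate(high, size=low.shape[-2:], mode="bilinear",
                             align_corners=False)
        low = self.reduce_low(low)
        # High-level knowledge selects which low-level texture passes through.
        fused = high + self.gate(high) * low
        return self.fuse(fused)


if __name__ == "__main__":
    # Hypothetical pyramid levels, e.g. from RGB and DSM/OSM branches
    # already combined per level by a multi-source feature extractor.
    high = torch.randn(1, 256, 16, 16)   # coarse, semantic
    low = torch.randn(1, 64, 64, 64)     # fine, textural
    block = TopDownFusion(high_ch=256, low_ch=64, out_ch=128)
    print(block(high, low).shape)        # torch.Size([1, 128, 64, 64])
```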

publication date

  • October 1, 2021

Digital Object Identifier (DOI)

  • 10.3390/rs13204159

volume

  • 13

issue

  • 20