
CMR-net: A cross modality reconstruction network for multi-modality remote sensing classification.


Abstract

In recent years, the classification and identification of surface materials on Earth have emerged as fundamental yet challenging research topics in the fields of geoscience and remote sensing (RS). Despite the notable advances achieved by deep learning in RS image classification, the classification of multi-modality RS data still poses certain challenges. In this work, a deep learning architecture based on a convolutional neural network (CNN), called CMR-Net, is proposed for the classification of multi-modality RS image data. The network introduces a cross modality reconstruction (CMR) module in the multi-modality feature fusion stage: a plug-and-play module for cross-modal fusion reconstruction that compactly integrates features extracted from multiple modalities of RS data, enabling effective information exchange and feature integration. To validate the proposed scheme, extensive experiments were conducted on two multi-modality RS datasets: the Houston2013 dataset, consisting of hyperspectral (HS) and light detection and ranging (LiDAR) data, and the Berlin dataset, comprising HS and synthetic aperture radar (SAR) data. The results demonstrate the effectiveness and superiority of the proposed CMR-Net compared to several state-of-the-art methods for multi-modality RS data classification.

Copyright: © 2024 Wang et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
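To make the described architecture more concrete, the sketch below shows a minimal two-branch CNN with a cross-modality reconstruction style fusion block in PyTorch. This is only an illustration of the general idea in the abstract, not the authors' implementation: the class names (CMRFusion, TwoBranchNet), layer sizes, and the specific reconstruction design (1x1 convolutions mapping each modality's features into the other's space before concatenation) are assumptions made for the example.

```python
# Hypothetical sketch of a two-branch CNN with a cross-modality
# reconstruction (CMR) style fusion block, loosely following the idea
# described in the abstract. All design choices here are illustrative
# assumptions, not the CMR-Net implementation from the paper.
import torch
import torch.nn as nn


class CMRFusion(nn.Module):
    """Fuse two modality feature maps by cross-reconstructing each
    modality from the other and mixing the concatenated result."""

    def __init__(self, channels):
        super().__init__()
        # 1x1 convs that map one modality's features into the other's space
        self.hs_from_x = nn.Conv2d(channels, channels, kernel_size=1)
        self.x_from_hs = nn.Conv2d(channels, channels, kernel_size=1)
        self.mix = nn.Sequential(
            nn.Conv2d(4 * channels, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, f_hs, f_x):
        rec_hs = self.hs_from_x(f_x)  # reconstruct HS features from LiDAR/SAR
        rec_x = self.x_from_hs(f_hs)  # reconstruct LiDAR/SAR features from HS
        fused = torch.cat([f_hs, f_x, rec_hs, rec_x], dim=1)
        return self.mix(fused)


class TwoBranchNet(nn.Module):
    """Minimal two-branch CNN classifier with CMR-style fusion."""

    def __init__(self, hs_bands, x_bands, num_classes, channels=64):
        super().__init__()

        def branch(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, channels, kernel_size=3, padding=1),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )

        self.hs_branch = branch(hs_bands)  # hyperspectral branch
        self.x_branch = branch(x_bands)    # LiDAR or SAR branch
        self.fusion = CMRFusion(channels)
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, num_classes),
        )

    def forward(self, hs_patch, x_patch):
        fused = self.fusion(self.hs_branch(hs_patch), self.x_branch(x_patch))
        return self.head(fused)


# Usage example: HS patches with 144 bands and single-band LiDAR patches,
# roughly matching the Houston2013 setting (144 HS bands, 15 classes).
model = TwoBranchNet(hs_bands=144, x_bands=1, num_classes=15)
logits = model(torch.randn(2, 144, 11, 11), torch.randn(2, 1, 11, 11))
print(logits.shape)  # torch.Size([2, 15])
```

Because the fusion block only assumes two feature maps with matching channel counts, it can be dropped into the fusion stage of other two-branch backbones, which is in line with the plug-and-play intent described in the abstract.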
