ISPRS Journal of Photogrammetry and Remote Sensing

Simplified object-based deep neural network for very high resolution remote sensing image classification


Abstract

For the object-based classification of high resolution remote sensing images, many researchers expect that introducing deep learning methods can improve the classification accuracy. Unfortunately, the input shape for deep neural networks (DNNs) is usually rectangular, whereas the shapes of the segments output by segmentation methods usually conform to the corresponding ground objects; this inconsistency can lead to confusion among different types of heterogeneous content when a DNN processes a segment. Currently, most object-based methods utilizing convolutional neural networks (CNNs) adopt additional models to overcome the detrimental influence of such heterogeneous content; however, these heterogeneity suppression mechanisms introduce additional complexity into the whole classification process, and the resulting methods are usually unstable and difficult to use in real applications. To address the above problems, this paper proposes a simplified object-based deep neural network (SO-DNN) for very high resolution remote sensing image classification. In SO-DNN, a new segment category label inference method is introduced, in which a deep semantic segmentation neural network (DSSNN) is used as the classification model instead of a traditional CNN. Since the DSSNN obtains a category label for each pixel in the input image patch, different types of content are not mixed together; therefore, SO-DNN does not require an additional heterogeneity suppression mechanism. Moreover, SO-DNN includes a sample information optimization method that allows the DSSNN model to be trained using only pixel-based training samples. Because only a single model is used and only a pixel-based training set is needed, the whole classification process of SO-DNN is relatively simple and direct.
In experiments, we use very high resolution aerial images of Vaihingen and Potsdam from the ISPRS WG II/4 dataset as test data and compare SO-DNN with six traditional methods: O-MLP, O+CNN, OHSF-CNN, 2-CNN, JDL and U-Net. Compared with the best-performing of these traditional methods, the classification accuracy of SO-DNN is improved by up to 7.71% and 10.78% for single images from Vaihingen and Potsdam, respectively, and the average classification accuracy is improved by 2.46% and 2.91% for the Vaihingen and Potsdam images, respectively. SO-DNN relies on fewer models and easier-to-obtain samples than traditional methods, and its stable performance makes SO-DNN more valuable for practical applications.
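The abstract describes inferring a category label for each segment from the per-pixel predictions of the semantic segmentation network. The paper's exact inference rule is not given here; a common and minimal sketch is majority voting over the pixel labels inside each segment, assuming a segment-id map and a per-pixel class map as inputs (function and variable names below are illustrative, not from the paper):

```python
import numpy as np

def infer_segment_labels(segment_map, pixel_labels):
    """Assign each segment the majority class of its pixels.

    segment_map: 2-D int array; each pixel holds its segment id.
    pixel_labels: 2-D int array of per-pixel class predictions
                  (e.g. output of a semantic segmentation network).
    Returns {segment_id: majority_class}.
    """
    labels = {}
    for seg_id in np.unique(segment_map):
        # collect predicted classes of the pixels in this segment
        classes, counts = np.unique(pixel_labels[segment_map == seg_id],
                                    return_counts=True)
        labels[int(seg_id)] = int(classes[np.argmax(counts)])
    return labels

# toy example: a 4x4 image with two segments
seg = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]])
pred = np.array([[2, 2, 3, 3],
                 [2, 2, 3, 5],
                 [2, 7, 3, 3],
                 [2, 2, 3, 3]])
print(infer_segment_labels(seg, pred))  # {0: 2, 1: 3}
```

Because voting happens after pixel-wise prediction, a few mislabeled pixels inside a segment (the 7 and 5 above) do not change the segment's label, which is one way per-pixel inference can avoid the heterogeneous-content confusion described in the abstract.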

Bibliographic record

  • Source
  • Author affiliations

    Changchun Inst Technol Sch Comp Technol & Engn Changchun 130012 Peoples R China|Jilin Prov Key Lab Changbai Hist Culture & VR Rec Changchun 130012 Peoples R China;

    Univ Lancaster Lancaster Environm Ctr Lancaster LA1 4YQ England|UK Ctr Ecol & Hydrol Lancaster LA1 4AP England;

    Jilin Prov Key Lab Changbai Hist Culture & VR Rec Changchun 130012 Peoples R China;

    Changchun Inst Technol Sch Comp Technol & Engn Changchun 130012 Peoples R China|Jilin Prov Key Lab Changbai Hist Culture & VR Rec Changchun 130012 Peoples R China;

  • Indexing information
  • Original format: PDF
  • Language: eng
  • CLC classification
  • Keywords

    CNN; Very high resolution; Semantic segmentation; Classification; OBIA;


