Journal: Remote Sensing

Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery


Abstract

Learning efficient image representations is at the core of the scene classification task for remote sensing imagery. The existing methods for this task, based either on feature coding with low-level hand-engineered features or on unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, deep convolutional neural networks (CNNs), hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep CNNs for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from successfully pre-trained CNNs to HRRS scene classification. We propose two scenarios for generating image features by extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained under the two proposed scenarios, even with a simple linear classifier, achieve remarkable performance and improve on the state of the art by a significant margin. The results reveal that features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.
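The encoding step of the second scenario, turning dense convolutional descriptors into one global image feature, can be illustrated with hard-assignment bag-of-visual-words coding over a k-means codebook. The descriptor dimensionality, codebook size, and random data below are placeholders; the paper also considers other commonly used coding approaches.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for dense CNN descriptors: N local features from the last
# convolutional layer (reduced to 64-d here to keep the sketch small).
descriptors = rng.normal(size=(500, 64))

def kmeans(X, k, iters=20):
    """Minimal Lloyd's k-means to build the visual codebook."""
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each descriptor to its nearest center
        d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(0)
    return centers

def bovw_encode(X, centers):
    """Hard-assignment bag-of-visual-words histogram: count how many
    local descriptors fall into each codeword, then L1-normalize."""
    d = ((X[:, None, :] - centers[None]) ** 2).sum(-1)
    hist = np.bincount(d.argmin(1), minlength=len(centers)).astype(float)
    return hist / hist.sum()

codebook = kmeans(descriptors, k=32)
feature = bovw_encode(descriptors, codebook)  # 32-d global image feature
```

The resulting histogram plays the role of the global image feature that is then passed to a simple linear classifier.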
