Journal of Applied Remote Sensing

Large patch convolutional neural networks for the scene classification of high spatial resolution imagery


Abstract

The increase in the spatial resolution of remote-sensing sensors helps to capture the abundant details related to the semantics of surface objects. However, it is difficult for the popular object-oriented classification approaches to acquire higher-level semantics from high spatial resolution remote-sensing (HSR-RS) images, a difficulty often referred to as the "semantic gap." Instead of requiring sophisticated hand-designed operators, convolutional neural networks (CNNs), a typical deep learning method, can automatically discover intrinsic feature descriptors from a large number of input images to bridge the semantic gap. Because the data volume of the available HSR-RS scene datasets is far smaller than that of natural scene datasets, there have been few reports of CNN approaches for HSR-RS image scene classification. We propose a practical CNN architecture for HSR-RS scene classification, named the large patch convolutional neural network (LPCNN). Large patch sampling is used to generate hundreds of possible scene patches for feature learning, and a global average pooling layer replaces the fully connected network as the classifier, which greatly reduces the total number of parameters. The experiments confirm that the proposed LPCNN can learn effective local features to form an effective representation for different land-use scenes, and can achieve performance comparable to the state of the art on public HSR-RS scene datasets. (C) 2016 Society of Photo-Optical Instrumentation Engineers (SPIE)
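The abstract names only the two key design choices, large-patch sampling and a global-average-pooling (GAP) classifier, so the following is a minimal sketch of how such a network could be assembled, assuming PyTorch as the framework; the kernel sizes, channel widths, patch size, patch count, and the 21-class example are illustrative assumptions, not details taken from the paper.

import torch
import torch.nn as nn

class LPCNN(nn.Module):
    # Hypothetical sketch: a small convolutional feature extractor followed by a
    # 1x1 convolution that produces one score map per class, then global average
    # pooling as the classifier instead of fully connected layers.
    def __init__(self, num_classes: int, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=5, padding=2), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
        )
        self.class_maps = nn.Conv2d(128, num_classes, kernel_size=1)
        self.gap = nn.AdaptiveAvgPool2d(1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        x = self.class_maps(x)
        x = self.gap(x)      # (N, num_classes, 1, 1)
        return x.flatten(1)  # (N, num_classes) logits

def sample_large_patches(scene: torch.Tensor, patch_size: int, n_patches: int) -> torch.Tensor:
    # Randomly crop n_patches large patches from one scene image (C, H, W);
    # each patch becomes a training sample carrying the scene's label.
    _, h, w = scene.shape
    ys = torch.randint(0, h - patch_size + 1, (n_patches,))
    xs = torch.randint(0, w - patch_size + 1, (n_patches,))
    return torch.stack([scene[:, y:y + patch_size, x:x + patch_size]
                        for y, x in zip(ys.tolist(), xs.tolist())])

if __name__ == "__main__":
    scene = torch.rand(3, 256, 256)                # one HSR-RS scene image (assumed size)
    patches = sample_large_patches(scene, 128, 8)  # 8 large patches per scene
    model = LPCNN(num_classes=21)                  # e.g., a 21-class land-use dataset
    print(model(patches).shape)                    # torch.Size([8, 21])

Replacing the fully connected classifier with a 1x1 convolution plus global average pooling ties the number of classifier parameters to the number of classes rather than to the flattened feature-map size, which is the parameter reduction the abstract refers to.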
