Conference on Global Oceans: Singapore – U.S. Gulf Coast

Automatic in-situ instance and semantic segmentation of planktonic organisms using Mask R-CNN



Abstract

Planktonic organisms form the principal food source for consumers at higher trophic levels in the food chain. Studying their community dispersion is vital to our understanding of the planet's ecological systems. With recent technological advancements in imaging systems, capturing images of plankton in-situ has become possible by equipping mobile underwater robots with sophisticated camera systems and the computing power to run deep machine learning approaches. Efforts to apply deep learning methods to plankton imaging systems have been limited to classification, while detection and segmentation have been left to traditional methods in this context. A variety of publicly available datasets are suited to planktonic species classification. These datasets consist of images of individual specimens and therefore do not represent the actual environment, which is usually given by a scene representation better suited to object localization, detection, and semantic segmentation. In this paper we propose a novel custom dataset [1] built from planktonic images captured in-situ in a lab environment, suited for supervised learning of object detection and instance segmentation. The data is tested in experiments using Mask R-CNN, a state-of-the-art deep learning visual recognition method. The experimental results show the potential of this method and establish a baseline analysis module for real-time in-situ image processing. We also compare how the method performs when trained on images automatically processed and annotated by existing segmentation frameworks that use traditional methods. This comparison illustrates the importance of using proper data and the potential for success when it is provided.¹

¹ All results, code and metrics used for the experiments are available at: https://github.com/AILARON/Segmentation.
