Ecological informatics: an international journal on ecoinformatics and computational ecology

Fish detection and species classification in underwater environments using deep learning with temporal information


Abstract

It is important for marine scientists and conservationists to regularly estimate the relative abundance of fish species in their habitats and to monitor changes in their populations. As an alternative to laborious manual sampling, various automatic, computer-based solutions for sampling fish in underwater videos have been presented. However, an optimal solution for automatic fish detection and species classification does not exist. This is mainly due to the challenges posed by underwater videos: environmental variations in luminosity, fish camouflage, dynamic backgrounds, water murkiness, low resolution, shape deformations of swimming fish, and subtle variations between some fish species. To overcome these challenges, we propose a hybrid solution that combines optical flow and Gaussian mixture models with the YOLO deep neural network, yielding a unified approach to detect and classify fish in unconstrained underwater videos. YOLO-based object detection systems, as originally designed, capture only static and clearly visible fish instances. We eliminate this limitation of YOLO, enabling it to detect freely moving fish camouflaged against the background, using temporal information acquired via Gaussian mixture models and optical flow. We evaluated the proposed system on two underwater video datasets: the LifeCLEF 2015 benchmark from the Fish4Knowledge repository and a dataset collected by The University of Western Australia (UWA). We achieve fish detection F-scores of 95.47% and 91.2%, and fish species classification accuracies of 91.64% and 79.8%, on the two datasets respectively. To our knowledge, these are the best results reported on these datasets, which shows the effectiveness of our proposed approach.
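The abstract describes fusing temporal cues (GMM background subtraction and optical flow) with an appearance-based YOLO detector. The following is a minimal illustrative sketch of that general idea, not the authors' implementation: it assumes OpenCV (MOG2 as the GMM background subtractor, Farneback dense optical flow), a placeholder yolo_detect() standing in for whatever YOLO model is used, and hypothetical thresholds and video path.

```python
# Sketch: combine GMM foreground and optical-flow motion masks with YOLO
# detections, keeping boxes supported by either appearance or motion evidence.
# Assumptions: OpenCV, a stand-in yolo_detect(), illustrative thresholds.
import cv2
import numpy as np

gmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                         detectShadows=True)

def yolo_detect(frame):
    """Placeholder for YOLO inference; should return (x, y, w, h, score, label)."""
    return []  # replace with an actual YOLO forward pass

def motion_mask(prev_gray, gray, frame):
    """Fuse GMM foreground and optical-flow magnitude into one binary mask."""
    fg = gmm.apply(frame)                               # 0 / 127 (shadow) / 255
    fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)[1]
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    moving = (mag > 1.0).astype(np.uint8) * 255         # hypothetical threshold
    return cv2.bitwise_or(fg, moving)

def motion_overlap(box, mask):
    """Fraction of a detection box covered by the motion mask."""
    x, y, w, h = box
    roi = mask[y:y + h, x:x + w]
    return float(np.count_nonzero(roi)) / max(roi.size, 1)

cap = cv2.VideoCapture("underwater.mp4")                # hypothetical input video
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    mask = motion_mask(prev_gray, gray, frame)
    # Keep detections that are confident on appearance alone, or weaker
    # detections that overlap the temporal (motion) evidence.
    detections = [d for d in yolo_detect(frame)
                  if d[4] > 0.5 or motion_overlap(d[:4], mask) > 0.3]
    prev_gray = gray
```

The fusion rule shown here (confidence OR motion overlap) is only one plausible way to inject temporal information into the detector's decisions; the thresholds and the exact combination strategy are assumptions for illustration.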
