
Fully convolutional network for rice seedling and weed image segmentation at the seedling stage in paddy fields



Abstract

To reduce production costs and the environmental pollution caused by the overapplication of herbicide in paddy fields, the location information of rice seedlings and weeds must be detected for site-specific weed management (SSWM). With the development of deep learning, a semantic segmentation method using SegNet, which is based on the fully convolutional network (FCN), was proposed. In this paper, RGB color images of seedling-stage rice were captured in paddy fields, and ground truth (GT) images were obtained by manually labeling the pixels of the RGB images into three separate categories, namely, rice seedlings, background, and weeds. Class weight coefficients were calculated to address the imbalance in the number of pixels among the classification categories. The GT images and RGB images were used for training and testing: 80% of the samples were randomly selected as the training dataset, and the remaining 20% were used as the test dataset. The proposed method was compared with classical semantic segmentation models, namely, the FCN and U-Net models. The average accuracy rate of the SegNet method was 92.7%, whereas the average accuracy rates of the FCN and U-Net methods were 89.5% and 70.8%, respectively. The proposed SegNet method achieved higher classification accuracy and could effectively classify the pixels of rice seedlings, background, and weeds in paddy field images and acquire the positions of their regions.
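The abstract does not give the exact class-weighting formula or the data-splitting procedure. The sketch below is a hypothetical Python illustration of the two preprocessing steps it mentions: computing per-class weight coefficients from the GT label masks (median frequency balancing, a scheme commonly paired with SegNet, is assumed here) and randomly splitting the samples 80%/20% into training and test sets. The class encoding (0 = background, 1 = rice seedling, 2 = weed) and all function names are assumptions for illustration only.

    import numpy as np

    NUM_CLASSES = 3  # assumed encoding: 0 = background, 1 = rice seedling, 2 = weed

    def median_frequency_weights(gt_masks):
        # Median frequency balancing (assumed scheme):
        #   freq(c)   = pixels of class c / total pixels of images containing class c
        #   weight(c) = median(freq) / freq(c), so rarer classes get larger weights
        pixel_counts = np.zeros(NUM_CLASSES, dtype=np.float64)
        image_pixels = np.zeros(NUM_CLASSES, dtype=np.float64)
        for mask in gt_masks:
            for c in range(NUM_CLASSES):
                n = np.count_nonzero(mask == c)
                if n > 0:
                    pixel_counts[c] += n
                    image_pixels[c] += mask.size
        freq = pixel_counts / image_pixels
        return np.median(freq) / freq

    def random_split(samples, train_fraction=0.8, seed=42):
        # Randomly assign 80% of the samples to training and 20% to testing.
        rng = np.random.default_rng(seed)
        order = rng.permutation(len(samples))
        cut = int(round(train_fraction * len(samples)))
        return [samples[i] for i in order[:cut]], [samples[i] for i in order[cut:]]

    if __name__ == "__main__":
        # Synthetic stand-ins for the manually labeled GT masks of the paddy-field images.
        gt_masks = [np.random.randint(0, NUM_CLASSES, size=(512, 512)) for _ in range(20)]
        print("class weights:", median_frequency_weights(gt_masks))
        train_set, test_set = random_split(gt_masks)
        print("train/test sizes:", len(train_set), len(test_set))

In training, such weights would typically be passed to the per-pixel cross-entropy loss so that the sparse weed and seedling pixels are not dominated by the background class; the paper's actual weighting formula may differ from the one assumed above.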
