
Recognition Method for Dead Ducks in Cages Based on Improved Mask R-CNN

  • Abstract: Manual inspection for dead ducks in large-scale stacked-cage meat duck houses is inefficient, labor-intensive, and costly. Taking stacked-cage meat ducks as the research object, a deep learning-based method for recognizing dead ducks in cages was proposed. To collect the dataset, an autonomous inspection device suited to meat duck houses with vertically stacked cages was first designed. To cope with the severe occlusion caused by the cage wire mesh, machine vision was used to repair the mesh regions and OpenCV was used to enhance the images. A dead duck recognition model was built on Mask R-CNN and optimized with a Swin Transformer, which compensates for Mask R-CNN's limited ability to integrate global information. The recognition accuracies of SOLO v2, Mask R-CNN, and Mask R-CNN+Swin Transformer for dead ducks inside the cages were compared and analyzed. Experimental results show that, at a mean average precision (mAP) of 90%, the Mask R-CNN+Swin Transformer model achieved an overall recognition accuracy of 95.8% for dead ducks in the cages, outperforming other mainstream object detection algorithms when deployed on the autonomous inspection device.
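The abstract states only that the cage mesh was repaired with machine vision and the images enhanced with OpenCV, without naming the specific operations. The sketch below is a minimal illustration of one plausible pre-processing pipeline: the thin wire structures are masked by adaptive thresholding and filled by inpainting, and contrast is then enhanced with CLAHE on the lightness channel. The thresholding parameters, the choice of inpainting/CLAHE, and the file names are assumptions, not the authors' published pipeline.

```python
import cv2
import numpy as np


def repair_and_enhance(image_path: str) -> np.ndarray:
    """Illustrative pre-processing: inpaint the cage mesh, then enhance contrast."""
    img = cv2.imread(image_path)  # BGR frame captured by the inspection device

    # --- Mesh "repair" (assumed approach): mask thin wire structures and inpaint them ---
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Marks pixels darker than their local neighborhood (swap the threshold type for bright wires).
    mesh_mask = cv2.adaptiveThreshold(
        gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 15, 5)
    # Thicken the mask slightly so the wires are fully covered before inpainting.
    mesh_mask = cv2.dilate(mesh_mask, np.ones((3, 3), np.uint8), iterations=1)
    repaired = cv2.inpaint(img, mesh_mask, 3, cv2.INPAINT_TELEA)

    # --- Enhancement (assumed approach): CLAHE on the L channel of LAB space ---
    lab = cv2.cvtColor(repaired, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(enhanced, cv2.COLOR_LAB2BGR)


if __name__ == "__main__":
    out = repair_and_enhance("duck_cage_sample.jpg")  # hypothetical file name
    cv2.imwrite("duck_cage_sample_enhanced.jpg", out)
```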

     
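One common way to realize "Mask R-CNN with a Swin Transformer backbone" is through MMDetection's configuration system: the ResNet-50 backbone of the standard Mask R-CNN config is replaced by a Swin-Tiny backbone and the FPN input channels are adjusted to the Swin stage widths. The sketch below follows that published pattern; the base-config paths, hyperparameters, and class labels are illustrative and are not taken from the paper.

```python
# Illustrative MMDetection-style config: Mask R-CNN with a Swin-T backbone.
# Base-config paths follow MMDetection's repository layout and may differ between versions.
_base_ = [
    '../_base_/models/mask_rcnn_r50_fpn.py',
    '../_base_/datasets/coco_instance.py',
    '../_base_/schedules/schedule_1x.py',
    '../_base_/default_runtime.py',
]

model = dict(
    backbone=dict(
        _delete_=True,               # drop the ResNet-50 backbone from the base config
        type='SwinTransformer',      # hierarchical windowed self-attention backbone
        embed_dims=96,               # Swin-Tiny channel width
        depths=[2, 2, 6, 2],         # blocks per stage (Swin-T)
        num_heads=[3, 6, 12, 24],    # attention heads per stage
        window_size=7,
        out_indices=(0, 1, 2, 3),    # feed all four stages to the FPN
        drop_path_rate=0.2),
    neck=dict(in_channels=[96, 192, 384, 768]),  # match the Swin-T stage channels
    roi_head=dict(
        bbox_head=dict(num_classes=2),   # assumed label set, e.g. live duck / dead duck
        mask_head=dict(num_classes=2)))
```

With such a config, training would proceed through MMDetection's standard entry point (e.g. `python tools/train.py <config>.py`); SOLO v2 and the plain Mask R-CNN baseline can be trained on the same dataset for the comparison described in the abstract.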

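The reported comparison is given in terms of mean average precision (mAP). On a COCO-format annotated test set, this metric can be computed for each model's predictions with pycocotools, as sketched below; the annotation and result file paths are placeholders.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Placeholder paths: COCO-format ground truth and one model's instance-segmentation results.
coco_gt = COCO('annotations/ducks_test.json')
coco_dt = coco_gt.loadRes('results/mask_rcnn_swin_segm.json')

evaluator = COCOeval(coco_gt, coco_dt, iouType='segm')  # use 'bbox' for box-level mAP
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints the AP/AR table; repeat for SOLO v2 and plain Mask R-CNN
```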
