
Dead chicken detection in cages using image fusion and improved YOLO11n

  • Abstract: To address the challenges of severe occlusion, large illumination variation, and small-target detection in dead chicken detection for stacked-cage yellow-feather broiler houses, this study proposes a caged dead chicken detection method based on image fusion and an improved YOLO11n model. First, to exploit the complementary strengths of thermal infrared images in temperature-difference perception and visible-light images in morphological representation, SIFT (scale-invariant feature transform) feature-point matching and the LMedS (least median of squares) algorithm were used to register the infrared and visible images, and fused images were constructed with the Laplacian pyramid method, improving detail preservation and structural consistency. Second, to remedy the original YOLO11n model's limited ability to recognize occluded and small targets in complex scenes, a lightweight Dysample dynamic upsampling module was introduced to improve the accuracy of multi-scale feature recovery, and the Focal Loss function was incorporated to mitigate the training bias caused by sample imbalance, improving the model's detection robustness and recall. Comparative experiments on the constructed visible-light, thermal infrared, and fused image datasets show that the fused image dataset significantly outperforms single-modality images in precision, recall, mean average precision (mAP), and other metrics. With fused images as input, the improved YOLO11n-DS-Focal-Loss model achieved a detection precision of 96.8%, a recall of 92.6%, mAP@50 and mAP@50:95 of 94.9% and 68.3%, respectively, and a detection time of only 8.3 ms. Compared with mainstream lightweight models such as YOLOv5n, YOLOv6n, and YOLOv8n, the proposed model performs better in accuracy, efficiency, and stability, and is especially stronger at recognizing targets under high-density and partially occluded conditions. Poultry-house inspection experiments show that a suitably chosen inspection frequency can effectively alleviate missed detections caused by severe occlusion of older chickens. The fused-image construction method and YOLO11n improvement strategy proposed in this study can serve as a reference for automatic dead chicken detection in large-scale caged poultry houses.
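The Laplacian-pyramid fusion step described above can be sketched in a few lines. The following is a minimal NumPy illustration, not the authors' implementation: it covers only the fusion stage (SIFT/LMedS registration is omitted), uses 2×2 average pooling and nearest-neighbour upsampling as stand-ins for the Gaussian filtering used in practice, assumes image dimensions divisible by 2^levels, and all function names and the max-absolute fusion rule on detail bands are illustrative assumptions.

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 average pooling (stand-in for Gaussian pyrDown)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample(img, shape):
    """Nearest-neighbour upsampling back to `shape` (stand-in for pyrUp)."""
    up = img.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shape[0], :shape[1]]

def laplacian_pyramid(img, levels):
    """Decompose `img` into band-pass (Laplacian) levels plus a low-pass residual."""
    pyr, cur = [], img.astype(np.float64)
    for _ in range(levels):
        down = downsample(cur)
        pyr.append(cur - upsample(down, cur.shape))  # detail band at this scale
        cur = down
    pyr.append(cur)  # low-frequency residual
    return pyr

def fuse(vis, ir, levels=3):
    """Fuse a visible and an infrared image: keep the stronger detail coefficient
    on each band, average the low-pass residuals, then collapse the pyramid."""
    pv, pi = laplacian_pyramid(vis, levels), laplacian_pyramid(ir, levels)
    bands = [np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(pv[:-1], pi[:-1])]
    out = (pv[-1] + pi[-1]) / 2  # averaged low-frequency residual
    for band in reversed(bands):
        out = upsample(out, band.shape) + band
    return out
```

Because decomposition and reconstruction are exact inverses here, fusing an image with itself returns the original, which is a convenient sanity check for any pyramid-fusion implementation.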


    Abstract: The yellow-feather broiler is a chicken breed raised specifically for meat production, and dead birds in stacked cages must be detected rapidly and accurately. Existing machine-vision methods cannot fully meet the needs of large-scale production, mainly because of severe target occlusion, frequent illumination variation, and the small size and similar posture of dead chickens in high-density rearing environments. Single visible-light images and conventional deep-learning detectors also offer limited robustness under weak lighting and heavy occlusion, especially when deployed on low-cost inspection robots. In this study, an improved YOLO11n model was proposed to detect dead chickens in caged poultry houses. Infrared and visible images were fused to enhance the robustness, accuracy, and real-time performance of detection in complex cage-rearing environments, making the method particularly suitable for resource-constrained inspection robots. The fusion strategy exploited thermal infrared images for temperature perception and visible-light images for structural and texture representation. Specifically, the scale-invariant feature transform (SIFT) was employed to extract and match feature points between infrared and visible images with markedly different resolutions, and the least median of squares (LMedS) algorithm was applied to eliminate mismatches and ensure registration robustness. Laplacian pyramid fusion was then used to generate fused images with good detail preservation and structural consistency, while keeping the computational complexity low enough for real-time applications. The original YOLO11n lightweight detector was modified to further improve detection under occlusion and small-target conditions. A Dysample dynamic upsampling module was introduced to enhance multi-scale feature recovery, thereby improving the representation of small and partially occluded targets in high-density cage environments. In addition, the Focal Loss function was incorporated into training to alleviate the class imbalance between dead chickens and background samples, reducing training bias and improving the model's robustness and recall. All models were evaluated on three datasets: visible-light, thermal infrared, and fused images. Experimental results demonstrated that detection on fused images significantly outperformed single-modality inputs across multiple evaluation metrics; compared with the visible-light and thermal infrared datasets, the improved model trained on fused images achieved superior precision, recall, and mean average precision. With fused images as input, the improved YOLO11n-DS-Focal-Loss model achieved a detection precision of 96.8%, a recall of 92.6%, and mAP@50 and mAP@50:95 values of 94.9% and 68.3%, respectively, with a single-image inference time of only 8.3 ms, effectively balancing accuracy and real-time performance. Compared with mainstream lightweight detectors, including YOLOv5n, YOLOv6n, YOLOv8n, YOLOv10n, and YOLO12n, the proposed model performed better in accuracy, efficiency, and stability, particularly under dense rearing and partial occlusion. Field inspection experiments in a real caged poultry house further verified its effectiveness: increasing the inspection frequency significantly reduced missed detections caused by complete occlusion of dead chickens, especially for broilers aged 40 days or older with dense feather coverage and larger bodies. As the number of inspection rounds increased, previously occluded dead chickens became visible from different viewpoints, improving the completeness and reliability of detection. In conclusion, infrared-visible image fusion combined with the improved YOLO11n framework provides an effective and practical solution for dead chicken detection in large-scale caged poultry houses. Its high accuracy and real-time performance at low computational cost make it suitable for deployment on low-cost embedded inspection robots, and the findings can serve as a reference for poultry-house inspection in modern poultry farming.
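As a concrete illustration of the Focal Loss term used to counter class imbalance, here is a minimal NumPy sketch of its binary form, FL(p_t) = -α_t (1 - p_t)^γ log(p_t). The defaults α = 0.25 and γ = 2 are the values commonly used with this loss and are assumptions here; the abstract does not state the hyper-parameters the authors chose.

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """Binary focal loss FL(p_t) = -alpha_t * (1 - p_t)**gamma * log(p_t).

    p : predicted probabilities of the positive class (dead chicken), in (0, 1)
    y : ground-truth labels, 0 or 1
    The (1 - p_t)**gamma factor shrinks the loss of well-classified examples,
    so the abundant easy background samples no longer dominate the gradient.
    """
    p = np.clip(p, eps, 1.0 - eps)
    p_t = np.where(y == 1, p, 1.0 - p)              # probability of the true class
    alpha_t = np.where(y == 1, alpha, 1.0 - alpha)  # class-balancing weight
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)
```

With γ = 0 and α_t = 1 this reduces to ordinary cross-entropy; raising γ focuses training on hard examples, such as the partially occluded targets discussed above.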
