
A Lightweight Weed Detection Method for Sugar Beet Fields Based on the YOLO11-RFL Model

Abstract: Precise and efficient weed detection and removal can improve crop growth conditions, thereby increasing crop yield and economic benefit. To address the slow speed and low accuracy of distinguishing sugar beet seedlings from weeds in complex field environments, this study proposes YOLO11-RFL (YOLO11 with Reparameterized Ghost-ELAN, Feature-Fused-Pyramid-Neck and Lightweight GN head), a detection model based on an improved YOLO11n. First, drawing on the GhostNet design, a Reparameterized Ghost-ELAN (RG-ELAN) module replaces the C3k2 module of YOLO11n; reparameterization on the module branches strengthens feature extraction while reducing computational cost. Second, a Rethinking Features-Fused-Pyramid-Neck (RFPN) framework restructures the Neck network, resolving feature misalignment in multi-scale object detection and improving real-time detection performance and efficiency. Third, GroupNorm is applied to the convolutions in the detection head to build a Lightweight Shared Convolutional Separator GN Detection Head (LSCSGND), improving small-object localization and classification for more accurate weed detection. Finally, a teacher-student knowledge distillation framework with a feature-based Channel-Wise Knowledge Distillation (CWD) strategy jointly optimizes the computational efficiency and accuracy of YOLO11-RFL. Experiments on the public LincolnBeet dataset show that, compared with mainstream YOLO models, YOLO11-RFL further improves network performance: its precision and recall exceed those of YOLO11n, reaching 80.7% and 74.9%, and its mAP@0.5 and mAP@0.5:0.95 are 80.7% and 56.9%, respectively. On two additional datasets, PDT and CWC, YOLO11-RFL improves mAP@0.5 over YOLO11n by 0.7 and 0.1 percentage points, respectively, validating the model's robustness across diverse environments. The model has only 2.03M parameters and 5.4 GFLOPs of computation, reductions of 21.3% and 14.3% relative to YOLO11n, indicating good engineering deployment feasibility. To further validate the model's inference efficiency and deployability in practical applications, it was deployed and tested at an experimental field site: under TensorRT-based inference, it reached a real-time detection rate of 107.5 frames/s, confirming excellent real-time inference performance and high computational efficiency in practice. In addition, 296 real-world test samples were collected at random from several representative areas of the experimental site to evaluate detection performance: the baseline YOLO11n achieved an mAP@0.5 of 80.4% on this test set, while YOLO11-RFL raised detection accuracy to 82.1% under identical conditions, confirming the proposed method's advantage in field environments. Visualization of the collected field images further shows that YOLO11-RFL effectively mitigates the accuracy degradation and missed detections caused by environmental complexity. This work provides technical support for the precise identification and efficient management of weeds in sugar beet fields and is of significance for advancing intelligent agricultural production.
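The structural reparameterization applied on the RG-ELAN branches can be illustrated with a minimal sketch of the usual RepVGG-style fusion: at training time a block runs a 3x3 and a 1x1 convolution in parallel, and for inference the 1x1 kernel is folded into the centre of the 3x3 kernel so both branches collapse into one convolution. This is a toy single-channel case with "same" zero padding; function and variable names are illustrative and not taken from the paper's code.

```python
def conv3x3_same(x, k):
    """2D cross-correlation with a 3x3 kernel and zero padding of 1."""
    h, w = len(x), len(x[0])
    padded = ([[0.0] * (w + 2)]
              + [[0.0] + row[:] + [0.0] for row in x]
              + [[0.0] * (w + 2)])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            out[i][j] = sum(padded[i + di][j + dj] * k[di][dj]
                            for di in range(3) for dj in range(3))
    return out

def two_branch(x, k3, k1):
    """Training-time form: a 3x3 branch plus a 1x1 branch
    (for one channel, a 1x1 conv is just a per-pixel scale)."""
    y3 = conv3x3_same(x, k3)
    return [[y3[i][j] + k1 * x[i][j] for j in range(len(x[0]))]
            for i in range(len(x))]

def reparameterize(k3, k1):
    """Inference-time form: fold the 1x1 kernel into the 3x3 kernel's centre,
    so a single convolution reproduces the two-branch output."""
    fused = [row[:] for row in k3]
    fused[1][1] += k1
    return fused
```

Because the fused kernel is mathematically identical to the branch sum, the inference-time model keeps the training-time accuracy while paying for only one convolution, which is where the reported parameter and FLOP reductions come from.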

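The GroupNorm used in the LSCSGND head can likewise be sketched: channels are split into groups and each group is normalized by its own mean and variance, so the statistics do not depend on batch size, which suits the small batches typical of edge deployment. A feature map is represented here as a list of C channels, each an HxW matrix; this is a generic GroupNorm sketch, not the paper's implementation.

```python
import math

def group_norm(feat, num_groups, gamma=None, beta=None, eps=1e-5):
    """Normalize each group of channels by the group's own mean/variance,
    then apply per-channel affine parameters gamma and beta."""
    c = len(feat)
    assert c % num_groups == 0, "channels must divide evenly into groups"
    gamma = gamma or [1.0] * c
    beta = beta or [0.0] * c
    per_group = c // num_groups
    out = []
    for g in range(num_groups):
        chans = feat[g * per_group:(g + 1) * per_group]
        vals = [v for ch in chans for row in ch for v in row]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        scale = 1.0 / math.sqrt(var + eps)
        for k, ch in enumerate(chans):
            idx = g * per_group + k
            out.append([[gamma[idx] * (v - mean) * scale + beta[idx]
                         for v in row] for row in ch])
    return out
```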
     

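The Channel-Wise Knowledge Distillation (CWD) strategy named in the abstract can be sketched in its standard form: each channel of the teacher and student feature maps is turned into a spatial probability distribution via a temperature-scaled softmax, and the loss is the KL divergence between the two, averaged over channels and scaled by the squared temperature. The temperature value and the layers being distilled are assumptions here, not details from the paper.

```python
import math

def _spatial_softmax(channel, t):
    """Softmax over all spatial positions of one channel map, temperature t."""
    flat = [v / t for row in channel for v in row]
    m = max(flat)  # subtract max for numerical stability
    exps = [math.exp(v - m) for v in flat]
    s = sum(exps)
    return [e / s for e in exps]

def cwd_loss(teacher, student, t=4.0):
    """Channel-wise distillation loss: KL(teacher || student) between the
    per-channel spatial distributions, averaged over channels, times t**2."""
    c = len(teacher)
    loss = 0.0
    for tc, sc in zip(teacher, student):
        p = _spatial_softmax(tc, t)
        q = _spatial_softmax(sc, t)
        loss += sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return loss * (t * t) / c
```

During training this term would be added to the detection loss so the lightweight student mimics where the teacher's channel activations concentrate, which is how the distillation recovers accuracy lost to the compressed architecture.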
