
Small object detection method for field crop pests based on improved YOLOv8

  • Abstract: To address the missed detections and low detection accuracy caused by the small size and dense distribution of pests in field environments, a small object detection method for crop pests based on improved YOLOv8 (FCDM-YOLOv8) was proposed. First, the backbone and neck networks were made lightweight to reduce the number of parameters and the computational cost of the model. Second, a context aggregation network (CONTAINER) was introduced into the neck network to improve detection accuracy for small object pests through contextual enhancement. Third, the P5 layer of the backbone network and the large object detection head were removed and a small object detection layer was added, enabling the network to retain more small object features. Then, a dynamic detection head (DyHead) was used to replace the decoupled head of the YOLOv8 network, so that the model focuses more on dense small object regions and extracts more small object features. Finally, Focaler-IoU and MPDIoU were fused as the bounding box loss function to improve the detection of hard small object examples. Experimental results show that, on a self-collected field environment pest dataset, the improved model increased mAP0.5 and mAP0.5~0.95 by 5.1 and 2.8 percentage points, respectively, compared with the baseline model YOLOv8n. In generalization experiments on the public datasets VisDrone2019 and COCO2017-small, the improved model increased mAP0.5 by 7.7 and 2.2 percentage points, respectively, indicating that the model generalizes well. Based on the FCDM-YOLOv8 model, a small object detection system for crop pests was developed that achieves accurate pest detection and visualizes the detection results, demonstrating that this method can provide technical support for intelligent small object detection of crop pests in field environments.
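The fused bounding-box loss mentioned above can be pieced together from the published definitions of MPDIoU (a corner-distance penalty added to IoU) and Focaler-IoU (a linear re-mapping of IoU onto an interval [d, u] that re-weights easy and hard samples). The sketch below is a minimal PyTorch reconstruction under those definitions, not the authors' released code; the box format (x1, y1, x2, y2), the interval bounds d and u, and the function name are assumptions.

```python
import torch


def focaler_mpdiou_loss(pred, target, img_w, img_h, d=0.0, u=0.95, eps=1e-7):
    """Bounding-box loss fusing Focaler-IoU with MPDIoU (illustrative sketch).

    pred, target: (N, 4) tensors of boxes in (x1, y1, x2, y2) format (assumed).
    img_w, img_h: input image size used by the MPDIoU normalisation.
    d, u: Focaler-IoU interval bounds (assumed defaults, tuned per dataset).
    """
    # Plain IoU between matched prediction/target pairs.
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # MPDIoU: subtract the squared distances between the matching top-left
    # and bottom-right corners, normalised by the squared image size.
    norm = img_w ** 2 + img_h ** 2
    d1 = (pred[:, 0] - target[:, 0]) ** 2 + (pred[:, 1] - target[:, 1]) ** 2
    d2 = (pred[:, 2] - target[:, 2]) ** 2 + (pred[:, 3] - target[:, 3]) ** 2
    mpdiou = iou - d1 / norm - d2 / norm

    # Focaler-IoU: linearly re-map IoU onto [d, u] and clip, so the loss
    # can emphasise either easy or hard samples depending on the interval.
    iou_focaler = ((iou - d) / (u - d)).clamp(0.0, 1.0)

    # Fusion as in the Focaler-IoU formulation: L = L_MPDIoU + IoU - IoU_focaler.
    return 1.0 - mpdiou + iou - iou_focaler
```

Adjusting d and u shifts how strongly the loss emphasises easy versus hard boxes, which is how the abstract's focus on hard small object examples would be obtained; the values used in FCDM-YOLOv8 are not given in the abstract.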

     

    Abstract: To address the critical issues of high missed-detection rates and insufficient detection accuracy caused by the small size and dense distribution of field pests, this study proposed a small object detection method for crop pest images based on improved YOLOv8, named FCDM-YOLOv8. First, in the backbone network, we replaced the original C2f module with a lightweight C2f-FE module to reduce the computational burden of the model. Additionally, we introduced depthwise separable convolution (DWConv) to replace ordinary convolutions in both the backbone and neck networks, further reducing the parameter count while improving detection performance and operational efficiency. Second, in the neck network, we incorporated a context aggregation network (CONTAINER), which improves detection accuracy for small object pests by strengthening contextual information and refining feature representations, enabling better capture and recognition of dense pest groups. Third, we adjusted the model structure by removing the P5 layer of the backbone network and the large object detection head and adding a small object detection layer. These modifications allow the network to retain more feature information related to small objects, thereby enhancing its ability to detect small pests. Fourth, we replaced the decoupled head of YOLOv8 with a dynamic detection head (DyHead), which adaptively adjusts its detection strategy according to the density of object regions, enabling the model to focus more effectively on dense small objects and extract more useful feature information. Finally, we selected Focaler-MPDIoU as the bounding box loss function; it handles small objects and hard examples more effectively, further improving detection accuracy and robustness. Through rigorous experimental validation, the FCDM-YOLOv8 model achieved a precision, recall, mAP0.5, and mAP0.5~0.95 of 81.4%, 73.5%, 80.1%, and 41.1%, respectively, on the self-collected field environment pest dataset. Compared with the baseline model YOLOv8n, FCDM-YOLOv8 improved precision by 2.0 percentage points, recall by 5.2 percentage points, mAP0.5 by 5.1 percentage points, and mAP0.5~0.95 by 2.8 percentage points, while the model size was reduced by 38.1%. Compared with mainstream object detection algorithms such as Faster R-CNN, SSD, and other YOLO-series models, FCDM-YOLOv8 achieved the highest recall and mAP values while maintaining the smallest memory footprint. Visual comparisons with the baseline model also showed that FCDM-YOLOv8 significantly improved detection accuracy and reduced missed detections. Furthermore, in generalization experiments on the public dataset VisDrone2019, the precision, recall, mAP0.5, and mAP0.5~0.95 of FCDM-YOLOv8 reached 52.6%, 38.9%, 41.1%, and 24.3%, respectively, which were 7.7, 5.3, 7.7, and 4.9 percentage points higher than those of the baseline model YOLOv8n. In generalization experiments on the COCO2017-small dataset, the precision, recall, mAP0.5, and mAP0.5~0.95 of FCDM-YOLOv8 reached 44.8%, 29.0%, 28.3%, and 16.0%, respectively, which were 1.9, 1.6, 2.2, and 1.5 percentage points higher than those of YOLOv8n. These results indicate that FCDM-YOLOv8 possesses strong generalization capability and detection accuracy. Finally, a small object detection system for crop pests was developed based on the FCDM-YOLOv8 model, with the model deployed on the backend and the frontend built with the PyQt5 framework. The system can accurately identify and locate wheat spiders and aphids, providing technical support for precise pesticide application, and it can count the number of targets in an image to help estimate pest density. In summary, this method can provide strong technical support for intelligent small object detection of crop pests in field environments.
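As a point of reference for the DWConv replacement described above, a depthwise separable convolution factorises a standard convolution into a per-channel (depthwise) convolution followed by a 1 x 1 pointwise convolution, which is where the parameter saving comes from. The module below is a generic PyTorch sketch of that factorisation, not the exact FCDM-YOLOv8 implementation; the kernel size, stride, and SiLU activation are assumptions (SiLU is simply the YOLOv8 default).

```python
import torch.nn as nn


class DWConv(nn.Module):
    """Depthwise separable convolution: k x k depthwise conv + 1 x 1 pointwise conv.

    For c_in -> c_out channels with a k x k kernel, the weight count drops from
    c_in * c_out * k * k (standard conv) to c_in * k * k + c_in * c_out.
    """

    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        # groups=c_in makes each filter see a single input channel (depthwise).
        self.depthwise = nn.Conv2d(c_in, c_in, k, s, k // 2, groups=c_in, bias=False)
        # The 1 x 1 convolution then mixes information across channels (pointwise).
        self.pointwise = nn.Conv2d(c_in, c_out, 1, 1, 0, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()  # SiLU matches the default YOLOv8 activation

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))
```

For example, with c_in = c_out = 128 and k = 3, a standard convolution needs 128 x 128 x 9, roughly 147k weights, while the factorised version needs 128 x 9 + 128 x 128, roughly 18k.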

     
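The context aggregation idea can likewise be illustrated with a simplified block: CONTAINER-style modules treat convolution as a static affinity and self-attention as a dynamic, input-dependent affinity over the same feature map, and mix the two with learnable weights. The sketch below is a loose reconstruction of that idea for illustration only, not the CONTAINER module used in the paper; the branch definitions, mixing parameters, and normalisation are assumptions.

```python
import torch
import torch.nn as nn


class ContextAggregationBlock(nn.Module):
    """Simplified context aggregation block in the spirit of CONTAINER.

    A static affinity (depthwise convolution) and a dynamic, input-dependent
    affinity (self-attention over spatial tokens) are computed on the same
    feature map and mixed with learnable scalars. Loose illustration only.
    """

    def __init__(self, c, heads=4):
        super().__init__()
        self.static_branch = nn.Conv2d(c, c, 3, padding=1, groups=c)  # static affinity
        self.dynamic_branch = nn.MultiheadAttention(c, heads, batch_first=True)  # dynamic affinity
        self.alpha = nn.Parameter(torch.zeros(1))  # learnable mixing weights
        self.beta = nn.Parameter(torch.ones(1))
        self.norm = nn.BatchNorm2d(c)

    def forward(self, x):
        b, c, h, w = x.shape
        static = self.static_branch(x)
        tokens = x.flatten(2).transpose(1, 2)                      # (B, H*W, C)
        dynamic, _ = self.dynamic_branch(tokens, tokens, tokens)   # global spatial context
        dynamic = dynamic.transpose(1, 2).reshape(b, c, h, w)
        return self.norm(x + self.alpha * dynamic + self.beta * static)
```

The channel count must be divisible by the number of attention heads; where exactly FCDM-YOLOv8 inserts its context aggregation module in the neck is not specified in the abstract.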
