
Fusion of deep learning and morphological processing for weed growth point localization in maize fields

  • Abstract: To enable accurate and fast localization of weed growth points by laser weeding robots, this paper addresses the localization errors caused by small weed scale, irregular shape, and mutual occlusion in maize field environments, and proposes a weed growth point localization method that fuses YOLOv11 detection with mathematical morphology. First, to remedy the insufficient detection accuracy for small, irregular, and occluded weeds, a lightweight detection model, YOLOv11n-LBD (YOLOv11n-LAE-BiFPN-DyHeadDCNv4), was constructed. It integrates a Lightweight Adaptive Extraction (LAE) module, a Bidirectional Feature Pyramid Network (BiFPN), and a DyHeadDCNv4 detection head, which markedly strengthen multi-scale feature extraction in complex field environments and improve detection accuracy for small and occluded weeds. On a self-built maize field weed dataset, the improved model achieved a mean Average Precision (mAP) of 94.8%; compared with the baseline, its parameter count and computational cost were reduced by 0.4 M and 0.8 GFLOPs, respectively, at a detection speed of 79 frames/s. Second, to address the large deviation between the geometric center of a detection box and the true weed growth point, a skeleton density peak analysis method for apical meristem localization (SDPL) was proposed. The weed region of interest is cropped from the YOLOv11n-LBD detections, and a high-quality binary mask is built with Gaussian filtering and Otsu adaptive thresholding, which suppresses background interference and yields a clean binary image for later analysis. On this basis, a skeleton optimization strategy combining Zhang-Suen thinning with B-spline curve fitting smooths the skeleton while preserving the weed's topology, resolving the breaks and burrs of conventional skeleton extraction. The junction points of the optimized skeleton branches serve as candidates; by analyzing the pixel density distribution within a circular neighborhood of each candidate and identifying the local density peak, the weed growth point is localized precisely. Experiments show that the method reaches a growth point localization accuracy of 81.2%, improvements of 14.0 and 3.0 percentage points over the box-center method and the YOLOv8-pose method, respectively, providing technical support for precise localization of weed growth points by laser weeding robots.
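The mask-construction step (Gaussian filtering followed by Otsu adaptive thresholding) can be sketched in pure NumPy. This is a minimal illustration under simplified assumptions, not the paper's implementation: the function names (`otsu_threshold`, `binarize`) and the separable-kernel blur are chosen here for the sketch.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the grey level that maximizes between-class variance (Otsu)."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    omega = np.cumsum(p)                    # cumulative class-0 probability
    mu = np.cumsum(p * np.arange(256))      # cumulative class-0 mean mass
    mu_t = mu[-1]                           # global mean grey level
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1.0 - omega))
    return int(np.argmax(np.nan_to_num(sigma_b)))

def binarize(gray: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Gaussian-smooth the image, then threshold it with Otsu's method."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    smooth = gray.astype(float)
    for axis in (0, 1):                     # separable 1-D convolutions
        smooth = np.apply_along_axis(
            lambda v: np.convolve(v, k, mode="same"), axis, smooth)
    t = otsu_threshold(np.clip(smooth, 0, 255).astype(np.uint8))
    return (smooth > t).astype(np.uint8)    # 1 = plant, 0 = background
```

In practice the same pipeline is one call each to `cv2.GaussianBlur` and `cv2.threshold` with the `THRESH_OTSU` flag; the explicit version above only shows what those calls compute.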


    Abstract: To address inaccurate weed growth point localization in laser weeding, caused primarily by small target size, irregular morphology, and mutual occlusion in maize field environments, this study proposes an integrated method that combines an improved deep learning detector with a skeleton density peak analysis framework for apical meristem localization (SDPL). First, to improve the detection of small, irregular, and partially occluded weeds, a lightweight model named YOLOv11n-LBD was developed. Based on YOLOv11n, the model incorporates three key enhancements: a Lightweight Adaptive Extraction (LAE) module for efficient, adaptive feature encoding; a Bidirectional Feature Pyramid Network (BiFPN) for multi-scale feature fusion with learnable weights; and a DyHeadDCNv4 detection head that combines dynamic attention with deformable convolution for refined spatial and scale-aware feature representation. These modifications collectively strengthen the model's ability to extract discriminative features under complex field conditions. Evaluated on a custom dataset containing maize seedlings and five common weed species, the improved model achieved a mean Average Precision (mAP) of 94.8% and a recall of 93.2%. Compared with the baseline YOLOv11n, its parameter count and computational cost were reduced by 0.4 million and 0.8 GFLOPs, respectively, while maintaining a real-time inference speed of 79 frames per second, a favorable balance between accuracy and efficiency for mobile or embedded deployment on field robots.

To overcome the significant deviation between the geometric center of a bounding box and the actual biological growth point of a weed, the SDPL localization pipeline is introduced. The process begins by cropping the region of interest (ROI) from the YOLOv11n-LBD detection output. A high-quality binary mask is then generated by Gaussian filtering for noise suppression followed by Otsu's adaptive thresholding, effectively isolating the plant from the background; morphological opening and closing operations further suppress noise and improve connectivity. Next, a dedicated skeleton optimization step is applied: an initial single-pixel skeleton is extracted with the Zhang-Suen parallel thinning algorithm, preserving the plant's topology, and artifacts such as jaggedness and spurious branches are removed by smoothing the raw skeleton with B-spline curve fitting, yielding a continuous, natural skeletal representation. Candidate growth points are defined as the junction points of the refined skeleton branches. Finally, the growth point is identified by analyzing the local pixel density distribution within a circular neighborhood of each candidate; the point with the highest local density peak, corresponding to the most concentrated plant tissue near the stem base, is selected as the final result. Experimental validation on field images showed that the proposed method achieved a growth point localization accuracy of 81.2%, improvements of 3.0 and 14.0 percentage points over the keypoint detection (YOLOv8-pose) and bounding-box center positioning methods, respectively. The results demonstrate that SDPL, built on robust detection and morphological-densitometric analysis, performs reliably for irregularly shaped weeds, under partial occlusion, and even when weeds overlap crop plants, scenarios in which geometry-based methods often fail. This work provides a precise, practical, and computationally efficient visual perception solution for vision-guided laser weeding robots, contributing to targeted, energy-efficient automated weed control in sustainable agriculture.
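Assuming the binary mask and the thinned skeleton are already available, the candidate-selection and density-peak steps can be sketched as follows. The junction rule (a skeleton pixel with three or more skeleton neighbours under 8-connectivity) and the density measure (foreground fraction inside a circular neighbourhood) are simplified interpretations for illustration, not the paper's exact implementation.

```python
import numpy as np

def junction_points(skel: np.ndarray):
    """Candidate growth points: skeleton pixels with >= 3 skeleton
    neighbours under 8-connectivity (branch junctions)."""
    s = (skel > 0).astype(np.uint8)
    p = np.pad(s, 1)
    # Count the 8 neighbours of every pixel using shifted copies.
    nb = sum(np.roll(np.roll(p, dy, axis=0), dx, axis=1)
             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
             if (dy, dx) != (0, 0))[1:-1, 1:-1]
    ys, xs = np.nonzero((s == 1) & (nb >= 3))
    return list(zip(ys.tolist(), xs.tolist()))

def density_peak(mask: np.ndarray, candidates, radius: int = 5):
    """Return the candidate whose circular neighbourhood covers the
    largest fraction of foreground (plant) pixels in the mask."""
    h, w = mask.shape
    yy, xx = np.mgrid[0:h, 0:w]
    best, best_d = None, -1.0
    for cy, cx in candidates:
        disk = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
        d = float(mask[disk].mean())        # local pixel density
        if d > best_d:
            best, best_d = (cy, cx), d
    return best
```

On a Y-shaped skeleton, `junction_points` flags the branch fork, and `density_peak` then prefers the fork over branch tips because plant tissue is densest near the stem base, which is the intuition behind the density-peak criterion.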

