
Rail Region Segmentation and Alignment Control Method Based on YOLOv8-seg in Protected Environments

  • Abstract: To address the unstable rail finding and rail entry caused by significant illumination changes and weak GPS positioning in protected tomato cultivation environments, this study proposes a rail-region segmentation and alignment control method based on an improved YOLOv8-seg. First, a two-dimensional LiDAR acquires crop-row point cloud data; an improved DBSCAN clustering algorithm that adapts to local density is proposed and, combined with Theil-Sen estimation, fits row lines, enabling navigation and pose estimation while the robot is outside the rows. Then, as the robot approaches the target rail region, a side-mounted camera captures images and an improved YOLOv8-seg segmentation network identifies the rail region. A multi-scale fusion module (MSC2f), depthwise separable convolution, and GhostConv modules are introduced to improve boundary recognition under complex lighting and to raise inference efficiency. Finally, the rail centerline is fitted by the least squares method, the lateral deviation is computed, and the robot pose is adjusted accordingly, achieving accurate rail-region recognition and alignment control. Experimental results show that the improved YOLOv8-seg achieves an mAP50 of 96.43% on the rail-region segmentation task, 1.3 percentage points higher than the original YOLOv8-seg; the average error of the distance from the robot center to the crop row obtained by LiDAR clustering and fitting is 2.67 cm, and the average heading angle error is 0.18°; under different lighting conditions, the maximum deviation between the robot center and the rail center does not exceed 4.95 cm, with an average deviation of 2.16 cm. The proposed method effectively improves rail-region recognition accuracy and rail-entry stability in protected tomato cultivation environments and shows good environmental adaptability.

     

    Abstract: This study aims to address two critical challenges faced by agricultural robots in protected tomato cultivation environments: significant light variations and unstable GPS signals that lead to difficulties in rail finding and precise alignment. To enhance autonomy and operational efficiency in such environments, we propose an integrated method combining two-dimensional (2D) LiDAR and an improved YOLOv8-seg network for rail-region segmentation and alignment control. The proposed method consists of three main steps. First, during inter-row navigation, a 2D LiDAR scans the crop rows to obtain point cloud data. An improved DBSCAN clustering algorithm, which adapts to local density variations, is employed to process the point cloud data. This algorithm dynamically adjusts the ε and MinPts parameters based on the local and global density distributions of the point cloud. Combined with Theil-Sen estimation, it fits row lines and enables real-time extraction of the distance and yaw angle of the robot relative to the crop rows. This information is used to control the robot’s navigation along the inter-row path with centimeter-level accuracy. Second, as the robot approaches the target rail region, a side-mounted RGB camera captures images of the rail area. An enhanced YOLOv8-seg network is utilized to identify and segment the rail region. The network incorporates several improvements over the original YOLOv8-seg architecture. A multi-scale fusion module (MSC2f) is introduced to replace the original C2f module. This module uses parallel 3×3 and 5×5 convolutions to capture fine details in bright regions and larger structures in shadowed areas. Additionally, efficient channel attention (ECA) is embedded to suppress glare effects by dynamically adjusting channel weights. To further reduce computational complexity and improve inference speed, depthwise separable convolution and GhostConv modules are implemented.
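The glare-suppression idea behind the embedded ECA block can be sketched in NumPy: channel-wise global average pooling, a small 1-D convolution across channels, and a sigmoid gate that rescales each channel. The averaging kernel below is a placeholder for the learned 1-D convolution weights, so this is a conceptual sketch rather than the paper's trained module:

```python
import numpy as np

def eca(feature, k=3):
    """Efficient Channel Attention on a (C, H, W) feature map.

    Channel descriptors come from global average pooling; cross-channel
    interaction uses a 1-D convolution of size k. Here an untrained
    averaging kernel stands in for the learned weights (assumption).
    """
    pooled = feature.mean(axis=(1, 2))                # (C,) channel descriptors
    pad = k // 2
    padded = np.pad(pooled, pad, mode="edge")         # same-length output
    kernel = np.full(k, 1.0 / k)                      # placeholder for learned weights
    conv = np.convolve(padded, kernel, mode="valid")  # (C,) cross-channel mixing
    weights = 1.0 / (1.0 + np.exp(-conv))             # sigmoid gate per channel
    return feature * weights[:, None, None]           # rescaled feature map
```

With trained weights, channels dominated by glare would receive small gates and be attenuated before the segmentation head.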
These modifications enable the network to achieve an mAP50 of 96.43%, representing a 1.3 percentage point improvement over the original YOLOv8-seg, while maintaining a lightweight model with only 1.41 million parameters and a frame rate of 96 fps on an RTX 4060 GPU. Finally, based on the segmentation results, the least squares method is used to fit the rail centerline. The lateral deviation between the robot and the rail centerline is calculated, and the robot’s pose is adjusted accordingly. The robot performs a 90° in-place turn and enters the rail region once the deviation is within the acceptable range. Comprehensive experiments were conducted in a multi-span greenhouse at the Lvgang Modern Agricultural Research Institute in Suqian City, Jiangsu Province, where tomato plants were cultivated in coconut coir bags with rails laid between ridges (rail width: 550 mm, adjacent rail spacing: 1760 mm, total rail length: 50 m). A total of 1216 rail images were collected under four typical light conditions (normal light: 8000–25000 lx, strong light: >25000 lx, low light: 2000–8000 lx, local strong light: >25000 lx in local areas), and data augmentation techniques such as dynamic blur, mirror flipping, and brightness adjustment were applied to expand the dataset to 3648 images. Experimental results demonstrate that the proposed method achieves an average lateral deviation of 2.16 cm and a maximum deviation of 4.95 cm under different lighting conditions, with a 100% success rate in 130 trials. The average distance error from the robot center to the crop row, as determined by LiDAR clustering and fitting, is 2.67 cm, and the average heading angle error is 0.16°, validated against data from an Xsens MTi-670 inertial measurement unit (IMU).
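The centerline-fitting and lateral-deviation step can be illustrated with a minimal NumPy sketch. The row-wise centroid strategy and the `px_per_cm` calibration factor are assumptions for illustration, not details given in the paper:

```python
import numpy as np

def rail_centerline(mask):
    """Least-squares fit of the rail centerline from a binary H x W
    segmentation mask: column = a * row + b over row-wise centroids
    (centroid strategy is an assumption)."""
    ys, centers = [], []
    for r, row in enumerate(mask):
        cols = np.flatnonzero(row)
        if cols.size:                 # skip rows with no rail pixels
            ys.append(r)
            centers.append(cols.mean())
    a, b = np.polyfit(np.asarray(ys, float), np.asarray(centers, float), 1)
    return a, b

def lateral_deviation(mask, px_per_cm=10.0):
    """Signed deviation (cm) between the image centre column and the
    fitted centerline at the bottom image row; px_per_cm is a
    placeholder calibration factor."""
    h, w = mask.shape
    a, b = rail_centerline(mask)
    centre_at_bottom = a * (h - 1) + b
    return (centre_at_bottom - (w - 1) / 2.0) / px_per_cm
```

Once the deviation falls inside the acceptance threshold, the controller would trigger the 90° in-place turn described above.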
The improved DBSCAN clustering algorithm achieves a clustering accuracy of 98.07%, which is 1.05 percentage points higher than the traditional DBSCAN algorithm, with a single-frame processing time reduced from 0.0046 s to 0.0017 s (a 63% speedup). In conclusion, the proposed method provides an effective solution for rail-region segmentation and alignment control in protected tomato cultivation environments. By integrating 2D LiDAR and an improved YOLOv8-seg network, the method achieves high precision and robustness in navigation and alignment tasks under varying lighting conditions. It offers a promising technical pathway for autonomous agricultural robots operating efficiently in protected environments and is potentially extendable to other protected crops such as strawberries and cucumbers, contributing to the advancement of intelligent protected agriculture.
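The LiDAR-side pipeline (density-adaptive DBSCAN parameters plus Theil-Sen row-line fitting) can be sketched as follows. The k-NN-distance rule for choosing ε is a hypothetical stand-in for the paper's local/global density rule, which is not specified in the abstract:

```python
import numpy as np

def adaptive_dbscan_params(points, k=4):
    """Choose (eps, MinPts) from the point cloud's density.

    Hypothetical rule: eps is the median k-th nearest-neighbour
    distance, so sparser scans get a larger neighbourhood radius.
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    knn = np.sort(d, axis=1)[:, k]     # k-th NN distance (column 0 is self)
    return float(np.median(knn)), k + 1

def theil_sen(x, y):
    """Theil-Sen line fit: median of pairwise slopes, robust to the
    outlier points a LiDAR scan of a crop row inevitably contains."""
    i, j = np.triu_indices(len(x), k=1)
    dx = x[j] - x[i]
    valid = dx != 0
    m = float(np.median((y[j] - y[i])[valid] / dx[valid]))
    b = float(np.median(y - m * x))
    return m, b
```

From the fitted line `y = m*x + b` in the robot frame, the yaw angle follows from `atan(m)` and the lateral distance from `|b| / sqrt(1 + m**2)`, matching the distance/heading quantities reported above.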

     
