

A rapid image annotation method for row detection in field crops and experimental evaluation


    Abstract: To address the heavy workload and long time required for image annotation of field crops, this study proposes a rapid image annotation method for row detection in field crops. Taking seedling-stage maize images as the research object, the method first manually marks the two endpoints of the crop-row canopy centerline. Then, using a centerline outward-expansion approach with the minimum canopy width of the target crop row as the reference, quadrilateral bounding boxes representing the crop-row region are generated by scaling that width by a given ratio. Three network models (Deeplabv3+, PSPNet, UNet) were trained to segment the crop-row canopy region, and the least-squares method was applied to fit the crop-row centerline. Annotation performance was evaluated comprehensively by the mean and standard deviation of the centerline's heading angle and lateral offset distance, and the optimal annotation width ratio was determined to lie between 0.6D and 1.3D (D is the minimum canopy width). In the effectiveness test, taking the Deeplabv3+ model trained on 0.6D-ratio annotated data versus pixel-by-pixel annotated data as an example, the proposed method reduced the mean and standard deviation of the heading angle for the left, middle, and right crop rows by 45.69% and 36.25%, respectively, compared with pixel-by-pixel annotation; the mean and standard deviation of the lateral offset distance were reduced by 44.22% and 36.75%, respectively. These results show that, compared with traditional pixel-by-pixel annotation, the proposed approach achieves higher crop-row detection accuracy while substantially reducing annotation workload and improving efficiency.
In the adaptability test, taking the Deeplabv3+ model with 0.6D-ratio continuous annotation versus segmented annotation as an example, the proposed method reduced the mean and standard deviation of the heading angle for the left, middle, and right crop rows by 11.57% and 7.27%, respectively, compared with segmented annotation; the mean and standard deviation of the lateral offset distance were reduced by 19.68% and 10.90%, respectively. The results show that the proposed approach also applies to missing-plant and row-end scenarios without requiring segmented annotation. This research provides a useful reference for improving crop-row image annotation efficiency, shortening annotation time, and advancing the standardization of image annotation.
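The two geometric steps at the core of the method — expanding a manually marked centerline into a quadrilateral annotation box, and least-squares fitting of a row centerline — can be sketched as below. This is a minimal illustration only: the function names, and the convention of fitting x as a function of y (which avoids degeneracy for near-vertical crop rows), are assumptions for exposition, not details taken from the paper.

```python
import numpy as np

def expand_centerline_to_quad(p1, p2, width):
    """Expand a centerline segment (two manually marked endpoints) into the
    four corners of a quadrilateral annotation box, by offsetting each
    endpoint by half the box width along the unit normal of the line.
    `width` would be ratio * D, where D is the minimum canopy width."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    n = np.array([-d[1], d[0]]) / np.linalg.norm(d)  # unit normal to the line
    h = 0.5 * width
    # Corners ordered to trace the quadrilateral: left side p1->p2, then right side p2->p1.
    return np.array([p1 + h * n, p2 + h * n, p2 - h * n, p1 - h * n])

def fit_centerline(points):
    """Least-squares fit x = a*y + b over (x, y) pixel coordinates of a
    segmented canopy region; returns the heading angle in degrees
    (deviation from the vertical image axis) and the fit coefficients."""
    pts = np.asarray(points, float)
    a, b = np.polyfit(pts[:, 1], pts[:, 0], 1)  # regress x on y
    heading = np.degrees(np.arctan(a))
    return heading, a, b

# Example: a vertical centerline from (0, 0) to (0, 10) with box width 2
quad = expand_centerline_to_quad((0, 0), (0, 10), 2.0)
# Example: canopy pixels lying exactly on x = 0.5*y + 1
heading, a, b = fit_centerline([(1, 0), (1.5, 1), (2, 2)])
```

For a vertical centerline the box corners land one half-width to each side, and the fitted slope/intercept recover the generating line, so the heading angle directly reflects the row's deviation from the image's vertical axis.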
