Abstract:
To address the heavy workload and long time required for image annotation of field crops, this study proposes a rapid image annotation method for crop row detection in the field. Taking seedling-stage corn images as the research object, the method first manually marks the two endpoints of the canopy centerline of each crop row. Then, using a centerline outward-expansion approach with the minimum canopy width D of the target crop row as a reference, quadrilateral bounding boxes representing the crop row region are generated by scaling this width by a given ratio. Three network models (Deeplabv3+, Pspnet, Unet) are trained to segment the crop row canopy region, and the least squares method is applied to fit the crop row centerline. The annotation method is comprehensively evaluated using two metrics: the mean and standard deviation of the crop row centerline's heading angle and lateral offset distance. The optimal annotation width ratio is determined to lie between 0.6D and 1.3D (where D is the minimum canopy width). For the effectiveness test, taking the Deeplabv3+ model trained on 0.6D-ratio labeled data versus pixel-by-pixel labeled data as an example, the mean and standard deviation of the heading angle for the left, middle, and right crop rows were reduced by 45.69% and 36.25%, respectively, compared with pixel-by-pixel labeling; the mean and standard deviation of the lateral offset distance were reduced by 44.22% and 36.75%, respectively. These results show that, compared with traditional pixel-by-pixel annotation, the proposed approach achieves higher crop row detection accuracy, significantly reduces annotation workload, and improves efficiency. For the adaptability test, taking the Deeplabv3+ model with 0.6D-ratio continuous labeling versus segmented labeling as an example, the proposed method reduced the mean and standard deviation of the heading angle for the left, middle, and right crop rows by 11.57% and 7.27%, respectively, compared with segmented labeling; similarly, the mean and standard deviation of the lateral offset distance were reduced by 19.68% and 10.90%, respectively. These results show that the proposed approach is also applicable to scenarios with missing plants or broken rows, without requiring segmented annotations. This research provides valuable insights for improving crop row image annotation efficiency, reducing annotation time, and advancing the standardization of image annotation practices.
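The centerline outward-expansion step described in the abstract can be sketched as follows. This is a minimal illustrative geometry example, not the authors' implementation: the function name, the NumPy-based interface, and the example coordinates are assumptions, while the idea (offset the two marked endpoints perpendicular to the centerline by half of ratio × D on each side to form a quadrilateral box) follows the method as summarized above.

```python
import numpy as np

def expand_centerline(p1, p2, d_min, ratio):
    """Generate a quadrilateral crop-row annotation box by expanding the
    manually marked centerline (p1 -> p2) outward on both sides.

    d_min : minimum canopy width D of the target crop row (pixels)
    ratio : annotation width ratio (e.g. 0.6 to 1.3 per the abstract)

    NOTE: this is an illustrative sketch; names and interface are assumed.
    """
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    direction = p2 - p1
    direction /= np.linalg.norm(direction)            # unit vector along the row
    normal = np.array([-direction[1], direction[0]])  # perpendicular in-image direction
    half_w = 0.5 * ratio * d_min                      # half of the scaled canopy width
    # Four corners of the quadrilateral, ordered around the box
    return np.array([p1 + half_w * normal,
                     p2 + half_w * normal,
                     p2 - half_w * normal,
                     p1 - half_w * normal])

# Hypothetical endpoints of one crop-row centerline in image coordinates
corners = expand_centerline((100, 400), (160, 40), d_min=50, ratio=0.6)
```

In this sketch the annotation workload per row is two mouse clicks (the centerline endpoints) plus a single width ratio, rather than a pixel-by-pixel mask, which is the efficiency gain the abstract reports.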