Abstract:
Using machine vision technology to measure the phenotypic parameters of lettuce is of great significance for exploring the growth patterns of lettuce. Algorithms for individual lettuce identification and outer-contour segmentation are an important prerequisite for accurate measurement of phenotypic parameters, but as lettuce grows toward harvest, the leaves of neighboring plants overlap and occlude one another in top-view images, which greatly hinders individual identification and outer-contour segmentation. To address this problem, this paper improves the Mask R-CNN neural network model: the mask branch adopts a class-agnostic mode, and the original convolutional backbone is replaced with ResNeXt-50 combined with an FPN, realizing individual recognition and outer-contour segmentation of butter lettuce under occlusion. To verify and analyze the segmentation accuracy of the improved model, this paper uses average precision (AP75) and average detection time as evaluation indicators and conducts comparative experiments with the original Mask R-CNN, DeepMask, and MNC segmentation models on test sets with different degrees of occlusion. The results show that the average precision of the improved model reaches 98.7%, about 4% higher than the original model, and the improved model maintains good segmentation accuracy even on the heavily occluded test set. This study provides an algorithmic reference for the identification and segmentation of plant leaves under occlusion and offers technical support for extracting phenotypic parameters of butter lettuce.
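To make the described architecture changes concrete, the sketch below shows one way the two modifications (class-agnostic mask branch, ResNeXt-50 + FPN backbone) could be configured. This is a minimal illustration assuming the Detectron2 framework; the paper does not state which implementation was used, and the dataset path and class count here are assumptions.

```python
# Minimal sketch (assumed Detectron2 setup, not the authors' exact implementation):
# Mask R-CNN with a ResNeXt-50 + FPN backbone and a class-agnostic mask branch.
from detectron2 import model_zoo
from detectron2.config import get_cfg
from detectron2.modeling import build_model

cfg = get_cfg()
# Start from a standard Mask R-CNN + ResNet-50-FPN config as the baseline.
cfg.merge_from_file(
    model_zoo.get_config_file("COCO-InstanceSegmentation/mask_rcnn_R_50_FPN_3x.yaml")
)

# Replace the ResNet-50 backbone with ResNeXt-50 by using grouped convolutions
# (32 groups x 4 channels per group), keeping the FPN neck from the base config.
cfg.MODEL.RESNETS.DEPTH = 50
cfg.MODEL.RESNETS.NUM_GROUPS = 32
cfg.MODEL.RESNETS.WIDTH_PER_GROUP = 4

# Class-agnostic mask branch: predict a single foreground mask per RoI
# instead of one mask per class.
cfg.MODEL.ROI_MASK_HEAD.CLS_AGNOSTIC_MASK = True

# Single target category (lettuce) -- an assumption for this single-species task.
cfg.MODEL.ROI_HEADS.NUM_CLASSES = 1

model = build_model(cfg)  # ready for training/evaluation on an occlusion test set
```

The class-agnostic setting reduces the mask head to a binary foreground/background predictor, which is sufficient here because only one plant species is segmented, while the grouped-convolution backbone increases representational capacity at a similar parameter budget to ResNet-50.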