Abstract:
To improve the accuracy of tea bud recognition, increase the efficiency of automatic picking robots, and reduce the cost of manual picking, this paper proposes a tea bud target detection model. Images of Baihao early tea buds were captured; 179 images remained after screening, and 716 images were obtained after Mosaic data augmentation. The resulting data set was divided into training, test, and validation sets at a ratio of 7:2:1. To address the low recognition accuracy caused by overlapping and occluded tea buds against complex backgrounds, this paper modifies the YOLOv5s model: the SE and CBAM attention modules are added to the backbone network for comparison, the Neck network is changed from the original PAFPN to a BiFPN capable of bidirectional weighted fusion, and a P2 module for shallow-feature detection is added to the Head, yielding the proposed tea bud detection model. Experiments, verified by ten-fold cross-validation, show that YOLOv5s with the SE module combined with BiFPN achieves the highest detection accuracy. Compared with the baseline, precision increases by 10.46% to 88.30%, and the mean average precision (mAP) increases by 6.47% to 85.83%. Finally, comparisons with YOLOv5m, Faster R-CNN, and YOLOv4-tiny on the same data set with the same preprocessing show that the proposed method outperforms these classical deep learning methods, improves tea bud detection accuracy more effectively, and can provide a theoretical basis for automatic tea-picking machines.
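To make the two key modifications named above concrete, the sketch below (PyTorch) shows a generic Squeeze-and-Excitation (SE) block and a BiFPN-style fast normalized weighted fusion of feature maps. This is a minimal illustration under assumed settings (channel count, reduction ratio, and module names are hypothetical), not the authors' exact YOLOv5s implementation.

```python
# Minimal sketch of an SE attention block and a BiFPN-style weighted fusion.
# Shapes and hyperparameters are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweights channels using global context."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global average pool
        self.fc = nn.Sequential(                         # excitation: bottleneck MLP
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                     # channel-wise reweighting


class BiFPNFusion(nn.Module):
    """BiFPN-style fast normalized fusion of same-shape feature maps."""

    def __init__(self, num_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(num_inputs))
        self.eps = eps

    def forward(self, feats):
        w = torch.relu(self.weights)                     # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)                     # normalize so weights sum to ~1
        return sum(wi * f for wi, f in zip(w, feats))    # learnable weighted sum


if __name__ == "__main__":
    x = torch.randn(1, 64, 80, 80)                       # dummy backbone feature map
    print(SEBlock(64)(x).shape)                          # torch.Size([1, 64, 80, 80])
    print(BiFPNFusion(2)([x, torch.randn_like(x)]).shape)
```

In the configuration described in the abstract, such an SE block would sit in the backbone, while the weighted fusion would replace the plain concatenation/addition used by PAFPN in the Neck, with an extra P2-level output feeding the Head for small, shallow features.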