Abstract:
Areca yellowing disease is a systemic disease that severely compromises the growth of Areca catechu palms, and the precise monitoring of infected plants is crucial for the sustainable development of the areca nut industry. Traditional monitoring methods, which rely primarily on visual inspection and manual surveys, are inefficient and limited in operational scope. Unmanned aerial vehicle (UAV) remote sensing, with its distinct advantages of high spatial resolution, flexible deployment, and low operational cost, offers an effective solution to the challenges posed by the frequent cloudy and rainy weather of Hainan, which often degrades the quality and availability of satellite optical imagery. However, current UAV-based remote sensing approaches for monitoring Areca yellowing disease still suffer from several limitations, including low identification accuracy and significant interference from similar ground objects, such as discolored shrubs. To address these challenges, this study began with field sampling to assess the disease severity of
A. catechu palms. Based on the observed disease characteristics and their separability in remote sensing imagery, we employed a MicaSense RedEdge-M multispectral camera mounted on a DJI Phantom 4 Pro V2.0 UAV to acquire 5-band multispectral imagery encompassing the blue, green, red, near-infrared, and red-edge bands. We then calculated the Euclidean and Jeffreys-Matusita (J-M) distances to select sensitive spectral indices with high separability, which were subsequently used to construct four distinct datasets for model training. A comprehensive and standardized detection dataset was established through a rigorous process of image preprocessing, sample slicing, and manual verification, yielding a total of 18,520 high-quality samples; this dataset was partitioned into training, validation, and test sets at a ratio of 8:1:1. A precise identification model for infected plants was developed by implementing a series of targeted improvements to the YOLO (You Only Look Once) v10 algorithm. First, GhostNet was incorporated into the backbone network; this lightweight feature-generation mechanism enhances the representation of effective information from key multispectral bands, reduces redundant feature computations, and improves the modeling of fine-grained disease textures. Second, in the feature fusion stage, a bidirectional feature pyramid network (BiFPN) replaced parts of the original feature pyramid structure; with its learnable weights and bidirectional cross-scale fusion, this modification strengthens multi-scale feature aggregation, particularly in complex forest stand environments. Finally, the Shape-aware Intersection over Union (SIoU) loss function was introduced into the regression branch of the detection head, providing a more stable and geometrically constrained bounding box regression mechanism that not only improves the localization accuracy of infected plants but also accelerates model convergence. The results demonstrate that the enhanced cross-scale feature fusion of the improved YOLOv10 model increased the mAP@0.5 to 90.6%, an improvement of approximately 20% over models such as Faster R-CNN, YOLOv8, and the baseline YOLOv10. The model also achieved a precision of 91.2% and a recall of 92.7%, both of which significantly outperformed other classic deep learning models. The proposed model thus achieves rapid and accurate detection of A. catechu palms infected with yellowing disease, effectively meeting the practical demands of disease inspection and monitoring.
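The band-screening step above relies on the Jeffreys-Matusita (J-M) distance, which saturates at 2 as two classes become fully separable. The following is a minimal sketch of that measure under the standard Gaussian class assumption; the function name `jm_distance` and the synthetic healthy/infected index values are illustrative, not the authors' data.

```python
# A minimal sketch of the Jeffreys-Matusita (J-M) separability measure used
# to screen spectral indices. Assumes approximately Gaussian class statistics;
# the toy sample arrays below are synthetic, not the study's field data.
import numpy as np

def jm_distance(x1: np.ndarray, x2: np.ndarray) -> float:
    """J-M distance between two classes of (n_samples, n_features) spectra.

    JM = 2 * (1 - exp(-B)), where B is the Bhattacharyya distance.
    JM lies in [0, 2]; values near 2 indicate near-complete separability.
    """
    m1, m2 = x1.mean(axis=0), x2.mean(axis=0)
    c1 = np.atleast_2d(np.cov(x1, rowvar=False))
    c2 = np.atleast_2d(np.cov(x2, rowvar=False))
    c = (c1 + c2) / 2.0
    diff = m1 - m2
    # Bhattacharyya distance: a mean-separation term plus a covariance term.
    term_mean = 0.125 * diff @ np.linalg.solve(c, diff)
    term_cov = 0.5 * np.log(np.linalg.det(c) /
                            np.sqrt(np.linalg.det(c1) * np.linalg.det(c2)))
    return float(2.0 * (1.0 - np.exp(-(term_mean + term_cov))))

# Toy example: NDVI-like index values for healthy vs. yellowing canopies.
rng = np.random.default_rng(0)
healthy = rng.normal(loc=[0.80], scale=0.05, size=(200, 1))
infected = rng.normal(loc=[0.55], scale=0.08, size=(200, 1))
print(f"J-M distance: {jm_distance(healthy, infected):.3f}")
```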
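The GhostNet modification to the backbone rests on the Ghost module, which produces part of each layer's feature maps with a regular convolution and the rest with cheap depthwise operations. Below is a minimal PyTorch sketch of that idea; the `GhostModule` class, channel counts, and kernel sizes are illustrative assumptions, not the paper's exact configuration.

```python
# A minimal sketch of a Ghost module, GhostNet's cheap-feature-generation
# block. Channel counts and kernel sizes here are illustrative defaults.
import torch
import torch.nn as nn

class GhostModule(nn.Module):
    """Generate part of the output channels with a standard 1x1 convolution
    and the rest with cheap depthwise 'ghost' ops, cutting redundant FLOPs."""
    def __init__(self, in_ch: int, out_ch: int, ratio: int = 2):
        super().__init__()
        primary_ch = out_ch // ratio            # channels from the real conv
        ghost_ch = out_ch - primary_ch          # channels from cheap ops
        self.primary = nn.Sequential(
            nn.Conv2d(in_ch, primary_ch, 1, bias=False),
            nn.BatchNorm2d(primary_ch),
            nn.ReLU(inplace=True),
        )
        # Depthwise 3x3 conv: one cheap linear map per primary feature map.
        self.cheap = nn.Sequential(
            nn.Conv2d(primary_ch, ghost_ch, 3, padding=1,
                      groups=primary_ch, bias=False),
            nn.BatchNorm2d(ghost_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

# Example: a 5-band multispectral tile passed through one Ghost module.
tile = torch.randn(1, 5, 64, 64)        # blue, green, red, NIR, red-edge
print(GhostModule(5, 32)(tile).shape)   # torch.Size([1, 32, 64, 64])
```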
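The BiFPN replacement in the neck hinges on learnable, normalized fusion weights applied across scales. The sketch below illustrates that "fast normalized fusion" for a single two-input fusion node, assuming the inputs have already been resampled to the same shape; `WeightedFusion` and the feature dimensions are hypothetical, not the paper's implementation.

```python
# A minimal sketch of BiFPN-style 'fast normalized fusion': each input scale
# gets a learnable non-negative weight. Shapes below are assumptions.
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Fuse N same-shaped feature maps as
    out = sum_i(w_i * f_i) / (sum_i w_i + eps), with w_i = relu(p_i)."""
    def __init__(self, n_inputs: int, eps: float = 1e-4):
        super().__init__()
        self.weights = nn.Parameter(torch.ones(n_inputs))
        self.eps = eps

    def forward(self, feats: list) -> torch.Tensor:
        w = torch.relu(self.weights)       # keep fusion weights non-negative
        w = w / (w.sum() + self.eps)       # normalize so weights sum to ~1
        return sum(wi * f for wi, f in zip(w, feats))

# Example: fusing a top-down pathway feature with a lateral feature.
p_top_down = torch.randn(1, 64, 40, 40)
p_lateral = torch.randn(1, 64, 40, 40)
fused = WeightedFusion(2)([p_top_down, p_lateral])
print(fused.shape)  # torch.Size([1, 64, 40, 40])
```

Unlike plain addition or concatenation, the learned weights let each fusion node emphasize whichever scale carries more signal, which is the property the abstract credits for better aggregation in cluttered forest stands.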
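The SIoU loss in the detection head augments the IoU term with angle, distance, and shape penalties. A single-box sketch following the original SIoU formulation is given below; the (cx, cy, w, h) box format, the shape exponent theta = 4, and the example boxes are assumptions for illustration, not the paper's code.

```python
# A minimal single-box sketch of the SIoU regression loss. Follows the
# original SIoU formulation; box format and constants are assumptions.
import torch

def siou_loss(pred: torch.Tensor, gt: torch.Tensor,
              theta: float = 4.0, eps: float = 1e-7) -> torch.Tensor:
    """SIoU loss for boxes given as (cx, cy, w, h) tensors."""
    px, py, pw, ph = pred
    gx, gy, gw, gh = gt

    # Plain IoU from corner coordinates.
    p1, p2 = px - pw / 2, px + pw / 2
    q1, q2 = py - ph / 2, py + ph / 2
    g1, g2 = gx - gw / 2, gx + gw / 2
    h1, h2 = gy - gh / 2, gy + gh / 2
    inter = ((torch.min(p2, g2) - torch.max(p1, g1)).clamp(0) *
             (torch.min(q2, h2) - torch.max(q1, h1)).clamp(0))
    iou = inter / (pw * ph + gw * gh - inter + eps)

    # Angle cost: steer regression toward the nearest axis first.
    sigma = torch.sqrt((gx - px) ** 2 + (gy - py) ** 2) + eps
    sin_a = (gy - py).abs() / sigma
    sin_b = (gx - px).abs() / sigma
    sin_a = torch.where(sin_a > 2 ** 0.5 / 2, sin_b, sin_a)
    lam = torch.sin(2 * torch.arcsin(sin_a))

    # Distance cost, relative to the smallest enclosing box, damped by angle.
    cw = torch.max(p2, g2) - torch.min(p1, g1)
    ch = torch.max(q2, h2) - torch.min(q1, h1)
    gamma = 2 - lam
    delta = ((1 - torch.exp(-gamma * ((gx - px) / (cw + eps)) ** 2)) +
             (1 - torch.exp(-gamma * ((gy - py) / (ch + eps)) ** 2)))

    # Shape cost: penalize width/height mismatch.
    ow = (pw - gw).abs() / (torch.max(pw, gw) + eps)
    oh = (ph - gh).abs() / (torch.max(ph, gh) + eps)
    omega = (1 - torch.exp(-ow)) ** theta + (1 - torch.exp(-oh)) ** theta

    return 1 - iou + (delta + omega) / 2

pred = torch.tensor([50.0, 50.0, 20.0, 30.0])
gt = torch.tensor([55.0, 48.0, 22.0, 28.0])
print(f"SIoU loss: {siou_loss(pred, gt):.4f}")
```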