
UAV-Based Multi-Source Remote Sensing for Apple Tree LAI Inversion and Feature Contribution Analysis

  • Abstract: To address the challenges of strong canopy heterogeneity in apple trees, the low efficiency of conventional measurements, and the spectral "saturation effect", this study developed a leaf area index (LAI) inversion framework based on multimodal unmanned aerial vehicle (UAV) features. Focusing on the end-flowering stage, the spring shoot growth stage, and the fruit expansion stage, multispectral imagery and LiDAR point cloud data were acquired, and spectral indices (15), texture features (40), and canopy structural features (27) were extracted to form a fused spectral–texture–structure feature set. Seven machine learning models were selected to compare LAI estimation performance across the phenological stages, and the SHAP method was used to comprehensively evaluate feature contributions at each stage. The results showed that: (1) the DNN achieved consistently high accuracy across all stages, with validation-set R2 values of 0.86, 0.87, and 0.89 at the end-flowering, spring shoot growth, and fruit expansion stages, respectively, outperforming the other models overall; (2) ablation experiments showed that multimodal fusion enhanced LAI estimation performance; and (3) the SHAP analysis indicated that canopy structural features (such as CC, DM_6, and AIH 90th) dominated LAI inversion across all growth stages of apple trees. This study verifies the effectiveness of the LAI inversion strategy based on multimodal fusion and deep learning, and provides technical support and a theoretical basis for precise monitoring and refined management of smart orchards.

     

    Abstract: Leaf Area Index (LAI) is a key parameter for describing canopy development and growth status in apple orchards. This study developed a multimodal unmanned aerial vehicle (UAV)-based framework for leaf area index estimation across three phenological stages of apple trees: the end-flowering stage, the spring shoot growth stage, and the fruit expansion stage. Multispectral imagery and light detection and ranging (LiDAR) point cloud data were acquired, and ground measurements were collected synchronously. A fused feature set was then constructed by integrating spectral, texture, and three-dimensional canopy structural information. Specifically, 15 spectral indices, 40 texture features, and 27 structural descriptors were extracted to characterize canopy reflectance, spatial heterogeneity, and geometric architecture. Seven regression models were evaluated, including the Backpropagation Neural Network (BPNN), Deep Neural Network (DNN), Gaussian Process Regression (GPR), Random Forest (RF), Ridge Regression (RIDGE), and Support Vector Regression (SVR). For each phenological stage, model performance was assessed using repeated random validation, with the coefficient of determination, root mean square error, and mean absolute error as evaluation metrics. In addition, ablation experiments were conducted to quantify the individual and combined contributions of spectral, textural, and structural information, and Shapley Additive Explanations (SHAP) were used to interpret feature importance across growth stages. The results showed that the Deep Neural Network achieved the best overall performance and the most stable accuracy among all models. For the end-flowering stage, spring shoot growth stage, and fruit expansion stage, the validation coefficient of determination reached 0.86, 0.87, and 0.89, respectively.
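The repeated-random-validation protocol described above can be sketched as follows. This is a minimal illustration using synthetic data and scikit-learn; the feature count (15 spectral + 40 texture + 27 structural = 82) follows the abstract, while the sample size, model choice, and data generation are assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(42)
# Synthetic stand-in for the fused feature set: 82 features per tree sample.
X = rng.normal(size=(200, 82))
# Synthetic "LAI" driven by a few informative features plus noise.
y = X[:, :5].sum(axis=1) + rng.normal(scale=0.3, size=200)

# Repeated random validation: average metrics over several random splits.
r2s, rmses, maes = [], [], []
for seed in range(10):
    X_tr, X_va, y_tr, y_va = train_test_split(
        X, y, test_size=0.3, random_state=seed
    )
    model = RandomForestRegressor(n_estimators=200, random_state=seed)
    model.fit(X_tr, y_tr)
    pred = model.predict(X_va)
    r2s.append(r2_score(y_va, pred))
    rmses.append(mean_squared_error(y_va, pred) ** 0.5)
    maes.append(mean_absolute_error(y_va, pred))

print(f"R2={np.mean(r2s):.2f}  RMSE={np.mean(rmses):.2f}  MAE={np.mean(maes):.2f}")
```

Averaging over random splits, rather than reporting a single split, gives a more stable picture of stage-wise model performance, which is the point of the protocol.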
These results indicate that the Deep Neural Network was highly effective for leaf area index estimation under different canopy conditions and phenological stages. In contrast, the other models showed greater fluctuations in validation performance, suggesting lower robustness to stage-dependent canopy variation. The ablation analysis further demonstrated that canopy structural information was the dominant source for leaf area index estimation in apple trees. When used alone, structural features consistently outperformed or matched the spectral and textural feature groups. Their advantage was especially evident during the spring shoot growth stage, when the coefficient of determination reached 0.79, reflecting the strong sensitivity of canopy geometry to leaf area changes during rapid vegetative development. At the end-flowering stage, the fusion of structural and textural information yielded the best dual-source performance, increasing the coefficient of determination to 0.84 and reducing the root mean square error to 0.16. A similar pattern was observed during the fruit expansion stage, where the combination of structural and textural information also produced the strongest two-source performance, with a coefficient of determination of 0.85 and a root mean square error of 0.13. When spectral, textural, and structural information were fully integrated, the estimation accuracy further improved, reaching coefficients of determination of 0.86, 0.87, and 0.89 across the three stages, with corresponding root mean square errors of 0.14, 0.14, and 0.12. The interpretation analysis confirmed the dominant role of canopy structural information in all growth stages. Structural descriptors related to canopy density and height distribution contributed most strongly to prediction, whereas spectral and texture information mainly served as complementary inputs that enhanced model discrimination under complex canopy conditions. 
Overall, this study demonstrates that multimodal feature fusion combined with deep learning can provide accurate and interpretable leaf area index estimation for apple orchards across phenological stages, and offers methodological support for orchard canopy monitoring and precision management.
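The ablation design, comparing single-source feature groups against their fusion, can be illustrated with a small sketch. Synthetic data only: the group sizes (15 spectral, 40 texture, 27 structural) follow the abstract, and the response is deliberately constructed to be structure-dominant to mimic the reported pattern; the model and all other details are assumptions:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n = 300
spectral = rng.normal(size=(n, 15))
texture = rng.normal(size=(n, 40))
structure = rng.normal(size=(n, 27))
# Synthetic "LAI" driven mostly by structural features, weakly by one
# spectral feature, mimicking the structure-dominant pattern above.
y = 1.5 * structure[:, :3].sum(axis=1) + 0.5 * spectral[:, 0] \
    + rng.normal(scale=0.5, size=n)

# Ablation: fit the same model on each feature group and on the fusion.
groups = {
    "spectral": spectral,
    "texture": texture,
    "structure": structure,
    "spectral+texture+structure": np.hstack([spectral, texture, structure]),
}
scores = {}
for name, X in groups.items():
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=1)
    model = RandomForestRegressor(n_estimators=200, random_state=1).fit(X_tr, y_tr)
    scores[name] = r2_score(y_va, model.predict(X_va))

for name, r2 in scores.items():
    print(f"{name}: R2={r2:.2f}")
```

Holding the model and split fixed while varying only the input group isolates each source's contribution, which is how the single-source and fused results above become directly comparable.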

     

