
A multi-scale cross-fusion and boundary-aware network for grape leaf disease segmentation

Segmenting grape leaf diseases using a multi-scale cross-fusion and boundary-aware network

Abstract: To address the blurred edges caused by the diverse morphology of diseased regions, complex backgrounds, and lighting interference in grape leaf disease segmentation, this study proposes a multi-scale cross-fusion and boundary-aware grape leaf disease segmentation network. The multi-scale cross-fusion decoder combines multi-size strip convolutional kernels with a cross-axis attention mechanism to effectively extract multi-scale features and capture global information, improving the segmentation of diseased regions of different sizes. In addition, a lightweight boundary-aware guidance module is proposed that reinforces feature learning with boundary information, increasing the network's sensitivity to boundaries and improving its ability to recognize blurred disease edges, thereby further enhancing the segmentation of diseased regions. The experimental results show that, in the disease segmentation task on a self-built dataset, the network achieves a Dice similarity coefficient of 86.3% and a precision of 88.2%, meeting the requirements of grape leaf disease segmentation. On the public Plant Village dataset, the Dice similarity coefficient and precision reach 85.2% and 86.5%, respectively, confirming its good generalization and practical application potential. In terms of computational efficiency, the network has 3.75M parameters and requires 1.61 GFLOPs of floating-point operations, reducing the computational cost and improving runtime efficiency. Therefore, the proposed network provides a more efficient and stable solution for accurately segmenting leaf disease regions in complex environments.
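For reference, the Dice similarity coefficient reported above, together with the IoU and Hausdorff distance (HD) metrics used in the English abstract below, follows the standard definitions, where P denotes the predicted mask, G the ground-truth mask, ∂ the region boundary, and d(·,·) the Euclidean distance between pixels:

    \mathrm{Dice}(P,G) = \frac{2\,|P \cap G|}{|P| + |G|}, \qquad
    \mathrm{IoU}(P,G) = \frac{|P \cap G|}{|P \cup G|}

    \mathrm{HD}(P,G) = \max\Big\{ \sup_{p \in \partial P} \inf_{g \in \partial G} d(p,g),\ \sup_{g \in \partial G} \inf_{p \in \partial P} d(p,g) \Big\}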

     

Abstract: Early identification is essential for the effective prevention and control of grape leaf diseases. However, accurate segmentation is hindered by the varying sizes and diverse shapes of grape leaves and their diseased areas, as well as by complex backgrounds and edge blurring caused by lighting interference. Moreover, existing models often improve performance at the cost of increased model size and computational complexity, which hampers their deployment on resource-constrained mobile devices. In this study, a multi-scale cross-fusion and boundary-aware segmentation network (MCBNet) was proposed to segment grape leaf diseases with high accuracy at a low computational cost. A multi-scale cross-fusion decoder was developed to effectively integrate feature maps from different scales, in which multi-scale strip convolutional kernels and a cross-axis attention mechanism were used to capture multi-scale and global features. In addition, a boundary-aware guidance module was introduced to make the model more sensitive to boundary features, thereby improving the segmentation of edge-blurred disease regions of varying sizes. The experimental results show that: 1) MCBNet exhibited outstanding performance on the self-built grape leaf disease dataset. In the leaf segmentation task, MCBNet improved the Dice and IoU metrics by 0.6 and 1.1 percentage points, respectively, over the second-best network; in the disease segmentation task, Dice and IoU were improved by 1.3 and 1.9 percentage points, respectively. The HD metric was used to measure the accuracy of segmentation boundaries, and MCBNet outperformed the second-best network by 4.0 and 0.4 percentage points in the leaf and disease segmentation tasks, respectively. Compared with the lightweight MetaSeg network, MCBNet improved the Dice and HD metrics by 1.3 and 1.6 percentage points, respectively, while requiring only 3.75M parameters and 1.61 GFLOPs, achieving an excellent balance between high segmentation accuracy and low computational cost. 2) The generalization of MCBNet was further validated on the public PlantVillage dataset. In the disease segmentation task, MCBNet achieved Dice, IoU, Se, and Pre values of 85.2%, 74.2%, 83.8%, and 86.5%, respectively, and outperformed the second-best network by 4.38 percentage points in the HD metric, indicating better performance on blurred boundaries. 3) Visualization results confirmed that MCBNet captured disease regions of various sizes on both the self-built and public datasets, significantly reducing missed detections, and that its boundary-aware guidance module markedly improved the handling of edge details, further validating its segmentation performance. In conclusion, MCBNet offers an efficient and precise solution for grape leaf disease segmentation in complex environments, and its lightweight design allows deployment on resource-constrained devices. Some limitations remain. To balance operational efficiency and deployment requirements, a lightweight backbone network was used for feature extraction, which can limit feature representation and thus the segmentation accuracy of disease regions; future research could optimize the network structure to capture disease regions more precisely. In addition, model training still relies on a large amount of high-quality pixel-level labeled data, which is time-consuming and costly; weakly supervised or semi-supervised learning could be introduced to reduce the reliance on fine-grained annotations and lower data preparation costs. Finally, domain adaptation could be added to enhance stability and generalization under variable and complex conditions, such as strong lighting or partial leaf occlusion.
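The abstracts state that the multi-scale cross-fusion decoder combines multi-size strip convolutional kernels with a cross-axis attention mechanism, but do not give the layer configuration. Purely as an illustrative sketch of the strip-convolution idea, the following PyTorch block applies parallel 1×k / k×1 depth-wise convolutions and uses their summed response to re-weight the input; the kernel sizes (7, 11, 21), the module name, and the re-weighting scheme are assumptions, not the authors' implementation.

    import torch
    import torch.nn as nn

    class MultiScaleStripConv(nn.Module):
        # Illustrative multi-size strip-convolution block; NOT the paper's exact
        # decoder. Each branch factorizes a large k x k receptive field into a
        # 1 x k followed by a k x 1 depth-wise convolution, and the summed
        # multi-scale response re-weights the input feature map.
        def __init__(self, channels, kernel_sizes=(7, 11, 21)):
            super().__init__()
            self.local = nn.Conv2d(channels, channels, 5, padding=2, groups=channels)
            self.branches = nn.ModuleList(
                nn.Sequential(
                    nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
                    nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
                )
                for k in kernel_sizes
            )
            self.fuse = nn.Conv2d(channels, channels, 1)  # point-wise channel mixing

        def forward(self, x):
            base = self.local(x)                # local 5x5 depth-wise context
            attn = base
            for branch in self.branches:
                attn = attn + branch(base)      # add each strip-convolution scale
            attn = self.fuse(attn)
            return attn * x                     # re-weight the input features

    block = MultiScaleStripConv(64)
    out = block(torch.randn(1, 64, 128, 128))   # shape preserved: (1, 64, 128, 128)

Factorizing a k×k kernel into 1×k and k×1 strips keeps the receptive field large while the parameter count grows linearly rather than quadratically with k, which is consistent with the lightweight design goal stated above.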
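The internals of the boundary-aware guidance module are likewise not described in the abstract. As a hedged, generic illustration of how boundary supervision is commonly derived for such modules, a thin contour map can be computed from the ground-truth mask as the difference between a max-pooling dilation and erosion; the function name and band width below are hypothetical and are not taken from the paper.

    import torch
    import torch.nn.functional as F

    def mask_to_boundary(mask, width=3):
        # Derive a thin boundary band from a binary mask of shape (N, 1, H, W):
        # max pooling approximates dilation, pooling the inverted mask approximates
        # erosion, and their difference keeps only pixels near the contour.
        pad = width // 2
        dilated = F.max_pool2d(mask, kernel_size=width, stride=1, padding=pad)
        eroded = 1.0 - F.max_pool2d(1.0 - mask, kernel_size=width, stride=1, padding=pad)
        return (dilated - eroded).clamp(0.0, 1.0)

    # Toy example: a thin ring around a square "lesion"
    mask = torch.zeros(1, 1, 16, 16)
    mask[..., 4:12, 4:12] = 1.0
    boundary = mask_to_boundary(mask)           # 1.0 only near the square's edge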

     
