
Extracting lychee planting areas using improved DeepLabV3+ network

  • Abstract: Existing remote-sensing methods for extracting lychee planting areas suffer from low extraction accuracy, poor segmentation quality, long training time, and high model complexity. To address these problems, this study proposes an improved DeepLabV3+ model: the Xception backbone is replaced with MobileNetV2 to save training time while preserving accuracy; a DenseASPP module is constructed to strengthen multi-scale feature extraction; and a channel attention mechanism together with strip pooling is introduced to suppress interference and improve accuracy. The improved model was compared with the SegFormer, PSPNet, and UNet image segmentation models. The results show that its mean intersection over union (MIoU), mean pixel accuracy (mPA), and accuracy (Ac) reach 83.55%, 91.58%, and 91.15%, respectively, which are 8.15, 5.27, and 4.97 percentage points higher than those of the original DeepLabV3+ model. Through structural optimization, the parameter count is compressed to 5.8 M and the computational complexity is reduced to 22.4 GFLOPs, 94% lower than the original DeepLabV3+ and 95% lower than PSPNet. The findings provide a reference for accurately understanding the spatial distribution of lychee planting areas and their trends of change.
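To make the DenseASPP idea above concrete, the following is a minimal PyTorch sketch of a densely connected atrous pyramid, in which each dilated branch receives the concatenation of the input and all earlier branch outputs. The dilation rates, channel widths, and class names (`DenseASPP`, `_DenseASPPBranch`) are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch of a DenseASPP-style module (assumed dilation rates and
# channel sizes; not the paper's exact configuration).
import torch
import torch.nn as nn

class _DenseASPPBranch(nn.Sequential):
    """One atrous branch: 1x1 channel reduction, then a 3x3 dilated conv."""
    def __init__(self, in_ch, mid_ch, out_ch, dilation):
        super().__init__(
            nn.Conv2d(in_ch, mid_ch, kernel_size=1, bias=False),
            nn.BatchNorm2d(mid_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid_ch, out_ch, kernel_size=3,
                      padding=dilation, dilation=dilation, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

class DenseASPP(nn.Module):
    """Each branch sees the concatenation of the input and all previous
    branch outputs (dense connectivity), giving denser multi-scale
    coverage than a plain ASPP."""
    def __init__(self, in_ch=320, mid_ch=128, growth=64,
                 dilations=(3, 6, 12, 18, 24)):
        super().__init__()
        self.branches = nn.ModuleList()
        ch = in_ch
        for d in dilations:
            self.branches.append(_DenseASPPBranch(ch, mid_ch, growth, d))
            ch += growth  # the next branch also sees this branch's output
        self.project = nn.Sequential(
            nn.Conv2d(ch, 256, kernel_size=1, bias=False),
            nn.BatchNorm2d(256),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        feats = [x]
        for branch in self.branches:
            feats.append(branch(torch.cat(feats, dim=1)))
        return self.project(torch.cat(feats, dim=1))

# Example: MobileNetV2's last feature map typically has 320 channels.
# y = DenseASPP(in_ch=320)(torch.randn(1, 320, 32, 32))  # -> (1, 256, 32, 32)
```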

     

    Abstract: Lychee (litchi) is one of the most popular fruits in Asia, and accurate extraction of its plantation area is crucial to lychee production. Traditional remote-sensing approaches struggle to measure large-scale plantation areas accurately because of limited temporal and spatial coverage, so the plantation area is often extracted from high-resolution images using deep learning. However, existing deep learning models do not fully meet the requirements of lychee plantation mapping: feature-extraction accuracy is low, segmentation of the original imagery is unsatisfactory, sample training takes a long time, and the original model is highly complex. In this study, an improved DeepLabV3+ model was proposed. First, the original Xception backbone was replaced with MobileNetV2, which effectively alleviated overfitting during loss convergence and preserved segmentation accuracy while saving training time. Second, the ASPP (Atrous Spatial Pyramid Pooling) module was upgraded to DenseASPP by introducing a dense connectivity mechanism: the dilation rates of the atrous convolutions were varied and the output of each layer was connected to all previous layers, forming a denser feature map so that the global information of the image could be used more effectively for multi-scale feature extraction of lychee trees in complex scenes. Third, a channel attention mechanism (Squeeze-and-Excitation) was added to better capture the semantic information of lychee trees during semantic segmentation, improving accuracy in edge regions and fine details. Fourth, Strip Pooling (SP) was introduced to enhance the model's understanding of spatial context, especially in scenes with complex objects and backgrounds; its multi-scale pooling captures more semantic information, effectively suppressing interference and improving the intersection over union (IoU). Finally, the improved DeepLabV3+ model was systematically compared with the SegFormer, PSPNet, and UNet image segmentation models. The results show that the mean intersection over union (MIoU), mean pixel accuracy (mPA), and accuracy (Ac) of the improved model reached 83.55%, 91.58%, and 91.15%, respectively, which are 8.15, 5.27, and 4.97 percentage points higher than those of the original DeepLabV3+ model. After structural optimization, the number of parameters was 5.8 M and the computational complexity was reduced to 22.4 GFLOPs, 94% and 95% lower than the original DeepLabV3+ and PSPNet, respectively. The improved model is well suited to extracting lychee plantation areas, and the findings provide technical support for accurately mapping their spatial distribution and trends of change.
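The channel attention and strip pooling components mentioned above can be sketched as generic building blocks. The PyTorch code below shows a standard Squeeze-and-Excitation block and a strip-pooling attention block; the reduction ratio, kernel sizes, and fusion scheme are assumptions for illustration, not the configuration reported in the paper.

```python
# Hedged sketch of the two attention components mentioned in the abstract:
# Squeeze-and-Excitation (channel attention) and strip pooling (spatial context).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: global-average-pool each channel, pass the
    result through two FC layers, then rescale channels by learned weights."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze -> (b, c)
        return x * w.view(b, c, 1, 1)     # excite: channel-wise rescaling

class StripPooling(nn.Module):
    """Pool along full rows and full columns separately, so long, thin
    context (field boundaries, roads) is captured without mixing in the
    distant background that a square pooling window would include."""
    def __init__(self, channels):
        super().__init__()
        # 1-D convolutions along each strip direction
        self.conv_h = nn.Conv2d(channels, channels, (3, 1), padding=(1, 0), bias=False)
        self.conv_w = nn.Conv2d(channels, channels, (1, 3), padding=(0, 1), bias=False)
        self.fuse = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x):
        _, _, h, w = x.shape
        row = self.conv_h(F.adaptive_avg_pool2d(x, (h, 1)))  # (b, c, h, 1)
        col = self.conv_w(F.adaptive_avg_pool2d(x, (1, w)))  # (b, c, 1, w)
        attn = torch.sigmoid(self.fuse(row + col))           # broadcast to (b, c, h, w)
        return x * attn
```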

     

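For reference, the three reported metrics (MIoU, mPA, and Ac) are commonly computed from a per-class confusion matrix as in the sketch below; this is the standard formulation, not the authors' evaluation code.

```python
# Standard segmentation metrics from a confusion matrix (sketch, not the paper's code).
import numpy as np

def confusion_matrix(pred, label, num_classes):
    """hist[i, j] = number of pixels with ground truth i and prediction j."""
    mask = (label >= 0) & (label < num_classes)
    idx = num_classes * label[mask].astype(int) + pred[mask].astype(int)
    return np.bincount(idx, minlength=num_classes ** 2).reshape(num_classes, num_classes)

def segmentation_metrics(hist):
    tp = np.diag(hist)
    with np.errstate(divide="ignore", invalid="ignore"):
        per_class_iou = tp / (hist.sum(axis=1) + hist.sum(axis=0) - tp)  # TP / (TP + FP + FN)
        per_class_pa = tp / hist.sum(axis=1)                              # TP / (TP + FN)
    return {
        "MIoU": np.nanmean(per_class_iou),
        "mPA": np.nanmean(per_class_pa),
        "Accuracy": tp.sum() / hist.sum(),
    }

# Example with two classes (background / lychee):
# hist = confusion_matrix(pred_mask.ravel(), gt_mask.ravel(), num_classes=2)
# print(segmentation_metrics(hist))
```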
