Online real-time gender identification method for silkworm pupae based on the Fast-SCNN-mp model

  • Abstract: The accuracy of gender identification of silkworm pupae determines breeding efficiency and quality. To address the low recognition accuracy caused by missing gonad features during online identification of silkworm pupae, this study proposes an improved lightweight real-time semantic segmentation model, Fast-SCNN-mp. First, a multi-scale convolutional attention module is introduced in the feature extraction stage to strengthen the model's focus on discriminative gonad regions; second, cascaded depthwise separable convolutions and a bottleneck residual module are adopted for feature compression and enhancement; finally, a pyramid pooling module is integrated in the feature aggregation stage to fuse multi-scale contextual information through multi-level pyramid pooling. The results show that, on a multi-angle gonad-defective pupa dataset, the precision (P), recall (R), F1-score, and accuracy (A) of the Fast-SCNN-mp model reached 98.57%, 98.65%, 98.61%, and 98.61%, respectively, improvements of 2.79, 2.73, 2.76, and 2.79 percentage points over the basic Fast-SCNN model. On the dataset with roll angles of >72°~90°, Fast-SCNN-mp achieved an accuracy of 96.30%, on par with the best mainstream semantic segmentation algorithm, Mask2Former, whereas the best conventional classification algorithm, the compact convolutional Transformer (CCT), reached only 81.48%. Fast-SCNN-mp processes 68.10 frames/s, a 32.55-fold speedup over Mask2Former, with only 2.17 M parameters, making it well suited for deployment on edge devices. This study provides an efficient and reliable technical solution for online intelligent identification of silkworm pupae, and an important reference for model optimization and application in real-time classification tasks in agriculture.
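The four metrics reported above (P, R, F1, A) follow the standard definitions over a binary (female/male) confusion matrix. A minimal sketch, with hypothetical counts chosen purely for illustration (the paper does not report its confusion matrix):

```python
# Illustrative sketch: computing precision, recall, F1-score, and accuracy
# from a binary confusion matrix, as reported in the abstract.

def classification_metrics(tp, fp, fn, tn):
    """Return (precision, recall, F1, accuracy) as percentages, 2 decimals."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    return tuple(round(100 * m, 2) for m in (precision, recall, f1, accuracy))

# Hypothetical counts for illustration only.
print(classification_metrics(tp=138, fp=2, fn=2, tn=133))
```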


    Abstract: Silkworm breeding is a key stage of silk production, and its efficiency and quality depend on accurate gender identification of silkworm pupae. At present, gender identification relies mainly on manual observation of the gonad characteristics at the tail of the pupa, which can only be performed about one week after pupation. Manual identification cannot meet the requirements of large-scale production because of its labor intensity and cost. Machine vision is a promising alternative for silkworm pupa identification owing to its low cost, easy integration, and suitability for online detection. However, existing models have been built on ideal pupa images with intact gonads, whereas online detection must cope with gonad feature defects caused by practical working conditions, such as deviations in pupa placement angle and pupa curling. In this study, an improved lightweight real-time semantic segmentation model, named Fast-SCNN-mp, was developed from the basic Fast-SCNN model through multi-dimensional optimization. A multi-scale convolutional attention module was introduced to focus on the gonadal regions during feature extraction; cascaded depthwise separable convolutions and a bottleneck residual module were adopted for efficient feature compression and enhancement; and a pyramid pooling module was integrated into the feature aggregation stage to fuse multi-scale contextual information and strengthen feature representation. Experiments were conducted on a gonad-defective dataset of 875 images covering 5 silkworm pupa varieties over the full roll-angle range of 0~18°, >18°~45°, >45°~72°, and >72°~90°.
The results showed that the precision, recall, F1-score, and accuracy of the Fast-SCNN-mp model reached 98.57%, 98.65%, 98.61%, and 98.61%, respectively, which were 2.79, 2.73, 2.76, and 2.79 percentage points higher than those of the basic Fast-SCNN model. The model was further compared with 2 conventional classification models and 5 mainstream semantic segmentation models. On the dataset with roll angles of >72°~90°, Fast-SCNN-mp achieved an accuracy of 96.30%, comparable to that of Mask2Former, the best of the mainstream semantic segmentation models, whereas the best conventional classification model, the compact convolutional Transformer (CCT), reached only 81.48%. In terms of model size and inference speed, Fast-SCNN-mp had the fewest parameters of all the models compared (2.17 M) and the highest inference speed (68.10 frames/s), a 32.55-fold increase over the top-performing Mask2Former. In conclusion, the Fast-SCNN-mp model effectively balances accuracy against real-time requirements, combining high identification accuracy with a lightweight structure and high inference efficiency. These findings provide an efficient and reliable technical solution for the online intelligent identification of silkworm pupae, and a valuable reference for model optimization and application in real-time classification tasks in modern agriculture.
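The small parameter count (2.17 M) stems largely from replacing standard convolutions with depthwise separable convolutions, as in the Fast-SCNN family. A back-of-the-envelope sketch (the layer sizes are assumptions for illustration, not the paper's actual configuration) shows the savings:

```python
# Sketch: parameter counts of a standard convolution versus a depthwise
# separable convolution (depthwise + pointwise), biases ignored.

def standard_conv_params(c_in, c_out, k):
    # One k x k kernel per (input channel, output channel) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Depthwise: one k x k kernel per input channel;
    # pointwise: a 1 x 1 convolution mixing channels.
    return c_in * k * k + c_in * c_out

# Hypothetical layer: 64 -> 128 channels with 3 x 3 kernels.
std = standard_conv_params(64, 128, 3)        # 73,728 parameters
sep = depthwise_separable_params(64, 128, 3)  # 8,768 parameters
print(std, sep, round(std / sep, 1))          # ~8.4x fewer parameters
```

The roughly order-of-magnitude reduction per layer is what makes such models attractive for edge deployment.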

