
Quality Grading Detection Algorithm for Fresh Daylily Based on Improved YOLOv11n

  • Abstract: Fresh daylily is a specialty agricultural product, and post-harvest quality grading is a key step in its commercial handling and deep processing. To address the low efficiency and strong subjectivity of manual sorting, as well as the high computational complexity and insufficient recognition accuracy of existing models, this study established a three-level grading system with bud length as the core criterion. A pixel-threshold conversion model was built through spatial calibration and geometric correction, mapping physical lengths to image pixel thresholds and enabling grade annotation of images. On this basis, an improved YOLOv11n-based quality grading detection model for fresh daylily, PDi-YOLOv11n, is proposed. First, the conventional C3k2 module in the backbone network was replaced with a C3k2_CSPPC module that integrates a cross-stage partial pyramid compression (CSPPC) structure, improving recognition accuracy in dense, complexly arranged scenes while reducing computational redundancy. Second, to mitigate missed detections and inaccurate localization of fresh daylily, a dynamic upsampling module (DySample) was introduced; its content-adaptive mechanism improves the model's localization accuracy. Finally, an iEMA mechanism combining the inverted residual mobile block (iRMB) with efficient multi-scale attention (EMA) was introduced into the neck network to strengthen the extraction of key daylily features, suppress background interference, and improve detection efficiency. Experimental results show that, compared with the original YOLOv11n on a self-built fresh daylily dataset, PDi-YOLOv11n improved the average detection precision for premium, first-grade, and second-grade daylily by 2.6%, 5.3%, and 2.6%, respectively, while overall precision, recall, and mean average precision rose by 4.3, 2.4, and 2.6 percentage points. After lightweighting, the floating-point operations, parameter count, and model size were reduced by 3.3%, 12.8%, and 13.0%, respectively. For practical use, the model was deployed in an Android-based mobile detection system, where it maintained high grading accuracy with an average detection time of 0.98 s per high-resolution daylily image. The model achieves a good balance between accuracy and efficiency and can provide technical support for automated grading of fresh daylily.
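The length-to-pixel mapping described above can be sketched as follows. This is a minimal illustration only: the grade boundaries (in mm), the calibration scale, and the geometric correction factor are hypothetical placeholders, since the abstract does not give the paper's actual values.

```python
# Sketch of a pixel-threshold conversion model for three-level grading.
# All numeric constants below are hypothetical, not the paper's values.

MM_PER_PIXEL = 0.25   # hypothetical scale from spatial calibration
CORRECTION = 1.02     # hypothetical geometric correction factor

# Hypothetical grade boundaries on physical bud length (mm):
# premium > 130 mm, first grade 110-130 mm, second grade <= 110 mm.
GRADE_BOUNDS_MM = {"premium": 130.0, "first": 110.0}

def mm_to_pixels(length_mm: float) -> float:
    """Map a physical length to an image-pixel length after geometric correction."""
    return length_mm * CORRECTION / MM_PER_PIXEL

# Pixel thresholds pre-computed once and then used to annotate images.
PIXEL_BOUNDS = {grade: mm_to_pixels(mm) for grade, mm in GRADE_BOUNDS_MM.items()}

def grade_from_pixels(bud_length_px: float) -> str:
    """Assign one of the three grades from a measured pixel length."""
    if bud_length_px > PIXEL_BOUNDS["premium"]:
        return "premium"
    if bud_length_px > PIXEL_BOUNDS["first"]:
        return "first"
    return "second"
```

Pre-computing the thresholds in pixel space means the annotation step never converts back to millimetres per bud; only one division by the calibration scale is ever needed.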

     

    Abstract: As a distinctive specialty agricultural product, fresh daylily (Hemerocallis citrina) is not only valued for its nutritional properties but also plays a significant role in the agricultural economy of several regions. The post-harvest quality grading of fresh daylily is a critical step in its commercial processing, storage, and subsequent deep utilization, directly influencing both market value and processing efficiency. However, current grading practices predominantly rely on manual sorting, which is characterized by low efficiency, high labor intensity, and strong subjectivity, resulting in inconsistent grading outcomes. Moreover, existing automated detection models often suffer from high computational complexity and insufficient recognition accuracy, particularly when dealing with dense arrangements and morphological variations inherent to fresh daylily buds. To address these challenges, this study establishes a systematic three-level grading framework based on bud length as the primary criterion. A pixel threshold conversion model was developed through spatial calibration and geometric correction, enabling the accurate mapping of physical bud lengths to image pixel thresholds. This conversion supports precise image-based grading annotation and lays the foundation for automated visual inspection. Building upon this framework, a novel quality grading detection model, named PDi-YOLOv11n, is proposed based on an improved YOLOv11n architecture specifically tailored for fresh daylily. First, to enhance feature extraction under dense and complex arrangement scenarios while reducing computational redundancy, the conventional C3k2 module in the backbone network was replaced with a C3k2_CSPPC module, which integrates a Cross-Stage Partial Pyramid Compression (CSPPC) structure. 
This module effectively refines multi-scale feature representations by compressing spatial and channel information across hierarchical stages, thereby improving the model’s ability to discriminate overlapping or closely adjacent daylily buds. Second, to address the persistent issues of missed detection and inaccurate localization—especially for buds with varying orientations and occlusions—a dynamic upsampling module (DySample) was incorporated. Unlike fixed upsampling methods, DySample employs a content-adaptive mechanism that dynamically adjusts sampling locations based on feature context, significantly enhancing spatial localization accuracy and enabling more precise boundary delineation. Third, an iEMA mechanism was introduced into the neck network, integrating the Inverted Residual Mobile Block (iRMB) with the Efficient Multi-scale Attention (EMA) mechanism. This integration facilitates the extraction of salient features of fresh daylily by emphasizing discriminative regions across multiple scales while effectively suppressing background interference. The combined architecture thus improves detection efficiency and robustness under varying field conditions. Experimental evaluations were conducted on a self-constructed dataset comprising diverse images of fresh daylily collected under different lighting, orientation, and density conditions. Compared with the baseline YOLOv11n model, the proposed PDi-YOLOv11n achieved substantial improvements in detection precision across all grading categories. Specifically, the average detection precision for premium, first-grade, and second-grade fresh daylily increased by 2.6%, 5.3%, and 2.6%, respectively. Overall model performance showed notable gains, with precision, recall, and mean average precision improving by 4.3, 2.4, and 2.6 percentage points, respectively. 
In addition to accuracy improvements, model lightweighting was effectively achieved: the floating-point operations, number of parameters, and model size were reduced by 3.3%, 12.8%, and 13.0%, respectively, compared to the original model. These reductions demonstrate that the proposed modifications not only enhance detection capability but also improve computational efficiency, making the model suitable for resource-constrained deployment scenarios. To assess practical applicability, the model was deployed in an Android-based mobile detection system. Under real-world testing conditions using high-resolution images, the system maintained high grading detection accuracy while achieving an average inference time of 0.98 seconds per image on mobile devices. This demonstrates the model’s capability to support real-time or near-real-time field applications. Overall, the PDi-YOLOv11n model achieves a favorable balance between detection accuracy, computational efficiency, and deployment feasibility. The integrated approach—combining a robust length-based grading standard, targeted architectural enhancements, and effective lightweighting—provides a reliable and scalable solution for automated fresh daylily grading, offering substantial technical support for advancing post-harvest processing and quality control in the specialty agricultural sector.
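The difference between fixed upsampling and the content-adaptive sampling idea behind DySample can be illustrated with a toy 1-D example. This is a conceptual sketch, not the actual DySample module: the real module predicts 2-D sampling offsets with learned convolutions inside the network, whereas here the offsets are simply supplied as an argument.

```python
# Toy 1-D comparison: fixed regular-grid upsampling vs. offset-shifted
# (content-adaptive) sampling, the core idea behind DySample.

def linear_sample(feature, x):
    """Linearly interpolate a list of floats `feature` at fractional position `x`."""
    x = max(0.0, min(x, len(feature) - 1.0))   # clamp to valid range
    i = int(x)
    j = min(i + 1, len(feature) - 1)
    frac = x - i
    return feature[i] * (1.0 - frac) + feature[j] * frac

def upsample(feature, scale, offsets=None):
    """Upsample by `scale`; optional per-output offsets shift the sampling grid."""
    n_out = len(feature) * scale
    out = []
    for k in range(n_out):
        x = k / scale                  # fixed, regular sampling position
        if offsets is not None:
            x += offsets[k]            # content-adaptive shift (DySample idea)
        out.append(linear_sample(feature, x))
    return out
```

With zero offsets this reduces to ordinary bilinear-style upsampling; non-zero offsets let each output location sample where the feature content dictates, which is what sharpens boundary localization in the detector.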

     
