
Lightweight instance segmentation method for key parts of chicken carcass based on improved YOLOv12n-seg

  • Abstract: Existing chicken carcass image recognition methods are prone to false and missed detections in complex scenes (ambient lighting changes, wing-drumstick adhesion, occlusion, etc.) and cannot deliver efficient, accurate recognition of key chicken carcass parts. To address this, this study proposes DEF-YOLO-seg, an instance segmentation method for key chicken carcass parts in complex scenes based on YOLOv12n-seg, and deploys it on the visual imaging platform of intelligent multi-part chicken carcass cutting equipment. First, to strengthen the model's robustness in complex industrial scenes, a multi-dimensional data augmentation strategy was designed around actual production conditions to construct an augmented chicken carcass dataset. Second, to address the false detections, missed detections, and incorrect segmentation caused by wing-drumstick adhesion and occlusion, as well as the feature loss caused by weak regional feature representation, the C3k2_DAttention module, an adaptation of DAttention, was introduced into the backbone network and its optimal placement was investigated, enhancing the model's feature extraction and dynamic region-focusing capabilities for key parts. Third, the EUCB module was introduced to improve the neck network, reducing computational cost while improving feature fusion efficiency and detection performance. Finally, Focaler-IoU was applied to CIoU and the optimal hyperparameter combination was explored; by adaptively focusing on different types of regression samples, this improves the model's generalization across diverse detection tasks in complex scenes. On the instance segmentation task of the constructed chicken carcass dataset, DEF-YOLO-seg achieved an mAP50 of 95.5% and an mAP50-95 of 94.1%, 1.3 and 2.8 percentage points higher than the baseline YOLOv12n-seg, with a parameter count of 3.3M; in actual production line tests, image-level accuracy reached 95% and per-image inference time was under 30 ms. The model can provide key technical support for the subsequent development of intelligent chicken carcass cutting technology and equipment.
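To make the C3k2_DAttention idea above concrete, the following is a minimal, single-head PyTorch sketch of deformable attention in the spirit of DAttention: a small offset head shifts a coarse grid of reference points, and keys/values are sampled at the shifted locations, so attention can concentrate on informative regions such as adhered wing/drumstick boundaries. The class name, the 8x8 reference grid, and the pooling-based offset head are illustrative assumptions; the paper's actual C3k2 fusion and its placement in the backbone are not reproduced here.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DeformableAttention(nn.Module):
        """Single-head deformable-attention sketch (after DAttention); not the paper's C3k2_DAttention."""

        def __init__(self, dim: int, n_ref: int = 8):
            super().__init__()
            self.n_ref = n_ref                    # reference grid is n_ref x n_ref
            self.scale = dim ** -0.5
            self.q = nn.Conv2d(dim, dim, 1)       # query projection
            self.kv = nn.Conv2d(dim, 2 * dim, 1)  # joint key/value projection
            self.offset = nn.Conv2d(dim, 2, 3, padding=1)  # 2-D offset head
            self.proj = nn.Conv2d(dim, dim, 1)    # output projection

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            B, C, H, W = x.shape
            q = self.q(x).flatten(2).transpose(1, 2)          # (B, H*W, C)

            # Regular n x n grid of reference points in [-1, 1] (grid_sample coords).
            n = self.n_ref
            ys, xs = torch.meshgrid(
                torch.linspace(-1, 1, n, device=x.device),
                torch.linspace(-1, 1, n, device=x.device),
                indexing="ij",
            )
            ref = torch.stack((xs, ys), dim=-1).expand(B, n, n, 2)

            # Predict bounded offsets from a pooled view and shift the grid.
            pooled = F.adaptive_avg_pool2d(x, n)              # (B, C, n, n)
            offsets = self.offset(pooled).permute(0, 2, 3, 1).tanh()
            pos = (ref + offsets).clamp(-1, 1)                # deformed sampling points

            # Sample keys/values at the deformed positions only.
            kv = F.grid_sample(self.kv(x), pos, align_corners=True)  # (B, 2C, n, n)
            k, v = kv.flatten(2).transpose(1, 2).chunk(2, dim=-1)    # (B, n*n, C) each

            attn = (q @ k.transpose(1, 2)) * self.scale       # (B, H*W, n*n)
            out = attn.softmax(dim=-1) @ v                    # (B, H*W, C)
            return self.proj(out.transpose(1, 2).reshape(B, C, H, W))

A quick shape check: DeformableAttention(64)(torch.randn(2, 64, 32, 32)) returns a (2, 64, 32, 32) tensor, so a block of this form can stand in for a same-shape attention module; because keys and values are reduced to n_ref² sampled points, the attention cost scales with H·W·n_ref² rather than (H·W)².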

     

    Abstract: Accurate segmentation of key parts in chicken carcass is crucial for intelligent cutting systems in the modern poultry processing industry. To address the issues of false detection, missed detection, and inaccurate segmentation of key parts of chicken carcasses in complex industrial scenarios (e.g., adhesion between wings and drumsticks, occlusion, and uneven lighting), this study aimed to develop a lightweight, high-precision, and real-time instance segmentation model suitable for deployment on intelligent chicken carcass cutting equipment. First, an enhanced chicken carcass dataset was constructed, focusing on Sanhuang chickens and white-feathered chickens. It was expanded from 1090 original images to 5450 images, covering three types of scenarios: changes in ambient lighting, carcass occlusion, and compression-induced deformation. Multi-dimensional data augmentation techniques such as geometric transformation, illumination adjustment, and occlusion simulation were adopted to improve the model's robustness. Second, the DEF-YOLO-seg model was developed by improving the YOLOv12n-seg baseline: (1) The C3k2_DAttention module was designed by fusing the C3k2 module with Deformable Attention (DAttention), which replaced the Area-Attention Enhanced Cross-Feature (A2C2f) module in the lower layer of the backbone network to enhance the feature extraction capability for adhered/occluded regions; (2) The Efficient Up-Convolution Block (EUCB) was introduced to replace the Upsample module in the neck network, reducing computational cost while improving feature fusion efficiency; (3) A composite loss function (Focaler-CIoU) combining Focaler-IoU and CIoU was constructed to adapt to the distribution characteristics of easy and difficult samples in complex scenarios. Finally, model training and testing were completed on a hardware platform equipped with an NVIDIA RTX 3090 GPU and an Intel Xeon Platinum 8362 CPU. The DEF-YOLO-seg model achieved a mean Average Precision at an IoU threshold of 0.5 (mAP50) of 95.5% and a mean Average Precision at IoU thresholds from 0.5 to 0.95 (mAP50-95) of 94.1%, which were 1.3 and 2.8 percentage points higher than those of the baseline YOLOv12n-seg, respectively. With a parameter count of 3.3M and a computational complexity of 11 GFLOPs, the model's inference time per image on a local computer was no more than 30 ms. Compared with mainstream models such as YOLOv9c-seg, YOLOv11n-seg, and YOLOv12n-seg, the proposed model maintained lightweight characteristics while achieving superior segmentation accuracy. Furthermore, the parameter sensitivity analysis revealed that the optimal Focaler-CIoU configuration (d=0.22, u=0.73) matched the IoU distribution characteristics of the chicken carcass data, a finding that highlights the importance of task-specific loss function design over generic settings. In practical production line tests, the model's image-level accuracy reached 95%, and the Dice coefficients of the neck, wings, and drumsticks increased from 0.85, 0.83, and 0.78 to 0.93, 0.92, and 0.90, respectively. It effectively addressed the missed detection, false detection, and mis-segmentation of small parts (e.g., neck and shank) under adhesion and occlusion conditions. The DEF-YOLO-seg model achieves a balance among segmentation accuracy, real-time performance, and deployment feasibility, and can be applied to intelligent chicken carcass cutting equipment, providing technical support for the intelligent upgrading of the food processing industry. Future research will focus on developing cutting-path planning technology based on this model and on further optimizing the trade-off between lightweight design and detection accuracy.
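For reference, the Focaler-CIoU term described above can be sketched in a few lines, assuming the published Focaler-IoU formulation (a piecewise-linear re-mapping of IoU over an interval [d, u]) combined with a precomputed CIoU loss. The function name and the expectation that raw IoU and CIoU loss arrive as per-box tensors are illustrative assumptions; the d = 0.22 and u = 0.73 defaults are the values reported in the sensitivity analysis above.

    import torch

    def focaler_ciou_loss(iou: torch.Tensor, ciou_loss: torch.Tensor,
                          d: float = 0.22, u: float = 0.73) -> torch.Tensor:
        """Focaler-CIoU sketch: L = L_CIoU + IoU - IoU_focaler.

        iou:       raw IoU of each predicted/target box pair, shape (N,)
        ciou_loss: standard CIoU loss for the same pairs, shape (N,)
        d, u:      focusing interval; IoUs below d map to 0, above u to 1
        """
        # Piecewise-linear re-mapping of IoU onto [0, 1];
        # clamp() realizes the saturated 0 and 1 branches in one call.
        iou_focaler = ((iou - d) / (u - d)).clamp(min=0.0, max=1.0)
        return ciou_loss + iou - iou_focaler

Relative to plain CIoU, boxes whose IoU falls inside [d, u] receive a stronger regression signal, while very easy and very hard samples are de-emphasized, which is how the loss adapts to the easy/difficult sample distribution mentioned above.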

     
