Journal of Textile Research ›› 2024, Vol. 45 ›› Issue (07): 196-203.doi: 10.13475/j.fzxb.20230403401

• Machinery & Equipment •

Detection method for residual yarn quantity based on improved Yolov5 model

SHI Weimin1, LI Zhou1, LU Weijian1, TU Jiajia1,2, XU Yinzhe1

  1. Key Laboratory of Modern Textile Machinery & Technology of Zhejiang Province, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
    2. College of Automation, Zhejiang Institute of Mechanical and Electrical Engineering, Hangzhou, Zhejiang 310053, China
  • Received: 2023-04-19  Revised: 2024-03-12  Online: 2024-07-15  Published: 2024-07-15

Abstract:

Objective In automated production lines of circular weft knitting machines in knitting workshops, identifying the residual yarn quantity on a spindle is the prerequisite and key to automatic loading and unloading of spindles. The detection result is easily affected by many factors, such as background spindles, spindle type, and yarn crease structure. In order to ensure accurate and real-time information on the residual yarn quantity of spindles on the yarn frame, a machine vision-based online detection technology for spindle residual yarn quantity was studied.

Method The improved Yolov5 model was adopted to detect the spindle, and the cropped end-face image of the spindle was processed by perspective transformation, pixel averaging, contour detection, and other operations to extract the inner and outer circular contours of the spindle. The circle fitting algorithm based on gradient descent designed in this paper was then adopted to fit the inner and outer circles of the spindle and obtain their radii. Finally, the pinhole imaging principle was adopted to convert the pixel difference between the two radii into the actual residual yarn quantity.
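The gradient-descent circle fit and pinhole-model conversion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the least-squares cost, the learning rate, and the camera parameters `distance_mm` and `focal_px` are all assumptions introduced here for clarity.

```python
import math

def fit_circle(points, lr=0.01, iters=2000):
    """Fit a circle (a, b, r) to 2-D contour points by gradient descent
    on the least-squares cost  E = sum_i (dist(p_i, center) - r)^2."""
    n = len(points)
    # Initialize center at the centroid and radius at the mean distance.
    a = sum(x for x, _ in points) / n
    b = sum(y for _, y in points) / n
    r = sum(math.hypot(x - a, y - b) for x, y in points) / n
    for _ in range(iters):
        ga = gb = gr = 0.0
        for x, y in points:
            d = math.hypot(x - a, y - b)
            if d == 0.0:
                continue  # point coincides with center; gradient undefined
            e = d - r
            ga += -e * (x - a) / d  # dE/da (up to factor 2)
            gb += -e * (y - b) / d  # dE/db
            gr += -e                # dE/dr
        # Averaged gradient step.
        a -= lr * 2.0 * ga / n
        b -= lr * 2.0 * gb / n
        r -= lr * 2.0 * gr / n
    return a, b, r

def pixels_to_mm(pixel_diff, distance_mm, focal_px):
    """Pinhole model: real size = pixel size * object distance / focal
    length (focal length expressed in pixels)."""
    return pixel_diff * distance_mm / focal_px
```

The residual yarn quantity then follows by fitting the inner and outer contours separately and converting the difference of the two fitted radii with `pixels_to_mm`.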

Results In terms of model recognition, a performance comparison of the three models showed that improving the Yolov5 backbone network alone raised the model accuracy by 0.24%, and incorporating the Shuffle-Attention mechanism raised it by a further 0.27%. As for residual yarn quantity detection, experiments demonstrated that the detection error of the proposed algorithm was less than 3 mm, outperforming the Hough circle algorithm. With regard to the dataset, in order to cater to the practical production needs of factories, a dataset comprising spindles from the actual production process of factories was created.

Conclusion A method combining the improved Yolov5 model with conventional image processing was proposed for spindle residual yarn quantity detection in the automated production lines of circular weft knitting machines. First, the spindle image was segmented using the enhanced Yolov5 model. Then, the segmented spindle image was processed by perspective transformation and end-face pixel averaging to effectively extract the inner and outer circular contours of the spindle. The circle fitting algorithm designed in this paper was utilized to fit the inner and outer circles of the spindle and complete the calculation of the residual yarn quantity of the spindle. The improved Yolov5 residual yarn quantity detection algorithm utilized an enhanced network structure and dataset, and can therefore be effectively applied to online detection of the residual yarn quantity of spindles. It also provides ideas for future applications on embedded devices.

Key words: improved Yolov5 model, perspective transformation, mean shift, gradient descent method, spindle residual yarn quantity, circular knitting machine

CLC Number: TP391.4

Fig.1

Yarn bobbin image acquisition platform

Fig.2

Flow of residual yarn quantity detection algorithm

Fig.3

Improved Yolov5 network framework

Fig.4

D-MB convolution module

Fig.5

Network structure of Shuffle-Attention mechanism

Fig.6

Detection results

Fig.7

Perspective transformation process of spindle. (a) Camera model; (b) Original yarn bobbin image; (c) First perspective transformation; (d) Second perspective transformation

Fig.8

Measurement principle diagram of residual yarn quantity

Tab.1

Performance comparison of three models

Model | Parameters/10^6 | mAP/% | FPS/(frame·s^-1)
Yolov5 | 7.22 | 98.99 | 18
Yolov5+MB+FouseMB | 2.11 | 99.23 | 24
Yolov5+MB+FouseMB+Shuffle-Attention | 3.255 | 99.50 | 20

Fig.9

Influence of different iteration steps on circle fitting accuracy and speed

Fig.10

Inner and outer circle fitting of four spindles with different residual yarn quantities

Tab.2

Comparison of proposed algorithm and Hough circle algorithm in detecting residual yarn quantity

Image No. | Circle fitting algorithm | Residual yarn in pixels/pixel | Measured residual yarn/mm | Actual residual yarn/mm | Error/mm | Relative error/%
4(a) | Gradient descent | 144.51 | 40.17 | 40.5 | 0.326 | 0.81
4(a) | Hough circle | 142.10 | 39.39 | 40.5 | 0.996 | 2.74
4(b) | Gradient descent | 88.06 | 24.48 | 25.0 | 0.519 | 2.08
4(b) | Hough circle | 86.50 | 24.05 | 25.0 | 0.953 | 3.81
4(c) | Gradient descent | 57.91 | 16.10 | 17.5 | 1.400 | 8.00
4(c) | Hough circle | 56.91 | 15.82 | 17.5 | 1.678 | 9.59
4(d) | Gradient descent | 51.80 | 5.40 | 3.0 | 2.400 | 80.01
4(d) | Hough circle | 52.00 | 5.46 | 3.0 | 2.756 | 81.87

Fig.11

Total time spent by each algorithm
