Journal of Textile Research ›› 2024, Vol. 45 ›› Issue (05): 193-201. DOI: 10.13475/j.fzxb.20230102201

• Machinery & Equipment •

Design of defect detection system for glass fiber plied yarn based on machine vision

YANG Jinpeng, JING Junfeng, LI Jiguo, WANG Yuanbo

  1. School of Electronics and Information, Xi'an Polytechnic University, Xi'an, Shaanxi 710048, China
  • Received: 2023-01-11  Revised: 2023-09-03  Online: 2024-05-15  Published: 2024-05-31

Abstract:

Objective Glass fiber plied yarn is a raw material for electronic fabric production, and tension abnormalities and yarn hairiness must be detected accurately and efficiently during its production. Manual inspection suffers from low efficiency, a high missed-detection rate and a long lag time. Therefore, a machine vision-based method for detecting plied yarn defects is proposed to meet the need for accurate, real-time defect detection during plied yarn production.

Method The conventional algorithm pre-processes the image with threshold segmentation, morphological opening and contour extraction, and makes a preliminary judgement by checking whether the widths and heights of the extracted contours fall within the normal range. The deep learning algorithm uses YOLOv5, optimized and accelerated with the TensorRT framework, to make a secondary judgement on suspect images and to locate the defects. The proposed system uses a Jetson Nano B01 as the hardware platform, an industrial camera to capture images of the plied yarn in real time, and a relay to control the winder and alarm light circuit.
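As a rough illustration of the conventional pre-processing and preliminary judgement described above, a minimal OpenCV sketch is given below; the threshold method, kernel size and the "normal" contour width/height ranges are assumptions for illustration, not parameters published in the paper.

```python
import cv2

# Illustrative "normal" ranges (pixels); the paper does not publish the exact limits.
NORMAL_WIDTH = (20, 60)
NORMAL_HEIGHT = (250, 288)

def preliminary_check(gray):
    """Return True if the plied yarn image looks suspicious and needs the deep learning stage."""
    # 1. Threshold segmentation (Otsu threshold assumed here)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # 2. Morphological opening to remove small noise
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)
    # 3. Contour extraction
    contours, _ = cv2.findContours(opened, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # 4. Preliminary judgement: flag the image if any contour falls outside the normal range
    for c in contours:
        _, _, w, h = cv2.boundingRect(c)
        if not (NORMAL_WIDTH[0] <= w <= NORMAL_WIDTH[1]
                and NORMAL_HEIGHT[0] <= h <= NORMAL_HEIGHT[1]):
            return True   # hand the image over to YOLOv5 for secondary judgement
    return False          # judged normal, no further processing needed
```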

Results In this research, the image size of the training data set was 1 280 pixel×288 pixel, and the defects were divided into two categories, uneven tension and hairiness, according to the actual requirements. The proportions of samples in the training, validation and test sets were 70%, 15% and 15%, respectively. Training started from the YOLOv5s pre-trained weights with a batch size of 32 samples, the image size adjusted to 640 pixel×640 pixel, the number of data-loading processes set to 2, and a total of 300 training epochs. Training and testing were conducted on a deep learning server configured with an Intel Core(TM) i9-10900X CPU at 3.70 GHz, a GeForce RTX 3090 graphics card with 24 GB of video memory, and 128 GB of RAM. The test results showed that pre-processing with the conventional algorithm had the advantages of higher speed and low computational overhead, detecting much faster than using the deep learning network alone, significantly reducing the amount of network computation and increasing the detection efficiency of the system. Using the deep learning algorithm for the secondary judgement had the advantages of high accuracy and defect localization, effectively avoiding the false detections caused by using the conventional algorithm alone and improving detection accuracy and production efficiency. Considering the extremely high production speed of plied yarns and the demand for product quality, detection accuracy, missed-detection rate and detection processing time were used as indicators, and plied yarn samples that were normal and defect-free as well as samples containing uneven tension and hairiness were selected for testing. The experimental results showed that the system achieved 99.07% detection accuracy with a 1.4% missed-detection rate. Camera acquisition took an average of 0.007 s per frame, algorithm detection an average of 0.005 6 s per frame, and subsequent processing an average of 0.004 9 s per frame; the acquisition and detection processing speed was therefore above 112 frames per second, which meets actual production inspection needs, effectively improves inspection efficiency and facilitates the automatic detection of plied yarn defects.
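For reference, the training configuration reported above (also listed in Tab. 2) could be reproduced roughly as follows, assuming a local clone of the Ultralytics YOLOv5 repository; "plied_yarn.yaml" is a hypothetical dataset description file, not one released by the authors.

```python
# Run from inside a local clone of the Ultralytics YOLOv5 repository (assumed).
import train  # yolov5/train.py

train.run(
    weights="yolov5s.pt",    # YOLOv5s pre-trained weights
    data="plied_yarn.yaml",  # hypothetical dataset config with 2 classes: uneven tension, hairiness
    imgsz=640,               # images resized to 640 pixel x 640 pixel
    batch_size=32,           # batch size of 32 samples
    epochs=300,              # 300 training epochs
    workers=2,               # 2 data-loading processes
)
```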

Conclusion The system is built on the Jetson Nano B01 as the hardware processing platform and uses a combination of conventional and deep learning algorithms for detection. It exploits the fast processing speed of the conventional algorithm for image pre-processing and preliminary judgement, and the high accuracy of the deep learning algorithm for a secondary judgement whenever the conventional algorithm considers that a defect may be present. This overcomes the conventional algorithm's poor defect localization and the deep learning algorithm's slow detection speed, while ensuring both detection speed and accuracy. The Tkinter human-machine interface and the logging module provide the functions needed at industrial sites. The system meets the need for real-time defect detection during the production of plied yarns, improving production efficiency and product quality.
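A minimal sketch of the two-stage control flow summarized above is given below; the GPIO pin number, the TensorRT-accelerated YOLOv5 wrapper detect_yolo and the log file name are hypothetical placeholders, since the paper does not publish its source code.

```python
import logging
import cv2
import Jetson.GPIO as GPIO   # relay and alarm light control on the Jetson Nano B01

RELAY_PIN = 12               # hypothetical board pin wired to the relay
logging.basicConfig(filename="detection.log", level=logging.INFO)
GPIO.setmode(GPIO.BOARD)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)

def run_loop(cap, detect_yolo):
    """cap: cv2.VideoCapture; detect_yolo: assumed TensorRT-accelerated YOLOv5 inference wrapper."""
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if not preliminary_check(gray):        # conventional stage (see the sketch above)
            continue                           # fast path: image judged normal
        boxes = detect_yolo(frame)             # secondary judgement with defect localization
        if boxes:
            GPIO.output(RELAY_PIN, GPIO.HIGH)  # stop the winder and light the alarm via the relay
            logging.info("Defect confirmed: %s", boxes)
```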

Key words: machine vision, glass fiber plied yarn, yarn defect detection system, development board, automated detection

CLC Number: 

  • TP391.4

Fig.1

Schematic diagram of detection system architecture

Fig.2

Site installation diagram

Fig.3

Original image of plied yarn. (a) Normal plied yarn; (b) Uneven tension; (c) Yarn hairiness

Fig.4

Schematic diagram of software system architecture

Tab.1

Key parameters of camera

Parameter  Meaning  Allowed range
Exposure time/μs  Interval from shutter opening to closing  9~9 999 640
Camera gain/dB  Amplification factor of the analog signal  0~16.3
Frame rate/(frame·s-1)  Number of images captured per unit time  0.1~300.5
Image width/pixel  Width of captured image  32~1 280
Image height/pixel  Height of captured image  32~1 024
Camera heartbeat/ms  Interval for checking whether camera is still connected  500~600 000
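As a rough illustration only: if the industrial camera were accessed through OpenCV's VideoCapture interface (the actual system presumably uses the camera vendor's SDK, which is not named here), the key parameters of Tab. 1 would map to property settings along these lines; the specific values are examples within the allowed ranges.

```python
import cv2

cap = cv2.VideoCapture(0)                # camera index 0 is an assumption
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 1280)  # image width, allowed 32-1 280 pixels
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 288)  # image height, allowed 32-1 024 pixels
cap.set(cv2.CAP_PROP_FPS, 150)           # frame rate, allowed 0.1-300.5 frame/s (example value)
cap.set(cv2.CAP_PROP_EXPOSURE, 5000)     # exposure time (unit depends on backend; us assumed here)
cap.set(cv2.CAP_PROP_GAIN, 6.0)          # analog gain, allowed 0-16.3 dB (example value)
ok, frame = cap.read()                   # grab one frame for the detection pipeline
```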

Fig.5

System detection interface

Tab.2

Training parameters of YOLOv5

Parameter  Value
Training weights  YOLOv5s
Batch size/sample  32
Image size  640×640
Number of processes  2
Training epochs  300

Fig.6

Loss curves

Fig.7

P-R curve of model

Fig.8

Flow chart of plied yarn detection algorithm

Fig.9

Conventional algorithm processing diagram. (a) Original image of plied yarn; (b) Threshold segmentation; (c) Opening operation; (d) Contour extraction

Fig.10

Deep learning algorithm detection diagram

Tab.3

Test statistical results

Yarn type  Misclassified/piece  Correct/piece  Total/piece  Accuracy/%  Missed rate of defective yarn/%  Acquisition rate/(frame·s-1)  Detection rate/(frame·s-1)  Processing rate/(frame·s-1)
Normal plied yarn  0  500  500  99.07  1.4  142  178  204
Uneven tension  8  492  500
Hairiness  6  494  500