Journal of Textile Research ›› 2024, Vol. 45 ›› Issue (07): 173-180. DOI: 10.13475/j.fzxb.20230708401

• Apparel Engineering •

Machine vision-based defect detection method for sewing stitch traces

CHEN Yufan1, ZHENG Xiaohu2,3,4, XU Xiuliang5, LIU Bing6

  1. College of Information Science and Technology, Donghua University, Shanghai 201620, China
    2. Institute of Artificial Intelligence, Donghua University, Shanghai 201620, China
    3. Engineering Research Center of Artificial Intelligence for Textile Industry, Ministry of Education, Shanghai 201620, China
    4. Shanghai Industrial Big Data and Intelligent Systems Engineering Technology Center, Shanghai 201620, China
    5. HIKARI (Shanghai) Precise Machinery Scientific & Technology Co., Ltd., Shanghai 201599, China
    6. Hangzhou Zhongfu Technology & Innovation Research Institute Co., Ltd., Hangzhou, Zhejiang 311199, China
  • Received: 2023-07-31  Revised: 2024-03-26  Online: 2024-07-15  Published: 2024-07-15
  • Contact: ZHENG Xiaohu  E-mail: xhzheng@dhu.edu.cn

Abstract:

Objective Conventional manual quality inspection of sewing threads is slow, inefficient, and costly. To address these problems, this study proposes a machine vision-based method for detecting sewing thread defects in seams, with the goal of identifying common defects such as cast thread, jumper thread, and broken thread quickly, accurately, and automatically, and thereby improving product quality and production efficiency in the textile and garment industry.

Method A two-step approach was adopted for defect detection. First, a low-cost array camera captured real-time images of the sewing seam, and the DeblurGAN-v2 method was employed to remove motion blur and improve image clarity. Second, the student-teacher feature pyramid matching method was applied for anomaly detection: knowledge from a pre-trained ResNet-34 teacher network was transferred to a student network with the same architecture, so that the student learned the distribution of normal images. By scoring the differences between the feature pyramids generated by the two networks, the detection system decided whether an image contained anomalies and marked the abnormal areas with a heat map.
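The scoring step can be illustrated with a short sketch. The following is a minimal PyTorch/torchvision example of comparing teacher and student feature pyramids at inference time; the chosen pyramid levels, the cosine-distance score, and the image-level decision rule are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch: student-teacher feature pyramid matching for anomaly scoring.
# Assumes PyTorch/torchvision; layer choices and score aggregation are illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import resnet34
from torchvision.models.feature_extraction import create_feature_extractor

LAYERS = ["layer1", "layer2", "layer3"]  # feature pyramid levels (assumption)

# Teacher: pre-trained ResNet-34; student: same architecture, trained on normal seams only
teacher = create_feature_extractor(resnet34(weights="IMAGENET1K_V1"), LAYERS).eval()
student = create_feature_extractor(resnet34(weights=None), LAYERS).eval()

@torch.no_grad()
def anomaly_map(image: torch.Tensor, size: int = 256) -> torch.Tensor:
    """Return a per-pixel anomaly heat map for a (1, 3, H, W) seam image."""
    t_feats, s_feats = teacher(image), student(image)
    amap = torch.zeros(1, 1, size, size)
    for name in LAYERS:
        # Cosine distance between normalized teacher/student features at each position
        t = F.normalize(t_feats[name], dim=1)
        s = F.normalize(s_feats[name], dim=1)
        d = 1.0 - (t * s).sum(dim=1, keepdim=True)            # (1, 1, h, w)
        amap += F.interpolate(d, size=(size, size), mode="bilinear", align_corners=False)
    return amap / len(LAYERS)

def is_defective(image: torch.Tensor, threshold: float) -> bool:
    # Image-level score: maximum of the heat map, compared with a fabric-specific threshold
    return anomaly_map(image).max().item() > threshold
```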

Results Defects on flat stitch fabric and overstitch fabric were tested, and the performance of the proposed method was evaluated in terms of recall and accuracy. The results show that the method can effectively detect various sewing thread defects, with high recall and accuracy for every defect type. Examples of detection results and anomaly scores for the different defect types are also provided.
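For reference, the per-class rates reported in Tab. 4 and Tab. 7 follow from simple counts; the sketch below shows the assumed definitions (accuracy as correct detections over total images, miss rate as missed detections over total images, recall as detected defects over all defect instances). Function and variable names are illustrative.

```python
# Simple sketch of the per-class rates used in the tables (assumed definitions).
def detection_rates(total: int, correct: int, missed: int) -> dict:
    return {
        "accuracy_pct": 100.0 * correct / total,
        "miss_rate_pct": 100.0 * missed / total,
    }

def recall_pct(detected_defects: int, missed_defects: int) -> float:
    return 100.0 * detected_defects / (detected_defects + missed_defects)

# Example: broken-thread stitches in Tab. 7 (100 images, 100 correct, 0 missed)
print(detection_rates(100, 100, 0))  # {'accuracy_pct': 100.0, 'miss_rate_pct': 0.0}
```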

Conclusion The feature pyramid matching technique is applied to stitch trace detection. With hard sample mining added, the average detection accuracy rises above 95%, and the detection time for a single image is less than 0.04 s. To address image motion blur caused by jitter and fast movement, DeblurGAN-v2 is adopted as the deblurring framework and blueprint separable convolution is introduced into its backbone network, keeping the processing time for a single image below 0.06 s. The model offers strong interference resistance and high processing speed, and meets the requirements of stitch trace recognition.
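As a rough illustration of the backbone change described above, the block below sketches a blueprint separable convolution of the BSConv-S type (cf. Fig. 1 and reference [12]) in PyTorch. The bottleneck ratio and the normalization/activation choices are assumptions for illustration; this is not the exact module used in the paper.

```python
# Illustrative BSConv-S block: low-rank pointwise factorization (1x1 -> 1x1)
# followed by a depthwise kxk convolution. Hyperparameters are assumptions.
import torch.nn as nn

class BSConvS(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, kernel_size: int = 3,
                 stride: int = 1, ratio: float = 0.25):
        super().__init__()
        mid_ch = max(1, int(out_ch * ratio))                   # bottleneck width (assumption)
        self.pw1 = nn.Conv2d(in_ch, mid_ch, 1, bias=False)     # compress channels
        self.pw2 = nn.Conv2d(mid_ch, out_ch, 1, bias=False)    # expand to blueprint channels
        self.dw = nn.Conv2d(out_ch, out_ch, kernel_size, stride=stride,
                            padding=kernel_size // 2, groups=out_ch, bias=False)  # depthwise
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.dw(self.pw2(self.pw1(x)))))
```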

Key words: sewing thread defect detection, machine vision, DeblurGAN-v2, student-teacher feature pyramid matching, anomaly detection

CLC Number: TP18

Fig.1

Structure of BSConv-S

Tab.1

Performance comparison of deblurring algorithms

Algorithm  PSNR/dB  SSIM  Time/s
DeblurGAN-v2 23.85 0.65 0.05
DeblurGAN 25.23 0.72 0.86
DeblurGAN-bsv2 27.52 0.84 0.05
MIMO-UNet 20.36 0.41 0.02

Fig.2

Image deblurring effect comparison. (a) Blurred image; (b) Image processed by DeblurGAN-bsv2 algorithm; (c) Image processed by DeblurGAN-v2 algorithm; (d) Image processed by DeblurGAN algorithm; (e) Image processed by MIMO-UNet algorithm

Fig.3

Image resolution map. (a) Blurred image; (b) Deblurred image

Fig.4

Diagram of teacher and student network architecture

Tab.2

Performance comparison of different backbone models

ResNet model  Dataset  Accuracy/%  Processing speed/(frame·s⁻¹)
ResNet-18  Test dataset  92.5  28
ResNet-34  Test dataset  94.6  22
ResNet-50  Test dataset  95.3  15

Tab.3

ResNet-34 network structure

Convolutional layer  ResNet-34 structure  Output size
Conv1_x (layer 1)  7×7, 64, stride 2  112×112
Conv2_x (layer 2)  3×3 max pooling, stride 2; [3×3, 64; 3×3, 64]×3  56×56
Conv3_x (layer 3)  [3×3, 128; 3×3, 128]×4  28×28
Conv4_x (layer 4)  [3×3, 256; 3×3, 256]×6  14×14
Conv5_x (layer 5)  [3×3, 512; 3×3, 512]×3  7×7
—  Average pooling, 1 000-dimensional fully connected layer, softmax  1×1

Tab.4

Detection results for each defect type

Defect type  Recall/% (before opt.)  Recall/% (after opt.)  Accuracy/% (before opt.)  Accuracy/% (after opt.)
Cast thread  98.0  99.0  98.0  99.0
Jumper thread  92.0  95.0  90.0  93.0
Broken thread  98.0  100.0  100.0  100.0
Loose main needle thread  92.0  97.0  96.0  98.0
Broken chain needle thread  97.0  98.0  98.0  100.0
Tight upper bend thread  98.0  98.0  88.0  92.0

Fig.5

Detection system

Tab.5

Camera performance parameters

Resolution/pixel  Frame rate/(frame·s⁻¹)  Data interface  Exposure time/ms  Operating temperature/℃
1 280×960  38  USB2.0  0.032 5~62  0~50

Fig.6

Stitch test results for fabric 1. (a) Normal trace; (b) Cast trace; (c) Jumper trace; (d) Broken trace

Fig.7

Stitch test results for fabric 2. (a) Normal trace; (b) Loose main needle thread; (c) Broken chain needle thread; (d) Tight upper bend thread

Tab.6

Stitch detection scores

Fabric  Stitch type  Result  Anomaly score  Threshold
Fabric 1  Normal stitch  ok  0.31  0.94
Fabric 1  Cast thread stitch  nok  1.00  0.94
Fabric 1  Jumper thread stitch  nok  0.97  0.94
Fabric 1  Broken thread stitch  nok  0.99  0.94
Fabric 2  Normal stitch  ok  0.44  0.64
Fabric 2  Loose main needle thread  nok  0.76  0.64
Fabric 2  Broken chain needle thread  nok  0.78  0.64
Fabric 2  Tight upper bend thread  nok  0.81  0.64

Tab.7

Detection results on deblurred images

Stitch type  Total images  Correctly detected  Missed  Accuracy/%  Miss rate/%
Jumper thread stitch  100  91  5  91.00  5.00
Broken thread stitch  100  100  0  100.00  0.00
Cast thread stitch  100  98  1  98.00  1.00
Loose main needle thread  100  98  3  98.00  3.00
Tight upper bend thread  100  96  2  96.00  2.00
Broken chain needle thread  100  100  1  100.00  1.00

Fig.8

Results of ablation experiment. (a) Clear image; (b) Blurred image; (c) Blurred image detection result; (d) Deblurred image; (e) Deblurred image detection result

[1] WU Liubo, LI Xinrong, DU Jinli. Research progress of motion trajectory planning of sewing robot based on contour extraction[J]. Journal of Textile Research, 2021, 42(4): 191-200.
[2] LU Hao, CHEN Yuan. A method for surface defect detection of carbon fiber prepreg based on machine vision[J]. Journal of Textile Research, 2020, 41(4): 51-57.
[3] FANG B, LONG X, SUN F, et al. Tactile-based fabric defect detection using convolutional neural network with attention mechanism[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-9.
[4] ABATI D, PORRELLO A, CALDERARA S, et al. Latent space autoregression for novelty detection[C]// 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Long Beach: IEEE, 2019: 481-490.
[5] WANG G, HAN S, DING E, et al. Student-teacher feature pyramid matching for unsupervised anomaly detection[J]. arXiv preprint, 2021. DOI: 10.48550/arXiv.2103.04257.
[6] BERGMANN P, FAUSER M, SATTLEGGER D, et al. Uninformed students: student-teacher anomaly detection with discriminative latent embeddings[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Seattle: IEEE, 2020: 4183-4192.
[7] KIM J H, KIM N, PARK Y W, et al. Object detection and classification based on YOLOv5 with improved maritime dataset[J]. Journal of Marine Science and Engineering, 2022, 10(3): 377.
[8] REN S, HE K, GIRSHICK R, et al. Faster R-CNN: towards real-time object detection with region proposal networks[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(6): 1137-1149. DOI: 10.1109/TPAMI.2016.2577031.
[9] ROZUMNYI D, OSWALD M R, FERRARI V, et al. DeFMO: deblurring and shape recovery of fast moving objects[C]// 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Nashville: IEEE, 2021: 3456-3465.
[10] KUPYN O, MARTYNIUK T, WU J, et al. DeblurGAN-v2: deblurring (orders-of-magnitude) faster and better[C]// 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul: IEEE, 2019: 8878-8887.
[11] HOWARD A, SANDLER M, CHU G, et al. Searching for MobileNetV3[C]// 2019 IEEE/CVF International Conference on Computer Vision (ICCV). Seoul: IEEE, 2019: 1314-1324.
[12] HAASE D, AMTHOR M. Rethinking depthwise separable convolutions: how intra-kernel correlations lead to improved MobileNets[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Seattle: IEEE, 2020: 14600-14609.
[13] DEFARD T, SETKOV A, LOESCH A, et al. PaDiM: a patch distribution modeling framework for anomaly detection and localization[C]// International Conference on Pattern Recognition. Cham: Springer International Publishing, 2021: 475-489.
[14] MOU Xingang, CAI Yichao, ZHOU Xiao, et al. Online yarn cone defects detection system based on machine vision[J]. Journal of Textile Research, 2018, 39(1): 139-145.