Journal of Textile Research, 2023, Vol. 44, Issue (03): 168-175. doi: 10.13475/j.fzxb.20220102308

• Apparel Engineering •

Human contour and parameter extraction from complex background

GU Bingfei1,2,3, ZHANG Jian1, XU Kaiyi1, ZHAO Songling1, YE Fan1, HOU Jue1,2,3

  1. School of Fashion Design & Engineering, Zhejiang Sci-Tech University, Hangzhou, Zhejiang 310018, China
    2. Clothing Engineering Research Center of Zhejiang Province, Hangzhou, Zhejiang 310018, China
    3. Key Laboratory of Silk Culture Heritage and Products Design Digital Technology, Ministry of Culture and Tourism, Hangzhou, Zhejiang 310018, China
  • Received:2022-01-13 Revised:2022-10-27 Online:2023-03-15 Published:2023-04-14

Abstract:

Objective For the collection of human body images in non-contact two-dimensional measurement systems, most early studies reduced the difficulty of human contour extraction by restricting the shooting background. In order to acquire human contour and parameter information quickly and easily from photographs with complex backgrounds, this study proposes a holistically-nested edge detection (HED) deep learning model to extract human contours and to analyze the extracted body parameters.

Method A training set of 43 200 images was established from 450 human photographs with different backgrounds by creating human contour label maps, applying data augmentation, and pre-processing. A deep learning network model was trained on this set, and the optimal edge detection model was obtained after repeated training and tuning to achieve automatic human contour extraction. In addition, to verify that the extracted human silhouette is consistent with the real human form, 13 human body parameters were selected for error analysis.
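The augmentation step described above (rotation, mirror flip, salt-and-pepper noise, Gaussian noise; see Fig.2) can be sketched in pure Python on a grayscale pixel grid. This is an illustrative sketch, not the authors' code: the function names are invented here, and a real pipeline would use an image library such as OpenCV or Pillow, especially for the 30°/150° rotations.

```python
import random

def mirror_flip(img):
    # Horizontal mirror: reverse each pixel row.
    return [row[::-1] for row in img]

def salt_pepper(img, prob, rng):
    # With probability prob, replace a pixel by pepper (0) or salt (255).
    out = [row[:] for row in img]
    for y in range(len(out)):
        for x in range(len(out[0])):
            r = rng.random()
            if r < prob / 2:
                out[y][x] = 0        # pepper
            elif r < prob:
                out[y][x] = 255      # salt
    return out

def gaussian_noise(img, sigma, rng):
    # Add zero-mean Gaussian noise, clamped to the 0-255 range.
    return [[min(255, max(0, v + rng.gauss(0, sigma))) for v in row]
            for row in img]
```

Each augmented copy counts as a new training sample, which is how 450 originals expand into the 43 200-image training set of Tab.1.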

Results After training and tuning, the optimal deep learning model for human contour extraction was obtained and applied to the test-set images (Fig.4). The optimized HED network model extracted clear human contours free of cluttered background information, but the contour edge lines were thick. The Zhang-Suen thinning algorithm was therefore adopted to refine the contours (Fig.4(d)). After edge refinement, the human contour edges are detailed, clear, and continuous without breaks, and some non-human details in the original contour map are removed, leaving a cleaner background. In addition, to verify the accuracy of the extracted contours, 40 subjects underwent both 3-D anthropometric measurement and 2-D photography. Human contours and feature-point locations were extracted from the photographs, and 13 measurements were derived and compared with the corresponding manual measurements on the 3-D point clouds (Tab.3). The errors of the 3 angle parameters range from 0.125 3° to 1.862 2°, and those of the 10 ratio parameters from 0.000 2 to 0.081 7. The data indicate no significant difference between the contour measurements and the 3-D reference values, and the high consistency between them further verifies the feasibility and accuracy of extracting human contours from 2-D photographs.
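The Zhang-Suen algorithm used for edge refinement deletes boundary pixels in two alternating sub-iterations until the contour is one pixel wide, preserving connectivity. A self-contained sketch on a binary grid (an illustration of the published algorithm, not the authors' implementation):

```python
def zhang_suen_thin(img):
    # img: list of lists of 0/1 pixels; returns a thinned copy.
    # Zhang & Suen, "A fast parallel algorithm for thinning digital
    # patterns", Communications of the ACM, 1984.
    img = [row[:] for row in img]
    h, w = len(img), len(img[0])

    def neighbors(y, x):
        # P2..P9, clockwise starting from the north neighbor.
        return [img[y-1][x], img[y-1][x+1], img[y][x+1], img[y+1][x+1],
                img[y+1][x], img[y+1][x-1], img[y][x-1], img[y-1][x-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for y in range(1, h - 1):
                for x in range(1, w - 1):
                    if img[y][x] != 1:
                        continue
                    n = neighbors(y, x)
                    b = sum(n)  # number of foreground neighbors
                    # a = number of 0->1 transitions in P2,P3,...,P9,P2
                    a = sum(1 for i in range(8)
                            if n[i] == 0 and n[(i + 1) % 8] == 1)
                    p2, p4, p6, p8 = n[0], n[2], n[4], n[6]
                    if step == 0:
                        cond = p2 * p4 * p6 == 0 and p4 * p6 * p8 == 0
                    else:
                        cond = p2 * p4 * p8 == 0 and p2 * p6 * p8 == 0
                    if 2 <= b <= 6 and a == 1 and cond:
                        to_delete.append((y, x))
            for y, x in to_delete:   # delete marked pixels in parallel
                img[y][x] = 0
                changed = True
    return img
```

Because the algorithm only ever deletes pixels, the refined contour is always a subset of the HED output, which is why refinement also removes isolated non-human edge fragments.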

Conclusion This paper proposed a deep learning-based method for human contour extraction from complex backgrounds, using an optimized HED network model as the main framework to train on the human photograph dataset and achieve contour extraction and edge refinement of human photographs with complex backgrounds. Meanwhile, to verify the authenticity of the photo-extracted contours, 13 ratio and angle parameters that reflect human body shape were selected, and error analysis was performed between the contour-extracted values and the 3-D point cloud measurements of 40 subjects. The results show that the improved HED network model accurately extracts clear and continuous human contours from complex backgrounds with a cleaner background, and that there is no significant difference between the contour-based parameter values and the 3-D point cloud measurements. This proves the feasibility and accuracy of the proposed method, and the results can provide technical support for research on non-contact 2-D measurement technology.

Key words: complex background, deep learning, human contour extraction, holistically-nested edge detection network, two-dimensional photo

CLC Number: TS941.17

Fig.1  Original images and human contour label maps. (a) Sample 1; (b) Sample 2

Tab.1  Network training sample number configuration

Data type        Original images    Augmented images    Total samples
Training set     400                42 800              43 200
Validation set   30                 210                 240
Test set         20                 140                 160

Fig.2  Data enhancement. (a) Rotation enhancement (rotated 30° clockwise); (b) Mirror flip enhancement; (c) Salt-and-pepper noise; (d) Gaussian noise (rotated 150° clockwise)

Fig.3  HED (VGG19) network model structure
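HED attaches a side output to each convolutional stage of the VGG backbone and fuses the upsampled side-output edge maps with learned weights into the final edge probability map. The fusion step can be sketched as follows; this is a simplified illustration with hypothetical weights, whereas a real HED implements the fusion as a learned 1×1 convolution over the concatenated side outputs:

```python
def fuse_side_outputs(side_maps, weights):
    # side_maps: list of equal-size 2-D edge-probability maps, one per
    # backbone stage, already upsampled to the input resolution.
    # weights: per-stage fusion weights (learned in the real network).
    h, w = len(side_maps[0]), len(side_maps[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for m, wt in zip(side_maps, weights):
        for y in range(h):
            for x in range(w):
                fused[y][x] += wt * m[y][x]
    return fused
```

The deep supervision on every side output is what lets the network combine fine local edges from early stages with the coarse object-level contour from late stages.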

Fig.4  Comparison of training results before and after network structure modification. (a) Original image; (b) HED (VGG16); (c) HED (VGG19); (d) Refined results

Fig.5  Schematic diagram of measurement of morphological parameters

Tab.2  Specific definitions of morphological parameters

1. Height (H): vertical distance from the top of the head to the ground
2. Neck height (HN): vertical distance from the neck cross-section to the ground
3. Shoulder height (HS): vertical distance from the shoulder cross-section to the ground
4. Underarm height (HA): vertical distance from the underarm cross-section to the ground
5. Hip height (HH): vertical distance from the hip cross-section to the ground
6. Neck width (WN): horizontal distance between the left neck point (PLN) and right neck point (PRN)
7. Neck thickness (TN): horizontal distance between the front neck point (PFN) and back neck point (PBN)
8. Shoulder width (WS): horizontal distance between the left shoulder point (PLS) and right shoulder point (PRS)
9. Shoulder thickness (TS): thickness of the bounding rectangle of the shoulder cross-section (horizontal distance between PFS and PBS)
10. Underarm width (WA): horizontal distance between the left underarm point (PLA) and right underarm point (PRA)
11. Underarm thickness (TA): thickness of the bounding rectangle of the underarm cross-section (horizontal distance between PFA and PBA)
12. Hip width (WH): horizontal distance between the left hip point (PLH) and right hip point (PRH)
13. Hip thickness (TH): thickness of the bounding rectangle of the hip cross-section (horizontal distance between PFH and PBH)
14. Neck-to-body height ratio (RHN): HN/H
15. Shoulder-to-body height ratio (RHS): HS/H
16. Underarm-to-body height ratio (RHA): HA/H
17. Hip-to-body height ratio (RHH): HH/H
18. Neck-to-shoulder width ratio (RWNS): WN/WS
19. Neck-to-underarm width ratio (RWNA): WN/WA
20. Neck-to-hip width ratio (RWNH): WN/WH
21. Neck-to-shoulder thickness ratio (RTNS): TN/TS
22. Neck-to-underarm thickness ratio (RTNA): TN/TA
23. Neck-to-hip thickness ratio (RTNH): TN/TH
24. Shoulder slope angle (AST): angle between the horizontal and the line through the right neck point (PRN) and right shoulder point (PRS)
25. Back entry angle (ADE): angle between the vertical and the line through the back prominence point (PB) and back neck point (PBN)
26. Hip prominence angle (AHB): angle between the vertical and the line through the hip prominence point (PBH) and back waist point (PBW)
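Once the feature points of Tab.2 are located on the thinned contour, the ratio and angle parameters reduce to simple arithmetic. A hedged sketch (the coordinate convention and function names are illustrative, not taken from the paper):

```python
import math

def body_ratio(numerator, denominator):
    # Any of the ratio parameters in Tab.2, e.g. R_HN = H_N / H
    # or R_WNS = W_N / W_S.
    return numerator / denominator

def slope_angle_deg(p_a, p_b):
    # Angle between the horizontal and the line through points
    # p_a and p_b (x, y pixel coordinates), e.g. A_ST from the
    # right neck point P_RN and right shoulder point P_RS.
    dx = abs(p_b[0] - p_a[0])
    dy = abs(p_b[1] - p_a[1])
    return math.degrees(math.atan2(dy, dx))
```

Using ratios and angles rather than absolute pixel distances makes the parameters independent of camera distance and image resolution, which is what allows direct comparison against the 3-D point cloud measurements.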

Fig.6  Human body part curve fitting. (a) Shoulder local curve extraction; (b) Polynomial fitting curve
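The local shoulder curve in Fig.6 is smoothed with a polynomial fit before feature points are read off it. A minimal least-squares quadratic fit via the normal equations, in pure Python for self-containment (illustrative only; the paper does not publish its fitting code, and a production pipeline would typically call numpy.polyfit):

```python
def polyfit2(xs, ys):
    # Least-squares fit of y = a*x^2 + b*x + c; returns [a, b, c].
    Sx = [sum(x ** k for x in xs) for k in range(5)]          # sums of x^k
    Sy = [sum(y * x ** k for x, y in zip(xs, ys)) for k in range(3)]
    # 3x3 normal equations for the coefficient vector [a, b, c].
    A = [[Sx[4], Sx[3], Sx[2]],
         [Sx[3], Sx[2], Sx[1]],
         [Sx[2], Sx[1], Sx[0]]]
    rhs = [Sy[2], Sy[1], Sy[0]]
    # Forward elimination (no pivoting; fine for this well-posed system).
    for i in range(3):
        for j in range(i + 1, 3):
            f = A[j][i] / A[i][i]
            for k in range(3):
                A[j][k] -= f * A[i][k]
            rhs[j] -= f * rhs[i]
    # Back substitution.
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):
        coef[i] = (rhs[i] - sum(A[i][k] * coef[k]
                                for k in range(i + 1, 3))) / A[i][i]
    return coef
```

Fitting a smooth curve through the contour pixels suppresses single-pixel jaggedness, so extrema such as the shoulder point can be located stably.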

Tab.3  Error analysis

Parameter   RHN              RHS              RHA              RHH              RWNS             RWNA             RWNH
CMV         0.845 1          0.807 7          0.744 8          0.518 4          0.317 8          0.383 0          0.367 9
RV          0.837 1          0.789 0          0.736 4          0.493 2          0.317 5          0.383 8          0.385 0
AE          0.009 3          0.020 4          0.015 5          0.025 2          0.012 7          0.014 5          0.019 1
ER          0.001 5~0.021 1  0.003 9~0.037 3  0.000 2~0.042 1  0.006 7~0.040 6  0.001 8~0.031 9  0.000 4~0.038 7  0.001 3~0.037 5

Parameter   RTNS             RTNA             RTNH             AST/(°)          ADE/(°)          AHB/(°)
CMV         0.722 5          0.570 4          0.519 9          26.668 7         19.437 9         13.145 2
RV          0.685 4          0.537 2          0.494 2          27.581 7         19.064 8         12.416 5
AE          0.081 6          0.042 7          0.032 6          0.998 6          1.185 7          0.778 6
ER          0.001 6~0.080 9  0.014 1~0.07     0.001 4~0.081 7  0.125 3~1.736 5  0.243 8~1.354 3  0.271 4~1.862 2
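Assuming CMV denotes the contour measurement value, RV the 3-D reference value, AE the average absolute error, and ER the min-max error range over the 40 subjects, the last two rows of Tab.3 come from a straightforward computation (illustrative values below, not the paper's data):

```python
def error_stats(contour_vals, reference_vals):
    # Per-subject absolute errors, their mean (AE row of Tab.3),
    # and their (min, max) range (ER row of Tab.3).
    errs = [abs(c - r) for c, r in zip(contour_vals, reference_vals)]
    return sum(errs) / len(errs), (min(errs), max(errs))
```

Reporting the full range alongside the mean shows that even the worst-case subject stays within the bounds quoted in the abstract (0.081 7 for ratios, 1.862 2° for angles).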
[1] GU B, LIU G, XU B. Individualizing women's suit patterns using body measurements from two-dimensional images[J]. Textile Research Journal, 2017, 87(6): 669-681. doi: 10.1177/0040517516636001
[2] SATAM D, LIU Y, LEE H J. Intelligent design systems for apparel mass customization[J]. The Journal of The Textile Institute, 2011, 102(4): 353-365. doi: 10.1080/00405000.2010.482351
[3] WANG Ting, GU Bingfei. 3-D modeling of neck-shoulder part based on human photos[J]. Journal of Textile Research, 2021, 42(1): 125-132.
[4] GAN Yingjin, CHEN Dongsheng, MENG Shuang, et al. Recent development of non-touch 3D body measurement[J]. Journal of Textile Research, 2005, 26(3): 145-161.
[5] GU B, LIU G, XU B. Girth prediction of young female body using orthogonal silhouettes[J]. The Journal of The Textile Institute, 2017, 108(1): 140-146. doi: 10.1080/00405000.2016.1160756
[6] GU Bingfei, LI Xinghua, ZHONG Zejun, et al. Girth fitting of young women based on body digital images[J]. Journal of Silk, 2019, 56(8): 46-51.
[7] WANG Ting, GU Bingfei. Automatic identification of young women's neck-shoulder shapes based on images[J]. Journal of Textile Research, 2020, 41(12): 111-117.
[8] FENG Wenqian, LI Xinrong, YANG Shuai. Research progress in machine vision algorithm for human contour detection[J]. Journal of Textile Research, 2021, 42(3): 190-196.
[9] LI Ke, WU Tao, LIU Qingqing. Human contour extraction based on depth map and improved Canny algorithm[J]. Computer Technology and Development, 2021, 31(5): 67-72.
[10] LI Cuijin, QU Zhong. Review of image edge detection algorithms based on deep learning[J]. Journal of Computer Applications, 2020, 40(11): 3280-3288. doi: 10.11772/j.issn.1001-9081.2020030314
[11] WU Zebin, ZHANG Dongliang, LI Jituo, et al. Contour recognition and information extraction of human bodies in complex scenes[J]. Journal of Graphics, 2020, 41(5): 740-749.
[12] DE SOUZA J W, HOLANDA G B, IVO R F, et al. Predicting body measures from 2D images using convolutional neural networks[C]// 2020 International Joint Conference on Neural Networks (IJCNN). Glasgow: IEEE, 2020: 1-6.
[13] SHEN X, HERTZMANN A, JIA J, et al. Automatic portrait segmentation for image stylization[J]. Computer Graphics Forum, 2016, 35(2): 93-102. doi: 10.1111/cgf.12814
[14] XIE S, TU Z. Holistically-nested edge detection[C]// 2015 IEEE International Conference on Computer Vision (ICCV). Santiago: IEEE, 2015: 1395-1403.
[15] ZHAO Qiwen, XU Kun, XU Yuan. Fast paper edge detection method based on HED network[J]. Computer and Modernization, 2021(5): 1-5.
[16] JIAO Anbo, HE Miao, LUO Haibo. Research on significant edge detection of infrared image based on deep learning[J]. Infrared Technology, 2019, 41(1): 72-77.
[17] LU J, BEHBOOD V, HAO P. Transfer learning using computational intelligence: a survey[J]. Knowledge-Based Systems, 2015, 80: 14-23. doi: 10.1016/j.knosys.2015.01.010
[18] DENG J, DONG W, SOCHER R. ImageNet: a large-scale hierarchical image database[C]// 2009 IEEE Conference on Computer Vision and Pattern Recognition. Miami: IEEE, 2009: 248-255.
[19] EVERINGHAM M, VAN GOOL L, WILLIAMS C K I, et al. The PASCAL visual object classes (VOC) challenge[J]. International Journal of Computer Vision, 2010, 88(2): 303-338. doi: 10.1007/s11263-009-0275-4
[20] ZHAO Xiaohu, LI Xiao, YE Sheng, et al. Multi-scale tomato disease segmentation algorithm based on improved U-Net network[J]. Computer Engineering and Applications, 2022, 58(10): 216-223. doi: 10.3778/j.issn.1002-8331.2105-0201
[21] ZHANG T Y, SUEN C Y. A fast parallel algorithm for thinning digital patterns[J]. Communications of the ACM, 1984, 27(3): 236-239. doi: 10.1145/357994.358023
[22] CHANG Qinghe, WU Minhua, LUO Liming. Handwritten Chinese character skeleton extraction based on improved ZS thinning algorithm[J]. Computer Applications and Software, 2020, 37(7): 107-113.