Journal of Textile Research ›› 2024, Vol. 45 ›› Issue (01): 112-119. DOI: 10.13475/j.fzxb.20230103301

• Textile Engineering •

Single soldier camouflage small target detection based on boundary-filling

CHI Panpan1, MEI Chennan1, WANG Yan2, XIAO Hong2, ZHONG Yueqi1,3

  1. College of Textiles, Donghua University, Shanghai 201620, China
    2. Systems Engineering Research Institute, Academy of Military Sciences, Beijing 100010, China
    3. Key Laboratory of Textiles Science & Technology, Ministry of Education, Donghua University, Shanghai 201620, China
  • Received: 2023-01-28 Revised: 2023-09-25 Online: 2024-01-15 Published: 2024-03-14

Abstract:

Objective In the automatic detection of single soldier camouflage, targets must be detected at long range. In this scenario, the small size of the camouflaged target and its strong blending with the background substantially increase the difficulty of detection. Therefore, a deep learning approach is proposed to tackle this challenge through improvements to the network architecture and module structure.

Method The original dataset was extended using data augmentation, and the network architecture was designed based on the BGNet model. SCNet was used as the backbone for image feature extraction, and the EAM (edge-aware module) was used to detect target edges. The EFM (edge-guidance feature module) made use of the EAM output to guide the network in locating and identifying targets, the NCD (neighbor connection decoder) fused the features output by the EFM, and the CAM (context aggregation module) aggregated multi-level features to obtain the final output.
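The module pipeline described above can be sketched as plain function composition. This is a hedged sketch of the data flow only: the module names follow the abstract, but all implementations below are placeholder stand-ins (simple scalar arithmetic), since the real convolutional internals are not given here.

```python
# Sketch of the BFNet-style data flow: SCNet backbone -> EAM -> EFM -> NCD -> CAM.
# All bodies are placeholders; only the wiring between modules follows the text.

def backbone_scnet(image):
    # SCNet backbone: extracts multi-level features f1..f4 (stand-in: scaled copies).
    return [image * scale for scale in (1, 2, 3, 4)]

def eam(low_level, high_level):
    # Edge-aware module: predicts an edge map from low- and high-level features.
    return low_level + high_level

def efm(feature, edge):
    # Edge-guidance feature module: the edge map guides each feature level.
    return feature + edge

def ncd(features):
    # Neighbor connection decoder: fuses the EFM outputs of adjacent levels.
    return sum(features)

def cam(fused, features):
    # Context aggregation module: aggregates multi-level context for the output.
    return fused + sum(features)

def bfnet_forward(image):
    f1, f2, f3, f4 = backbone_scnet(image)
    edge = eam(f1, f4)                          # edges from lowest + highest level
    guided = [efm(f, edge) for f in (f2, f3, f4)]
    fused = ncd(guided)
    return cam(fused, guided)                   # final prediction map
```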

Results The quantitative comparison of the proposed model with other models showed that PFNet performed poorly on this small target detection task, and that SINet-V2 and C2FNet achieved higher recognition rates but lower accuracy, indicating that their predictions intersected the ground truth but localized it poorly. The BGNet model, on the other hand, had a lower recognition rate but higher accuracy and structural similarity. The proposed BFNet was improved on the basis of BGNet; after the improvement, the recognition rate increased, and the other indices measuring detection accuracy and object similarity also improved, showing that BFNet balances recognition rate and accuracy and identifies targets more accurately and comprehensively. Quantitative evaluation of the ablation experiments showed that the modified EFM improved the recognition rate I by 1.35%, indicating that more targets could be recognized after the improvement. The modified CAM improved the recognition rate I by 0.51%, while S, a measure of structural similarity, and the adaptive F-measure Fad were also improved, indicating that the recall rate rose without sacrificing precision. With the modified EFM and CAM combined, the detection accuracy pA decreased slightly, but I improved by 1.87%. Using SCNet (self-calibrated networks) as the backbone on top of the modified EFM and CAM improved pA by 1.74%, showing that SCNet compensated for the decrease in accuracy caused by the modified modules. The final improvement scheme increased pA by 0.74% and I by 1.35%, while the adaptive E-measure Eϕad and weighted F-measure Fwβ improved by 0.85% and 0.71%, respectively.
A qualitative comparison of the proposed model with other models was also carried out. The baseline model could barely recognize small targets, while the improved model performed well on the small camouflaged target recognition task.

Conclusion The experimental results show that the proposed model performs well in the automatic detection of single soldier camouflage, indicating that detection models from the COS (camouflage object segmentation) task are suitable for single soldier camouflage detection, and that the improved model offers a higher recognition rate, especially for small targets. The detection algorithm can serve as an aid for combatants and also provides an effective means of evaluating camouflage designs.

Key words: camouflage detection, camouflage object recognition, deep learning, small target detection, camouflage object segmentation

CLC Number: TS131.9

Fig.1

Examples of dataset

Fig.2

Overall architecture of BFNet

Fig.3

Self-calibrated convolutions of SCNet

Fig.4

Modified edge-guidance feature module

Fig.5

Modified context aggregation module

Fig.6

Qualitative comparison with other methods

Fig.7

Qualitative analysis of ablation experiments

Tab.2

Quantitative comparison with methods reported in the literature

Method              pA      I       Eϕad    Fad     M       S       Fwβ
PFNet[13]           0.784   0.968   0.942   0.807   0.004   0.867   0.762
SINet-V2[12]        0.779   0.986   0.928   0.830   0.004   0.877   0.777
C2FNet[6]           0.785   0.981   0.946   0.816   0.004   0.872   0.772
BGNet[14]           0.814   0.964   0.943   0.840   0.003   0.891   0.801
BFNet (proposed)    0.820   0.977   0.951   0.846   0.003   0.892   0.803
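The relative improvements quoted in the Results (0.74% for pA, 1.35% for I, 0.85% for Eϕad) can be reproduced from the BGNet baseline and BFNet rows of Tab. 2. A minimal check, with the values copied directly from the table:

```python
# Values copied from Tab. 2: BGNet (baseline) vs. the proposed BFNet.
baseline = {"pA": 0.814, "I": 0.964, "E_ad": 0.943}
bfnet    = {"pA": 0.820, "I": 0.977, "E_ad": 0.951}

def rel_improvement(old, new):
    """Relative improvement in percent: 100 * (new - old) / old."""
    return 100.0 * (new - old) / old

gains = {k: round(rel_improvement(baseline[k], bfnet[k]), 2) for k in baseline}
# Matches the abstract: pA +0.74%, I +1.35%, E_ad +0.85%
```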

Tab.3

Quantitative evaluation for ablation studies

No.   Method (EFM+ / EFM / CAM+ / CAM / SCNet / Random cropping)   pA      I       Eϕad    Fad     M       S       Fwβ
1#                                                                 0.814   0.964   0.943   0.840   0.003   0.891   0.801
2#                                                                 0.810   0.977   0.949   0.845   0.003   0.890   0.799
3#                                                                 0.818   0.973   0.952   0.847   0.003   0.892   0.804
4#                                                                 0.806   0.982   0.945   0.847   0.003   0.892   0.798
5#                                                                 0.820   0.977   0.951   0.846   0.003   0.892   0.803
6#                                                                 0.815   0.977   0.953   0.847   0.003   0.892   0.806
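Random cropping is the data augmentation ablated in the final configuration of Tab. 3. The following is a minimal sketch of the sampling logic on a nested-list "image"; `random_crop` is a hypothetical helper (the paper's actual augmentation parameters are not given), and real pipelines would operate on tensors rather than lists.

```python
import random

def random_crop(img, crop_h, crop_w, seed=None):
    # Sample a random top-left corner, then slice out a crop_h x crop_w window.
    rng = random.Random(seed)
    h, w = len(img), len(img[0])
    top = rng.randint(0, h - crop_h)
    left = rng.randint(0, w - crop_w)
    return [row[left:left + crop_w] for row in img[top:top + crop_h]]

# Toy 8x8 "image" whose pixel at (r, c) has value 10*r + c.
img = [[r * 10 + c for c in range(8)] for r in range(8)]
crop = random_crop(img, 4, 4, seed=0)
```

Applied to small-target data, crops like this shift the target's position and scale relative to the frame, which is the usual motivation for including it in an augmentation ablation.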
[1] TANKUS A, YESHURUN Y. Convexity-based visual camouflage breaking[J]. Computer Vision and Image Understanding, 2001, 82(3): 208-237.
doi: 10.1006/cviu.2001.0912
[2] NAGABHUSAN U N. Camouflage defect identification: a novel approach[C]// 9th International Conference on Information Technology (ICIT'06). Orlando FL: IEEE Computer Society, 2006: 145-148.
[3] SENGOTTUVELAN P, WAHI A, SHANMUGAM A. Performance of decamouflaging through exploratory image analysis[C]// 2008 First International Conference on Emerging Trends in Engineering and Technology. Nagpur: IEEE Computer Society, 2008: 6-10.
[4] ZHENG Yunfei, ZHANG Xiongwei, CAO Tieyong, et al. Detection of people with camouflage pattern via dense deconvolution network[J]. IEEE Signal Processing Letters, 2019, 26(1):29-33.
doi: 10.1109/LSP.2018.2825959
[5] FANG Zheng, ZHANG Xiongwei, DENG Xiaotong, et al. Camouflage people detection via strong semantic dilation network[C]// Proceedings of the ACM Turing Celebration Conference-China. New York: Association for Computing Machinery, 2019:1-7.
[6] SUN Yujia, CHEN Geng, ZHOU Tao, et al. Context-aware cross-level fusion network for camouflaged object detection[C]// Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence (IJCAI). 2021: 1025-1031.
[7] YANG F, ZHAI Q, LI X, et al. Uncertainty-guided transformer reasoning for camouflaged object detection[C]// Proceedings of the IEEE/CVF International Conference on Computer Vision. Montreal: IEEE/CVF, 2021: 4146-4155.
[8] LE T N, NGUYEN T V, NIE Z, et al. Anabranch network for camouflaged object segmentation[J]. Computer Vision and Image Understanding, 2019, 184: 45-56.
doi: 10.1016/j.cviu.2019.04.006
[9] ZHAI Qiang, LI Xin, YANG Fan, et al. Mutual graph learning for camouflaged object detection[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE/CVF, 2021: 12997-13007.
[10] CHEN Tianyou, XIAO Jin, HU Xiaoguang, et al. Boundary-guided network for camouflaged object detection[J]. Knowledge-Based Systems, 2022. DOI: 10.1016/j.knosys.2022.108901.
[11] LV Y, ZHANG J, DAI Y, et al. Simultaneously localize, segment and rank the camouflaged objects[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE/CVF, 2021: 11591-11601.
[12] FAN Dengping, JI Gepeng, CHENG Mingming, et al. Concealed object detection[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022, 44(10): 6024-6042.
doi: 10.1109/TPAMI.2021.3085766
[13] MEI Haiyang, JI Gepeng, WEI Ziqi, et al. Camouflaged object segmentation with distraction mining[C]// Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE/CVF, 2021: 8772-8781.
[14] SUN Yujia, WANG Shuo, CHEN Chenglizhao, et al. Boundary-guided camouflaged object detection[C]// Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence (IJCAI). 2022: 1335-1341.
[15] LIU Jiangjiang, HOU Qibin, CHENG Mingming, et al. Improving convolutional networks with self-calibrated convolutions[C]// 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE/CVF, 2020:10093-10102.
[16] WEI Jun, WANG Shuhui, HUANG Qingming. F3NET: fusion, feedback and focus for salient object detection[J]. Proceedings of the AAAI Conference on Artificial Intelligence, 2020, 34(7):12321-12328.
doi: 10.1609/aaai.v34i07.6916
[17] XIE Enze, WANG Wenjia, WANG Wenhai, et al. Segmenting transparent objects in the wild[C]// Computer Vision-ECCV 2020: 16th European Conference. Berlin:Springer-Verlag, 2020:696-711.
[18] PERAZZI F, KRAHENBUHL P, PRITCH Y, et al. Saliency filters: contrast based filtering for salient region detection[C]// 2012 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Piscataway: IEEE, 2012: 733-740.
[19] FAN Dengping, JI Gepeng, CHENG Mingming, et al. Cognitive vision inspired object segmentation metric and loss function[J]. Scientia Sinica Informationis, 2021, 51(9):1475-1489.
doi: 10.1360/SSI-2020-0370
[20] MARGOLIN Ran, ZELNIK-MANOR L, TAL A. How to evaluate foreground maps?[C]// 2014 IEEE Conference on Computer Vision and Pattern Recognition. Piscataway: IEEE, 2014: 248-255.
[21] FAN Dengping, CHENG Mingming, LIU Yun, et al. Structure-measure: a new way to evaluate foreground maps[C]// 2017 IEEE International Conference on Computer Vision (ICCV). Piscataway: IEEE, 2017: 4558-4567.