
Graduate Student: Ma, Juei-Mei (馬瑞梅)
Thesis Title: Rebar Welding Point Defect Evaluation using GMM-Based Background Subtraction or Student-Teacher Feature Pyramid Matching
Advisor: Lien, Jenn-Jier (連震杰)
Degree: Master
Department: College of Engineering - International Master Program on Intelligent Manufacturing
Year of Publication: 2023
Graduation Academic Year: 111 (ROC calendar)
Language: English
Number of Pages: 44
Keywords: Rebar Welding Evaluation, Background Subtraction, Gaussian Mixture Model, Unsupervised Anomaly Detection, Student-Teacher Framework
Abstract (Chinese):
The quality of rebar welding undoubtedly plays the most critical role in the structural integrity of the final product. In practice, prefabricated rebar is welded at the factory under natural daylight, so the illumination in the scene can vary with the time of day or weather conditions. In this thesis, we demonstrate the use of a machine learning method and a deep learning method to reproduce the traditional human visual inspection of the external quality of rebar welds. The first method is based on a Gaussian mixture background model: the trained model separates incoming images from the learned background to extract the weld point, and the extracted weld-point foreground is used to evaluate real welding data, achieving an IoU of 84.9% and a classification accuracy of 100%. The second method applies feature-pyramid-based unsupervised teacher-student anomaly detection to rebar welding point segmentation, and experimental results show a pixel-level ROC-AUC of 99.8%.
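As a rough illustration of the first method, the following Python sketch fits a per-pixel Gaussian mixture background model on weld-free frames using OpenCV's MOG2 implementation, extracts the weld-point foreground from an incoming image, and scores the mask against ground truth with IoU. This is a minimal sketch under assumed settings; the function names, morphology clean-up, and parameter values are illustrative and not taken from the thesis.

    # Minimal sketch of GMM-based background subtraction for weld-point
    # (foreground) extraction; parameters are illustrative, not the thesis's settings.
    import cv2
    import numpy as np

    def build_background_model(background_frames, history=200, var_threshold=16):
        """Fit a per-pixel Gaussian mixture background model from weld-free frames."""
        bg_model = cv2.createBackgroundSubtractorMOG2(
            history=history, varThreshold=var_threshold, detectShadows=False)
        for frame in background_frames:
            bg_model.apply(frame)  # update the mixture at every pixel
        return bg_model

    def extract_weld_foreground(bg_model, frame, kernel_size=5):
        """Classify pixels against the learned background and clean up the mask."""
        fg_mask = bg_model.apply(frame, learningRate=0)  # 0: do not adapt at test time
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
        fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
        return fg_mask

    def iou(pred_mask, gt_mask):
        """Intersection over union between binary foreground masks."""
        pred, gt = pred_mask > 0, gt_mask > 0
        union = np.logical_or(pred, gt).sum()
        return np.logical_and(pred, gt).sum() / union if union else 1.0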

Abstract (English):
The importance of rebar weld quality is undoubtedly the most crucial factor in the structural integrity of the final product. In practice, prefabricated reinforcing steel is welded at the factory under natural daylight, so the illumination in the scene can vary with the time of day or weather conditions. In this thesis, we demonstrate a machine learning method and a deep learning method that relieve traditional human visual inspection of the exterior quality of rebar welds. First, the GMM-based background subtraction model learns the gradient saturation of a given image sequence. The trained model distinguishes incoming image pixels from the learned background for weld point extraction, and the foreground mask is used to evaluate the collected realistic weld data, achieving an IoU of 84.9% and a classification accuracy of 100%. The thesis also demonstrates feature-pyramid-based unsupervised teacher-student anomaly detection for rebar welding point segmentation, achieving a pixel-level ROC-AUC of 99.8%.
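For the second method, the sketch below shows how a student-teacher feature-pyramid anomaly map can be computed in PyTorch, in the spirit of the STFPM approach (Wang et al., 2021): a frozen ImageNet-pretrained teacher and a student trained only on defect-free welds produce multi-level feature maps whose per-position distance, upsampled and fused, serves as the pixel anomaly score. The backbone, pyramid layers, and product fusion used here are assumptions, not the thesis's reported configuration.

    # Hedged sketch of a teacher-student feature-pyramid anomaly map;
    # backbone, layer names, and fusion are assumptions.
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18
    from torchvision.models.feature_extraction import create_feature_extractor

    LAYERS = ["layer1", "layer2", "layer3"]  # assumed pyramid levels

    def make_extractor(pretrained):
        backbone = resnet18(weights="IMAGENET1K_V1" if pretrained else None)
        return create_feature_extractor(backbone, return_nodes={l: l for l in LAYERS})

    teacher = make_extractor(pretrained=True).eval()  # frozen, ImageNet-pretrained
    student = make_extractor(pretrained=False)        # trained on defect-free welds only

    def anomaly_map(image, out_size=(256, 256)):
        """Per-pixel anomaly score: feature distance between teacher and student."""
        with torch.no_grad():
            t_feats = teacher(image)
        s_feats = student(image)
        score = torch.ones(image.size(0), 1, *out_size, device=image.device)
        for layer in LAYERS:
            t = F.normalize(t_feats[layer], dim=1)  # unit-length channel vectors
            s = F.normalize(s_feats[layer], dim=1)
            d = 0.5 * (t - s).pow(2).sum(dim=1, keepdim=True)  # squared distance map
            d = F.interpolate(d, size=out_size, mode="bilinear", align_corners=False)
            score = score * d  # fuse pyramid levels by element-wise product
        return score           # high values indicate anomalous (defective) pixels

During training, the student would be optimized to minimize the same normalized feature distance on anomaly-free weld images; the pixel-level ROC-AUC is then computed by thresholding the anomaly map against ground-truth defect masks.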

Table of Contents:
Abstract (Chinese)
Abstract (English)
Acknowledgements
List of Tables
List of Figures
Chapter 1 Introduction
  1.1 Motivation & Objective
  1.2 Thesis Organization
  1.3 Related Works
    1.3.1 Weld Defects Definition
    1.3.2 Background Subtraction Methods
    1.3.3 Pixel-Level Unsupervised Anomaly Detection
  1.4 Contributions
Chapter 2 System Setup and Hardware Specification
  2.1 System Setup
  2.2 Hardware Specification
Chapter 3 Framework of Welding Point Defect Evaluation using GMM-Based Background Subtraction
  3.1 Framework of Welding Point Defect Evaluation using GMM-Based Background Subtraction
  3.2 Background Model Testing
  3.3 Data Collection
  3.4 Evaluation Metrics
  3.5 Experimental Result
Chapter 4 Rebar Welding Point Defect Evaluation using Student-Teacher Feature Pyramid Matching for Anomaly Detection
  4.1 Framework of Anomaly Detection using Student-Teacher Feature Pyramid Matching
  4.2 Training Framework
  4.3 Inference Framework
  4.4 Data Collection
  4.5 Evaluation Metrics
  4.6 Experimental Result
Chapter 5 Conclusion and Future Works
Reference


Full-Text Availability: On campus: open access immediately. Off campus: open access immediately.