
Graduate Student: Chang, Lo-Hsuan (張洛瑄)
Thesis Title: Deep Learning-Based Tool Measurement and Classification (應用深度學習於刀具量測與分類)
Advisor: Lien, Jenn-Jier (連震杰)
Co-Advisor: Guo, Shu-Mei (郭淑美)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2021
Academic Year of Graduation: 109
Language: English
Number of Pages: 57
Chinese Keywords: 刀具量測、視覺分類、前景分割
English Keywords: Tool Measurement, Vision Classification, Foreground Subtraction
    Modern industrial technology has advanced to the point that production is becoming automated. One important goal of production automation is automatic measurement, which is the prerequisite for making further decisions. Automatic measurement can be achieved either by physical contact probing or by computer-vision measurement using camera images. We adopt the computer-vision approach because it achieves a higher measurement speed while maintaining good measurement accuracy. Our system performs basic geometric measurement of industrial machining tools and then infers the tool type.
    Before measuring the length and width of a tool, we must first determine which parts of the image belong to the tool to be measured and which parts belong to the background. To this end, we apply two different methods: the first performs foreground segmentation with traditional computer-vision techniques, and the second performs foreground segmentation with deep learning.

    Nowadays, industrial technology has advanced and production has become increasingly automated. One of the most important goals in automated production is automatic measurement: only with accurate measurements can we make the correct decisions and take the correct actions. There are two types of measurement, direct contact measurement and computer-vision measurement. Here we adopt computer-vision measurement because it offers higher speed while still retaining good accuracy. Our goal is to measure the basic geometric properties of industrial machining tools and to estimate the tool type.
    For geometric measurement, we first need to segment the foreground tool. We use background subtraction to separate the parts of the image that belong to the background from those that belong to the foreground. We apply two different methods to achieve this: the first is a traditional computer-vision method, and the second is a deep-learning method.
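    As a rough illustration of the traditional computer-vision path, the following is a minimal sketch of tool foreground extraction with OpenCV's MOG2 background subtractor, followed by a pixel-level bounding-box measurement. The file names, parameter values, and post-processing steps are illustrative assumptions, not the exact settings used in the thesis.

    # Minimal sketch: extract the tool foreground with OpenCV's MOG2 background
    # subtractor and take a rough pixel-level length/width measurement.
    # File names and parameter values below are illustrative assumptions.
    import cv2

    # Build the background model from background-only frames (hypothetical files).
    subtractor = cv2.createBackgroundSubtractorMOG2(history=50, varThreshold=16,
                                                    detectShadows=False)
    for i in range(50):
        bg = cv2.imread(f"background_{i:03d}.png")
        subtractor.apply(bg, learningRate=-1)          # update the Gaussian mixtures

    # Segment the tool: pixels deviating from the background model are foreground.
    frame = cv2.imread("tool_frame.png")
    fg_mask = subtractor.apply(frame, learningRate=0)  # freeze the model while segmenting

    # Clean the mask and keep the largest connected blob as the tool region.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    fg_mask = cv2.morphologyEx(fg_mask, cv2.MORPH_OPEN, kernel)
    num, labels, stats, _ = cv2.connectedComponentsWithStats(fg_mask)
    if num > 1:
        tool_label = 1 + stats[1:, cv2.CC_STAT_AREA].argmax()
        tool_mask = (labels == tool_label).astype("uint8") * 255

        # Rough geometric measurement in pixels; converting to physical units
        # would require a separate camera calibration.
        x, y, w, h = cv2.boundingRect(tool_mask)
        print(f"tool bounding box: {w} x {h} px")

    In the deep-learning path, a trained segmentation network (FgSegNet, per the table of contents below) would replace the MOG2 step, while the subsequent measurement stage stays the same.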

    Abstract (Chinese) I
    Abstract (English) II
    Acknowledgements III
    Content V
    Content of Figure VII
    Content of Table IX
    Chapter 1 Introduction 1
      1.1 Motivation and Objective 1
      1.2 Related Works 5
      1.3 Organization of Thesis 6
      1.4 Contributions 7
    Chapter 2 System Specification and Function 7
      2.1 Hardware Specification 7
      2.2 Function 10
    Chapter 3 Tool Extraction 11
      3.1 Tool Extraction Based on Background Modeling or Foreground Extraction 11
      3.2 Tool Extraction Using MOG2 12
      3.3 Tool Extraction Using FgSegNet 21
    Chapter 4 Geometric Measurement 26
      4.1 ROI Cropping and Tilt Angle Estimation 26
      4.2 Find Tool Length Using Histogram of Background Subtraction 27
    Chapter 5 Tool Classification Using PCA and MLP 33
      5.1 Framework of Tool Classification 33
      5.2 Train Multi-Layer Perceptron 37
    Chapter 6 Data Collection and Experimental Results 39
      6.1 Data Collection 39
      6.2 Experimental Results on Modified MOG2 43
      6.3 Experimental Results on FgSegNet 46
      6.4 Experimental Results of Tool Classification 49
    Chapter 7 Conclusion and Future Work 52
      7.1 Conclusion 52
      7.2 Application 53
      7.3 Future Work 55
    References 56

