| Graduate Student: | 張洛瑄 Chang, Lo-Hsuan |
|---|---|
| Thesis Title: | 應用深度學習於刀具量測與分類 Deep Learning-Based Tool Measurement and Classification |
| Advisor: | 連震杰 Lien, Jenn-Jier |
| Co-Advisor: | 郭淑美 Guo, Shu-Mei |
| Degree: | 碩士 Master |
| Department: | 電機資訊學院 - 資訊工程學系 Department of Computer Science and Information Engineering |
| Year of Publication: | 2021 |
| Academic Year of Graduation: | 109 |
| Language: | English |
| Number of Pages: | 57 |
| Keywords (Chinese): | 刀具量測、視覺分類、前景分割 |
| Keywords (English): | Tool Measurement, Vision Classification, Foreground Subtraction |
Advances in modern industrial technology have pushed production toward automation. One important requirement of automated production is automated measurement: only with accurate measurements can correct decisions and actions be made. Such measurement can be carried out either by physical contact probing or by computer vision using camera images. We adopt computer vision measurement because it achieves higher measurement speed while maintaining good accuracy. Our task is to measure the basic geometric properties of industrial machining tools and to estimate the tool type.

Before measuring the length and width of a tool, the image must first be segmented into the tool region to be measured and the background. To achieve this, we address the problem with two different approaches: the first performs foreground segmentation using traditional computer vision techniques, and the second performs foreground segmentation using deep learning.
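The abstract outlines a traditional computer-vision path (foreground segmentation followed by geometric measurement) without giving implementation details. The sketch below is a rough illustration only, assuming a fixed camera and a previously captured empty-stage background image: it segments the tool by background differencing with Otsu thresholding and estimates length and width from a minimum-area rectangle. The file names, morphology settings, and the mm-per-pixel calibration factor are assumptions for illustration, not values from the thesis.

```python
# Hypothetical sketch of traditional foreground segmentation and basic
# geometric measurement of a machining tool. Assumes a fixed camera and
# a background image captured with no tool on the stage.
import cv2
import numpy as np

MM_PER_PIXEL = 0.05  # assumed calibration factor; depends on the camera setup

def segment_tool(frame_bgr, background_bgr):
    """Foreground mask via background differencing, Otsu threshold, morphology."""
    frame = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    background = cv2.cvtColor(background_bgr, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(frame, background)
    _, mask = cv2.threshold(diff, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # remove speckle noise
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill small holes
    return mask

def measure_tool(mask):
    """Length/width (mm) of the largest foreground blob via its min-area rectangle."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    tool = max(contours, key=cv2.contourArea)
    (_, _), (w, h), _ = cv2.minAreaRect(tool)
    length_px, width_px = max(w, h), min(w, h)
    return length_px * MM_PER_PIXEL, width_px * MM_PER_PIXEL

if __name__ == "__main__":
    background = cv2.imread("empty_stage.png")  # assumed background capture
    frame = cv2.imread("tool_on_stage.png")     # assumed image with the tool
    mask = segment_tool(frame, background)
    print(measure_tool(mask))
```

The deep-learning alternative mentioned in the abstract would replace `segment_tool` with a learned segmentation network that outputs the same binary foreground mask, leaving the measurement step unchanged.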
On-campus access: full text available from 2026-08-25.