| Author: | 劉廷翰 (Liu, Ting-Han) |
|---|---|
| Thesis Title: | 基於高斯混合模型之前景偵測應用於視覺監視之研究 (Study on Gaussian Mixture Model Based Foreground Detection for Visual Surveillance) |
| Advisor: | 鄭銘揚 (Cheng, Ming-Yang) |
| Degree: | Master |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2017 |
| Graduation Academic Year: | 105 (ROC calendar) |
| Language: | Chinese |
| Pages: | 98 |
| Keywords (Chinese): | 前景偵測、背景減去法、高斯混合模型、視覺監視系統 |
| Keywords (English): | Foreground Detection, Background Subtraction, Gaussian Mixture Model (GMM), Visual Surveillance System |
In visual surveillance applications, accurately and quickly separating the foreground from the background in a complex and changing environment is an important yet difficult problem. Achieving higher detection accuracy usually requires a sophisticated algorithm, which imposes an enormous computational burden and slows down processing; conversely, methods with lower computational cost often fail to meet the accuracy requirements. Moreover, as technology advances, the resolution of the images captured by cameras keeps increasing, which further enlarges the computational burden of foreground detection. To cope with these difficulties, this thesis aims to develop a real-time foreground detection vision system for industrial environments, adopting the Gaussian Mixture Model (GMM) based background subtraction method as the basis of the surveillance system design. The advantages of the GMM-based background subtraction method include a good balance between accuracy and computational speed, ease of implementation, and a degree of adaptability to environmental changes, which is why it is widely used in visual surveillance systems. This thesis investigates the fundamental principles of the method and the performance improvements proposed in recent years, and proposes a speed-up method based on adaptive parameter tuning. Experimental results show that the proposed method can indeed perform foreground detection on high-resolution images at a higher computational speed while obtaining good detection results.
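To make the general technique concrete, the following is a minimal Python sketch of GMM-based background subtraction using OpenCV's built-in MOG2 subtractor (Zivkovic's adaptive-GMM variant). It illustrates the standard approach only and is not the adaptive parameter-tuning method developed in the thesis; the input file name and parameter values are illustrative assumptions.

```python
# Minimal sketch of GMM-based background subtraction with OpenCV's MOG2.
# Illustrative only: not the thesis's proposed method; file name and
# parameter values are assumptions.
import cv2

cap = cv2.VideoCapture("surveillance.mp4")          # hypothetical input video
subtractor = cv2.createBackgroundSubtractorMOG2(
    history=500,        # number of recent frames used to model the background
    varThreshold=16,    # squared Mahalanobis distance threshold for matching
    detectShadows=True  # label shadow pixels separately (gray value 127)
)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Each pixel is modeled as a mixture of Gaussians; pixels that do not
    # match any background Gaussian are labeled foreground (255).
    fg_mask = subtractor.apply(frame, learningRate=-1)  # -1 = automatic rate
    fg_mask = cv2.medianBlur(fg_mask, 5)  # suppress isolated noise pixels
    cv2.imshow("foreground", fg_mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

A larger `history` and `varThreshold` make the background model more stable but slower to adapt to scene changes; tuning such parameters per scene (or adaptively, as the thesis proposes) is what trades detection accuracy against computation speed.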