| Graduate Student: | 王俊凱 Wang, Chun-Kai |
|---|---|
| Thesis Title: | 基於改良式適應性背景相減法與多重影像特徵比對法之多功能即時視覺追蹤系統之設計與實現 (Design and Implementation of a Multi-Purpose Real-time Visual Tracking System based on Modified Adaptive Background Subtraction and Multi-Cue Template Matching) |
| Advisor: | 鄭銘揚 Cheng, Ming-Yang |
| Degree: | 碩士 Master |
| Department: | 電機資訊學院電機工程學系 Department of Electrical Engineering |
| Year of Publication: | 2004 |
| Graduation Academic Year: | 92 (ROC era) |
| Language: | 中文 (Chinese) |
| Pages: | 102 |
| Keywords (Chinese): | 背景相減法、視覺追蹤、視覺伺服、樣板比對法 |
| Keywords (English): | background subtraction, template matching, visual servoing, visual tracking |
For a visual tracking system, how to accurately and rapidly detect and track moving objects in the image has long been an important research topic. For moving object detection, background subtraction is widely used because of its low computational load and good detection results. However, the method remains sensitive to illumination changes and suffers from the background initialization problem, so it cannot detect targets accurately in complex environments. In view of this, this thesis proposes a modified adaptive background subtraction method that maintains detection accuracy by dynamically updating the background model. For target tracking, the SSD method has long been a commonly used image tracking technique in the field of visual servoing. However, it is overly sensitive to changes in the target's appearance and is therefore difficult to apply to tracking in real-world environments. To overcome this difficulty, this thesis proposes a multi-cue template matching method that combines several tracking techniques of different natures to improve the applicability and robustness of the system. The proposed modified adaptive background subtraction method and multi-cue template matching method are applied to a self-built experimental real-time visual tracking system, and the experimental results show that the proposed methods perform well.
Detecting and tracking moving objects in video streams is an important and challenging research problem in visual tracking applications. Among moving object detection algorithms, background subtraction has been widely used because of its low computational load and high detection quality. Nevertheless, because it is sensitive to lighting changes and requires careful background initialization, background subtraction often performs poorly in complex environments. To overcome this difficulty, a modified adaptive background subtraction method that dynamically updates the background model is proposed. On the other hand, the SSD method is a popular image tracking technique in visual servoing applications. However, SSD matching is very sensitive to changes in the target's appearance and may therefore fail when applied to realistic environments. To overcome this difficulty, a multi-cue template matching algorithm that combines several kinds of tracking cues is proposed to improve the practicality and robustness of the visual tracking system. To evaluate the proposed approach, the modified adaptive background subtraction method and the multi-cue template matching algorithm are applied to a real-time visual tracking system developed in our laboratory. Experimental results show that the proposed approach exhibits satisfactory performance.
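The modified adaptive background subtraction described in the abstract centers on dynamically updating the background model so that detection stays accurate as the scene changes. A minimal sketch of the generic idea, using a running-average update; the blending factor `alpha`, the threshold, and the function names are illustrative assumptions, not the thesis's exact update rule:

```python
import numpy as np

def detect_foreground(background, frame, threshold=25.0):
    """Mark pixels whose absolute difference from the background
    model exceeds the threshold as moving foreground."""
    return np.abs(frame - background) > threshold

def update_background(background, frame, mask, alpha=0.05):
    """Running-average update: blend background pixels toward the
    current frame; leave detected foreground pixels untouched so a
    moving object is not absorbed into the model. This is a generic
    adaptive scheme, not the thesis's exact update rule."""
    updated = background.copy()
    bg = ~mask                               # pixels classified as background
    updated[bg] = (1.0 - alpha) * updated[bg] + alpha * frame[bg]
    return updated

# Toy demo: a flat scene with a bright object entering the view.
background = np.full((4, 4), 50.0)
frame = background.copy()
frame[1:3, 1:3] = 200.0                      # the moving object
mask = detect_foreground(background, frame)  # True only on the object
background = update_background(background, frame, mask)
```

Because foreground pixels are excluded from the blend, a target that lingers in the view does not gradually merge into the background model, while slow illumination drift in background regions is tracked by the running average.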
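The multi-cue idea, fusing SSD with a cue that is less sensitive to appearance change, can be sketched as follows. The histogram cue, the equal cue weights, and the worst-case SSD normalization are illustrative assumptions; the thesis combines its own set of cues:

```python
import numpy as np

def ssd_score(template, candidate):
    """Sum of squared differences: smaller means a better match,
    but it degrades quickly when the target's appearance changes."""
    d = template.astype(float) - candidate.astype(float)
    return float(np.sum(d * d))

def hist_intersection(template, candidate, bins=8):
    """Histogram intersection of intensity distributions, in [0, 1].
    Ignores pixel positions, so it tolerates deformation better."""
    ht, _ = np.histogram(template, bins=bins, range=(0, 256))
    hc, _ = np.histogram(candidate, bins=bins, range=(0, 256))
    ht = ht / ht.sum()
    hc = hc / hc.sum()
    return float(np.minimum(ht, hc).sum())

def multi_cue_score(template, candidate, w_ssd=0.5, w_hist=0.5):
    """Fuse both cues into a single similarity in [0, 1]; the weights
    and the worst-case SSD normalization are illustrative choices."""
    max_ssd = template.size * 255.0 ** 2
    ssd_sim = 1.0 - ssd_score(template, candidate) / max_ssd
    return w_ssd * ssd_sim + w_hist * hist_intersection(template, candidate)

# Toy demo: identical patches match perfectly; a different patch scores lower.
template = np.full((8, 8), 100.0)
score_same = multi_cue_score(template, template.copy())
score_other = multi_cue_score(template, np.full((8, 8), 200.0))
```

Combining cues of different natures in this way means that when one cue is momentarily unreliable (e.g., SSD under a shape change), the fused score can still rank the true target highest, which is the robustness argument the abstract makes.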