
Author: 賴俊良 (Lai, Jun-Liang)
Thesis Title: 移動目標物視覺偵測與追蹤研究 (A Study on Visual Detection and Tracking of Moving Targets)
Advisor: 鄭銘揚 (Cheng, Ming-Yang)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2006
Academic Year of Graduation: 94 (ROC calendar)
Language: Chinese
Number of Pages: 87
Chinese Keywords: 多重影像特徵比對演算法 (multi-cue matching algorithm), 無母數背景相減法 (nonparametric background subtraction)
Foreign Keywords: nonparametric background subtraction method, multi-cue matching approach
In the field of computer vision, moving object detection and tracking have long been important research topics. For moving object detection, background subtraction is commonly used, but it is strongly affected by background disturbance and changes in lighting. To avoid this drawback, this thesis adopts a nonparametric background subtraction method to detect moving objects. This method can handle scenes whose background is cluttered or not completely static but contains slight motion, such as swaying branches and leaves. For target tracking, this thesis uses a multi-cue matching algorithm to track and lock onto moving targets; it combines the active contour model, probabilistic similarity matching, contour matching, color histogram matching, and template matching as similarity measures for dynamic tracking, in order to improve tracking accuracy. When the target deforms, the active contour model can adapt its outer contour accordingly, so it is well suited to tracking deformable targets. Furthermore, to improve the accuracy of contour tracking, this thesis uses a Kalman filter to predict the Snake contour. In addition, probabilistic similarity matching compiles statistics of the template's color information to obtain the probability distribution of the template colors, and is therefore less affected by occlusion of the target. Finally, for servo control, this thesis combines a linear predictor (α-β tracker) with a nonlinear predictor (δ-ε filter) to predict the position of the moving target and thereby improve the performance of dynamic visual tracking.
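For a concrete picture of the detection stage, the following is a minimal NumPy sketch of a kernel-density (nonparametric) background test in the spirit of the method the thesis adopts. The Gaussian kernel, per-pixel bandwidths, fixed threshold, and function name are illustrative assumptions, and the false-detection suppression and background-update steps described in Chapter 2 are omitted.

    import numpy as np

    def foreground_mask(frame, samples, sigma, threshold=1e-6):
        """Kernel-density (nonparametric) background test for one frame (sketch).

        frame     : (H, W, 3) float array, current color frame.
        samples   : (N, H, W, 3) float array, N recent background samples per pixel.
        sigma     : (H, W, 3) float array, per-pixel, per-channel kernel bandwidths
                    (e.g. estimated from median absolute differences between
                    consecutive background samples).
        threshold : density below which a pixel is declared foreground (assumed).
        """
        diff = frame[None, ...] - samples                   # (N, H, W, 3) deviations
        norm = 1.0 / (np.sqrt(2.0 * np.pi) * sigma)         # Gaussian normalizers
        kern = norm * np.exp(-0.5 * (diff / sigma) ** 2)    # per-channel kernel values
        density = np.prod(kern, axis=-1).mean(axis=0)       # product over channels,
                                                            # average over the N samples
        return density < threshold                          # True = likely foreground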

Motion detection and tracking have always been important research topics in computer vision. Among moving object detection algorithms, background subtraction is typically used to segment moving objects; however, it is easily influenced by background disturbance and fluctuations in lighting. To overcome this difficulty, the nonparametric background subtraction method is used in this study to detect moving objects. This method can handle scenes whose background is cluttered and not completely static but contains subtle motion, such as swaying tree branches and bushes. For tracking, a multi-cue matching approach is employed to perform dynamic tracking/fixation of a moving target. This approach combines active contour modeling, probabilistic similarity matching, contour matching, color histogram matching, and template matching. Since the active contour model can follow the contour of the moving target as its appearance changes, it is suitable for tracking deformable targets. Moreover, to improve the accuracy of contour tracking, a Kalman filter is used to predict the contour of the Snake model at the next time instant. In addition, probabilistic similarity matching gathers statistics of the template's color information to obtain its color distribution, so it is less affected by objects that occlude the line of sight between the visual tracking system and the target. As for the servo control unit, a linear filter (g-h filter) and a nonlinear filter (δ-ε filter) are combined to predict the position of the moving target so that the performance of dynamic visual tracking can be improved.
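As a point of reference for the prediction stage, below is a minimal sketch of the standard linear α-β (g-h) recursion for a single axis of the pan-tilt unit. The class name, gain values, and sampling period are illustrative assumptions, and the nonlinear δ-ε filter that the thesis combines with it is thesis-specific and not reproduced here.

    class AlphaBetaTracker:
        """Linear alpha-beta (g-h) tracker for one axis (sketch).

        alpha, beta : fixed filter gains (values below are arbitrary examples).
        dt          : sampling period of the visual servo loop in seconds.
        """

        def __init__(self, alpha, beta, dt, x0=0.0, v0=0.0):
            self.alpha, self.beta, self.dt = alpha, beta, dt
            self.x, self.v = x0, v0          # estimated position and velocity

        def predict(self):
            """One-step-ahead predicted position (used to aim the camera)."""
            return self.x + self.dt * self.v

        def update(self, z):
            """Correct the estimate with the measured target position z."""
            x_pred = self.x + self.dt * self.v
            r = z - x_pred                    # innovation (residual)
            self.x = x_pred + self.alpha * r
            self.v = self.v + (self.beta / self.dt) * r
            return self.predict()

    # Example: predict the pan-axis target position one sample ahead.
    tracker = AlphaBetaTracker(alpha=0.5, beta=0.2, dt=1.0 / 30.0)
    for z in [10.0, 10.6, 11.1, 11.9]:        # measured target positions
        aim_point = tracker.update(z)         # where to point the camera next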

Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
  1.1 Research Motivation and Objectives
  1.2 Literature Review
  1.3 Thesis Organization
Chapter 2 Detection of Moving Targets
  2.1 Adaptive Background Subtraction
  2.2 Improved Adaptive Background Subtraction
    2.2.1 Object Labeling
    2.2.2 Overlap Classification
    2.2.3 Contour Extraction
    2.2.4 Foreground Similarity Measurement
    2.2.5 Background Update for Stationary Objects
  2.3 Nonparametric Background Subtraction
    2.3.1 Density Estimation
    2.3.2 Kernel Density Estimation
    2.3.3 False Detection Suppression
    2.3.4 Background Update
Chapter 3 Dynamic Visual Tracking Algorithms
  3.1 Multi-Cue Matching
    3.1.1 Template Matching
    3.1.2 Color Histogram Matching
    3.1.3 Contour Matching
    3.1.4 Active Contour Model
    3.1.5 Probabilistic Similarity Matching
  3.2 Template Matching Search Methods
  3.3 Position Prediction
    3.3.1 Kalman Filter
    3.3.2 α-β Tracker
    3.3.3 g-h Filter
    3.3.4 Dynamic Circular Filter
    3.3.5 Modified Filter
Chapter 4 Real-Time Visual Tracking System Architecture
  4.1 Overview of the Experimental Hardware
  4.2 Modeling of the Visual Servo Tracking System
Chapter 5 Experimental Results
  5.1 Moving Target Detection Experiments in Static Detection Mode
  5.2 Image Feature Similarity Matching Experiments
  5.3 Dynamic Moving Target Tracking Experiments
  5.4 Position Prediction Experiments
Chapter 6 Conclusions and Suggestions
References

List of Figures
Fig. 2.1 Flowchart of the adaptive background subtraction method
Fig. 2.2 Illustration of connected-component labeling
Fig. 2.3 Illustration of boundary tracing
Fig. 2.4 Simulation of boundary tracing
Fig. 2.5 Illustration of the foreground similarity measurement
Fig. 2.6 Block diagram of the improved adaptive background subtraction method
Fig. 3.1 Illustration of dynamic visual tracking
Fig. 3.2 The multi-cue matching method
Fig. 3.3 Illustration of template matching
Fig. 3.4 Example of template contour features
Fig. 3.5 Illustration of Snake contour iteration
Fig. 3.6 Snake contour and control points
Fig. 3.7 Neighborhood searched by the greedy algorithm
Fig. 3.8 Curvature estimation from the resultant vector
Fig. 3.9 Two search-neighborhood modes of the fast greedy algorithm
Fig. 3.10 Illustration of the Snake contour computation
Fig. 3.11 Illustration of the three-step search method
Fig. 3.12 Illustration of the circular-arc computation
Fig. 4.1 System architecture
Fig. 4.2 TOPCIA TP2000C camera
Fig. 4.3 MuTech MV-500 frame grabber
Fig. 4.4 PMC32-6000 motion control card
Fig. 4.5 Two-degree-of-freedom pan-tilt mechanism
Fig. 4.6 Projective geometry of the camera
Fig. 4.7 Block diagram of the visual servo control system
Fig. 5.1 Results of the improved adaptive background subtraction method on video 1
Fig. 5.2 Results of the nonparametric background subtraction method on video 1
Fig. 5.3 Results of the improved adaptive background subtraction method on video 2
Fig. 5.4 Results of the nonparametric background subtraction method on video 2
Fig. 5.5 Results of the improved adaptive background subtraction method on video 3
Fig. 5.6 Results of the nonparametric background subtraction method on video 3
Fig. 5.7 Complete occlusion of the target
Fig. 5.8 Experiment 1: similarity measurements of the multi-cue matching method
Fig. 5.9 Partial occlusion of the target
Fig. 5.10 Experiment 2: similarity measurements of the multi-cue matching method
Fig. 5.11 Deformation and occlusion of the target
Fig. 5.12 Experiment 3: similarity measurements of the multi-cue matching method
Fig. 5.13 Dynamic tracking results using multi-cue matching with a fast-moving target
Fig. 5.14 Target driven in linear motion by a linear motor
Fig. 5.15 Position prediction results for the moving target in the first experiment
Fig. 5.16 Position prediction results for the moving target in the second experiment

List of Tables
Table 3.1 Comparison of search methods
Table 4.1 Hardware specifications of the PMC32-6000 motion control card
Table 4.2 Specifications of the Panasonic MSMA041A1E AC servo motor
Table 5.1 Error between predicted and actual pan-axis positions in Fig. 5.15
Table 5.2 Error between predicted and actual tilt-axis positions in Fig. 5.15
Table 5.3 Error between predicted and actual pan-axis positions in Fig. 5.16
Table 5.4 Error between predicted and actual tilt-axis positions in Fig. 5.16


Full text released (on campus): 2009-08-21
Full text released (off campus): 2009-08-21