
Graduate Student: 吳俊賢 (Wu, Chen-Shien)
Thesis Title: 未重疊視角之多相機下之人物追蹤 (Humans Tracking across Multiple Cameras with Non-overlapping Views)
Advisor: 詹寶珠 (Chung, Pau-Choo)
Degree: Master
Department: Institute of Computer & Communication Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2009
Graduation Academic Year: 97 (academic year 2008-2009)
Language: English
Number of Pages: 34
Chinese Keywords: 未重疊視角 (non-overlapping views), 隱藏馬可夫模型 (hidden Markov model), 監視系統 (surveillance system)
English Keywords: surveillance system, non-overlapping views, hidden Markov model
Human tracking plays an important role in surveillance systems. Spatial-temporal movement information and human appearance are the key cues available for tracking people. We propose a learnable architecture for estimating the probability of a person transitioning between multiple cameras with non-overlapping views.

In the training phase, we first use scene information to construct, for each camera, the zones through which people can move. We then record the data of people entering and leaving these zones and use a hidden Markov model to learn the transition probabilities between them. Temporal information, such as the order of the zones a person passes through, is also encoded in the hidden Markov model.

In the testing phase, we propose a cross-camera tracking algorithm that establishes correspondences between people, using the camera topology and color information to find the most probable correspondence. The correspondences found during testing are then used to refine the parameters estimated during training. Finally, we validate our method on real surveillance video and obtain good results.
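
The zone bookkeeping described above can be made concrete with a short sketch. The following Python fragment is a minimal, hypothetical illustration rather than the thesis's implementation: the ZoneEvent structure, its field names, and the pairing logic are all assumptions. It records, for each tracked person, the zone they last left and the zone they next enter, together with the transit time, producing the kind of training data a transition model could be learned from.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class ZoneEvent:
    """A person entering or leaving an observed zone (hypothetical structure)."""
    person_id: int
    camera_id: int
    zone_id: int
    kind: str        # "enter" or "exit"
    timestamp: float

def collect_transitions(events):
    """Pair each exit event with the same person's next entry event.

    Returns a dict mapping (exit_camera, exit_zone, entry_camera, entry_zone)
    to a list of observed transit times.
    """
    transitions = defaultdict(list)
    last_exit = {}  # person_id -> most recent exit event
    for ev in sorted(events, key=lambda e: e.timestamp):
        if ev.kind == "exit":
            last_exit[ev.person_id] = ev
        elif ev.kind == "enter" and ev.person_id in last_exit:
            ex = last_exit.pop(ev.person_id)
            key = (ex.camera_id, ex.zone_id, ev.camera_id, ev.zone_id)
            transitions[key].append(ev.timestamp - ex.timestamp)
    return transitions
```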

Human tracking plays an important role in visual surveillance systems. Spatial-temporal movement and human appearance provide significant visual cues for human tracking. We propose a learning architecture that estimates human transition probabilities across different camera views.

In the learning phase, we first use prior knowledge of the scene to build the observed zones for each camera. Human tracking is then performed to record the zones of the observed region where people enter and leave. We use a hidden Markov model (HMM) to learn the transition probabilities between observed zones. Temporal information, such as the sequence of zones a person moves through, is also incorporated into the HMM.
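
As a rough illustration of what the learned transition model encodes, the sketch below estimates zone-to-zone transition probabilities by counting and normalizing observed zone sequences. The thesis trains a hidden Markov model, so this maximum-likelihood counting is only a simplified stand-in, and the input format (one ordered zone list per person) is an assumption.

```python
from collections import defaultdict

def estimate_transition_matrix(zone_sequences):
    """Estimate P(next zone | current zone) from observed zone sequences.

    zone_sequences: list of lists, each the ordered zones one person
    passed through (assumed input format). Counting and normalizing
    gives the maximum-likelihood probabilities that an HMM transition
    matrix would encode.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for seq in zone_sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    matrix = {}
    for cur, nxts in counts.items():
        total = sum(nxts.values())
        matrix[cur] = {nxt: c / total for nxt, c in nxts.items()}
    return matrix

# Example: two short trajectories through zones labeled by (camera, zone) ids.
sequences = [[("c1", 0), ("c1", 1), ("c2", 0)],
             [("c1", 0), ("c1", 1), ("c3", 2)]]
print(estimate_transition_matrix(sequences)[("c1", 1)])
# {('c2', 0): 0.5, ('c3', 2): 0.5}
```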
In the testing phase, we present a multi-camera tracking algorithm that establishes correspondences between people within a maximum a posteriori (MAP) estimation framework, combining the learned human transition topology with an appearance model. The parameters learned in the training phase are updated with the incoming tracking results. We evaluate our method on real-world surveillance videos and report the experimental results.
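
To make the correspondence step concrete, here is a hedged sketch of MAP-style matching: when a person enters a camera view, every recently departed candidate is scored by the product of the learned transition probability and a color-histogram similarity (here the Bhattacharyya coefficient), and the highest-scoring candidate is chosen. The function names, the candidate representation, and the specific similarity measure are illustrative assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def bhattacharyya(h1, h2):
    """Similarity between two normalized color histograms (numpy arrays)."""
    return float(np.sum(np.sqrt(h1 * h2)))

def best_match(entry_zone, entry_hist, candidates, transition_prob):
    """Pick the departed candidate that maximizes transition * appearance.

    candidates: list of (person_id, exit_zone, color_histogram) tuples
    for people who recently left some camera view.
    transition_prob: dict mapping (exit_zone, entry_zone) -> probability
    learned in the training phase.
    """
    best_id, best_score = None, 0.0
    for person_id, exit_zone, hist in candidates:
        p_topology = transition_prob.get((exit_zone, entry_zone), 0.0)
        p_appearance = bhattacharyya(hist, entry_hist)
        score = p_topology * p_appearance
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id, best_score
```

In a fuller version the transit-time statistics gathered offline would also enter the score; they are omitted here for brevity.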

CHAPTER 1 INTRODUCTION
CHAPTER 2 BACKGROUND INFORMATION
  2.1 Monocular Approaches
  2.2 Multi-view Approaches
    2.2.1 Multiple Tracking with Overlapping Views
    2.2.2 Multiple Tracking with Non-overlapping Views
CHAPTER 3 PROPOSED ARCHITECTURE: OFFLINE STAGE
  3.1 Prior Knowledge of Scene Information
  3.2 Computation of the Human Trajectory
  3.3 Motion Model for Human Tracking
  3.4 Appearance Model for Human Tracking
      Color Model
      Probabilistic Occupancy Map
  3.5 Tracking Humans in Cameras with Overlapping Views
  3.6 Finding Entry and Exit Zones in Each Camera
  3.7 Building the Camera Topology Using a Hidden Markov Model
CHAPTER 4 PROPOSED ARCHITECTURE: ONLINE STAGE
  4.1 Problem Formulation
  4.2 Correspondence Establishment
  4.3 Computation of the Human Trajectory across Scenes
CHAPTER 5 EXPERIMENTAL RESULTS
CHAPTER 6 CONCLUSION AND FUTURE WORK
BIBLIOGRAPHY

Full-text access: available on campus from 2019-08-28; available off campus from 2019-08-28.