Author: 董郁華 (Dong, Yu-Hua)
Thesis Title: 連續手語之自適應手勢追蹤 (Adaptive Tracking of Gestures for Continuous Sign Language)
Advisor: 謝璧妃 (Hsieh, Pi-Fuei)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2009
Graduation Academic Year: 98 (ROC calendar)
Language: Chinese
Pages: 36
Keywords: Particle Filter, Level Set, Shape Prior
Taiwanese Sign Language (TSL) has forty-nine basic handshapes, and the same handshape can carry different meanings depending on the trajectory of the hands, so sign language recognition divides into two main tasks: trajectory tracking and handshape recognition. Gestures are among the most natural forms of human communication, which has made gesture recognition an active research area, and because TSL is defined by a finite, well-defined set of gestures it is a typical application of gesture recognition. In dynamic signing, however, the two hands move freely in three-dimensional space and frequently occlude each other: when the hands overlap in the 2D image, the hand farther from the camera is partially hidden, and the resulting incomplete silhouette degrades the final recognition result.
Occlusion is therefore a central problem in tracking TSL trajectories. When occlusion occurs, it is hard to tell whether the object being followed by the tracker is still the true target; without discriminative features, the tracker is likely to drift or fail even if a hand is occluded for only a short time, and accurate tracking over long durations is a great challenge. In this work we track the two hands with a particle filter combined with level-set-based shape models, and we propose a new, efficient, and robust tracking algorithm that handles partial occlusion.
In the experiments, we selected sign-language video sequences in which hand occlusion occurs. The results show that hand-shape information helps the tracker stay on the correct target and improves tracking accuracy.
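As a rough illustration of the particle-filter component described above (not the thesis implementation), the sketch below tracks a 2D hand position with a random-walk motion model and a toy skin-mask likelihood. The names (ParticleTracker, skin_likelihood), the window size, the particle count, and the motion-noise level are all hypothetical choices made for this example; in the proposed method the observation likelihood would additionally incorporate the level-set contour and shape prior, which the toy score here merely stands in for.

```python
# Minimal particle-filter tracking sketch (illustrative only, not the thesis code).
# State: 2D hand position with a random-walk motion model.  The observation
# likelihood is a toy skin-mask score; the proposed method would instead use a
# likelihood built from the level-set contour and shape prior.
import numpy as np

rng = np.random.default_rng(0)

def skin_likelihood(mask, positions, half_win=5):
    """Toy likelihood: fraction of skin pixels in a window around each particle.
    `mask` is a binary skin map (H x W); `positions` is an (N, 2) array of (row, col)."""
    h, w = mask.shape
    scores = np.empty(len(positions))
    for i, (r, c) in enumerate(np.round(positions).astype(int)):
        r0, r1 = np.clip([r - half_win, r + half_win + 1], 0, h)
        c0, c1 = np.clip([c - half_win, c + half_win + 1], 0, w)
        patch = mask[r0:r1, c0:c1]
        scores[i] = patch.mean() if patch.size else 0.0
    return scores + 1e-6          # keep weights strictly positive

class ParticleTracker:
    def __init__(self, init_pos, n_particles=200, motion_std=4.0):
        self.particles = np.tile(np.asarray(init_pos, float), (n_particles, 1))
        self.weights = np.full(n_particles, 1.0 / n_particles)
        self.motion_std = motion_std

    def predict(self):
        # Diffuse particles with Gaussian noise to cover possible hand motion.
        self.particles += rng.normal(0.0, self.motion_std, self.particles.shape)

    def update(self, mask):
        # Re-weight particles by the observation likelihood and normalise.
        w = skin_likelihood(mask, self.particles)
        self.weights = w / w.sum()

    def resample(self):
        # Multinomial resampling: duplicate likely particles, discard unlikely ones.
        idx = rng.choice(len(self.particles), size=len(self.particles), p=self.weights)
        self.particles = self.particles[idx]
        self.weights.fill(1.0 / len(self.particles))

    def estimate(self):
        # Weighted mean of the particle set as the current position estimate.
        return self.weights @ self.particles

# Toy run: a square "hand" blob drifting diagonally across a 100x100 skin mask.
tracker = ParticleTracker(init_pos=(20, 20))
for t in range(30):
    mask = np.zeros((100, 100))
    r, c = 20 + 2 * t, 20 + 2 * t
    mask[r - 6:r + 6, c - 6:c + 6] = 1.0      # simulated skin region
    tracker.predict()
    tracker.update(mask)
    print(t, tracker.estimate().round(1))
    tracker.resample()
```

Plain multinomial resampling is used here for brevity; systematic resampling, or resampling only when the effective sample size falls below a threshold, would be the usual refinements.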