Author: Wan, Tsung-Wei (萬宗瑋)
Title: Application of Hand Tracking and Gesture Recognition to an Interactive Game (手勢追蹤與辨識於互動遊戲之應用)
Advisor: Tsay, Tsing-Iuan (蔡清元)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of Publication: 2010
Graduation Academic Year: 98 (2009-2010)
Language: Chinese
Number of Pages: 72
Keywords: Moving Edges, Template Matching, Hand Tracking, Gesture Recognition
Abstract: This thesis develops a reliable real-time hand tracking and gesture recognition system. The system is built on a commercial pan/tilt/zoom camera platform that can be controlled by a personal computer through an RS-232 serial port. The overall architecture is divided into four main parts: a hand detector, a hand information extractor, a hand tracker, and a gesture recognizer. The first part detects the moving hand in the image, mainly using the moving-edges technique. The second part then automatically extracts the hand's illumination and color information so that the hand can be tracked. Next, the third part adopts a template matching method so that the system can track the hand continuously. Finally, a radar-scanning method allows the system to determine the number of fingers shown. The developed system is applied to an interactive paper-scissors-rock game, and numerous experiments are undertaken to validate the performance of the hand tracking and gesture recognition system.
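As an illustration of the detection-extraction-tracking pipeline described above, the following Python/OpenCV sketch shows one minimal way to combine moving-edge detection, YCrCb illumination/color extraction, and normalized-cross-correlation template matching. It is only a sketch under stated assumptions (OpenCV 4; the function names, thresholds, and kernel sizes are invented for demonstration), not the thesis's actual implementation.

```python
# Illustrative sketch only (assumes OpenCV 4); names and thresholds are invented
# for demonstration and are not taken from the thesis.
import cv2
import numpy as np

def moving_edges(prev_gray, curr_gray, diff_thresh=25, canny_lo=50, canny_hi=150):
    """Keep only edges that lie on image regions that changed between two frames."""
    frame_diff = cv2.absdiff(curr_gray, prev_gray)
    _, motion_mask = cv2.threshold(frame_diff, diff_thresh, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(curr_gray, canny_lo, canny_hi)
    return cv2.bitwise_and(edges, motion_mask)

def extract_hand_template(frame_bgr, moving_edge_map):
    """Take the largest moving-edge blob as the hand; return its patch and Y/Cr/Cb means."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    blob = cv2.morphologyEx(moving_edge_map, cv2.MORPH_CLOSE, kernel)  # close small gaps
    contours, _ = cv2.findContours(blob, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None, None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    patch = frame_bgr[y:y + h, x:x + w]
    ycrcb = cv2.cvtColor(patch, cv2.COLOR_BGR2YCrCb)
    stats = ycrcb.reshape(-1, 3).mean(axis=0)  # illumination (Y) and color (Cr, Cb)
    return patch, stats

def track_by_template(frame_bgr, template_bgr):
    """Locate the hand in a new frame by normalized cross-correlation."""
    result = cv2.matchTemplate(frame_bgr, template_bgr, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    return top_left, score  # caller can re-run detection if the score drops too low
```

In a tracking loop, track_by_template would be called on every new frame, and the detector would be re-run whenever the match score falls below a chosen confidence.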

Table of Contents:
    Abstract (Chinese)
    Abstract (English)
    Acknowledgments
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1  Introduction
        1.1  Motivation and Objectives
        1.2  Literature Review
        1.3  Contributions
        1.4  Thesis Organization
    Chapter 2  Background
        2.1  System Architecture
        2.2  Camera Platform Setup and Control
            2.2.1  Camera Modeling and Coordinate Definitions
            2.2.2  Camera Platform Control
        2.3  Fundamental Image Processing Algorithms
            2.3.1  Edge Detection
            2.3.2  Opening and Closing Operations
            2.3.3  Template Matching
            2.3.4  Color Space Conversion
            2.3.5  Skin Color Detection
    Chapter 3  Target Detection and Tracking
        3.1  Motion Detection
        3.2  Target Information Extraction
        3.3  Target Tracking
            3.3.1  Template Matching Tracking Scheme
            3.3.2  Updating the Target Template Information
    Chapter 4  Gesture Recognition
        4.1  Gesture Recognition (I)
            4.1.1  Recording the Distance and Angle from the Contour to the Centroid
            4.1.2  Threshold Setting for the Finger Count
        4.2  Arm Removal
        4.3  Gesture Recognition (II)
    Chapter 5  Experimental Results
        5.1  Automatic Target Tracking
            5.1.1  Changes in Target Pose
            5.1.2  Interference from Similar Objects
            5.1.3  Partial Occlusion of the Target
            5.1.4  Illumination Changes
            5.1.5  Low-Contrast Background
            5.1.6  Target Tracking with the Camera Platform
        5.2  Gesture Recognition
        5.3  Human Hand Tracking and Recognition
    Chapter 6  Conclusions and Future Work
        6.1  Conclusions
        6.2  Future Work
    References
    Vita
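The gesture recognizer outlined in Chapter 4 (recording contour-to-centroid distances and angles, then thresholding the finger count) suggests a simple radial-profile reading of the "radar scanning" mentioned in the abstract. The sketch below is a hypothetical illustration of that idea in Python/NumPy; the peak_ratio parameter and the assumption that the arm has already been removed (Section 4.2) are illustrative choices, not taken from the thesis.

```python
# Hypothetical sketch of a radial ("radar scan") finger count; peak_ratio is an
# invented parameter, and hand_mask is assumed to contain the hand with the arm removed.
import cv2
import numpy as np

def count_fingers(hand_mask, peak_ratio=0.6):
    """Count fingers from a binary hand mask via a centroid-to-contour radial profile."""
    contours, _ = cv2.findContours(hand_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0
    cnt = max(contours, key=cv2.contourArea)
    m = cv2.moments(cnt)
    if m["m00"] == 0:
        return 0
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]   # centroid of the hand region
    pts = cnt.reshape(-1, 2).astype(np.float64)
    dx, dy = pts[:, 0] - cx, pts[:, 1] - cy
    angles = np.arctan2(dy, dx)                          # angle of each contour point
    dists = np.hypot(dx, dy)                             # distance to the centroid
    profile = dists[np.argsort(angles)]                  # one sweep around the hand
    above = profile > peak_ratio * profile.max()         # fingertip candidates
    rising = above & ~np.roll(above, 1)                  # contiguous runs above the threshold
    return int(rising.sum())
```

For the paper-scissors-rock game, a count of roughly zero fingers would be read as rock, two as scissors, and five as paper.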

Full-text availability: on campus from 2013-08-30; off campus from 2013-08-30.