
Graduate Student: Chu, Chien-Hsun (朱建勳)
Thesis Title: Real-time Image Processing System for Surveillance and Security Robot Team (保全防災機器人團隊之即時影像處理系統)
Advisor: Li, Tzuu-Hseng S. (李祖聖)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2007
Academic Year of Graduation: 95 (2006-2007)
Language: English
Number of Pages: 101
Chinese Keywords: 保全機器人, 影像
English Keywords: Surveillance Robot, image
    This thesis presents the design of real-time face detection and tracking, self-localization, and auto-recharge systems for surveillance and security robots in an indoor environment; the robot's camera view can also be monitored over the Internet to assess on-site conditions. During patrol, the vision system not only detects suspicious persons in the environment automatically but also obtains the robot's own position by recognizing environmental features, so that the surveillance and security robot team can arrange its tasks. The thesis combines several image processing techniques, including skin-color analysis, erosion and dilation, and edge detection. Because a face is approximately elliptical after image processing, a randomized ellipse-detection method is then used to obtain the center and the major and minor axes of the ellipse, and this information is used to track the position of the face. In the localization system, combining RFID information with visual recognition of bar-code features in the environment accomplishes self-localization and the patrol task. In addition, users can assess on-site conditions through the transmitted images. Finally, experimental results verify the effectiveness and applicability of the proposed methods.
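
    The face-detection pipeline just described (skin-color analysis, erosion and dilation, edge detection, then ellipse detection) can be illustrated with a minimal Python/OpenCV sketch. The YCrCb skin-color thresholds and the kernel size below are assumed values, and cv2.fitEllipse stands in for the thesis's randomized ellipse-detection algorithm, which is not given in code form here:

        import cv2
        import numpy as np

        def detect_face_ellipse(frame_bgr):
            """Skin-color segmentation -> morphology -> edges -> ellipse fit."""
            # Complexion analysis: threshold skin tones in YCrCb space
            # (these Cr/Cb bounds are commonly used values, not the thesis's).
            ycrcb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2YCrCb)
            skin = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))

            # Mathematical morphology: erosion then dilation removes speckle noise.
            kernel = np.ones((5, 5), np.uint8)
            skin = cv2.dilate(cv2.erode(skin, kernel), kernel)

            # Edge detection on the cleaned binary mask.
            edges = cv2.Canny(skin, 50, 150)

            # Fit an ellipse to the largest edge contour; its center and axes
            # give the face position handed to the tracking controller.
            contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_NONE)
            contours = [c for c in contours if len(c) >= 5]  # fitEllipse needs >= 5 points
            if not contours:
                return None
            center, axes, angle = cv2.fitEllipse(max(contours, key=len))
            return center, axes, angle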

    This thesis proposes a real-time image processing approach for the face detection and tracking, self-localization, and auto-recharge systems of a surveillance and security robot in an indoor environment. Moreover, the remote scene can be monitored through a webcam mounted on the robot. During patrol, the robots not only detect strangers in the environment automatically but also localize themselves by recognizing environmental features, so that team assignments can be arranged. The thesis combines several image processing techniques, including complexion analysis, erosion and dilation, and edge detection. Because a face is approximately elliptical after image processing, we use randomized algorithms for ellipse detection to obtain the center and the major and minor axes of the ellipse, and then use these data to detect and track the face region. By combining RFID data with the vision system's recognition of bar-code features in the environment, we can complete self-localization and the patrol routine. Users can assess the sensed conditions from the data transmitted by the accompanying robots. Finally, practical experiments demonstrate the feasibility and effectiveness of the proposed schemes.
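
    The self-localization scheme fuses coarse RFID information with precise bar-code landmarks recognized by the vision system. The sketch below is a hypothetical illustration of that coarse-to-fine idea: the tag IDs, the landmark table, and all poses are invented for the example, since the abstract does not specify the actual data structures:

        from typing import Optional

        # Assumed map data: RFID tag ID -> region it covers, and
        # (region, bar-code ID) -> known landmark pose (x [m], y [m], heading [deg]).
        RFID_REGIONS = {"tag-A1": "corridor-east", "tag-B2": "lab-entrance"}
        BARCODE_POSES = {
            ("corridor-east", 3): (12.5, 4.0, 90.0),
            ("lab-entrance", 7): (2.0, 0.5, 180.0),
        }

        def localize(rfid_tag: str, barcode_id: Optional[int]):
            """Fuse an RFID read (coarse region) with a bar-code landmark (fine pose)."""
            region = RFID_REGIONS.get(rfid_tag)
            if region is None:
                return None        # unknown tag: fall back to odometry
            if barcode_id is None:
                return region      # coarse estimate only
            return BARCODE_POSES.get((region, barcode_id), region)

        print(localize("tag-A1", 3))     # (12.5, 4.0, 90.0): exact pose
        print(localize("tag-B2", None))  # 'lab-entrance': coarse region only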

    Abstract
    Acknowledgment
    Contents
    List of Figures
    List of Tables
    Chapter 1. Introduction
      1.1 Motivation
      1.2 Thesis Organization
    Chapter 2. Overview of the Surveillance and Security Robot
      2.1 Introduction
      2.2 Hardware Architecture of the Surveillance and Security Robot
        2.2.1 The Vision Module
        2.2.2 The Electro-Mechanical Compass Module
        2.2.3 The Ultrasonic Sensor Module
        2.2.4 The Infrared Sensor Module
        2.2.5 RFID
        2.2.6 The Fire Sensors Module
        2.2.7 The Power Circuit Module
        2.2.8 The Wireless Communication Module
        2.2.9 The Omni-directional Mobile Module
        2.2.10 The On-board NB
      2.3 Summary
    Chapter 3. Face Detection and Tracking Algorithm
      3.1 Introduction
      3.2 Fast Face Detection and Tracking Based on Vision System
        3.2.1 Image Pre-processing
        3.2.2 Complexion Analysis
        3.2.3 Linear Filter
        3.2.4 Edge Detection
        3.2.5 Sobel Operator
        3.2.6 Mathematical Morphology
      3.3 Randomized Algorithms for Ellipses Detection
        3.3.1 Introduction
        3.3.2 Determine the Center of the Ellipse
        3.3.3 Determine the Three Remnant Parameters
        3.3.4 Determine the Candidacy Ellipse
        3.3.5 Determine the True Ellipse
      3.4 Face Tracking
        3.4.1 Fuzzy Logic Based Image for Face Tracking
        3.4.2 Fuzzification Interface
        3.4.3 Decision Making Logic
        3.4.4 Knowledge Base
        3.4.5 The Position Correction
        3.4.6 The Flowchart of Whole Vision System
        3.4.7 Experimental Results
      3.5 Design and Implementation of SOPC Based Image for Face Tracking
      3.6 Summary
    Chapter 4. Self-Localization and Auto-recharge System
      4.1 Introduction
      4.2 Binary Image Transformation
      4.3 The Bar-code Localization Algorithm
      4.4 Self-Localization Combines Vision with RFID Technique
      4.5 The Auto-recharge System
      4.6 Summary
    Chapter 5. Experimental Results
      5.1 Introduction
      5.2 The Operation Interface
      5.3 Experimental Results of Image Processing and Localization
      5.4 The Behaviors of Auto-Recharge
    Chapter 6. Conclusions and Future Works
      6.1 Conclusions
      6.2 Future Works
    References
    Biography


    Full text availability: on campus from 2014-07-18; off campus from 2017-07-18.