
Author: 張峻銘 (Chang, Chun-Ming)
Title: 人形機器人足球賽視覺與策略系統之設計與實現
Design and Implementation of Vision and Strategy System for Humanoid Robot Soccer Competition
Advisor: 李祖聖 (Li, Tzuu-Hseng S.)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2009
Academic Year of Graduation: 97
Language: English
Number of Pages: 95
Keywords (Chinese): 團隊合作策略, 自主定位, 控制策略, 機器人足球賽, 影像處理系統
Keywords (English): collaboration strategy, control strategy, image processing system, robot soccer game, localization
Hits: 147; Downloads: 3
    Chinese abstract (translated): This thesis investigates the vision and strategy system used in humanoid robot soccer competition. Because the humanoid robot soccer game provides a dynamic and diverse environment, it serves as a test platform for research on many topics, including motion control, image processing, self-localization, and collaboration strategy. Images are captured by a CMOS sensor, and a PDA serves as the embedded image processing system. A fast image-block segmentation algorithm is proposed, and the conventional randomized circle detection method is improved to recognize the ball more quickly. Line features on the field are detected with the iterative end-point fitting method, and color information is used to recognize the goals and landmark poles. All control strategies are computed on a NIOS FPGA control board. The robot localizes itself with uniform Monte Carlo localization, and a probability grid map records the localization information. Ball tracking is achieved by a fuzzy controller that drives the two DC motors of the head. The robots cooperate through a wireless network, and dynamic role assignment yields the best offensive and defensive performance. Finally, experimental and competition results demonstrate the superior performance and robustness of the humanoid robot's vision and strategy system.

    The robot soccer game presents a dynamic and complex environment. It is a challenging platform for multi-agent research, involving topics such as motion control, image processing, localization, and collaboration strategy. This thesis concerns the development of the vision and strategy system for humanoid robot soccer competition. The image information is captured by a CMOS sensor and computed by an embedded image processing system implemented on a PDA. The main function of the image system is feature recognition, and the most prominent features on the field can be recognized efficiently. A fast object segmentation method that requires only a single pass over the image is proposed. An improved Randomized Circle Detection (RCD) method is adopted to realize ball recognition. Field line features are recognized with the Iterative End-Points Fitting Algorithm (IEPFA), and the border points of the goals and landmark poles can also be calculated correctly. The control strategy system is implemented on a NIOS FPGA board. The robot localizes itself via Uniform Monte Carlo Localization (Uniform MCL), and its belief in its position is represented by a position probability grid map. A fuzzy controller is developed for the ball tracking function. The collaboration strategy used in competition is also examined. Finally, the experimental results demonstrate the capability of the vision and strategy system and its efficiency and validity in the RoboCup humanoid soccer competition.
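    The ball is found with an improved randomized circle detection. As a hedged sketch of the basic RCD idea only (the trial count, inlier tolerance, and acceptance ratio below are illustrative assumptions, not the thesis parameters), the core loop can be written as:

```python
import math
import random

def fit_circle(p1, p2, p3):
    """Circumcircle of three points; returns (cx, cy, r) or None if collinear."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None  # degenerate (collinear) sample, no unique circle
    ux = ((ax * ax + ay * ay) * (by - cy) + (bx * bx + by * by) * (cy - ay)
          + (cx * cx + cy * cy) * (ay - by)) / d
    uy = ((ax * ax + ay * ay) * (cx - bx) + (bx * bx + by * by) * (ax - cx)
          + (cx * cx + cy * cy) * (bx - ax)) / d
    return ux, uy, math.hypot(ax - ux, ay - uy)

def randomized_circle_detection(edge_points, trials=200, tol=1.5, min_ratio=0.5):
    """Sample three edge points per trial, fit a candidate circle, and accept
    it once enough of the edge points lie within `tol` pixels of its rim."""
    for _ in range(trials):
        circle = fit_circle(*random.sample(edge_points, 3))
        if circle is None:
            continue
        cx, cy, r = circle
        inliers = sum(1 for x, y in edge_points
                      if abs(math.hypot(x - cx, y - cy) - r) <= tol)
        if inliers >= min_ratio * len(edge_points):
            return circle
    return None  # no well-supported circle found
```

    Because only three points determine each candidate, a correct guess is verified in a single pass over the edge points, which is what makes the method fast enough for an embedded platform.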
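    Line features are extracted by iterative end-point fitting: join the two ends of an edge-point chain with a chord, find the interior point farthest from that chord, and split the chain there if it deviates too much. A minimal recursive sketch (the deviation threshold is an assumed value, not the thesis setting):

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    den = math.hypot(bx - ax, by - ay)
    return num / den if den else math.hypot(px - ax, py - ay)

def iterative_end_point_fit(points, threshold=2.0):
    """Split an ordered chain of edge points into straight segments,
    returned as (start, end) pairs."""
    if len(points) < 3:
        return [(points[0], points[-1])]
    a, b = points[0], points[-1]
    dists = [point_line_distance(p, a, b) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1  # farthest interior point
    if dists[i - 1] <= threshold:
        return [(a, b)]  # the whole chain is straight enough
    # split at the farthest point and fit each half recursively
    return (iterative_end_point_fit(points[:i + 1], threshold)
            + iterative_end_point_fit(points[i:], threshold))
```

    On the soccer field, the resulting segment end points give the line intersections and corners that the localization stage can match against the field model.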
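    The position probability grid map stores the robot's belief over discretized field positions, and each landmark observation reweights that grid. A simplified Gaussian measurement update (the grid size, cell pitch, and sensor noise below are illustrative assumptions, and the Uniform MCL sampling step that accompanies it in the thesis is omitted):

```python
import math

def measurement_update(grid, landmark, measured_dist, cell=0.1, sigma=0.3):
    """Reweight each cell of a position probability grid by how well its
    distance to a known landmark matches a measured distance (Gaussian
    sensor model), then renormalize so the grid sums to one."""
    lx, ly = landmark
    new = []
    for i, row in enumerate(grid):
        new_row = []
        for j, p in enumerate(row):
            d = math.hypot(i * cell - lx, j * cell - ly)  # cell-to-landmark distance
            w = math.exp(-0.5 * ((d - measured_dist) / sigma) ** 2)
            new_row.append(p * w)
        new.append(new_row)
    total = sum(sum(r) for r in new)
    return [[p / total for p in r] for r in new]
```

    After a few such updates from different landmarks the belief collapses onto a small region, and the highest-probability cell serves as the pose estimate; injecting uniformly drawn samples, as Uniform MCL does, keeps the filter recoverable after a kidnapping.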

    Chapter 1. Introduction 1
      1.1 Motivation 1
      1.2 Thesis Organization 2
    Chapter 2. Hardware of the Humanoid Robot 4
      2.1 Introduction 4
      2.2 Mechanism Design 6
      2.3 Vision System 8
        2.3.1 Image Sensor 8
        2.3.2 Embedded Image Processing System 9
      2.4 Control Strategy Center 12
      2.5 Actuators 14
      2.6 Power System 16
      2.7 Summary 18
    Chapter 3. Vision System 19
      3.1 Introduction 19
      3.2 Fast Color Range Setting 21
      3.3 Object Recognition 27
        3.3.1 Fast Object Segmentation 27
        3.3.2 Ball Recognition 30
        3.3.3 Line Features Recognition 33
        3.3.4 Landmark Poles and Goals Recognition 41
      3.4 Target Position Acquirement 45
        3.4.1 Camera Calibration 45
        3.4.2 Target Position Derivation 47
        3.4.3 Inverse Kinematic Model of the Vision System 52
      3.5 Summary 53
    Chapter 4. Control Strategy 54
      4.1 Introduction 54
      4.2 The Control Strategy for Localization 56
        4.2.1 Position Probability Grid Map 56
        4.2.2 Uniform Monte Carlo Localization 59
      4.3 The Control Strategy for Common Behaviors 64
        4.3.1 Ball Tracking 64
        4.3.2 Getting Back into Stable Standing 65
      4.4 The Control Strategy for Robot Soccer Competition 66
        4.4.1 Communication Architecture 66
        4.4.2 The Strategy for Collaboration 68
        4.4.3 The Strategy for Attacker 69
        4.4.4 The Strategy for Assistant 72
        4.4.5 The Strategy for Defender 73
        4.4.6 The Strategy for Goalkeeper 74
      4.5 Summary 77
    Chapter 5. Experimental Results 78
      5.1 Introduction 78
      5.2 Experimental Results of Strategy for Localization 79
      5.3 Experimental Results of Strategy for Robot Soccer Competition 82
    Chapter 6. Conclusions and Future Works 89
      6.1 Conclusions 89
      6.2 Future Works 91
    References 92
    Biography 95

    [1] RoboCup, http://www.robocup.org/.
    [2] FIRA, http://www.fira.net/.
    [3] H. Messom, S. Gupta, and N. Demidenko, “Hough transform run length encoding for real-time image processing,” IEEE Transactions on Instrumentation and Measurement, vol. 56, no. 3, pp. 962-967, June 2007.
    [4] J. Davison, D. Reid, D. Molton, and O. Stasse, “MonoSLAM: real-time single camera SLAM,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052-1067, June 2007.
    [5] J. Schmudderich, V. Willert, J. Eggert, S. Rebhan, C. Goerick, and G. Sagerer, “Estimating object proper motion using optical flow, kinematics, and depth information,” IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 38, no. 4, pp. 1139-1151, Aug. 2008.
    [6] T. Gevers and H. Stokman, “Classifying color edges in video into shadow-geometry, highlight, or material transitions,” IEEE Transactions on Multimedia, vol. 5, no. 2, pp. 237-243, June 2003.
    [7] G. S. Gupta and D. Bailey, “Discrete YUV look-up tables for fast colour segmentation for robotic applications,” in Proc. IEEE Int. Conf. on Electrical and Computer Engineering, pp. 963-968, May 2008.
    [8] Q. Chen, H. Xie, and P. Woo, “Vision-based fast objects recognition and distances calculation of robots,” in Proc. IEEE Int. Conf. on Industrial Electronics Society, pp. 6-10, Nov. 2005.
    [9] J. Fasola and M. Veloso, “Real-time object detection using segmented and grayscale images,” in Proc. IEEE Int. Conf. on Robotics and Automation, vol. 5, pp. 4088-4093, May 2006.
    [10] Y. T. Su, Development and implementation of visual and control systems for humanoid robot, Master Thesis, Dept. of E.E., N.C.K.U., Taiwan, June 2006.
    [11] C. Y. Hu, FPGA-based fuzzy controller and image processing system for small-sized humanoid robot, Master Thesis, Dept. of E.E., N.C.K.U., Taiwan, June 2007.
    [12] T. C. Chen and K. L. Chung, “An efficient randomized algorithm for detecting circles,” Computer Vision and Image Understanding, Academic Press, pp. 172-191, 2001.
    [13] B. Lamiroy, L. Fritz, and O. Gaucher, “Robust circle detection,” in Proc. Int. Conf. on Document Analysis and Recognition, vol. 1, pp. 526-530, 2007.
    [14] R. O. Duda and P. E. Hart, "Use of the Hough transformation to detect lines and curves in pictures," Commun. ACM, vol. 15, no. 1, pp. 11-15, 1972.
    [15] L. Xu, E. Oja, and P. Kultanan, “A new curve detection method: randomized Hough transform (RHT),” Pattern Recognition Letters, vol. 11, no. 5, pp. 331-338, 1990.
    [16] L. Shapiro and G. Stockman, Computer Vision, Prentice-Hall, Inc., 2001.
    [17] W. Xun, J. Jianqiu, L. Yun, and J. Zhaoyi, “Multi-feature SUSAN corner detection method,” in Proc. IEEE Int. Conf. on Image Analysis Techniques, vol. 6044, 2005.
    [18] S. M. Smith and J. M. Brady, “SUSAN: a new approach to low level image processing,” International Journal of Computer Vision, vol. 23, no. 1, pp. 45-78, 1997.
    [19] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis. New York: John Wiley & Sons, 1973.
    [20] The rules of RoboCup Soccer Humanoid League 2009, http://www.tzi.de/humanoid/bin/view/Website/Downloads/.
    [21] J. S. Lee and Y. H. Jeong, “CCD camera calibrations and projection error analysis,” in Proc. IEEE Int. Conf. on Science and Technology, vol. 2, pp. 50-55, July 2000.
    [22] S. Ernst, C. Stiller, J. Goldbeck, and C. Roessig, “Camera calibration for lane and obstacle detection,” in Proc. IEEE/IEEJ/JSAI Int. Conf. on Intelligent Transportation System, pp. 356-361, Oct. 1999.
    [23] J. Heikkila and O. Silven, “A four-step camera calibration procedure with implicit image correction,” in Proc. IEEE Int. Conf. on Computer Vision and Pattern Recognition, vol. 3, pp. 1106-1112, June 1997.
    [24] Y. S. Chen, S. W. Shih, Y. P. Hung, and C. S. Fuh, “Camera calibration with a motorized zoom lens,” in Proc. IEEE Int. Conf. on Pattern Recognition, vol. 4, pp. 495-498, Sep. 2000.
    [25] Camera calibration tools, http://ubimon.doc.ic.ac.uk/dvs/m581.html/.
    [26] Z. Zhang, “A flexible new technique for camera calibration,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, Nov. 2000.
    [27] K. T. Holland, R. A. Holman, T. C. Lippmann, and J. Stanley, “Practical use of video imagery in nearshore oceanographic field studies,” IEEE Journal of Oceanic Engineering, vol. 22, no. 1, pp. 81-92, Jan. 1997.
    [28] Y. I. Abdel-Aziz and H. M. Karara, “Direct linear transformation from comparator coordinates into object space coordinates in close-range photogrammetry,” in Proc. ASP/UI Symp. Close-Range Photogrammetry, Urbana, IL, pp. 1–18, 1971.
    [29] T. Röfer, T. Laue, and D. Thomas, “Particle-filter-based self-localization using landmarks and directed lines,” RoboCup 2005: Robot Soccer World Cup IX, Lecture Notes in Artificial Intelligence, Springer, 2005.
    [30] S. Lenser and M. Veloso, “Sensor resetting localization for poorly modeled mobile robots,” in Proc. of the IEEE International Conference on Robotics and Automation (ICRA), 2002.
    [31] I. Harmati and K. Skrzypczyk, “Robot team coordination for target tracking using fuzzy logic controller in game theoretic framework,” in Proc. of IEEE ICRA, pp. 75-86, Jan. 2009.
    [32] R. Ueda, “Uniform Monte Carlo Localization - Fast and robust self-localization method for mobile robots,” in Proc. of IEEE ICRA, pp. 1353-1358, 2002.
    [33] P. Buschka, A. Saffiotti, and Z. Wasik, “Fuzzy landmark-based localization for a legged robot,” in Proc. Int. Conf. on Intelligent Robots and Systems, pp. 1205-1210, 2000.
    [34] D. Herrero-Perez and H. Martinez-Barbera, “Fast and robust recognition of field line intersections,” in Proc. IEEE 3rd Latin American Robotics Symposium (LARS '06), pp. 115-119, 2006.
    [35] H. S. Kim, H. S. Shim, M. J. Jung, and J. H. Kim, “Action selection mechanism for soccer robot,” in Proc. Int. Conf. on Computational Intelligence in Robotics and Automation, pp. 390-395, 1997.
    [36] H. S. Shim, Y. G. Sung, S. H. Kim, and J. H. Kim, “Design of action level in a hybrid control structure for vision based soccer robot,” in Proc. Int. Conf. on Intelligent Robots and Systems, pp. 1406-1411, 1999.
    [37] H. L. Sng, G. S. Gupta, and C. H. Messom, “Strategy for collaboration in robot soccer,” in Proc. Int. Conf. on Electronic Design, Test and Application, pp. 347-351, 2002.

    Full text available on campus: 2012-07-28
    Available off campus: 2014-07-28