
Graduate student: Yu, Chen-Yu (余振宇)
Thesis title: Design and Implementation of Object Recognition and Robot Eye-Hand Cooperation Control (物件辨識與機器人手眼協調控制之設計與實現)
Advisor: Li, Tzuu-Hseng S. (李祖聖)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2011
Academic year of graduation: 99 (2010-2011)
Language: English
Number of pages: 80
Keywords: Object recognition, Eye-hand cooperation, Robot arm

    This thesis discusses the design and implementation of object recognition and eye-hand cooperation control for a robot. First, the hardware architecture of the eye-hand cooperation robot is described. The robot is built from the upper body of a home service robot: a notebook computer serves as the operation core and integrates a 6-DOF robot arm with an object recognition vision system. The robot arm consists of 11 servo motor modules, and the vision system uses a 3D stereo camera and a webcam to perform object recognition. Second, the software control architecture is described. Object recognition integrates the color, shape, and local features of the object, and the computation of the SURF (Speeded-Up Robust Features) algorithm is parallelized on a GPU (Graphics Processing Unit). Once recognition is complete, the vision system localizes the target and sends its position to the operation core. Applying forward kinematics, inverse kinematics, and symmetric trapezoidal acceleration/deceleration, the operation core generates position and speed commands and sends them to the servo motor modules to drive the robot arm. Finally, experiments are carried out on selected events of the HIWIN robot arm competition, and applications of eye-hand cooperation control are discussed.
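The symmetric trapezoidal acceleration/deceleration mentioned in the abstract is a standard motion-generation technique: accelerate at a constant rate, cruise at the maximum velocity, then decelerate symmetrically. The sketch below is only an illustration of that general technique, not the thesis's actual implementation; the function name and parameters are assumptions.

```python
def trapezoidal_velocity(distance, v_max, accel, t):
    """Velocity command at time t for a symmetric trapezoidal profile.

    Accelerates at `accel` up to `v_max`, cruises, then decelerates
    symmetrically. If `distance` is too short to reach `v_max`, the
    profile degenerates to a triangular one with a lower peak velocity.
    """
    t_acc = v_max / accel                # time to reach cruise velocity
    d_acc = 0.5 * accel * t_acc ** 2     # distance covered while accelerating
    if 2.0 * d_acc > distance:           # triangular case: v_max is never reached
        t_acc = (distance / accel) ** 0.5
        v_peak = accel * t_acc
        t_cruise = 0.0
    else:
        v_peak = v_max
        t_cruise = (distance - 2.0 * d_acc) / v_max
    t_total = 2.0 * t_acc + t_cruise
    if t < 0.0 or t > t_total:
        return 0.0                       # outside the motion window
    if t < t_acc:
        return accel * t                 # acceleration phase
    if t < t_acc + t_cruise:
        return v_peak                    # constant-velocity phase
    return accel * (t_total - t)         # symmetric deceleration phase
```

Integrating (sampling) this velocity profile at the servo update rate would yield the position commands; the symmetry keeps acceleration and deceleration times equal, which is what makes the profile "symmetric."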

    Abstract
    Acknowledgement
    Contents
    List of Figures
    List of Tables
    Chapter 1. Introduction
      1.1 Motivation
      1.2 Software and Hardware
    Chapter 2. Design of Eye-Hand Cooperation Robot
      2.1 Introduction
      2.2 System Architecture of Eye-Hand Cooperation Robot
      2.3 Hardware Architecture of Eye-Hand Cooperation Robot
        2.3.1 Central Operation Units
        2.3.2 Servo Motor Module
        2.3.3 Power Supply Unit and Signal Circuit Board
        2.3.4 Stereo Vision Module
        2.3.5 Hardware Configuration of Eye-Hand Cooperation Robot
      2.4 Summary
    Chapter 3. Design of the Trajectory Control for Robot Arm
      3.1 Introduction
      3.2 Direct Kinematic Module of 6-DOF Robot Arm
      3.3 Inverse Kinematic Module of 6-DOF Robot Arm
      3.4 Motion Generation for Trajectory Control
      3.5 Summary
    Chapter 4. Design of Object Recognition Vision System
      4.1 Introduction
      4.2 Object HSV Color Space Analysis
      4.3 Object Shape Analysis
      4.4 GPU Based SURF (Speeded-Up Robust Features)
        4.4.1 Integral Image
        4.4.2 Interest Point Detection
        4.4.3 Interest Point Description and Matching
      4.5 Summary
    Chapter 5. Experimental Results
      5.1 Introduction
      5.2 The Operation Interface of Eye-Hand Cooperation Robot
      5.3 Experimental Results of Eye-Hand Cooperation Robot
        5.3.1 Arrange the Dominoes
        5.3.2 Catch and Place the Object
      5.4 Summary
    Chapter 6. Conclusion and Future Work
      6.1 Conclusion
      6.2 Future Work
    References
    Biography
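Section 4.4.1 of the contents lists the integral image, the summed-area table on which SURF's box-filter responses rely. The following is a minimal pure-Python sketch of that standard technique (not the thesis's GPU implementation); the function names are assumptions.

```python
def integral_image(img):
    """Summed-area table: ii[y][x] = sum of img[0..y][0..x]."""
    h, w = len(img), len(img[0])
    ii = [[0] * w for _ in range(h)]
    for y in range(h):
        row_sum = 0
        for x in range(w):
            row_sum += img[y][x]                      # running sum of this row
            ii[y][x] = row_sum + (ii[y - 1][x] if y > 0 else 0)
    return ii

def box_sum(ii, x0, y0, x1, y1):
    """Sum over the inclusive rectangle (x0,y0)-(x1,y1) in O(1)
    using at most four table lookups."""
    total = ii[y1][x1]
    if x0 > 0:
        total -= ii[y1][x0 - 1]
    if y0 > 0:
        total -= ii[y0 - 1][x1]
    if x0 > 0 and y0 > 0:
        total += ii[y0 - 1][x0 - 1]
    return total
```

Because any box sum costs a constant number of lookups regardless of box size, SURF can evaluate its approximated Hessian filters at every scale in constant time per pixel, which is also what makes the algorithm amenable to GPU parallelization.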

    [1] S. W. Wijesoma, D. F. H. Wolfe, and R. J. Richards, “Eye-to-Hand Coordination for Visual-Guided Robot Control Application,” The International Journal of Robotics Research, Vol. 12, No. 1, pp. 65-78, February 1993.
    [2] G. Flandin, F. Chaumette, and E. Marchand, “Eye-in-hand / Eye-to-hand Cooperation for Visual Servoing,” in Proceedings of the IEEE International Conference on Robotics & Automation, pp. 2741-2746, San Francisco, April 2000.
    [3] H. Bay, A. Ess, T. Tuytelaars, and L.V. Gool, “Speeded-Up Robust Features (SURF),” Computer Vision and Image Understanding, Vol. 110, pp. 346-359, 2008.
    [4] D. Lowe, “Distinctive Image Features from Scale-Invariant Keypoints,” International Journal of Computer Vision, Vol. 60, No. 2, pp. 91-110, 2004.
    [5] I. Biederman, “Recognition-by-Components: A Theory of Human Image Understanding,” Psychological Review, Vol. 94, No. 2, pp. 115-147, 1987.
    [6] ROBOTIS, http://www.robotis.com/zbxe/intro/
    [7] Point Grey Research Inc., Stereo Vision SDK Manual. Available:
    http://www.ptgrey.com/support/downloads/
    [8] L. W. Tsai, ROBOT ANALYSIS–The Mechanics of Serial and Parallel Manipulators, Wiley-IEEE, 1999.
    [9] J. Xie, S. Yan, and W. Qiang, “A Method for Solving The Inverse Kinematics Problem of 6-DOF Space Manipulator,” in Proceedings of the International Symposium on Systems and Control in Aerospace and Astronautics, pp. 379-382, Harbin, January 2006.
    [10] G. Bradski and A. Kaehler, Learning OpenCV: Computer vision with the OpenCV library: O'Reilly Media, 2008.
    [11] D. Douglas and T. Peucker, “Algorithms for the reduction of the number of points required to represent a digitized line or its caricature,” Canadian Cartographer, pp. 112-122, 1973.
    [12] T. Lindeberg, “Feature detection with automatic scale selection,” International Journal of Computer Vision, Vol. 30, No. 2, pp.79-116, 1998.
    [13] N. Zhang, “Computing Optimised Parallel Speeded-Up Robust Features (P-SURF) on Multi-Core Processors,” International Journal of Parallel Programming, Vol. 38, No. 2, pp. 138-158, 2010.
    [14] R.C. Gonzalez and R.E. Woods, Digital Image Processing 3/e, Prentice Hall, 2008.
    [15] HIWIN robot arm competition 2011 rulebook. Available: http://www.robotown.org.tw/content.php?r=134/
    [16] OpenMP.org. Available: http://openmp.org/wp/
    [17] T. Watson. videoInput Library. Available: http://muonics.net/school/spring05/videoInput/
    [18] CUDA Toolkit and NVIDIA GPU computing SDK. Available: http://developer.nvidia.com/cuda-downloads
    [19] OpenSURF - The Official Home of the Image Processing Library. Available: http://www.chrisevansdev.com/opensurf.html
    [20] CUDA SURF – A real-time implementation for SURF. Available: http://www.d2.mpi-inf.mpg.de/surf
    [21] Logitech. Available: http://www.logitech.com/en-us/home
    [22] Rob Hess, Particle Filter Object Tracking. Available: http://blogs.oregonstate.edu/hess/code/particles/
    [23] T. Terriberry, L. French, and J. Helmsen, “GPU Accelerating Speeded-Up Robust Features,” in Proceedings of the 4th International Symposium on 3D Data Processing, Visualization and Transmission, pp. 355-362, June 2008.
    [24] J.G. Bollinger and N.A. Duffie, Computer Control of Machines and Processes, Addison Wesley, 1988.
    [25] S. Zhang and Y. Chu, GPU高性能運算之CUDA (CUDA for High-Performance GPU Computing), 中國水利水電出版社, 2009.
    [26] L. M. J. Florack, B. M. ter Haar Romeny, J. J. Koenderink, and M. A. Viergever, “General intensity transformations and differential invariants”, Journal of Mathematical Imaging and Vision, Vol. 4, No. 2, pp.171–187, 1994.

    Full text available: on campus from 2016-08-22; off campus from 2016-08-22.