
Author: Hsu, Mau-Hsiung (許貿雄)
Thesis Title: Pose Control of Mobile Robots for Vision-Guided Material Grasping (視覺導引物件抓取之自走型機器人之姿態控制)
Advisor: Tsay, Tsing-Iuan (蔡清元)
Degree: Master
Department: College of Engineering, Department of Mechanical Engineering
Year of Publication: 2002
Graduating Academic Year: 90
Language: English
Pages: 93
Keywords (Chinese, translated): mobile robots, camera calibration, corner and centroid detection, edge detection, task encoding
Keywords (English): task encoding, edge detection, camera model calibration, mobile robots, corner and centroid detection
  • Abstract (in Chinese, translated)
    Mobile robots are used on wafer-fab production lines to transport and load/unload wafer cassettes and wafer pods. The robot consists primarily of a mobile base, a robot manipulator, and a vision system. The flexible material-handling system proposed in this study not only reduces labor costs but also provides greater reliability and performance in transporting and loading/unloading workpieces.
    When the mobile robot performs pick-and-place tasks, limitations of its guidance method introduce positioning errors. A vision system is therefore applied to guide the manipulator to the specified location to pick and place the workpiece. For an eye-in-hand vision system, this study further proposes a Cartesian-coordinate-based, look-and-move, task-encoding control law: the corners and centroid of the workpiece are located in the acquired image, and vision theory is used to estimate the pose of the workpiece relative to the current end-effector, so that the end-effector can be positioned precisely in the target pose and grasp the workpiece. The techniques involved include image enhancement, edge detection, corner and centroid detection, camera parameter calibration, camera-to-end-effector pose estimation, the use of a zoom lens, task encoding, and system control.
    Finally, experiments verify the proposed theories and strategies. The workpiece is placed at various positions and orientations, including tilted at an angle, to determine whether its three-dimensional pose can be obtained quickly and the workpiece grasped accurately.
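    The relative-pose step described above, composing the hand/eye transform with the camera-to-workpiece pose estimated from the image, can be sketched with homogeneous transforms. This is a minimal illustration only; the frame layout and numbers are hypothetical, not the thesis's calibration results.

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical hand/eye calibration result: camera mounted 0.1 m ahead of the end-effector.
T_ee_cam = homogeneous(np.eye(3), [0.0, 0.0, 0.1])

# Hypothetical camera-to-workpiece pose estimated from corner/centroid features.
T_cam_obj = homogeneous(np.eye(3), [0.2, 0.0, 0.5])

# Pose of the workpiece relative to the current end-effector: chain the transforms.
T_ee_obj = T_ee_cam @ T_cam_obj
print(T_ee_obj[:3, 3])  # translation of the workpiece in the end-effector frame
```

The same composition, with the calibrated rotation blocks filled in, is what lets a look-and-move controller command the end-effector directly to a grasp pose.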

    Abstract (in English)
    Mobile robots frequently replace humans in handling and transporting wafer carriers on semiconductor production lines. The constructed mobile robot is primarily composed of a mobile base, a robot manipulator, and a vision system. This flexible material transfer system reduces labor costs and provides reliable, efficient transportation and handling.
    During pick-and-place operations between a predefined station and the mobile robot, position and orientation errors of the mobile base are inevitably caused by the guidance control system. Thus, this study employs an eye-in-hand vision system to provide visual information for controlling the manipulator of the mobile robot to accurately grasp stationary material. This work further presents a position-based, look-and-move, task-encoding control strategy for the eye-in-hand vision architecture that keeps all target features in the camera's field of view throughout visual guidance. Moreover, the manipulator can quickly approach the material and precisely position the end-effector in the desired pose. Numerous techniques are required to implement such a task, including image enhancement, edge detection, corner and centroid detection, camera model calibration, robotic hand/eye calibration, the use of a camera with controllable zoom and focus, and a task-encoding scheme.
    Finally, these technologies are experimentally applied to realize a manipulator that can quickly approach a target object and precisely position its end-effector in the desired pose relative to the object, regardless of where the object is located on a station. Specific demonstrations include grasping the target object at different locations on the station and grasping it when tilted at various angles relative to the station.
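    Two of the listed steps, centroid detection and corner location by least-squares line fitting, can be made concrete with a short sketch. This is not the thesis's implementation; the data below are synthetic, and the centroid comes from zeroth- and first-order image moments while a corner is taken as the intersection of two fitted edge lines.

```python
import numpy as np

def centroid(binary):
    """Centroid of a binary region from zeroth- and first-order image moments."""
    ys, xs = np.nonzero(binary)
    return xs.mean(), ys.mean()

def fit_line(points):
    """Least-squares fit y = m*x + b to edge samples of shape (N, 2)."""
    m, b = np.polyfit(points[:, 0], points[:, 1], 1)
    return m, b

def corner(line1, line2):
    """Corner as the intersection of two fitted edge lines."""
    (m1, b1), (m2, b2) = line1, line2
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

# Synthetic example: a 4x4 bright square whose centroid is easy to verify.
img = np.zeros((10, 10), dtype=np.uint8)
img[3:7, 2:6] = 1
cx, cy = centroid(img)  # -> (3.5, 4.5)

# Two synthetic edges meeting at (1, 3): y = 2x + 1 and y = -x + 4.
e1 = np.array([[x, 2 * x + 1] for x in range(5)], dtype=float)
e2 = np.array([[x, -x + 4] for x in range(5)], dtype=float)
print(corner(fit_line(e1), fit_line(e2)))  # approximately (1.0, 3.0)
```

Intersecting fitted lines rather than thresholding a corner-response map is robust to a few missing edge pixels, which matters when the target must stay detectable across the whole approach.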
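    To make the camera-model vocabulary concrete: a calibrated pinhole camera maps a 3-D point through the extrinsics [R|t] and the intrinsic matrix K to pixel coordinates. The sketch below uses illustrative parameter values, not the calibrated values from the experiments.

```python
import numpy as np

def project(K, R, t, Pw):
    """Pinhole projection: pixel ~ K (R @ Pw + t), then divide by depth."""
    Pc = R @ np.asarray(Pw) + t   # world -> camera frame (extrinsics)
    u, v, w = K @ Pc              # apply intrinsics
    return u / w, v / w

# Illustrative intrinsics: focal length 500 px, principal point (320, 240).
K = np.array([[500.0,   0.0, 320.0],
              [  0.0, 500.0, 240.0],
              [  0.0,   0.0,   1.0]])
R, t = np.eye(3), np.zeros(3)

# A point on the optical axis projects to the principal point.
print(project(K, R, t, [0.0, 0.0, 1.0]))  # -> (320.0, 240.0)
```

Camera model calibration recovers K (and lens distortion), while hand/eye calibration recovers where the camera sits on the arm; both are needed before image measurements can be turned into end-effector motions.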

    Table of Contents
    Abstract (in Chinese)
    Abstract (in English)
    Acknowledgements
    Table of Contents
    List of Tables
    List of Figures
    Nomenclature
    1 Introduction
      1.1 Preface
      1.2 Motivation and Objective
      1.3 Literature Survey
      1.4 Contribution
      1.5 Thesis Organization
    2 Backgrounds
      2.1 Constructed Mobile Robot
        2.1.1 Mobile Base
        2.1.2 Robot Manipulator
        2.1.3 Vision System
      2.2 Hardware Architecture of the Constructed Mobile Robot
      2.3 Task of the Mobile Robot and Goal of the Research
    3 Corner and Centroid Detection
      3.1 Image Preprocessing and Centroid Detection
        3.1.1 Image Preprocessing
        3.1.2 Locating the Centroid of the Target Area
      3.2 Edge Detection
      3.3 Quickly Estimating the Approximate Locations of Corners of the Target Area
      3.4 Least-Squares Line Fitting
      3.5 Determining the Corners of the Target Area
    4 Calibrations
      4.1 Camera Model Calibration
        4.1.1 Determining the Intrinsic Parameters
        4.1.2 Determining the Extrinsic Parameters
      4.2 Robotic Hand/Eye Calibration
        4.2.1 3D Robotic Hand/Eye Calibration
        4.2.2 Proposed Robotic Hand/Eye Calibration
    5 Relative End-Effector Control Using Position-Based Task Encoding
      5.1 Position-Based Task Encoding for the Robotic System with an Eye-in-Hand Configuration
      5.2 Using a Camera with Controllable Zoom and Focus to Locate the Target Object Accurately
      5.3 Control Strategy for Approaching the Target Object
        5.3.1 Definition of Notation
        5.3.2 Four Off-Line Tasks
        5.3.3 Position-Based Look-and-Move Task Encoding Structure
        5.3.4 Control Strategy
    6 Experimentation
      6.1 Experimental Setup
      6.2 Camera Model and Hand/Eye Calibrations
      6.3 Positioning Performance by the Position-Based Task Encoding Algorithm
    7 Conclusion
      7.1 Summary
      7.2 Future Improvement
    Bibliography


    Full text released on campus: 2007-08-28
    Full text released off campus: 2007-08-28