
Student: Huang, Qi-Ming (黃啟銘)
Title: Automatically Visual-Based Robot Arm Calibration and Pick and Place for Motion Target
Advisor: Lien, Jenn-Jier (連震杰)
Co-advisor: Guo, Shu-Mei (郭淑美)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Publication year: 2017
Graduation academic year: 105
Language: Chinese
Pages: 81
Chinese keywords: 機械手臂, 視覺定位, 攝影機校正, 機械手臂視覺校正, 機械手臂取放
English keywords: Robot Arm, Visual Positioning, Camera Calibration, Hand-Eye Calibration, Pick and Place
Hits: 152; Downloads: 9
    A fixed-camera (eye-to-hand) robot system comprises two subsystems: a visual subsystem (the camera) and a control subsystem (the robot arm). The two subsystems are initially unrelated, so a calibration step must compute the transformation between the camera and the robot base, which converts coordinates between the two frames. Traditional calibration procedures are time-consuming and difficult for non-experts. This thesis proposes an automatic calibration algorithm in which the robot arm grips a chessboard and moves it through seven specific poses in front of the camera. The whole procedure takes only about 30 seconds, and the standard deviation of the calibration results is below 0.3 mm, making the method more stable, faster, and safer than traditional approaches. The calibration result is then applied to a moving-target pick-and-place experiment: vision estimates the target's pose in space and guides the robot arm to grasp the moving object. The camera is mounted at an oblique angle, which improves image accuracy and avoids occlusion, and contour moments are used to locate and track the moving target. The system predicts the target's 3D pose and controls the arm to grasp it, achieving a 97% success rate at conveyor-belt speeds of up to 120 mm/s.
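The calibrated camera-to-base transform described in the abstract is what lets vision measurements drive the arm. A minimal sketch of that coordinate conversion with a 4x4 homogeneous transform (the rotation and translation below are made-up placeholder values, not the thesis's calibration result):

```python
import numpy as np

# Hypothetical calibration result: the homogeneous transform T_base_cam
# mapping camera-frame coordinates into the robot-base frame.
T_base_cam = np.eye(4)
T_base_cam[:3, :3] = np.array([[0, -1, 0],
                               [1,  0, 0],
                               [0,  0, 1]])   # 90-degree rotation about z
T_base_cam[:3, 3] = [400.0, 50.0, 300.0]      # translation in mm

def cam_to_base(p_cam, T=T_base_cam):
    """Map a 3D point from camera coordinates to robot-base coordinates."""
    p = np.append(np.asarray(p_cam, dtype=float), 1.0)  # homogeneous coords
    return (T @ p)[:3]

# A target 100 mm along the camera's x axis lands at (400, 150, 300) mm
# in the base frame: the rotation sends (100, 0, 0) to (0, 100, 0),
# then the translation is added.
p_base = cam_to_base([100.0, 0.0, 0.0])
```

The hand-eye calibration step itself estimates `T_base_cam` from the seven chessboard poses; once it is known, every vision-detected grasp point is pushed through this one matrix product before being sent to the arm controller.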

    Contents
    摘要 (Chinese Abstract) IV
    Abstract V
    Acknowledgments VI
    Content VII
    Content of Figure IX
    Content of Table XII
    Chapter 1. Introduction 1
      1.1 Motivation 1
      1.2 Related Works 2
      1.3 Contribution 7
    Chapter 2. Vision-Based Robot Arm System 9
      2.1 System Setup and Framework 10
      2.2 Visual Subsystems 14
      2.3 Control Subsystems 16
    Chapter 3. Calibration between Robot Arm and Camera 22
      3.1 Camera Calibration 25
      3.2 Hand-Eye Manual Calibration 29
      3.3 Hand-Eye Automatic Calibration 33
    Chapter 4. Robot Arm Pick and Place for Motion Target 43
      4.1 Rectification of Camera Perspective Plane 44
      4.2 Target Extraction and Tracking Using Color Modeling and Contour Moments 47
      4.3 Target Coordinate Transformation from 2D Image Position to 3D Robot Arm Pose 54
      4.4 Motion Target Prediction for Pick and Place 57
    Chapter 5. Experimental Results 62
      5.1 Translation or Rotation Stability Comparison of Hand-Eye Automatic Calibration 62
      5.2 Manual and Auto Comparisons of Hand-Eye Calibration 69
      5.3 Result of Robot Arm Pick and Place 73
    Chapter 6. Conclusion and Future Work 78
    Reference 80
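The contour-moment tracking of Section 4.2 ultimately reduces to computing the target centroid from raw image moments. A standalone numpy sketch of that centroid computation (the thesis presumably applies contour moments to color-segmented images, e.g. via OpenCV; this illustrative version operates directly on a binary mask):

```python
import numpy as np

def centroid_from_moments(mask):
    """Centroid (cx, cy) of a binary image from raw moments m00, m10, m01."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)            # zeroth moment: number of foreground pixels
    if m00 == 0:
        return None          # empty mask: no target detected
    m10 = xs.sum()           # first moment about x
    m01 = ys.sum()           # first moment about y
    return m10 / m00, m01 / m00   # (cx, cy) = (m10/m00, m01/m00)

# A 3x3 blob of ones centred at (x=3, y=2) in a 6x6 image:
mask = np.zeros((6, 6), dtype=np.uint8)
mask[1:4, 2:5] = 1
cx, cy = centroid_from_moments(mask)
```

Tracking the centroid over successive frames gives the target's image-plane velocity, which feeds the motion prediction of Section 4.4 for timing the grasp.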

    [1] G. Bradski, "Computer Vision Face Tracking for Use in a Perceptual User Interface," IEEE Workshop Applications of Computer Vision, pp. 790-799, 1998.
    [2] S. Belongie, "Rodrigues' Rotation Formula," MathWorld--A Wolfram Web Resource, 1999.
    [3] G. Bradski and A. Kaehler, "Learning OpenCV: Computer Vision with the OpenCV Library," O'Reilly Media, 2008.
    [4] A. De La Escalera and J. Armingol, "Automatic Chessboard Detection for Intrinsic and Extrinsic Camera Parameter Calibration," Sensors, 10, pp. 2027–2044, 2010.
    [5] G.L. Du and P. Zhang, "Online Robot Calibration Based on Vision Measurement," Robotics and Computer-Integrated Manufacturing, 29 (6), pp. 484-492, 2013.
    [6] C. Finn and S. Levine, "Deep Visual Foresight for Planning Robot Motion," arXiv preprint arXiv:1610.00696, 2016.
    [7] S. Huang, K. Murakami, Y. Yamakawa, T. Senoo and M. Ishikawa, "Fast Peg-and-Hole Alignment Using Visual Compliance," IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) November 3-7, 2013.
    [8] R. Hartley and A. Zisserman, "Multiple View Geometry in Computer Vision," Cambridge University Press, pp. 588-596, 2003.
    [9] P. Joubert, "Dense Object Reconstruction from Uncontrolled Motion Using the Microsoft Kinect Sensor," Applied Mathematics Honours Project, 2011.
    [10] Y. Meng and H. Zhuang, "Autonomous Robot Calibration Using Vision Technology," Robotics and Computer-Integrated Manufacturing, vol. 23, pp. 436-446, 2007.
    [11] J. Miseikis, K. Glette, O. J. Elle, and J. Torresen, "Automatic Calibration of a Robot Manipulator and Multi 3D Camera System," CoRR, vol. abs/1601.01566, 2016.
    [12] D. Pagliari and L. Pinto, "Calibration of Kinect for Xbox One and Comparison between the Two Generations of Microsoft Sensors," Sensors, 15, pp. 27569-27589, 2015.
    [13] L. Pinto and A. Gupta, "Supersizing Self-Supervision: Learning to Grasp from 50k Tries and 700 Robot Hours," IEEE International Conference on Robotics and Automation, pp. 3406-3413, 2015.
    [14] A.R. Smith, "Color Gamut Transform Pairs," SIGGRAPH 78, pp. 12-19, 1978.
    [15] S. Suzuki and K. Abe, "Topological Structural Analysis of Digitized Binary Images by Border Following," Computer Vision Graphics and Image Processing, vol. 30, no. l, pp. 32-46, 1985.
    [16] H. Sung, S. Lee and D. Kim, "A Robot-Camera Hand/Eye Self-Calibration System Using a Planar Target," 44th International Symposium on Robotics, pp. 1-4, 2013.
    [17] A. Watanabe, S. Sakakibara, K. Ban, M. Yamada, G. Shen and T. Arai, "A Kinematic Calibration Method for Industrial Robots Using Autonomous Visual Measurement," Annals of the CIRP, vol.55, pp. 1-6, 2006.
    [18] YASKAWA, "FS100 Instructions," pp. 8.21-8.42, 2012.
    [19] YASKAWA, "Motoman MH5LF Robot," 2013.
    [20] YASKAWA, "FS100 Operator’s Manual," pp. 2.5-2.15, 2014.
    [21] YASKAWA, "FS100 Options Instructions: Programmer Manual for New Language Environment MotoPlus," pp. 4.1-4.6, 2014.
    [22] Z. Zhang, "Flexible Camera Calibration by Viewing a Plane from Unknown Orientations," International Conference on Computer Vision, pp. 666-673, 1999.
    [23] Z. Zhang, "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.

    On-campus access: open from 2022-08-22
    Off-campus access: not available
    The electronic thesis has not been authorized for public release; please consult the library catalog for the print copy.