
Graduate Student: Lin, Chieh-Chun (林潔君)
Thesis Title: Study on Vision Based Object Grasping of Industrial Manipulator (基於視覺之工業用機械手臂物件夾取研究)
Advisor: Cheng, Ming-Yang (鄭銘揚)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2015
Graduation Academic Year: 103
Language: Chinese
Number of Pages: 85
Chinese Keywords: stereo vision, stereo matching, depth estimation, six-axis articulated robot
English Keywords: Stereo Vision, Robot Grasping, Feature Matching, 3D Reconstruction, Six-Axis Articulated Robot
Usage: 128 views, 29 downloads
    In recent years, an aging population and declining birth rates have led to labor shortages and rising labor costs, severely impacting labor-intensive sectors such as manufacturing and processing, and bringing increasing attention to automation. At the same time, as industry moves toward "small-volume, large-variety" production, the flexibility of production lines has become more critical than ever. In the past, using industrial robots to assist production required setting up the line for specific objects through a teach pendant; performance and accuracy depended heavily on the operator's skill, and the process was time-consuming and greatly reduced operational flexibility. Integrating computer vision into automated production lines is an effective way to address these problems. On production lines, robotic manipulators are often assigned to highly repetitive stations, such as pick-and-place tasks for box packing and tray arrangement. In automated pick-and-place applications, the system must know the position and state of objects on the conveyor belt or inside a box and place them at the desired locations, so the relationship between the object images captured by the cameras and three-dimensional space, as well as the relationship between object pose and the manipulator's grasping strategy, become important issues. This thesis studies vision-based object grasping for industrial manipulators, covering the construction of the automation system, vision-based acquisition of object spatial information, and the manipulator's grasping pose. Based on an eye-to-hand stereo vision architecture, the system uses feature point matching and shape recognition to determine the state of objects in space and then commands the manipulator to grasp them. Experimental results show that the vision-based manipulator system developed in this thesis can complete fully automatic object grasping tasks.

    In recent years, because of low birth rates and an aging population, the labor force has become insufficient and labor costs keep rising. Manufacturing industries that require a large amount of labor are severely impacted by these problems, so the topic of automation has become increasingly important. In the meantime, because more and more industries are following the trend of "small-volume, large-variety" production, the flexibility and adaptability of production lines have become much more important than before. In the past, industrial robots used to assist production lines were controlled through teach pendants in order to perform a specific task, and their performance and accuracy were highly dependent on the proficiency of technicians. Furthermore, setting up a production line took a lot of time and lacked flexibility. To solve the problems mentioned above, integrating computer vision into automated production lines is an effective solution. In production lines, robotic arms are frequently used to perform repetitive tasks, such as pick-and-place tasks for packing and placing objects. In a pick-and-place application, the positions of objects on a conveyor belt and the locations for placing them have to be known in order to place the objects correctly. Therefore, the relationship between object images and object 3D positions, and the relationship between object poses and grasping methods, are extremely important. This thesis focuses on vision based object grasping of industrial manipulators; the topics addressed include object spatial information, object grasping poses, eye-hand coordination, and frame transformation. The thesis uses an eye-to-hand stereo camera system to retrieve an object's 3D position information by feature matching, and uses shape recognition to issue instructions to a robotic arm for picking and placing objects. Experimental results indicate that the vision based automatic system developed in this thesis can successfully complete automatic pick-and-place tasks.
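    As a concrete illustration of the feature-matching step described above, the following is a minimal sketch in Python, assuming OpenCV's Python bindings; ORB is one of the detectors surveyed in Chapter 3, while the image file names, focal length, baseline, and thresholds are hypothetical placeholders rather than the thesis's actual setup.

    # Minimal sketch: feature-based stereo depth estimation on a rectified pair.
    # Assumptions: OpenCV Python bindings; focal length, baseline, file names,
    # and thresholds below are illustrative, not values from the thesis.
    import cv2

    FOCAL_PX = 800.0     # focal length in pixels (hypothetical, from calibration)
    BASELINE_M = 0.12    # stereo baseline in meters (hypothetical)

    # Load an already-rectified stereo pair in grayscale.
    img_l = cv2.imread("left_rectified.png", cv2.IMREAD_GRAYSCALE)
    img_r = cv2.imread("right_rectified.png", cv2.IMREAD_GRAYSCALE)

    # Detect and describe feature points with ORB.
    orb = cv2.ORB_create(nfeatures=1000)
    kp_l, des_l = orb.detectAndCompute(img_l, None)
    kp_r, des_r = orb.detectAndCompute(img_r, None)

    # Brute-force matching with Hamming distance and cross-checking.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_l, des_r)

    points = []
    for m in matches:
        xl, yl = kp_l[m.queryIdx].pt
        xr, yr = kp_r[m.trainIdx].pt
        # On a rectified pair, correct matches lie on (nearly) the same scan
        # line, so a large vertical offset indicates a false match.
        if abs(yl - yr) > 2.0:
            continue
        disparity = xl - xr
        if disparity <= 0:
            continue
        # Depth from disparity: Z = f * B / d.
        z = FOCAL_PX * BASELINE_M / disparity
        points.append((xl, yl, z))

    print(len(points), "matched feature points with depth estimates")

    In a complete pick-and-place system, the recovered 3D points would then be transformed from the camera frame into the robot base frame before a grasp command is sent to the manipulator.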

    Chinese Abstract
    English Abstract
    Acknowledgments
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1  Introduction
      1.1 Research Motivation and Objectives
      1.2 Literature Review
      1.3 Thesis Organization
    Chapter 2  Construction of the Six-Axis Articulated Robot Grasping System
      2.1 ABB Six-Axis Articulated Robot
        2.1.1 ABB Virtual Environment: RobotStudio
          2.1.1.1 Basic Modeling
          2.1.1.2 Off-line Programming
          2.1.1.3 Simulation
        2.1.2 ABB Programming Language: RAPID Code
          2.1.2.1 RAPID Basic Structure and Data Types
          2.1.2.2 RAPID Function Instructions and System Program Flow
          2.1.2.3 ABB Communication Method: PCSDK
      2.2 SMC Gripper Control
        2.2.1 Gripper Communication Method: System Serial Port
        2.2.2 Gripper Communication Protocol: Modbus RTU
        2.2.3 Gripper Control Commands and Verification Codes
      2.3 Point Grey Cameras
    Chapter 3  Depth Estimation Based on Feature Point Matching
      3.1 Camera Model
        3.1.1 Intrinsic Parameters
        3.1.2 Extrinsic Parameters
      3.2 Image Rectification
      3.3 Feature Point Matching Algorithms
        3.3.1 Feature Point Detection
          3.3.1.1 SIFT Feature Detection
          3.3.1.2 SURF Feature Detection
          3.3.1.3 FAST Corner Detection
          3.3.1.4 ORB Corner Detection
          3.3.1.5 Star Corner Detection
        3.3.2 Feature Point Filtering and Matching
          3.3.2.1 Rejecting False Matches with an ROI
          3.3.2.2 Rejecting False Matches with the Fundamental Matrix
          3.3.2.3 Rejecting False Matches by Disparity Vector and Feature Distance
      3.4 Depth Estimation from Disparity
    Chapter 4  Coordinate Transformation and Grasping Pose
      4.1 Transformation between the Camera Frame and the Robot Base Frame
        4.1.1 Indirect Transformation Method
        4.1.2 Direct Transformation Method
      4.2 Object Recognition and Grasping Pose
    Chapter 5  Experimental Methods and Results
      5.1 Experimental Environment and Setup
        5.1.1 Experimental Environment
        5.1.2 Target Objects for Grasping
      5.2 Experimental Methods and Results
        5.2.1 Camera Calibration
        5.2.2 Effect of Image Rectification on Depth Estimation Accuracy
        5.2.3 Comparison of Raw and Filtered Feature Points
        5.2.4 Coordinate Transformation Experiments and Results
        5.2.5 Rotation Angle Tests
        5.2.6 Grasping Tests on Basic Geometric Objects
    Chapter 6  Conclusions and Future Work
      6.1 Conclusions
      6.2 Future Work
    References
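    The outline's Section 3.4 (depth estimation from disparity) and Section 4.1 (camera-to-robot-base transformation) rest on two standard relations, written below in LaTeX using common stereo-vision notation, which may differ from the symbols used in the thesis itself.

    % Depth from disparity for a rectified pair: f is the focal length in pixels,
    % B the stereo baseline, (c_x, c_y) the principal point, and d = x_l - x_r
    % the horizontal disparity of a matched feature point.
    Z = \frac{f B}{d}, \qquad
    X = \frac{(x_l - c_x)\, Z}{f}, \qquad
    Y = \frac{(y_l - c_y)\, Z}{f}

    % Mapping a point from the camera frame to the robot base frame with the
    % homogeneous transform obtained from eye-to-hand calibration.
    \begin{bmatrix} \mathbf{p}^{\,base} \\ 1 \end{bmatrix}
    = {}^{base}\mathbf{T}_{cam}
    \begin{bmatrix} \mathbf{p}^{\,cam} \\ 1 \end{bmatrix},
    \qquad
    {}^{base}\mathbf{T}_{cam} =
    \begin{bmatrix} \mathbf{R} & \mathbf{t} \\ \mathbf{0}^{\top} & 1 \end{bmatrix}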


    Full text release: on campus 2020-08-25; off campus 2020-08-25.