| Graduate Student: | 張宗淳 Chang, Zong-Chun |
|---|---|
| Thesis Title: | 移植夾抓姿勢偵測方法於機械手臂 Porting a grasp pose detection method to a robotic arm |
| Advisor: | 侯廷偉 Hou, Ting-Wei |
| Degree: | Master |
| Department: | College of Engineering - Department of Engineering Science |
| Year of Publication: | 2022 |
| Graduation Academic Year: | 110 |
| Language: | Chinese |
| Pages: | 36 |
| Keywords (Chinese): | 視覺抓取系統、機械手臂、RGB-D 相機、夾爪姿勢偵測、手眼校正 |
| Keywords (English): | vision-based grasping system, robotic arm, RGB-D sensor, grasp pose detection, Hand-Eye calibration |
Combining a robotic arm with visual recognition is one of the fundamental applications in industry today, but most of the related work focuses on grasp pose detection itself and rarely consolidates the process of building a complete vision-based grasping system. In this thesis, an RGB-D camera and an industrial robotic arm are used for the implementation, and a grasp pose detection method is applied within the system so that information can be transferred among the camera, object, and robotic arm coordinate systems. The system is divided into five stages: object localization from the camera image, object pose establishment, grasp pose detection, arm motion path planning, and grasp execution. For object localization, environmental factors such as shooting distance, light source, and shooting angle are examined to design an environment suitable for a structured-light depth camera. For object pose establishment, a partial point cloud of the object is built from the depth data captured by the RGB-D camera. For grasp pose detection, in order to port the GraspNet-1Billion detection method into the system, the workspace mask generation method and the mask conditions for selecting valid points were modified during data preprocessing to match the implemented equipment and working environment. Finally, for motion planning, the camera, object, gripper, and world coordinate systems are aligned through Hand-Eye calibration and rotation matrices, the target point for the gripper's grasp is computed, and the feasibility of the system is verified by visualizing the coordinate axes.
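To make the object-pose stage concrete, the following is a minimal sketch (not the thesis's code) of back-projecting masked depth pixels into a partial point cloud with the standard pinhole model; the intrinsics (fx, fy, cx, cy), the depth scale, and the object mask are illustrative placeholders rather than the calibration actually used in the thesis.

```python
import numpy as np

def depth_to_point_cloud(depth_mm, object_mask, fx, fy, cx, cy):
    """Back-project masked depth pixels (in mm) to 3-D points in the camera frame (in metres)."""
    v, u = np.nonzero(object_mask & (depth_mm > 0))    # object pixels with a valid depth reading
    z = depth_mm[v, u].astype(np.float64) / 1000.0     # depth in metres
    x = (u - cx) * z / fx                              # pinhole model: X = (u - cx) * Z / fx
    y = (v - cy) * z / fy                              # pinhole model: Y = (v - cy) * Z / fy
    return np.stack([x, y, z], axis=1)                 # (N, 3) partial point cloud of the object

# Usage with hypothetical Kinect-like intrinsics and a dummy depth frame:
depth = np.full((480, 640), 800, dtype=np.uint16)      # flat scene 0.8 m from the camera
mask = np.zeros((480, 640), dtype=bool)
mask[200:280, 280:360] = True                          # pretend this region is the segmented object
cloud = depth_to_point_cloud(depth, mask, fx=525.0, fy=525.0, cx=319.5, cy=239.5)
print(cloud.shape)                                     # (6400, 3)
```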
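Similarly, the final frame-alignment step can be illustrated with OpenCV's calibrateHandEye, the function pointed to by [24]. The sketch below is only an assumption-laden example: it supposes an eye-in-hand configuration and substitutes synthetic poses for the robot forward-kinematics readings and chessboard detections used in practice, so the numbers and frame names are illustrative, not the thesis's data.

```python
import cv2
import numpy as np

def to_homogeneous(R, t):
    """Pack a 3x3 rotation and a translation into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

rng = np.random.default_rng(0)

# Hypothetical ground truth: camera mounted on the gripper, calibration board fixed in the base frame.
X_true = to_homogeneous(cv2.Rodrigues(np.array([0.1, -0.2, 0.3]))[0], [0.03, 0.00, 0.06])
T_target2base = to_homogeneous(cv2.Rodrigues(np.array([0.0, 0.0, 0.5]))[0], [0.5, 0.1, 0.02])

R_g2b, t_g2b, R_t2c, t_t2c = [], [], [], []
for _ in range(10):                                    # ten simulated calibration poses
    rvec = rng.uniform(-0.5, 0.5, 3)
    tvec = rng.uniform(-0.2, 0.2, 3) + np.array([0.4, 0.0, 0.3])
    T_gripper2base = to_homogeneous(cv2.Rodrigues(rvec)[0], tvec)
    # Board pose seen by the camera, consistent with the chain base <- gripper <- camera <- board.
    T_target2cam = np.linalg.inv(X_true) @ np.linalg.inv(T_gripper2base) @ T_target2base
    R_g2b.append(T_gripper2base[:3, :3]); t_g2b.append(T_gripper2base[:3, 3].reshape(3, 1))
    R_t2c.append(T_target2cam[:3, :3]);   t_t2c.append(T_target2cam[:3, 3].reshape(3, 1))

# Solve AX = XB for the camera-to-gripper transform (Tsai-Lenz solver).
R_cam2gripper, t_cam2gripper = cv2.calibrateHandEye(
    R_g2b, t_g2b, R_t2c, t_t2c, method=cv2.CALIB_HAND_EYE_TSAI)
T_cam2gripper = to_homogeneous(R_cam2gripper, t_cam2gripper)

# A grasp pose detected in the camera frame maps into the robot base (world) frame by
# composing transforms: T_base<-grasp = T_base<-gripper @ T_gripper<-camera @ T_camera<-grasp.
T_grasp2cam = to_homogeneous(np.eye(3), [0.0, 0.0, 0.4])       # dummy detected grasp pose
T_grasp2base = T_gripper2base @ T_cam2gripper @ T_grasp2cam    # reuses the last simulated arm pose as the current one
print(np.round(T_grasp2base, 3))
```

cv2.CALIB_HAND_EYE_TSAI corresponds to the Tsai-Lenz solver [5]; OpenCV also offers the Park [6], Horaud [11], Andreff [10], and Daniilidis [8] variants through the same interface.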
[1] K. Kleeberger, R. Bormann, W. Kraus, and M. F. Huber. “A survey on learning-based robotic grasping,” Current Robotics Reports, vol. 1, no. 4, pp. 239-249. Springer, Dec 2020. doi: 10.1007/s43154-020-00021-6.
[2] G. Du, K. Wang, S. Lian, and K. Zhao. “Vision-based robotic grasping from object localization, object pose estimation to grasp estimation for parallel grippers: a review,” Artificial Intelligence Review, vol. 54, no. 3, pp. 1677–1734. Springer, March 2021. doi: 10.1007/s10462-020-09888-5.
[3] 李俊則 and 張禎元, "光學視覺與機械手臂系統整合之校正方法介紹" (Introduction to calibration methods for integrating optical vision and robotic arm systems), 科儀新知 (Instruments Today), no. 226, pp. 24-36, March 2021. (In Chinese)
[4] Y. Shiu and S. Ahmad, "Finding the mounting position of a sensor by solving a homogeneous transform equation of the form AX = XB," 1987 IEEE International Conference on Robotics and Automation, Raleigh, NC, USA, pp. 1666-1671. 1987. doi: 10.1109/ROBOT.1987.1087758.
[5] R. Y. Tsai and R. K. Lenz, "A new technique for fully autonomous and efficient 3D robotics hand/eye calibration," IEEE Transactions on Robotics and Automation, vol. 5, no. 3, pp. 345-358. June 1989. doi: 10.1109/70.34770.
[6] F. C. Park and B. J. Martin, "Robot sensor calibration: solving AX=XB on the Euclidean group," IEEE Transactions on Robotics and Automation, vol. 10, no. 5, pp. 717-721. Oct. 1994. doi: 10.1109/70.326576.
[7] K. H. Strobl and G. Hirzinger, "Optimal Hand-Eye Calibration," 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems, Beijing, China, pp. 4647-4653. Oct 2006. doi: 10.1109/IROS.2006.282250.
[8] K. Daniilidis, “Hand-Eye Calibration Using Dual Quaternions,” The International Journal of Robotics Research, vol. 18, no. 3, pp. 286–298. March 1999. doi:10.1177/02783649922066213.
[9] J. C. K. Chou and M. Kamel. “Finding the Position and Orientation of a Sensor on a Robot Manipulator Using Quaternions,” The International Journal of Robotics Research, vol. 10, no. 3, pp. 240–254. June 1991. doi:10.1177/027836499101000305.
[10] N. Andreff, R. Horaud and B. Espiau, "On-line hand-eye calibration," Second International Conference on 3-D Digital Imaging and Modeling (Cat. No.PR00062), Ottawa, ON, Canada, pp. 430-436. Oct 1999. doi: 10.1109/IM.1999.805374.
[11] R. Horaud and F. Dornaika. “Hand-Eye Calibration,” The International Journal of Robotics Research, vol. 14, no. 3, pp. 195–210. June 1995. doi: 10.1177/027836499501400301.
[12] H. H. Chen, "A screw motion approach to uniqueness analysis of head-eye geometry," 1991 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Maui, HI, USA, pp. 145-151. June 1991. doi: 10.1109/CVPR.1991.139677.
[13] Z. Zhang, L. Zhang, and G.-Z. Yang. “A computationally efficient method for hand–eye calibration,” International Journal of Computer Assisted Radiology and Surgery (Int J CARS), vol. 12, no. 10, pp. 1775–1787. Oct 2017. doi: 10.1007/s11548-017-1646-x.
[14] H. Liang, X. Ma, S. Li, M. Gorner, S. Tang, B. Fang, F. Sun, and J. Zhang, "Pointnetgpd: detecting grasp configurations from point sets," International Conference on Robotics and Automation (ICRA), Montreal, QC, Canada, pp. 3629-3635. May 2019. doi: 10.1109/ICRA.2019.8794435.
[15] C. Giorio and M. Fascinari. “Kinect in motion - audio and visual tracking by example,” Packt Publishing, April 2013.
[16] R. A. El-laithy, J. Huang, and M. Yeh, "Study on the use of microsoft kinect for robotics applications," 2012 IEEE/ION Position, Location and Navigation Symposium, pp. 1280-1288. April 2012. doi: 10.1109/PLANS.2012.6236985.
[17] H.-S. Fang, C. Wang, M. Gou, and C. Lu. “GraspNet-1Billion: a large-scale benchmark for general object grasping,” 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, pp. 11444-11453. June 2020. doi: 10.1109/CVPR42600.2020.01146.
[18] U. Asif, J. Tang, and S. Harrer. “GraspNet: an efficient convolutional neural network for real-time grasp detection for low-powered devices,” 27th International Joint Conference on Artificial Intelligence (IJCAI-18), pp. 4875-4882. July 2018. doi: 10.24963/ijcai.2018/677.
[19] A. Mousavian, C. Eppner, and D. Fox. “6-DOF GraspNet: variational grasp generation for object manipulation,” 2019 IEEE/CVF International Conference on Computer Vision (ICCV), Seoul, Korea (South), pp. 2901-2910. 2019. doi: 10.1109/ICCV.2019.00299.
[20] M. Sundermeyer, A. Mousavian, R. Triebel, and D. Fox. "Contact-GraspNet: Efficient 6-DoF Grasp Generation in Cluttered Scenes," 2021 IEEE International Conference on Robotics and Automation (ICRA), Xi'an, China, pp. 13438-13444. 2021. doi: 10.1109/ICRA48506.2021.9561877.
[21] Y. Lu, B. Deng, Z. Wang, P. Zhi, Y. Li, and S. Wang. “Hybrid physical metric for 6-DoF grasp pose detection,” 2022 IEEE International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, pp. 8238-8244. May 2022. doi: 10.1109/ICRA46639.2022.9811961.
[22] [online] Available: https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html, last retrieved 23 Aug 2022.
[23] [online] Available: https://docs.opencv.org/4.x/d7/d53/tutorial_py_pose.html, last retrieved 23 Aug 2022.
[24] [online] Available: https://docs.opencv.org/4.2.0/d9/d0c/group__calib3d.html?fbclid=IwAR13KADBZfa3FdRjg6HZk0nJb2DbhW-5sXafXU_3ht_tZlInyAGsMNXX_EQ#gaebfc1c9f7434196a374c382abf43439b:~:text=%E2%97%86-,calibrateHandEye,-(), last retrieved 23 Aug 2022.
[25] [online] Available: https://github.com/graspnet/graspnet-baseline, last retrieved 23 Aug 2022.
[26] [online] Available: https://openkinect.org/wiki/Getting_Started, last retrieved 23 Aug 2022.
[27] [online] Available: https://openkinect.org/wiki/Imaging_Information?fbclid=IwAR1sx2OGFCWj1cdmvNxdBHFEe29I3OIz6AIX6OV5AMsUS1vDGlrs5LFl5lE, last retrieved 24 Aug 2022.
[28] T. Mallick, P. P. Das and A. K. Majumdar, "Characterizations of Noise in Kinect Depth Images: A Review," IEEE Sensors Journal, vol. 14, no. 6, pp. 1731-1740. June 2014. doi: 10.1109/JSEN.2014.2309987.
[29] [online] Available: https://github.com/wkentaro/labelme, last retrieved 23 Aug 2022.
[30] H. Sarbolandi, D. Lefloch, A. Kolb, “Kinect range sensing: structured-light versus time-of-flight kinect,” Computer Vision and Image Understanding, vol. 139, pp. 1-20, Oct 2015. doi: 10.1016/j.cviu.2015.05.006.