| Graduate Student: | Gonzalez, Pablo |
|---|---|
| Thesis Title: | Study on real-time Kinect-based 3D point cloud processing for automatic polyhedral object grasping |
| Advisor: | Cheng, Ming-Yang |
| Degree: | Master |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Publication Year: | 2016 |
| Graduation Academic Year: | 104 (ROC calendar, 2015-16) |
| Language: | English |
| Pages: | 66 |
| Keywords (Chinese): | 3D point cloud; ABB six-axis articulated robot arm |
| Keywords (English): | Kinect; 3D point cloud; ABB robot |
Since the introduction of the Microsoft Kinect sensor, real-time 3D perception of objects in close-range scenes has become one of the major trends in current research in the robotics and computer vision domains. As part of the long-term goal of developing a robust object recognition system for industrial applications, this thesis focuses on the recognition and separation of 3D planes for polyhedral object grasping, using an industrial manipulator together with a Kinect as a 3D vision system. Because the data acquired by vision sensors such as the Kinect are invariably noisy, a pipeline of algorithms has been implemented to reduce the noise and to cluster the data points properly. Given the 3D positions and surface normal vectors that represent the object of interest, a novel split-and-merge segmentation algorithm has been developed that clusters the data by exploiting the flexibility and robustness of a fuzzy logic inference system, and then fits each segment to a planar model using RANSAC. The thesis adopts an eye-to-hand, Kinect-based vision system to retrieve the 3D positions and normal vectors of the planar faces of the object, and then issues commands to a robotic arm for grasping. Experimental results indicate that the proposed plane segmentation algorithm can successfully segment the object, and that the Kinect-based robotic vision system developed in this thesis can achieve precise, real-time automatic grasping of polyhedral objects.
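The pipeline sketched in the abstract ends by fitting each clustered segment to a planar model with RANSAC. As a minimal, self-contained illustration of that fitting step only (not the thesis's actual implementation, which additionally involves the fuzzy-logic clustering and split-and-merge stages), the following Python/NumPy routine estimates a dominant plane in a point cloud; the function name `ransac_plane`, the distance threshold, and the iteration count are illustrative assumptions, not parameters taken from the thesis.

```python
import numpy as np

def ransac_plane(points, dist_thresh=0.01, n_iters=500, seed=None):
    """Fit a plane n.x + d = 0 to an (N, 3) point cloud with RANSAC.

    dist_thresh is the inlier distance in the cloud's units (e.g. metres
    for Kinect depth data); both defaults here are illustrative guesses,
    not values from the thesis.
    """
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Draw 3 distinct points and build the plane they span.
        p1, p2, p3 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p2 - p1, p3 - p1)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:                # nearly collinear sample, skip it
            continue
        normal /= norm
        d = -normal @ p1
        # Orthogonal point-to-plane distances; inliers fall inside the band.
        inliers = np.abs(points @ normal + d) < dist_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (normal, d)
    if best_model is None:
        raise ValueError("no valid plane hypothesis found")
    return best_model[0], best_model[1], best_inliers

# Toy usage: noisy points near the plane z = 0.5.
pts = np.random.default_rng(0).uniform(-0.2, 0.2, size=(1000, 3))
pts[:, 2] = 0.5 + 0.002 * np.random.default_rng(1).standard_normal(1000)
normal, d, mask = ransac_plane(pts, seed=2)
print(normal, d, mask.sum())            # normal approx. (0, 0, +/-1)
```

In a pipeline like the one described, each segment produced by the clustering stage would be passed to such a routine, and the recovered normal vector, together with the plane's 3D position, could then serve as the grasping target handed to the robot arm.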