
Author: Chen, Wei-Hsun (陳韋勳)
Title: Cylinder Posture Recognition Based on a 2-D or 3-D Image (基於二維或三維影像之圓柱體姿態辨識)
Advisor: Young, Chung-Ping (楊中平)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2019
Academic year of graduation: 107
Language: English
Number of pages: 76
Keywords: Point cloud, RGB-D camera, Blender, RANSAC
    Objects with circular features have long been among the items most frequently handled in industrial environments, and cylinders are the most common among them. Typical handling tasks include object recognition and machining after grasping by a robotic arm. In the recognition stage, the most common approach is to use existing hardware to perform a 3-D scan of the object. However, the related literature shows that the recognition of circular features can in fact be approached from the 2-D side. In this thesis, we therefore propose a new cylinder posture estimation method: we extract circular features from a single 2-D image and classify whether the cylinder faces the camera, whether it lies directly under the camera center, and its direction of motion, thereby splitting the problem into several sub-problems, each handled separately before the cylinder posture in the real scene is estimated. In the 3-D implementation, we analyze 3-D point cloud data obtained from an RGB-D camera and from Blender (3-D graphics software); after a series of pre-processing steps that filter the data, the RANSAC algorithm estimates the cylinder posture. In a set of experiments on synthetically generated data, we quantify the cylinder's normal vector by the tilt angle between the cylinder and the measurement plane and by the cylinder's rotation angle on the measurement plane, in order to assess the robustness of our two methods. Chapter 5 compares the experimental results of the two methods, and the Appendix contains the detailed experimental data.
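The 2-D branch described above rests on one geometric fact: the circular face of a cylinder projects to an ellipse, and the ellipse's axis ratio encodes the tilt (a face viewed head-on gives a ratio near 1, a tilted face an elongated ellipse, with minor/major = cos(tilt)). The following is a minimal numpy sketch of that idea, not the thesis's implementation (which uses the detector of [4]); all names and parameters here are illustrative, and the contour is synthesized rather than detected in a real image.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic contour of a cylinder face: an ellipse with semi-axes 60 and 30,
# rotated by 15 degrees in the image plane, plus small pixel noise.
a_true, b_true, phi = 60.0, 30.0, np.deg2rad(15)
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
x = a_true * np.cos(t) * np.cos(phi) - b_true * np.sin(t) * np.sin(phi)
y = a_true * np.cos(t) * np.sin(phi) + b_true * np.sin(t) * np.cos(phi)
pts = np.c_[x, y] + rng.normal(scale=0.3, size=(t.size, 2))

# Least-squares conic fit A x^2 + B xy + C y^2 + D x + E y + F = 0: the
# coefficient vector is the right-singular vector of the design matrix
# with the smallest singular value.
X, Y = pts[:, 0], pts[:, 1]
design = np.c_[X**2, X * Y, Y**2, X, Y, np.ones_like(X)]
A, B, C, D, E, F = np.linalg.svd(design)[2][-1]

# The axis ratio depends only on the quadratic part of the conic: the
# semi-axes are proportional to 1/sqrt(lambda_i) for the eigenvalues of
# [[A, B/2], [B/2, C]], so major/minor = sqrt(lambda_max / lambda_min).
lam = np.abs(np.linalg.eigvalsh(np.array([[A, B / 2.0], [B / 2.0, C]])))
aspect = float(np.sqrt(lam.max() / lam.min()))

# For a circular face, minor/major = cos(tilt), so the tilt angle follows.
tilt_deg = float(np.degrees(np.arccos(1.0 / aspect)))
```

With the parameters above the recovered aspect ratio is close to 2, i.e. a tilt of roughly 60 degrees; in the full pipeline this angle feeds the facing-the-camera classification and tilt compensation steps.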

    Objects with circular features have always been handled frequently in industrial environments, and cylinders are the most common among them. Typical handling tasks include object recognition, grasping with a robotic arm, and machining. In the recognition stage, the most commonly used technique is to perform a 3-D scan of the object with dedicated hardware. However, the related literature shows that circular features can also be recognized from 2-D data. In this paper, we propose a new method for cylinder posture recognition. By extracting circular features from a single two-dimensional image and classifying whether the cylinder faces the camera, whether it lies at the center of the camera view, and its direction of rotation, the original problem is divided into multiple sub-problems. After handling these sub-problems separately, we can recognize the cylinder posture in the real scene. In the 3-D implementation, we analyze point cloud data obtained from an RGB-D camera and from Blender (3-D graphics software); after a series of pre-processing steps that filter the data, we use the RANSAC algorithm to recognize the cylinder's posture. We quantify the normal vector of the cylinder by the tilt angle between the cylinder and the measurement plane and by the rotation angle of the cylinder on the measurement plane, and assess the robustness of the two methods in a set of experiments with synthetically generated data. A complete comparison of the experimental results is given in Chapter 5, and the detailed experimental data are included in the Appendix.
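The 3-D branch (pre-processing, normal estimation, then RANSAC as in [9], [10]) can be illustrated with one of its core steps: every surface normal of a cylinder is perpendicular to the cylinder axis, so the cross product of two sampled normals is an axis hypothesis, and the hypothesis with the most near-perpendicular normals wins. The sketch below is a simplified illustration under stated assumptions, not the thesis's code: normals are generated analytically (in the real pipeline they come from the filtered RGB-D point cloud), and 25% are replaced by random outliers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Normals of a synthetic cylinder whose axis is tilted 30 degrees from the
# z-axis (the measurement plane's normal). u and v span the plane
# perpendicular to the axis, so every clean normal lies in that plane.
tilt_true = np.deg2rad(30)
axis_true = np.array([0.0, np.sin(tilt_true), np.cos(tilt_true)])
u = np.array([1.0, 0.0, 0.0])
v = np.cross(axis_true, u)
theta = rng.uniform(0.0, 2.0 * np.pi, 400)
normals = np.cos(theta)[:, None] * u + np.sin(theta)[:, None] * v

# Corrupt 25% of the normals with random unit vectors to mimic sensor noise.
bad = rng.random(400) < 0.25
outliers = rng.normal(size=(int(bad.sum()), 3))
normals[bad] = outliers / np.linalg.norm(outliers, axis=1, keepdims=True)

# RANSAC: sample two normals, take their cross product as an axis
# hypothesis, and count normals that are nearly perpendicular to it.
best_axis, best_count = None, -1
for _ in range(200):
    i, j = rng.choice(400, size=2, replace=False)
    cand = np.cross(normals[i], normals[j])
    n = np.linalg.norm(cand)
    if n < 1e-6:
        continue                      # degenerate sample: parallel normals
    cand /= n
    count = int(np.sum(np.abs(normals @ cand) < 0.05))
    if count > best_count:
        best_axis, best_count = cand, count

# Tilt angle between the estimated axis and the measurement plane's normal,
# the same quantity the experiments use to assess robustness.
tilt_deg = float(np.degrees(np.arccos(abs(float(best_axis @ np.array([0.0, 0.0, 1.0]))))))
```

Despite the 25% outlier rate, the best hypothesis is driven by the clean normals and the recovered tilt stays close to the true 30 degrees, which is the kind of robustness the RANSAC step provides over a plain least-squares fit.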

    Abstract
    摘要
    Acknowledgement
    List of Tables
    List of Figures
    Chapter 1 Introduction
    Chapter 2 Related Work
      2.1 The 2-D method
        2.1.1 Ellipse detection
        2.1.2 Determining whether the cylinder faces the camera
      2.2 The 3-D method
    Chapter 3 Methodology
      3.1 Cylinder Posture Recognition from a 2-D Image
        3.1.1 Ellipse detection
        3.1.2 Ellipse position determination
        3.1.3 Determining whether the cylinder faces the camera
        3.1.4 Compensating for or reducing the cylinder tilt angle
        3.1.5 Finding the cylinder rotation angle
      3.2 Cylinder Posture Recognition from a 3-D Image
        3.2.1 Camera calibration
        3.2.2 Camera alignment
        3.2.3 Depth error correction
        3.2.4 Statistical outlier removal
        3.2.5 Estimation of normal vectors
    Chapter 4 Experimental Results
    Chapter 5 Conclusion and Future Work
    Appendix A-1
    Appendix A-2
    Appendix B-1
    Appendix B-2
    Reference

    [1] M. Von Steinkirch, "Introduction to the Microsoft Kinect for computational photography and vision," ed: May, 2013.
    [2] P. Fankhauser, M. Bloesch, D. Rodriguez, R. Kaestner, M. Hutter, and R. Siegwart, "Kinect v2 for mobile robot navigation: Evaluation and modeling," in 2015 International Conference on Advanced Robotics (ICAR), 2015: IEEE, pp. 388-394.
    [3] M. Quigley et al., "ROS: an open-source Robot Operating System," in ICRA workshop on open source software, 2009, vol. 3, no. 3.2: Kobe, Japan, p. 5.
    [4] M. Fornaciari, A. Prati, and R. Cucchiara, "A fast and effective ellipse detector for embedded vision applications," (in English), Pattern Recognition, vol. 47, no. 11, pp. 3693-3708, Nov 2014.
    [5] R. Safaee-Rad, I. Tchoukanov, K. C. Smith, and B. Benhabib, "Three-dimensional location estimation of circular features for machine vision," IEEE Transactions on Robotics and Automation, vol. 8, no. 5, pp. 624-640, 1992.
    [6] W. Jia, Y. Yue, J. D. Fernstrom, Z. Zhang, Y. Yang, and M. Sun, "3D localization of circular feature in 2D image and application to food volume estimation," in 2012 Annual International Conference of the IEEE Engineering in Medicine and Biology Society, 2012: IEEE, pp. 4545-4548.
    [7] L. Liu and Z. Zhao, "A novel image analysis method for rotational motion of circular feature based on perspective projection," The Imaging Science Journal, vol. 63, no. 5, pp. 252-262, 2015.
    [8] L. C. Goron, Z.-C. Marton, G. Lazea, and M. Beetz, "Robustly segmenting cylindrical and box-like objects in cluttered scenes using depth cameras," in ROBOTIK 2012; 7th German Conference on Robotics, 2012: VDE, pp. 1-6.
    [9] R. Figueiredo, P. Moreno, and A. Bernardino, "Robust cylinder detection and pose estimation using 3d point cloud information," in 2017 IEEE International Conference on Autonomous Robot Systems and Competitions (ICARSC), 2017: IEEE, pp. 234-239.
    [10] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381-395, 1981.
    [11] M. Kröger, W. Sauer-Greff, R. Urbansky et al., "Performance evaluation on contour extraction using Hough transform and RANSAC for multi-sensor data fusion applications in industrial food inspection," in 2016 Signal Processing: Algorithms, Architectures, Arrangements, and Applications (SPA), 2016: IEEE, pp. 234-237.
    [12] L. Yang, L. Zhang, H. Dong, A. Alelaiwi, and A. El Saddik, "Evaluating and improving the depth accuracy of Kinect for Windows v2," IEEE Sensors Journal, vol. 15, no. 8, pp. 4275-4285, 2015.
    [13] D. Pagliari and L. Pinto, "Calibration of kinect for xbox one and comparison between the two generations of microsoft sensors," Sensors, vol. 15, no. 11, pp. 27569-27589, 2015.
    [14] A. Kolb, E. Barth, R. Koch, and R. Larsen, "Time‐of‐flight cameras in computer graphics," in Computer Graphics Forum, 2010, vol. 29, no. 1: Wiley Online Library, pp. 141-159.

    Full text available on campus: 2022-09-04
    Off campus: 2022-09-04