
Student: Lin, Yi-Bin
Thesis title: Explored-SLAM with Object Information Using RGB-D and Laser Sensors for Home Service Robot
Advisor: Li, Tzuu-Hseng S.
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2016
Graduation academic year: 104 (ROC calendar, 2015-2016)
Language: English
Pages: 80
Keywords: RGB-D SLAM, fuzzy sensor fusion, path planning, home service robot
  • This thesis implements a home service robot that can explore a home environment on its own while simultaneously localizing itself and building a visual feature map that contains object information. The visual feature map is built from images captured by an RGB-D camera: feature points are detected with SURF and described with BRISK, and the robot pose at each time step is estimated by a least-squares method. While building the map, the robot annotates recognized objects and environmental information directly on the map, which makes the map more convenient and practical. Once the robot knows the landmark positions of objects, people can command it in a more natural way. To build a complete visual map, the robot must be able to explore by itself. This thesis first uses the camera's depth information to build a 2D visualized map, then uses that map to generate exploration points while avoiding obstacles, applies Ant Colony Optimization (ACO) to plan the shortest visiting route through all exploration points, and then proposes a two-stage Probabilistic Roadmap Method (two-stage PRM) to plan the path between exploration points so that the robot moves more smoothly. To improve the accuracy of the robot's pose estimation, this thesis further proposes a Multisensory Fuzzy Pose Estimator (MFPE), which integrates and arbitrates three sensors, a depth camera (Kinect II), an inertial measurement unit (IMU), and a laser range finder, to localize the robot. Experimental results show that, compared with vision-only localization, the multisensory fuzzy estimator achieves higher localization accuracy. The experiments also demonstrate that the robot can build a visual map with object information and use the object landmarks to accomplish tasks.
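The least-squares pose estimation step mentioned above can be sketched with the standard closed-form rigid alignment (Kabsch/SVD) over matched 3D feature positions. This is a generic formulation under the assumption that feature matches between two RGB-D frames have already been lifted to 3D via the depth image; the function name and array shapes are illustrative, not the thesis's exact implementation.

```python
import numpy as np

def estimate_pose(P, Q):
    """Least-squares rigid transform (R, t) minimizing sum ||R @ p_i + t - q_i||^2.

    P, Q: (N, 3) arrays of matched 3D feature positions from two frames.
    Solved in closed form via SVD (Kabsch algorithm).
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)   # centroids of each point set
    H = (P - cP).T @ (Q - cQ)                 # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cQ - R @ cP
    return R, t
```

Given matches between consecutive frames, the returned R and t describe the camera's relative motion, which is what the SLAM front end accumulates into the robot's pose.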

    In this thesis, a home service robot uses RGB-D images and an IMU to simultaneously localize itself and build a map with object information. To build the 3D visual feature map, the robot extracts feature points with Speeded Up Robust Features (SURF) and describes them with Binary Robust Invariant Scalable Keypoints (BRISK). The Hamming distance is used to match features, and the least-squares method is used to estimate the pose of the robot. While establishing the map, the robot recognizes objects and marks them on the map, so that it knows the positions of all recognized objects. To make the map as complete as possible, a good exploration strategy is necessary. The robot uses depth information to build a 2D visualized map and plans exploration points in this map while avoiding all obstacles. Next, Ant Colony Optimization (ACO) plans an efficient path through the exploration points. Finally, the thesis proposes a two-stage Probabilistic Roadmap Method (TPRM) to plan the path between two exploration points; the TPRM produces a smoother and more fluent path. To improve the accuracy and stability of localization, the thesis also presents the Multisensory Fuzzy Pose Estimator (MFPE), which integrates the depth camera, IMU, and laser range finder. Experiments show that, compared with using only the depth camera, MFPE improves the accuracy of both the map and the localization. The experiments also demonstrate the efficiency and practicality of building a 3D visual feature map with object landmarks.
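The ACO step described above orders the exploration points so that the total travel distance is short. A minimal sketch follows, treating the problem as an open traveling-salesman tour over straight-line distances; the parameter values (ant count, evaporation rate, heuristic weight) and the open-tour formulation are illustrative assumptions, not the thesis's exact settings.

```python
import numpy as np

def aco_tour(points, n_ants=20, n_iters=100, alpha=1.0, beta=3.0, rho=0.5, seed=0):
    """Order exploration points with Ant Colony Optimization (open tour)."""
    rng = np.random.default_rng(seed)
    n = len(points)
    dist = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    np.fill_diagonal(dist, np.inf)        # forbid staying in place
    eta = 1.0 / dist                      # heuristic desirability (inverse distance)
    tau = np.ones((n, n))                 # pheromone trails
    best_tour, best_len = None, np.inf
    for _ in range(n_iters):
        tours = []
        for _ in range(n_ants):
            tour = [int(rng.integers(n))]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                cand = list(unvisited)
                # transition probability ~ pheromone^alpha * heuristic^beta
                w = (tau[i, cand] ** alpha) * (eta[i, cand] ** beta)
                j = int(rng.choice(cand, p=w / w.sum()))
                tour.append(j)
                unvisited.remove(j)
            length = sum(dist[tour[k], tour[k + 1]] for k in range(n - 1))
            tours.append((tour, length))
            if length < best_len:
                best_tour, best_len = tour, length
        tau *= (1.0 - rho)                # pheromone evaporation
        for tour, length in tours:        # deposit proportional to tour quality
            for k in range(n - 1):
                tau[tour[k], tour[k + 1]] += 1.0 / length
    return best_tour, best_len
```

In practice the pairwise "distances" would come from path lengths in the 2D visualized map rather than straight lines, but the ordering machinery is the same.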

    Abstract I
    Acknowledgement III
    Contents IV
    List of Figures VI
    List of Tables IX
    Chapter 1 Introduction 1
      1.1 Motivation 1
      1.2 Related Work 2
        1.2.1 RGB-D SLAM 2
        1.2.2 Environment Exploration and Searching 4
        1.2.3 Object Recognition 5
      1.3 System Overview 6
      1.4 Thesis Organization 7
    Chapter 2 Automatic Exploration and Path Planning 9
      2.1 Introduction 9
      2.2 2D Map Visualization 10
        2.2.1 Registering Obstacles Position 11
        2.2.2 2D Visualized Map Establishment by Image Processing 13
        2.2.3 2D Map Updating by Robot Field of View 16
      2.3 Robot Exploration Algorithm 18
      2.4 ACO Path Planning for Exploratory Sequence 23
      2.5 Two-Stage PRM Path Planning for Robot Moving 26
        2.5.1 2D Diffused Pollution Map Generation 26
        2.5.2 Conventional PRM Method 29
        2.5.3 Dijkstra's Algorithm Implementation 31
        2.5.4 Proposed Two-Stage PRM Method 34
    Chapter 3 Integration of SLAM and Object Information 37
      3.1 Introduction 37
      3.2 RGB-D SLAM 38
        3.2.1 SLAM System Structure 38
        3.2.2 Feature Extraction 40
          3.2.2.1 SURF and BRISK 40
          3.2.2.2 Feature Refinement 44
        3.2.3 Feature Matching and Pose Estimation 45
        3.2.4 Feature Map Updating 48
      3.3 Fuzzy Based Multisensory Pose Estimator 50
      3.4 Object and Landmark Tagging 55
        3.4.1 Object Recognition 55
        3.4.2 Object Tagging 57
    Chapter 4 Experiments 58
      4.1 Introduction 58
      4.2 Experimental Setting 59
        4.2.1 Environment 59
        4.2.2 Home Service Robot 60
      4.3 Experiment I: Explored-SLAM 61
      4.4 Experiment II: Test the Accuracy of Localization 64
      4.5 Experiment III: Object Searching and Grasping with Object Information Map 70
    Chapter 5 Conclusion and Future Works 72
      5.1 Conclusion 72
      5.2 Future Works 73
    References 75


    Full text available on campus: 2022-07-01
    Full text available off campus: 2022-07-01