
Graduate Student: 蔡子正 (Tsai, Tzu-Cheng)
Thesis Title: 三維點雲特徵萃取於高精地圖構建演算法開發 (3D Point Cloud Feature Extraction for High-Quality Map Construction)
Advisor: 彭兆仲 (Peng, Chao-Chung)
Degree: Master
Department: Department of Aeronautics and Astronautics, College of Engineering
Publication Year: 2022
Graduation Academic Year: 110 (ROC calendar)
Language: English
Pages: 94
Keywords (Chinese): 三維光達SLAM、地面點分割、感測器融合、點雲特徵萃取、光達位姿估測與建圖
Keywords (English): 3D LiDAR SLAM, Ground Segmentation, Sensor Fusion, Feature Extraction, LiDAR Odometry and Mapping
Abstract (Chinese, translated): As the foundation of Simultaneous Localization and Mapping (SLAM) based on 3D Light Detection and Ranging (LiDAR), high-precision point cloud map construction plays a pivotal role in autonomous vehicles. This research develops a perception system for precise pose estimation and high-precision point cloud map construction. Each point cloud frame acquired from the LiDAR sensor passes through a series of point cloud processing algorithms that extract robust and invariant features: ground segmentation labels the frame's points as ground or obstacle according to their geometric properties; Euclidean-distance clustering groups the 3D points at direction-dependent scales matched to the sensor's characteristics and removes outliers; and dynamic obstacle detection fuses 2D image object labels with the 3D point clusters to remove non-static objects. Using these classifiers together with curvature thresholds, edge, planar, and crossline feature points are extracted to represent the physical and geometric characteristics of each frame. A feature-based Iterative Closest Point (ICP) method matches adjacent frames (frame-to-frame) to obtain the relative pose that transforms each frame into the world coordinate system, and Principal Component Analysis (PCA) searches for feature constraints in frame-to-map matching to refine the pose estimate and eliminate vertical drift. Finally, similar-frame detection and 3D pose graph optimization realize loop closure to remove accumulated pose error, yielding a globally consistent, high-accuracy point cloud map with rich point labels. To verify the mapping accuracy and cross-environment applicability, the algorithms are tested on the well-known KITTI dataset (HDL-64E), the Semantic-KITTI dataset with ground-truth segmentation labels, a virtual-scene Unreal Engine dataset, and datasets recorded at the NCKU Department of Aeronautics and Astronautics building and the 自強 campus (VLP-16). In the pose estimation results, the proposed algorithm reduces both rotational and translational errors to roughly 40%-45% of those of the state-of-the-art LOAM and LeGO-LOAM methods, demonstrating the contribution of this thesis to 3D point cloud feature extraction for high-precision map construction.
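The curvature-threshold feature classification described in the abstract is in the LOAM family of methods. As a rough illustration only (not the thesis's implementation; the window size and thresholds below are assumptions), a minimal Python sketch of a LOAM-style smoothness score over one ordered scan ring, thresholded into edge and planar candidates:

```python
import numpy as np

def loam_smoothness(ring: np.ndarray, window: int = 5) -> np.ndarray:
    """Smoothness c_i = ||sum_{j in window}(p_j - p_i)|| / (2*window*||p_i||)
    for an (N, 3) array of points from one scan line, ordered by azimuth.
    Large c_i suggests a sharp, edge-like point; small c_i a flat one."""
    n = len(ring)
    c = np.full(n, np.nan)  # points near the ends of the ring stay unclassified
    for i in range(window, n - window):
        neighbors = np.vstack((ring[i - window:i], ring[i + 1:i + 1 + window]))
        offset_sum = (neighbors - ring[i]).sum(axis=0)
        c[i] = np.linalg.norm(offset_sum) / (2 * window * np.linalg.norm(ring[i]))
    return c

def classify_features(ring, edge_thresh=0.1, planar_thresh=0.01, window=5):
    # Illustrative thresholds; the thesis additionally gates candidates by
    # ground/obstacle labels, clustering results, and reliability checks.
    c = loam_smoothness(ring, window)
    edge_idx = np.flatnonzero(c > edge_thresh)    # NaN compares False, so skipped
    planar_idx = np.flatnonzero(c < planar_thresh)
    return edge_idx, planar_idx
```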

Abstract (English): Accurate point cloud map generation lays the foundation for the realization of autonomous vehicles (AVs), as it is the backbone of Simultaneous Localization and Mapping (SLAM) based on 3D Light Detection and Ranging (LiDAR). In this research, a perception system with precise pose estimation is developed and a high-quality point cloud map is constructed. For each input frame from the LiDAR sensor, a series of preprocessing modules extracts characteristic feature points that are robust and invariant to motion changes: ground segmentation divides the point cloud frame into ground and obstacle sets for the subsequent preprocessing steps; point clustering labels point cloud groups based on Euclidean distances that scale unequally along different axes and removes outliers; object removal detects dynamic objects in the 2D image and diffuses the labels to the 3D point cloud to remove non-static clusters. Using the classifiers above, edge, planar, and crossline features are extracted based on curvature and the segmentation result. To estimate the pose, a feature-based Iterative Closest Point (ICP) method aligns frame-to-frame feature pairs in the odometry stage. The transformation is then refined in the mapping stage by frame-to-map alignment with a Principal Component Analysis (PCA) method and vertical drift elimination. Scan context place recognition and a 3D pose graph implement loop closure for the final pose optimization. All proposed methods are examined and evaluated on the HDL-64E KITTI dataset, the Semantic-KITTI dataset, an Unreal Engine dataset, and our own VLP-16 campus dataset. The proposed pose estimation reduces the average rotational and translational errors to roughly 40%-45% of those of the state-of-the-art LOAM and LeGO-LOAM methods, which demonstrates the achievement of our research on point cloud feature extraction for high-quality map construction.
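The PCA-based frame-to-map refinement can be pictured as fitting a local plane to the nearest map features and penalizing the point-to-plane distance. The sketch below is a simplified assumption of that step (the function names and synthetic data are hypothetical): PCA of the k nearest map points yields the local normal as the eigenvector of the smallest eigenvalue, and the signed distance along that normal is the residual an optimizer such as Gauss-Newton would drive toward zero.

```python
import numpy as np

def pca_plane(points: np.ndarray):
    """Fit a plane to a (k, 3) neighborhood by PCA. Returns (normal, centroid):
    the unit eigenvector with the smallest eigenvalue of the covariance matrix
    approximates the local plane normal."""
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)      # 3x3 covariance of the neighborhood
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0], centroid

def point_to_plane_residual(p: np.ndarray, map_neighbors: np.ndarray) -> float:
    """Signed distance of scan point p to the local map plane: the kind of
    frame-to-map constraint an optimizer would minimize over all features."""
    normal, centroid = pca_plane(map_neighbors)
    return float(np.dot(normal, p - centroid))

# Tiny usage example with synthetic data (assumed, not from the thesis):
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neighbors = rng.uniform(-1.0, 1.0, size=(8, 3))
    neighbors[:, 2] = 0.02 * rng.standard_normal(8)  # nearly the z = 0 plane
    print(point_to_plane_residual(np.array([0.1, -0.2, 0.5]), neighbors))
```

A common companion check compares the smallest eigenvalue against the other two to decide whether the neighborhood is planar enough to serve as a constraint at all; edge constraints use the largest-eigenvalue direction instead.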

Table of Contents:
Abstract ii
Acknowledgements iii
Table of Contents iv
List of Tables v
List of Figures vi
Chapter 1. Introduction 1
  1.1. Research Motivation and Purposes 1
  1.2. Related Work 2
  1.3. Thesis Research Roadmap 6
  1.4. Contributions 8
  1.5. Datasets 8
Chapter 2. Point Cloud Preprocessing 11
  2.1. Data Formulation and Order 11
  2.2. Curvature and Reliable Points Classifier 12
  2.3. Ground Point Segmentation 14
  2.4. Point Clustering and Removal 22
  2.5. Sensor Fusion Object Removal 24
Chapter 3. Feature Extraction 27
  3.1. Edge and Planar Feature 27
  3.2. Crossline Feature 28
Chapter 4. Pose Estimation and Optimization 32
  4.1. Consecutive Frame Odometry 32
  4.2. Global Feature Mapping 41
Chapter 5. Loop Closure 48
  5.1. Scan Context Place Recognition and Loop Closure 48
Chapter 6. Experiments and Evaluations 53
  6.1. Evaluation of Ground Point Segmentation 53
  6.2. Evaluation of Sensor Fusion Object Removal 57
  6.3. Evaluation of Feature Extraction 58
  6.4. Evaluation of LiDAR Odometry 59
  6.5. Evaluation of LiDAR Mapping 62
  6.6. Evaluation of Loop Closure 70
  6.7. Evaluation in Unreal Engine Dataset 73
  6.8. Evaluation in VLP-16 Campus Dataset 76
  6.9. Evaluation in Point Cloud Map Construction 81
Chapter 7. Conclusion and Future Work 88
Reference 89
Appendix 92

References:
    [1] "Hovermap 100." [Online]. Available: https://confluence.csiro.au/display/ASL/Hovermap
    [2] K. Stencil, "Transform the Real World into Its Digital Twin with Mobile 3D Reality Capture Technology." [Online]. Available: https://www.kaarta.com/
    [3] M. A. Fischler and R. C. Bolles, "Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381–395, 1981, doi: 10.1145/358669.358692.
    [4] P. Pfaff, R. Triebel, and W. Burgard, "An Efficient Extension to Elevation Maps for Outdoor Terrain Mapping and Loop Closing," The International Journal of Robotics Research, vol. 26, no. 2, pp. 217-230, 2007, doi: 10.1177/0278364906075165.
    [5] B. Li, "On Enhancing Ground Surface Detection from Sparse Lidar Point Cloud," in 2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2019, pp. 4524-4529, doi: 10.1109/IROS40897.2019.8968135.
    [6] M. Himmelsbach, F. v. Hundelshausen, and H. Wuensche, "Fast segmentation of 3D point clouds for ground vehicles," in 2010 IEEE Intelligent Vehicles Symposium, 2010, pp. 560-565, doi: 10.1109/IVS.2010.5548059.
    [7] J. Cheng, D. He, and C. Lee, "A simple ground segmentation method for LiDAR 3D point clouds," in 2020 2nd International Conference on Advances in Computer Technology, Information Science and Communications (CTISC), 2020, pp. 171-175, doi: 10.1109/CTISC49998.2020.00034.
    [8] D. Zermas, I. Izzat, and N. Papanikolopoulos, "Fast segmentation of 3D point clouds: A paradigm on LiDAR data for autonomous vehicle applications," in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 5067-5073, doi: 10.1109/ICRA.2017.7989591.
    [9] H. Lim, M. Oh, and H. Myung, "Patchwork: Concentric Zone-Based Region-Wise Ground Segmentation With Ground Likelihood Estimation Using a 3D LiDAR Sensor," IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6458-6465, 2021, doi: 10.1109/LRA.2021.3093009.
    [10] I. Bogoslavskyi and C. Stachniss, "Efficient Online Segmentation for Sparse 3D Laser Scans," Photogrammetrie - Fernerkundung - Geoinformation, vol. 85, pp. 41-52, 2016, doi: 10.1007/s41064-016-0003-y.
    [11] P. Narksri, E. Takeuchi, Y. Ninomiya, Y. Morales, N. Akai, and N. Kawaguchi, "A Slope-robust Cascaded Ground Segmentation in 3D Point Cloud for Autonomous Vehicles," in 2018 21st International Conference on Intelligent Transportation Systems (ITSC), 2018, pp. 497-504, doi: 10.1109/ITSC.2018.8569534.
    [12] W. Huang et al., "A Fast Point Cloud Ground Segmentation Approach Based on Coarse-To-Fine Markov Random Field," IEEE Transactions on Intelligent Transportation Systems, pp. 1-14, 2021, doi: 10.1109/TITS.2021.3073151.
    [13] Z. Shen, H. Liang, L. Lin, Z. Wang, W. Huang, and J. Yu, "Fast Ground Segmentation for 3D LiDAR Point Cloud Based on Jump-Convolution-Process," Remote Sensing, vol. 13, no. 16, 2021, doi: 10.3390/rs13163239.
    [14] Q. Hu et al., "Randla-net: Efficient semantic segmentation of large-scale point clouds," in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2020, pp. 11108-11117.
    [15] A. Paigwar, Ö. Erkent, D. Sierra-Gonzalez, and C. Laugier, "GndNet: Fast Ground Plane Estimation and Point Cloud Segmentation for Autonomous Vehicles," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 2150-2156, doi: 10.1109/IROS45743.2020.9340979.
    [16] M. Ester, H.-P. Kriegel, J. Sander, and X. Xu, "A density-based algorithm for discovering clusters in large spatial databases with noise," in Proceedings of the Second International Conference on Knowledge Discovery and Data Mining (KDD), 1996, pp. 226-231.
    [17] F. Gao, C. Li, and B. Zhang, "A Dynamic Clustering Algorithm for Lidar Obstacle Detection of Autonomous Driving System," IEEE Sensors Journal, vol. 21, no. 22, pp. 25922-25930, 2021, doi: 10.1109/JSEN.2021.3118365.
    [18] F. Nie, W. Zhang, Y. Wang, Y. Shi, and Q. Huang, "A Forest 3-D Lidar SLAM System for Rubber-Tapping Robot Based on Trunk Center Atlas," IEEE/ASME Transactions on Mechatronics, pp. 1-11, 2021, doi: 10.1109/TMECH.2021.3120407.
    [19] Y. Wu, Y. Wang, S. Zhang, and H. Ogai, "Deep 3D Object Detection Networks Using LiDAR Data: A Review," IEEE Sensors Journal, vol. 21, no. 2, pp. 1152-1171, 2021, doi: 10.1109/JSEN.2020.3020626.
    [20] X. Chen, H. Ma, J. Wan, B. Li, and T. Xia, "Multi-view 3d object detection network for autonomous driving," in Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2017, pp. 1907-1915.
    [21] C. C. Lin, C. H. Kuo, and H. T. Chiang, "CNN-Based Classification for Point Cloud Object With Bearing Angle Image," IEEE Sensors Journal, pp. 1-1, 2021, doi: 10.1109/JSEN.2021.3130268.
    [22] Y. Wu, S. Zhang, H. Ogai, H. Inujima, and S. Tateno, "Realtime Single-Shot Refinement Neural Network With Adaptive Receptive Field for 3D Object Detection From LiDAR Point Cloud," IEEE Sensors Journal, vol. 21, no. 21, pp. 24505-24519, 2021, doi: 10.1109/JSEN.2021.3114345.
    [23] X. Zhao, P. Sun, Z. Xu, H. Min, and H. Yu, "Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications," IEEE Sensors Journal, vol. 20, no. 9, pp. 4901-4913, 2020, doi: 10.1109/JSEN.2020.2966034.
    [24] L. Pang, Z. Cao, J. Yu, S. Liang, X. Chen, and W. Zhang, "An Efficient 3D Pedestrian Detector with Calibrated RGB Camera and 3D LiDAR," in 2019 IEEE International Conference on Robotics and Biomimetics (ROBIO), 2019, pp. 2902-2907, doi: 10.1109/ROBIO49542.2019.8961523.
    [25] B. H. Wang, W. Chao, Y. Wang, B. Hariharan, K. Q. Weinberger, and M. Campbell, "LDLS: 3-D Object Segmentation Through Label Diffusion From 2-D Images," IEEE Robotics and Automation Letters, vol. 4, no. 3, pp. 2902-2909, 2019, doi: 10.1109/LRA.2019.2922582.
    [26] H. Lim, S. Hwang, and H. Myung, "ERASOR: Egocentric Ratio of Pseudo Occupancy-Based Dynamic Object Removal for Static 3D Point Cloud Map Building," IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 2272-2279, 2021, doi: 10.1109/LRA.2021.3061363.
    [27] C. Jiang, D. P. Paudel, D. Fofi, Y. Fougerolle, and C. Demonceaux, "Moving Object Detection by 3D Flow Field Analysis," IEEE Transactions on Intelligent Transportation Systems, vol. 22, no. 4, pp. 1950-1963, 2021, doi: 10.1109/TITS.2021.3055766.
    [28] P. J. Besl and N. D. McKay, "A method for registration of 3-D shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239-256, 1992, doi: 10.1109/34.121791.
    [29] K.-L. Low, "Linear Least-Squares Optimization for Point-to-Plane ICP Surface Registration," Technical Report TR04-004, University of North Carolina at Chapel Hill, 2004.
    [30] A. Segal, D. Haehnel, and S. Thrun, "Generalized-ICP," in Robotics: Science and Systems, Seattle, WA, 2009, vol. 2, no. 4, p. 435.
    [31] P. Biber and W. Strasser, "The normal distributions transform: a new approach to laser scan matching," in Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003), 2003, vol. 3, pp. 2743-2748, doi: 10.1109/IROS.2003.1249285.
    [32] J. Zhang and S. Singh, "LOAM: Lidar Odometry and Mapping in Real-time," in Robotics: Science and Systems, 2014.
    [33] J. Zhang and S. Singh, "Low-drift and real-time lidar odometry and mapping," Autonomous Robots, vol. 41, no. 2, pp. 401-416, 2017.
    [34] T. Shan and B. Englot, "LeGO-LOAM: Lightweight and Ground-Optimized Lidar Odometry and Mapping on Variable Terrain," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 4758-4765, doi: 10.1109/IROS.2018.8594299.
    [35] J. Lin and F. Zhang, "Loam livox: A fast, robust, high-precision LiDAR odometry and mapping package for LiDARs of small FoV," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 3126-3131, doi: 10.1109/ICRA40945.2020.9197440.
    [36] T. Shan, B. Englot, D. Meyers, W. Wang, C. Ratti, and D. Rus, "LIO-SAM: Tightly-coupled Lidar Inertial Odometry via Smoothing and Mapping," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 5135-5142, doi: 10.1109/IROS45743.2020.9341176.
    [37] H. Wang, C. Wang, C.-L. Chen, and L. Xie, "F-LOAM: Fast LiDAR Odometry And Mapping," arXiv preprint arXiv:2107.00822, 2021. [Online]. Available: https://ui.adsabs.harvard.edu/abs/2021arXiv210700822W
    [38] S. Chen et al., "NDT-LOAM: A Real-time Lidar odometry and mapping with weighted NDT and LFA," IEEE Sensors Journal, pp. 1-1, 2021, doi: 10.1109/JSEN.2021.3135055.
    [39] R. B. Rusu, N. Blodow, Z. C. Marton, and M. Beetz, "Aligning point cloud views using persistent feature histograms," in 2008 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2008, pp. 3384-3391, doi: 10.1109/IROS.2008.4650967.
    [40] R. B. Rusu, N. Blodow, and M. Beetz, "Fast Point Feature Histograms (FPFH) for 3D registration," in 2009 IEEE International Conference on Robotics and Automation, 2009, pp. 3212-3217, doi: 10.1109/ROBOT.2009.5152473.
    [41] M. Himstedt, J. Frost, S. Hellbach, H. Böhme, and E. Maehle, "Large scale place recognition in 2D LIDAR scans using Geometrical Landmark Relations," in 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2014, pp. 5030-5035, doi: 10.1109/IROS.2014.6943277.
    [42] R. Dubé, D. Dugas, E. Stumm, J. Nieto, R. Siegwart, and C. Cadena, "SegMatch: Segment based place recognition in 3D point clouds," in 2017 IEEE International Conference on Robotics and Automation (ICRA), 2017, pp. 5266-5272, doi: 10.1109/ICRA.2017.7989618.
    [43] Y. Wang, Z. Sun, C.-Z. Xu, S. E. Sarma, J. Yang, and H. Kong, "Lidar iris for loop-closure detection," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 5769-5775.
    [44] G. Kim and A. Kim, "Scan Context: Egocentric Spatial Descriptor for Place Recognition Within 3D Point Cloud Map," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 4802-4809, doi: 10.1109/IROS.2018.8593953.
    [45] H. Wang, C. Wang, and L. Xie, "Intensity Scan Context: Coding Intensity and Geometry Relations for Loop Closure Detection," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 2095-2101, doi: 10.1109/ICRA40945.2020.9196764.
    [46] G. Kim, S. Choi, and A. Kim, "Scan Context++: Structural Place Recognition Robust to Rotation and Lateral Variations in Urban Environments," IEEE Transactions on Robotics, pp. 1-19, 2021, doi: 10.1109/TRO.2021.3116424.
    [47] G. Xue, J. Wei, R. Li, and J. Cheng, "LeGO-LOAM-SC: An Improved Simultaneous Localization and Mapping Method Fusing LeGO-LOAM and Scan Context for Underground Coalmine," Sensors, vol. 22, no. 2, 2022, doi: 10.3390/s22020520.
    [48] G. Grisetti, R. Kümmerle, C. Stachniss, and W. Burgard, "A tutorial on graph-based SLAM," IEEE Intelligent Transportation Systems Magazine, vol. 2, no. 4, pp. 31-43, 2010.
    [49] G. Grisetti, R. Kümmerle, H. Strasdat, and K. Konolige, "g2o: A general framework for (hyper) graph optimization," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2011, pp. 9-13.
    [50] L. Carlone, R. Tron, K. Daniilidis, and F. Dellaert, "Initialization techniques for 3D SLAM: A survey on rotation estimation and its use in pose graph optimization," in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 4597-4604, doi: 10.1109/ICRA.2015.7139836.
    [51] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? The KITTI vision benchmark suite," in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3354-3361, doi: 10.1109/CVPR.2012.6248074.
    [52] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231-1237, 2013, doi: 10.1177/0278364913491297.
    [53] J. Behley et al., "SemanticKITTI: A Dataset for Semantic Scene Understanding of LiDAR Sequences," in 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 9296-9306, doi: 10.1109/ICCV.2019.00939.
    [54] J. Behley et al., "Towards 3D LiDAR-based semantic scene understanding of 3D point cloud sequences: The SemanticKITTI Dataset," The International Journal of Robotics Research, Art. no. 02783649211006735, 2021, doi: 10.1177/02783649211006735.
    [55] L. I. Smith, "A tutorial on principal components analysis," 2002.

Full-text availability: on campus from 2027-08-01; off campus from 2027-08-01. The electronic thesis has not yet been authorized for public release; consult the library catalog for the print copy.