
Graduate Student: Wu, Tzu-Jung (吳姿蓉)
Thesis Title: High-precision LIO Color Mapping System Based on Camera-LiDAR Fusion (基於相機-光達資訊融合之高精度彩色點雲建圖系統)
Advisor: Peng, Chao-Chung (彭兆仲)
Degree: Master
Department: College of Engineering - Department of Aeronautics & Astronautics
Year of Publication: 2025
Graduating Academic Year: 113 (ROC calendar, 2024-2025)
Language: English
Number of Pages: 118
Chinese Keywords: LiDAR-SLAM, LIO, camera-LiDAR information fusion, colored point cloud, point cloud object labeling
English Keywords: LiDAR-SLAM, LIO, camera-LiDAR fusion, colored point cloud, 3D object detection
Access Count: 13 views, 0 downloads
  Table of Contents
  • Abstract
  • Abstract (in Chinese)
  • ACKNOWLEDGEMENTS
  • TABLE OF CONTENTS
  • LIST OF TABLES
  • LIST OF FIGURES
  • CHAPTER 1 INTRODUCTION
    1.1 Research Background
    1.2 Literature Review
      1.2.1 LiDAR Inertial Odometry
      1.2.2 Camera-LiDAR Information Fusion
      1.2.3 Point Cloud Object Detection
    1.3 System Structure
  • CHAPTER 2 Mathematical Preliminaries
    2.1 Euclidean Transform
    2.2 Lie Group and Lie Algebra
      2.2.1 Special Orthogonal Group
      2.2.2 Special Euclidean Group
  • CHAPTER 3 LiDAR Inertial Odometry
    3.1 State Estimation
      3.1.1 Operator Description
      3.1.2 Forward Propagation
      3.1.3 Backward Propagation
      3.1.4 Residual Computation
      3.1.5 Iterated State Update
    3.2 Hierarchical Mapping Strategy
      3.2.1 Global Keyframe Map
      3.2.2 Local Sliding Window Map
      3.2.3 Map Fusion and Matching
    3.3 Experiment Results
  • CHAPTER 4 Point Cloud Colorization
    4.1 Fisheye Camera Calibration
    4.2 Fisheye Camera Model
      4.2.1 Fisheye Camera Calibration
    4.3 Camera-LiDAR Calibration
      4.3.1 Feature Extraction
      4.3.2 Feature Matching
    4.4 Undistortion Lookup Table Generation
    4.5 Experiment and Result
  • CHAPTER 5 Object Labeling in Point Cloud
    5.1 Image-Based Object Detection in Point Cloud
    5.2 Adaptive DBSCAN Algorithm
    5.3 Object Bounding Box Tracking behind the Camera FOV
    5.4 MVEE Algorithm
    5.5 Experiment and Result
      5.5.1 Evaluation of Adaptive DBSCAN Clustering
      5.5.2 Object Labeling and Semantic Representation
  • CHAPTER 6 Conclusion and Future Work
  • REFERENCES


    On-campus access: open to the public from 2030-08-20
    Off-campus access: open to the public from 2030-08-20
    The electronic thesis has not yet been authorized for public release; for the print copy, please consult the library catalog.