
Author: Li, Jung-Cheng (李濬承)
Thesis Title: Landmark Based Scale Correction of Visual Inertial Odometry for a UAV in non-GPS Environment (應用已知地標實現無人機於無GPS環境之視覺慣性里程之尺度修正)
Advisor: Lai, Ying-Chih (賴盈誌)
Degree: Master
Department: Department of Aeronautics & Astronautics, College of Engineering
Year of Publication: 2021
Graduation Academic Year: 109 (ROC calendar, 2020–2021)
Language: English
Number of Pages: 70
Chinese Keywords: Scale Correction, Visual Inertial Odometry, Sensor Fusion, Non-GPS Navigation
English Keywords: Scale Correction, Visual Inertial Odometry, Data Fusion, Non-GPS Navigation
Chinese Abstract (translated): With advances in technology, UAVs are being applied in many fields, but some environments cannot rely on conventional GPS navigation, such as indoor scenes and bridge inspection. Visual inertial odometry (VIO) is a popular research topic for GPS-denied navigation; however, related studies show that VIO suffers from scale error and long-term drift. This study proposes a method in which, during initialization, only the four corner points of a known landmark, whose spacing is known, are detected, and a relatively accurate position is derived from them. The position estimated by VIO and the relatively accurate position estimated from the known landmark are then used to correct the scale by the least-squares method, and an extended Kalman filter fuses the estimates: the prediction phase integrates the inertial measurement unit, and the measurement phase is divided into two parts. While the scale is being estimated, and when the known landmark is seen again on the return leg, the position estimated from the known landmark is used for correction to eliminate long-term drift; after the scale estimation is completed, the scale-corrected VIO is used for correction. Preliminary verification was first performed in ground tests: a self-assembled UAV equipped with a visual-inertial sensor was used to collect data, and a high-precision RTK receiver was integrated for trajectory verification. The method was finally applied in flight tests, and the experimental results show that it effectively solves the scale and long-term drift problems of VIO.

Abstract: With the rapid development of technology, unmanned aerial vehicles (UAVs) have become increasingly popular and are applied in many areas. However, in some environments GPS is unavailable or suffers from signal outages, such as indoor and bridge inspections. Visual inertial odometry (VIO) is a popular solution for non-GPS navigation, but, as discussed in other studies, it suffers from scale error and long-term drift. This study proposes a method to correct the position errors of VIO without the help of GPS information. During initialization, only the corners of a known landmark are detected and used to improve the VIO positioning results. The position of the UAV is estimated by VIO, while a more accurate position is estimated from the known landmark; comparing the two with the least-squares method yields the scale. An extended Kalman filter (EKF) then fuses the estimates: the inertial measurement unit (IMU) is integrated in the prediction phase, and the measurement update is divided into two parts. While the scale is being estimated, or when the UAV returns near the takeoff location, the position estimated from the known landmark is used as the measurement to eliminate long-term drift; after the scale estimation is completed, the scale-corrected VIO is used instead. Preliminary verification was first conducted on the ground. A self-assembled UAV equipped with a visual-inertial sensor was used to collect data and was integrated with a high-precision RTK receiver for trajectory verification. The method was then applied in flight tests, and the experimental results show that it effectively solves the scale error and long-term drift problems of VIO.
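The landmark step described in the abstract detects the four corners of a marker with known dimensions and solves a Perspective-n-Point (PnP) problem for an absolute camera position (Sections 2.3.1 and 2.3.2 in the outline below). The following is a minimal sketch of one way to do this, assuming an ArUco-style square fiducial and the OpenCV ArUco API as it existed before the 4.7 refactor (opencv-contrib-python); the intrinsics, dictionary choice, and marker size are illustrative placeholders, not the thesis's actual values.

```python
import cv2
import numpy as np

# Illustrative camera intrinsics; real values come from calibration.
K = np.array([[600.0,   0.0, 320.0],
              [  0.0, 600.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist = np.zeros(5)          # assume the image is already undistorted

MARKER_SIZE = 0.40          # assumed side length of the landmark (metres)

# 3-D corner coordinates in the marker's own frame, ordered to match the
# detector output (top-left, top-right, bottom-right, bottom-left).
obj_pts = np.array([[-MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
                    [ MARKER_SIZE / 2,  MARKER_SIZE / 2, 0.0],
                    [ MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0],
                    [-MARKER_SIZE / 2, -MARKER_SIZE / 2, 0.0]],
                   dtype=np.float32)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def camera_position_from_marker(gray):
    """Return the camera position in the marker (world) frame, or None."""
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return None
    img_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)          # marker-to-camera rotation
    return (-R.T @ tvec).ravel()        # camera position in marker frame
```

solvePnP returns the marker-to-camera transform; inverting it gives the camera (and hence UAV) position in the marker frame, which serves as the drift-free reference position.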
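The scale factor itself comes from a least-squares fit between the VIO positions and the landmark-based positions (Section 2.3.3). Under the simplifying assumption that both trajectories are expressed in a common frame with a common origin, the fit has a one-line closed form; this sketch is illustrative rather than the thesis's exact formulation.

```python
import numpy as np

def estimate_scale(p_vio, p_ref):
    """Closed-form least-squares scale between the VIO trajectory and the
    landmark-based reference trajectory.

    p_vio, p_ref: (N, 3) arrays of positions in a common frame with a
    common origin. Minimises sum ||s * p_vio - p_ref||^2 over s, whose
    stationary point is s = sum(p_vio . p_ref) / sum(p_vio . p_vio).
    """
    num = float(np.sum(p_vio * p_ref))
    den = float(np.sum(p_vio * p_vio))
    return num / den
```

Once s is estimated, the scale-corrected VIO position is simply s * p_vio.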
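For the fusion stage, the abstract describes an EKF whose prediction integrates the IMU and whose measurement update switches between the landmark-based position and the scale-corrected VIO position. The sketch below is a deliberately reduced version, assuming a position-velocity state and IMU acceleration already rotated into the navigation frame and gravity-compensated; with this linear measurement model the update coincides with the standard Kalman form, whereas the thesis's filter also carries attitude states.

```python
import numpy as np

class PositionEKF:
    """Reduced filter with state x = [position; velocity] (navigation frame)."""

    def __init__(self):
        self.x = np.zeros(6)                               # [p; v]
        self.P = np.eye(6)                                 # state covariance
        self.Q = np.eye(6) * 1e-3                          # process noise (illustrative)
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # position is measured

    def predict(self, accel, dt):
        """Prediction: propagate with the gravity-compensated IMU acceleration."""
        F = np.eye(6)
        F[:3, 3:] = np.eye(3) * dt
        self.x = F @ self.x
        self.x[3:] += accel * dt
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, meas_var):
        """Update: z is a 3-D position from either the landmark/PnP
        estimate or the scale-corrected VIO."""
        R = np.eye(3) * meas_var
        S = self.H @ self.P @ self.H.T + R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
```

During scale estimation, and when the UAV returns within view of the landmark, update() would be fed the PnP position; once the scale s is fixed, it would be fed s * p_vio instead.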

Table of Contents:
Chinese Abstract I
Abstract II
Acknowledgements IV
Contents V
List of Tables VII
List of Figures VIII
1 Introduction 1
  1.1 Research Background 1
  1.2 Motivation and Objectives 4
  1.3 Literature Review 4
  1.4 Thesis Outline 7
2 Methodology 8
  2.1 Research Process 8
  2.2 Visual Inertial Odometry 9
    2.2.1 Coordinate Definition 9
    2.2.2 Architecture of VIO 10
  2.3 Scale Estimation with Landmark Assistance 15
    2.3.1 Marker Detection 16
    2.3.2 Perspective-n-Point 17
    2.3.3 Least Squares 18
  2.4 Architecture of Sensor Fusion 18
    2.4.1 Extended Kalman Filter 20
3 System Experiment Setup 24
  3.1 System Overview 25
  3.2 System Setup 28
  3.3 Results 34
4 Ground Test 36
  4.1 Objective and Experiment Design 36
  4.2 Evaluation of the Ground Test 44
  4.3 Results and Discussion 52
5 Flight Test 53
  5.1 Objective and Experiment Design 53
  5.2 Evaluation of the Flight Test 55
  5.3 Results and Discussion 65
6 Conclusion 66
7 References 68


Full-Text Availability: open access from 2023-10-14 (on campus and off campus)