| 研究生 (Author) | 黃諺恩 Huang, Yen-En |
|---|---|
| 論文名稱 (Title) | 實現自駕車的即時精準導航:發展適用於Autoware之強韌複合導航系統 (Towards Robust and Universal Real-Time Navigation for Autonomous Vehicles: Development of a Resilient Hybrid Navigation System for Autoware Platforms) |
| 指導教授 (Advisor) | 江凱偉 Chiang, Kai-Wei |
| 學位類別 (Degree) | 碩士 Master |
| 系所名稱 (Department) | 工學院 - 測量及空間資訊學系 (Department of Geomatics, College of Engineering) |
| 論文出版年 (Year of Publication) | 2024 |
| 畢業學年度 (Academic Year) | 112 |
| 語文別 (Language) | 英文 English |
| 論文頁數 (Pages) | 182 |
| 中文關鍵詞 (Keywords, Chinese) | EGI慣性導航系統、Autoware、自動駕駛、高精地圖、即時導航定位 |
| 外文關鍵詞 (Keywords, English) | EGI Inertial Navigation System, Autoware, Autonomous Driving, HD Maps, Real-Time Navigation |
In recent years, the electric vehicle (EV) industry has grown rapidly, with sales surpassing ten million units in 2022. EVs, which function much like mobile computer systems, are moving quickly toward autonomous driving, driven by technologies such as 5G communications and artificial intelligence (AI). Autonomous driving has become a definitive trend in future transportation, with a shift from assisted driving toward fully autonomous driving on the horizon. As autonomous driving advances to higher levels of automation, the precision and accuracy of positioning, navigation, and timing must surpass what traditional techniques deliver. Consequently, sensors that provide additional navigation information, such as LiDAR, cameras, radar, and even ultrasonic devices, have become integral to navigation systems. Each sensor type has its limitations, however, such as sensitivity to lighting conditions for cameras and low resolution for radar, so multi-sensor fusion frameworks have become the prevailing approach to autonomous vehicle navigation. Renowned autonomous-driving companies such as Waymo and Cruise build on such multi-sensor architectures, and these teams have also developed a range of autonomous driving platforms, a notable example being Tier IV's Autoware.
Autoware is an open-source software stack whose localization is primarily LiDAR-based. It is capable of high-precision positioning, but the LiDAR-centric algorithms have inherent limitations: in high-speed driving, in signal-occluded areas, and in feature-sparse environments, LiDAR-aided navigation can easily fail. Autoware's initialization scheme can likewise cause its point cloud matching to fail. Most critically, the architecture depends heavily on point cloud maps, whose production cost and accuracy are difficult to balance in the short term. Against this background, this study proposes a system framework and a multi-sensor integration algorithm that fuses inertial navigation, satellite positioning, LiDAR, and high-definition (HD) maps through an extended Kalman filter. The approach is implemented under an Autoware-compatible framework that can run on autonomous vehicles, with the aim of resolving the issues Autoware currently faces and verifying the feasibility of the proposed strategies. To cover the full range of operating conditions, four test scenarios were selected and examined in depth, spanning open, semi-occluded, and occluded environments, with Autoware's localization module algorithm serving as the baseline for comparison.
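To make the loosely coupled fusion described above concrete, the following is a minimal sketch of an error-state Kalman filter update in Python. The nine-state layout, the matrix names, and the `update_position` helper are illustrative assumptions for this sketch, not the thesis's actual implementation; any absolute position fix (an RTK GNSS solution, an NDT match against a point cloud map, or an HD-map-constrained position) can be fused through the same measurement equation.

```python
# Minimal sketch of a loosely coupled error-state Kalman filter update.
# Assumption: the 9-element error state holds corrections to the INS
# solution, ordered [position (3), velocity (3), attitude (3)].
import numpy as np

N = 9  # error-state dimension

def predict(x, P, F, Q, dt):
    """Propagate the error state and its covariance over one INS epoch."""
    Phi = np.eye(N) + F * dt              # first-order transition matrix
    return Phi @ x, Phi @ P @ Phi.T + Q * dt

def update_position(x, P, z_pos, ins_pos, R):
    """Fuse an absolute 3-D position fix as a loose-coupling measurement."""
    H = np.zeros((3, N))
    H[:, :3] = np.eye(3)                  # the fix observes the position error
    y = (z_pos - ins_pos) - H @ x         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(N) - K @ H) @ P
    return x, P
```

In practice, a chi-square gate on the innovation `y` against `S` is a common way to reject faulty GNSS or LiDAR fixes before they corrupt the filter, which is one route toward the robustness this abstract targets.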
For cost reasons, the affordable Velodyne VLP-16 LiDAR and a custom-built low-cost EGI inertial navigation system, the EGI-370, were selected. Navigation results were validated by comparing each computed trajectory against a reference trajectory derived from a high-grade navigation system. The results show that the proposed system framework and core algorithm deliver better navigation performance than the baseline, with significant improvements in every scenario. The framework is also assessed as suitable for use in autonomous vehicles, achieving the lane-level (0.5 m) navigation accuracy requirement in all scenarios.
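As an illustration of this validation step, the sketch below interpolates a reference trajectory to the solution epochs and reports the horizontal RMSE together with the fraction of epochs meeting the 0.5 m lane-level bound. The toy trajectories and the `horizontal_errors` helper are assumptions for the example, not the thesis's data or evaluation code.

```python
# Illustrative trajectory evaluation, assuming both trajectories are
# time-stamped east/north coordinates in metres.
import numpy as np

def horizontal_errors(t_ref, en_ref, t_sol, en_sol):
    """Interpolate the reference to the solution epochs; return per-epoch
    horizontal (2-D) error norms."""
    e = np.interp(t_sol, t_ref, en_ref[:, 0])
    n = np.interp(t_sol, t_ref, en_ref[:, 1])
    return np.linalg.norm(en_sol - np.column_stack([e, n]), axis=1)

# Toy data standing in for the reference and test trajectories.
t_ref = np.linspace(0.0, 10.0, 101)
en_ref = np.column_stack([t_ref * 5.0, np.zeros_like(t_ref)])   # 5 m/s east
t_sol = np.linspace(0.0, 10.0, 51)
en_sol = np.column_stack([t_sol * 5.0 + 0.1, np.full_like(t_sol, -0.2)])

errs = horizontal_errors(t_ref, en_ref, t_sol, en_sol)
print(f"horizontal RMSE: {np.sqrt(np.mean(errs ** 2)):.3f} m")
print(f"epochs within lane level (<0.5 m): {np.mean(errs < 0.5):.1%}")
```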