
Author: Yeh, Tung-Hua (葉東華)
Title: Vector Map-based Semantic Object-assisted INS/GNSS/Camera/LiDAR All-in-one Engine for Lane Level Vehicular Navigation Applications in Urban Area
(基於向量地圖之語意物件輔助慣性導航/衛星定位/相機/光達一體機應用於都市區域車道級車載導航之研究)
Advisor: Chiang, Kai-Wei (江凱偉)
Degree: Master
Department: Department of Geomatics, College of Engineering
Year of Publication: 2024
Academic Year: 112
Language: English
Pages: 149
Keywords: all-in-one navigation engine, HD vector map, semantic object of opportunity, multi-sensor integration

    With the advancement of artificial intelligence, sensor technology, and high-performance chips, autonomous driving systems have gradually entered daily life, for example as driverless taxis and buses. However, the reliability of these systems across varied environments and unexpected situations remains a significant challenge. To enhance the safety and reliability of autonomous vehicles, stable and high-precision navigation technology plays a crucial role, especially in the urban environments where these vehicles frequently operate. Traditional vehicular navigation relies primarily on Global Navigation Satellite Systems (GNSS). In urban areas, however, satellite signals are often disrupted by reflections from buildings, leading to multipath interference and non-line-of-sight (NLOS) reception. These effects can introduce large positioning errors, degrading the accuracy and robustness of subsequent tasks such as path planning and control.
    To effectively improve positioning accuracy in complex urban environments for advanced autonomous driving, high-definition (HD) maps have been widely utilized in navigation research. Compared to HD point cloud maps, HD vector maps, whether produced automatically or digitized manually, provide rich semantic information while requiring far less data volume and computational resources. This semantic information, carrying high positional accuracy, can in turn provide effective updates to vehicular navigation systems.
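As a concrete illustration of the kind of lightweight semantic layer a vector map can carry, the sketch below stores georeferenced traffic signs and queries those near a position estimate. All identifiers, coordinates, and the map layout are hypothetical, not taken from the thesis:

```python
import math

# Hypothetical semantic layer of a lightweight vector map: georeferenced
# traffic signs in local east/north coordinates (metres). Illustrative only.
SIGN_LAYER = [
    {"id": "TS-001", "east": 120.4, "north": 88.2, "kind": "speed_limit"},
    {"id": "TS-002", "east": 305.9, "north": 91.5, "kind": "no_left_turn"},
    {"id": "TS-003", "east": 310.2, "north": 95.0, "kind": "stop"},
]

def signs_near(east, north, radius):
    """Return mapped signs within `radius` metres of a position estimate,
    nearest first."""
    hits = []
    for sign in SIGN_LAYER:
        d = math.hypot(sign["east"] - east, sign["north"] - north)
        if d <= radius:
            hits.append((d, sign))
    return [s for _, s in sorted(hits, key=lambda t: t[0])]

# Query around a predicted vehicle position near an intersection.
candidates = signs_near(east=308.0, north=93.0, radius=10.0)
print([s["id"] for s in candidates])
```

Because each sign is a single point with attributes rather than a dense point cloud, such a layer stays small enough to be searched exhaustively in real time.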
    Based on this background, this thesis proposes an all-in-one navigation engine combining an Inertial Navigation System (INS), GNSS, a monocular camera, and a solid-state LiDAR. The system utilizes a lightweight vector map and artificial intelligence to recognize traffic signs as semantic objects that assist the navigation system. The INS serves as the core of the navigation engine, integrating sensor data through an Extended Kalman Filter (EKF). The GNSS provides global positional information. The camera not only provides semantic information through object detection but also supplies position and velocity data via Visual Simultaneous Localization and Mapping (V-SLAM), which is further fused with GNSS observations through the refined FGO-Refreshed-SLAM algorithm to maintain a seamless initial position solution even in GNSS-challenged areas. The solid-state LiDAR measures the distance to traffic signs and provides geometric information. Given that traffic signs typically appear only at specific locations such as intersections, this thesis introduces the concept of the semantic object of opportunity (SOO), using a semantic object search algorithm to provide the system with semantic object information from the vector map.
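The filter-level idea behind such a semantic update can be sketched as a single EKF measurement step in which a LiDAR-measured range to a map-referenced traffic sign corrects the predicted position. This is a minimal illustrative sketch under a simplified 2-D state with diagonal covariance and made-up numbers, not the thesis's actual formulation:

```python
import math

# Hedged sketch of one EKF measurement update: a LiDAR range to a traffic
# sign whose coordinates come from the vector map corrects the predicted
# 2-D position (e, n). All numbers are illustrative.
e, n = 100.0, 50.0            # predicted position (m)
p = 4.0                        # predicted variance per axis (m^2), diagonal P
sign_e, sign_n = 130.0, 90.0   # mapped sign position (m)
z = 49.0                       # LiDAR-measured range to the sign (m)
r_var = 0.09                   # range measurement noise variance (m^2)

# Predicted range and measurement Jacobian H = d(range)/d(e, n)
de, dn = sign_e - e, sign_n - n
rng = math.hypot(de, dn)                 # predicted range: 50.0 m
h_e, h_n = -de / rng, -dn / rng          # (-0.6, -0.8)

# Kalman gain for a scalar measurement: K = P H^T / (H P H^T + R)
s = p * (h_e * h_e + h_n * h_n) + r_var  # innovation variance
k_e, k_n = p * h_e / s, p * h_n / s

# Correct the state with the range innovation
innov = z - rng                          # -1.0 m
e += k_e * innov
n += k_n * innov
print(round(e, 3), round(n, 3))
```

The gain pulls the estimate roughly one metre toward the sign along the line of sight, which is exactly the direction in which the range measurement is informative.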
    To validate the positioning results of the proposed all-in-one navigation engine, this thesis evaluates it against reference solutions provided by a high-grade navigation system. The results demonstrate that both visual measurements and semantic object information significantly improve positioning accuracy in complex urban environments. The engine is particularly advantageous in urban areas where GNSS signals alternate between good, disrupted, and fully blocked. The proposed integration framework achieves which-lane accuracy (1.5 m), and even where-in-lane accuracy (0.5 m), across various scenarios, meeting the demands of modern autonomous driving applications.
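The accuracy levels quoted above can be read as a simple classification of horizontal error against a reference trajectory. The 0.5 m and 1.5 m thresholds follow the text; the function name, sample errors, and the "which-road" label for larger errors are illustrative assumptions:

```python
# Hypothetical sketch of the evaluation criterion: horizontal errors (m)
# against a reference solution are binned into where-in-lane (<= 0.5 m)
# and which-lane (<= 1.5 m) accuracy levels. Sample errors are made up.
def accuracy_level(err_m):
    if err_m <= 0.5:
        return "where-in-lane"
    if err_m <= 1.5:
        return "which-lane"
    return "which-road"

errors = [0.3, 0.8, 1.2, 0.4, 2.1]
levels = [accuracy_level(e) for e in errors]
share_lane_level = sum(l != "which-road" for l in levels) / len(errors)
print(levels, share_lane_level)
```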

    Chinese Abstract / Abstract / Acknowledgements / Content / List of Tables / List of Figures
    Chapter 1 Introduction
        1.1 Background and Literature Review
        1.2 Motivation, Objective and Contribution
        1.3 Thesis Outline
    Chapter 2 Fundamentals of Multi-Sensor Integration
        2.1 Reference Frames and Transformations
            2.1.1 Inertial Frame
            2.1.2 Earth-Centered Earth-Fixed Frame
            2.1.3 Navigation Frame
            2.1.4 Body Frame
            2.1.5 Vehicle Frame
            2.1.6 LiDAR Frame
            2.1.7 Camera Frame and Image Frame
        2.2 Inertial Navigation System (INS)
            2.2.1 Inertial Navigation System
            2.2.2 Initial Alignment
            2.2.3 Navigation Equations
        2.3 Global Navigation Satellite System (GNSS)
            2.3.1 Overview of GNSS
            2.3.2 GNSS Measurements and Positioning
            2.3.3 Error Source of GNSS
        2.4 INS/GNSS Integration Schemes
            2.4.1 Kalman Filter
            2.4.2 Loosely Coupled Scheme
            2.4.3 Tightly Coupled Scheme
            2.4.4 Motion Constraints
        2.5 Visual SLAM and Visual-Inertial SLAM
            2.5.1 Overview of Iconic V-SLAM and VI-SLAM
            2.5.2 ORB-SLAM 3 vs. VINS-Mono
    Chapter 3 Proposed Methodology
        3.1 Proposed FGO-Refreshed-SLAM Algorithm
            3.1.1 Refreshed-SLAM
            3.1.2 FGO-Refreshed-SLAM
            3.1.3 Position and Velocity Update
        3.2 HD Vector Map Based Multi-Sensor Integration Scheme
            3.2.1 High-Definition Vector Map
            3.2.2 Traffic Signs Detection
            3.2.3 Proposed SOO Searching Algorithm
            3.2.4 Semantic Dilution of Precision
            3.2.5 Semantic Update
    Chapter 4 Sensor Platform Design and Calibration
        4.1 Proposed All-in-One Navigation Unit
            4.1.1 Hardware Setup
            4.1.2 Time Synchronization
        4.2 Sensor Calibration
            4.2.1 Calibration for Camera and IMU
            4.2.2 Calibration for LiDAR and Camera
            4.2.3 Calibration for LiDAR and Main INS/GNSS Module
    Chapter 5 Experiment Results and Discussion
        5.1 Reference System
        5.2 Experiment Scenarios
            5.2.1 Taiwan Boulevard in Taichung
            5.2.2 Tainan THSR Station in Shalun, Tainan
        5.3 Evaluation of Proposed FGO-Refreshed-SLAM
            5.3.1 Refreshed-SLAM vs. FGO-Refreshed-SLAM
            5.3.2 Compare ORB-SLAM 3 and VINS-Mono based on FR-SLAM
        5.4 Evaluation of Proposed All-in-One Navigation Engine
            5.4.1 Taichung Experiment
            5.4.2 Shalun Experiment 1
            5.4.3 Shalun Experiment 2
    Chapter 6 Conclusion and Future Work
        6.1 Conclusion
        6.2 Future Work
    References

