
Author: Huang, Shih-Hui (黃詩惠)
Title: Autonomous Vehicle Environmental Perception using Camera Bird's Eye View and High Definition Map (運用相機鳥瞰圖及高精地圖於自駕車之環境知覺)
Advisor: Juang, Jyh-Ching (莊智清)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2021
Academic Year of Graduation: 109 (2020-2021)
Language: English
Number of Pages: 55
Keywords (Chinese): 自動駕駛, 環境感知, 高精地圖, 反向透視映射, 電腦視覺
Keywords (English): Autonomous Vehicle, Environmental Perception, HD Map, Inverse Perspective Mapping, Computer Vision
Access Count: 118 views, 7 downloads
Abstract (Chinese):
    In recent years, autonomous driving has become the development trend of vehicle technology, and vehicles are increasingly equipped with Advanced Driver Assistance Systems (ADAS) of varying levels to assist the driver. For both autonomous driving and ADAS, perceiving and reconstructing the surrounding environment is an indispensable task: a self-driving car must understand its surroundings as fully as possible, including the distance to the vehicle ahead, whether obstacles are nearby, and the vehicle's own position within the lane, so that the decision-making system can react accordingly.
    At present, the sensors most self-driving cars use for mapping are optical cameras and lidar. Lidar is extremely accurate and offers great advantages in building a 3D model of the surrounding geographic environment and recognizing the shapes of objects, but it is expensive, must be mounted outside the vehicle body, and its waterproofing must be considered carefully. In recent years, High Definition maps (HD maps) have also become increasingly common in autonomous-driving research. HD maps are a new type of map built specifically for machines in anticipation of the autonomous-driving era, designed to assist them in driving and navigation decisions; they offer centimeter-level accuracy and rich road information, providing vehicles with abundant geographic coordinate data. This thesis proposes using a low-cost sensor, a camera, and combining its image information with an HD map, so that the road information provided by the HD map replaces the high-cost lidar and the surrounding environment can be reconstructed.
    This thesis combines the camera's color images with the highly accurate HD map to reconstruct the surrounding environment. In the proposed method, the camera image is processed with OpenCV using Inverse Perspective Mapping (IPM) to transform the front view into a bird's eye view, and the road information extracted from the HD map is annotated onto the bird's eye view to reconstruct the surrounding environment.

Abstract (English):
    In recent years, self-driving has become the development trend of vehicle technology, and vehicles are increasingly equipped with Advanced Driver Assistance Systems (ADAS) to support drivers. For self-driving and ADAS, perceiving and reconstructing the surrounding environment is an essential task.
    To achieve this purpose, most methods use two kinds of sensors: cameras and lidar. Lidar offers high accuracy and great advantages in building the surrounding 3D geographic environment and identifying the appearance of objects, but it is expensive, must be mounted outside the vehicle, and its waterproofing must be considered carefully. Recently, High Definition (HD) maps have become more and more common in self-driving research. HD maps are designed for machines, to assist them in localization and in making driving and navigation decisions; they offer centimeter-level precision and abundant road information, providing vehicles with rich geographic coordinate data. This thesis combines the images obtained from a low-cost sensor, a camera, with HD maps, using the abundant road information provided by the HD maps in place of a high-cost lidar.
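    As a concrete illustration of how geographic coordinates from an HD map can be brought into the vehicle's local frame, the following is a minimal sketch assuming the map stores lane points as WGS84 latitude/longitude/height and that the vehicle pose defines a local east-north-up (ENU) origin; the pymap3d library and every coordinate value here are illustrative assumptions, not data from the thesis.

```python
# Minimal sketch: projecting hypothetical HD-map lane points (WGS84)
# into a local ENU frame centered on the vehicle, using pymap3d.
import pymap3d as pm

# Hypothetical vehicle position used as the ENU origin: lat, lon, height (m).
lat0, lon0, h0 = 22.9975, 120.2210, 10.0

# Hypothetical lane-boundary points from an HD map: (lat, lon, height).
lane_points = [
    (22.9976, 120.2211, 10.0),
    (22.9977, 120.2212, 10.0),
]

# Convert each geodetic point to east/north/up meters relative to the car.
for lat, lon, h in lane_points:
    e, n, u = pm.geodetic2enu(lat, lon, h, lat0, lon0, h0)
    print(f"east={e:.2f} m, north={n:.2f} m, up={u:.2f} m")
```

    The resulting metric offsets can then be scaled from meters to pixels to overlay the map geometry on an image such as the bird's eye view described below.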
    In this thesis, we combine the camera's color images with HD maps to reconstruct the surrounding environment. The proposed method uses OpenCV to perform Inverse Perspective Mapping (IPM), transforming the front-view image into a bird's-eye-view image; the road information extracted from the HD maps is then marked on the bird's eye view, achieving the goal of reconstructing the surrounding environment.
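    To make the IPM step concrete, here is a minimal sketch using OpenCV's perspective-warp functions, followed by drawing a hypothetical HD-map lane line onto the resulting bird's eye view. The input file name, the four point correspondences, and the lane pixels are made-up placeholders; in practice they would come from the calibrated camera and the coordinate transform sketched above.

```python
# Minimal IPM sketch with OpenCV: warp a front-view road image into a
# bird's eye view via a four-point homography, then annotate it.
import cv2
import numpy as np

img = cv2.imread("front_view.png")  # hypothetical input frame
h, w = img.shape[:2]

# Four points on the road plane in the front view (a trapezoid narrowing
# toward the horizon) and where they should land in the bird's eye view.
src = np.float32([[w * 0.45, h * 0.60], [w * 0.55, h * 0.60],
                  [w * 0.90, h * 0.95], [w * 0.10, h * 0.95]])
dst = np.float32([[w * 0.25, 0], [w * 0.75, 0],
                  [w * 0.75, h], [w * 0.25, h]])

# Homography from front-view pixels to bird's-eye-view pixels.
M = cv2.getPerspectiveTransform(src, dst)
bev = cv2.warpPerspective(img, M, (w, h))

# Hypothetical lane points already converted to BEV pixel coordinates
# (e.g., ENU meters scaled to pixels); drawn as the HD-map annotation.
lane_px = np.int32([[w * 0.3, 0], [w * 0.3, h]])
cv2.polylines(bev, [lane_px], isClosed=False, color=(0, 255, 0), thickness=2)

cv2.imwrite("bird_eye_view.png", bev)
```

    Note that IPM assumes the road is planar: points on the road plane map correctly to the top-down view, while objects above the plane appear stretched in the warped image.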

Contents:
    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Contents
    List of Tables
    List of Figures
    List of Abbreviations
    Chapter 1  Introduction
        1.1  Motivation and Objectives
        1.2  Thesis Contribution
        1.3  Thesis Overview
    Chapter 2  System Overview and Data Analysis
        2.1  System Architecture
        2.2  Sensor and Data Analysis
            2.2.1  HD Maps
            2.2.2  Image Data
        2.3  Coordinate Systems and Transform
            2.3.1  Coordinate Systems
            2.3.2  Coordinate Transform
    Chapter 3  Image Processing and Inverse Perspective Mapping
        3.1  Monocular Camera Calibration
        3.2  Inverse Perspective Mapping (IPM)
    Chapter 4  Implementation and Experiments
        4.1  System Configuration
            4.1.1  Software
            4.1.2  Hardware Platform
        4.2  Image Format Conversion
        4.3  HD Maps Coordinate Transform
        4.4  Experiments
            4.4.1  Image Data Combined with HD Maps
            4.4.2  Image Data for IPM Processing
            4.4.3  IPM Image Combined with HD Maps
        4.5  Discussion
    Chapter 5  Conclusion and Future Work
        5.1  Conclusion
        5.2  Future Work
    References


    Full-text availability: on campus from 2023-08-17; off campus from 2023-08-17.