
Graduate Student: Chen, Jia-Nian (陳家年)
Thesis Title: Real-Time LIDAR–Camera Multi-Object Tracking System for Autonomous Vehicle Application (應用於自駕車之光達相機多物件追蹤實時系統)
Advisor: Juang, Jyh-Ching (莊智清)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2020
Graduation Academic Year: 108 (ROC calendar; 2019–2020)
Language: English
Pages: 51
Keywords: Autonomous Vehicle, Sensor Fusion, Multi-Object Tracking
Abstract: Self-driving cars will profoundly affect how people live; arguably the change is already underway, as current Level 2 driver-assistance systems have altered how humans drive on highways. To move toward autonomous driving, the Taiwanese government is investing heavily in self-driving vehicles and has built test fields in administrative regions across the island.

Application scenarios in Asia differ from those in other regions. Roads in Europe and the United States tend to be straight, and most road users are cars; Asian streets are often winding, with a large proportion of motorcycles, bicycles, and pedestrians whose paths frequently weave and cross. This thesis proposes a multi-object tracking system suited to street scenes in Taiwan. Motorcycles and bicycles are small objects, and as distance increases their reflective area shrinks, so the LiDAR and the camera receive less information about them. The thesis combines the advantages of LiDAR and camera to make object features more distinguishable and thereby support long-distance object tracking. Finally, the system is implemented on the NCKU autonomous vehicle, with a low-power AI accelerator serving as the deep-learning inference engine.
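The abstract, together with the table of contents below, outlines a tracking-by-detection pipeline: object detection, data association, track birth and death, and Kalman filtering. As a rough illustration only, and not the author's implementation, the following Python sketch shows one such fusion cycle under assumed parameters: LiDAR 3-D centroids carrying camera-derived class labels are matched to live tracks by Hungarian assignment, and each track is smoothed with a constant-velocity Kalman filter. The names `Track` and `step`, the cost function, and all constants are hypothetical.

```python
# Minimal sketch of a tracking-by-detection cycle of the kind the abstract
# describes. NOT the thesis code: class names, noise constants, the gate,
# and the cost function are illustrative assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

class Track:
    """Constant-velocity Kalman filter over a 3-D LiDAR centroid."""
    def __init__(self, centroid, label, dt=0.1):
        self.x = np.hstack([centroid, np.zeros(3)])        # [x y z vx vy vz]
        self.P = np.eye(6)                                 # state covariance
        self.F = np.eye(6)
        self.F[:3, 3:] = dt * np.eye(3)                    # motion model
        self.H = np.hstack([np.eye(3), np.zeros((3, 3))])  # observe position only
        self.Q = 0.01 * np.eye(6)                          # process noise (guess)
        self.R = 0.10 * np.eye(3)                          # measurement noise (guess)
        self.label = label                                 # camera-derived class
        self.misses = 0                                    # consecutive unmatched frames

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        return self.x[:3]                                  # predicted centroid

    def update(self, z):
        S = self.H @ self.P @ self.H.T + self.R            # innovation covariance
        K = self.P @ self.H.T @ np.linalg.inv(S)           # Kalman gain
        self.x = self.x + K @ (z - self.H @ self.x)
        self.P = (np.eye(6) - K @ self.H) @ self.P
        self.misses = 0

def step(tracks, detections, labels, gate=2.0, max_misses=5):
    """One frame: predict, associate, update, then track birth and death."""
    preds = [t.predict() for t in tracks]                  # predict each track once
    matched_t, matched_d = set(), set()
    if preds and len(detections) > 0:
        # Hungarian assignment on Euclidean distance between predicted
        # positions and LiDAR centroids; a gate rejects implausible matches.
        cost = np.array([[np.linalg.norm(p - d) for d in detections]
                         for p in preds])
        for r, c in zip(*linear_sum_assignment(cost)):
            if cost[r, c] < gate:
                tracks[r].update(np.asarray(detections[c], dtype=float))
                matched_t.add(r)
                matched_d.add(c)
    for i, t in enumerate(tracks):                         # unmatched tracks age
        if i not in matched_t:
            t.misses += 1
    tracks = [t for t in tracks if t.misses <= max_misses] # death: drop stale tracks
    for j, d in enumerate(detections):                     # birth: unmatched detections
        if j not in matched_d:
            tracks.append(Track(np.asarray(d, dtype=float), labels[j]))
    return tracks
```

A real system would presumably fold the camera features into the association cost (the table of contents lists a Features Embedding step) rather than rely on centroid distance alone, and would tune dt, the noise covariances, and the gate to the sensor suite.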

Table of Contents:
Abstract (Chinese); Abstract; Acknowledgements; Contents; List of Tables; List of Figures; List of Abbreviations
Chapter 1 Introduction
  1.1 Motivation
  1.2 Literature Review
  1.3 Contributions
  1.4 Thesis Overview
Chapter 2 System Overview
  2.1 NCKU Autonomous Vehicle Hardware Setup
    2.1.1 Computing and Connecting Hardware
    2.1.2 Sensors Introduction
  2.2 NCKU Autonomous Vehicle Software Setup
    2.2.1 Robot Operating System
    2.2.2 NVIDIA® DriveWorks
    2.2.3 Kneron® SDK
  2.3 Coordinate Systems and Transformation
    2.3.1 Coordinate Transformation
    2.3.2 Coordinate Systems
Chapter 3 System Architecture
  3.1 Detection Module
    3.1.1 YOLOv3-tiny Object Detection
    3.1.2 Voxelization of the Point Cloud
    3.1.3 Features Embedding
  3.2 Tracking Module
    3.2.1 Data Association
    3.2.2 Birth and Death
    3.2.3 Kalman Filter
Chapter 4 Experiment
  4.1 Experiment Scenarios
  4.2 Experiment Computing Unit and Sensors
  4.3 Experiment Result
    4.3.1 Single Target
    4.3.2 Two Targets
    4.3.3 Three Targets
Chapter 5 Conclusions and Future Works
  5.1 Conclusions
  5.2 Future Works
References


Full text available on campus from 2023-08-20; off campus from 2023-08-20.