| 研究生 (Student): | 陳家年 Chen, Jia-Nian |
|---|---|
| 論文名稱 (Thesis Title): | 應用於自駕車之光達相機多物件追蹤實時系統 Real-Time LIDAR–Camera Multi-Object Tracking System for Autonomous Vehicle Application |
| 指導教授 (Advisor): | 莊智清 Juang, Jyh-Ching |
| 學位類別 (Degree): | 碩士 Master |
| 系所名稱 (Department): | 電機資訊學院 電機工程學系 Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| 論文出版年 (Publication Year): | 2020 |
| 畢業學年度 (Academic Year): | 108 |
| 語文別 (Language): | English |
| 論文頁數 (Pages): | 51 |
| 中文關鍵詞 (Chinese Keywords): | 自動駕駛車輛, 多感知融合, 多物件追蹤 |
| 外文關鍵詞 (Keywords): | Autonomous Vehicle, Sensor Fusion, Multi-Object Tracking |
Self-driving cars will profoundly affect how people live; arguably this shift is already underway, as current Level 2 driver-assistance systems have changed the way humans drive on highways. To move toward autonomous driving, the Taiwanese government has invested heavily in self-driving cars and built test fields in various administrative regions.
Application scenarios in Asia differ from those in other regions: roads in Europe and the United States are often straight and most road users are cars, whereas Asian streets are frequently winding, with a large proportion of motorcycles, bicycles, and pedestrians that often weave between lanes. This thesis proposes a multi-object tracking system suited to street scenes in Taiwan. Motorcycles and bicycles are small objects, and as distance increases their reflective area shrinks, so the LiDAR and camera receive less information about them. This thesis therefore combines the advantages of LiDAR and camera to sharpen the feature differences between objects and support long-distance object tracking. Finally, the system is implemented on the NCKU autonomous vehicle, using a low-power AI accelerator as the deep-learning inference engine.