| Author: | 賴柏翔 Lai, Bo-Xiang |
|---|---|
| Thesis title: | 運用感測融合、車道檢測以及高精地圖實現自駕車之定位 Autonomous Vehicle Localization using Sensor Fusion with Lane Marking Detection and High Definition Map |
| Advisor: | 莊智清 Juang, Jyh-Ching |
| Degree: | 碩士 Master |
| Department: | 電機資訊學院 - 電機工程學系 Department of Electrical Engineering |
| Year of publication: | 2020 |
| Academic year of graduation: | 108 |
| Language: | English |
| Number of pages: | 98 |
| Keywords (Chinese): | 自動駕駛車輛、感測器融合、深度學習、影像處理、高精地圖 |
| Keywords (English): | Autonomous Vehicle, Sensor Fusion, Deep Learning, Image Processing, HD map |
Accurate self-localization is a critically important task for autonomous driving and Advanced Driver Assistance Systems (ADAS). Existing methods based on Global Navigation Satellite Systems (GNSS) achieve a positioning accuracy of roughly 2 to 3 meters in open-sky environments, which does not meet the accuracy requirements of autonomous driving. In addition, research on localizing autonomous vehicles with High Definition (HD) maps has become increasingly common in recent years, so the self-localization methods derived from this line of work have become an important research topic. Most of these methods show advantages in aspects such as low cost, flexibility, and robustness.
This thesis proposes a localization method for autonomous vehicles based on multi-sensor fusion. Many existing localization schemes achieve acceptable performance with low-cost sensors such as consumer-grade GPS receivers and cameras. This thesis aims to combine additional information from different sensors to verify, refine, and advance these existing methods. The proposed approach uses a Kalman filter to fuse GPS, monocular camera images, and an HD map, and incorporates lane markings to achieve more precise vehicle localization. The HD map is regarded as an enabler in an autonomous driving system because it provides accurate geographic coordinate information for localization, of which lane lines are an indispensable part.
Several approaches exist for detecting lane lines with a front-view camera. One is to process the images with OpenCV to extract the salient lane-line features and locate them; another is to build a real-time, end-to-end neural network that achieves strong results. In the Kalman filter, the observations consist of two parts: the raw GPS coordinates and the lateral distance between the vehicle and the lane lines. Finally, the HD map is used to relate the lateral distance to the GPS coordinates, forming a linear Kalman filter. By using multiple sensors, we obtain more ways to refine the position estimate. Experimental results show that multi-sensor fusion with a Kalman filter improves the localization performance. We also implement the proposed method on an Nvidia Drive PX2 and obtain real-time localization results.
Accurate self-vehicle localization is an important task for autonomous driving and Advanced Driver Assistance Systems (ADAS). Current Global Navigation Satellite System (GNSS)-based solutions do not provide accuracy better than 2-3 m in open-sky environments. Moreover, map-based localization using High Definition (HD) maps has become an important source of information for intelligent vehicles. Hence, the derived self-localization methods have become an important research topic. Most of these self-localization methods show advantages in various aspects such as low cost, flexibility, and robustness.
This thesis proposes a localization method using multi-sensor fusion for an autonomous vehicle. Existing localization solutions use low-cost sensors such as consumer-level GPS receivers and cameras and can achieve acceptable performance. The thesis intends to combine more information from different sensors to verify, refine, and advance those existing methods. The proposed method utilizes lane information for vehicle localization by using a Kalman filter to fuse the HD map, monocular camera vision, and GNSS data for more accurate vehicle localization. The HD map is treated as an enabler in the autonomous system because it provides precise geographic coordinate information for localization, and lane lines are indispensable information within it.

To detect lane lines with a front-view camera, a few methods are available. One is to use OpenCV to process the image, extracting the important lane-line features and locating them. Another is to build a real-time, end-to-end neural network that achieves competitive results. In the Kalman filter, the observations primarily consist of two sources: the raw GPS coordinates and the lateral distance between the vehicle and the lane lines. Finally, we use the HD map to correlate the local lateral distance with the GPS data and formulate a linear Kalman filter. By using multiple sensors, we can develop more approaches to correct the positioning. Experimental results demonstrate that this multi-sensor fusion using a Kalman filter enhances the localization results. In addition, we implement the proposed method on an Nvidia Drive PX2 and achieve real-time localization results.
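As a rough illustration of the OpenCV route mentioned above, the sketch below shows a classical edge-plus-Hough pipeline that extracts the two lane boundaries and converts them into a lateral offset of the camera from the lane centre. It is only a minimal sketch under simplifying assumptions: the pixel-to-metre scale `M_PER_PX`, the region-of-interest ratio, the slope thresholds, and the function names are illustrative choices, not values or code taken from the thesis.

```python
import cv2
import numpy as np

# Illustrative constants (assumptions, not values from the thesis).
M_PER_PX = 3.7 / 700      # approx. lane width in metres over its width in pixels
ROI_TOP_RATIO = 0.6       # keep only the lower part of the image (road surface)

def detect_lane_segments(bgr_frame):
    """Classical pipeline: Canny edges -> trapezoidal ROI mask -> Hough segments."""
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blur, 50, 150)

    h, w = edges.shape
    mask = np.zeros_like(edges)
    roi = np.array([[(0, h), (0, int(h * ROI_TOP_RATIO)),
                     (w, int(h * ROI_TOP_RATIO)), (w, h)]], dtype=np.int32)
    cv2.fillPoly(mask, roi, 255)
    masked = cv2.bitwise_and(edges, mask)

    return cv2.HoughLinesP(masked, rho=2, theta=np.pi / 180, threshold=50,
                           minLineLength=40, maxLineGap=100)

def estimate_lateral_offset(lines, image_width, image_height):
    """Offset (m) of the camera from the lane centre at the bottom image row;
    positive means the vehicle sits to the right of the lane centre."""
    if lines is None:
        return None
    left_x, right_x = [], []
    for x1, y1, x2, y2 in lines[:, 0]:
        if x1 == x2:
            continue                                  # ignore vertical segments
        slope = (y2 - y1) / (x2 - x1)
        if abs(slope) < 0.3:
            continue                                  # ignore near-horizontal clutter
        x_bottom = x1 + (image_height - y1) / slope   # extrapolate to bottom row
        if slope < 0:
            left_x.append(x_bottom)                   # left boundary slopes negative
        else:
            right_x.append(x_bottom)                  # right boundary slopes positive
    if not left_x or not right_x:
        return None                                   # lane not visible this frame
    lane_centre_px = (np.mean(left_x) + np.mean(right_x)) / 2.0
    return (image_width / 2.0 - lane_centre_px) * M_PER_PX
```

The resulting offset plays the role of the lateral-distance observation that feeds the Kalman filter sketched next.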
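Likewise, the linear Kalman filter described above can be pictured as follows. The sketch assumes a planar constant-velocity state and that, for the matched HD-map lane segment, the map supplies a point on the lane line and its unit normal, so the camera-measured lateral distance becomes a linear function of position. The class name, state layout, and noise values are illustrative assumptions rather than the thesis implementation.

```python
import numpy as np

class LaneAidedKF:
    """Constant-velocity planar Kalman filter with two observation types:
    (1) a raw GPS position fix and (2) the camera-measured lateral distance
    to a lane line whose geometry comes from the HD map."""

    def __init__(self, x0, P0):
        self.x = np.asarray(x0, dtype=float)     # state [px, py, vx, vy] (m, m/s)
        self.P = np.asarray(P0, dtype=float)
        self.Q = np.diag([0.1, 0.1, 0.5, 0.5])   # process noise (assumed values)

    def predict(self, dt):
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                   # constant-velocity transition
        self.x = F @ self.x
        self.P = F @ self.P @ F.T + self.Q * dt

    def _update(self, z, z_pred, H, R):
        y = z - z_pred                           # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ H) @ self.P

    def update_gps(self, gps_xy, sigma=2.5):
        """Raw GPS fix projected into the local planar frame (metres)."""
        H = np.array([[1., 0., 0., 0.],
                      [0., 1., 0., 0.]])
        R = np.eye(2) * sigma**2
        self._update(np.asarray(gps_xy, float), H @ self.x, H, R)

    def update_lateral(self, d_meas, lane_point, lane_normal, sigma=0.2):
        """Camera-measured lateral distance d to a lane line; the HD map gives
        a point on that line and its unit normal, making d a linear function
        of the planar position: d ~= n . (p_vehicle - lane_point)."""
        n = np.asarray(lane_normal, float)
        H = np.array([[n[0], n[1], 0., 0.]])
        d_pred = float(n @ (self.x[:2] - np.asarray(lane_point, float)))
        self._update(np.array([d_meas]), np.array([d_pred]), H,
                     np.array([[sigma**2]]))
```

In use, `predict()` would run at the filter rate, `update_gps()` whenever a GNSS fix arrives, and `update_lateral()` whenever the lane detector returns a valid offset together with a matched HD-map segment.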