| Author: | 王昱翔 Wang, Yu-Hsiang |
|---|---|
| Thesis Title: | 應用多感測器融合於自動駕駛之聯合式號誌辨識與車輛定位 (Application of Multi-Sensor Fusion for Cascade Landmark Recognition and Vehicle Localization for Autonomous Driving) |
| Advisor: | 莊智清 Juang, Jyh-Ching |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science, Department of Electrical Engineering |
| Year of Publication: | 2020 |
| Academic Year: | 108 (ROC calendar) |
| Language: | English |
| Pages: | 83 |
| Keywords (Chinese): | 自動駕駛、傳感器融合、導航定位、燈號辨識、電腦視覺 |
| Keywords (English): | Autonomous Driving, Sensor Fusion, Positioning and Navigation, Traffic Light Recognition, Computer Vision |
| Access Count: | Views: 109; Downloads: 0 |
A seamless vehicle localization capability with high accuracy and integrity is essential for the safe operation of automated vehicles. This study integrates a map-matching-based detection scheme with a low-cost Global Navigation Satellite System (GNSS) / inertial measurement unit (IMU) system to enhance localization performance in challenging environments. Existing vehicle navigation systems typically use a GNSS/IMU suite to provide position, velocity, and attitude. Such a suite is subject to the error characteristics of the IMU and the operating environment of the GNSS: if the GNSS signals are degraded or unavailable for a long period and the IMU is not well calibrated, severe navigation errors can occur. Notably, such challenging environments often contain landmarks such as traffic lights. Traffic light measurements detected in camera images are matched against the corresponding entries in a high-definition (HD) map to yield correction information for the extended Kalman filter (EKF) used in navigation processing.
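To make the map-matching correction concrete, the sketch below shows how a single matched landmark could update an EKF. This is a minimal illustration under assumed conventions (a planar state of position and heading, and a range/bearing measurement of the traffic light derived from the camera), not the implementation used in the thesis; all function and variable names are hypothetical.

```python
import numpy as np

def ekf_landmark_update(x, P, z, landmark_xy, R):
    """EKF measurement update using range/bearing to a mapped landmark.

    x:           state [px, py, heading] from the GNSS/IMU prediction
    P:           3x3 state covariance
    z:           measured [range, bearing] to the traffic light (camera-derived)
    landmark_xy: surveyed landmark position taken from the HD map
    R:           2x2 measurement noise covariance
    """
    dx, dy = landmark_xy[0] - x[0], landmark_xy[1] - x[1]
    q = dx**2 + dy**2
    r = np.sqrt(q)
    # Predicted measurement: range and bearing relative to vehicle heading.
    z_hat = np.array([r, np.arctan2(dy, dx) - x[2]])
    # Jacobian of the measurement model with respect to the state.
    H = np.array([[-dx / r, -dy / r,  0.0],
                  [ dy / q, -dx / q, -1.0]])
    y = z - z_hat
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi  # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    P_new = (np.eye(3) - K @ H) @ P
    return x_new, P_new
```

The bearing-innovation wrapping matters in practice: a raw angle difference near ±π would otherwise produce a large spurious correction.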
Traffic light recognition is therefore a critical issue for the availability of the map-matching-based localization method, and it is also an important step in deploying autonomous driving systems. However, the reliability of traffic light recognition degrades considerably in the presence of unrelated objects and difficult illumination conditions. To overcome these problems, this study employs an auxiliary detection method: the information in the HD map serves as prior knowledge to generate regions of interest (ROIs) in the image, and a deep learning model then detects targets within these candidate areas. This approach significantly reduces false positives and successfully identifies the current state of the traffic lights.
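The ROI-generation step can be sketched as a straightforward camera projection. The following is a hedged illustration assuming a pinhole camera model; `pad_px`, the frame names, and the helper itself are assumptions made for the example, not values from the thesis.

```python
import numpy as np

def project_map_light_to_roi(p_map, T_cam_map, K, pad_px=40):
    """Project a traffic light's HD-map position into the image and
    return a padded region of interest (ROI) for the detector.

    p_map:     3-vector, traffic light position in map coordinates
    T_cam_map: 4x4 homogeneous transform, map frame -> camera frame
               (built from the current pose estimate and the extrinsics)
    K:         3x3 pinhole camera intrinsic matrix
    pad_px:    half-size of the ROI around the projected point (assumed)
    """
    p_cam = T_cam_map @ np.append(p_map, 1.0)
    if p_cam[2] <= 0:            # light is behind the camera
        return None
    uvw = K @ p_cam[:3]
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]
    # Grow the window to absorb localization and calibration error.
    return (int(u - pad_px), int(v - pad_px),
            int(u + pad_px), int(v + pad_px))

# Hypothetical usage: crop the ROI and pass only that patch to the
# deep-learning detector, which prunes unrelated objects by construction.
# roi = project_map_light_to_roi(p_map, T_cam_map, K)
# if roi is not None:
#     x0, y0, x1, y1 = roi
#     detections = detector(image[y0:y1, x0:x1])  # detector is assumed
```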
This thesis investigates the fusion of a low-cost GNSS receiver, an IMU, a vehicle odometer, and the results of matching monocular camera measurements against the information in an HD map to render a seamless navigation solution. The system is implemented on a real vehicle and tested at Taiwan CAR Lab.
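Read together, the components above suggest a fusion loop of the following shape. This skeleton is purely illustrative: every object and method name is a placeholder for the corresponding component described in the abstract (strapdown IMU prediction, loosely coupled GNSS and odometer updates, and the HD-map-aided traffic light correction), not an interface from the thesis.

```python
# Hypothetical fusion loop tying the pieces together; each update_* call
# stands in for a standard EKF measurement update with its own model.
def fusion_loop(ekf, sensors, hd_map, detector):
    for stamp, imu in sensors.imu_stream():
        ekf.predict(imu, stamp)                          # strapdown propagation
        if (gnss := sensors.gnss_at(stamp)) is not None:
            ekf.update_position(gnss)                    # loosely coupled fix
        if (odo := sensors.odometer_at(stamp)) is not None:
            ekf.update_speed(odo)                        # wheel-speed constraint
        if (img := sensors.image_at(stamp)) is not None:
            roi = hd_map.project_lights(ekf.pose(), img)  # ROI from the map
            match = detector.detect_and_match(img, roi)   # light recognition
            if match is not None:
                ekf.update_landmark(match)               # map-matching correction
        yield stamp, ekf.pose()
```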