| Graduate Student: | Cheng, Li-Wei (鄭立偉) |
|---|---|
| Thesis Title: | Vision-Based Runway Detection and Tracking for Aircraft Landing in GPS-Denied Environment (無GPS環境下飛機降落階段基於視覺的跑道檢測與追蹤) |
| Advisor: | Lai, Ying-Chih (賴盈誌) |
| Degree: | Master |
| Department: | College of Engineering, Department of Aeronautics & Astronautics |
| Year of Publication: | 2022 |
| Academic Year of Graduation: | 110 |
| Language: | English |
| Number of Pages: | 95 |
| Chinese Keywords: | deep learning, runway detection, image localization, UAV navigation, image-aided localization system |
| English Keywords: | Deep learning, Mask R-CNN, CNN, image localization, GPS-denied environments |
Accurate navigation data has always been a critical factor in automatic landing. Owing to its independence, a monocular camera can be used to reduce the occurrence of navigation system failures, and the runway is an obvious landmark for image localization. Estimating the relative position between the UAV and the runway from runway imagery therefore offers a way to support landing decisions in GPS-denied environments. As deep learning techniques have grown more capable, this thesis applies deep learning to realize UAV position estimation in GPS-denied environments. Deep learning greatly reduces the limitations of runway detection while increasing the feasibility of the image-aided localization system.
For runway detection, the Mask R-CNN model is used to detect the runway and extract its contour, and its accuracy is compared with that of line- and contour-based methods. For image-aided localization, the edge and contour information output by the runway detection stage is used to estimate the displacement of the UAV relative to the runway. A CNN regression model is further employed to increase the robustness of the localization system and resolve the stability problems of previous longitudinal distance estimation, and a filter is applied to the output to smooth the localization solution. The training datasets for the Mask R-CNN and CNN regression models consist mainly of simulation and flight-experiment videos, with which the methods were validated. The results show that the deep learning approach yields more accurate localization while satisfying landing requirements, demonstrating that incorporating deep learning into this problem improves the availability and robustness of the system.
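As a rough, hedged illustration of the detection step described above: the thesis trains Mask R-CNN on its own simulation and flight-experiment footage, but a minimal stand-in using torchvision's COCO-pretrained model shows the shape of the inference. The file name, score threshold, and class handling below are illustrative assumptions, not the thesis's settings.

```python
# Minimal Mask R-CNN inference sketch (assumption: torchvision's COCO-pretrained
# model stands in for the thesis's runway-trained network).
import torch
from torchvision.io import read_image
from torchvision.models.detection import maskrcnn_resnet50_fpn
from torchvision.transforms.functional import convert_image_dtype

model = maskrcnn_resnet50_fpn(weights="DEFAULT")  # COCO weights, not runway weights
model.eval()

frame = convert_image_dtype(read_image("approach_frame.png"), torch.float)  # hypothetical frame
with torch.no_grad():
    pred = model([frame])[0]  # dict with 'boxes', 'labels', 'scores', 'masks'

# Detections are sorted by score; keep the top instance and binarize its soft
# mask. A runway-trained model would instead filter by the runway class id.
if pred["scores"].numel() and pred["scores"][0] > 0.7:
    runway_mask = (pred["masks"][0, 0] > 0.5).numpy()  # H x W boolean mask
```

A runway-trained checkpoint would replace the COCO weights, after which the binary mask feeds the contour-based localization stage described in the English abstract below.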
Navigation data is critical for the successful landing of unmanned aerial vehicles (UAVs), especially in Global Positioning System (GPS)-denied environments. Vision sensors offer a way to mitigate the risk of navigation system failures, as they can provide image localization information from the runway. This thesis presents a deep learning approach to estimate the position of drones in GPS-denied environments. In the runway detection phase, the Mask Region-based Convolutional Neural Network (Mask R-CNN) model is used to identify the runway and generate a mask, which is then used to determine the displacement relative to the runway. In the image-aided localization phase, the slopes of the two longest edges of the runway are used to estimate the drone's height and lateral distance. A Convolutional Neural Network (CNN) regression model is applied to improve the system's reliability and overcome the sensitivity issues of previous longitudinal distance estimation methods. Finally, a filter is used to smooth the relative displacement results. The deep learning approach yields a more accurate localization result that meets the requirements of auto-landing.
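A minimal sketch of the localization stage, assuming OpenCV and PyTorch and using my own hypothetical names (`runway_edge_slopes`, `DistanceRegressor`, `EMAFilter`): the mask's outline is approximated as a polygon, the slopes of its two longest edges stand in for the height and lateral-distance cues, a tiny CNN regression head illustrates the kind of longitudinal distance estimator the abstract mentions, and an exponential moving average stands in for the unspecified smoothing filter.

```python
# Hedged sketch of the localization stage: contour slopes + CNN distance
# regression + output smoothing. Structure and constants are assumptions,
# not the thesis's actual implementation.
import cv2
import numpy as np
import torch
import torch.nn as nn


def runway_edge_slopes(mask: np.ndarray):
    """Slopes (dy/dx) of the two longest edges of the runway mask's outline."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    outline = max(contours, key=cv2.contourArea)
    # A runway projects to roughly a quadrilateral; 2% of the perimeter is a
    # common (assumed) tolerance for polygonal approximation.
    poly = cv2.approxPolyDP(outline, 0.02 * cv2.arcLength(outline, True), True)
    pts = poly.reshape(-1, 2).astype(float)
    edges = [(pts[i], pts[(i + 1) % len(pts)]) for i in range(len(pts))]
    edges.sort(key=lambda e: np.linalg.norm(e[1] - e[0]), reverse=True)
    slopes = []
    for p0, p1 in edges[:2]:
        dx, dy = p1 - p0
        slopes.append(dy / dx if abs(dx) > 1e-9 else float("inf"))
    return slopes


class DistanceRegressor(nn.Module):
    """Tiny CNN regression head (illustrative) mapping a mask crop to distance."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, mask):  # mask: (B, 1, H, W) float tensor
        return self.head(self.features(mask).flatten(1))


class EMAFilter:
    """Exponential moving average standing in for the output-smoothing filter."""

    def __init__(self, alpha: float = 0.3):
        self.alpha, self.state = alpha, None

    def update(self, x: float) -> float:
        self.state = x if self.state is None else self.alpha * x + (1 - self.alpha) * self.state
        return self.state
```

Per frame, the slopes would feed the height and lateral-distance geometry, the regressor would output the longitudinal distance, and a filter of this kind would be applied to each displacement channel before the estimate reaches the landing logic.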
On-campus access: full text available from 2028-03-23.