
Author: Chuang, Chin-Sheng (莊謹聲)
Title: Stereo Vision Assisted Landing System for Fixed-Wing UAVs with Risk-Awareness (具備風險感知的定翼無人機雙眼視覺輔助降落系統)
Advisor: Peng, Chao-Chung (彭兆仲)
Degree: Master
Department: Department of Aeronautics & Astronautics, College of Engineering
Year of Publication: 2023
Graduation Academic Year: 111 (2022–2023)
Language: English
Pages: 111
Keywords: Landing, SLAM, Stereo Vision, Feature Extraction, GMM, Outlier Removal

Abstract:
Current fixed-wing aircraft landing assistance systems often rely on third-party ground equipment such as the Instrument Landing System (ILS) and the Precision Approach Path Indicator (PAPI). These systems are costly, tied to specific airports, and difficult to install on low-payload aircraft, which limits their flexibility. To give commercial light fixed-wing aircraft more landing options and to mitigate the risks of emergency landings on non-standard runways, this thesis proposes a stereo-vision-based landing assistance system. The system builds on ORB-SLAM2, a classic simultaneous localization and mapping (SLAM) algorithm, to provide real-time position and attitude estimates of the aircraft, and it computes an appropriate glide slope from the estimated ground slope to guide the pilot and ensure a safe landing.
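As a concrete illustration of the glide-slope computation described above, the following minimal Python sketch derives a glide-path angle from an estimated ground-plane normal and the SLAM-estimated aircraft position. The function and variable names are hypothetical; the record does not give the thesis's exact formulation.

    import numpy as np

    def glide_slope_deg(aircraft_pos, touchdown_pos, ground_normal):
        # Angle between the approach path and the fitted ground plane:
        # 90 degrees minus the angle between the path and the plane normal.
        n = ground_normal / np.linalg.norm(ground_normal)
        d = touchdown_pos - aircraft_pos          # approach direction
        d = d / np.linalg.norm(d)
        return np.degrees(np.arcsin(abs(d @ n)))

    # Example: 100 m above a level plane, aiming at a touchdown point
    # 500 m ahead, gives the expected atan(100/500) ~= 11.3 degrees.
    print(glide_slope_deg(np.array([0.0, 0.0, 100.0]),
                          np.array([500.0, 0.0, 0.0]),
                          np.array([0.0, 0.0, 1.0])))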
In real-world scenarios, the distribution of image features can be complex, especially during emergency landings on roads surrounded by buildings and other cluttered objects, so extracting the landing target plane from the image is a major challenge in the system design. This study addresses the challenge by employing a Gaussian Mixture Model (GMM) to segment the color image and extract ground points. An Iterative Weighted Plane Fitting (IWPF) algorithm is then introduced to suppress the influence of outliers on the plane-equation estimation. In addition, the system integrates a bird's-eye-view (BEV) image generated by Inverse Perspective Mapping (IPM), whose intuitive presentation is intended to enhance the pilot's risk awareness.
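The two processing steps named above can be sketched as follows. This is a minimal illustration using scikit-learn's GaussianMixture for color clustering and a generic iteratively reweighted least-squares plane fit; the reweighting scheme is a common robust-fitting choice and stands in for, but is not necessarily identical to, the IWPF algorithm developed in the thesis.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def segment_colors(pixels, n_components=4):
        # Cluster pixel colors (Nx3, e.g. RGB or CIELAB) with a GMM;
        # the cluster covering the road surface is taken as "ground".
        gmm = GaussianMixture(n_components=n_components,
                              covariance_type="full", random_state=0)
        return gmm.fit_predict(pixels)

    def fit_plane_irls(points, n_iters=10, eps=1e-6):
        # Fit a plane n.x + d = 0 to Nx3 points; at each iteration the
        # points are re-weighted inversely to their residual, so
        # outliers contribute progressively less to the fit.
        w = np.ones(len(points))
        for _ in range(n_iters):
            centroid = np.average(points, axis=0, weights=w)
            centered = (points - centroid) * np.sqrt(w)[:, None]
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            normal = vt[-1]               # smallest singular direction
            d = -normal @ centroid
            w = 1.0 / (np.abs(points @ normal + d) + eps)
        return normal, d

    # Synthetic check: a noisy z = 0 plane plus gross outliers still
    # yields a normal close to (0, 0, +/-1).
    rng = np.random.default_rng(0)
    plane = rng.uniform(-10, 10, (500, 3))
    plane[:, 2] = 0.05 * rng.standard_normal(500)
    outliers = rng.uniform(-10, 10, (50, 3))
    normal, d = fit_plane_irls(np.vstack([plane, outliers]))
    print(normal)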
To evaluate the system, stereo image datasets are created with the 3D simulation engine Unreal Engine 4 and used to test performance and robustness. The results show that the system successfully achieves landing attitude guidance in emergency landing scenarios, with an average error of less than 1 degree.
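For context on how a sub-degree "average error" of this kind is typically measured, the sketch below compares estimated attitudes against ground truth frame by frame. The arrays and the choice of metric are hypothetical; the record does not state the thesis's exact evaluation procedure.

    import numpy as np

    def mean_angle_error_deg(normals_est, normals_gt):
        # Mean angle, in degrees, between paired unit vectors (Nx3),
        # e.g. estimated vs. ground-truth landing-plane normals.
        a = normals_est / np.linalg.norm(normals_est, axis=1, keepdims=True)
        b = normals_gt / np.linalg.norm(normals_gt, axis=1, keepdims=True)
        cosines = np.clip(np.abs(np.sum(a * b, axis=1)), 0.0, 1.0)
        return float(np.degrees(np.arccos(cosines)).mean())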

Table of Contents:
Abstract (Chinese) i
Abstract ii
Acknowledgements iii
Contents v
List of Figures ix
List of Tables x
List of Algorithms xi
1 Introduction 1
1.1 Research Motivation and Purpose 1
1.2 Research Method 3
2 Related Work Study 6
2.1 Review of V-SLAM Classic Algorithm 6
2.2 Stereo Camera Model 8
2.3 ORB-SLAM2 Framework Review and Discussion 12
3 Simulation Environment Building and Testing 17
3.1 Integrating Unreal Engine and Simulink [1] 17
3.2 Dataset Generating 19
3.3 Evaluation of ORB-SLAM2 [1] 21
4 Landing Area Extraction 27
4.1 Concept and Derivation of GMM 27
4.2 Optimizing GMM Accuracy and Efficiency for Color Image Segmentation 35
4.3 Integration of GMM Color Image Segmentation and V-SLAM 48
5 Landing Attitude Guidance 53
5.1 Outlier Removal Algorithm Review 53
5.2 MLESAC Plane Fitting 63
5.3 Iterative Weighted Plane Fitting 68
5.4 Risk-Awareness Pilot Interface 77
6 Simulation Results 80
6.1 Unreal Engine: Simple Scene 80
6.2 Unreal Engine: Complex Scene 86
7 Conclusion and Future Work 93
7.1 Conclusion 93
7.2 Future Work 94
Reference 95

References:
[1] C.-C. Peng, R. He, and C.-S. Chuang, "Evaluation of ORB-SLAM based stereo vision for the aircraft landing status detection," in IECON 2022 – 48th Annual Conference of the IEEE Industrial Electronics Society, pp. 1–6, 2022.
[2] R. Mur-Artal and J. D. Tardós, "ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras," IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.
[3] D. Schlegel, M. Colosi, and G. Grisetti, "ProSLAM: Graph SLAM from a programmer's perspective," in 2018 IEEE International Conference on Robotics and Automation (ICRA), pp. 3833–3840, 2018.
[4] B. Gao, H. Lang, and J. Ren, "Stereo visual SLAM for autonomous vehicles: A review," in 2020 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 1316–1322, 2020.
[5] Airbus, "Accidents by flight phase," 2022.
[6] National Transportation Safety Board, "Crash during nonprecision instrument approach to landing, ExecuFlight flight 1526, British Aerospace HS 125-700A, N237WR, Akron, Ohio, November 10, 2015," report, 2016.
[7] C. News, "Small plane makes hard landing on Highway 91 in Delta," 2015.
[8] K. Staff, "Update: Plane forced to land due to mechanical failure hits power line, fire officials say," 2022.
[9] Wikipedia, "Instrument landing system," 2022.
[10] C. Mbaocha, E. Ekwueme, O. Nosiri, and N. Chukwuchekwa, "Aircraft visibility improvement with dedicated instrument landing system (ILS)," 2018.
[11] Y. Lu, Z. Xue, G.-S. Xia, and L. Zhang, "A survey on vision-based UAV navigation," Geo-spatial Information Science, vol. 21, no. 1, pp. 21–32, 2018.
[12] A. J. Davison, I. D. Reid, N. D. Molton, and O. Stasse, "MonoSLAM: Real-time single camera SLAM," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 6, pp. 1052–1067, 2007.
[13] G. Klein and D. Murray, "Parallel tracking and mapping for small AR workspaces," in 2007 6th IEEE and ACM International Symposium on Mixed and Augmented Reality, pp. 225–234, 2007.
[14] R. Mur-Artal, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM: A versatile and accurate monocular SLAM system," IEEE Transactions on Robotics, vol. 31, no. 5, pp. 1147–1163, 2015.
[15] J. Engel, T. Schöps, and D. Cremers, "LSD-SLAM: Large-scale direct monocular SLAM," in European Conference on Computer Vision (ECCV), pp. 834–849, Springer, 2014.
[16] J. Engel, V. Koltun, and D. Cremers, "Direct sparse odometry," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 40, no. 3, pp. 611–625, 2018.
[17] C. Forster, M. Pizzoli, and D. Scaramuzza, "SVO: Fast semi-direct monocular visual odometry," in 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 15–22, 2014.
[18] T. Pire, T. Fischer, G. Castro, P. De Cristóforis, J. Civera, and J. Jacobo Berlles, "S-PTAM: Stereo parallel tracking and mapping," Robotics and Autonomous Systems, vol. 93, pp. 27–42, 2017.
[19] R. Wang, M. Schworer, and D. Cremers, "Stereo DSO: Large-scale direct sparse visual odometry with stereo cameras," in Proceedings of the IEEE International Conference on Computer Vision (ICCV), pp. 3903–3911, 2017.
[20] S. Sumikura, M. Shibuya, and K. Sakurada, "OpenVSLAM: A versatile visual SLAM framework," in Proceedings of the 27th ACM International Conference on Multimedia, pp. 2292–2295, 2019.
[21] M. Burri, J. Nikolic, P. Gohl, T. Schneider, J. Rehder, S. Omari, M. Achtelik, and R. Siegwart, "The EuRoC micro aerial vehicle datasets," The International Journal of Robotics Research, vol. 35, no. 10, pp. 1157–1163, 2016.
[22] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," The International Journal of Robotics Research, vol. 32, no. 11, pp. 1231–1237, 2013.
[23] MathWorks, "rectifyStereoImages," 2022.
[24] C. Campos, R. Elvira, J. J. G. Rodríguez, J. M. M. Montiel, and J. D. Tardós, "ORB-SLAM3: An accurate open-source library for visual, visual–inertial, and multimap SLAM," IEEE Transactions on Robotics, vol. 37, no. 6, pp. 1874–1890, 2021.
[25] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in 2011 International Conference on Computer Vision (ICCV), pp. 2564–2571, 2011.
[26] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[27] H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in European Conference on Computer Vision (ECCV), pp. 404–417, Springer, 2006.
[28] E. Rosten and T. Drummond, "Machine learning for high-speed corner detection," in European Conference on Computer Vision (ECCV), pp. 430–443, Springer, 2006.
[29] M. Calonder, V. Lepetit, C. Strecha, and P. Fua, "BRIEF: Binary robust independent elementary features," in European Conference on Computer Vision (ECCV), pp. 778–792, Springer, 2010.
[30] M. Muja and D. G. Lowe, "Fast approximate nearest neighbors with automatic algorithm configuration," in International Conference on Computer Vision Theory and Applications (VISAPP), vol. 1, pp. 331–340, 2009.
[31] MathWorks, "Customize Unreal Engine scenes for automated driving," 2023.
[32] C. Jang, S. Lee, C. Choi, and Y.-K. Kim, "Realtime robust curved lane detection algorithm using Gaussian mixture model," Journal of Institute of Control, Robotics and Systems, vol. 22, no. 1, pp. 1–7, 2016.
[33] H. Bi, H. Tang, G. Yang, H. Shu, and J.-L. Dillenseger, "Accurate image segmentation using Gaussian mixture model with saliency map," Pattern Analysis and Applications, vol. 21, no. 3, pp. 869–878, 2018.
[34] Y. Xing, C. Lv, and D. Cao, Design of Integrated Road Perception and Lane Detection System for Driver Intention Inference, pp. 77–98. Elsevier, 2020.
[35] W. Yang, H. Li, J. Liu, S. Xie, and J. Luo, "A sea-sky-line detection method based on Gaussian mixture models and image texture features," International Journal of Advanced Robotic Systems, vol. 16, no. 6, Article 1729881419892116, 2019.
[36] R. Y. Bakti, I. S. Areni, and A. A. Prayogi, "Vehicle detection and tracking using Gaussian mixture model and Kalman filter," in 2016 International Conference on Computational Intelligence and Cybernetics, pp. 115–119, IEEE, 2016.
[37] pd4u, "RGB color model," 2018.
[38] SharkD, "HSV color solid cylinder," 2015.
[39] H. Everding, "CIELAB color space front view," 2015.
[40] MathWorks, "findpeaks," 2022.
[41] P. H. S. Torr and A. Zisserman, "MLESAC: A new robust estimator with application to estimating image geometry," Computer Vision and Image Understanding, vol. 78, no. 1, pp. 138–156, 2000.
[42] M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Communications of the ACM, vol. 24, no. 6, pp. 381–395, 1981.
[43] MathWorks, "svd," 2022.
[44] H. Wu, "Multi-interference lane detection based on IPM and edge image filtering," in 2022 8th International Conference on Virtual Reality (ICVR), pp. 344–354, 2022.
[45] A. Rangesh and M. M. Trivedi, "No blind spots: Full-surround multi-object tracking for autonomous vehicles using cameras and LiDARs," IEEE Transactions on Intelligent Vehicles, vol. 4, no. 4, pp. 588–599, 2019.

Full-text access: available on campus from 2029-01-07; available off campus from 2029-01-07. The electronic thesis has not yet been authorized for public release; please consult the library catalog for the print copy.