
Graduate Student: Huang, Hao-Hsuan (黃浩旋)
Thesis Title: Vision-based Obstacle Avoidance System for Unmanned Quadrotors (無人四旋翼機視覺避障策略研究)
Advisor: Chen, Chieh-Li (陳介力)
Degree: Master
Department: College of Engineering - Department of Aeronautics & Astronautics
Year of Publication: 2016
Academic Year of Graduation: 104 (ROC calendar; 2015-2016)
Language: Chinese
Number of Pages: 104
Chinese Keywords: 視覺避障系統 (vision-based obstacle avoidance system), 三維影像資料 (3D image data), 碰撞點估測 (collision point estimation), 模糊系統 (fuzzy system)
English Keywords: Vision-based obstacle avoidance system, 3D image information, Collision point, Fuzzy decision making system
    The main objective of this thesis is to develop, on the basis of 3D image information, a low-altitude automatic obstacle avoidance system for unmanned vehicles. The ultimate goal is to use visual information to detect obstacles ahead of the vehicle, estimate their future collision positions from the spatio-temporal sequence of obstacle observations, and plan avoidance flight commands accordingly.
    The system uses a depth camera to detect whether a threatening object exists in front of the vehicle and, from the change in the object's position between image frames, estimates the object's future collision point on the vehicle. The collision point is the key criterion for deciding whether to evade: if an object's estimated collision point lies within the vehicle's spatial extent, the object is a threat and an avoidance command must be planned; otherwise, no action is taken.
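The frame-to-frame collision-point estimation described above can be sketched as a constant-velocity extrapolation of the tracked object's centroid in camera coordinates. This is a minimal illustration, not the thesis's actual estimator; the coordinate convention (z = distance ahead of the vehicle) and the function name are assumptions:

```python
import numpy as np

def estimate_collision_point(positions, times):
    """Estimate where a tracked object will cross the vehicle's plane (z = 0).

    positions: (N, 3) list/array of object centroids in camera coordinates
               (meters), with z the distance in front of the vehicle.
    times:     (N,) timestamps in seconds.
    Returns ((x, y), time_to_collision) or None if the object is not
    approaching.
    """
    positions = np.asarray(positions, dtype=float)
    times = np.asarray(times, dtype=float)
    # Fit a constant-velocity model p(t) = v * t + p0 to each coordinate.
    A = np.vstack([times, np.ones_like(times)]).T
    (vx, x0), (vy, y0), (vz, z0) = (
        np.linalg.lstsq(A, positions[:, i], rcond=None)[0] for i in range(3)
    )
    if vz >= 0:          # object receding or stationary: no collision
        return None
    t_hit = -z0 / vz     # time at which z(t) reaches the vehicle plane
    ttc = t_hit - times[-1]
    return (x0 + vx * t_hit, y0 + vy * t_hit), ttc
```

An object drifting past the vehicle yields a collision point outside the vehicle's extent, which is exactly the "threat or not" test described above.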
    The avoidance algorithm takes the estimated time to collision and the future collision position as inputs to a fuzzy system, which plans the vehicle's avoidance flight commands in real time according to each object's threat level.
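A fuzzy decision rule of this kind might look like the sketch below: triangular membership functions fuzzify time-to-collision and lateral collision offset, and a weighted-average defuzzification produces a roll command. All membership breakpoints, rule weights, and names here are invented for illustration and are not the rule base used in the thesis:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a to peak b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def avoidance_roll_command(ttc, offset):
    """Map time-to-collision (s) and lateral collision offset (m, + = right)
    to a roll command in [-1, 1], where + rolls right, away from the threat."""
    urgency = {            # fuzzified urgency of the time-to-collision
        "urgent":   tri(ttc, -1.0, 0.0, 2.0),
        "moderate": tri(ttc, 1.0, 2.5, 4.0),
        "safe":     tri(ttc, 3.0, 6.0, 9.0),
    }
    side = {               # fuzzified side of the predicted collision point
        "left":  tri(offset, -1.0, -0.5, 0.0),
        "right": tri(offset, 0.0, 0.5, 1.0),
    }
    # Rule firing strengths (min implements AND) and their output levels.
    rules = [
        (min(urgency["urgent"], side["left"]),   +1.0),  # dodge hard right
        (min(urgency["urgent"], side["right"]),  -1.0),  # dodge hard left
        (min(urgency["moderate"], side["left"]),  +0.4),
        (min(urgency["moderate"], side["right"]), -0.4),
        (urgency["safe"], 0.0),                          # no action needed
    ]
    total = sum(w for w, _ in rules)
    if total == 0:
        return 0.0
    # Weighted-average (centroid-style) defuzzification.
    return sum(w * out for w, out in rules) / total
```

The graded output is the point of using fuzzy logic here: a distant or slowly approaching object produces a gentle correction, while an imminent collision saturates the command.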
    Finally, the avoidance system was mounted on a parallel three-axis motion mechanism to simulate the avoidance behavior of an unmanned aerial vehicle and verify the system's avoidance function.

    The main purpose of this thesis is to develop a vision-based obstacle avoidance (VBOA) simulation system for unmanned quadrotors on low-altitude missions. The ultimate goal is to use visual information to carry out obstacle detection and to provide avoidance instructions for unmanned quadrotors. The VBOA system uses a camera to capture image information of the external environment, from which 2D and 3D image information is generated in real time. This information provides the obstacle's moving direction and speed so that the quadrotor can react accordingly through pitch and roll motions, and a fuzzy decision making system generates the commands to avoid obstacles. The main functions of the VBOA system consist of visual obstacle identification, collision zone estimation, and command generation for the avoidance phase.
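The detection stage outlined in the abstract (and detailed in Chapter 3: range-limited binarization, connected-component labeling, area-threshold filtering, centroid and nearest-distance extraction) can be sketched with a numpy-only flood fill. The thesis itself processes Kinect-style depth frames; the thresholds and function name below are illustrative assumptions:

```python
import numpy as np
from collections import deque

def detect_obstacles(depth, max_range=4.0, min_area=20):
    """Find obstacle candidates in a depth frame (meters, 0 = no reading).

    Pixels closer than max_range are foreground; 4-connected components
    smaller than min_area pixels are discarded as noise. Returns a list of
    (centroid_row, centroid_col, nearest_distance) per obstacle.
    """
    fg = (depth > 0) & (depth < max_range)      # binarization by range limit
    labels = np.zeros(depth.shape, dtype=int)
    obstacles, next_label = [], 1
    for seed in zip(*np.nonzero(fg)):
        if labels[seed]:
            continue
        # Flood-fill one 4-connected component with BFS.
        queue, pixels = deque([seed]), []
        labels[seed] = next_label
        while queue:
            r, c = queue.popleft()
            pixels.append((r, c))
            for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
                if (0 <= nr < depth.shape[0] and 0 <= nc < depth.shape[1]
                        and fg[nr, nc] and not labels[nr, nc]):
                    labels[nr, nc] = next_label
                    queue.append((nr, nc))
        next_label += 1
        if len(pixels) < min_area:              # area-threshold filter
            continue
        rows, cols = zip(*pixels)
        nearest = min(depth[r, c] for r, c in pixels)
        obstacles.append((sum(rows) / len(rows), sum(cols) / len(cols), nearest))
    return obstacles
```

Each surviving component's centroid and nearest distance are exactly the quantities the collision-point estimator and the fuzzy planner consume downstream.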

    Chinese Abstract I
    Acknowledgements IX
    Table of Contents XI
    List of Tables XIII
    List of Figures XIV
    Chapter 1 Introduction 1
        1.1 Preface 1
        1.2 Literature Review 2
        1.3 Thesis Outline 5
    Chapter 2 Depth Image Processing and Fuzzy Theory 6
        2.1 Depth Image Processing 6
            2.1.1 Digital Image Processing 6
            2.1.2 Intensity Transformation and Spatial Filtering 7
            2.1.3 Depth Image Information 11
        2.2 Fuzzy System Architecture 15
    Chapter 3 Obstacle Position Detection and Dynamic Collision Point Estimation 20
        3.1 Depth Image Processing 21
            3.1.1 Depth Image Acquisition 21
            3.1.2 Median Filter 22
        3.2 Object Detection and Obstacle Determination 25
            3.2.1 Detection Range Limitation 25
            3.2.2 Image Binarization and Contour Detection 26
            3.2.3 Connected-Component Labeling and Area-Threshold Filtering 28
            3.2.4 Object Image Centroid and Nearest-Distance Detection 31
        3.3 Dynamic Object Collision Point Estimation 34
            3.3.1 Dynamic Object Tracking 35
            3.3.2 Dynamic Collision Point Estimation 37
    Chapter 4 Fuzzy Obstacle Avoidance Strategy Planning 44
        4.1 Fuzzy System Architecture Design 44
        4.2 Bidirectional Fuzzy Avoidance Command Planning 46
    Chapter 5 Experiments and Discussion of Results 58
        5.1 Experimental System Overview 58
        5.2 Transformation and Calibration between Image Space and Experimental Space 60
            5.2.1 Calibration between Sensor-Measured Distance and Actual Distance 60
            5.2.2 Coordinate Transformation between Image Coordinates and the Sensor's Inertial Coordinates 64
            5.2.3 Minimum Avoidable Object and Area-Filter Threshold Experiment 70
        5.3 Obstacle Detection and Environmental Influence Tests 72
        5.4 Static-Environment Avoidance Experiments 78
            5.4.1 Collision Point Estimation Experiment 78
            5.4.2 Static-Obstacle Simulated Avoidance Experiment 81
        5.5 Dynamic-Obstacle Avoidance Experiments 86
            5.5.1 Hardware Overview 86
            5.5.2 System Integration 87
            5.5.3 Experimental Results 89
    Chapter 6 Conclusions and Future Work 97
        6.1 Conclusions 97
        6.2 Future Work 99
    References 101


    Full-text availability: on campus, public from 2018-07-25; off campus, not available.
    The electronic thesis has not been authorized for public release; for the print copy, please consult the library catalog.