
Graduate Student: Hsieh, Ping-Hsun (謝秉勳)
Thesis Title: Vision Algorithm for Target Recognition and Pose Estimation (地面目標物識別及相對定位視覺演算法)
Advisor: Lai, Wei-Hsiang (賴維祥)
Degree: Master
Department: Department of Aeronautics & Astronautics, College of Engineering
Publication Year: 2018
Graduation Academic Year: 106
Language: Chinese
Number of Pages: 68
Chinese Keywords: UAV, Target Detection, Computer Vision, Visual Navigation, Landing
Foreign Keywords: UAV, Vision-based Detection, Computer Vision, Autonomous Landing
    This study addresses precise localization of a ground target. SURF feature matching is used to detect the ground target. A target coordinate system is defined, and the corresponding points in the image are projected into the camera coordinate system through the intrinsic parameters; at the same time, RANSAC, using a homography as its model, rejects unreliable correspondences, and the homography transformation matrix between the coordinate systems is computed. Finally, the homography is decomposed to obtain the six-degree-of-freedom relationship (translation and rotation) of the camera relative to the ground target in the target coordinate system. As long as there are enough correspondences between the image and the target, the target can be recognized and its pose relative to the camera can be computed, regardless of surrounding clutter. The results are evaluated with a UAV carrying a camera and a time-of-flight (ToF) rangefinder for altitude measurement. Flight altitude and camera resolution directly affect the matching results; when the resolution is sufficient and enough feature points are obtained, reducing the number of computation octaves increases computational efficiency without degrading the homography estimation. In the future, the method can be extended to autonomous UAV landing, target tracking, target localization for payload-dropping missions, and the design of landing pads for multiple UAVs.
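
    The detection-and-matching stage described above (SURF correspondences, outlier rejection with RANSAC on a homography model) can be illustrated with a short OpenCV sketch. This is a minimal sketch under assumed settings, not the thesis implementation: the function name, the Hessian threshold, the octave count, the ratio-test value, and the RANSAC reprojection threshold are all illustrative.

    import cv2
    import numpy as np

    def match_and_estimate_homography(target_img, frame_img,
                                      hessian_threshold=400, n_octaves=4):
        """Detect SURF features in a target template and a camera frame (grayscale),
        match them, and estimate the target-to-frame homography with RANSAC."""
        # SURF lives in the opencv-contrib xfeatures2d module.
        surf = cv2.xfeatures2d.SURF_create(hessianThreshold=hessian_threshold,
                                           nOctaves=n_octaves)
        kp_t, des_t = surf.detectAndCompute(target_img, None)
        kp_f, des_f = surf.detectAndCompute(frame_img, None)
        if des_t is None or des_f is None:
            return None, None

        # FLANN-based (fast indexed) matching with Lowe's ratio test.
        flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
        matches = flann.knnMatch(des_t, des_f, k=2)
        good = [m[0] for m in matches
                if len(m) == 2 and m[0].distance < 0.7 * m[1].distance]
        if len(good) < 4:  # a homography needs at least 4 correspondences
            return None, None

        src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
        dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

        # RANSAC with a homography model rejects unreliable correspondences.
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
        return H, inlier_mask

    Lowering the nOctaves argument in this sketch corresponds to the octave reduction discussed above as a way to improve computational efficiency when resolution and feature count are sufficient.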

    SURF is used to find correspondences between the image captured by the camera and the ground target. No matter how the environment changes, the target can be recognized successfully as long as enough correspondences are matched. The correspondences are projected from the image coordinate system to the camera coordinate system using the intrinsic matrix, and RANSAC with a homography transform as its model is used to discard outliers. Once the geometric relationship between the two coordinate systems has been verified through the homography matrix, the extrinsic parameters are obtained by decomposing it, which gives the relative position and orientation between the camera and the target. A UAV equipped with a downward-facing camera and a time-of-flight rangefinder for height measurement is flown at different altitudes to verify the algorithm. The number of correspondences is directly influenced by the camera resolution and the flight altitude. At an appropriate camera resolution and flight altitude, computational efficiency can be increased robustly by reducing the number of octaves.
    In the future, the vision algorithm can be applied to autonomous UAV landing, target tracking, and precise goods delivery.
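
    The pose-recovery step can likewise be sketched with OpenCV's homography decomposition. The intrinsic matrix K below is a placeholder (real values come from camera calibration), and cv2.decomposeHomographyMat returns up to four candidate (R, t, n) solutions, from which the physically consistent one (positive depth, plausible plane normal) still has to be selected.

    import cv2
    import numpy as np

    # Placeholder intrinsic matrix from camera calibration (illustrative values only).
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    def decompose_pose(H, K):
        """Decompose a homography into candidate rotations R, translations t
        (defined only up to scale) and plane normals n."""
        n_solutions, Rs, ts, normals = cv2.decomposeHomographyMat(H, K)
        return Rs, ts, normals

    The translation recovered this way is defined only up to scale; in the flight tests described above, the absolute scale would presumably come from the known target dimensions or the ToF altitude measurement.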

    Abstract (Chinese)
    Vision Algorithm for Target Recognition and Pose Estimation (English Abstract)
    Acknowledgements
    Table of Contents
    List of Tables
    List of Figures
    Nomenclature
    Chapter 1  Introduction
        1.1  Research Background
        1.2  Research Motivation
        1.3  Literature Review
        1.4  Research Methods and Objectives
        1.5  Research Contributions
        1.6  Thesis Organization
    Chapter 2  Computer Vision Algorithms
        2.1  Color Models
            2.1.1  RGB Color Model [26]
            2.1.2  Grayscale
        2.2  Camera Model
            2.2.1  Intrinsic Parameters
            2.2.2  Extrinsic Parameters
            2.2.3  Homogeneous Coordinates
        2.3  Camera Calibration
            2.3.1  Lens Distortion
            2.3.2  Camera Calibration
        2.4  Speeded-Up Robust Features (SURF)
            2.4.1  Feature Point Detection
            2.4.2  Feature Descriptors and Matching
            2.4.3  Fast Indexed Matching
        2.5  Homography Transformation
            2.5.1  What Is a Homography?
            2.5.2  Fundamentals of the DLT Algorithm
        2.6  Verification of Results
    Chapter 3  Experimental Equipment
        3.1  Vision System Equipment
        3.2  Flight Vehicle Equipment
    Chapter 4  Experimental Methods and Tests
        4.1  Camera Calibration
        4.2  SURF Verification
        4.3  Indoor Tests
        4.4  Flight Tests
        4.5  Discussion of SURF Applications
        4.6  Verification of Extrinsic Parameters
    Chapter 5  Conclusions and Future Work
        5.1  Conclusions
        5.2  Future Work
    References

    [1] https://www.faa.gov/uas/
    [2] https://www.nasa.gov/
    [3] https://www.dji.com/zh-tw
    [4] http://www.goldmansachs.com/our-thinking/technology-driving-innovation/drones/index.html
    [5] Hrabar, S., Sukhatme, G. S., Corke, P., Usher, K., and Roberts, J., "Combined Optic-flow and Stereo-based Navigation of Urban Canyons for a UAV," Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2005), pp. 3309-3316, 2005.
    [6] Wendel, J., Meister, O., Schlaile, C., and Trommer, G. F., "An Integrated GPS/MEMS-IMU Navigation System for an Autonomous Helicopter," Aerospace Science and Technology, vol. 10, no. 6, pp. 527-533, 2006.
    [7] Tomic, T., Schmid, K., Lutz, P., Domel, A., Kassecker, M., Mair, E., Grixa, I. L., Ruess, F., Suppa, M., and Burschka, D., "Autonomous UAV: Research Platform for Indoor and Outdoor Urban Search and Rescue," IEEE Robotics & Automation Magazine, vol. 19, no. 3, pp. 46-56, 2012.
    [8] Lange, S., Sünderhauf, N., Neubert, P., Drews, S., and Protzel, P., "Autonomous Corridor Flight of a UAV Using a Low-cost and Light-weight RGB-D Camera," in Advances in Autonomous Mini Robots, Proc. 6th AMiRE Symposium, pp. 183-192, 2011.
    [9] García Carrillo, L. R., Dzul, A., Lozano, R., and Pégard, C., "Combining Stereo Vision and Inertial Navigation System for a Quad-Rotor UAV," Journal of Intelligent & Robotic Systems, 2011.
    [10] Caballero, F., Merino, L., Ferruz, J., and Ollero, A., "Vision-based Odometry and SLAM for Medium and High Altitude Flying UAVs," Journal of Intelligent & Robotic Systems, vol. 54, pp. 137-161, 2009.
    [11] Sharp, C., Shakernia, O., and Sastry, S., "A Vision System for Landing an Unmanned Aerial Vehicle," Proc. IEEE International Conference on Robotics and Automation, pp. 1720-1727, 2001.
    [12] Lange, S., Sünderhauf, N., and Protzel, P., "Autonomous Landing for a Multirotor UAV Using Vision," Workshop Proceedings of the SIMPAR International Conference on Simulation, Modeling and Programming for Autonomous Robots, pp. 482-491, 2008.
    [13] Xu, G., Zhang, Y., Cheng, S., Ji, Y., and Tian, Y., "Research on Computer Vision-based for UAV Autonomous Landing on a Ship," Pattern Recognition Letters, vol. 30, no. 6, pp. 600-605, 2009.
    [14] Wenzel, K., Masselli, A., and Zell, A., "Automatic Take Off, Tracking and Landing of a Miniature UAV on a Moving Carrier Vehicle," Journal of Intelligent & Robotic Systems, vol. 61, pp. 221-238, 2011.
    [15] Lee, H., Jung, S., and Shim, D. H., "Vision-based UAV Landing on the Moving Vehicle," International Conference on Unmanned Aircraft Systems, 2016.
    [16] Chen, X., Phang, S. K., Shan, M., and Chen, B. M., "System Integration of a Vision-guided UAV for Autonomous Landing on Moving Platform," in IEEE International Conference on Control and Automation (ICCA), pp. 761-766, 2016.
    [17] Falanga, D., Zanchettin, A., Simovic, A., Delmerico, J., and Scaramuzza, D., "Vision-based Autonomous Quadrotor Landing on a Moving Platform," Proc. IEEE International Symposium on Safety, Security and Rescue Robotics, pp. 200-207, 2017.
    [18] Saripalli, S., and Sukhatme, G. S., "Landing on a Mobile Target Using an Autonomous Helicopter," in Proceedings of the International Conference on Field and Service Robotics, 2003.
    [19] Sanchez-Lopez, J. L., Pestana, J., Saripalli, S., and Campoy, P., "An Approach Toward Visual Autonomous Ship Board Landing of a VTOL UAV," Journal of Intelligent & Robotic Systems, vol. 74, no. 1-2, pp. 113-127, 2014.
    [20] Lin, S., Garratt, M. A., and Lambert, A. J., "Monocular Vision-based Real-time Target Recognition and Tracking for Autonomously Landing an UAV in a Cluttered Shipboard Environment," Autonomous Robots, pp. 1-21, 2016.
    [21] Yuan, H., Xiao, C., Zhang, F., Shi, C., Ye, H., Xiu, S., Zhou, C., and Li, Q., "Feature Recognition for Initial Pose Estimation to Enhance Vision-based UAV Relative Navigation," International Ocean and Polar Engineering Conference, 2017.
    [22] Zhao, Y., and Pei, H., "An Improved Vision-based Algorithm for Unmanned Aerial Vehicles Autonomous Landing," Physics Procedia, vol. 33, pp. 935-941, 2012.
    [23] Cesetti, A., Frontoni, E., Mancini, A., Zingaretti, P., and Longhi, S., "A Vision-Based Guidance System for UAV Navigation and Safe Landing Using Natural Landmarks," Journal of Intelligent & Robotic Systems, vol. 57, no. 1-4, pp. 233-257, 2010.
    [24] Hhien, Y. C., "Visual Navigation Algorithm Development and Simulation," Master's thesis, Institute of Aeronautics and Astronautics, National Cheng Kung University, 2013.
    [25] Bay, H., Tuytelaars, T., and Van Gool, L., "SURF: Speeded Up Robust Features," in European Conference on Computer Vision, 2006.
    [26] Gonzalez, R. C., and Woods, R. E., Digital Image Processing, Prentice Hall, pp. 424-428, 2007.
    [27] Zhang, Z., "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 11, pp. 1330-1334, 2000.
    [28] Kaehler, A., and Bradski, G., Learning OpenCV, O'Reilly Media, Sebastopol, 2008.
    [29] Lowe, D. G., "Distinctive Image Features from Scale-Invariant Keypoints," International Journal of Computer Vision, 2004.
    [30] Szeliski, R., Computer Vision: Algorithms and Applications, 2010.
    [31] Hartley, R., and Zisserman, A., Multiple View Geometry in Computer Vision, 2000.
    [32] Malis, E., and Vargas, M., "Deeper Understanding of the Homography Decomposition for Vision-based Control," Technical Report, 2007.

    Full-text access: on campus, available immediately; off campus, available immediately.