
Student: Lee, Ken-Yen (李庚諺)
Thesis Title: Navigation Using SLAM Vision Technology (影像定位與同步建圖技術於導航系統之應用)
Advisor: Tarn, Jiun-Haur (譚俊豪)
Degree: Master
Department: Department of Aeronautics & Astronautics, College of Engineering
Year of Publication: 2012
Graduation Academic Year: 100 (ROC calendar, i.e., 2011-2012)
Language: Chinese
Number of Pages: 77
Chinese Keywords: real-time localization, visual system, feature tracking
English Keywords: SLAM, Visual System, Feature Tracking
  • Research Topic: 影像定位與同步建圖技術於導航系統之應用 (Navigation Using SLAM Vision Technology)
    Student: Lee, Ken-Yen
    Advisor: Tarn, Jiun-Haur

    This thesis studies a vision-based Simultaneous Localization and Mapping (SLAM) algorithm and applies it to aid the navigation of unmanned vehicles, using only a single camera to realize the SLAM algorithm. As the images change with the vehicle's motion, the camera provides predicted data such as the system's attitude and position, and an Extended Kalman Filter is used to estimate the relative positions between the vehicle and the landmarks, incrementally building a robust map of stationary landmarks. As the feature points in the map are measured repeatedly, the Gaussian uncertainty of the landmark distribution keeps converging toward a definite value, and the map can then provide localization and guidance information for the system. When the Global Positioning System (GPS) signal is blocked or unstable, this algorithm can take over as the vehicle's primary source of positioning information; conversely, when the GPS signal is stable, it can be fused with the camera measurements, and the algorithm switches to a supporting role so that more accurate information is obtained.

    The algorithm is first verified in simulation to confirm its reliability, predicting the landmark positions and the camera's relative position and attitude from a sequence of images in a database. In the experiments, the camera captures images at 30 Hz; once the processor receives them it runs SLAM, while a synchronized visualization interface written with the OpenGL library displays the results, and the OpenCV library is used for the image-processing steps of feature detection (Harris corner detection) and tracking (pyramidal Lucas-Kanade feature tracking). The real-time tests are divided into indoor and outdoor parts, and the results are compared with and analyzed against data obtained from GPS.
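    As an illustration of the feature-detection and tracking step named above, the following is a minimal sketch (not the thesis's actual program) of Harris corner detection and pyramidal Lucas-Kanade tracking with the OpenCV C++ API; the camera index, corner count, re-detection threshold, window size, and pyramid depth are illustrative assumptions rather than values from the thesis.

```cpp
// Minimal sketch of the feature pipeline described in the abstract:
// Harris corner detection plus pyramidal Lucas-Kanade tracking with OpenCV.
// All numeric parameters are illustrative assumptions.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    cv::VideoCapture cap(0);            // assumed camera index; the thesis captures at 30 Hz
    cv::Mat frame, gray, prevGray;
    std::vector<cv::Point2f> prevPts, nextPts;

    while (cap.read(frame)) {
        cv::cvtColor(frame, gray, cv::COLOR_BGR2GRAY);

        if (prevGray.empty() || prevPts.size() < 50) {
            // (Re)detect Harris corners as candidate landmarks.
            cv::goodFeaturesToTrack(gray, prevPts, 100, 0.01, 10,
                                    cv::Mat(), 3, /*useHarrisDetector=*/true, 0.04);
        } else {
            std::vector<unsigned char> status;
            std::vector<float> err;
            // Track existing features with pyramidal Lucas-Kanade optical flow.
            cv::calcOpticalFlowPyrLK(prevGray, gray, prevPts, nextPts,
                                     status, err, cv::Size(21, 21), 3);
            // Keep only successfully tracked points; in the thesis these
            // measurements would feed the EKF update.
            std::vector<cv::Point2f> kept;
            for (std::size_t i = 0; i < status.size(); ++i)
                if (status[i]) kept.push_back(nextPts[i]);
            prevPts = kept;
        }
        prevGray = gray.clone();
    }
    return 0;
}
```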

    Keywords: real-time localization, visual system, feature tracking

    Subject: Navigation Using SLAM Vision Technology
    Student: Ken-Yen Lee
    Advisor: Jiun-Haur Tarn

    This thesis investigates a solution to the navigation problem in mobile robotics by adopting the Simultaneous Localization and Mapping (SLAM) algorithm. The algorithm uses only a single camera without odometry information. Estimates of the vehicle motion and the landmarks' bearings are computed within a standard Extended Kalman Filter framework. For prediction, IMU information is used to propagate the nonlinear vehicle kinematic equations, and the innovations are obtained from the differences between the predictions and the measured landmark bearings. The proposed SLAM algorithm can therefore serve as a navigation solution when GPS service is degraded or temporarily unavailable. For a real-time implementation of the adopted MonoSLAM, we put together a C++ program with a graphical user interface, in which the OpenGL library is used to render graphical objects. The Harris corner detection and Lucas-Kanade feature tracking algorithms from the OpenCV library are also integrated to maintain correct landmark correspondences as the vehicle moves through the scene. The proposed SLAM program is first simulated on a desktop personal computer, then tested in an open room, and finally in an outdoor environment. For the outdoor tests, the vehicle trajectories from the SLAM algorithm are compared with the on-board GPS information and perform well.
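    For reference, the prediction and innovation steps named above follow the standard EKF recursion; the sketch below uses generic symbols (state estimate x̂, covariance P, process model f with Jacobian F, measurement model h with Jacobian H) rather than the thesis's specific state definitions.

```latex
% Standard EKF recursion (generic notation, not the thesis's exact symbols)
\begin{align*}
\text{Prediction:}\quad
  \hat{x}_{k|k-1} &= f(\hat{x}_{k-1|k-1}, u_k), &
  P_{k|k-1} &= F_k P_{k-1|k-1} F_k^{\top} + Q_k \\
\text{Innovation:}\quad
  \nu_k &= z_k - h(\hat{x}_{k|k-1}), &
  S_k &= H_k P_{k|k-1} H_k^{\top} + R_k \\
\text{Update:}\quad
  K_k &= P_{k|k-1} H_k^{\top} S_k^{-1}, &
  \hat{x}_{k|k} &= \hat{x}_{k|k-1} + K_k \nu_k, \quad
  P_{k|k} = (I - K_k H_k) P_{k|k-1}
\end{align*}
```

    Here u_k stands for the IMU-driven input to the kinematic model, z_k for the measured landmark bearing, and Q_k, R_k for the process and measurement noise covariances.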

    Keywords: SLAM, Visual System, Feature Tracking.

    Table of Contents:
    Abstract (Chinese) I
    Abstract (English) II
    Acknowledgements III
    List of Tables VII
    List of Figures VIII
    Chapter 1  Introduction 1
      1.1  Preface 1
      1.2  Research Objectives and Motivation 1
      1.3  Literature Review 3
      1.4  Thesis Organization 4
    Chapter 2  Fundamentals of Image Processing 5
      2.1  Ideal Pinhole Model 5
      2.2  Distortion Model 8
        2.2.1  Radial Distortion 8
        2.2.2  Tangential Distortion 9
        2.2.3  Complete Distortion Model 10
      2.3  Camera Calibration and Intrinsic Parameter Estimation 12
      2.4  Feature Detection: Harris Corner Detection 14
      2.5  Feature Tracking 16
        2.5.1  Correlation-Based Feature Matching 17
        2.5.2  Lucas-Kanade Optical Flow Tracking Algorithm 19
      2.6  Chapter Summary 22
    Chapter 3  Visual Localization and Simultaneous Mapping with a Single Camera 23
      3.1  Filter Analysis 24
        3.1.1  Kalman Filter (KF) 24
        3.1.2  Extended Kalman Filter (EKF) 26
        3.1.3  Filter Initialization 27
      3.2  Navigation Equations 28
      3.3  System States and Notation 33
        3.3.1  The Extended Kalman Filter Applied to the SLAM System 33
        3.3.2  Camera State Representation 34
        3.3.3  Inverse Depth Parameterization 35
      3.4  Nonlinear Process Model 38
        3.4.1  Kinematic Model and State Transition Function 38
        3.4.2  Motion and State Prediction 39
      3.5  Nonlinear Measurement Model 45
        3.5.1  Relation Between the Measurement Vector and Feature Positions 45
        3.5.2  State Update After Measurement 47
        3.5.3  Data Association 50
      3.6  Map Management 52
        3.6.1  Feature Initialization 52
        3.6.2  Feature Deletion 55
      3.7  Chapter Summary 56
    Chapter 4  Experimental Results and Simulation Analysis 57
      4.1  Hardware Overview 57
      4.2  Simulation Analysis 58
        4.2.1  Simulation Verification of Inverse Depth Parameterization 58
        4.2.2  System State Prediction Simulation 59
      4.3  Experimental Results 61
        4.3.1  Indoor: Tracking Known Landmarks 61
        4.3.2  Indoor: Tracking Unknown Landmarks and Straight-Line Trajectory Test 62
        4.3.3  Indoor: Cornering Trajectory Test 66
        4.3.4  Outdoor Test 69
    Chapter 5  Conclusions and Future Work 71
      5.1  Conclusions 71
      5.2  Future Work 72
    References 74

    Full text available on campus: 2013-07-26
    Full text available off campus: 2014-07-26