
Student: Chen, Wei-Liang (陳威良)
Title: On Vision-Based Indoor Mobile Robot Navigation: A Vanishing Point-Based Approach (基於消失點之室內自走車視覺導航研究)
Advisor: Cheng, Ming-Yang (鄭銘揚)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2011
Academic Year of Graduation: 99 (ROC calendar, 2010-2011)
Language: Chinese
Pages: 136
Chinese Keywords: 室內視覺導航, 消失點, 特徵點, 影像座標誤差補償器
English Keywords: indoor mobile robot navigation, vanishing point, feature point, image pixel error compensation
Abstract:

This thesis develops a vision-based navigation system that allows an indoor autonomous mobile robot to travel along the corridors of ordinary buildings. Corridors are among the most common structures in indoor environments, and sensing the surroundings is the usual basis for navigating them. Although equipping the robot with multiple sensors yields more robust navigation, it also raises hardware cost. For this reason, the thesis uses a single camera, mounted at the front of the mobile robot, as the only environmental sensor. This configuration, however, is sensitive to lighting changes and requires complex 3D geometric computation on the image. To address these problems, the thesis applies image processing techniques together with region segmentation to cope with lighting changes during straight-line travel, and uses computer vision techniques to compute the vanishing point, which serves as the guidance direction and avoids explicit 3D geometric computation. For localization at corridor corners, the system additionally employs feature point matching. In the controller design, the thesis proposes an image pixel error compensator that effectively eliminates the image pixel error, so that the robot's path converges quickly to the navigation direction. Finally, navigation experiments are carried out with the developed vision system and the proposed compensator, and compared against traditional bang-bang control and look-up table control. The experimental results show that the navigation system can indeed perform indoor autonomous navigation successfully, and that the proposed image pixel error compensator achieves good control performance.
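The straight-line guidance step summarized above (edge detection, Hough transform, line screening, vanishing point extraction; Sections 2.3-2.4 of the thesis) can be illustrated with a minimal Python/OpenCV sketch. This is an assumed reconstruction for illustration only, not the thesis's implementation: the Canny thresholds, the angle screening rule, and the least-squares intersection are all assumptions.

```python
import cv2
import numpy as np

def estimate_vanishing_point(gray):
    """Estimate the corridor vanishing point from a grayscale frame."""
    edges = cv2.Canny(gray, 50, 150)
    # Standard Hough transform: each detected line is returned as (rho, theta).
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)
    if lines is None:
        return None
    rows, rhs = [], []
    for rho, theta in lines[:, 0]:
        # Screening rule (in the spirit of Sec. 2.3.3, values assumed): drop
        # lines that are nearly vertical or nearly horizontal, keeping the
        # oblique corridor boundaries that converge toward the vanishing point.
        deg = np.degrees(theta)
        if deg < 20 or deg > 160 or abs(deg - 90) < 20:
            continue
        # Normal form of a line: x*cos(theta) + y*sin(theta) = rho.
        rows.append([np.cos(theta), np.sin(theta)])
        rhs.append(rho)
    if len(rows) < 2:
        return None
    # Least-squares intersection of the retained lines = vanishing point.
    sol, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return float(sol[0]), float(sol[1])
```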

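Corner localization (Chapter 3) matches the current view against stored scene templates using SIFT/SURF feature points. A minimal sketch of such matching with Lowe's ratio test follows; it is an assumed reconstruction using OpenCV's SIFT implementation, not the thesis code, and the function name and ratio value are illustrative.

```python
import cv2

def count_template_matches(template_gray, frame_gray, ratio=0.7):
    """Count good SIFT matches between a stored corner template and a frame."""
    sift = cv2.SIFT_create()
    _, des_t = sift.detectAndCompute(template_gray, None)
    _, des_f = sift.detectAndCompute(frame_gray, None)
    if des_t is None or des_f is None:
        return 0
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = 0
    # Lowe's ratio test: accept a correspondence only if the best match is
    # clearly closer than the second best, suppressing ambiguous matches.
    for pair in matcher.knnMatch(des_t, des_f, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good += 1
    return good
```

A corner would be declared when the match count against its template exceeds a tuned threshold; the threshold and the template set are scene-specific and are assumptions here.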

Table of Contents:

Chinese Abstract
Abstract
Acknowledgments
Table of Contents
List of Tables
List of Figures
Chapter 1: Introduction
  1.1 Preface
  1.2 Research Motivation and Objectives
  1.3 Literature Review
  1.4 Thesis Organization
Chapter 2: Vanishing Points and Vanishing Lines for Straight-Line Travel
  2.1 Preface
  2.2 Camera Calibration
    2.2.1 Camera Perspective Projection Model
    2.2.2 Camera Intrinsic Parameters
    2.2.3 Camera Intrinsic Calibration
  2.3 Image Preprocessing
    2.3.1 Edge Detection
      2.3.1.1 Sobel Edge Detection
      2.3.1.2 Morphological Image Processing
      2.3.1.3 Canny Edge Detection
    2.3.2 Hough Transform
    2.3.3 Hough Transform Screening Criteria
  2.4 Camera Imaging Properties in Computer Vision
    2.4.1 Vanishing Points
    2.4.2 Vanishing Lines
    2.4.3 Geometric Relationship between Vanishing Points and Vanishing Lines
Chapter 3: Scene Recognition at Corners
  3.1 Preface
  3.2 Image Feature Points
    3.2.1 Scale-Invariant Feature Transform (SIFT)
      3.2.1.1 Scale-Space Construction and Difference-of-Gaussian Images
      3.2.1.2 Extremum Detection and Edge Responses
      3.2.1.3 Orientation Assignment and Feature Description
    3.2.2 Speeded-Up Robust Features (SURF)
      3.2.2.1 Integral Images
      3.2.2.2 Fast Feature Detection and Scale-Space Construction
      3.2.2.3 Orientation Assignment and Feature Description
    3.2.3 Template Selection for Feature Point Matching
      3.2.3.1 Scene Object Matching
      3.2.3.2 Global Scene Matching
Chapter 4: Control Architecture of the Mobile Robot Navigation System
  4.1 Preface
  4.2 Visual Servo Control Architecture
  4.3 Visual-Loop Controller Design for the Mobile Robot
    4.3.1 Closed-Loop Architecture
    4.3.2 Look-Up Table Method
    4.3.3 Image Pixel Error Compensator
  4.4 Mobile Robot System Architecture
Chapter 5: Design and Implementation of the Visual Navigation System
  5.1 Preface
  5.2 Experimental Scenes
    5.2.1 Scene Description
    5.2.2 Scene Analysis
      5.2.2.1 Straight-Line Travel
      5.2.2.2 L-Shaped Corners
  5.3 Experimental Equipment
  5.4 Human-Machine Interface
  5.5 Experimental Results
    5.5.1 Straight-Line Travel Experiments
    5.5.2 Corner Localization Experiments
    5.5.3 Mobile Robot Disturbance Experiments
    5.5.4 Indoor Autonomous Navigation Experiments
Chapter 6: Conclusions and Suggestions
  6.1 Conclusions
  6.2 Future Work and Suggestions
References
Curriculum Vitae
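Chapter 4 closes the visual loop by driving the robot's heading from the horizontal pixel error between the vanishing point and the image center. The thesis's image pixel error compensator design is not reproduced in this record; the sketch below uses a simple PI-style law purely as a stand-in to show the closed-loop structure, and the class name, gains, and saturation limit are all assumed values.

```python
class PixelErrorCompensator:
    """PI-style stand-in for the thesis's image pixel error compensator."""

    def __init__(self, kp=0.004, ki=0.0005, limit=0.5):
        # Gains and saturation limit are assumed, not taken from the thesis.
        self.kp, self.ki, self.limit = kp, ki, limit
        self.integral = 0.0

    def update(self, vp_x, center_x, dt):
        # Controlled quantity: horizontal pixel error between the estimated
        # vanishing point and the image center.
        error = vp_x - center_x
        self.integral += error * dt
        omega = self.kp * error + self.ki * self.integral
        # Saturate the heading-rate command (rad/s).
        return max(-self.limit, min(self.limit, omega))

# Per frame: estimate the vanishing point, then steer toward it, e.g.
#   omega = compensator.update(vp_x, image_width / 2, dt=1 / 30)
```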

