
Graduate Student: Hsu, I-Chang (許益彰)
Thesis Title: Study on Vision-Based Navigation for an Indoor Mobile Robot (室內自走車之視覺導航研究)
Advisor: Cheng, Ming-Yang (鄭銘揚)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of Publication: 2008
Graduation Academic Year: 96
Language: Chinese
Number of Pages: 82
Chinese Keywords: vision-based navigation, indoor autonomous navigation, mobile robot
Foreign Keywords: mobile robot, autonomous navigation, vision-based navigation
  • Autonomous navigation has long been a popular topic in robotics research; its goal is to enable a robot to make autonomous decisions about its own motion. Vision-based navigation, in particular, analyzes information about the environment captured by a camera to achieve this goal. Existing methods fall roughly into three categories: map-based navigation, map-less navigation, and map-building-based navigation. Whichever method is used, it typically relies heavily on computer-vision and image-processing techniques, especially image matching. In view of this, this thesis first gives a brief overview of fundamental computer-vision topics such as the camera model, epipolar geometry, and stereo vision, and then introduces several common methods for matching or tracking image feature points. Finally, for the navigation itself, after comparing the strengths and weaknesses of the various methods, this thesis adopts an appearance-based approach to realize autonomous navigation for a mobile robot. Experimental results show that, provided feature points can be adequately detected and tracked, the mobile robot can indeed accomplish autonomous indoor vision-based navigation.

    Autonomous navigation is an important topic in robot-related research. In particular, vision-based navigation approaches exploit environment information captured by a camera to achieve autonomous navigation. Existing vision-based navigation approaches can be divided into three categories: map-building-based navigation, map-based navigation, and map-less navigation. Regardless of which approach is used, techniques related to the camera model, calibration, epipolar geometry, and stereo vision are very important in vision-based navigation. Therefore, this thesis first gives a brief introduction to these techniques as well as to image feature matching. After comparing the performance of different approaches, this thesis employs an appearance-based approach to implement vision-based navigation for a mobile robot. Experimental results indicate that the mobile robot can successfully perform vision-based indoor navigation if the image features are appropriately detected and tracked.
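    The abstract's closing claim hinges on reliable feature-point detection, one of the detectors surveyed in the thesis being the Harris corner detector (Section 3.2.1). As a rough illustration only, the following is a minimal pure-Python sketch of the Harris corner response on a synthetic image; the unweighted 3x3 window and the constant k = 0.04 are simplifying assumptions here, not the thesis's actual implementation, which is not reproduced in this record.

    ```python
    # Minimal Harris corner response, illustrative sketch only.
    # Real detectors use Gaussian-weighted windows and non-maximum suppression.

    def harris_response(img, k=0.04):
        """Return R = det(M) - k*trace(M)^2 per interior pixel, where M is the
        gradient structure matrix summed over an unweighted 3x3 window."""
        h, w = len(img), len(img[0])
        # Central-difference gradients (left zero at the image border).
        Ix = [[0.0] * w for _ in range(h)]
        Iy = [[0.0] * w for _ in range(h)]
        for y in range(1, h - 1):
            for x in range(1, w - 1):
                Ix[y][x] = (img[y][x + 1] - img[y][x - 1]) / 2.0
                Iy[y][x] = (img[y + 1][x] - img[y - 1][x]) / 2.0
        R = [[0.0] * w for _ in range(h)]
        for y in range(2, h - 2):
            for x in range(2, w - 2):
                sxx = sxy = syy = 0.0
                for dy in (-1, 0, 1):
                    for dx in (-1, 0, 1):
                        gx, gy = Ix[y + dy][x + dx], Iy[y + dy][x + dx]
                        sxx += gx * gx
                        sxy += gx * gy
                        syy += gy * gy
                R[y][x] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
        return R

    # Synthetic 12x12 image containing a bright 6x6 square.
    img = [[0.0] * 12 for _ in range(12)]
    for y in range(3, 9):
        for x in range(3, 9):
            img[y][x] = 1.0

    R = harris_response(img)
    # The square's corner at (3, 3) gives a positive response; the midpoint of
    # its top edge at (3, 5) gives a negative, edge-like response.
    ```

    The sign pattern above is the usual Harris criterion: R is large and positive at corners (both eigenvalues of M large), negative along edges (one eigenvalue dominant), and near zero in flat regions.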

    Chinese Abstract
    English Abstract
    Acknowledgments
    Table of Contents
    List of Figures
    Chapter 1  Introduction
      1.1  Preface
      1.2  Research Motivation and Objectives
      1.3  Literature Review
      1.4  Thesis Organization
    Chapter 2  Introduction to the Camera
      2.1  Preface
      2.2  Camera Perspective Projection Model
      2.3  Camera Calibration
        2.3.1  Basic Relations
        2.3.2  Solving for the Intrinsic Parameters
        2.3.3  Solving for the Extrinsic Parameters
      2.4  Geometric Relations Between Two Cameras
        2.4.1  Epipolar Geometry of Two Cameras
        2.4.2  Stereo Vision
    Chapter 3  Image Feature Points
      3.1  Preface
      3.2  Feature Point Detection
        3.2.1  Harris Feature Detection
        3.2.2  KLT Feature Detection
        3.2.3  SIFT Feature Detection
      3.3  Feature Description and Matching or Tracking
        3.3.1  Image Template Description and Similarity Measurement
        3.3.2  SIFT Feature Description and Matching
    Chapter 4  Vision-Based Navigation
      4.1  Preface
      4.2  Incremental Localization
      4.3  Appearance-Based Visual Navigation
      4.4  Experimental Results
    Chapter 5  Conclusions and Suggestions
    References
    Vita

    [1] G. N. DeSouza and A. C. Kak, “Vision for mobile robot navigation: a survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, pp. 237-267, 2002.
    [2] D. Filliat and J. A. Meyer, “Map-based navigation in mobile robots: I. A review of localization strategies,” Cognitive Systems Research, vol. 4, pp. 243-282, 2003.
    [3] S. Thrun, “Probabilistic algorithms in robotics,” AI Magazine, vol. 21, pp. 93-109, 2000.
    [4] F. Dellaert, D. Fox, W. Burgard, and S. Thrun, “Monte Carlo localization for mobile robots,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 1322-1328, 1999.
    [5] S. Atiya and G. D. Hager, “Real-time vision-based robot localization,” IEEE Transactions on Robotics and Automation, vol. 9, pp. 785-800, 1993.
    [6] A. Kosaka and A. Kak, “Fast vision-guided mobile robot navigation using model-based reasoning and prediction of uncertainties,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 2177-2186, 1992.
    [7] M. McHenry, Y. Cheng, and L. Matthies, “Vision-based localization in urban environments,” in Proceedings of the International Society for Optical Engineering, pp. 359-370, 2005.
    [8] Y. Matsumoto, M. Inaba, and H. Inoue, “Visual navigation using view-sequenced route representation,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 83-88, 1996.
    [9] S. D. Jones, C. Andresen, and J. L. Crowley, “Appearance based process for visual navigation,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 551-557, 1997.
    [10] T. Ohno, A. Ohya, and S. Yuta, “Autonomous navigation for mobile robots referring pre-recorded image sequence,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 672-679, 1996.
    [11] C. Balkenius, “Spatial learning with perceptually grounded representations,” in Proceedings of the EUROMICRO Workshop on Advanced Mobile Robots, pp. 16-21, 1997.
    [12] J. Santos-Victor, G. Sandini, F. Curotto, and S. Garibaldi, “Divergent stereo for robot navigation: learning from bees,” in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 434-439, 1993.
    [13] J. J. Leonard and H. F. Durrant-Whyte, “Simultaneous map building and localization for an autonomous mobile robot,” in Proceedings of the IEEE/RSJ International Workshop on Intelligent Robots and Systems, pp. 1442-1447, 1991.
    [14] H. Durrant-Whyte and T. Bailey, “Simultaneous localisation and mapping (SLAM): Part I. The essential algorithms,” IEEE Robotics and Automation Magazine, vol. 13, pp. 99-110, 2006.
    [15] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, “FastSLAM: A factored solution to the simultaneous localization and mapping problem,” in Proceedings of the Association for the Advancement of Artificial Intelligence National Conference on Artificial Intelligence, pp. 593-598, 2002.
    [16] M. Montemerlo, S. Thrun, D. Koller, and B. Wegbreit, “FastSLAM 2.0: An improved particle filtering algorithm for simultaneous localization and mapping that provably converges,” in Proceedings of the International Joint Conference on Artificial Intelligence, pp. 1151-1156, 2003.
    [17] R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, 2nd edition. Cambridge, U.K.: Cambridge University Press, 2003.
    [18] Videre, “SVS system,” http://www.videredesign.com, 2008.
    [19] J. Y. Bouguet, “Camera calibration toolbox for Matlab,” http://www.vision.caltech.edu/bouguetj/calib_doc, 2008.
    [20] Z. Zhang, “Flexible camera calibration by viewing a plane from unknown orientations,” in Proceedings of the IEEE International Conference on Computer Vision, pp. 666-673, 1999.
    [21] C. Harris and M. Stephens, “A combined corner and edge detector,” in Proceedings of the Alvey Vision Conference, pp. 147-151, 1988.
    [22] H. P. Moravec, Obstacle avoidance and navigation in the real world by a seeing robot rover, Ph.D. dissertation, Stanford University, September 1980.
    [23] C. Tomasi and T. Kanade, “Detection and tracking of point features,” Carnegie Mellon University, Pittsburgh, PA, Technical Report, CMU-CS-91-132, 1991.
    [24] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, pp. 91-110, 2004.
    [25] J. Wang, H. Zha, and R. Cipolla, “Coarse-to-fine vision-based localization by indexing scale-invariant features,” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 36, pp. 413-422, 2006.
    [26] 林任宏, A Vision-Based Autonomous Localization System for a Mobile Platform, Master's thesis, Department of Electrical Engineering, National Chung Cheng University, 2005.
    [27] A. Milella and R. Siegwart, “Stereo-Based Ego-Motion Estimation Using Pixel Tracking and Iterative Closest Point,” in Proceedings of the IEEE International Conference on Computer Vision Systems, 2006.
    [28] L. H. Matthies, Dynamic stereo vision, Ph.D. dissertation, Carnegie Mellon University, October 1989.
    [29] L. Matthies and S. Shafer, “Error modeling in stereo navigation,” IEEE Journal of Robotics and Automation, vol. 3, pp. 239-248, 1987.
    [30] Z. Chen and S. T. Birchfield, “Qualitative vision-based mobile robot navigation,” in Proceedings of the IEEE International Conference on Robotics and Automation, pp. 2686-2692, 2006.
    [31] S. Birchfield, “KLT in C,” http://www.ces.clemson.edu/~stb/klt, 2007.

    Full text released on campus: 2013-08-08
    Full text released off campus: 2013-08-08