| Author: | 林啟賢 Lin, Chi-Shian |
|---|---|
| Thesis Title: | 基於地圖之室內自走車視覺導航研究 (Study on Map-Based Indoor Mobile Robot Vision Navigation) |
| Advisor: | 鄭銘揚 Cheng, Ming-Yang |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science, Department of Electrical Engineering |
| Year of Publication: | 2009 |
| Graduation Academic Year: | 97 (ROC calendar) |
| Language: | Chinese |
| Pages: | 86 |
| Keywords: | incremental localization, autonomous mobile robot, map-based vision navigation |
This thesis focuses on the implementation of a fully autonomous mobile robot that performs map-based vision navigation along the corridors of a building, using a camera as the only sensor for acquiring information about the surrounding environment. Generally speaking, a map-based navigation system involves three steps: map building, localization, and path planning. In this work, a topological map representing the experimental environment is constructed manually, and the robot's route is set so that, like a patrol vehicle, it circles the corridors without stopping.
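The abstract says the topological map is built by hand but does not describe its data structure, so the following minimal Python sketch shows one plausible form for such a map and its patrol route. The names `Node`, `CORRIDOR_MAP`, `PATROL_ROUTE`, and `next_waypoint` are hypothetical, not taken from the thesis.

```python
from dataclasses import dataclass

@dataclass
class Node:
    """A topologically distinct place in the corridor, e.g. a corner."""
    name: str
    heading_out: float  # heading (degrees) for leaving this node on patrol

# Hypothetical map of a rectangular corridor loop: four corner nodes,
# each edge a straight corridor segment (illustrative only).
CORRIDOR_MAP = {
    "corner_A": Node("corner_A", heading_out=0.0),
    "corner_B": Node("corner_B", heading_out=90.0),
    "corner_C": Node("corner_C", heading_out=180.0),
    "corner_D": Node("corner_D", heading_out=270.0),
}

# Patrol route: visit the corners in order, wrapping around forever.
PATROL_ROUTE = ["corner_A", "corner_B", "corner_C", "corner_D"]

def next_waypoint(current: str) -> Node:
    """Return the node the robot should head for after `current`."""
    i = PATROL_ROUTE.index(current)
    return CORRIDOR_MAP[PATROL_ROUTE[(i + 1) % len(PATROL_ROUTE)]]
```

A map of this kind stores only places and their connectivity, which is what makes a single camera sufficient: the robot only has to recognize that it has reached the next node, not measure its metric position.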
Because the design of the navigation system is closely tied to images of the corridor ceiling, a webcam mounted on the robot captures ceiling images, and ceiling features such as distinctive lines, light fixtures, and corners are exploited so that the system can determine its orientation and position from the images; a motion control strategy is then designed to drive the robot along the corridor.
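The abstract does not spell out the image-processing chain, so the sketch below is only an illustration, under the assumption that standard OpenCV building blocks (Canny edges plus a probabilistic Hough transform) stand in for the thesis's line-feature extraction; every parameter value is a placeholder.

```python
import cv2
import numpy as np
from typing import Optional

def heading_from_ceiling(image_bgr: np.ndarray) -> Optional[float]:
    """Estimate the heading error (degrees) from ceiling line segments.

    Returns None when no usable line segment is found. The Canny and
    Hough parameters are illustrative, not the thesis's settings.
    """
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=60, minLineLength=80, maxLineGap=10)
    if segments is None:
        return None
    angles = []
    for x1, y1, x2, y2 in segments[:, 0]:
        angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))
        # Fold into [-90, 90) so opposite segment directions agree.
        angles.append((angle + 90.0) % 180.0 - 90.0)
    # The dominant ceiling lines run along the corridor axis, so their
    # median angle serves as a steering correction.
    return float(np.median(angles))
```

Corridor ceilings almost always contain long edges parallel to the direction of travel (panel seams, rows of light fixtures), so steering to drive this angle toward zero keeps the robot aligned with the corridor.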
In addition, since ceiling images are sensitive to fluctuations in illumination, variations in lighting often cause errors in orientation estimation and localization. To cope with this problem, an adaptive threshold is introduced to adjust the parameter values used in the navigation system, improving its adaptability and robustness to the environment. Experimental results show that the proposed navigation system indeed enables the autonomous mobile robot to follow the planned route successfully.
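The abstract does not specify how the adaptive threshold is computed. As a hedged sketch only, one common realization of the idea is a locally adaptive image threshold, shown here with OpenCV's `adaptiveThreshold`; whether the thesis adapts a per-pixel image threshold or other system parameters is not stated in the abstract, and `blockSize` and `C` below are illustrative values.

```python
import cv2
import numpy as np

def segment_ceiling_lights(gray: np.ndarray) -> np.ndarray:
    """Binarize an 8-bit grayscale ceiling image with a local threshold.

    A fixed global threshold fails when illumination changes along the
    corridor; comparing each pixel against its neighborhood mean keeps
    the lamps and ceiling lines separable under varying lighting.
    """
    # With C = -10 a pixel turns white only if it is at least 10 gray
    # levels brighter than its 31x31 neighborhood mean.
    return cv2.adaptiveThreshold(gray, 255,
                                 cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, blockSize=31, C=-10)

# Example usage with a hypothetical file name:
# binary = segment_ceiling_lights(cv2.imread("ceiling.png", cv2.IMREAD_GRAYSCALE))
```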