| Author: | 黃聰哲 Huang, Tsung-Che |
|---|---|
| Title: | Indoor Positioning and Navigation Based on Control Spherical Panoramic Images (基於全景控制影像進行室內定位及導航之可行性分析) |
| Advisor: | 曾義星 Tseng, Yi-Hsing |
| Degree: | Master |
| Department: | College of Engineering, Department of Geomatics |
| Year of publication: | 2016 |
| Academic year: | 104 |
| Language: | English |
| Pages: | 83 |
| Keywords: | spherical panoramic image, indoor positioning and navigation, image feature matching |
Continuous indoor and outdoor positioning and navigation is a key goal of mobile mapping technology. Indoors and in occluded areas, however, GNSS signal blockage severely degrades positioning accuracy, so developing a high-accuracy theory of indoor positioning and navigation is a pressing need. This study analyzes the feasibility of indoor positioning and navigation based on spherical panoramic images (SPIs). An SPI has a complete field of view (FOV) and therefore carries far richer scene information than a frame image; it reduces the number of images required and thus improves computational efficiency, breaks the FOV limitation of conventional imagery, and avoids the confusion of handling large sets of images.

The approach has two stages. First, a control SPI database is established in the target space: a set of well-distributed, pre-acquired SPIs whose exterior orientation parameters (EOPs) are known, solved by bundle adjustment of the SPI network. Once the database exists, the space is ready to provide positioning and navigation service. Second, the position and orientation parameters (POPs) of a newly taken SPI are solved by automatically searching the database for overlapping control SPIs and obtaining conjugate points through image feature extraction and matching. Matching and blunder detection were tested under three camera configurations: translation, rotation, and tilt. A blunder-detection model suited to the erroneous conjugate pairs produced when matching overlapping SPIs is proposed, and the experiments show that the model is feasible and effective, improving the efficiency and reliability of SPI matching.

For validation, two kinds of conjugate points were used, so that the effect of matching could be assessed: manually measured points and automatically matched points. The test field is an indoor space of the Department of Geomatics, National Cheng Kung University. The proposed orientation solution achieved positioning accuracy of a few centimeters with manually measured conjugate points and about twenty centimeters with automatically matched points; the larger errors stem from improper conjugate pairs produced by the automatic matching process, which underscores the importance of conjugate-point quality. The number of conjugate points and the distribution of control SPIs are confirmed as further factors affecting the positioning result. For attitude estimation, the proposed theory was first tested with simulated control and query SPIs of known EOPs, from which the relative orientation and scale factor can be computed; with error-free conjugate observations, the unknown image attitude is recovered correctly. In the real experiments, however, attitude results from both manually and automatically measured conjugate points were unstable. At this stage no firm conclusion can be drawn about the attitude computation, although the simulation tests confirm that conjugate-point measurement errors also affect the attitude solution.
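Conjugate points on SPIs are matched on the sphere rather than on a plane, so each pixel must first be mapped to a viewing direction before any orientation computation. The sketch below assumes an equirectangular SPI layout (columns spanning 360° of longitude, rows spanning 180° of latitude); the function name and axis convention are illustrative assumptions, not the thesis's stated implementation.

```python
import numpy as np

def spi_pixel_to_ray(u, v, width, height):
    """Map an equirectangular SPI pixel (u, v) to a unit direction
    vector in the camera frame. Column u covers 360 degrees of
    longitude; row v covers 180 degrees of latitude (top = +90)."""
    lon = (u / width) * 2.0 * np.pi - np.pi    # longitude in [-pi, pi)
    lat = np.pi / 2.0 - (v / height) * np.pi   # latitude in [-pi/2, +pi/2]
    return np.array([
        np.cos(lat) * np.sin(lon),  # right
        np.sin(lat),                # up
        np.cos(lat) * np.cos(lon),  # forward
    ])

# The image center maps to the forward axis of the camera frame.
center_ray = spi_pixel_to_ray(2048, 1024, 4096, 2048)
```
Every ray produced this way has unit length, which is what makes the full FOV of an SPI usable in a single, seam-free coordinate system for matching and orientation.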
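The blunder-detection idea for erroneous conjugate pairs can be sketched with the classical coplanarity (essential-matrix) condition on ray vectors: a correct conjugate pair satisfies r2ᵀ[t]×R r1 ≈ 0 for the relative rotation R and baseline t. The function names and the thresholding shown here are a hedged illustration of this condition, not the specific model proposed in the thesis.

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix so that skew(t) @ x == cross(t, x)."""
    return np.array([
        [0.0, -t[2], t[1]],
        [t[2], 0.0, -t[0]],
        [-t[1], t[0], 0.0],
    ])

def coplanarity_residuals(rays1, rays2, R, t):
    """Coplanarity residual |r2^T E r1| per conjugate ray pair,
    with E = [t]x R.  rays1, rays2: (N, 3) unit vectors."""
    E = skew(t) @ R
    return np.abs(np.einsum('ij,jk,ik->i', rays2, E, rays1))

def flag_blunders(rays1, rays2, R, t, threshold=1e-3):
    """Mark conjugate pairs whose residual exceeds the threshold
    as blunders (illustrative fixed threshold)."""
    return coplanarity_residuals(rays1, rays2, R, t) > threshold
```
In practice R and t are unknown at first, so such a residual test is typically wrapped in a robust estimator (e.g. RANSAC over the essential matrix) that hypothesizes the relative pose from minimal subsets and keeps the pairs consistent with it.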