
Graduate Student: 黃瓘茗
Huang, Guan-Ming
Thesis Title: 基於逐步光束法平差發展立體視覺里程計
Development of Stereo Visual Odometry Based on Stepwise Bundle Adjustment
Advisor: 曾義星
Tseng, Yi-Hsing
Degree: 碩士 (Master)
Department: 工學院 - 測量及空間資訊學系
Department of Geomatics, College of Engineering
Year of Publication: 2022
Academic Year of Graduation: 110
Language: English
Number of Pages: 94
Chinese Keywords: 立體視覺里程計、逐步光束法平差、共面式、共線式、循環匹配
Foreign Keywords: Stereo visual odometry, Stepwise bundle adjustment, Coplanarity condition, Collinearity condition, Circular matching
    The algorithm that uses cameras as sensors to recover a moving trajectory and to solve its three-dimensional position and attitude is called visual odometry (VO). VO is further divided into monocular visual odometry (MVO) and stereo visual odometry (SVO). Both MVO and SVO are image-based, so the accuracy of the solved positions and attitudes depends heavily on how the images are processed. MVO suffers from less stable image geometry and from the scale-recovery problem; in contrast, SVO can recover the camera trajectory and build a 3D point cloud at the same time. This study therefore focuses on SVO, using sequential images to solve image orientations while simultaneously generating point clouds, which not only resolves the scale problem but also refines the solved image orientations.

    Automatic image matching is one of the main steps of SVO, but matching results may contain errors and require a suitable mechanism to remove them. The commonly used approach is RANdom SAmple Consensus (RANSAC), which relies on random sampling, so the result of each error-elimination run is not exactly the same. To make error elimination more rigorous and consistent, this study proposes a post-matching error-elimination strategy based on the photogrammetric coplanarity and collinearity conditions. A calibrated dual-camera system is used to capture left and right stereo pairs. The determinant of the coplanarity condition is computed to reject gross mismatches, and the collinearity condition is then used to compute the standard deviations of the conjugate points' image-space coordinates for a second round of elimination. Forward intersection is further applied to compute the object-space coordinates of the matched points and to build a 3D point cloud. To strengthen the stability of the feature points, the concept of circular matching is introduced: only feature points shared by the four images of the previous and current stereo pairs are retained, and these are called quadruple points. Finally, the quadruple points are used for stepwise bundle adjustment, which proceeds as follows: the four images taken at two consecutive epochs are adjusted together to solve their positions and attitudes; the solved results are then carried into the next set of four images taken at the following pair of epochs, and the bundle adjustment is performed again. Repeating this process step by step gradually recovers the camera's moving trajectory. The experiments use a self-built dual-camera system that has been calibrated, so the required prior information is available; the system is mounted on a cart and driven along. Two scenes are tested, and the full procedure of image-matching error elimination and stepwise bundle adjustment is carried out. The results show that the solved trajectories agree with the actual capture conditions and that the point-cloud distributions also match the photographed scenes. The experimental results verify the feasibility of the stereo visual odometry based on stepwise bundle adjustment developed in this study: besides recovering true scale, the method also refines the image orientations and further produces 3D point clouds.
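    To make the coplanarity-based rejection described above concrete, the following is a minimal Python sketch of the standard photogrammetric coplanarity check (the triple product of the stereo base and the two conjugate rays); the base vector, ray values, and threshold used here are illustrative assumptions, not code or data from the thesis.

        import numpy as np

        def coplanarity_residual(base, ray_left, ray_right):
            # Determinant of the 3x3 matrix whose rows are the stereo base and the
            # two conjugate image rays expressed in the same frame, i.e. the scalar
            # triple product base . (ray_left x ray_right).  A correct match lies in
            # the epipolar plane through the base, so the value is near zero; a gross
            # mismatch yields a large value and can be rejected.
            return float(np.linalg.det(np.vstack([base, ray_left, ray_right])))

        # Illustrative values: a 0.5 m base along X, one consistent and one bad match.
        base = np.array([0.5, 0.0, 0.0])
        good = (np.array([0.10, 0.02, 1.0]), np.array([-0.35, 0.02, 1.0]))
        bad  = (np.array([0.30, -0.05, 1.0]), np.array([-0.10, 0.08, 1.0]))
        for ray_l, ray_r in (good, bad):
            print(round(coplanarity_residual(base, ray_l, ray_r), 4))
        # prints 0.0 for the consistent pair and -0.065 for the mismatch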

    The algorithm that uses cameras to reconstruct movement, that is, position and attitude, is called visual odometry (VO). VO can be divided into monocular visual odometry (MVO) and stereo visual odometry (SVO). MVO uses a single camera to take continuous images, whereas SVO uses two cameras mounted on a platform to obtain a stereo pair at each instant. The accuracy of the solved positions and attitudes depends heavily on how well the images are processed. MVO suffers from unstable image geometry and an unknown scale. In contrast, SVO can solve the camera movement and build 3D point clouds simultaneously. Therefore, this research is dedicated to developing an SVO algorithm. There are two major goals: the first is to use the photogrammetric coplanarity and collinearity conditions to eliminate matching errors; the second is to perform stepwise bundle adjustment to solve the continuous movement of the stereo camera. To make error elimination more rigorous and consistent, this research proposes an error-elimination strategy based on the photogrammetric coplanarity and collinearity conditions. To increase the stability of the matching points, this research also adopts the concept of circular matching, which retains only the feature points observed in all four images of the previous and current epochs, called quadruple points. Forward intersection is performed to calculate the coordinates of the quadruple points in the object frame and to build a 3D point cloud. Finally, the quadruple points are used to perform stepwise bundle adjustment, so that the positions and attitudes of the two cameras at the current time can be solved. These solved positions and attitudes are then fixed and used to solve the next stations, and the stepwise bundle adjustment is performed again. Eventually, the whole trajectory is reconstructed. The results show that the proposed method makes the error-elimination results more stable and consistent, and that each station of the stereo camera is reconstructed precisely.
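    As a concrete illustration of the forward intersection step mentioned in both abstracts, the following is a minimal Python sketch of a least-squares two-ray space intersection; the function name, the 0.5 m base, and the ray values are illustrative assumptions, not code or data from the thesis.

        import numpy as np

        def forward_intersection(c_left, d_left, c_right, d_right):
            # Least-squares intersection of two object-space rays X = C + t * d.
            # c_left, c_right : perspective centers of the calibrated stereo pair.
            # d_left, d_right : rays of one conjugate (quadruple) point, already
            #                   rotated from image space into the object frame.
            A = np.zeros((6, 5))
            b = np.zeros(6)
            A[:3, :3] = np.eye(3); A[:3, 3] = -d_left;  b[:3] = c_left
            A[3:, :3] = np.eye(3); A[3:, 4] = -d_right; b[3:] = c_right
            x, *_ = np.linalg.lstsq(A, b, rcond=None)
            return x[:3]   # object-space coordinates of the intersected point

        # Illustrative numbers: a 0.5 m base along X and one conjugate ray pair.
        point = forward_intersection(np.array([0.0, 0.0, 0.0]), np.array([0.1, 0.0, 1.0]),
                                     np.array([0.5, 0.0, 0.0]), np.array([-0.4, 0.0, 1.0]))
        print(point)   # approximately [0.1, 0.0, 1.0]

    Collecting such intersected points over all quadruple points is what yields the 3D point clouds reported in the results chapter.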

    Abstract i
    摘要 (Chinese Abstract) ii
    致謝 (Acknowledgements) iii
    Contents iv
    List of Tables vi
    List of Figures vii
    Chapter 1 Introduction 1
      1.1 Background 1
      1.2 Objective 3
      1.3 Literature Review 4
      1.4 Thesis Structure 6
    Chapter 2 Stereo Visual Odometry 8
    Chapter 3 Stepwise Bundle Adjustment 13
      3.1 Image Matching 15
      3.2 Eliminating by Photogrammetry 18
        3.2.1 Relative Orientation 20
        3.2.2 Coplanarity Condition 22
        3.2.3 Collinearity Condition 25
      3.3 Circular Matching 31
      3.4 Forward Intersection 33
      3.5 Stepwise Bundle Adjustment 38
    Chapter 4 Experiment 45
      4.1 Equipment 45
      4.2 Experimental Field 47
      4.3 Experimental Preparation 51
        4.3.1 Stereo Camera Calibration 51
        4.3.2 Outdoor Ground Check Points 54
        4.3.3 Indoor Ground Check Points 56
      4.4 Experimental Design 57
        4.4.1 Outdoor Scene 57
        4.4.2 Indoor Scene 60
    Chapter 5 Results 62
      5.1 3D Point Cloud 62
        5.1.1 Outdoor Scene 63
        5.1.2 Indoor Scene 65
      5.2 Movement of the Stereo Camera 67
        5.2.1 Outdoor Scene 68
        5.2.2 Indoor Scene 77
      5.3 The Standard Deviation of Bundle Adjustment 84
        5.3.1 Outdoor Scene 84
        5.3.2 Indoor Scene 88
    Chapter 6 Conclusions 90
    References 92

    Aqel, M. O., Marhaban, M. H., Saripan, M. I., & Ismail, N. B. (2016). Review of visual odometry: types, approaches, challenges, and applications. SpringerPlus, 5(1), 1-26.

    Comport, A. I., Malis, E., & Rives, P. (2010). Real-time quadrifocal visual odometry. The International Journal of Robotics Research, 29(2-3), 245-266.

    Davison, A. J. (2003). Real-time simultaneous localisation and mapping with a single camera. Paper presented at the IEEE International Conference on Computer Vision (ICCV).

    Fernandez, D., & Price, A. (2004). Visual odometry for an outdoor mobile robot. Paper presented at the IEEE Conference on Robotics, Automation and Mechatronics, 2004.

    Fischler, M. A., & Bolles, R. C. (1981). Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography. Communications of the ACM, 24(6), 381-395.

    Gakne, P. V., & O’Keefe, K. (2018). Tackling the scale factor issue in a monocular visual odometry using a 3D city model. Proceedings of the ITSNT.

    Howard, A. (2008). Real-time stereo visual odometry for autonomous ground vehicles. Paper presented at the 2008 IEEE/RSJ international conference on intelligent robots and systems.

    Lienhart, R., & Maydt, J. (2002). An extended set of Haar-like features for rapid object detection. Paper presented at the Proceedings of the International Conference on Image Processing.

    Liu, Y., Gu, Y., Li, J., & Zhang, X. (2017). Robust stereo visual odometry using improved RANSAC-based methods for mobile robot localization. Sensors, 17(10), 2339.

    Lowe, D. G. (1999). Object recognition from local scale-invariant features. Paper presented at the Proceedings of the seventh IEEE international conference on computer vision.

    Mistry, D., & Banerjee, A. (2017). Comparison of feature detection and matching approaches: SIFT and SURF. GRD Journals-Global Research and Development Journal for Engineering, 2(4), 7-13.

    Nistér, D., Naroditsky, O., & Bergen, J. (2004). Visual odometry. Paper presented at the Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2004).

    Persson, M., Piccini, T., Felsberg, M., & Mester, R. (2015). Robust stereo visual odometry from monocular techniques. Paper presented at the 2015 IEEE Intelligent Vehicles Symposium (IV).

    Scaramuzza, D., & Fraundorfer, F. (2011). Visual odometry [tutorial]. IEEE Robotics & Automation Magazine, 18(4), 80-92.

    Strasdat, H., Montiel, J., & Davison, A. J. (2010). Scale drift-aware large scale monocular SLAM. Robotics: Science and Systems VI, 2(3), 7.

    Suikki, K. (2022). Tracking motion in mineshafts: Using monocular visual odometry.

    Yoon, S.-J., & Kim, T. (2019). Development of stereo visual odometry based on photogrammetric feature optimization. Remote Sensing, 11(1), 67.

    Zhao, J. (2020). An efficient solution to non-minimal case essential matrix estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence.

    司元榮. (2014). 視覺導航研究-結合共面與共線條件式於自我移動估計 [A study on visual navigation: Integrating coplanarity and collinearity conditions for ego-motion estimation].

    林照捷. (2018). 以解算連續像對相對方位參數之視覺里程計 [Visual odometry by solving the relative orientation parameters of successive image pairs].

    University of Melbourne. (2001). Australis User Manual. Australia: Software manual.

    Understanding Coordinate System. http://www.noobeed.com/nb_coord_system.htm (accessed 3 August 2004).
