| 研究生 (Graduate Student): | 林冠穎 Lin, Kuan-Ying |
|---|---|
| 論文名稱 (Thesis Title): | 基於相對方位參數平差之立體視覺里程計新計算方法與實現 (A Novel Computation Method and Implementation of Stereo Visual Odometry Based on ROP Adjustment) |
| 指導教授 (Advisors): | 曾義星 Tseng, Yi-Hsing; 江凱偉 Chiang, Kai-Wei |
| 學位類別 (Degree): | 博士 Doctor of Philosophy |
| 系所名稱 (Department): | 工學院 College of Engineering - 測量及空間資訊學系 Department of Geomatics |
| 論文出版年 (Year of Publication): | 2023 |
| 畢業學年度 (Academic Year): | 111 (2022-2023) |
| 語文別 (Language): | 中文 Chinese |
| 論文頁數 (Pages): | 181 |
| 中文關鍵詞 (Keywords, Chinese): | 立體視覺里程計、相對方位、車輛動態估計、網形平差、多感測器整合、電腦視覺 |
| 外文關鍵詞 (Keywords, English): | stereo visual odometry, relative orientation, vehicle motion estimation, network adjustment, multi-sensor integration, computer vision |
立體視覺里程計 (Stereo Visual Odometry, SVO) 是一種使用立體相機系統拍攝立體影像序列來估計移動載台的連續姿態和位置的技術。這種事先經過系統率定之立體相機系統,不需要額外的感測器便可恢復具有真實尺度的平移量。目前立體視覺里程計方法廣泛發展於電腦視覺領域當中,通常涉及追蹤與匹配大量的共軛像點來計算每一時刻的三維點雲,進而推算相鄰時刻之間的相對方位參數(Relative Orientation Parameters, ROPs),透過疊加這些相對方位參數進而完整的恢復行進軌跡。接著使用局部優化改善先前估算之姿態與位置,其中光束法平差是最廣泛採用的。然而,現今的SVO方法需要高頻率的影像輸入來維持特徵點追蹤,三維點雲產製以及光束法平差亦需要複雜且龐大的計算。這些造成實現演算法需要大量計算資源來滿足需求。另外,在電腦視覺領域,強調即時與自動化的計算,演算法必須選擇關鍵幀以減少計算量,以便進行區域性優化。這導致每個時間點的動態估計精確度難以確定。此外,這些電腦視覺方法的每次的執行結果也可能不一致。
本論文從攝影測量的觀點提出了一種新穎的SVO方法來面對這些挑戰。基於從多個圖像導出的幾何約束,利用時間相鄰立體圖像對中所有可能組合的相對方位參數,發展出網形平差模型。在設計的網形平差模型中,採用相對方位參數為觀測量,而不是使用傳統光束法平差中的像點。在數據處理過程中,嚴謹率定後的相對方位參數作為平差模型之約制。為了穩定計算,當檢測到靜止或異常運動的情況時,便應用兩個基於相對方位參數發展的車輛運動約制。除此之外,本論文擴展基於相對方位參數的立體視覺里程計(ROP-based SVO)進行多感測器融合,提出一套INS/GNSS/ROP-based SVO整合架構。我們的SVO方法導出速度與航向觀測量來融合,並提供自適應權重以及車輛運動約制,來實現更穩固的無縫式導航。
本論文自行研發了一套立體相機系統,以實現提出的ROP-based SVO方法。我們使用精確的攝影測量方法來進行系統校準,並轉換參數為電腦視覺的定義,以供後續的演算法使用。此系統具有可攜性,可安裝在不同的移動載台上。我們選擇露營推車和陸地車輛作為測試平台,以適應不同的場景和規模。首先對公開的KITTI數據集進行幀速測試,在幀速頻率降至5 Hz時,ROP-based SVO明顯優於最先進的ORB-SLAM3。隨後,也是在低頻率的幀速下進行後續測試,兩個平台皆於室內和室外場域進行實驗,共有四個實驗。這四個實驗結果皆與最新的ORB-SLAM3演算法進行比較,並進行位置誤差和計算複雜度的分析。根據實驗結果,本研究所開發之ROP-based SVO方法在導航應用上具有高可行性與準確度,於軌跡展示、位置誤差、飄移率以及計算複雜度各方面皆勝過ORB-SLAM3。而在多感測器整合方面,本論文亦針對室內與室外進行實驗,兩個實驗場域與車輛實驗相同。將低成本INS/GNSS系統與ROP-based SVO結合後,在2D和3D位置、3D速度以及航向等關鍵性能指標方面,兩個實驗皆能提升整合成果。
綜合上述,本論文提出一個新穎的SVO計算方法,其基於相對方位參數發展的運動估計和局部優化技術,能夠避免密集複雜的三維點雲產製與光束法平差運算。我們的方法在計算上更為顯著輕量,更適用於即時或資源有限的電腦視覺應用。依據實驗成果,在低頻率的影像輸入情況下,位置誤差與計算複雜度皆能比當前最先進的ORB-SLAM3好,顯著提升穩定性與精確度。此外,我們所發展的SVO方法還可以擴展至多感測器整合,不僅能夠提供觀測量,並且提供自適應權重評估和車輛運動約制。依據實驗成果,將ROP-based SVO加入INS/GNSS系統後,能實現更佳的無縫式導航性能。
Stereo visual odometry (SVO) is a technique that uses a stereo camera system to capture sequences of stereo images for estimating the continuous position and orientation of a moving platform. Such a pre-calibrated stereo camera system can recover translational motion at true scale without the need for additional sensors. Currently, SVO methods are widely developed in the computer vision (CV) community and typically involve tracking and matching a large number of conjugate image points to compute a three-dimensional (3D) point cloud at every time epoch, from which the relative orientation parameters (ROPs) between consecutive epochs are estimated. By compounding these ROPs, the moving trajectory can be reconstructed completely. Local optimization is then applied to refine the previously estimated attitudes and positions, with bundle adjustment being the most commonly used technique. However, present SVO methods require high-frequency image input to maintain feature point tracking, and 3D point cloud generation and bundle adjustment demand complex and extensive computation. These factors create a significant demand for computational resources when implementing such algorithms. Additionally, the CV field emphasizes real-time, automatic computation, so algorithms must select keyframes to reduce the computational load of local optimization, which makes it challenging to assess the precision of the motion estimate at each epoch. Furthermore, the results produced by CV-based SVO methods can vary from run to run.
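The pose-compounding step described above — chaining per-epoch ROPs into absolute poses — can be sketched as follows. This is a minimal illustration with hypothetical function names, not the thesis implementation:

```python
import numpy as np

def rop_to_matrix(R, t):
    """Pack a rotation matrix R (3x3) and translation t (3,) into a 4x4 homogeneous transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def chain_rops(rops):
    """Compound relative transforms (pose of epoch k in epoch k-1's frame)
    into absolute poses expressed in the first epoch's frame."""
    pose = np.eye(4)
    trajectory = [pose.copy()]
    for T_rel in rops:
        pose = pose @ T_rel          # accumulate motion epoch by epoch
        trajectory.append(pose.copy())
    return trajectory

# Example: two unit steps forward along x (identity rotation)
step = rop_to_matrix(np.eye(3), np.array([1.0, 0.0, 0.0]))
traj = chain_rops([step, step])
print(traj[-1][:3, 3])   # final position: [2. 0. 0.]
```

Because each epoch's error is multiplied into every later pose, errors accumulate along the chain — which is why the local optimization discussed next is needed.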
This thesis proposes a novel SVO method from a photogrammetric perspective to address these challenges. Leveraging geometric constraints derived from multiple images, a network adjustment model is developed using all possible combinations of ROPs among time-adjacent stereo image pairs. In this network adjustment model, ROPs serve as the observations, rather than the image points used in conventional bundle adjustment. During data processing, the rigorously calibrated ROPs act as constraints in the adjustment model. To stabilize the computation, two ROP-based vehicle motion constraints are applied whenever stationary or abnormal motion is detected. Additionally, this thesis extends the ROP-based SVO to multi-sensor fusion and proposes an INS/GNSS/ROP-based SVO integration scheme. Our SVO method derives velocity and heading measurements for the fusion, and provides adaptive weighting and vehicle motion constraints to achieve more robust seamless navigation.
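The idea of treating ROPs as observations in a network adjustment can be illustrated with a toy, translation-only least-squares adjustment. This is a deliberately simplified 1D sketch — the thesis adjusts full six-parameter ROPs — and all names are hypothetical:

```python
import numpy as np

def adjust_positions(n_epochs, observations):
    """Translation-only network adjustment (1D for clarity).
    observations: list of (i, j, d_ij, weight), where d_ij observes x_j - x_i.
    Epoch 0 is fixed at the origin as the datum."""
    n_unknowns = n_epochs - 1                     # x_0 = 0 is fixed
    A = np.zeros((len(observations), n_unknowns))  # design matrix
    l = np.zeros(len(observations))                # observation vector
    W = np.zeros(len(observations))                # weights
    for row, (i, j, d, w) in enumerate(observations):
        if i > 0:
            A[row, i - 1] = -1.0
        if j > 0:
            A[row, j - 1] = 1.0
        l[row] = d
        W[row] = w
    N = A.T @ (W[:, None] * A)                    # weighted normal matrix
    u = A.T @ (W * l)
    x = np.linalg.solve(N, u)                     # adjusted x_1..x_{n-1}
    return np.concatenate([[0.0], x])

# Consecutive ROPs say each step is ~1 m; a skip-pair observation (0 -> 2) says 2.2 m.
obs = [(0, 1, 1.0, 1.0), (1, 2, 1.0, 1.0), (0, 2, 2.2, 1.0)]
print(adjust_positions(3, obs))
```

The redundant skip-pair observation (epoch 0 to epoch 2) is what distinguishes a network adjustment from simple chaining: the inconsistency is distributed over all epochs in a least-squares sense.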
A stereo camera system was developed in-house to implement the proposed ROP-based SVO method. The system was calibrated using precise photogrammetric techniques, and the parameters were transformed into the CV definitions for use in the subsequent algorithms. The system is portable and can be mounted on different mobile platforms; a camping cart and a land vehicle were chosen as platforms to cover different scenes and scales. An initial frame rate test on the open KITTI dataset showed that the ROP-based SVO method significantly outperforms the state-of-the-art ORB-SLAM3 when the frame rate is reduced to 5 Hz. Subsequent tests were likewise conducted at this lower frame rate: four experiments in total, indoors and outdoors, using the two platforms. The results of all four experiments were compared with ORB-SLAM3, with analyses of position error and computational complexity. Based on the experimental findings, the ROP-based SVO method developed in this thesis demonstrates high feasibility and accuracy for navigation applications, outperforming ORB-SLAM3 in trajectory representation, position error, drift rate, and computational complexity. For the multi-sensor integration, indoor and outdoor experiments were conducted in the same test fields as the land vehicle experiments. After combining a low-cost INS/GNSS system with the ROP-based SVO, significant improvements were observed in key performance indicators, including 2D and 3D position, 3D velocity, and heading, in both scenarios.
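The parameter transformation mentioned above — expressing photogrammetric interior orientation in the CV intrinsic-matrix convention — can be sketched roughly as follows. This is a simplified illustration assuming square pixels and no distortion terms, not the thesis's exact calibration pipeline:

```python
import numpy as np

def photogrammetric_to_cv_K(c_mm, xp_mm, yp_mm, pixel_size_mm, width_px, height_px):
    """Convert photogrammetric interior orientation (principal distance c and
    principal point offsets in mm, origin at the image centre, y-axis up) into
    a CV-style intrinsic matrix K (pixels, origin at top-left, y-axis down)."""
    f_px = c_mm / pixel_size_mm                    # focal length in pixels
    cx = width_px / 2.0 + xp_mm / pixel_size_mm    # principal point, x
    cy = height_px / 2.0 - yp_mm / pixel_size_mm   # y-axis flips between conventions
    return np.array([[f_px, 0.0, cx],
                     [0.0, f_px, cy],
                     [0.0, 0.0, 1.0]])

# Hypothetical numbers: 8 mm principal distance, 4 um pixels, 1920x1080 sensor
K = photogrammetric_to_cv_K(c_mm=8.0, xp_mm=0.02, yp_mm=-0.01,
                            pixel_size_mm=0.004, width_px=1920, height_px=1080)
print(K)
```

The essential points are the unit change (mm to pixels), the origin shift to the top-left corner, and the sign flip of the y-axis; the thesis (and Lin et al., 2022) treats these conversions rigorously, including distortion models.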
In summary, this thesis presents a novel SVO computation method whose ROP-based motion estimation and local optimization techniques avoid intensive 3D point cloud generation and complex bundle adjustment computations. Our method is computationally much lighter and therefore better suited to real-time or resource-constrained CV applications. According to the experimental results, even at low frame rates the proposed method outperforms the state-of-the-art ORB-SLAM3 in both position error and computational complexity, significantly enhancing robustness and accuracy. Furthermore, the developed SVO method can be extended to multi-sensor integration, providing not only measurements but also adaptive weighting and vehicle motion constraints. According to the experimental results, incorporating the ROP-based SVO into an INS/GNSS system leads to improved seamless navigation performance.
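As a rough illustration of the velocity and heading measurements that the SVO contributes to the INS/GNSS fusion, both can be derived from consecutive SVO positions. The east-north-up frame convention and the function name below are assumptions for this sketch, not the thesis's exact formulation:

```python
import numpy as np

def svo_velocity_heading(p_prev, p_curr, dt):
    """Derive the velocity vector and heading angle implied by two consecutive
    SVO positions (assumed expressed in a locally level east-north-up frame)."""
    v = (np.asarray(p_curr) - np.asarray(p_prev)) / dt     # velocity, m/s
    heading = np.degrees(np.arctan2(v[0], v[1])) % 360.0   # clockwise from north
    return v, heading

# Moving 1 m east and 1 m north in 0.2 s -> 45 deg heading
v, hdg = svo_velocity_heading([0.0, 0.0, 0.0], [1.0, 1.0, 0.0], dt=0.2)
print(v, hdg)   # [5. 5. 0.] 45.0
```

In a filter such as an EKF, measurements of this kind would typically enter as updates, with the adaptive weights mentioned above controlling their measurement noise covariance.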
Campus access: to be made publicly available on 2028-10-30.