
Author: Shen, Meng-Qian (沈孟謙)
Thesis Title: Historical Aerial Image Georeferencing by Using Coarse-to-fine Feature-based Image Matching (應用由粗到細影像特徵匹配法於歷史航照影像之地理對位)
Advisor: Tseng, Yi-Hsing (曾義星)
Degree: Master
Department: Department of Geomatics, College of Engineering
Year of Publication: 2018
Academic Year of Graduation: 106 (2017-2018)
Language: English
Number of Pages: 66
Chinese Keywords: 歷史航照影像, 影像特徵匹配, 由粗到細 (Historical Aerial Image, Image Feature Matching, Coarse-to-fine)
English Keywords: Historical Aerial Image, Image Feature Matching, Coarse-to-fine
Chinese Abstract:
    Historical aerial images faithfully record the landscape and landforms of their time. They are an important part of spatio-temporal information and help us understand how the geographic environment has changed across eras. Since the 1940s, Taiwan has accumulated a large number of historical aerial photographs covering the whole island, taken for military reconnaissance, land use investigation, and mapping. Through its digital archiving program, the Research Center for Humanities and Social Sciences of Academia Sinica has collected and scanned a large number of historical images of Taiwan. However, because most of these historical images lack geometric orientation information, they are difficult to georeference, which prevents analysis of land and environmental change along the time axis.
    Developments in computer vision provide automated image matching methods. Feature-based matching, which matches images by searching for similar features in their overlapping areas, has been widely used to stitch multiple images. Processing time is a common concern in image matching, especially for high-resolution images, and feature extraction plays a crucial role in the matching process: excessive image detail and information tend to be treated as noise and lead to poor or even failed matching results. This study uses computer vision algorithms to build an automated image matching workflow. The SIFT (Scale Invariant Feature Transform) method is decomposed and applied to match historical images, and, combined with the coarse-to-fine concept, the extracted feature points are classified by scale into large-scale features, which contain less noise, and small-scale features, which carry more image detail but also more noise. The matching strategy begins with the large-scale features, whose matches are more stable, faster to compute, and less error-prone, and then proceeds to the fine, small-scale features, which preserve more image detail but contain more noise and therefore yield a higher matching error rate.
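    The scale-based feature split described above can be illustrated with a short sketch, assuming OpenCV's SIFT implementation; the size_threshold value and the use of kp.size as the scale criterion (rather than the octave index itself) are illustrative assumptions, not the thesis's exact implementation.

    # Minimal sketch: detect SIFT features and split them into coarse / fine
    # groups by keypoint scale. The threshold value is an illustrative assumption.
    import cv2

    def extract_and_split(image_path, size_threshold=8.0):
        img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
        sift = cv2.SIFT_create()
        keypoints, descriptors = sift.detectAndCompute(img, None)

        coarse, fine = [], []
        for kp, desc in zip(keypoints, descriptors):
            # A larger kp.size means the feature was detected at a coarser scale.
            (coarse if kp.size >= size_threshold else fine).append((kp, desc))
        return coarse, fine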
    When full-resolution images are matched, the coarse-to-fine feature classification effectively increases computation speed and reduces memory use while still delivering high-accuracy matching results. Mismatches produced during matching are removed with RANSAC (RANdom SAmple Consensus), which draws repeated samples and keeps the largest consensus set in order to discard incorrect matches. This study additionally introduces a 2D affine transformation as a constraint: the transformation parameters between two images are computed from the large-scale matching results and then used in the small-scale matching stage to reject matches inconsistent with these parameters, which speeds up outlier removal. For matching among multiple historical images, the matched points are stored following the concept of an adjacency matrix, establishing the relationships of tie points and multi-ray points; control points of the corresponding areas are then selected manually on FORMOSAT-2 satellite images. Considering the flying height of the historical aerial images and the fact that most of the photographed areas are flat, this study adopts the 2D affine transformation as the mathematical model of the adjustment and computes the coordinate transformation parameters by network adjustment, transforming the historical aerial images into the TWD97 ground coordinate system. Aerial images provide information about the ground surface at the time of photography. This study can automatically and quickly process large numbers of full-resolution historical images and group them, which helps clarify the relative relationships among the images so that these valuable images can be effectively classified and properly preserved.
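    As a rough illustration of the 2D affine model used for georeferencing, the sketch below fits the six affine parameters from control-point pairs (image coordinates versus TWD97 ground coordinates) by ordinary least squares; the function names and the use of NumPy are assumptions, and the sketch does not reproduce the thesis's full network adjustment of multiple images.

    # Minimal sketch: least-squares fit and application of a 2D affine transform
    # X = a*x + b*y + c, Y = d*x + e*y + f, from control-point pairs.
    import numpy as np

    def fit_affine_2d(pixel_xy, ground_xy):
        x, y = pixel_xy[:, 0], pixel_xy[:, 1]
        A = np.column_stack([x, y, np.ones_like(x)])          # design matrix (n x 3)
        params_X, *_ = np.linalg.lstsq(A, ground_xy[:, 0], rcond=None)
        params_Y, *_ = np.linalg.lstsq(A, ground_xy[:, 1], rcond=None)
        return params_X, params_Y                             # (a, b, c), (d, e, f)

    def apply_affine_2d(pixel_xy, params_X, params_Y):
        A = np.column_stack([pixel_xy[:, 0], pixel_xy[:, 1],
                             np.ones(len(pixel_xy))])
        return np.column_stack([A @ params_X, A @ params_Y])  # ground coordinates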

English Abstract:
    Historical aerial images faithfully record the landscapes captured at the time of photography. Since the 1940s, numerous historical aerial images of Taiwan have been taken for various purposes, such as military reconnaissance, land use investigation, and mapping. The Research Center for Humanities and Social Sciences (RCHSS) of Academia Sinica has collected and digitized a large number of these historical aerial images. However, because most of the images lack orientation information, it is difficult to analyze the spatio-temporal transition of the landscape.
    Computer vision methods are now widely used to perform image matching automatically. Feature-based image matching, developed in the field of computer vision, stitches overlapping images by searching for similar points, called features, in their overlapping areas. Feature extraction is one of the most important steps of image matching. In this study, decomposed versions of the SIFT (Scale Invariant Feature Transform) and SURF (Speeded-Up Robust Features) algorithms are applied to digitized historical aerial images. Long processing time is a common problem for high-resolution images such as historical aerial photographs; furthermore, higher resolution usually brings more noise, which can lead to poor matching results. In this study, a coarse-to-fine algorithm is applied to perform hierarchical image matching, dividing the extracted features into two groups according to their scales. All extracted features are classified into a high octave and a low octave, whose numbers and characteristics differ considerably. The proposed method matches images from the high octave to the low octave, that is, from coarse features to finer ones. High-octave matching provides an initial relationship between two images while excluding the noise contained in the low octave, and low-octave matching contributes detailed image information. The coarse-to-fine scheme plays a significant role in improving the efficiency of the process.
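    The coarse (high-octave) matching stage can be sketched as follows, assuming coarse feature groups such as those produced by a helper like the hypothetical extract_and_split above; the ratio-test threshold and the use of OpenCV's brute-force matcher and estimateAffine2D are illustrative choices rather than the thesis's exact procedure.

    # Minimal sketch of the coarse matching stage: ratio-test matching of
    # high-octave SIFT descriptors, followed by a RANSAC affine estimate that
    # serves as the initial relationship between the two images.
    import cv2
    import numpy as np

    def match_coarse(coarse_left, coarse_right, ratio=0.75):
        kps1, desc1 = zip(*coarse_left)      # (keypoint, descriptor) pairs
        kps2, desc2 = zip(*coarse_right)
        matcher = cv2.BFMatcher(cv2.NORM_L2)
        candidates = matcher.knnMatch(np.float32(desc1), np.float32(desc2), k=2)

        pts1, pts2 = [], []
        for pair in candidates:
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                pts1.append(kps1[pair[0].queryIdx].pt)
                pts2.append(kps2[pair[0].trainIdx].pt)
        return np.float32(pts1), np.float32(pts2)

    # Usage: estimate the coarse affine transform with RANSAC.
    # pts1, pts2 = match_coarse(coarse_left, coarse_right)
    # affine, inliers = cv2.estimateAffine2D(pts1, pts2, method=cv2.RANSAC)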
    In this study, RANSAC (RANdom SAmple Consensus) with an affine condition is applied to remove outliers. After features from different octaves are matched, some incorrectly matched tie points remain and need to be removed. RANSAC detects outliers through iterative sampling and selects the largest consensus set to calculate the transformation parameters between the two image coordinate systems. In the low-octave matching process, this transformation is applied as a condition to constrain the distribution of the matched tie points, and wrongly matched tie points are quickly deleted by checking the distances of the transferred features. In addition, by clarifying the relationships of the overlapping images from the index of matched tie points and the transformation matrices, the process can align all matched images by transforming them to the image that has the most matched tie points. The test images, covering Tainan City, were taken in the 1960s and belong to the 2W8G flight mission. To perform image georeferencing, several control points are chosen manually from FORMOSAT-2 imagery after the historical aerial photographs have been matched. This study demonstrates the feasibility and the benefit of applying the coarse-to-fine algorithm and RANSAC with the affine condition to large numbers of historical aerial photographs: the approach accelerates the matching process effectively while achieving high accuracy.
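    A minimal sketch of the affine condition described above is given below: a fine-scale candidate match is kept only if the point transferred by the coarse-stage affine transform lands within a pixel tolerance of its counterpart in the other image; the tolerance value and the function shape are assumptions for illustration.

    # Minimal sketch of the affine condition: reject fine-scale matches whose
    # transferred position deviates from the matched point by more than a tolerance.
    import numpy as np

    def filter_by_affine(pts_left, pts_right, affine_2x3, tol=3.0):
        ones = np.ones((len(pts_left), 1))
        predicted = np.hstack([pts_left, ones]) @ affine_2x3.T   # apply 2x3 affine
        residuals = np.linalg.norm(predicted - pts_right, axis=1)
        keep = residuals < tol
        return pts_left[keep], pts_right[keep]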

    ABSTRACT
    ACKNOWLEDGEMENT
    CONTENTS
    LIST OF TABLES
    LIST OF FIGURES
    Chapter 1 Introduction
      1.1 Motivation
      1.2 Objective
      1.3 Research Approach
      1.4 Thesis Structure
    Chapter 2 Coarse-to-fine Image Matching
      2.1 Image Matching Method
      2.2 Feature Extraction
        2.2.1 Scale Invariant Feature Transform (SIFT)
        2.2.2 Speed-Up Robust Feature (SURF)
      2.3 Coarse-to-fine
      2.4 Image Matching and RANSAC
    Chapter 3 Image Alignment and Adjustment
      3.1 Alignment of Multiple Images
      3.2 Georeferencing of Historical Aerial Images
      3.3 Network Adjustment of Historical Aerial Images
    Chapter 4 Experiments
      4.1 Test Data
      4.2 Coarse-to-fine Matching Result
      4.3 Comparison of Network Adjustment
      4.4 Multiple Historical Image Matching
    Chapter 5 Conclusions
      5.1 Conclusions
      5.2 Suggestions
    REFERENCES

    Alhwarin, F., C. Wang, D. Ristić-Durrant, and A. Gräser, (2008). Improved SIFT-features matching for object recognition. Proceedings of the 2008 International Conference on Visions of Computer Science: BCS International Academic Conference. pp. 179-190.
    Bay, H., A. Ess, T. Tuytelaars, and L. Van Gool, (2008). Speeded-Up Robust Features (SURF). Computer Vision and Image Understanding, 110(3), pp. 346–359.
    Brown, M., and D. G. Lowe, (2002). Invariant Features from Interest Point Groups. In British Machine Vision Conference, pp. 656–665.
    Brown, M., and D. G. Lowe, (2007). Automatic panoramic image stitching using invariant features. International Journal of Computer Vision, 74(1), pp. 59–73.
    Brown, M., R. Szeliski, and S. Winder, (2005). Multi-image matching using multi-scale oriented patches. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 1, pp. 510–517.
    Carneiro, G., and A. D. Jepson, (2003). Multi-scale phase-based local features. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR).
    Chen, H. R., (2015). Automatic Image Matching and Georeferencing of Digitized Historical Aerial Photographs. Master's Thesis. National Cheng Kung University, Taiwan.
    Dufournaud, Y., C. Schmid, and R. Horaud, (2000). Matching images with different resolutions. Proceedings of the Conference on Computer Vision and Pattern Recognition, 1, pp. 612–618.
    Fischler, M. A., and R. C. Bolles, (1981). Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Communications of the ACM, 24(6), pp. 381–395.
    Foerstner, W. (1986). A feature based correspondence algorithm for image matching. Archives of Photogrammetry and Remote Sensing, 26(3), pp. 150-166.
    Harris, C., and Stephens, M. (1988). A Combined Corner and Edge Detector. Proceedings of the Alvey Vision Conference 1988, 23.1-23.6.
    Jahanshahi, M. R., Masri, S. F., and Sukhatme, G. S. (2011). Multi-image stitching and scene reconstruction for evaluating defect evolution in structures. Structural Health Monitoring, 10(6), pp. 643–657.
    Jao, F. J. (2014). Historical GIS Data Processing-Automatic Historical Aerial Image Registration Using SIFT and Least-Squares. Master's Thesis. National Cheng Kung University, Taiwan.
    Lew, M. S., and T. S. Huang, (1999). Optimal Multi-Scale Matching. Proceedings of the Conference on Computer Vision and Pattern Recognition, Fort Collins, Colorado, USA, 2(c), pp. 88–93.
    Li, Q., G.Wang, J. Liu, and S. Chen, (2009). Robust Scale-Invariant Feature Matching for Remote Sensing Image Registration. IEEE Geoscience and Remote Sensing Letters, 6(2), pp. 287–291.
    Cheng, L., J. Gong, X. Yang, C. Fan, and P. Han, (2008). Robust Affine Invariant Feature Extraction for Image Matching. IEEE Geoscience and Remote Sensing Letters, 5(2), pp. 246–250.
    Lienhart, R., and J. Maydt, (2002). An extended set of Haar-like features for rapid object detection. Proceedings of the International Conference on Image Processing, Vol. 1, pp. I-900–I-903.
    Lindeberg, T. (1994). Scale-Space Theory in Computer Vision. Norwell, MA, USA, Kluwer Academic Publishers, pp. 149-162.
    Lindeberg, T. (1998). Feature Detection with Automatic Scale Selection. International Journal of Computer Vision, 30(2), pp. 79–116.
    Lowe, D. G. (1999). Object recognition from local scale-invariant features. In Proceedings of the Seventh IEEE International Conference on Computer Vision, Vol.2, pp. 1150–1157.
    Lowe, D. G. (2004). Distinctive image features from scale invariant keypoints. International Journal of Computer Vision, 60(2), 91–110.
    Gong, M., S. Zhao, L. Jiao, D. Tian, and S. Wang, (2014). A Novel Coarse-to-Fine Scheme for Automatic Image Registration Based on SIFT and Mutual Information. IEEE Transactions on Geoscience and Remote Sensing, 52(7), pp. 4328–4338.
    Mikolajczyk, K. (2002). Detection of local features invariant to affine transformations. Ph.D. Thesis, Institut National Polytechnique de Grenoble, France.
    Mikolajczyk, K., and C. Schmid, (2005). A performance evaluation of local descriptors. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(10), pp. 1615–1630.
    Mikolajczyk, K., and C. Schmid, (2002). An Affine Invariant Interest Point Detector. In European Conference on Computer Vision (ECCV), 2002, Copenhagen, Denmark, Vol 2350, pp.128–142.
    Panchal, P. M., S. R. Panchal, and S. K. Shah, (2013). A comparison of SIFT and SURF. International Journal of Innovative Research in Computer and Communication Engineering, 1(2), pp. 143–152.
    Yi, Z., C. Zhiguo, and X. Yang, (2008). Multi-spectral remote image registration based on SIFT. Electronics Letters, 44(2), pp. 107–108.
    Zhao, F., Q. Huang, and W. Gao, (2006). Image matching by multiscale oriented corner correlation. Lecture Notes in Computer Science, Vol. 3851, pp. 928–937.
