| Graduate Student: | 陳鵬宇 Chen, Peng-Yu |
|---|---|
| Thesis Title: | 遙測影像之全色態強化及錯誤修補問題之研究 (Studies of Pansharpening and Error Concealment Problems in Remote Sensing Images) |
| Advisor: | 戴顯權 Tai, Shen-Chuan |
| Degree: | Doctoral (博士) |
| Department: | College of Electrical Engineering and Computer Science - Institute of Computer & Communication Engineering |
| Year of Publication: | 2019 |
| Graduation Academic Year: | 107 (2018-2019) |
| Language: | English |
| Pages: | 100 |
| Chinese Keywords: | 全色態強化、影像修補、影像重建、卷積神經網路、深度學習 |
| English Keywords: | pansharpening, image concealment, image restoration, convolutional neural network, deep learning |
Remote sensing techniques have gained great importance in data-collection tasks over the last few decades. Images captured by satellites give people opportunities to better understand Earth; however, several problems arise in actual satellite remote sensing missions. To advance the development of remote sensing technology in Taiwan, the objective of this dissertation is to improve the quality of images captured by the Formosat satellites through image processing. Two of these problems are introduced, together with their solutions. The first is that, owing to hardware restrictions, a satellite cannot simultaneously capture images with both high spatial and high spectral resolution. Pansharpening algorithms address this problem by fusing an image with high spatial but low spectral resolution and an image with low spatial but high spectral resolution. The second is that images may be corrupted during transmission from space to the ground. Image concealment algorithms restore such images by filling in the missing pixel information.
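The fusion idea described above can be illustrated with the classical fast-IHS (intensity-substitution) family of methods, which the traditional pansharpening approach in this dissertation builds on. The sketch below is a generic, simplified illustration rather than the proposed example-based algorithm; the equal band weights, mean/standard-deviation histogram matching, and [0, 1] value range are assumptions for the example.

```python
import numpy as np

def ihs_pansharpen(ms_up, pan):
    """Generic IHS-style component-substitution pansharpening (illustrative).

    ms_up : (H, W, B) multispectral image upsampled to the PAN grid, floats in [0, 1]
    pan   : (H, W) panchromatic image, floats in [0, 1]
    Returns an (H, W, B) fused image.
    """
    # Intensity component: simple average of the spectral bands (basic IHS variant).
    intensity = ms_up.mean(axis=2)
    # Match the PAN image to the intensity component so the injected detail
    # has a comparable mean and variance.
    pan_matched = (pan - pan.mean()) * (intensity.std() / (pan.std() + 1e-12)) + intensity.mean()
    # Inject the spatial detail equally into every band (additive IHS formulation).
    detail = pan_matched - intensity
    fused = ms_up + detail[..., None]
    return np.clip(fused, 0.0, 1.0)
```

Because the same detail is added to every band, simple IHS schemes are fast but can introduce spectral distortion, which is the kind of artifact the dissertation's dictionary-based method aims to reduce.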
This dissertation presents both traditional and modern approaches to these two problems. The proposed traditional approaches are introduced first. An example-based pansharpening algorithm with a pre-trained dictionary is presented, and the IHS pansharpening concept is adopted to reduce distortion. An interpolation-based algorithm is then presented for image concealment: similar pixels are located with the aid of a reference image and used to interpolate the missing pixels. A regression model generates an auxiliary image that serves as the reference, and an adaptive search window is proposed to avoid selecting similar pixels from different land covers. Finally, because deep learning has aroused broad interest in the machine-learning community and has recently been applied to geoscience and remote sensing, modern approaches based on convolutional neural network models are also proposed for both image restoration problems. Experimental results show that the proposed traditional approaches already perform well in terms of visual quality and objective measurements, whereas the convolutional-neural-network-based approaches deliver superior results and are a promising direction for future research on the restoration of remote sensing images. In summary, this dissertation proposes pansharpening and image concealment algorithms to solve restoration problems in remote sensing images and presents a comparison between traditional and modern approaches.
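The interpolation-based concealment idea, finding pixels similar to the missing one with the help of a reference image and interpolating from them, can be sketched as follows. This is a minimal illustration under simplifying assumptions: the reference image is taken as given rather than produced by the regression model described above, a fixed rather than adaptive search window is used, and the function and parameter names are hypothetical.

```python
import numpy as np

def conceal_missing(corrupted, mask, reference, window=7, k=8):
    """Illustrative interpolation-based concealment of missing pixels.

    corrupted : (H, W) float image containing missing pixels
    mask      : (H, W) bool array, True where pixels are missing
    reference : (H, W) co-registered auxiliary image (assumed given here)
    window    : half-size of the square search window around each missing pixel
    k         : number of most similar valid pixels to average
    """
    out = corrupted.copy()
    H, W = corrupted.shape
    for y, x in zip(*np.nonzero(mask)):
        # Clip the search window to the image borders.
        y0, y1 = max(0, y - window), min(H, y + window + 1)
        x0, x1 = max(0, x - window), min(W, x + window + 1)
        ref_patch = reference[y0:y1, x0:x1]
        img_patch = out[y0:y1, x0:x1]
        valid = ~mask[y0:y1, x0:x1]
        if not valid.any():
            continue
        # Rank valid neighbours by their similarity to the missing pixel in the
        # reference image, then average the k most similar ones.
        diff = np.abs(ref_patch - reference[y, x])
        order = np.argsort(diff[valid])
        similar = img_patch[valid][order[:k]]
        out[y, x] = similar.mean()
    return out
```

An adaptive search window, as proposed in the dissertation, would additionally shrink or reshape the window so that only pixels from the same land cover are considered similar.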
On-campus access: full text available from 2024-07-01.