| Graduate Student: | Hong, Kai-Yue (洪凱悅) |
|---|---|
| Thesis Title: | A Refined Sample Data Method for Hyperspectral Images Classification Based on Restricted Boltzmann Machine (一個用於高光譜影像分類的精化樣本資料方法之受限玻茲曼機) |
| Advisor: | Tai, Shen-Chuan (戴顯權) |
| Degree: | Master |
| Department: | Institute of Computer & Communication Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2017 |
| Graduation Academic Year: | 105 (ROC calendar, i.e., 2016–2017) |
| Language: | English |
| Pages: | 50 |
| Keywords (Chinese): | 高光譜影像、影像分類、深度學習、特徵擷取 |
| Keywords (English): | hyperspectral images, image classification, deep learning, feature extraction |
In recent years, deep learning has become a popular research topic with a wide range of applications, including image classification, object recognition, and landmark retrieval. This thesis applies deep learning to terrain classification of satellite imagery. When training a network, the input data may contain irrelevant or redundant samples, which not only increase the training cost but also reduce the accuracy of the learned model. To address this problem, we propose an improved system that first re-selects suitable training samples and then trains the learning network on them, thereby increasing classification accuracy. The system mainly exploits the spectral information of the image: it obtains feature weights with a co-occurrence matrix, retains the data that fit best, and excludes redundant data. The system is applied to hyperspectral image classification and compared with other classification methods.

The proposed method is validated on the Pavia Centre, Pavia University, and Indian Pines hyperspectral image datasets. Experimental results show that, with the preprocessing step proposed in this thesis, classification accuracy is better than that of the other methods compared.
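The co-occurrence-matrix feature weighting described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the thesis's implementation: the `glcm` and `contrast` helpers are hypothetical names, and a single Haralick contrast value stands in for whatever feature-weight combination the thesis actually uses.

```python
import numpy as np

def glcm(img, levels=8, offset=(0, 1)):
    """Grey-level co-occurrence matrix for one pixel offset (symmetric, normalised).

    `img` is a 2-D array of integer grey levels in [0, levels).
    """
    m = np.zeros((levels, levels))
    dr, dc = offset
    rows, cols = img.shape
    for r in range(rows - dr):
        for c in range(cols - dc):
            m[img[r, c], img[r + dr, c + dc]] += 1
    m += m.T                      # count each pair in both directions
    return m / m.sum()            # normalise to a joint probability table

def contrast(p):
    """Haralick contrast feature: sum over (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())
```

In practice, one band of the hyperspectral cube would be quantised to `levels` grey levels before building the matrix; samples whose feature weight deviates strongly from their class could then be excluded as irrelevant or redundant before training.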
Deep learning has become popular in many applications in recent years, such as image classification, object recognition, and landmark retrieval. When training a neural network, the input data often contain irrelevant or redundant features, which not only increase the amount of training data but also degrade classification accuracy. To solve this problem, an algorithm that refines the input-data selection for the learning network is proposed. The proposed algorithm selects data according to feature weights obtained with the co-occurrence matrix method. It is applied to the classification of hyperspectral images and compared with other classification algorithms.

In the experiments, three hyperspectral image datasets are used for evaluation: Pavia Centre, Pavia University, and Indian Pines. The experimental results indicate that the accuracy of the proposed algorithm is better than that of the other classification algorithms.
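The learning network named in the thesis title is a Restricted Boltzmann Machine. Below is a minimal sketch of a Bernoulli RBM trained with one-step contrastive divergence (CD-1); the class layout, learning rate, and layer sizes are illustrative assumptions, not the configuration used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Bernoulli RBM trained with one-step contrastive divergence (CD-1).

    Hyperparameters here are illustrative, not the thesis's settings.
    """
    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = rng.normal(0.0, 0.01, (n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible bias
        self.c = np.zeros(n_hidden)    # hidden bias
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.c)

    def visible_probs(self, h):
        return sigmoid(h @ self.W.T + self.b)

    def cd1_step(self, v0):
        """One CD-1 update on a batch; returns mean reconstruction error."""
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)      # one-step reconstruction
        h1 = self.hidden_probs(v1)
        n = len(v0)
        # positive phase (data) minus negative phase (reconstruction)
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (h0 - h1).mean(axis=0)
        return float(np.mean((v0 - v1) ** 2))
```

After pre-training, the hidden activations of such an RBM (or a stack of them forming a deep belief network) serve as learned features for the pixel-wise classifier.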