| Author: | Jiang, Shu-Siang (蔣舒翔) |
|---|---|
| Thesis title: | Application of Generative Adversarial Network to Recognition of Ship Targets |
| Advisor: | Lee, Kun-Chou (李坤洲) |
| Degree: | Master |
| Department: | College of Engineering - Department of Systems and Naval Mechatronic Engineering |
| Year of publication: | 2020 |
| Academic year: | 108 |
| Language: | Chinese |
| Pages: | 169 |
| Keywords: | Data Augmentation, Deep Learning, Artificial Neural Network, Generative Adversarial Network |
Artificial intelligence has advanced to the point where its applications are now ubiquitous in daily life. Among AI techniques, deep learning stands out and has become one of the most active research fields. As the technology has matured, it has solved many difficult problems and has even surpassed human performance in some application areas. However, deep learning still faces a major obstacle: data acquisition. Model performance depends heavily on the quality and quantity of the training data, so achieving good results requires a large training set. In practice, collecting data is often difficult, time-consuming, and labor-intensive, and the quality of the collected data must also be considered. One way to mitigate this problem is data augmentation. This thesis therefore uses the Generative Adversarial Network (GAN), a deep learning technique, to perform data augmentation and thereby improve model performance.
The Generative Adversarial Network (GAN), proposed in 2014, is an unsupervised learning method. Its introduction has addressed many hard problems, including the training-data acquisition problem described above. A GAN consists of two neural networks, a generator and a discriminator, which are trained against each other until the generator can synthesize realistic data. GANs have since developed rapidly, with many variants proposed every year. This study combines the respective strengths of the Auxiliary Classifier GAN (ACGAN) and WGAN-GP into a new GAN architecture, WACGAN-GP, for data augmentation.
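The generator/discriminator interplay described above can be sketched numerically. The following NumPy toy illustrates the original GAN minimax losses only, not the thesis's WACGAN-GP; all network shapes and names here are hypothetical stand-ins for real convolutional models.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy linear "networks" (hypothetical shapes, for illustration only).
W_g = rng.normal(size=(2, 4)) * 0.1   # generator: 4-d noise -> 2-d sample
W_d = rng.normal(size=(1, 2)) * 0.1   # discriminator: 2-d sample -> score

def generator(z):
    return np.tanh(z @ W_g.T)         # fake samples in [-1, 1]

def discriminator(x):
    return sigmoid(x @ W_d.T)         # probability that x is real

real = rng.normal(loc=1.0, size=(8, 2))    # stand-in "real" data
fake = generator(rng.normal(size=(8, 4)))  # generated data

# Original GAN objective: D maximizes log D(real) + log(1 - D(fake)),
# while G minimizes log(1 - D(fake)); training alternates the two updates.
d_loss = -np.mean(np.log(discriminator(real)) + np.log(1 - discriminator(fake)))
g_loss = -np.mean(np.log(discriminator(fake)))

print(f"discriminator loss: {d_loss:.3f}")
print(f"generator loss:     {g_loss:.3f}")
```

WGAN-GP replaces these log-losses with a Wasserstein critic plus a gradient penalty, and ACGAN adds a class-label head to the discriminator; WACGAN-GP in this thesis combines the two ideas.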
Experiments were conducted on five datasets in two image categories: visual (optical) images, namely the MNIST, CIFAR-10, and ship-classification datasets, and non-visual (synthetic aperture radar) images, namely the MSTAR and OpenSARShip datasets. Under the constraint of only a small amount of training data, the GAN generates samples that are added to the original training set, augmenting the data to improve image-recognition accuracy. The experiments also examine both balanced and imbalanced class distributions, and compare the proposed GAN-based augmentation against traditional augmentation (horizontal flip, vertical flip, and combined horizontal-and-vertical flip) and against ACGAN. The results show that GAN-based augmentation consistently improves recognition accuracy, and by a larger margin than traditional augmentation.
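The traditional augmentation baseline mentioned above (horizontal, vertical, and combined flips) can be sketched with NumPy; the 2x3 array below is a hypothetical stand-in for a real image such as a 28x28 MNIST digit.

```python
import numpy as np

# A 2x3 toy "image"; real inputs would be e.g. 28x28 MNIST digits.
img = np.array([[1, 2, 3],
                [4, 5, 6]])

h_flip  = np.fliplr(img)              # horizontal flip (mirror left-right)
v_flip  = np.flipud(img)              # vertical flip (mirror up-down)
hv_flip = np.flipud(np.fliplr(img))   # horizontal + vertical flip

# Traditional augmentation: stack the flipped copies onto the original,
# quadrupling the number of training samples for this image.
augmented = np.stack([img, h_flip, v_flip, hv_flip])
print(augmented.shape)  # (4, 2, 3)
```

GAN-based augmentation instead appends newly synthesized samples to the training set, so the added images are novel rather than deterministic transforms of existing ones.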
In this study, a Generative Adversarial Network (GAN) is used for data augmentation to improve model performance.
We use WACGAN-GP, an architecture formed by combining the advantages of ACGAN and WGAN-GP, to perform data augmentation when only a small amount of training data is available.
The method uses the GAN to generate samples and add them to the original training data, augmenting the dataset and improving model accuracy.
Experiments were performed on five datasets in two image categories: the visual (optical) MNIST, CIFAR-10, and ship-classification datasets, and the non-visual (synthetic aperture radar) MSTAR and OpenSARShip datasets. The experiments show that recognition accuracy improves after GAN-based data augmentation.
The method is also compared with traditional data augmentation; WACGAN-GP performs better, indicating that GAN-based data augmentation is a direction worth further study.