
Graduate student: Chen, Hsin-Yu (陳欣妤)
Thesis title: Using GAN for Imputation of Missing Recorded Data to Improve Groundwater Level Prediction Based on Deep Learning Methods
Advisor: Lo, Wei-Cheng (羅偉誠)
Degree: Master
Department: College of Engineering - Department of Hydraulic & Ocean Engineering
Year of publication: 2022
Academic year of graduation: 110 (ROC calendar; 2021-2022)
Language: Chinese
Pages: 89
Chinese keywords: generative adversarial network, convolutional neural network, long short-term memory, alluvial fan of Choushui River, imputation, groundwater level prediction
English keywords: GAN, CNN, LSTM, alluvial fan of Choushui River, imputation, groundwater prediction
Contents:
    Abstract; Extended Abstract; Acknowledgements; Table of Contents; List of Figures; List of Tables
    Chapter 1 Introduction
        1-1 Research Background
        1-2 Research Motivation and Objectives
        1-3 Literature Review
            1-3-1 Data Imputation Methods
            1-3-2 Time-Series Prediction Methods
        1-4 Research Workflow
    Chapter 2 Study Area
        2-1 Choushui River Basin
        2-2 Climate Overview
        2-3 Geological Distribution
        2-4 Hydrogeological Conditions
        2-5 Distribution of Observation Wells
        2-6 Data
    Chapter 3 Methodology
        3-1 Artificial Neural Networks
            3-1-1 McCulloch-Pitts Neuron Model
            3-1-2 Activation Functions
            3-1-3 Optimization
            3-1-4 Loss Functions
            3-1-5 Evaluation Metrics
        3-2 Generative Adversarial Imputation Networks
            3-2-1 Generative Adversarial Networks
            3-2-2 GAIN
        3-3 Long Short-Term Memory
            3-3-1 Recurrent Neural Networks
            3-3-2 LSTM
        3-4 Convolutional Neural Networks
            3-4-1 Convolution Layer
            3-4-2 Pooling Layer
            3-4-3 Fully Connected Layer
        3-5 Environment Setup
    Chapter 4 Model Analysis and Discussion of Results
        4-1 Groundwater Data Imputation
            4-1-1 Preprocessing of Data for Imputation
            4-1-2 GAIN Hyperparameter Settings
            4-1-3 Validation of Imputation Results
            4-1-4 Case Study of Actual Imputation
        4-2 Groundwater Level Prediction
            4-2-1 Preprocessing of Data for Level Prediction
            4-2-2 Univariate, Seq2val, Seq2seq
            4-2-3 CNN and LSTM Hyperparameter Settings
            4-2-4 Discussion of Prediction Results
    Chapter 5 Conclusions and Recommendations
    Appendix: Missing Data Situations at Water Level Stations
    References
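The chapters above outline a two-stage workflow: fill gaps in recorded groundwater level series (GAIN, Sections 3-2 and 4-1), then train deep learning models on the completed series to predict future levels (CNN/LSTM, Sections 3-3, 3-4, and 4-2). As a rough orientation only, the following is a minimal Python sketch of that impute-then-predict pipeline in the seq2val framing named in Section 4-2-2. Every name, shape, and hyperparameter here (make_windows, lookback=12, LSTM(32), epoch count) is an illustrative assumption, not a setting from the thesis, and the simple linear interpolation merely stands in for the GAIN generator/discriminator imputation the thesis actually develops.

```python
# Sketch: impute gaps in a groundwater level series, then train a small
# seq2val LSTM on sliding windows. All shapes and hyperparameters are
# illustrative assumptions, not the thesis settings.
import numpy as np
import tensorflow as tf

def make_windows(series, lookback=12):
    """Slice a 1-D series into (past window -> next value) training pairs."""
    X, y = [], []
    for i in range(len(series) - lookback):
        X.append(series[i:i + lookback])
        y.append(series[i + lookback])
    return np.asarray(X)[..., None], np.asarray(y)  # (N, lookback, 1), (N,)

# Hypothetical monthly groundwater levels with ~10% of readings missing.
levels = np.sin(np.linspace(0, 20, 240)) + 0.1 * np.random.randn(240)
levels[np.random.rand(240) < 0.1] = np.nan

# Stand-in for GAIN: linear interpolation replaces the adversarial imputer.
idx = np.arange(len(levels))
gaps = np.isnan(levels)
levels[gaps] = np.interp(idx[gaps], idx[~gaps], levels[~gaps])

# Seq2val: a window of past levels predicts the single next level.
X, y = make_windows(levels)
model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(X.shape[1], 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```

A seq2seq variant, also named in Section 4-2-2, would instead map a window of past values to a window of future values by widening the output head; the one-value head above is the simpler seq2val case.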


Full-text access (on campus): not available
Full-text access (off campus): not available
The electronic thesis has not yet been authorized for public release; for the print copy, please consult the library catalog.