
Graduate Student: Guo, Chia-Siou
Thesis Title: Deep active learning method for the lift coefficient prediction
Advisor: Chen, Ray-Bing
Degree: Master
Department: College of Management, Institute of Data Science
Year of Publication: 2023
Graduation Academic Year: 111 (2022–2023)
Language: Chinese
Number of Pages: 28
Keywords: Query Strategy, Monte-Carlo Dropout, Lift Coefficient
    This study applies an active learning method to the prediction of airfoil lift coefficients. Monte-Carlo Dropout is used to estimate the prediction uncertainty of a convolutional neural network model, and this uncertainty estimate is used to construct an acquisition function for updating the training set. By combining the model's uncertainty estimates with a sample selection strategy, the active learning framework lets the model actively select the most informative samples for training, reducing labeling cost. Our experimental results show that, under this framework, the model rapidly reduces its prediction loss in the early stages of training. Compared with random sample selection, the active learning method is more stable; moreover, it achieves better prediction results with fewer samples.

    In this thesis, the airfoil lift coefficient prediction problem is considered. To obtain a good prediction model, a large number of training samples is usually necessary, but collecting and labeling them is costly. To reduce this cost, the idea of active learning is adopted here to select a smaller set of training samples for learning the prediction model. The key step is to measure the prediction uncertainty via the Monte-Carlo Dropout approach; the training set is then updated by iteratively adding the samples with the largest prediction uncertainties. The numerical results show that, compared with a random sampling procedure, the active learning approach identifies fewer samples while the corresponding prediction model still reaches the target loss value. In addition, across several independent replications, the prediction model trained on the actively collected training set is also more robust.
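    The acquisition step described above can be sketched in a few lines. The following is a minimal illustration, not the thesis's actual implementation: it assumes a toy one-hidden-layer regression network with hypothetical weights `W1`/`W2`, keeps dropout active at prediction time (Monte-Carlo Dropout), and selects the pool samples whose predictions vary the most across stochastic forward passes.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def mc_dropout_predict(x, W1, W2, p=0.5, T=100):
        """Run T stochastic forward passes of a toy one-hidden-layer
        regression network, with dropout kept active at inference time
        (Monte-Carlo Dropout). Returns per-sample mean and variance."""
        preds = []
        for _ in range(T):
            h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
            mask = rng.random(h.shape) > p       # random dropout mask
            h = h * mask / (1.0 - p)             # inverted dropout scaling
            preds.append(h @ W2)
        preds = np.stack(preds)                  # shape (T, n_samples, 1)
        return preds.mean(axis=0), preds.var(axis=0)

    def acquire(pool_x, W1, W2, k=5):
        """Acquisition function: pick the k pool samples with the largest
        predictive variance, to be labeled and added to the training set."""
        _, var = mc_dropout_predict(pool_x, W1, W2)
        return np.argsort(var.ravel())[::-1][:k]

    # Hypothetical unlabeled pool: 50 samples with 8 features each.
    pool = rng.normal(size=(50, 8))
    W1 = rng.normal(size=(8, 16))
    W2 = rng.normal(size=(16, 1))
    chosen = acquire(pool, W1, W2, k=5)
    print(chosen)  # indices of the 5 most uncertain pool samples
    ```

    In the thesis the base model is a convolutional neural network rather than this toy network, and the loop repeats: train, score the pool with MC-Dropout variance, label the top-k samples, and retrain.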

    Table of Contents
    Abstract (Chinese) I
    Extended Abstract (English) II
    Acknowledgements VI
    Table of Contents VII
    List of Tables IX
    List of Figures X
    Chapter 1 Introduction 1
      1.1 Research Background 1
      1.2 Research Motivation 2
      1.3 Research Contributions 2
      1.4 Thesis Organization 3
    Chapter 2 Literature Review 4
      2.1 Deep Learning Applications to Lift Coefficient Prediction 4
      2.2 Active Learning 5
    Chapter 3 Methodology 8
      3.1 Convolutional Neural Networks 8
        3.1.1 Convolutional Layers 8
        3.1.2 Pooling Layers 9
        3.1.3 Fully Connected Layers 10
      3.2 Monte-Carlo Dropout 11
      3.3 Active Learning 12
    Chapter 4 Lift Coefficient Prediction Performance Validation and Method Comparison 14
      4.1 Data Description 14
      4.2 Model Setup 14
      4.3 Parameter Tuning 15
        4.3.1 Model Convergence 15
        4.3.2 Uncertainty Prediction 17
      4.4 Numerical Results 20
    Chapter 5 Conclusions 24
    References 25


    On-campus access: available from 2028-08-18
    Off-campus access: available from 2028-08-18
    The electronic thesis has not yet been authorized for public release; for the print copy, please consult the library catalog.