| Author: | 黃世豪 (Huang, Shih-Hao) |
|---|---|
| Title: | 使用機器學習尋找Slater-Koster方法的參數 (Use Machine Learning to Find Slater-Koster Method's Parameters) |
| Advisor: | 張泰榕 (Chang, Tay-Rong) |
| Degree: | Master |
| Department: | College of Science, Department of Physics |
| Year of publication: | 2020 |
| Academic year of graduation: | 108 |
| Language: | English |
| Number of pages: | 77 |
| Keywords (Chinese): | 機器學習、深度學習、緊密束縛法 |
| Keywords (English): | Machine Learning, Deep Learning, Tight-Binding Method |
This thesis uses machine learning to predict the parameters of the Slater-Koster method. We train a model to map a calculated Hamiltonian to the Slater-Koster parameters, and we then use the predicted parameters to construct the Slater-Koster Hamiltonian.
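For context, the Slater-Koster parameters in question are the two-center integrals such as $V_{ss\sigma}$, $V_{sp\sigma}$, $V_{pp\sigma}$, and $V_{pp\pi}$; combined with the direction cosines $(l, m, n)$ of a bond vector, they give the hopping matrix elements of the tight-binding Hamiltonian. The standard $s$–$p$ entries of the Slater-Koster table read:

```latex
E_{s,s} = V_{ss\sigma}, \qquad
E_{s,p_x} = l\, V_{sp\sigma}, \qquad
E_{p_x,p_x} = l^{2}\, V_{pp\sigma} + \left(1 - l^{2}\right) V_{pp\pi}
```

Fitting these $V$ parameters so that the resulting Hamiltonian reproduces a first-principles band structure is the task the thesis hands to machine learning.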
In Chapter 2, I briefly introduce some background physics and deep learning; in this part I mainly discuss how deep learning works. In Chapter 3, I introduce the tight-binding method, further aspects of deep learning, and the unsupervised K-means algorithm; in this part I discuss how to train a deep-learning model and how to make it perform better. K-means is included because I need it to cluster some of the data.
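The thesis does not reproduce its clustering code in this abstract; as an illustration of the K-means step, here is a minimal pure-Python sketch of Lloyd's algorithm. The two well-separated groups of "parameter vectors" below are entirely hypothetical stand-ins for the data being clustered.

```python
import random

def kmeans(points, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm on a list of equal-length coordinate tuples."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    clusters = [[] for _ in range(k)]
    for _ in range(n_iter):
        # Assignment step: each point joins its nearest center
        # (squared Euclidean distance).
        clusters = [[] for _ in range(k)]
        for p in points:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # Update step: move every non-empty center to its cluster mean.
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers, clusters

# Hypothetical example: parameter vectors forming two groups;
# K-means recovers one representative center per group.
rng = random.Random(1)
group_a = [(rng.gauss(0.0, 0.1), rng.gauss(0.0, 0.1)) for _ in range(30)]
group_b = [(rng.gauss(10.0, 0.1), rng.gauss(10.0, 0.1)) for _ in range(30)]
centers, clusters = kmeans(group_a + group_b, k=2)
```

In the thesis's setting, the cluster center nearest to a given Hamiltonian plays the role of the "nearest parameters" for the quantities deep learning cannot predict directly.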
In Chapter 4, the materials WSe2 and Sb2Te3 are used as examples. First, I tried many deep-learning models and techniques to predict WSe2's Slater-Koster parameters, but some parameters remained that deep learning could not predict. I therefore clustered them with the K-means method to find the nearest parameters. Next, I used the same procedure to find Sb2Te3's parameters.
Although the results show that we can use machine learning to find Slater-Koster parameters, some results still need improvement. The full procedure takes two steps: we cannot predict all of the parameters with a single machine-learning tool. When the calculated Hamiltonian is not a Slater-Koster Hamiltonian, the predicted results show some deviations.
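The core supervised step — training a network to map Hamiltonian matrix elements to Slater-Koster parameters — can be sketched as follows. This is not the thesis's implementation (which uses deep networks on DFT data); it is a minimal pure-Python one-hidden-layer regression trained by stochastic gradient descent, and the "parameters" and "matrix elements" in the synthetic data are entirely hypothetical.

```python
import math
import random

def init_net(d_in, d_h, d_out, rng):
    """One-hidden-layer network: small random weights, zero biases."""
    w1 = [[rng.uniform(-0.5, 0.5) for _ in range(d_in)] for _ in range(d_h)]
    w2 = [[rng.uniform(-0.5, 0.5) for _ in range(d_h)] for _ in range(d_out)]
    return w1, [0.0] * d_h, w2, [0.0] * d_out

def forward(net, x):
    w1, b1, w2, b2 = net
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    y = [sum(w * hi for w, hi in zip(row, h)) + b for row, b in zip(w2, b2)]
    return h, y

def sgd_step(net, x, t, lr=0.05):
    """One stochastic-gradient step on the loss L = 1/2 * sum((y - t)^2)."""
    w1, b1, w2, b2 = net
    h, y = forward(net, x)
    dy = [yi - ti for yi, ti in zip(y, t)]            # dL/dy
    dh = [sum(dy[k] * w2[k][j] for k in range(len(dy))) * (1.0 - h[j] ** 2)
          for j in range(len(h))]                     # backprop through tanh
    for k in range(len(w2)):
        for j in range(len(h)):
            w2[k][j] -= lr * dy[k] * h[j]
        b2[k] -= lr * dy[k]
    for j in range(len(w1)):
        for i in range(len(x)):
            w1[j][i] -= lr * dh[j] * x[i]
        b1[j] -= lr * dh[j]
    return sum(d * d for d in dy)                     # squared error (pre-update)

rng = random.Random(0)

def fake_pair(rng):
    # Hypothetical stand-in for real data: two "Slater-Koster parameters"
    # generate three "Hamiltonian matrix elements"; the network must learn
    # the inverse map (elements -> parameters).
    p = [rng.uniform(-1, 1), rng.uniform(-1, 1)]
    x = [p[0] + p[1], p[0] - p[1], p[0] * p[1]]
    return x, p

data = [fake_pair(rng) for _ in range(100)]
net = init_net(3, 8, 2, rng)
first_loss = sum(sgd_step(net, x, p) for x, p in data) / len(data)
for _ in range(300):
    last_loss = sum(sgd_step(net, x, p) for x, p in data) / len(data)
```

The design mirrors the two-step procedure described above: a regression model handles the parameters it can learn, and the remainder would be passed to the K-means clustering stage.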