
Author: 林揚笙 (Lin, Yang-Sheng)
Title: 應用類神經網路與粒子群演算法於馬達參數設計
(Motor Parameter Design Using Neural Network and Particle Swarm Optimization)
Advisor: 謝旻甫 (Hsieh, Min-Fu)
Co-advisor: 蔡明祺 (Tsai, Mi-Ching)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of publication: 2020
Graduation academic year: 108 (2019-2020)
Language: Chinese
Pages: 48
Keywords (Chinese): 類神經網路、粒子群演算法、機器學習應用、馬達參數設計
Keywords (English): neural network, particle swarm optimization, machine learning application, motor parameter design
With the rapid growth of automation, industrial demand for motors keeps increasing, so motor production and design must be accelerated. However, most of the time in motor design is spent waiting for simulation results; if a result falls short of expectations, the design must be revised accordingly and re-simulated, and several such iterations can take days, seriously delaying the launch of new products. Moreover, modern high-efficiency motors are precision machines whose design requires extensive background knowledge and practical experience, which raises the barrier to entry, while manufacturers' engineering staff are already stretched thin. When the task is not to design an entirely new motor but only to vary a few parameters of an existing one, a program that assists with rapid design allows that manpower to be used far more efficiently.

This thesis consists of two parts. The first part uses finite-element software to analyze the performance of many motor designs, feeds the simulation data into a neural network for training, obtains a network that predicts motor performance from motor parameters, verifies the network against the finite-element results, and then compares the accuracy of both against a real motor. The second part applies particle swarm optimization together with the neural network to optimize motor parameters and performance: given target values selected by the user and weights assigned to the different performance metrics, it searches the user-specified ranges for motor parameters that satisfy the user's requirements as closely as possible. Finally, the program is applied to increase the efficiency and volumetric power density of a target motor, and the result is verified with finite-element analysis.
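The first part described above trains a multilayer perceptron with back-propagation on finite-element samples so the network can stand in for the simulator. A minimal sketch of that idea follows; the synthetic target function, layer size, learning rate, and iteration count are illustrative assumptions, not the thesis's actual network structure or data set:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the finite-element data set: "motor parameters" X
# mapped to a smooth "performance" value y (the real data comes from FEA runs).
X = rng.uniform(-1, 1, size=(200, 2))
y = (X[:, 0] ** 2 + 0.5 * X[:, 1]).reshape(-1, 1)

# One hidden tanh layer trained by plain full-batch gradient descent.
n_hidden = 16
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(5000):
    h = np.tanh(X @ W1 + b1)            # forward pass
    pred = h @ W2 + b2
    err = pred - y                      # gradient of 0.5 * MSE w.r.t. pred
    gW2 = h.T @ err / len(X); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)    # back-propagate through tanh
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1      # gradient-descent update
    W2 -= lr * gW2; b2 -= lr * gb2

# Training error of the fitted surrogate.
mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

Once trained, such a network returns a performance prediction in microseconds, which is what makes it usable inside an optimization loop in place of repeated finite-element runs.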

In recent years, industrial requirements have grown and, with them, the demand for motors in industrial automation, so motor production needs to speed up. However, most of the time in motor design is spent waiting for simulation results; when a result does not meet expectations, the redesign can cost several days and delay the new product's release.
To address this, this thesis focuses on the design of a neural network, including the network structure and the training data set, replacing finite-element analysis software with the network to exploit its much shorter computation time.
In addition, the thesis combines the PSO algorithm with the neural network to optimize motor parameters and performance: given the target performance selected by the user and the weights assigned to each performance metric, it finds motor parameters that satisfy the user's demands.
Finally, the program is used to increase the efficiency and power density of a target motor, and the result is verified with finite-element software. Not only is the performance predicted by the neural network comparable to that of the finite-element software, but the optimization algorithm also reduces the redesign time.
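The PSO-plus-surrogate loop described above can be sketched as follows. This is a generic particle swarm optimizer minimizing a weighted distance between predicted performance and user-selected targets inside user-defined bounds; the `surrogate` function, targets, weights, bounds, and PSO coefficients are placeholder assumptions, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(0)

def surrogate(x):
    # Placeholder for the trained neural network: maps motor parameters x
    # to predicted performance values (two synthetic outputs here).
    return np.array([np.sum(x ** 2), np.sum((x - 1.0) ** 2)])

def fitness(x, targets, weights):
    # Weighted squared distance between prediction and user targets.
    return float(np.sum(weights * (surrogate(x) - targets) ** 2))

def pso(targets, weights, lo, hi, n_particles=30, n_iter=200,
        w=0.7, c1=1.5, c2=1.5):
    dim = len(lo)
    pos = rng.uniform(lo, hi, size=(n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest = pos.copy()                      # each particle's best position
    pbest_val = np.array([fitness(p, targets, weights) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()   # swarm's best position
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # Inertia + cognitive (pbest) + social (gbest) velocity update.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)    # stay inside user-defined bounds
        vals = np.array([fitness(p, targets, weights) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, fitness(gbest, targets, weights)

lo, hi = np.array([-2.0, -2.0]), np.array([2.0, 2.0])
targets = np.array([0.5, 0.5])   # user-selected target performance
weights = np.array([1.0, 2.0])   # relative importance of each metric
best_x, best_f = pso(targets, weights, lo, hi)
```

Because each fitness evaluation only calls the surrogate network rather than a finite-element solver, the swarm can afford thousands of evaluations per run, which is the source of the redesign-time reduction the abstract claims.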

Table of Contents
Chinese Abstract
Abstract
Acknowledgements
Table of Contents
List of Tables
List of Figures
Chapter 1  Introduction
  1.1 Research Background
  1.2 Motivation and Objectives
  1.3 Thesis Organization
Chapter 2  Literature Review
  2.1 Neural Networks
    2.1.1 Multilayer Perceptron
    2.1.2 Back-Propagation
  2.2 Particle Swarm Optimization
Chapter 3  Machine Learning Data Types and Model Construction
  3.1 Target Training Data
    3.1.1 Back-EMF Constant
    3.1.2 Cogging Torque
    3.1.3 Power Density
    3.1.4 Efficiency
    3.1.5 Total Harmonic Distortion
  3.2 Neural Network Structure
    3.2.1 Model Structure
    3.2.2 Structure Details
  3.3 Particle Swarm Optimization Structure
Chapter 4  Program Training and Verification
  4.1 Data
  4.2 Training Results
  4.3 Neural Network Accuracy Verification
  4.4 Program Verification
Chapter 5  Conclusions and Suggestions for Future Research
  5.1 Conclusions
  5.2 Suggestions for Future Research
References

[1] N. Borchardt, R. Kasper, J. Sauerhering, W. Heinemann and K. L. Foster, "Multilayer air gap winding designs for electric machines: theory, design, and characterisation," The Journal of Engineering, vol. 2019, no. 17, pp. 3855-3861, June 2019.
[2] S. Li, N. A. Gallandat, J. R. Mayor, T. G. Habetler and R. G. Harley, "Calculating the electromagnetic field and losses in the end region of a large synchronous generator under different operating conditions with 3-D transient finite-element analysis," IEEE Transactions on Industry Applications, vol. 54, no. 4, pp. 3281-3293, July-Aug. 2018.
[3] F. Rosenblatt, "The perceptron: a probabilistic model for information storage and organization in the brain," Psychological Review, vol. 65, no. 6, pp. 386-408, 1958.
[4] D. E. Rumelhart, G. E. Hinton and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533-536, October 1986.
[5] F. J. Pineda, "Generalization of back-propagation to recurrent neural networks," Physical Review Letters, vol. 59, no. 19, pp. 2229-2232, November 1987.
[6] Z.-H. Zhou, Machine Learning (in Chinese), Tsinghua University Press, 2016.
[7] K. Hornik, M. Stinchcombe and H. White, "Multilayer feedforward networks are universal approximators," Neural Networks, vol. 2, no. 5, pp. 359-366, 1989.
[8] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942-1948.
[9] 東元精電 (TECO) website, http://www.tedmotors.com/_tw/pro/detail.php?pid=224&cid=88&f=88, accessed July 10, 2020.
[10] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever and R. Salakhutdinov, "Dropout: a simple way to prevent neural networks from overfitting," Journal of Machine Learning Research, vol. 15, pp. 1929-1958, 2014.
[11] D. P. Kingma and J. Ba, "Adam: a method for stochastic optimization," in Proceedings of the International Conference on Learning Representations (ICLR), 2015.

Full text publicly available (on campus): 2025-07-01
Full text publicly available (off campus): 2025-07-01