
Author: Phan, Bao Chau (范寶朱)
Thesis Title: Management Strategies of an Isolated Microgrid Based on Deep Reinforcement Learning Approach
Advisor: Lin, Chin E. (林清一)
Co-advisor: Lai, Ying-Chih (賴盈誌)
Degree: Doctor
Department: College of Engineering - Department of Aeronautics & Astronautics
Year of Publication: 2021
Graduation Academic Year: 109
Language: English
Number of Pages: 85
Keywords: Hybrid renewable energy system (HRES), HOMER analysis, Maximum power point tracking (MPPT) control, Energy management system (EMS), Deep Reinforcement Learning (DRL)
Fossil-fuel power generation suffers from high operational cost and environmental pollution, which has raised great interest in adopting renewable energy as an alternative source of sustainable energy. To reduce the high dependence of these energy sources on weather conditions, a hybrid renewable energy system (HRES) is introduced in this research. Solar and wind energy are the primary resources under study; in addition, electrolyzed hydrogen in storage can be used to operate a fuel cell (FC). The battery and FC serve as storage systems that supply energy in case of insufficiency, while a diesel generator acts as a back-up to fulfill the load demand under bad weather conditions. Because of the high cost of grid extension and diesel generation in rural and island territories, an HRES can be the most feasible solution. The case of an HRES on Basco Island, the Philippines, is studied in this dissertation. Since accurate mathematical models are unavailable to solve the optimal control problem, this work applies deep reinforcement learning (DRL) to the control strategies of an HRES to improve system efficiency and reliability. Three major parts are considered: optimal sizing of the HRES based on HOMER software; MPPT control of the solar and wind energy conversion systems; and energy management of the hybrid power system. First, the optimal configuration of the HRES is determined from the load demand and weather data at the applied site. Second, two DRL algorithms, Deep Q Network (DQN) and Deep Deterministic Policy Gradient (DDPG), are applied for MPPT control to improve energy conversion. Finally, the energy management of the proposed HRES is developed based on the DQN method. The energy management system (EMS) is one of the most important parts to ensure reliable and efficient system operation.
The main function of the EMS is to balance the power flow between the system components while simultaneously reducing fossil fuel consumption and the cost of energy production. Simulation results in the MATLAB/Simulink environment show the efficiency and potential of the proposed methods.
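To make the MPPT formulation above concrete, the following is a minimal sketch of the underlying Markov decision process: the state is the operating point on the P-V curve, the action nudges the converter duty cycle, and the reward is the resulting change in extracted power. It uses a tabular Q-learning stand-in for the thesis's neural-network-based DQN, and the toy P-V curve, duty-cycle grid, and learning parameters are all hypothetical, not taken from the thesis.

```python
import random

DUTY_LEVELS = [round(0.1 * k, 1) for k in range(1, 10)]   # discretized duty cycle
ACTIONS = [-1, 0, +1]                                     # decrease / hold / increase index

def pv_power(duty, irradiance=1.0):
    """Toy unimodal P-V curve with its maximum power point near duty = 0.5."""
    return irradiance * max(0.0, 1.0 - 4.0 * (duty - 0.5) ** 2)

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Epsilon-greedy Q-learning over (duty index, action) pairs."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(len(DUTY_LEVELS)) for a in ACTIONS}
    for _ in range(episodes):
        s = rng.randrange(len(DUTY_LEVELS))
        for _ in range(20):
            a = rng.choice(ACTIONS) if rng.random() < eps \
                else max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = min(max(s + a, 0), len(DUTY_LEVELS) - 1)
            # Reward is the change in extracted power, as in RL-based MPPT.
            r = pv_power(DUTY_LEVELS[s2]) - pv_power(DUTY_LEVELS[s])
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, x)] for x in ACTIONS)
                                  - q[(s, a)])
            s = s2
    return q

q = train()
# Greedy rollout from the lowest duty cycle should settle near the MPP.
s = 0
for _ in range(len(DUTY_LEVELS)):
    a = max(ACTIONS, key=lambda x: q[(s, x)])
    s = min(max(s + a, 0), len(DUTY_LEVELS) - 1)
print(DUTY_LEVELS[s])
```

The DQN and DDPG agents in the thesis replace the Q-table with function approximators, which is what allows continuous states (irradiance, temperature, partial shading) and, for DDPG, continuous duty-cycle actions.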

ABSTRACT
ACKNOWLEDGEMENTS
TABLE OF CONTENTS
LIST OF TABLES
LIST OF FIGURES
CHAPTER ONE INTRODUCTION
1.1 Research Background
1.2 Literature Review
1.2.1 Optimal sizing of the HRES
1.2.2 MPPT control problem
1.2.3 Energy management system of the HRES
1.3 Objective and contribution
1.4 Thesis organization
CHAPTER TWO OPTIMAL SIZING OF THE HRES BASED ON HOMER
2.1 Site description
2.2 System components
2.3 The criteria of system optimization
2.3.1 The net present cost
2.3.2 The cost of energy
2.4 Optimal sizing result
CHAPTER THREE MPPT CONTROL BASED ON DEEP REINFORCEMENT LEARNING
3.1 PV model and partial shading problem
3.1.1 Mathematical model of the PV system
3.1.2 Partial shading condition
3.1.3 DC/DC boost converter
3.1.4 The proposed PV system
3.2 Deep reinforcement learning based MPPT control
3.2.1 Markov Decision Process Model of a PV System
3.2.2 Methodology of the DQN MPPT Control
3.2.3 Methodology of the DDPG MPPT Control
3.3 Results and discussion
3.3.1 Setup of the simulation
3.3.2 Training results
3.3.3 Performance under different weather conditions
CHAPTER FOUR ENERGY MANAGEMENT SYSTEM BASED ON DEEP REINFORCEMENT LEARNING
4.1 Mathematical models of the system components
4.1.1 PV system
4.1.2 Wind turbine system
4.1.3 Battery storage system
4.1.4 Diesel generator
4.1.5 Fuel cell
4.1.6 Electrolyzer
4.1.7 Hydrogen tank
4.1.8 Power balance
4.2 EMS of the HRES based on Deep Q Network
4.2.1 Markov Decision Process Model of the EMS of the proposed HRES
4.2.2 Methodology of the DQN-based EMS
4.2.3 Methodology of the conventional dispatch-based EMS
4.3 Simulation results
4.3.1 Simulation setup
4.3.2 Training result
4.3.3 Performance under various conditions
CHAPTER FIVE CONCLUSIONS
5.1 Conclusion
5.2 Limitation and future works
REFERENCES


Off-campus access: not public
Neither the electronic nor the printed thesis has been authorized for public release.