| Author: | 趙于甯 Chao, Yu-Ning |
|---|---|
| Thesis Title: | 在智慧電網下的電動車充電排程管理:以深度強化學習求解 (Electric Vehicle Charging Scheduling Management in Smart Grids: Solving with Deep Reinforcement Learning) |
| Advisor: | 劉任修 Liu, Ren-Shiou |
| Degree: | Master |
| Department: | Institute of Information Management, College of Management |
| Year of Publication: | 2024 |
| Academic Year: | 112 |
| Language: | Chinese |
| Pages: | 53 |
| Keywords: | Smart grid, Reinforcement learning, Electric vehicle charging, Time-of-use pricing |
With the rise of electric vehicles (EVs) and growing environmental awareness, the importance of EV charging infrastructure has become increasingly prominent. The widespread adoption of EVs calls for efficient management of charging stations and optimized energy usage to address energy and environmental challenges, improve energy efficiency, and meet growing user demand.
However, this rapid adoption also brings new challenges in charging-infrastructure management and charge scheduling, such as congestion at charging stations and excessively long user waiting times; moreover, prior studies have rarely considered users' charging costs, users' queuing-time costs, and grid load at the same time.
To address these problems, this study proposes a reinforcement learning approach that enables EVs to charge more flexibly according to each user's energy demand and charging-time constraints. The approach lets an EV intelligently adjust its charging strategy based on the current grid load and electricity price, so that users can select the best charging times under different conditions and minimize their charging costs.
Experimental results show that, because the proposed model dynamically schedules EV charging according to grid load, it outperforms our baseline, a random-charging model, in minimizing charging costs.
The rise of electric vehicles (EVs) and growing environmental awareness underscore the importance of EV charging infrastructure. Efficient management of charging stations and optimized energy usage are crucial for addressing energy and environmental challenges, improving energy efficiency, and meeting user demand. However, rapid EV adoption brings challenges such as station overcrowding and long waiting times, and past research seldom considers user charging costs, waiting times, and grid load simultaneously.
This study explores reinforcement learning techniques for flexible EV charging based on users' energy needs and time constraints. By adjusting charging strategies according to grid load and electricity prices, EV users can choose optimal charging times and reduce their charging costs. Experimental results show that dynamically scheduling EV charging according to grid load outperforms a random-charging baseline in reducing charging costs.
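To make the scheduling idea concrete, here is a minimal sketch, assuming a single EV, a 24-slot daily horizon, a fixed illustrative time-of-use tariff, and tabular Q-learning as a simplified stand-in for the thesis's deep reinforcement learning agent. All names and numbers (`HORIZON`, `REQUIRED_KWH`, `CHARGE_KW`, `PRICE`, the unmet-demand penalty) are hypothetical and not taken from the thesis.

```python
import numpy as np

# Hypothetical setup: one EV must receive REQUIRED_KWH before the deadline,
# choosing in each hourly slot whether to charge at the current TOU price.
HORIZON = 24                  # hourly decision slots
REQUIRED_KWH = 8              # energy requested by the user
CHARGE_KW = 2                 # energy delivered per charging slot
# Illustrative TOU tariff: cheap off-peak nights, expensive evening peak.
PRICE = np.array([1.0] * 7 + [3.0] * 10 + [5.0] * 4 + [1.0] * 3)

ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
N_SOC = REQUIRED_KWH // CHARGE_KW + 1          # discretized states of charge
Q = np.zeros((HORIZON, N_SOC, 2))              # actions: 0 = idle, 1 = charge

def step(t, soc, action):
    """Return (next_soc, reward). Reward is the negative charging cost,
    with a terminal penalty if the user's energy request is unmet."""
    if action == 1 and soc < N_SOC - 1:
        soc += 1
        reward = -PRICE[t] * CHARGE_KW
    else:
        reward = 0.0
    if t == HORIZON - 1 and soc < N_SOC - 1:   # deadline reached, demand unmet
        reward -= 100.0 * (N_SOC - 1 - soc)
    return soc, reward

rng = np.random.default_rng(0)
for episode in range(5000):
    soc = 0
    for t in range(HORIZON):
        a = int(rng.integers(2)) if rng.random() < EPS else int(np.argmax(Q[t, soc]))
        nxt, r = step(t, soc, a)
        target = r if t == HORIZON - 1 else r + GAMMA * Q[t + 1, nxt].max()
        Q[t, soc, a] += ALPHA * (target - Q[t, soc, a])
        soc = nxt

# Greedy rollout of the learned policy.
soc, plan = 0, []
for t in range(HORIZON):
    a = int(np.argmax(Q[t, soc]))
    plan.append(a)
    soc, _ = step(t, soc, a)
print("charging slots:", [t for t, a in enumerate(plan) if a == 1])
```

Under these assumptions the learned greedy policy concentrates charging in the cheapest off-peak slots while still meeting the energy request by the deadline, illustrating the trade-off the abstract describes between charging cost and the user's demand and time constraints; the thesis itself addresses a richer setting that also accounts for grid load and queuing.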