
Graduate Student: Tsai, Meng-Chen (蔡孟蓁)
Thesis Title: A Deep Reinforcement Learning Approach to Electric Vehicle Charging Management (基於深度強化學習之電動車充電分配策略)
Advisor: Liu, Ren-Shiou (劉任修)
Degree: Master
Department: Institute of Information Management, College of Management
Year of Publication: 2025
Graduating Academic Year: 113 (ROC calendar)
Language: Chinese
Number of Pages: 59
Keywords (Chinese): 電動車充電、充電排程、充電站推薦、深度強化學習
Keywords (English): Electric vehicle charging, Charging scheduling, Charging station recommendation, Deep reinforcement learning
With the rise of global environmental awareness, energy conservation and carbon reduction have become important goals for modern society. Electric vehicles (EVs), thanks to their zero tailpipe emissions, are widely regarded as an alternative that can mitigate the environmental pollution caused by conventional fuel-powered vehicles, and they are gradually gaining market adoption. However, uncoordinated charging by large numbers of EVs can strain the power grid and overload local distribution networks. Moreover, the build-out of charging infrastructure still lags behind the growth in vehicle numbers; owners who depend on public charging stations, in particular, often have to compete with other drivers for a limited number of chargers, leading to congested facilities and excessive waiting times. Various charging scheduling methods have therefore emerged to coordinate EV charging behavior.
To give EV owners a better charging experience and to use resources effectively, this study proposes a reinforcement learning-based mechanism that jointly recommends charging time slots and charging locations. The mechanism relies on a central allocation system that considers the real-time status of all EVs and charging stations within a given community and, within the time window each user is willing to accommodate, selects a suitable charging location and time slot for every EV. This avoids charging congestion at popular times or locations, reduces the negative impacts of charging, minimizes the overall charging cost and waiting time across all owners, and promotes the sharing of charging resources within the community.
The results show that the proposed method effectively reduces EV charging costs and can adapt its optimization objective to different scenarios: when charging resources are plentiful, it prioritizes lowering charging cost, while when resources are tight or demand is concentrated, it effectively steers vehicles toward different charging time slots and locations, thereby reducing queueing and waiting times.
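
The mechanism described above is formulated in the thesis as a Markov decision process (the table of contents lists state, action, state transition, and reward components). As a rough illustration only, and not the thesis's actual formulation, the following Python sketch shows one plausible way the per-EV state, the (station, time slot) action, and a combined cost/waiting-time reward could be encoded; all class names, fields, the 60 kWh battery size, and the weight values are assumptions.

# Illustrative sketch only: a plausible encoding of the charging-assignment MDP
# described in the abstract. All class names, fields, and the reward weights
# below are hypothetical assumptions, not the formulation used in the thesis.
from dataclasses import dataclass
from typing import List

@dataclass
class EVRequest:
    soc: float              # current state of charge, 0.0-1.0
    target_soc: float       # desired state of charge at departure
    earliest_slot: int      # first time slot the owner can accept
    latest_slot: int        # last time slot the owner can accept

@dataclass
class StationStatus:
    occupancy: List[int]    # EVs already assigned per future time slot
    price: List[float]      # electricity price per time slot ($/kWh)
    capacity: int           # number of chargers at the station

def reward(ev: EVRequest, station: StationStatus, slot: int,
           w_cost: float = 0.5, w_wait: float = 0.5) -> float:
    """Negative weighted sum of charging cost and a waiting-time proxy.

    A central dispatcher would evaluate this after assigning `ev` to
    (`station`, `slot`); the weights trade off cost against waiting time,
    mirroring the two objectives mentioned in the abstract.
    """
    energy_needed = max(ev.target_soc - ev.soc, 0.0) * 60.0  # kWh, assuming a 60 kWh pack
    cost = energy_needed * station.price[slot]
    overflow = max(station.occupancy[slot] + 1 - station.capacity, 0)
    wait = float(overflow)  # crude proxy: each overflowing EV adds one slot of waiting
    return -(w_cost * cost + w_wait * wait)

if __name__ == "__main__":
    ev = EVRequest(soc=0.3, target_soc=0.8, earliest_slot=0, latest_slot=3)
    st = StationStatus(occupancy=[2, 1, 0, 3], price=[0.12, 0.08, 0.08, 0.15], capacity=2)
    # Enumerate the feasible slots at this station and pick the best one greedily.
    best = max(range(ev.earliest_slot, ev.latest_slot + 1),
               key=lambda t: reward(ev, st, t))
    print("best slot:", best, "reward:", round(reward(ev, st, best), 3))

In this toy example the dispatcher simply enumerates the feasible slots at a single station and picks the highest-reward one; the thesis instead trains a deep reinforcement learning agent over states and rewards of this kind.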

    As the global push for sustainability intensifies, electric vehicles (EVs) have emerged as a cleaner alternative to traditional fuel-powered cars. However, the rapid increase in EV adoption has outpaced the expansion of charging infrastructure, leading to issues such as grid overload and congestion at public charging stations. These problems are often compounded by the lack of coordinated charging behavior among EV users. Previous studies have rarely explored integrated approaches that jointly consider users' time flexibility and public charging station recommendations. To address these challenges, this study proposes a deep reinforcement learning-based system that recommends both charging locations and time slots. By centrally coordinating all EVs and charging stations within a community, the system aims to minimize users’ overall charging cost and wait time. Experimental results show that the method adapts to varying levels of resource availability, efficiently guiding EVs to avoid congestion and optimize charging performance.
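
The abstract states that a deep reinforcement learning agent centrally recommends a charging station and time slot for each EV. Purely as a hedged sketch of what such an agent's decision step might look like (not the implementation reported in the thesis), the snippet below uses a small PyTorch Q-network over a flattened state vector with an epsilon-greedy policy; the network architecture, feature layout, community size, and epsilon value are all assumptions.

# Hedged sketch of a DQN-style decision step for the central dispatcher.
# Network architecture, feature layout, and epsilon value are assumptions,
# not the configuration reported in the thesis.
import random
import torch
import torch.nn as nn

N_STATIONS, N_SLOTS = 4, 6             # assumed community size
STATE_DIM = 3 + N_STATIONS * N_SLOTS   # EV features + per-station/slot occupancy
N_ACTIONS = N_STATIONS * N_SLOTS       # one action per (station, time slot) pair

q_net = nn.Sequential(
    nn.Linear(STATE_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, N_ACTIONS),
)

def select_action(state: torch.Tensor, feasible: List[int], epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice among feasible (station, slot) actions.

    `feasible` holds the flattened indices of station/slot pairs that fall
    inside the EV owner's acceptable time window.
    """
    if random.random() < epsilon:
        return random.choice(feasible)
    with torch.no_grad():
        q_values = q_net(state)
    # Mask out infeasible actions before taking the argmax.
    mask = torch.full((N_ACTIONS,), float("-inf"))
    mask[feasible] = 0.0
    return int(torch.argmax(q_values + mask).item())

if __name__ == "__main__":
    from typing import List  # noqa: E402 (kept local to keep the sketch self-contained)
    state = torch.zeros(STATE_DIM)                                # placeholder state vector
    feasible = [i for i in range(N_ACTIONS) if i % N_SLOTS < 3]   # e.g. only the first 3 slots
    action = select_action(state, feasible)
    station, slot = divmod(action, N_SLOTS)
    print(f"recommend station {station}, time slot {slot}")

Masking infeasible actions before the argmax keeps the recommendation inside the owner's acceptable time window, which mirrors the abstract's requirement that scheduling stay within the time range users can accommodate.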

Table of Contents:
Abstract (Chinese) i
Extended Abstract ii
Acknowledgements ix
Table of Contents x
List of Tables xii
List of Figures xiii
1 Introduction 1
1.1 Background and Motivation 1
1.2 Research Objectives 3
1.3 Research Contributions 3
1.4 Thesis Organization 4
2 Literature Review 5
2.1 EV Charging Scheduling 5
2.2 Charging Station Assignment 7
2.3 Reinforcement Learning 9
2.4 Summary 11
3 Methodology 12
3.1 Problem Description 12
3.2 Markov Decision Process 15
3.2.1 State 15
3.2.2 Action 17
3.2.3 State Transition 18
3.2.4 Reward 18
3.3 Reinforcement Learning Algorithm 20
4 Experiments and Analysis 24
4.1 Experimental Environment and Parameter Settings 24
4.1.1 Experimental Environment 24
4.1.2 Parameter Settings 24
4.2 Evaluation Metrics 26
4.3 Overview of the Experimental Base Model 27
4.4 Experimental Results and Analysis 28
4.4.1 Experiment 1: Selection and Evaluation of Reward Weight Combinations 28
4.4.2 Experiment 2: Scenario Analysis with Different Numbers of EVs 32
4.4.3 Experiment 3: Scenario Analysis with Different Numbers of Charging Stations 35
5 Conclusions and Future Work 39
References 40


Full-text availability: on campus from 2030-07-28; off campus from 2030-07-28.
The electronic thesis has not yet been authorized for public release; please consult the library catalog for the print copy.