
Graduate Student: Lin, Zi-Xuan (林子烜)
Thesis Title: Computing Offloading Strategy Based on Deep Reinforcement Learning in Multi-Access Edge Computing (在多接取邊緣計算中基於深度強化學習的計算卸載策略)
Advisor: Sue, Chuan-Ching (蘇銓清)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Academic Year of Graduation: 110
Language: English
Number of Pages: 36
Chinese Keywords: 深度強化學習、多接取邊緣計算、計算卸載、資源分配
Keywords: Deep Reinforcement Learning, Multi-Access Edge Computing, Computing Offloading Strategy, Resource Allocation

With the rapid development of 5G networks and the Internet of Things, users' demand for network services continues to grow, and more and more applications that require low latency are emerging. User equipments (UEs) and various IoT devices are subject to physically unavoidable constraints such as limited computing capacity and battery life, which makes it difficult for them to meet these low-latency requirements. Multi-access edge computing (MEC) reduces latency and bandwidth requirements by placing servers at the network edge, close to the UEs. However, in a multi-user scenario, many users compete for computing and wireless resources. This thesis studies the offloading decision and resource allocation strategy in MEC systems, taking the total delay and energy consumption cost of the UEs as the long-term optimization goal. We design a deep reinforcement learning (DRL) model to reduce MEC delay and energy consumption. Our DRL model separates discrete actions from continuous actions: to avoid the enlarged action space caused by relaxing discrete actions into a continuous set, or by discretizing continuous actions, a Deep Q-Network (DQN) decides the offloading policy while Deep Deterministic Policy Gradient (DDPG) solves the resource allocation problem. Experimental results show that our proposed method improves on the Local computing, Random, and Offloading computing baselines by 62.2%, 65.1%, and 66%, respectively. In addition, we evaluate other DRL-based algorithms; our method outperforms these comparison methods in terms of latency and energy consumption by 36.9%, 23.5%, and 7.1%, respectively, and achieves strong performance within fewer training episodes. Finally, we experimentally discuss the effects of various parameters on the system.
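The key design point in the abstract is the split action space: a DQN head handles the discrete offloading decision, while a DDPG actor produces the continuous resource-allocation fractions conditioned on that decision. The sketch below is a minimal, hypothetical illustration of that structure in PyTorch; the state layout, network sizes, action dimensions, and the weighted delay/energy cost are illustrative assumptions, not the thesis's actual design.

```python
# Minimal sketch (not the author's code) of the hybrid DQN + DDPG action scheme.
# All dimensions, layer sizes, and cost weights below are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM = 8     # assumed state: task size, required CPU cycles, channel gain, ...
N_OFFLOAD = 2     # assumed discrete actions: 0 = compute locally, 1 = offload to MEC
ALLOC_DIM = 2     # assumed continuous actions: uplink bandwidth and edge-CPU fractions
W_DELAY, W_ENERGY = 0.5, 0.5   # assumed weights in cost = w1*delay + w2*energy

class QNetwork(nn.Module):
    """DQN head: one Q-value per discrete offloading action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                                 nn.Linear(64, N_OFFLOAD))
    def forward(self, s):
        return self.net(s)

class Actor(nn.Module):
    """DDPG actor: continuous allocation fractions in (0, 1) per resource."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(STATE_DIM + N_OFFLOAD, 64), nn.ReLU(),
                                 nn.Linear(64, ALLOC_DIM), nn.Sigmoid())
    def forward(self, s, offload_onehot):
        return self.net(torch.cat([s, offload_onehot], dim=-1))

def select_action(state, qnet, actor, epsilon=0.1):
    """Epsilon-greedy discrete choice, then a continuous allocation for it."""
    with torch.no_grad():
        if torch.rand(1).item() < epsilon:
            a_d = torch.randint(N_OFFLOAD, (1,)).item()
        else:
            a_d = qnet(state).argmax(dim=-1).item()
        onehot = nn.functional.one_hot(torch.tensor([a_d]), N_OFFLOAD).float()
        a_c = actor(state.unsqueeze(0), onehot).squeeze(0)  # allocation fractions
    return a_d, a_c

def cost(delay, energy):
    # Long-term objective: weighted sum of UE delay and energy consumption.
    return W_DELAY * delay + W_ENERGY * energy
```

Training would then follow standard DQN (replay buffer, target network, TD-error loss) for the discrete head and standard DDPG (actor-critic with a deterministic policy gradient) for the continuous head. Keeping the two heads separate avoids the action-space blow-up that relaxation or discretization would cause, which is the motivation the abstract gives for the split.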

Table of Contents
    Abstract (in Chinese)
    Abstract
    Acknowledgements
    Contents
    List of Tables
    List of Figures
    I. Introduction
    II. Related Work
        A. Multi-access Edge Computing
        B. Reinforcement Learning
        C. Motivation
    III. System Model
        A. Network Model
        B. Task Model
        C. Computing Model
            1. Local Computing Model
            2. Offloading Computing Model
        D. Deep Reinforcement Learning
        E. Algorithm
    IV. Evaluation
        A. Environment Setting
        B. DRL Parameters
        C. Result
    V. Conclusion and Future Work
    References


Full-text availability: on campus from 2027-09-16; off campus from 2027-09-16.
The electronic thesis has not yet been authorized for public release; for the print copy, please consult the library catalog.