
Graduate Student: Hsu, Chun-Wei
Thesis Title: Design and Analysis of a Microservice Architecture and Deep Reinforcement Learning-based Auto Scaling System in oneM2M-Based Environment
Advisor: Sue, Chuan-Ching
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2023
Academic Year of Graduation: 111
Language: English
Number of Pages: 63
Keywords: IoT, oneM2M, Elasticity, Scalability, Reinforcement learning, Docker, Container

    With the rapid rise of the Internet of Things (IoT), oneM2M has emerged as a critical IoT standard platform facilitating communication and collaboration among various IoT devices and platforms. However, the exponential increase in IoT devices and data presents serious scalability challenges for oneM2M. Containerization technology offers potential solutions for these challenges by providing horizontal and vertical scalability of oneM2M platform services, enabling dynamic adjustments of allocated resources to meet fluctuating workload demands.
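As a concrete illustration of the two scaling dimensions (not code from the thesis), horizontal and vertical scaling of a containerized service under Docker Swarm can be expressed as `docker service update` invocations. The sketch below only constructs the command lines; the service name `onem2m-cse` is a hypothetical placeholder.

```python
# Sketch: mapping horizontal/vertical scaling decisions onto Docker Swarm
# CLI commands. The service name "onem2m-cse" is a hypothetical placeholder.

def horizontal_scale_cmd(service: str, replicas: int) -> list[str]:
    """Horizontal scaling: change the number of service replicas."""
    return ["docker", "service", "update", f"--replicas={replicas}", service]

def vertical_scale_cmd(service: str, cpus: float) -> list[str]:
    """Vertical scaling: change the CPU limit of each container."""
    return ["docker", "service", "update", f"--limit-cpu={cpus}", service]

if __name__ == "__main__":
    print(" ".join(horizontal_scale_cmd("onem2m-cse", 3)))
    print(" ".join(vertical_scale_cmd("onem2m-cse", 1.5)))
```

An autoscaler would issue the first command to add or remove replicas under load changes, and the second to resize the CPU share of existing containers without changing their count.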
    In addressing application elasticity, resource allocation can be divided into horizontal and vertical scaling. Horizontal scaling determines the number of replicas for a service (a discrete action), while vertical scaling determines the amount of CPU resources allotted to a service (a continuous action). Past studies that applied reinforcement learning to elasticity problems often discretized this hybrid action space. In contrast, we apply the P-DQN and MP-DQN methods to our environment, which require no action-space discretization and allow more flexible resource adjustment. In this study, we consider a two-tier service scenario in a smart factory, taking both horizontal and vertical scaling into account simultaneously. Finally, we compare the performance of threshold-based, DQN, P-DQN, and MP-DQN methods.
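In a parameterised action space of the kind P-DQN targets, each step selects a pair (k, x_k): a discrete action k (here, a replica count) jointly with its continuous parameter x_k (here, a CPU limit). The minimal sketch below illustrates that selection rule only; the actor and Q-function are toy placeholders, not the trained networks from the thesis.

```python
# Hedged sketch of parameterised-action selection in the style of P-DQN:
# pick the continuous parameter x_k for every discrete action k via an
# actor, then take the argmax of Q(s, k, x_k) over k. The actor and
# Q-function here are toy placeholders, not the thesis's trained model.

REPLICA_CHOICES = [1, 2, 3, 4]   # discrete action set K
CPU_RANGE = (0.5, 2.0)           # bounds of the continuous parameter

def actor(state, k):
    """Placeholder actor x_k(s): more load -> more CPU per replica."""
    lo, hi = CPU_RANGE
    return max(lo, min(hi, state["load"] / k))

def q_value(state, k, x_k):
    """Placeholder Q(s, k, x_k): penalize over/under-provisioning."""
    capacity = k * x_k
    return -abs(capacity - state["load"])

def select_action(state):
    """P-DQN-style selection: actor supplies x_k, argmax over discrete k."""
    candidates = [(k, actor(state, k)) for k in REPLICA_CHOICES]
    return max(candidates, key=lambda kx: q_value(state, *kx))

if __name__ == "__main__":
    k, cpu = select_action({"load": 3.0})
    print(k, cpu)  # → 2 1.5
```

The point of the construction is that k and x_k are chosen jointly, so no discretization of the continuous CPU dimension is needed, unlike a plain DQN over a flattened action grid.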

    Chinese Abstract
    Abstract
    Content
    List of Tables
    List of Figures
    1 Introduction
    2 Background and Related Work
    2.1 oneM2M
    2.1.1 oneM2M Resource Tree
    2.2 Docker Swarm
    2.3 Related Work
    2.4 Motivation
    3 System Architecture
    3.1 Scenario
    3.2 oneM2M Deployment
    3.3 System Process Flow
    3.4 System Architecture
    4 Scaling Agent
    4.1 Notation
    4.2 DQN Agent
    4.2.1 State
    4.2.2 Action
    4.2.3 Cost
    4.3 P-DQN/MP-DQN Agent
    4.3.1 State
    4.3.2 Action
    4.3.3 Cost
    4.4 P-DQN/MP-DQN Framework
    4.5 Algorithm
    5 Performance Evaluation
    5.1 Experimental Setting
    5.2 Model Training Parameters
    5.3 Results
    5.3.1 Testing Results (Static Workload)
    5.3.2 Testing Results (Dynamic Workload)
    5.4 Discussion
    6 Conclusion and Future Work
    7 References
    8 Appendix
    8.1 Appendix A


    Full-text availability: on campus from 2028-08-23; off campus from 2028-08-23.
    The electronic thesis has not yet been authorized for public release; for the print copy, consult the library catalog.