
Graduate Student: 陳奕岑 (Chen, Yi-Cen)
Thesis Title: Personalized Federated Learning with Layer-wise Model Calibration for Heterogeneous Data (個人化聯邦學習之逐層式模型校準於異質資料)
Advisor: 曾繁勛 (Tseng, Fan-Hsun)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2024
Academic Year of Graduation: 112 (ROC calendar)
Language: English
Number of Pages: 72
Keywords: aggregation strategy, data heterogeneity, federated learning, layer-wise, model calibration, non-IID, personalized
Hits: 68; Downloads: 0

    Federated Learning (FL) is increasingly recognized as a pivotal solution for exploiting the vast amounts of data generated in modern society. FL allows participants to collaboratively train a neural network model, making effective use of data distributed across multiple clients. Because clients transmit only model parameters rather than their raw data to the central server, FL preserves privacy and prevents personal data leakage. However, significant variation in data statistics among clients leads to the well-known problem of data heterogeneity, namely non-IID data distributions, and a high degree of heterogeneity degrades training accuracy. To address this problem, this thesis proposes the Personalized Federated Learning with Layer-wise Model Calibration (PerFedLC) algorithm. Personalized federated learning (PFL) is a branch of FL that allows each client to customize its local model to improve performance on its local task: unlike a single global model, which may not suit every client's task under differing data distributions, a personalized model caters to each client's specific needs. The proposed model calibration mechanism helps participants leverage shared knowledge while retaining their individual information, thereby balancing the global and local models. The calibration process employs a two-stage calibration factor adjustment scheme that modulates the global model's influence on each client according to that client's performance and the current training stage. In addition, a layer-wise filter function refines the personalization of model calibration, allowing each client to identify the key characteristics of its local model from a layer-wise perspective.
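As a rough illustration of the calibration idea described above (a per-layer blend of global and local weights governed by a calibration factor, with selected layers kept private, and a two-stage adjustment of that factor), the following Python sketch may help. All names (`calibrate`, `adjust_factor`, `lam`, `private_layers`) and the concrete update rules are assumptions made for illustration, not the thesis's actual implementation.

```python
# Hypothetical sketch of layer-wise model calibration: each client blends
# global and local weights per layer via a calibration factor `lam`, and
# keeps its most "personal" layers private (untouched by the global model).
from typing import Dict, List
import numpy as np

def calibrate(local: Dict[str, np.ndarray],
              global_: Dict[str, np.ndarray],
              lam: float,
              private_layers: List[str]) -> Dict[str, np.ndarray]:
    """Blend global knowledge into the local model, layer by layer."""
    calibrated = {}
    for name, w_local in local.items():
        if name in private_layers:
            calibrated[name] = w_local          # retain unique local knowledge
        else:
            calibrated[name] = lam * global_[name] + (1.0 - lam) * w_local
    return calibrated

def adjust_factor(lam: float, local_acc: float, target_acc: float,
                  round_idx: int, stage_switch: int) -> float:
    """Two-stage adjustment (illustrative): early rounds favor the global
    model; later rounds shrink the global influence for clients that
    already perform well, protecting their personalized knowledge."""
    if round_idx < stage_switch:                # stage 1: exploit shared knowledge
        return min(1.0, lam + 0.05)
    if local_acc >= target_acc:                 # stage 2: protect personalization
        return max(0.0, lam - 0.05)
    return lam
```

The key design point this sketch captures is that privatization is decided per layer, so a client can absorb shared knowledge in generic layers while shielding the layers that encode its unique data distribution.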
Finally, a stage-based aggregation rule controls each client's contribution to the global model according to its performance and the training stage, improving overall system performance. Experimental results across different datasets and scenarios show that the proposed PerFedLC algorithm outperforms other FL algorithms, achieving higher accuracy and lower loss. Moreover, PerFedLC adds little computational overhead, making it well suited to edge computing and mobile network environments.
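A stage-based aggregation rule of the kind described above can be sketched in the same spirit: weight clients by data size in the early stage (as in FedAvg) and shift toward performance-aware weights in the later stage. The switch point, the weighting choices, and the function name `aggregate` below are illustrative assumptions, not the exact rule from the thesis.

```python
# Illustrative sketch of stage-based aggregation: the server averages client
# models with weights that depend on the current training stage.
from typing import Dict, List
import numpy as np

def aggregate(models: List[Dict[str, np.ndarray]],
              data_sizes: List[int],
              accuracies: List[float],
              round_idx: int, stage_switch: int) -> Dict[str, np.ndarray]:
    """Weighted average of client models; data-size weights early
    (FedAvg-style), performance-aware weights in the later stage."""
    if round_idx < stage_switch:
        weights = np.array(data_sizes, dtype=float)
    else:
        weights = np.array(accuracies, dtype=float)
    weights /= weights.sum()                    # normalize to a convex combination
    return {name: sum(w * m[name] for w, m in zip(weights, models))
            for name in models[0]}
```

Switching the weighting basis mid-training reflects the intuition that data volume is the best proxy for usefulness before clients converge, while measured performance becomes the more informative signal afterwards.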

    Chinese Abstract I
    Abstract II
    Acknowledgements IV
    Directory V
    Table of Contents VII
    List of Figures VIII
    Chapter 1 Introduction 1
        1.1 Background 1
        1.2 Motivation 4
        1.3 Contributions 5
        1.4 The Architecture of Thesis 6
    Chapter 2 Related Works 7
        2.1 Traditional Federated Learning 7
        2.2 Personalized Federated Learning 8
    Chapter 3 Problem Formulation 13
    Chapter 4 Proposed Method 18
        4.1 The Workflow of PerFedLC 18
        4.2 Model Calibration of PerFedLC 20
        4.3 Adjustment of Calibration Factor 22
        4.4 Layer-wise Model Calibration 26
        4.5 Stage-based Aggregation 28
        4.6 PerFedLC Algorithm 29
    Chapter 5 Experimental Results 33
        5.1 Experimental Setup 33
        5.2 Training Performance 36
            5.2.1 Accuracy & Loss 36
            5.2.2 Scalability 39
            5.2.3 Robustness 40
        5.3 Analysis of Calibration Mechanism 41
            5.3.1 Adjustment of Calibration Factor 41
            5.3.2 Comparison of Stage Transition 44
            5.3.3 Impact of Calibration Tolerance 46
        5.4 Analysis of Other Schemes 47
            5.4.1 Personalized Value, Ratio, and Performance 47
            5.4.2 Aggregation Strategy Comparison 51
        5.5 Training Efficiency 53
            5.5.1 Completion Time for Achieving Target Accuracy 53
            5.5.2 Average Training Time of Each User 53
    Chapter 6 Conclusions and Future Works 55
    Reference 57


    Full text not available for download.
    On campus: available from 2027-08-01
    Off campus: available from 2027-08-01
    The electronic thesis has not yet been authorized for public release; for the print copy, please consult the library catalog.