| Graduate Student: | 張家菖 (Chang, Jia-Chang) |
|---|---|
| Thesis Title: | 基於模型導向選擇機制與強化學習的高效封包分類框架 (An Efficient Packet Classification Framework Using Model-Based Selection Scheme and Reinforcement Learning) |
| Advisor: | 張燕光 (Chang, Yeim-Kuan) |
| Degree: | Master |
| Department: | Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2025 |
| Academic Year of Graduation: | 113 (ROC calendar, 2024–2025) |
| Language: | English |
| Pages: | 82 |
| Keywords (Chinese): | 封包分類、強化學習、線性回歸模型、K 最近鄰、布隆過濾器 |
| Keywords (English): | Packet Classification, Reinforcement Learning, Linear Regression Model, K-Nearest Neighbors, Bloom Filter |
| ORCID: | 0009-0003-7114-7296 |
In response to the growing demand for efficient, secure, and high-quality network services, packet classification has become increasingly complex due to traffic heterogeneity and workload skew, which lead to long-tail latency in high-throughput environments. This thesis presents the Adaptive Multi-Classifier Packet System (AMPS), a scalable framework designed to reduce tail latency and enhance classification efficiency. AMPS integrates reinforcement learning for dynamic parameter tuning, linear regression for latency prediction, KNN-based structure selection, and Bloom Filters for long-tail filtering, all within a parallel multi-core architecture. By combining five complementary classifiers—PT-Tree, DBTable, KSet, DynamicTuple, and MultilayerTuple—AMPS ensures robust performance under skewed traffic. Evaluations on ClassBench and WIDE traces demonstrate its effectiveness in improving throughput and tail behavior. Notably, AMPS achieves an average search time of 25.937 ns on ClassBench 100K rulesets, outperforming baseline methods by up to 5.387 times. Furthermore, AMPS attains a best selection accuracy of 76.03% using K-Nearest Neighbors (KNN), significantly surpassing linear regression models, which peak at 66.84%. These results underscore AMPS’s capability to deliver exceptional query efficiency, reduced tail latency, and high accuracy in data structure selection, positioning it as a scalable and adaptive solution for next-generation network optimization.
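The abstract describes the selection mechanism only at a high level. As a rough, dependency-free illustration of the general idea behind KNN-based structure selection (this is not code from the thesis), the sketch below picks one of the five candidate classifiers by majority vote among the nearest labeled training samples; the feature dimensions, names, and training labels are all hypothetical.

```cpp
// Hypothetical sketch of KNN-based classifier selection (not the thesis code).
// Each training sample pairs a feature vector (e.g., ruleset size, average
// prefix length, traffic skew) with the classifier that performed best offline.
#include <algorithm>
#include <array>
#include <cmath>
#include <cstddef>
#include <iostream>
#include <vector>

enum class Classifier { PTTree, DBTable, KSet, DynamicTuple, MultilayerTuple };

struct Sample {
    std::array<double, 3> features; // hypothetical: {log2(#rules), avg prefix len, skew}
    Classifier best;                // best-performing structure, measured offline
};

// Euclidean distance between feature vectors.
static double dist(const std::array<double, 3>& a, const std::array<double, 3>& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += (a[i] - b[i]) * (a[i] - b[i]);
    return std::sqrt(s);
}

// Majority vote among the k nearest training samples.
Classifier knnSelect(const std::vector<Sample>& train,
                     const std::array<double, 3>& query, std::size_t k) {
    std::vector<std::pair<double, Classifier>> d;
    d.reserve(train.size());
    for (const auto& s : train) d.emplace_back(dist(s.features, query), s.best);
    const std::size_t n = std::min(k, d.size());
    std::partial_sort(d.begin(), d.begin() + n, d.end());
    std::array<int, 5> votes{};
    for (std::size_t i = 0; i < n; ++i) ++votes[static_cast<std::size_t>(d[i].second)];
    return static_cast<Classifier>(
        std::max_element(votes.begin(), votes.end()) - votes.begin());
}

int main() {
    // Toy training set; a real deployment would learn these labels from benchmarks.
    std::vector<Sample> train = {
        {{10.0, 16.0, 0.2}, Classifier::PTTree},
        {{17.0, 24.0, 0.8}, Classifier::DBTable},
        {{13.0, 20.0, 0.5}, Classifier::KSet},
        {{16.5, 22.0, 0.7}, Classifier::DBTable},
    };
    Classifier c = knnSelect(train, {16.0, 23.0, 0.75}, 3);
    std::cout << "selected structure id: " << static_cast<int>(c) << "\n";
}
```

The thesis cites nanoflann [8], which suggests the actual implementation uses a KD-tree-backed nearest-neighbor search rather than the brute-force scan above; the selection-by-vote idea is the same either way.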
[1] G. Guennebaud and B. Jacob, "Eigen v3," C++ template library for linear algebra. [Online]. Available: https://eigen.tuxfamily.org
[2] E. Liang, R. Liaw, R. Nishihara, P. Moritz, R. Fox, K. Goldberg, J. Gonzalez, M. Jordan, and I. Stoica, “RLlib: Abstractions for Distributed Reinforcement Learning,” in Proceedings of the 35th International Conference on Machine Learning, J. Dy and A. Krause, Eds., Proceedings of Machine Learning Research, vol. 80, pp. 3053–3062, 2018. [Online]. Available: https://proceedings.mlr.press/v80/liang18b.html
[3] Y.-K. Chang, Y.-H. Lin, J.-C. Chang, W. Li, and S.-Y. Hsieh, “Efficient Hierarchical Hash-Based Multi-Field Packet Classification With Fast Update for Software Switches,” IEEE Access, vol. 13, pp. 28962–28978, Feb. 2025. [Online]. Available: https://doi.org/10.1109/ACCESS.2025.3540411
[4] Z. Liao, S. Qian, Z. Zheng, J. Zhang, J. Cao, G. Xue, and M. Li, "PT-Tree: A Cascading Prefix Tuple Tree for Packet Classification in Dynamic Scenarios," IEEE/ACM Transactions on Networking, vol. 32, no. 1, pp. 506–519, Feb. 2024. DOI: 10.1109/TNET.2023.3289029
[5] Z. Liao, S. Qian, Z. Zheng, J. Zhang, J. Cao, G. Xue, and M. Li, “DBTable: Leveraging Discriminative Bitsets for High-Performance Packet Classification,” IEEE/ACM Transactions on Networking, vol. 32, no. 6, pp. 5232–5246, Sep. 2024. DOI: 10.1109/TNET.2024.3452780
[6] C. Zhang, G. Xie, and X. Wang, "DynamicTuple: The Dynamic Adaptive Tuple for High-Performance Packet Classification," Computer Networks, vol. 202, 2022.
[7] C. Zhang and G. Xie, "MultilayerTuple: A General, Scalable and High-Performance Packet Classification Algorithm for SDN," in Proceedings of the IFIP Networking Conference, 2021.
[8] J. L. Blanco and P. K. Rai, "nanoflann: a C++ header-only fork of FLANN, a library for nearest neighbor (NN) search with KD-trees," 2014. [Online]. Available: https://github.com/jlblancoc/nanoflann
[9] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, and D. Hassabis, “Mastering the game of Go with deep neural networks and tree search,” Nature, vol. 529, no. 7587, pp. 484–489, Jan. 2016. [Online]. Available: https://doi.org/10.1038/nature16961
[10] O. Vinyals, I. Babuschkin, W. M. Czarnecki, et al., "Grandmaster level in StarCraft II using multi-agent reinforcement learning," Nature, vol. 575, no. 7782, pp. 350–354, Nov. 2019. [Online]. Available: https://doi.org/10.1038/s41586-019-1724-z
[11] D. E. Taylor and J. S. Turner, "ClassBench: A Packet Classification Benchmark," IEEE/ACM Transactions on Networking, vol. 15, no. 3, pp. 499–511, 2007.
[12] K. Cho, K. Mitsuya, and A. Kato, “Traffic Data Repository at the WIDE Project,” MAWI Working Group Traffic Archive, [Online]. Available: https://mawi.wide.ad.jp/mawi/samplepoint-F/2024/202401011400.html. [Accessed: Jul. 25, 2025].
[13] H. Ren, S. Qian, Z. Zheng, J. Zhang, Z. Liao, H. Hu, J. Cao, G. Xue, and M. Li, "EPC: An ensemble packet classification framework for efficient and stable performance," Computer Networks, Art. no. 111306, 2025.
[14] H. Chen, Y. Yang, M. Xu, Y. Zhang and C. Liu, "Neurotrie: Deep Reinforcement Learning-based Fast Software IPv6 Lookup," 2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS), Bologna, Italy, 2022, pp. 917-927, doi: 10.1109/ICDCS54860.2022.00093.
[15] E. Liang, H. Zhu, X. Jin, and I. Stoica, "Neural packet classification," in Proceedings of the ACM Special Interest Group on Data Communication (SIGCOMM '19), 2019, pp. 256–269. [Online]. Available: https://doi.org/10.1145/3341302.3342221
[16] M. Towers et al., "Gymnasium: A Standard Interface for Reinforcement Learning Environments," arXiv preprint arXiv:2407.17032, 2024.
[17] W. Jakob, J. Rhinelander, and D. Moldovan, “pybind11 – Seamless operability between C++11 and Python,” [Online]. Available: https://github.com/pybind/pybind11
[18] P. Moritz, R. Nishihara, S. Wang, A. Tumanov, R. Liaw, E. Liang, M. Elibol, Z. Yang, W. Paul, M. I. Jordan, and I. Stoica, “Ray: A Distributed Framework for Emerging AI Applications,” in 13th USENIX Symposium on Operating Systems Design and Implementation (OSDI ’18), Carlsbad, CA, USA, Oct. 2018, pp. 561–577. [Online]. Available: https://www.usenix.org/conference/osdi18/presentation/moritz
[19] A. Paszke, S. Gross, F. Massa, A. Lerer, J. Bradbury, G. Chanan, T. Killeen, Z. Lin, N. Gimelshein, L. Antiga, A. Desmaison, A. Köpf, E. Z. Yang, Z. DeVito, M. Raison, A. Tejani, S. Chilamkurthy, B. Steiner, L. Fang, J. Bai, and S. Chintala, "PyTorch: An imperative style, high-performance deep learning library," in Advances in Neural Information Processing Systems (NeurIPS), vol. 32, 2019. [Online]. Available: https://papers.nips.cc/paper_files/paper/2019/hash/bdbca288fee7f92f2bfa9f7012727740-Abstract.html
On-campus access: full text available from 2029-07-30.