| Student: | Tsai, Yi-Lang (蔡一郎) |
|---|---|
| Thesis Title: | Intelligent and Collaborative Policy Management in Large-Scale Network Based on Reinforcement Learning (大尺度網路中的增強式智能學習與協同合作安全政策管理) |
| Advisor: | Laih, Chi-Sung (賴溪松) |
| Degree: | Master |
| Department: | Department of Electrical Engineering (in-service Master's program), College of Electrical Engineering and Computer Science |
| Year of Publication: | 2008 |
| Graduation Academic Year: | 96 (ROC calendar) |
| Language: | English |
| Pages: | 65 |
| Chinese Keywords: | 大尺度網路 (large-scale network), 安全政策 (security policy), 增強式學習 (reinforcement learning), Q學習 (Q-learning) |
| English Keywords: | Q-Learning, Security Policy, Reinforcement Learning, Large-Scale Network |
In a large-scale network environment, the deployment and management of network devices is complex, distributed, and hierarchical. To keep the network secure, it must be protected against unauthorized access and network attacks, so security-related policies and rules are configured on many network devices. However, as the number of security policies grows in a hierarchical network environment, the overall performance of the network must be taken into account. Moreover, the characteristics of network traffic change dynamically, while the security policies on network devices are static and do not adjust automatically as traffic characteristics change. How to adjust security policies and rules in response to changing traffic characteristics is therefore a topic worth studying.
In this study, reinforcement learning is introduced to build an adaptive policy analysis system (APAS) that intelligently learns and adjusts security policies, optimizing them in response to changing traffic characteristics. It is a self-learning and collaborative system that analyzes user behavior on the network together with the current security policies and rules, and reorders the policies and rules to raise the hit rate against network traffic. After optimization, the top 20% of security rules match nearly 80% of the traffic, compared with a hit rate of only 30%~50% before optimization, so adopting reinforcement learning yields a clear performance improvement.
In a large-scale network, the deployment of network devices is complicated, distributed, and hierarchical. To protect the network from unauthorized access and network attacks, many security policies and rules are configured on the devices. However, as the number of policies grows, policy matching and conflicts become a bottleneck and degrade performance in a hierarchical network infrastructure. Moreover, network traffic changes dynamically, but security policies do not. How to dynamically adapt security policies is therefore also a significant issue.
In our study, an adaptive policy analysis system (APAS) is designed and developed to optimize and dynamically adapt security policies. A learning model based on reinforcement learning is proposed and incorporated into APAS. The model analyzes the corresponding large-scale network deployment and end-user usage in order to learn an optimized policy matching order. Evaluation results indicate that more than 80% of network traffic hits the first 20% of security policies, which is much higher than the 30%~50% hit rate of other methods.
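The abstract does not include the APAS implementation, so the following is only a minimal, hypothetical Python sketch of the idea it describes: a Q-learning-style value update that reorders firewall rules so frequently matched rules rise to the top of the policy list. The rule set, traffic model, reward scheme, and learning parameters below are assumptions made for illustration and are not taken from the thesis.

```python
import random
from collections import defaultdict

# Hypothetical sketch only: the APAS implementation is not published in this
# abstract, so the rule set, traffic model, reward scheme, and parameters
# below are illustrative assumptions.

ALPHA = 0.1   # learning rate
GAMMA = 0.8   # discount factor

# Toy first-match rule set: each rule matches a single destination port.
rules = [{"id": i, "port": p} for i, p in enumerate([22, 25, 53, 80, 443, 8080])]

# One Q-value per rule (single-state Q-learning): an estimate of the long-run
# value of keeping that rule near the top of the ordering.
q = defaultdict(float)

def simulate_packet():
    """Skewed synthetic traffic: most flows go to a few popular ports."""
    return random.choices([80, 443, 53, 22, 25, 8080],
                          weights=[50, 30, 10, 5, 3, 2])[0]

def reward(position, hit):
    """Reward a hit more strongly when it occurs early in the ordering."""
    return float(len(rules) - position) if hit else 0.0

for _ in range(10_000):
    # Current policy: rules ordered by descending Q-value.
    ordering = sorted(rules, key=lambda r: q[r["id"]], reverse=True)
    port = simulate_packet()
    for pos, rule in enumerate(ordering):
        hit = (rule["port"] == port)
        best_next = max(q.values(), default=0.0)  # max Q over actions, single state
        # Q-learning update: Q <- Q + alpha * (r + gamma * max Q' - Q)
        q[rule["id"]] += ALPHA * (reward(pos, hit) + GAMMA * best_next - q[rule["id"]])
        if hit:
            break  # first-match semantics: stop at the first matching rule

# Frequently hit rules end up with the highest Q-values, i.e. at the top of
# the ordering, which is the effect the thesis reports (the top ~20% of rules
# covering ~80% of traffic in its evaluation).
print([r["port"] for r in sorted(rules, key=lambda r: q[r["id"]], reverse=True)])
```

Because the sketch uses a single state, the update effectively tracks a discounted estimate of each rule's hit value; the thesis's actual model may define states, actions, and rewards differently.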