| Graduate Student: | Peng, Yen-Chu (彭彥筑) |
|---|---|
| Thesis Title: | Poisoning Attack and Countermeasure on DNNs (深層神經網路感染攻擊與應對策略之研究) |
| Advisor: | Li, Jung-Shian (李忠憲) |
| Degree: | Master |
| Department: | Institute of Computer & Communication Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2020 |
| Academic Year of Graduation: | 108 |
| Language: | Chinese |
| Number of Pages: | 52 |
| Keywords: | Poisoning Attack, DNNs, Adversarial Example, Machine Learning, Data Manipulation |
In recent years, machine learning has advanced rapidly and is used to solve problems in many fields. Among machine learning methods, deep neural networks (DNNs), widely known as deep learning, have achieved remarkable success; however, DNNs also face the risk of being attacked. The dataset is one of the key factors behind good model performance, yet users cannot guarantee that it is secure. Adversarial examples, poisoning attacks, and backdoor attacks all achieve their goals by manipulating the dataset. For DNNs, adversarial examples, which manipulate the testing data, have been studied extensively, whereas poisoning attacks, which manipulate the training data, have received far less attention, so this thesis focuses on poisoning attacks. We study three poisoning attacks on DNNs and propose a new poisoning attack, the Category Diverse attack, which paralyzes DNNs more effectively than the paralyzing attacks we examine. We also formulate countermeasures against the poisoning attacks used in this research: we propose Data Washing, a robust algorithm built on a denoising autoencoder that effectively mitigates the impact of poisoning attacks, and an integrated POT detection algorithm that performs well at detecting different types of attack.
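The abstract does not spell out how the studied poisoning attacks or the proposed Category Diverse attack modify the training set. As a minimal, hypothetical illustration of what "manipulating the training dataset" can look like, the Python sketch below flips the labels of a random fraction of training samples before training; the function name `label_flip_poison` and the `poison_rate` parameter are illustrative and are not taken from the thesis.

```python
import numpy as np

def label_flip_poison(y_train, num_classes, poison_rate=0.1, seed=0):
    """Return a copy of y_train in which a random fraction of labels
    is reassigned to a different, randomly chosen class."""
    rng = np.random.default_rng(seed)
    y_poisoned = y_train.copy()
    n_poison = int(len(y_train) * poison_rate)
    idx = rng.choice(len(y_train), size=n_poison, replace=False)
    for i in idx:
        # Draw from the num_classes - 1 classes other than the true one.
        wrong = rng.integers(num_classes - 1)
        y_poisoned[i] = wrong if wrong < y_train[i] else wrong + 1
    return y_poisoned

# Usage on dummy labels with a CIFAR-10-sized label space (10 classes).
y = np.random.default_rng(1).integers(10, size=50000)
y_bad = label_flip_poison(y, num_classes=10, poison_rate=0.1)
print("labels changed:", int((y != y_bad).sum()))
```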
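Likewise, the exact architecture and training procedure behind Data Washing are not given in the abstract. The sketch below is a minimal PyTorch illustration of the general idea described above: a denoising autoencoder is trained on images assumed to be clean and is then used to reconstruct every (possibly poisoned) training image before the classifier is trained. The architecture, hyper-parameters, and names (`DenoisingAutoencoder`, `train_washer`, `wash`) are assumptions for illustration, not the thesis's actual implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

class DenoisingAutoencoder(nn.Module):
    """Small convolutional denoising autoencoder for 32x32 RGB images
    (CIFAR-10-sized inputs); hypothetical architecture for illustration."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1),   # 32x32 -> 16x16
            nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1),  # 16x16 -> 8x8
            nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 3, stride=2, padding=1, output_padding=1),  # 8x8 -> 16x16
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 3, stride=2, padding=1, output_padding=1),   # 16x16 -> 32x32
            nn.Sigmoid(),  # keep pixel values in [0, 1]
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_washer(model, clean_images, epochs=5, noise_std=0.1, lr=1e-3):
    """Standard denoising-autoencoder objective: reconstruct clean images
    from noisy copies of themselves."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    loader = DataLoader(TensorDataset(clean_images), batch_size=128, shuffle=True)
    for _ in range(epochs):
        for (x,) in loader:
            noisy = (x + noise_std * torch.randn_like(x)).clamp(0.0, 1.0)
            opt.zero_grad()
            loss = loss_fn(model(noisy), x)
            loss.backward()
            opt.step()
    return model

@torch.no_grad()
def wash(model, images):
    """'Wash' a possibly poisoned training set by replacing every image
    with its autoencoder reconstruction before the classifier is trained."""
    model.eval()
    return model(images)

# Usage sketch with random stand-in data shaped like CIFAR-10 images.
trusted = torch.rand(1024, 3, 32, 32)   # subset assumed to be clean
suspect = torch.rand(2048, 3, 32, 32)   # possibly poisoned training images
washer = train_washer(DenoisingAutoencoder(), trusted, epochs=1)
cleaned = wash(washer, suspect)
print(cleaned.shape)
```

In this usage sketch, random tensors stand in for real images; in practice the washed tensor `cleaned` would replace the original training images in the DNN training loop, so that any poisoned perturbation is attenuated by the reconstruction.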