
Author: Weng, Shao-Chia (翁紹嘉)
Title: Hardware Design for Neural Network Local Robustness Analysis with Symbolic Interval and Linear Relaxation (神經網路區域穩定性分析之符號區間與線性鬆弛方法的硬體設計)
Advisor: Chen, Yean-Ru (陳盈如)
Degree: Master
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of Publication: 2022
Academic Year of Graduation: 111
Language: Chinese
Pages: 55
Chinese Keywords: Formal Verification, Deep Neural Network, Hardware Implementation, Symbolic Interval Analysis
English Keywords: Formal Verification, Deep Neural Network, Hardware Implementation, Symbolic Interval Analysis
    In recent years, with the rapid development of artificial intelligence, more and more safety problems have appeared, and algorithms for verifying the safety of neural networks have been proposed in response. This thesis focuses on verifying the local robustness of neural networks, i.e., verifying the robustness of a given network under a specific perturbation bound and dataset. Among the many verification algorithms, we found that ReluVal and Neurify are well suited to hardware acceleration: their analyses over different intervals are mutually independent, the computation consists largely of multiply-accumulate operations, and both methods are sound and complete.
    In this thesis, we present a full hardware design that additionally supports the convolution and pooling operations absent from ReluVal. The gradient computation also supports convolution and pooling, so the most influential inputs can be inferred backward from the output layer, and the corresponding intervals can be split to obtain tighter results or counterexamples. Finally, we evaluate the effects of different storage devices, interval-splitting strategies, and gradient-computation strategies, and obtain faster analysis than DeepPoly and RefinePoly.
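The core symbolic interval propagation used by ReluVal-style analyses can be sketched as follows. This is an illustrative Python sketch, not code from the thesis (the thesis implements this in hardware); the function names, array layout, and shapes are our own assumptions. The point is that keeping bounds as linear expressions over the inputs, and only concretizing at the end, avoids the dependency problem of naive interval arithmetic.

```python
# Illustrative sketch of symbolic interval propagation through an affine
# layer (in the style of ReluVal/Neurify); names and layout are assumptions.
import numpy as np

def affine_symbolic(eq_low, eq_up, W, b):
    """Propagate symbolic bounds through y = W @ x + b.
    Each row of eq_low/eq_up is [input coefficients..., constant].
    W is split by sign so lower bounds use lower eqs on positive weights
    and upper eqs on negative weights (and vice versa for upper bounds)."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_low = W_pos @ eq_low + W_neg @ eq_up
    new_up = W_pos @ eq_up + W_neg @ eq_low
    new_low[:, -1] += b
    new_up[:, -1] += b
    return new_low, new_up

def concretize(eq, x_lb, x_ub, lower=True):
    """Evaluate a symbolic expression to a concrete bound on the input box."""
    coeffs, const = eq[:, :-1], eq[:, -1]
    c_pos = np.maximum(coeffs, 0.0)
    c_neg = np.minimum(coeffs, 0.0)
    if lower:
        return c_pos @ x_lb + c_neg @ x_ub + const
    return c_pos @ x_ub + c_neg @ x_lb + const

# Two neurons both equal to the single input x0, with x0 in [-1, 1],
# feeding z = y1 - y2. Symbolically the coefficients cancel, so the
# analysis recovers z == 0 exactly; naive intervals would report [-2, 2].
eq = np.array([[1.0, 0.0], [1.0, 0.0]])  # rows: [coeff_x0, const]
lo, up = affine_symbolic(eq, eq, np.array([[1.0, -1.0]]), np.array([0.0]))
z_lo = concretize(lo, np.array([-1.0]), np.array([1.0]), lower=True)
z_up = concretize(up, np.array([-1.0]), np.array([1.0]), lower=False)
```

Because every step is sign-split matrix multiply-accumulate, the computation maps naturally onto the MAC-heavy hardware datapath the abstract describes.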

    In recent years, with the rapid development of artificial intelligence, more and more security problems have appeared, and many formal verification algorithms have been proposed in response. We focus on verifying the local robustness of neural networks, that is, the robustness of a network under a specific perturbation bound and dataset. Among the many formal verification algorithms, we found that ReluVal and Neurify are well suited to hardware implementation, because their intervals are analyzed independently of one another and the analysis consists largely of multiplication and addition operations. Both methods are sound and complete, which makes the analysis conclusive. In this thesis, we implement a full hardware design extended with the convolution and pooling operations that ReluVal lacks. Convolution and pooling are also supported in the gradient calculation, so the most influential inputs can be inferred backward from the output layer and the corresponding intervals split to obtain tighter results or potential counterexamples. Finally, we experiment with different storage policies, interval-splitting strategies, and gradient-calculation policies. Our design produces more conclusive results than DeepPoly and RefinePoly, and runs much faster.
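The gradient-guided interval splitting that both abstracts mention can be illustrated with a small sketch. This is a hypothetical sketch under our own assumptions (function names and the exact heuristic are not taken from the thesis): when the output bounds are too loose to decide the property, the input box is bisected along the dimension whose gradient bound times interval width is largest, and each half is re-analyzed.

```python
# Hypothetical sketch of gradient-guided input splitting (a "smear"-style
# heuristic as in ReluVal-like refinement); names are our own assumptions.
import numpy as np

def pick_split_dim(grad_lb, grad_ub, x_lb, x_ub):
    """Choose the input dimension with the largest estimated influence,
    measured as max(|gradient bound|) * interval width."""
    influence = np.maximum(np.abs(grad_lb), np.abs(grad_ub)) * (x_ub - x_lb)
    return int(np.argmax(influence))

def split_box(x_lb, x_ub, dim):
    """Bisect the input box along one dimension; each half is then
    re-analyzed, giving tighter bounds or a concrete counterexample."""
    mid = 0.5 * (x_lb[dim] + x_ub[dim])
    left_ub, right_lb = x_ub.copy(), x_lb.copy()
    left_ub[dim] = mid
    right_lb[dim] = mid
    return (x_lb, left_ub), (right_lb, x_ub)

# Dimension 1 has gradient bound 3 and width 1 (influence 3), dimension 0
# has gradient bound 1 and width 2 (influence 2), so dimension 1 is split.
dim = pick_split_dim(np.array([1.0, 3.0]), np.array([1.0, 3.0]),
                     np.array([0.0, 0.0]), np.array([2.0, 1.0]))
(left_lb, left_ub), (right_lb, right_ub) = split_box(
    np.array([0.0, 0.0]), np.array([2.0, 1.0]), dim)
```

Since each sub-box is analyzed independently, the refinement loop parallelizes well, which is one reason the abstracts single the method out for hardware acceleration.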

    Table of Contents:
    Abstract
    Extended English Abstract
    Acknowledgments
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1 Introduction
      1.1 Motivation
      1.2 Contributions
      1.3 Thesis Organization
    Chapter 2 Preliminaries
      2.1 Deep Convolutional Neural Networks
      2.2 L-infinity Norm and Epsilon
      2.3 Interval Analysis
    Chapter 3 Related Work
      3.1 Attack Methods
      3.2 Analysis via Formal Verification
        3.2.1 Boolean Satisfiability Analysis
        3.2.2 Abstract Domain Analysis
        3.2.3 Interval Analysis
        3.2.4 Symbolic Interval Analysis
      3.3 Analysis via Lipschitz Constants
    Chapter 4 Methodology
      4.1 Algorithm
        4.1.1 Symbolic Interval Computation and Symbolic Linear Relaxation
        4.1.2 Safety Property Check
        4.1.3 Counterexample Search
        4.1.4 Gradient Computation and Interval Splitting
      4.2 Hardware Architecture
        4.2.1 Overall Hardware System Architecture
        4.2.2 Reconfigurable Data Module
        4.2.3 System Controller Module
        4.2.4 Convolution & Pooling Module
        4.2.5 Fully Connected Module
        4.2.6 Backward Gradient Calculation Module
        4.2.7 Property Check Module
        4.2.8 Split Interval & Concrete Root Cause Prepare Module
    Chapter 5 Experimental Results
      5.1 Experimental Data
    Chapter 6 Conclusion
    References

    [1] ETH Robustness Analyzer for Neural Networks (ERAN). https://github.com/ethsri/eran, 2020.

    [2] Timon Gehr, Matthew Mirman, Dana Drachsler-Cohen, Petar Tsankov, Swarat Chaudhuri, and Martin Vechev. Ai2: Safety and robustness certification of neural networks with abstract interpretation. In 2018 IEEE Symposium on Security and Privacy (SP), pages 3–18. IEEE, 2018.

    [3] Ian J Goodfellow, Jonathon Shlens, and Christian Szegedy. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572, 2014.

    [4] Matthias Hein and Maksym Andriushchenko. Formal guarantees on the robustness of a classifier against adversarial manipulation. Advances in neural information processing systems, 30, 2017.

    [5] Guy Katz, Clark Barrett, David L Dill, Kyle Julian, and Mykel J Kochenderfer. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, pages 97–117. Springer, 2017.

    [6] Mykel J Kochenderfer, Jessica E Holland, and James P Chryssanthacopoulos. Next generation airborne collision avoidance system. Technical report, Massachusetts Institute of Technology-Lincoln Laboratory Lexington United States, 2012.

    [7] Alexey Kurakin, Ian J Goodfellow, and Samy Bengio. Adversarial examples in the physical world. In Artificial intelligence safety and security, pages 99–112. Chapman and Hall/CRC, 2018.

    [8] Yann LeCun and Corinna Cortes. MNIST handwritten digit database. 2010.

    [9] Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z Berkay Celik, and Ananthram Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506–519, 2017.

    [10] Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin Vechev. Boosting robustness certification of neural networks. In International Conference on Learning Representations, 2018.

    [11] Gagandeep Singh, Timon Gehr, Markus Püschel, and Martin Vechev. An abstract domain for certifying neural networks. Proceedings of the ACM on Programming Languages, 3(POPL):1–30, 2019.

    [12] Chi-Kai Wang. Local robustness analysis on neural network with symbolic interval analysis by hardware acceleration method. Master’s thesis, 2021.

    [13] Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Efficient formal safety analysis of neural networks. In S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, and R. Garnett, editors, Advances in Neural Information Processing Systems, volume 31. Curran Associates, Inc., 2018.

    [14] Shiqi Wang, Kexin Pei, Justin Whitehouse, Junfeng Yang, and Suman Jana. Formal security analysis of neural networks using symbolic intervals. In 27th USENIX Security Symposium (USENIX Security 18), pages 1599–1614, 2018.

    [15] Tsui-Wei Weng, Huan Zhang, Pin-Yu Chen, Jinfeng Yi, Dong Su, Yupeng Gao, Cho-Jui Hsieh, and Luca Daniel. Evaluating the robustness of neural networks: An extreme value theory approach. arXiv preprint arXiv:1801.10578, 2018.

    [16] Kaidi Xu, Huan Zhang, Shiqi Wang, Yihan Wang, Suman Jana, Xue Lin, and Cho-Jui Hsieh. Fast and complete: Enabling complete neural network verification with rapid and massively parallel incomplete verifiers. arXiv preprint arXiv:2011.13824, 2020.

    On-campus access: available immediately
    Off-campus access: available from 2025-11-04