
Graduate Student: Kao, Yi-Chun (高義鈞)
Thesis Title: Design and Detect a Hardware Trojan on Calculation Unit of Neural Network (於神經網絡運算單元上的硬體木馬之實作與偵測)
Advisor: Chen, Yean-Ru (陳盈如)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2019
Graduation Academic Year: 107 (2018-19)
Language: Chinese
Number of Pages: 114
Keywords: Safety-critical system, Hardware Trojan, Backdoor attack, Formal verification
With the rapid development of artificial intelligence, it has achieved success in applications across many different fields. Some of these applications are safety-critical systems, such as medical instruments, autonomous driving, and even military weapons. Such applications demand real-time responses, which in turn require embedded hardware accelerators to meet their speed requirements. In this setting, if malicious components are implanted into the hardware so that the product can deliberately misbehave at the attacker's will, the threat to human safety is enormous. Such a malicious component implanted in hardware is called a hardware Trojan: a special circuit intentionally embedded in an electronic system. To evade inspectors, it normally lies dormant within the original circuit and is triggered only when special conditions defined by the attacker are satisfied. Once those conditions hold, the attacker can exploit the circuit to leak information, tamper with the circuit's functionality, or even destroy the circuit outright. Because hardware Trojans pose such a severe security threat, detecting them before the circuit is packaged and mass-produced is an important issue.
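To make the trigger-and-payload structure described above concrete, the following is a minimal behavioral sketch in Python rather than RTL; all names (`rare_trigger`, `calculation_unit`, the constant `RARE_PATTERN`) are hypothetical illustrations, not the thesis's actual design:

```python
# Behavioral sketch of a hardware Trojan's trigger/payload structure.
# Hypothetical names and values; not the thesis's actual RTL design.

RARE_PATTERN = 0xDEADBEEF  # a value the attacker knows never occurs in normal use

def rare_trigger(x: int) -> bool:
    """Trigger logic: fires on only one value out of 2^32, so random
    functional tests are extremely unlikely to activate it."""
    return x == RARE_PATTERN

def calculation_unit(x: int) -> int:
    """Stand-in for the original, Trojan-free computation."""
    return (x * 3 + 7) & 0xFFFFFFFF

def trojaned_unit(x: int) -> int:
    """Trojaned version: identical to the original until the trigger
    fires, after which the payload corrupts the result."""
    y = calculation_unit(x)
    if rare_trigger(x):
        y ^= 0xFFFFFFFF  # payload: tamper with the output
    return y

# Dormant on ordinary inputs, malicious only on the rare pattern:
assert trojaned_unit(42) == calculation_unit(42)
assert trojaned_unit(RARE_PATTERN) != calculation_unit(RARE_PATTERN)
```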

We use a handwritten-digit recognizer built on a neural network as our experimental example, implanting a hardware Trojan into a network architecture that achieves 99% classification accuracy. The experimental results show that before the attacker-specified trigger condition is met, the network still maintains its 99% accuracy, whereas once the trigger condition is satisfied, the network's recognition results are arbitrarily controlled by the attacker. This work implements two attacks. The first is a "targeted attack", which modifies on average only 2.52% of the neurons in the 6th hidden layer yet achieves a 97.3% attack success rate. The second is an "untargeted attack", which modifies only 1.54% of the neurons in the 6th hidden layer and reaches a 99.26% attack success rate. Taking this highly threatening backdoor attack as the detection target, we analyze the properties of neural networks that give attackers room to craft such special trigger conditions, and we apply formal verification at the RTL stage to verify the Trojan-infected circuits described above. The experimental results show that the formal verification approach can successfully detect the malicious behavior caused by the hardware Trojan within 490 seconds of verification time.
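As an illustration of the payload side of this attack, the sketch below overwrites a small fraction of the activations in a hidden layer once the trigger fires. The fractions (2.52% targeted, 1.54% untargeted) come from the thesis; everything else (the layer width, the `tamper_layer` helper, the saturating values written) is a hypothetical stand-in for the actual circuit-level modification:

```python
import numpy as np

def tamper_layer(acts: np.ndarray, fraction: float,
                 rng: np.random.Generator) -> np.ndarray:
    """Overwrite a small fraction of one hidden layer's activations.
    The thesis reports ~2.52% (targeted) and ~1.54% (untargeted) of the
    6th hidden layer's neurons suffice; the written values here are
    illustrative, not the actual payload."""
    out = acts.copy()
    n = max(1, int(len(acts) * fraction))
    idx = rng.choice(len(acts), size=n, replace=False)
    out[idx] = acts.max() * 10.0  # saturate a few neurons to dominate later layers
    return out

rng = np.random.default_rng(0)
hidden6 = rng.standard_normal(128)               # hypothetical 6th-hidden-layer activations
targeted = tamper_layer(hidden6, 0.0252, rng)    # ~2.52% of neurons, per the thesis
untargeted = tamper_layer(hidden6, 0.0154, rng)  # ~1.54% of neurons
print(np.count_nonzero(targeted != hidden6), "neurons modified (targeted)")
```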

With the rapid development and evolution of machine learning algorithms, artificial intelligence is widely adopted in many research fields and products. To guarantee classification accuracy, not only must the robustness of the implementation be ensured, but security issues must also be addressed. In this thesis, we mainly discuss hardware Trojan attacks on neural networks. A hardware Trojan is a special circuit that is intentionally embedded in an electronic system; its purposes include causing malfunctions, leaking information, and even destroying chips. To study the impact of hardware Trojans on machine learning circuit designs, we realize a convolutional neural network (CNN) circuit with Trojans inserted at the RTL. Our Trojan-free design achieves over 99% classification accuracy on the MNIST dataset. After the Trojan is injected, modifying only around 2.52% of the neurons in the 6th hidden layer results in a 97.3% attack success rate. In addition, we use Cadence's formal verification tool JasperGold to detect this threatening hardware Trojan attack, successfully detecting its malicious behavior within 490 seconds of verification time.
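The actual detection in the thesis runs Cadence JasperGold on the RTL design; the sketch below is only a loose behavioral analogue in Python, using hypothetical `golden` and `suspect` models over a deliberately tiny 8-bit input space. It shows why an exhaustive, property-style check catches a rarely triggered payload that random simulation would almost certainly miss:

```python
def golden(x: int) -> int:
    """Trusted reference model of the calculation unit (hypothetical)."""
    return (x * 3 + 7) & 0xFF

def suspect(x: int) -> int:
    """Design under verification, with a Trojan that fires on one rare input."""
    y = golden(x)
    return y ^ 0xFF if x == 0xA5 else y

# Exhaustive check over the whole (here 8-bit) input space: a behavioral
# analogue of the property a model checker proves or refutes. Unlike
# random simulation, it cannot miss a rare trigger.
counterexamples = [x for x in range(256) if suspect(x) != golden(x)]
print("Trojan detected, counterexample inputs:", counterexamples)  # -> [165]
```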

Chinese Abstract
Extended English Abstract
Table of Contents
List of Tables
List of Figures
Chapter 1 Introduction
  1.1 Motivation
  1.2 Problem Description
  1.3 Research Objectives
Chapter 2 Literature Review
  2.1 Hardware Trojan Design Based on Model Pruning [2]
  2.2 Hardware Trojan Design That Tampers with Neuron Values [1]
  2.3 Traditional Defenses Against Adversarial Examples [17], [19]
  2.4 Hardware Trojan Detection
Chapter 3 Methodology
  3.1 Background and Terminology
    3.1.1 Artificial Neural Networks
    3.1.2 Hardware Trojans
  3.2 Attack Framework
  3.3 Trigger Mechanism Design
    3.3.1 Rare-Value Selection
    3.3.2 Crafting Special Images for Rare Neuron Values
  3.4 Hardware Trojan Design
  3.5 Hardware Trojan Detection
Chapter 4 Experimental Results
  4.1 Neural Network Architecture Used in the Experiments
  4.2 Effectiveness of the Hardware Trojan Attack
    4.2.1 Input Images That Trigger the Hardware Trojan
    4.2.2 Details of the Trigger Mechanism Design
    4.2.3 Attack Effectiveness
  4.3 Hardware Trojan Detection Results
Chapter 5 Conclusion
References
Appendix

    [1] J. Clements and Y. Lao. Hardware trojan attacks on neural networks. arXiv preprint arXiv:1806.05768, 2018.
    [2] W. Li, J. Yu, X. Ning, P. Wang, Q. Wei, Y. Wang, and H. Yang. Hu-fu: Hardware and software collaborative attack framework against neural networks. In 2018 IEEE Computer Society Annual Symposium on VLSI (ISVLSI), pages 482–487. IEEE, 2018.
    [3] L. Bottou. Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, pages 177–186. Springer, 2010.
    [4] J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12(Jul):2121–2159, 2011.
    [5] D. P. Kingma and J. Ba. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
    [6] Y. Taigman, M. Yang, M. Ranzato, and L. Wolf. Deepface: Closing the gap to human-level performance in face verification. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1701–1708, 2014.
    [7] C. Chen, A. Seff, A. Kornhauser, and J. Xiao. Deepdriving: Learning affordance for direct perception in autonomous driving. In Proceedings of the IEEE International Conference on Computer Vision, pages 2722–2730, 2015.
    [8] J. Qiu, J. Wang, S. Yao, K. Guo, B. Li, E. Zhou, J. Yu, T. Tang, N. Xu, S. Song, et al. Going deeper with embedded fpga platform for convolutional neural network. In Proceedings of the 2016 ACM/SIGDA International Symposium on Field-Programmable Gate Arrays, pages 26–35. ACM, 2016.
    [9] A. Zhou, A. Yao, Y. Guo, L. Xu, and Y. Chen. Incremental network quantization: Towards lossless cnns with low-precision weights. arXiv preprint arXiv:1702.03044, 2017.
[10] M. Courbariaux, I. Hubara, D. Soudry, R. El-Yaniv, and Y. Bengio. Binarized neural networks: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
    [11] N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, et al. In-datacenter performance analysis of a tensor processing unit. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA), pages 1–12. IEEE, 2017.
    [12] H. A. Haenssle, C. Fink, R. Schneiderbauer, F. Toberer, T. Buhl, A. Blum, A. Kalloo, A. B. H. Hassen, L. Thomas, A. Enk, et al. Man against machine: diagnostic performance of a deep learning convolutional neural network for dermoscopic melanoma recognition in comparison to 58 dermatologists. Annals of Oncology, 29(8):1836–1842, 2018.
    [13] N. Papernot, P. McDaniel, I. Goodfellow, S. Jha, Z. B. Celik, and A. Swami. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, pages 506–519. ACM, 2017.
    [14] N. Papernot, P. McDaniel, S. Jha, M. Fredrikson, Z. B. Celik, and A. Swami. The limitations of deep learning in adversarial settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), pages 372–387. IEEE, 2016.
[15] F. Tramèr, F. Zhang, A. Juels, M. K. Reiter, and T. Ristenpart. Stealing machine learning models via prediction apis. In 25th USENIX Security Symposium (USENIX Security 16), pages 601–618, 2016.

[16] M. Sharif, S. Bhagavatula, L. Bauer, and M. K. Reiter. Accessorize to a crime: Real and stealthy attacks on state-of-the-art face recognition. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security, pages 1528–1540. ACM, 2016.
    [17] S. Zheng, Y. Song, T. Leung, and I. Goodfellow. Improving the robustness of deep neural networks via stability training. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 4480–4488, 2016.
    [18] N. Carlini and D. Wagner. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), pages 39–57. IEEE, 2017.
    [19] D. Hendrycks and T. G. Dietterich. Benchmarking neural network robustness to common corruptions and surface variations. arXiv preprint arXiv:1807.01697, 2018.
[20] W. He, J. Wei, X. Chen, N. Carlini, and D. Song. Adversarial example defense: Ensembles of weak defenses are not strong. In 11th USENIX Workshop on Offensive Technologies (WOOT 17), 2017.
    [21] S.-M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard. Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1765–1773, 2017.
    [22] M. Tehranipoor and F. Koushanfar. A survey of hardware trojan taxonomy and detection. IEEE design & test of computers, 27(1):10–25, 2010.
    [23] R. S. Chakraborty, S. Narasimhan, and S. Bhunia. Hardware trojan: Threats and emerging solutions. In 2009 IEEE International high level design validation and test workshop, pages 166–171. IEEE, 2009.
    [24] F. Wolff, C. Papachristou, S. Bhunia, and R. S. Chakraborty. Towards trojan-free trusted ics: Problem analysis and detection scheme. In Proceedings of the conference on Design, automation and test in Europe, pages 1362–1365. ACM, 2008.
[25] S. Pham, J. L. Dworak, and T. W. Manikas. An analysis of differences between trojans inserted at RTL and at manufacturing with implications for their detectability. In IEEE North Atlantic Test Workshop (NATW), 2012.
    [26] R. Karri, J. Rajendran, K. Rosenfeld, and M. Tehranipoor. Trustworthy hardware: Identifying and classifying hardware trojans. Computer, 43(10):39–46, 2010.
    [27] B. Min and G. Choi. Rtl functional verification using excitation and observation coverage. In Sixth IEEE International High-Level Design Validation and Test Workshop, pages 58–63. IEEE, 2001.
[28] Y. LeCun et al. Yann LeCun's home page. http://yann.lecun.com/exdb/mnist/. Accessed 29 Oct. 2014.
    [29] P. Molchanov, S. Tyree, T. Karras, T. Aila, and J. Kautz. Pruning convolutional neural networks for resource efficient inference. arXiv preprint arXiv:1611.06440, 2016.
    [30] C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich. Going deeper with convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1–9, 2015.
    [31] S. Bhunia and M. Tehranipoor. The Hardware Trojan War. Springer, 2018.
[32] J. Zhang and Q. Xu. On hardware trojan design and implementation at register-transfer level. In 2013 IEEE International Symposium on Hardware-Oriented Security and Trust (HOST), pages 107–112. IEEE, 2013.
[33] D. Yu, F. Seide, G. Li, and L. Deng. Exploiting sparseness in deep neural networks for large vocabulary speech recognition. In 2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4409–4412. IEEE, 2012.
    [34] C. Fagot, O. Gascuel, P. Girard, and C. Landrault. On calculating efficient lfsr seeds for built-in self test. In European Test Workshop 1999 (Cat. No. PR00390), pages 7–14. IEEE, 1999.
    [35] G. Hetherington, T. Fryars, N. Tamarapalli, M. Kassab, A. Hassan, and J. Rajski. Logic bist for large industrial designs: Real issues and case studies. In International Test Conference 1999. Proceedings (IEEE Cat. No. 99CH37034), pages 358–367. IEEE, 1999.
    [36] W.-T. Cheng, M. Sharma, T. Rinderknecht, L. Lai, and C. Hill. Signature based diagnosis for logic bist. In 2006 IEEE International Test Conference, pages 1–9. IEEE, 2006.
    [37] M. Rathmair, F. Schupfer, and C. Krieg. Applied formal methods for hardware trojan detection. In 2014 IEEE International Symposium on Circuits and Systems (ISCAS), pages 169–172. IEEE, 2014.
    [38] D. Giannakopoulou, C. S. Pasareanu, and J. M. Cobleigh. Assume-guarantee verification of source code with design-level assumptions. In Proceedings. 26th International Conference on Software Engineering, pages 211–220. IEEE, 2004.
    [39] V. Nair and G. E. Hinton. Rectified linear units improve restricted boltzmann machines. In Proceedings of the 27th international conference on machine learning (ICML-10), pages 807–814, 2010.
    [40] E. M. Clarke, W. Klieber, M. Nováček, and P. Zuliani. Model checking and the state explosion problem. In LASER Summer School on Software Engineering, pages 1–30. Springer, 2011.

Full-text availability: on campus, open access from 2024-08-20; off campus, not publicly available. The electronic thesis has not yet been authorized for public release; for the printed copy, please consult the library catalog.