| Author: | Hung, Wei-Chen (洪瑋辰) |
|---|---|
| Thesis title: | A deep learning simulation platform for non-volatile memory-based analog neuromorphic circuits (類比非揮發記憶體仿生電路之深度學習模擬平台) |
| Advisor: | Lu, Darsen (盧達生) |
| Degree: | Master |
| Department: | Institute of Microelectronics, College of Electrical Engineering and Computer Science |
| Year of publication: | 2019 |
| Graduating academic year: | 107 (2018–2019) |
| Language: | Chinese |
| Pages: | 64 |
| Keywords: | Non-volatile memory, neural networks, neuromorphic computing, analog computing |
With the rapid development of artificial intelligence, neuromorphic accelerators are regarded as a promising computing architecture for the future. Unlike the von Neumann architecture, in-memory computing merges the storage and computing units within analog non-volatile memory. This approach not only eliminates the time and energy consumed by moving data between the processor and memory, but also allows matrix multiplication to be parallelized on a large scale, ultimately achieving high energy efficiency and a reduced hardware area. To predict how analog memory devices affect the results of this new architecture, such as accuracy, power consumption, and computing speed, the goal of this thesis is to build a deep learning simulation platform for analog non-volatile memory neuromorphic circuits and to explore how non-ideal device characteristics, such as limited bit precision, nonlinear weight updates, and device-to-device variation, affect neural network training.
In this thesis, TensorFlow is used as the software framework to build the neural network simulator. A mathematical function describes the relationship between the number of programming pulses applied to an analog device and the stored weight; by modifying the parameters of this function, the bit precision and the degree of nonlinearity of the device can be adjusted. To understand the influence of device variability on the neural network, a Gaussian distribution is used to build a variability matrix that models device-to-device variation. To calculate the energy consumed by the synaptic array during neural network training, formulas for dynamic and static energy consumption are established, and the energy of the different operation stages is examined. Finally, parameters extracted from real resistive random-access memory (RRAM) devices are used to compare the accuracy that different devices achieve in the neural network.
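The abstract does not reproduce the pulse-to-weight function itself. A common choice in the analog-synapse literature, and one consistent with the description above, is an exponential saturation curve whose curvature sets the nonlinearity and whose level count sets the bit precision. The sketch below (Python/NumPy) uses that model; all function names and parameter values are illustrative assumptions, not the thesis's actual code.

```python
import numpy as np

def pulse_to_weight(p, n_bits=8, nonlinearity=2.0, g_min=0.0, g_max=1.0):
    """Map a pulse count p (0 .. 2**n_bits - 1) to a conductance-like weight
    via an exponential saturation curve (illustrative model, not the
    thesis's exact function)."""
    p_max = 2**n_bits - 1                      # number of programmable levels
    a = nonlinearity * p_max                   # larger a -> more linear curve
    b = (g_max - g_min) / (1.0 - np.exp(-p_max / a))
    return b * (1.0 - np.exp(-p / a)) + g_min

def device_variation(shape, sigma=0.05, seed=0):
    """Multiplicative device-to-device variation drawn once from a Gaussian,
    mimicking fabrication spread across the synaptic array."""
    rng = np.random.default_rng(seed)
    return rng.normal(loc=1.0, scale=sigma, size=shape)

# Example: the ideal 8-bit update curve, and a 4x4 array whose fully
# potentiated weights are perturbed by 5% device-to-device variation.
pulses = np.arange(2**8)
curve = pulse_to_weight(pulses)
weights = curve[-1] * device_variation((4, 4))
```

Lowering `nonlinearity` bends the curve toward early saturation, which is the regime where the thesis reports the sharpest accuracy loss.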
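The dynamic and static energy formulas themselves are also not given in the abstract. As a hedged sketch of what such first-order crossbar formulas typically look like (charging the word/bit-line capacitance, plus current flowing through the resistive cells while a voltage is held), one might write, with $C_{\text{line}}$, $V$, $G_{ij}$, and $t_{\text{pulse}}$ as assumed symbols:

$$
E_{\text{dynamic}} \approx C_{\text{line}} V^2, \qquad
E_{\text{static}} \approx \sum_{i,j} G_{ij} \, V^2 \, t_{\text{pulse}},
$$

summed over the lines toggled and the cells biased in each operation stage.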
Simulation results from this platform show that the device needs at least 8 bits of precision to reach an accuracy above 90%, and that the more nonlinear the device's update curve, the more severely accuracy degrades. By accumulating weight gradients in additional digital circuits, the network can still reach an accuracy above 95% even with low-precision devices, and the accuracy under nonlinear update characteristics also improves substantially. The results also compare the accuracy of different real RRAM devices in the neural network. Finally, the device-to-device variability simulations show that the neural network is robust against device variation.
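The digital gradient-accumulation scheme is only named, not specified, in the abstract. Below is a minimal sketch of the general technique, assuming a high-precision digital residue per synapse and a device whose smallest weight step is `delta_w`; all names here are hypothetical.

```python
import numpy as np

class GradientAccumulator:
    """Digital accumulator placed in front of a low-precision analog
    synapse array (a sketch of the general technique, not the thesis's
    exact circuit)."""

    def __init__(self, shape, delta_w):
        self.residue = np.zeros(shape)  # high-precision digital state
        self.delta_w = delta_w          # smallest analog weight step

    def update(self, grad, lr=0.1):
        """Accumulate one gradient step; return whole pulses to apply."""
        self.residue += -lr * grad                        # digital-domain sum
        n_pulses = np.trunc(self.residue / self.delta_w)  # full steps crossed
        self.residue -= n_pulses * self.delta_w           # carry the remainder
        return n_pulses                                   # signed pulse counts

# Example: a 4-bit device (15 steps across a weight range of 1.0); small
# gradients build up in the residue until they are worth a whole pulse.
acc = GradientAccumulator((2, 2), delta_w=1.0 / 15)
pulses = acc.update(np.array([[0.5, -0.2], [0.01, 0.0]]))
```

Because sub-step gradients are no longer rounded away, low-precision devices effectively inherit the accumulator's precision between pulses, which is consistent with the reported recovery to above 95% accuracy.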