| Author: | 蔡丞冠 (Tsai, Cheng-Kuan) |
|---|---|
| Thesis title: | 基於鈣離子長期可塑性之中大尺度神經群之建模與模擬 (Modeling and Simulation of Mesoscale Neuronal Population with Calcium-Based Long-Term Plasticity) |
| Advisor: | 朱銘祥 (Ju, Ming-Shaung) |
| Co-advisor: | 林宙晴 (Lin, Chou-Ching) |
| Degree: | Master |
| Department: | College of Engineering, Department of Mechanical Engineering |
| Year of publication: | 2021 |
| Academic year: | 109 |
| Language: | Chinese |
| Number of pages: | 114 |
| Keywords (Chinese): | long-term synaptic plasticity, calcium-based plasticity model, population density model, mean-field model, colored-synapse population density model, diffusion-coefficient table lookup method |
| Keywords (English): | long-term synaptic plasticity, calcium-based, mean-field model, colored-synapse population density model, table lookup method |
Long-term synaptic plasticity is an important neurophysiological phenomenon related to the cognitive and learning systems of the brain; it typically persists from tens of seconds to hours. Current research indicates that the underlying molecular mechanism is dominated by the calcium ion concentration and NMDA receptors. Calcium-based synaptic plasticity models have been proposed in which synaptic efficacy increases or decreases as the postsynaptic calcium concentration rises and falls, producing long-term potentiation or long-term depression. These calcium-based plasticity models, however, simulate only single neurons, whereas each region of the human brain contains an enormous number of neurons, so a network dynamics model of neuronal populations is needed. The population density model is one approach to modeling neuronal population networks: it represents a whole population by a probability density function and therefore offers fast computation when simulating population dynamics. However, as the number of state variables of the neuronal electrophysiological model increases, the state-space dimension of the population density model grows with it, so the model must be reduced in dimension. The goal of this study is to combine the neuronal population density model with a long-term synaptic plasticity model and to develop a population-modeling framework that merges the population density model with a mean-field model.
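The calcium-based plasticity rule summarized above can be sketched in a few lines. This is a minimal illustration in the spirit of the Graupner–Brunel model only: the function name and all parameter values are assumptions for demonstration, not taken from the thesis.

```python
import numpy as np

def simulate_calcium_plasticity(pre_spikes, post_spikes, T=1.0, dt=1e-4,
                                tau_ca=0.02, c_pre=1.0, c_post=2.0,
                                theta_d=1.0, theta_p=1.3,
                                tau_w=150.0, gamma_d=200.0, gamma_p=320.0,
                                w0=0.5):
    """Integrate a postsynaptic calcium trace and a synaptic efficacy w.

    Calcium decays exponentially and jumps at pre-/postsynaptic spike
    times; w is driven up while calcium exceeds the potentiation
    threshold theta_p and down while it exceeds the depression
    threshold theta_d (illustrative parameters).
    """
    n_steps = int(round(T / dt))
    pre = set(np.round(np.asarray(pre_spikes) / dt).astype(int))
    post = set(np.round(np.asarray(post_spikes) / dt).astype(int))
    ca, w = 0.0, w0
    for k in range(n_steps):
        ca -= ca / tau_ca * dt              # exponential calcium decay
        if k in pre:
            ca += c_pre                     # presynaptic (NMDA-mediated) influx
        if k in post:
            ca += c_post                    # postsynaptic (back-propagating AP) influx
        dw = (gamma_p * (1.0 - w) * (ca > theta_p)
              - gamma_d * w * (ca > theta_d)) / tau_w
        w = min(max(w + dw * dt, 0.0), 1.0)  # efficacy bounded in [0, 1]
    return w
```

High-rate pre/post pairing keeps the calcium trace above theta_p and pulls w toward its potentiated value (LTP), while intermediate calcium levels that cross only theta_d depress w (LTD).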
This thesis combines the CbAdEx model with a mean-field model of synaptic conductance to build a colored-synapse population density model, based on a diffusion-process approximation of conductance-based synapses and an adiabatic approximation of the neuronal adaptation current. In addition, a diffusion-coefficient table lookup method is used to estimate the diffusion coefficients of the colored-synapse population density models of the EIF and AdEx models. Monte Carlo simulation serves as the gold-standard model of neuronal population network dynamics. For the EIF and AdEx models, the Monte Carlo results show that the modeling relative error of the table lookup method is smaller than that of the colored-synapse population density model. Compared with the Monte Carlo results, the relative errors of the mean-field model are mostly below 10%. Simulations of the colored-synapse population density model of the CbAdEx model show that it can reproduce long-term potentiation and long-term depression and can qualitatively predict the evolution of the firing rate of the neuronal population network.
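Monte Carlo simulation, i.e. explicitly integrating every neuron in the population, is the reference the density models are compared against. Below is a minimal sketch of such a simulation for an uncoupled EIF population driven by Poisson input; the function name and all parameter values are illustrative assumptions, not the thesis's.

```python
import numpy as np

def mcs_eif_population(n_neurons=1000, T=0.5, dt=1e-4, rate_in=5000.0,
                       tau_m=0.02, v_rest=-65e-3, delta_t=2e-3,
                       v_t=-50e-3, v_th=-30e-3, v_reset=-65e-3,
                       j_syn=0.5e-3, seed=0):
    """Monte Carlo simulation of an uncoupled EIF population driven by
    Poisson input spikes (illustrative parameters). Returns the
    instantaneous population firing rate per time bin, in Hz."""
    rng = np.random.default_rng(seed)
    v = np.full(n_neurons, v_rest)
    n_steps = int(round(T / dt))
    rates = np.empty(n_steps)
    for k in range(n_steps):
        # Poisson-distributed count of input spikes per neuron this step
        n_in = rng.poisson(rate_in * dt, size=n_neurons)
        # EIF drift: leak plus exponential spike-initiation term
        dv = (-(v - v_rest) + delta_t * np.exp((v - v_t) / delta_t)) / tau_m
        v = v + dv * dt + j_syn * n_in
        fired = v >= v_th
        rates[k] = fired.mean() / dt     # fraction fired per bin -> Hz
        v[fired] = v_reset               # reset spiking neurons
    return rates
```

The cost scales with the number of neurons and input spikes, which is the motivation for replacing the ensemble with a single population density equation.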
The colored-synapse population density model proposed here is the first population density model to incorporate long-term synaptic plasticity. It computes faster than Monte Carlo simulation, qualitatively predicts the firing rates obtained from Monte Carlo simulations of all three neuron models, and its mean-field component quantitatively estimates the population averages of the neuronal state variables.
Long-term synaptic plasticity is an important neurophysiological phenomenon related to the cognitive and learning functions of the brain. Its duration ranges from tens of seconds to several hours, and its molecular mechanism is believed to involve the influx of calcium ions and the activation of N-methyl-D-aspartate (NMDA) receptors. Following this idea, this thesis adopts the calcium-based adaptive exponential integrate-and-fire (CbAdEx) model as a single-neuron model manifesting synaptic plasticity. The population density model is a method of modeling a collection of neurons. When more complex synaptic dynamics are incorporated, the state-space dimension of the master equation of the population density model increases, raising the computational load, so the dimension of the model must be reduced to maintain efficiency. In this thesis, the reduction is achieved with a mean-field model of synaptic conductance. Based on a diffusion approximation of the synaptic conductance and an adiabatic approximation of the neuronal adaptation current, a colored-synapse population density model (csPDM) is proposed. In addition, the feasibility of using a table lookup method to estimate the diffusion coefficients of the csPDM is investigated for simpler single-neuron models without synaptic plasticity, namely the exponential integrate-and-fire (EIF) model and the AdEx model. The simulation results show that, compared with other methods of estimating the diffusion coefficients, the table lookup method yields smaller relative errors, mostly below 10%, for both the EIF and AdEx models. The simulations of the csPDM of the CbAdEx model show that it can reproduce long-term potentiation and long-term depression and can qualitatively estimate the firing rate of the CbAdEx population. The csPDM proposed in this thesis is the first population density model that possesses long-term synaptic plasticity; its computational load is smaller than that of Monte Carlo simulation (MCS), and it can qualitatively estimate the firing rate of neuronal populations.
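The diffusion approximation of the synaptic conductance mentioned above amounts to replacing Poisson-driven shot noise by a Gaussian colored-noise (Ornstein–Uhlenbeck) process with matched first two moments. A minimal sketch of this moment matching, using Campbell's theorem for an exponentially decaying conductance kernel; the helper names and parameters are illustrative assumptions, not the thesis's formulation.

```python
import numpy as np

def conductance_diffusion_stats(rate, j_g, tau_s):
    """Stationary mean and standard deviation of a synaptic conductance
    driven by Poisson spikes at `rate` (Hz), each adding a jump j_g that
    decays with time constant tau_s. By Campbell's theorem:
    mean = rate*j_g*tau_s, variance = rate*j_g**2*tau_s/2."""
    mean_g = rate * j_g * tau_s
    var_g = rate * j_g ** 2 * tau_s / 2.0
    return mean_g, np.sqrt(var_g)

def ou_conductance(rate, j_g, tau_s, T=1.0, dt=1e-4, seed=0):
    """Euler-Maruyama sample path of the moment-matched
    Ornstein-Uhlenbeck (colored-noise) conductance:
    dg = -(g - mean)/tau_s dt + sigma*sqrt(2/tau_s) dW."""
    mean_g, sigma = conductance_diffusion_stats(rate, j_g, tau_s)
    rng = np.random.default_rng(seed)
    n_steps = int(round(T / dt))
    g = np.empty(n_steps)
    g[0] = mean_g
    for k in range(1, n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))   # Wiener increment
        g[k] = (g[k - 1]
                - (g[k - 1] - mean_g) / tau_s * dt
                + sigma * np.sqrt(2.0 / tau_s) * dW)
    return g
```

The stationary variance of this OU process equals sigma squared, so the colored-noise surrogate reproduces both the mean and the fluctuation size of the original shot-noise conductance.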