
Author: Wu, Hsi-Chih (伍錫志)
Title: Decoding nonhuman primates arm-movement from intracortical signals using bidirectional recurrent network and temporal attention module
Advisor: Yang, Shih-Hung (楊世宏)
Degree: Master
Department: Department of Mechanical Engineering, College of Engineering
Year of Publication: 2020
Academic Year of Graduation: 109
Language: English
Number of Pages: 94
Keywords: Brain-computer Interface, Motor Cortex, Neural Spiking Activity, Recurrent Neural Network, Temporal Attention Module, Attention Map
    Neural spiking signals provide a precise signal source for brain-computer interface tasks that decode the motor cortex. This study develops a decoder for motor cortical neural signals that improves decoding accuracy and surpasses current state-of-the-art results. The decoder predicts position, velocity, and acceleration in both the x and y directions, six kinematic variables in total, and the average prediction accuracy of all six variables exceeds previous results. Using a bidirectional recurrent neural network and a temporal attention mechanism, this study also provides a visualization tool for the decoder. The resulting attention maps can help researchers design or evaluate recurrent neural networks and provide information relevant to neural signal decoding. Current brain-computer interface studies choose how much past neural data a decoder should use either arbitrarily or by rule of thumb. The analysis tool provided here not only visualizes this neural information but also quantifies it, helping select the most appropriate history length for velocity and acceleration decoders. The information obtained from the attention maps is also consistent with neuroscience theory, giving the decoding results additional biological interpretation.
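
    The abstract above describes a decoder that pairs a bidirectional recurrent network with a temporal attention module over a window of past spike bins. The thesis code is not published on this page, so the following is only a minimal sketch, assuming a PyTorch implementation; the `SpikeDecoder` class name, the GRU backbone, and the layer sizes are illustrative assumptions, not the author's implementation.

    ```python
    import torch
    import torch.nn as nn

    class SpikeDecoder(nn.Module):
        """Sketch: bidirectional RNN + temporal attention over past spike bins.

        Input  x: (batch, tap_size, n_units)  -- binned spike counts per recorded unit
        Output y: (batch, 6)                  -- x/y position, velocity, acceleration
        """
        def __init__(self, n_units, hidden=128, n_kinematics=6):
            super().__init__()
            self.rnn = nn.GRU(n_units, hidden, batch_first=True, bidirectional=True)
            self.attn = nn.Linear(2 * hidden, 1)       # one score per time bin
            self.readout = nn.Linear(2 * hidden, n_kinematics)

        def forward(self, x):
            h, _ = self.rnn(x)                         # (batch, tap_size, 2*hidden)
            scores = self.attn(h).squeeze(-1)          # (batch, tap_size)
            alpha = torch.softmax(scores, dim=1)       # attention map over time bins
            context = (alpha.unsqueeze(-1) * h).sum(dim=1)  # attention-weighted summary
            return self.readout(context), alpha

    # Example: 50 time bins of 96-channel spike counts -> 6 kinematic variables
    decoder = SpikeDecoder(n_units=96)
    spikes = torch.rand(8, 50, 96)
    kinematics, attention_map = decoder(spikes)
    print(kinematics.shape, attention_map.shape)       # [8, 6] and [8, 50]
    ```

    Returning the attention weights alongside the kinematic prediction is what makes an attention map available for the kind of visualization the abstract describes.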

    Neural spike signals provide a remarkably precise signal source for motor cortex decoding in brain-computer interfaces (BCIs). In this research, we developed a neural signal decoder that predicts the hand movements of nonhuman primates with high accuracy, surpassing state-of-the-art methods. The decoder was evaluated on three types of kinematic variables (position, velocity, and acceleration) in both the x- and y-coordinates, and all achieved higher precision than previous studies. Using bidirectional recurrent neural networks (BRNNs) and a temporal attention module, we also provide an insightful visualization tool for the decoder. Our attention maps can help researchers design or evaluate RNNs and offer new perspectives on the neural signals. Current studies choose the tap size of a neural decoder arbitrarily or by empirical rules; we provide an analytical measure to visualize and quantify suitable tap sizes for velocity and acceleration decoders. Our attention patterns also agree with neuroscience theories, which helps explain our decoding results.
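
    The analytical tap-size measure mentioned above is not spelled out on this record page. The snippet below is therefore only a hedged illustration of one plausible way to quantify attention maps for that purpose; the `suggest_tap_size` function, the coverage threshold, and the synthetic data are assumptions for illustration, not the thesis's actual procedure.

    ```python
    import numpy as np

    def suggest_tap_size(attention_maps, coverage=0.95):
        """Hypothetical heuristic: average attention weights over trials, then
        return the smallest number of most-recent bins whose cumulative
        attention mass reaches the requested coverage.

        attention_maps: (n_trials, tap_size) array, each row summing to 1,
                        ordered from oldest bin to most recent bin.
        """
        mean_attn = attention_maps.mean(axis=0)
        # Walk backwards from the most recent bin and accumulate attention mass.
        cumulative = np.cumsum(mean_attn[::-1])
        return int(np.searchsorted(cumulative, coverage) + 1)

    # Example with synthetic maps in which recent bins carry most of the attention.
    rng = np.random.default_rng(0)
    maps = rng.dirichlet(np.linspace(0.1, 2.0, 50), size=200)  # (200 trials, 50 bins)
    print(suggest_tap_size(maps))  # number of past bins covering 95% of attention
    ```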

    Chinese abstract i
    Abstract ii
    Acknowledgment iii
    Table of Contents iv
    List of Tables vi
    List of Figures vii
    Nomenclature ix
    1 Introduction 1
    1.1 Why, what and how is invasive BCIs? 1
    1.2 Difference between center-out task and point-to-point pursuit task 2
    1.3 Unsolved problems related to point-to-point pursuit task 3
    1.4 Motivation and goals of our research 3
    1.5 Our contributions to invasive BCIs 4
    1.6 Delimitation 5
    2 Related Works 6
    2.1 Development and traditional measures of invasive BCIs 6
    2.2 Restrictions of traditional invasive BCIs 7
    2.3 Advantages of deep learning in BCIs 8
    2.4 Development of deep learning in invasive BCIs 9
    2.5 Unsolved problems in deep learning invasive BCIs 12
    2.6 Development of normalization in deep learning 12
    2.7 Development and related work of attention modules 13
    3 Neural Decoder Design 19
    3.1 Data acquisition 19
    3.2 Animal pursuit task with point-to-point trajectory 24
    3.3 Network architecture 27
    3.4 Loss function and optimizer 39
    3.5 Performance evaluation 40
    4 Experimental Results 42
    4.1 Implementation details 42
    4.2 Qualitative analysis 44
    4.3 Comparison with state-of-the-art neural decoders 48
    4.4 Internal dynamics of network 50
    4.5 Visualization of attention process 51
    4.6 Tap size selection 55
    4.7 Ablation study 61
    4.8 Failure case study 70
    5 Discussion 76
    5.1 Contributions of modification modules in neural decoder 76
    5.2 Why our neural decoder surpasses others 78
    5.3 The insight provided by our attention module 79
    5.4 Connections between our AI model and neuroscience 79
    5.5 Difficulty in position decoder 80
    6 Conclusions 83
    References 84
    Appendix A Monkey Indy 92
    Appendix B Monkey Loco 94

    Full text available to the public: 2026-01-01