| Graduate Student: | 曹義棟 (Susanto, Imam Yogie) |
|---|---|
| Thesis Title: | 基於深度混合式神經網路之膚電訊號情緒辨識 (Emotion Recognition from Galvanic Skin Response Signal Based on Deep Hybrid Neural Networks) |
| Advisor: | 胡敏君 (Hu, Min-Chun) |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science — Department of Computer Science and Information Engineering |
| Year of Publication: | 2019 |
| Academic Year: | 107 |
| Language: | English |
| Pages: | 36 |
| Keywords: | Galvanic Skin Response, Emotion Recognition, Hybrid Neural Network |
In this work, we propose a novel hybrid neural network for emotion recognition based on the Galvanic Skin Response (GSR) signal. Our method recognizes nine arousal levels and nine valence levels with accuracies of 86.18% and 78.25%, respectively. To recognize a user's emotion while they interact with Human-Computer Interaction (HCI) systems, we designed a real-time emotion recognition system based on our proposed model, which can help improve user experience.
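The record above does not specify the exact architecture of the hybrid network, but a common realization of a "deep hybrid" model for a 1-D physiological signal is a convolutional feature extractor followed by a recurrent (GRU) temporal summary and separate classification heads. The following is a minimal NumPy sketch of such a forward pass under that assumption; all layer sizes, weight initializations, and the 128-sample window length are illustrative, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_relu(x, w, b, stride=2):
    """Strided 1-D convolution with ReLU. x: (T, C_in), w: (K, C_in, C_out)."""
    K = w.shape[0]
    out = [np.tensordot(x[t:t + K], w, axes=([0, 1], [0, 1])) + b
           for t in range(0, x.shape[0] - K + 1, stride)]
    return np.maximum(np.stack(out), 0.0)          # (T', C_out)

def gru_last_hidden(x, Wz, Wr, Wh, hidden):
    """Single-layer GRU over the feature sequence; returns the final state."""
    h = np.zeros(hidden)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    for xt in x:
        z = sig(np.concatenate([xt, h]) @ Wz)       # update gate
        r = sig(np.concatenate([xt, h]) @ Wr)       # reset gate
        h_tilde = np.tanh(np.concatenate([xt, r * h]) @ Wh)
        h = (1 - z) * h + z * h_tilde
    return h

def softmax(a):
    e = np.exp(a - a.max())
    return e / e.sum()

# Illustrative dimensions: 128-sample single-channel GSR window,
# nine-way arousal and valence heads as in the abstract.
T, C_IN, C_OUT, K, H, N_CLASSES = 128, 1, 8, 5, 16, 9
w_conv = rng.normal(scale=0.1, size=(K, C_IN, C_OUT))
b_conv = np.zeros(C_OUT)
Wz, Wr, Wh = (rng.normal(scale=0.1, size=(C_OUT + H, H)) for _ in range(3))
W_arousal = rng.normal(scale=0.1, size=(H, N_CLASSES))
W_valence = rng.normal(scale=0.1, size=(H, N_CLASSES))

gsr = rng.normal(size=(T, C_IN))                    # stand-in for a normalized GSR window
feats = conv1d_relu(gsr, w_conv, b_conv)            # CNN stage: local features
h = gru_last_hidden(feats, Wz, Wr, Wh, H)           # GRU stage: temporal summary
p_arousal = softmax(h @ W_arousal)                  # nine-way arousal distribution
p_valence = softmax(h @ W_valence)                  # nine-way valence distribution
print(p_arousal.shape, p_valence.shape)
```

In a real-time setting, the same forward pass would be applied to a sliding window of the incoming GSR stream, with the two heads reporting the most probable arousal and valence levels per window.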