| Graduate Student: | 呂明諠 Lu, Ming-Hsuan |
|---|---|
| Thesis Title: | 具情緒感知之人性化居家關懷多媒體系統 (A Humanized Multimedia System with Emotional Cognition for Home Care Applications) |
| Advisor: | 王駿發 Wang, Jhing-Fa |
| Degree: | Master |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2016 |
| Academic Year: | 104 (ROC calendar) |
| Language: | English |
| Pages: | 50 |
| Keywords (Chinese): | 居家關懷、使用者辨識、情緒辨識、客製化、資訊檢索 |
| Keywords (English): | home care, user recognition, emotion recognition, customization, information retrieval |
In this thesis, we propose a humanized multimedia system with emotional cognition for home care applications. The system accompanies and interacts with the user, aiming to provide better companionship and care and thereby enhance the user's sense of well-being. It supports recognition of the current user, spoken dialogue, audio and video playback, information retrieval, emotion recognition with feedback, and contact assistance.
To achieve these functions, the system is divided into three parts: user recognition, semantic emotion recognition, and multimedia applications. User recognition precisely identifies the current user. Semantic emotion recognition classifies the user's speech into four emotion categories (happiness, anger, sadness, and neutral), and the system gives appropriate feedback according to the recognized emotion. The multimedia applications comprise more than ten services, including audio and video entertainment, information retrieval, and contact assistance.
In the experiments, we conducted an objective accuracy analysis of user recognition and emotion recognition. In the outside test, user recognition and emotion recognition reach accuracies of 96% and 70%, respectively. However, because emotional perception varies from person to person, the mean opinion score (MOS) was also applied as a subjective evaluation of emotion recognition. Finally, to assess the usability of the proposed framework, an MOS evaluation of the whole system was conducted; the resulting score of 4.4 indicates that the proposed system is promising.
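The emotion-to-feedback step described above can be sketched as a simple lookup from the four recognized classes to a system response. This is a minimal illustration only: the class names follow the thesis (happiness, anger, sadness, neutral), but the response texts and the `respond_to_emotion` helper are invented placeholders, not the actual system's feedback logic.

```python
# Hypothetical sketch: map a recognized emotion class to a feedback action.
# Response strings are invented for illustration; they do not come from the thesis.
FEEDBACK = {
    "happiness": "Glad to hear that! Shall I play your favorite music?",
    "anger": "I'm sorry you're upset. Would a calming video help?",
    "sadness": "That sounds hard. Would you like to call a family member?",
    "neutral": "Okay. Is there anything I can help you with?",
}

def respond_to_emotion(emotion: str) -> str:
    """Return a feedback message for one of the four recognized emotions.

    Unrecognized labels fall back to the neutral response.
    """
    return FEEDBACK.get(emotion, FEEDBACK["neutral"])
```

A dispatch table like this keeps the recognition module decoupled from the multimedia services: adding a new feedback behavior only changes the mapping, not the recognizer.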
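For reference, a mean opinion score is simply the average of individual 1-to-5 opinion ratings. The sketch below shows the arithmetic under that standard definition; the example ratings are invented and chosen only so the average lands on the 4.4 reported for the whole system, not actual evaluator data from the thesis.

```python
def mean_opinion_score(ratings):
    """Average a list of 1-5 opinion ratings into a single MOS value."""
    if not ratings:
        raise ValueError("MOS requires at least one rating")
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must lie on the 1-5 opinion scale")
    return sum(ratings) / len(ratings)

# Invented example ratings whose mean matches the reported system MOS:
print(mean_opinion_score([5, 4, 5, 4, 4]))  # → 4.4
```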