| Field | Value |
|---|---|
| Graduate Student: | 葉韋廷 Yeh, Wei-Ting |
| Thesis Title: | 應用 SAM 量表於醫療照護產品設計之語音情緒研究 (A Research of Emotional Speech in Medicare Product Design Using Self-Assessment Manikin) |
| Advisor: | 謝孟達 Shieh, Meng-Dar |
| Degree: | Master |
| Department: | Department of Industrial Design, College of Planning and Design |
| Year of Publication: | 2012 |
| Academic Year of Graduation: | 100 (ROC calendar) |
| Language: | Chinese |
| Pages: | 98 |
| Chinese Keywords: | SAM量表、醫療照護、產品設計、語音情緒、數量化一類 |
| English Keywords: | Self-Assessment Manikin, Medicare, Product Design, Emotional Speech, Quantification Theory Type I |
Within the framework of Kansei Engineering, this study applies the SAM (Self-Assessment Manikin) emotion scale to Mandarin Chinese speech, taking medicare applications, a growing future trend, as the speech task. The goal is to establish the emotional relationship between speech types and auditory perception, so that future product design can give greater weight to emotional perception and, in a consumer-driven economy, take users' auditory experience properly into account. First, the scale's three dimensions (pleasure, arousal, and dominance) are used to identify the basic attributes underlying variation in emotional speech. Next, a clustering procedure classifies the speech samples into auditory perception types. Finally, Quantification Theory Type I is used to determine the actual correlations and the magnitude of each attribute's influence. The results confirm that variation in speech characteristics is the main source of differences in emotional perception and distinguish four auditory perception types: I. low pleasure, low arousal, low dominance; II. low arousal, low dominance; III. high arousal; IV. high pleasure. The study recommends "high pleasure" as the primary selection target and proposes an application index for emotional speech covering three items: gender, pitch, and speed. These findings give designers an effective direction for choosing speech types when developing future human-machine interaction products, and should encourage the domestic application and development of speech technology in products.
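The analysis pipeline described above (per-sample SAM ratings on the pleasure, arousal, and dominance dimensions; clustering of auditory perception types; Quantification Theory Type I to estimate each attribute's influence) can be sketched in a few lines of Python. This is an illustrative reconstruction, not the thesis's own code: the sample design, factor levels, and ratings below are hypothetical stand-ins, and Quantification Theory Type I is implemented in its textbook form, ordinary least squares on dummy-coded categorical predictors.

```python
# Minimal sketch of the abstract's pipeline, under stated assumptions.
# All data and factor levels here are hypothetical stand-ins.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Hypothetical design: 12 speech samples = gender (2) x pitch (3) x speed (2),
# each summarized by its mean SAM rating on the three PAD dimensions.
genders = ["female", "male"]
pitches = ["low", "mid", "high"]
speeds  = ["slow", "fast"]
samples = [(g, p, s) for g in genders for p in pitches for s in speeds]
pad = rng.uniform(1, 9, size=(len(samples), 3))  # stand-in mean P, A, D scores

# Step 1: hierarchical (Ward) clustering of the PAD profiles; the thesis
# reports four auditory perception types, so the tree is cut at k = 4.
labels = fcluster(linkage(pad, method="ward"), t=4, criterion="maxclust")

# Step 2: Quantification Theory Type I. Each categorical level becomes a 0/1
# dummy column (one reference level per factor is dropped to avoid
# collinearity); the fitted OLS coefficients are the category scores.
def dummies(values, levels):
    return np.column_stack([[v == lv for v in values] for lv in levels[1:]])

X = np.column_stack([
    np.ones(len(samples)),                        # intercept
    dummies([g for g, _, _ in samples], genders),
    dummies([p for _, p, _ in samples], pitches),
    dummies([s for _, _, s in samples], speeds),
])
y = pad[:, 0]                                     # model the pleasure dimension
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print("cluster labels:", labels)
print("category scores:", dict(zip(
    ["intercept", "male", "mid pitch", "high pitch", "fast"], coef.round(2))))
```

Within each factor, the spread of the fitted category scores indicates that factor's relative influence on the rating, which is the sense in which gender, pitch, and speed serve as the application index the abstract proposes.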
1. Álvarez García, V.M.; Paule Ruiz, M.P.; Pérez Pérez, J.R. (2010), “Voice Interactive Classroom, a Service-Oriented Software Architecture for Speech-Enabled Learning,” Journal of Network and Computer Applications, 33(5): p. 603-610.
2. Agarwal, Anshu & Meyer, Andrew (2009), “Beyond Usability: Evaluating Emotional Response as an Integral Part of the User Experience,” CHI EA '09 Proceedings of the 27th international conference extended abstracts on Human factors in computing systems, p. 2919-2930.
3. Becker, Liza; Rompay, Thomas J.L. van; Schifferstein, Hendrik N.J.; Galetzka, Mirjam (2011), “Tough Package, Strong Taste: The Influence of Packaging Design on Taste Impressions and Product Evaluations,” Food Quality and Preference, 22(1): p. 17-23.
4. Bradley, Margaret M.; Lang, Peter J. (2007), “The International Affective Digitized Sounds (2nd Edition; IADS-2): Affective Ratings of Sounds and Instruction Manual,” NIMH Center for the Study of Emotion and Attention, Gainesville.
5. Brave, S.; Nass, C. (2003), “Emotion in Human–Computer Interaction,” The Human–Computer Interaction Handbook, Lawrence Erlbaum Associates, Mahwah, NJ.
6. Bradley, Margaret M.; Lang, Peter J. (1994), “Measuring Emotion: The Self-Assessment Manikin and the Semantic Differential,” Journal of Behavior Therapy and Experimental Psychiatry, 25(1): p. 49-59.
7. Chen, Li-Chieh; Chu, Po-Ying (2011), “Developing the Index for Product Design Communication and Evaluation From Emotional Perspectives,” Expert Systems with Applications, 39(2): p. 2011-2020.
8. Desmet, Pieter M. A. (2003), “Measuring Emotion: Development and Application of an Instrument to Measure Emotional Responses to Products,” Funology: From usability to enjoyment, MA: Kluwer Academic Press. p. 111-123.
9. Ferreiros, J.; J.M. Pardo; R. de Córdoba; J. Macias-Guarasa; J.M. Montero; F. Fernández; V. Sama; L.F. dʼHaro; G. González. (2011), “A Speech Interface for Air Traffic Control Terminals,” Aerospace Science and Technology, In Press, Corrected Proof.
10. Greated, Marianne (2011), “The Nature of Sound and Vision in Relation to Colour,” Optics & Laser Technology, 43(2): p. 337-347.
11. Hansakunbuntheung, C.; Tesprasit, V.; Siricharoenchai, R.; Sagisaka, Y. (2003), “Analysis and Modeling of Syllable Duration for Thai Speech Synthesis.” European Conference on Speech Communication and Technology (EUROSPEECH), p. 93-96.
12. Iwano, K.; Yamada, M.; Togawa, T.; Furui, S. (2002), “Speech-Rate-Variable HMM-Based Japanese TTS System”, ISCA TTS Workshop 2002.
13. Jordan, Patrick W. (2000), Designing Pleasurable Products: An Introduction to New Human Factors, Taylor & Francis, London.
14. Kamaruddin, Norhaslinda; Wahab, Abdul; Quek, Chai (2011), “Cultural Dependency Analysis for Understanding Speech Emotion,” Expert Systems with Applications. In Press, Corrected Proof.
15. Levy, Pierre; Kim, Dahyun; Tsai, Tung Jen; Lee, SeungHee; Yamanaka, Toshimasa (2009), “Colourful Rain–Experiencing Synaesthesia,” International Conference on Designing Pleasurable Products and Interfaces, DPPI09: p. 1-11.
16. Lang, Peter J.; Bradley, Margaret M.; Cuthbert, Bruce N. (2008), “The International Affective Picture System (IAPS): Affective Ratings of Pictures and Instruction Manual,” NIMH Center for the Study of Emotion and Attention, Gainesville.
17. Mehrabian, Albert; Russell, James A. (1974), An Approach to Environmental Psychology. Cambridge, MA, US: The MIT Press.
18. Nagamachi, Miyako (1995), “Kansei Engineering: A New Ergonomic Consumer-Oriented Technology for Product Development,” International Journal of Industrial Ergonomics, 15(1): p. 3-11.
19. Rodríguez, William R.; Saz, Oscar; Lleida, Eduardo (2011), “A Prelingual Tool for the Education of Altered Voices,” Speech Communication, In Press, Corrected Proof.
20. Rong, Jia; Chen, Yi-Ping Phoebe; Chowdhury, Morshed; Li, Gang (2007), “Acoustic Features Extraction for Emotion Recognition,” Computer and Information Science, 2007. ICIS 2007. 6th IEEE/ACIS International Conference on, p. 419-424.
21. Razak, Aishah Abdul; Komiya, Ryoichi; Izani, Mohamad; Abidin, Zainal (2005), “Comparison Between Fuzzy and NN Method for Speech Emotion Recognition,” Information Technology and Applications, 2005. ICITA 2005. Third International Conference on, p. 297-302.
22. Smith, Shana; Fu, Shih-Hang (2011), “The Relationships Between Automobile Head-up Display Presentation Images and Drivers’ Kansei,” Displays, 32(2): p. 58-68.
23. Spence, Charles; Gallace, Alberto (2011), “Tasting Shapes and Words,” Food Quality and Preference, 22(3): p. 290-295.
24. Schroder, Marc (2006), “Expressing Degree of Activation in Synthetic Speech,” Audio, Speech, and Language Processing, IEEE Transactions on, 14(4): p. 1128-1136.
25. Schnelle, Dirk; Lyardet, Fernando; Wei, Tao (2005), “Audio Navigation Patterns,” Proceedings of EuroPLoP 2005, p. 237-260.
26. Topol, Eric (2011), “Wireless Devices and Their Applications in Healthcare,” FUTURESCAN 2011 - Healthcare Trends and Implications 2011-2016, The Society for Healthcare Strategy and Market Development of the American Hospital Association, p. 37-42.
27. Topol, Eric (2009), “The Wireless Future of Medicine,” TEDMED 2009 conference talk. (www.ted.com/speakers/eric_topol.html)
28. Tsonos, Dimitrios; Xydas, Gerasimos; Kouroupetroglou, Georgios (2007), “A Methodology for Reader's Emotional State Extraction to Augment Expressions in Speech Synthesis,” Tools with Artificial Intelligence, 2007. ICTAI 2007. 19th IEEE International Conference on, 2: p. 218-225.
29. Vergara, Margarita; Mondragón, Salvador; Sancho-Bru, Joaquín Luis; Company, Pedro; Agost, María-Jesús (2011), “Perception of Products by Progressive Multisensory Integration. A Study on Hammers,” Applied Ergonomics, 42(5): p. 652-664.
30. Yeh, Jun-Heng; Pao, Tsang-Long; Lin, Ching-Yi; Tsai, Yao-Wei; Chen, Yu-Te (2011), “Segment-based Emotion Recognition From Continuous Mandarin Chinese Speech,” Computers in Human Behavior, 27(5): p. 1545-1552.
31. Yang, Chih-Chieh; Shieh, Meng-Dar (2010), “A Support Vector Regression Based Prediction Model of Affective Responses for Product Form Design,” Computers & Industrial Engineering, 59(4): p. 682-689.
32. 林俊男 (2001), “A Study of Image Perception and Evaluation of Artificial Sound Signals,” master's thesis, Department of Industrial Design, National Yunlin University of Science and Technology.
33. 高韻萍 (2003), “Matching Product Form Images with Music,” master's thesis, Institute of Applied Arts, National Chiao Tung University.
34. 黃凱郁 (2005), “A Hierarchical Study of Chinese Speech Interfaces for Elderly Users,” master's thesis, Department of Industrial Design, National Yunlin University of Science and Technology.
35. 郭柏祥 (2006), “A Study on the Relationship Between Product Form and Emotion: The Case of Electric Kettles,” master's thesis, Department of Industrial Design, National Yunlin University of Science and Technology.
36. 陳可欣 (2007), “A Study of the Influence of Bottled Water Bottle Form on Consumer Emotion,” master's thesis, Graduate Institute of Industrial Design, National Kaohsiung Normal University.
37. 張建成 (2007), “A Study of Form Styles and Design Approaches in Serialized Products: The Case of OLYMPUS Digital Cameras,” Journal of Design, 12(3): p. 1-16.
38. 蔡明勳 (2008), “An Interactive Multimedia Kiosk with Voice User Interface Design: A Museum Guide Example,” master's thesis, Department of Information Management, Nanhua University.
39. 張育碩 (2008), “A Study of the Relationship Between Musical Perception and Product Form: The Case of Lighting Fixtures,” master's thesis, Graduate Institute of Industrial Design, Tatung University.
40. 陳正倫 (2009), “Analysis and Recognition of Emotional Speech Using Artificial Neural Networks,” master's thesis, Department of Computer Science and Information Engineering, National Central University.
41. 黃妙雯 (2010), “The Relationship Between Visual and Auditory Kansei Images: The Case of Digital Cameras,” master's thesis, Department of Industrial Design, National Cheng Kung University.
42. Strongman, K. T. (1993), The Psychology of Emotion (translated by 游恆山), Wu-Nan, Taipei.
43. 鄭麗玉 (1993), Cognitive Psychology, Wu-Nan, Taipei.
44. 簡仁宗 (2003), “Speech Processing: Bridging the Human-Machine Interface,” Science Development, No. 361.
45. 廖峻峰 (2003), “Speech Technology and VoiceXML Applications,” NCCU Computer Center.
Full text available on campus from 2017-08-30.