
Author: Lee, Yi-Hui (李怡慧)
Title: Development of a Music Emotion Composition Analysis Algorithm Based on Music Features (基於音樂訊號特徵之歌曲情緒成分分析演算法之研發)
Advisor: Wang, Jeen-Shing (王振興)
Degree: Master
Department: Department of Electrical Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2013
Graduating Academic Year: 101
Language: Chinese
Pages: 90
Keywords: music features, music mood boundary detection, music emotion classification, music emotion composition analysis
    This thesis develops a music emotion composition analysis algorithm. The algorithm first uses a loudness feature to detect mood transition boundaries and divides a song into segments that each carry a single emotion. For each segment, a total of 50 features in four categories (dynamics, rhythm, pitch, and timbre) are computed. After normalization, Kernel-Based Class Separability Measurement (KBCS) is used for feature selection and Nonparametric Weighted Feature Extraction (NWFE) for dimensionality reduction; the reduced features serve as the classifier inputs. The classifier is a support vector machine (SVM) organized hierarchically: the first layer separates segments into high-arousal and low-arousal classes, and the second layer further splits each into positive-valence and negative-valence classes, so that each final class corresponds to one of the four emotions analyzed in this thesis: happy, tensional, sad, and peaceful. A total of 31 pieces of Western classical music annotated by music experts were used to validate the mood boundary detection, yielding an average recall rate of 79.10% and a precision rate of 53.76%. For emotion classification, 339 classical music clips were used: 174 clips annotated by college students to represent general listeners' perception of music, 45 clips annotated by professional music therapists, and 120 clips annotated by musicians, both representing expert judgments. The algorithm achieved an average accuracy of 86.94% and a recognition rate of 75.26% on the student-annotated data, 92.33% and 84.33% on the therapist-annotated data, and 87.56% and 78.12% on the musician-annotated data. These results demonstrate the feasibility of the proposed algorithm for music emotion composition analysis; future applications include playlist recommendation for general users, evaluation of music therapy outcomes, and program arrangement for concerts.
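    The two-layer hierarchy described above (arousal first, then valence within each arousal branch) can be made concrete with a short sketch. The following Python code is a minimal illustration using scikit-learn; it assumes the KBCS-selected, NWFE-reduced feature matrix X and binary arousal/valence labels already exist, and every name, kernel choice, and parameter here is illustrative rather than the thesis's actual implementation.

```python
import numpy as np
from sklearn.svm import SVC

def train_hierarchical_svm(X, y_arousal, y_valence):
    """Layer 1: one SVM for arousal; layer 2: one valence SVM per branch."""
    arousal_clf = SVC(kernel="rbf").fit(X, y_arousal)
    valence_clfs = {}
    for a in (0, 1):                        # 0 = low arousal, 1 = high arousal
        mask = (y_arousal == a)
        valence_clfs[a] = SVC(kernel="rbf").fit(X[mask], y_valence[mask])
    return arousal_clf, valence_clfs

def predict_emotion(x, arousal_clf, valence_clfs):
    """Map one reduced feature vector to one of the four emotion classes."""
    x = np.asarray(x).reshape(1, -1)
    a = int(arousal_clf.predict(x)[0])      # high vs. low arousal
    v = int(valence_clfs[a].predict(x)[0])  # positive vs. negative valence
    labels = {(1, 1): "happy", (1, 0): "tensional",
              (0, 0): "sad",   (0, 1): "peaceful"}
    return labels[(a, v)]
```

    Training a separate valence classifier for each arousal branch mirrors the hierarchy in the abstract: each leaf of the two-layer tree maps to exactly one of the four emotion classes.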

    This thesis proposes a music emotion composition analysis algorithm. First, the proposed algorithm detects mood boundaries in the input music based on its loudness feature. Using these boundaries, the music is divided into a number of clips, each containing a single emotion. For each music clip, a total of 50 features describing the dynamics, rhythm, pitch, and timbre of the music are extracted. Each feature is assigned a selection priority according to the kernel-based class separability (KBCS) measurement, and a nonparametric weighted feature extraction (NWFE) method is then applied for dimensionality reduction. With the reduced features, a hierarchical support vector machine classifies each music clip into one of four music emotion categories: happy, tensional, sad, and peaceful. In the experiments, a total of 31 pieces of Western classical music were used to validate the mood boundary detection algorithm; the average recall rate was 79.10% and the average precision rate was 53.76%. The performance of the music emotion classifier was evaluated on 339 music clips drawn from three groups: clips whose emotions were annotated by college students, clips annotated by music therapists, and clips annotated by musicians. The average accuracy and recognition rates were 86.94% and 75.26% for the student-annotated data, 92.33% and 84.33% for the therapist-annotated data, and 87.56% and 78.12% for the musician-annotated data, respectively. The effectiveness of the proposed algorithm has been validated by the experimental results. In the future, the proposed algorithm can be applied to online music recommendation for users, evaluation of the effectiveness of music therapy, and the arrangement of concert programs.
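    As a rough illustration of loudness-based mood boundary detection, the sketch below computes a frame-level RMS loudness envelope and flags abrupt changes in it. The thesis does not publish its exact procedure, so the smoothing window, the adaptive threshold, and the use of librosa are assumptions for illustration only.

```python
import numpy as np
import librosa

def detect_mood_boundaries(path, hop=512, win=43, k=2.5):
    """Flag abrupt changes in a smoothed RMS loudness envelope."""
    y, sr = librosa.load(path, sr=None, mono=True)
    rms = librosa.feature.rms(y=y, hop_length=hop)[0]           # loudness proxy
    smooth = np.convolve(rms, np.ones(win) / win, mode="same")  # ~0.5 s at 44.1 kHz
    delta = np.abs(np.diff(smooth))                    # frame-to-frame change
    thresh = delta.mean() + k * delta.std()            # adaptive threshold
    frames = np.where(delta > thresh)[0]
    if frames.size == 0:
        return np.array([])
    # keep only the first frame of each run of consecutive detections
    starts = frames[np.insert(np.diff(frames) > win, 0, True)]
    return librosa.frames_to_time(starts, sr=sr, hop_length=hop)
```

    Detected boundary times could then be compared against expert annotations within a tolerance window to compute recall and precision rates of the kind reported above.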

    Table of Contents
    Chinese Abstract ... i
    English Abstract ... iii
    Acknowledgments ... v
    Contents ... vi
    List of Tables ... viii
    List of Figures ... x
    Chapter 1 Introduction ... 1
    1.1 Research Background and Motivation ... 1
    1.2 Literature Review ... 2
    1.3 Research Objectives ... 9
    1.4 Thesis Organization ... 9
    Chapter 2 Architecture of the Music Emotion Composition Analysis Algorithm Based on Music Experts' Recommendations ... 10
    2.1 Algorithm Architecture ... 10
    2.2 Gold Standard Annotation ... 11
    Chapter 3 Music Emotion Composition Analysis Algorithm ... 15
    3.1 Data Preprocessing ... 15
    3.2 Feature Extraction ... 15
    3.3 Music Mood Boundary Detection Algorithm ... 29
    3.4 Music Emotion Classification Algorithm ... 32
    3.5 Music Emotion Composition Analysis Algorithm ... 45
    Chapter 4 Experimental Results ... 46
    4.1 Music Mood Boundary Detection ... 46
    4.2 Music Emotion Classification ... 49
    Chapter 5 Discussion ... 69
    5.1 Music Mood Boundary Detection ... 69
    5.2 Music Emotion Classification ... 69
    Chapter 6 Conclusions and Future Work ... 84
    6.1 Conclusions ... 84
    6.2 Future Work ... 84
    References ... 86


    Full-text availability: on campus, open access from 2023-12-31; off campus, open access from 2023-12-31.