
Author: Chen, Ching-Yeh
Title: Automated Phrase Analysis of Sonatas
Advisor: Su, Wen-Yu
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2019
Academic year of graduation: 107
Language: Chinese
Pages: 53
Keywords: Chord Recognition, Key Analysis, Cadence Detection, Rhythm Complexity, Phrase Segmentation
  • A musical phrase is the most basic unit of formal analysis. In the field of music information retrieval, a large body of work has studied phrase segmentation for monophonic and polyphonic music, but homophonic music has received little attention. Taking 18th- and 19th-century sonatas as a case study, this thesis proposes an automated tool that analyzes harmony, key, cadences, rhythm, and phrase endpoints.
    The sonata is the most common and most artistically representative form of the Classical period. As tonal music, its melody and harmony have a clear sense of direction, most often aimed at a cadence. Composers also use modulation to set segments in different keys in sharp contrast. Tonal music employs several common cadence types: some carry a sense of closure, bringing the music to a satisfying end, while others are unstable, making the music feel compelled to continue. A cadence often "slows down" the music with a note of longer duration, and a phrase usually ends with a cadence.
    We improve a previously proposed template-matching chord recognition algorithm, and we determine the key of a piece from the diatonic scale and its diatonic chords. Finally, we combine key, chord, and note features to infer the positions of cadences.
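The template-matching idea can be sketched roughly as follows. This is a minimal illustration of scoring chord templates (and a diatonic scale) against a pitch-class histogram; the function names (`pc_histogram`, `best_chord`, `best_key`) are invented for this sketch and are not the thesis implementation, which additionally weights the bass note:

```python
MAJOR_TRIAD = (0, 4, 7)   # intervals above the root, in semitones
MINOR_TRIAD = (0, 3, 7)
MAJOR_SCALE = (0, 2, 4, 5, 7, 9, 11)  # diatonic scale degrees

def pc_histogram(midi_notes):
    """12-bin pitch-class histogram of a segment's MIDI note numbers."""
    hist = [0] * 12
    for n in midi_notes:
        hist[n % 12] += 1
    return hist

def best_chord(hist):
    """Score every major/minor triad template; return (root, quality)."""
    best, best_score = None, -1
    for root in range(12):
        for quality, triad in (("maj", MAJOR_TRIAD), ("min", MINOR_TRIAD)):
            score = sum(hist[(root + i) % 12] for i in triad)
            if score > best_score:
                best, best_score = (root, quality), score
    return best

def best_key(hist):
    """Pick the major key whose diatonic scale covers the most notes."""
    return max(range(12),
               key=lambda tonic: sum(hist[(tonic + d) % 12] for d in MAJOR_SCALE))
```

For instance, the C major triad C4–E4–G4 (MIDI 60, 64, 67) matches the template with root 0 and quality "maj".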
    Beyond observing key and harmonic progression, phrases can also be identified through rhythmic repetition and variation. We analyze the rhythm complexity of the left and right hands of each sonata separately. The right hand is taken as the melody, the outer voice of the piece, whose rhythm changes frequently and is complex; the left hand is taken as the accompaniment, giving the melody stable harmonic support or response, with a fixed and simple rhythm. After quantifying the rhythm complexity of both hands, we group beats by continuity and combine the result with the detected cadences to locate phrase boundaries.
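The quantify-then-group step can be illustrated as below. The thesis quantifies syncopation-based rhythm complexity; this sketch substitutes a simple onsets-per-beat count and then groups consecutive beats of equal complexity, so it is a stand-in for the idea, not the actual measure:

```python
def onsets_per_beat(onsets, n_beats):
    """onsets: note-onset times in beats (floats). Returns one count per beat,
    a crude stand-in for a rhythm-complexity value."""
    counts = [0] * n_beats
    for t in onsets:
        if 0 <= int(t) < n_beats:
            counts[int(t)] += 1
    return counts

def group_by_continuity(counts):
    """Split the beat axis into maximal runs of equal complexity.
    Returns (start_beat, end_beat, value) triples; run boundaries are
    candidate phrase boundaries to be cross-checked against cadences."""
    groups, start = [], 0
    for i in range(1, len(counts) + 1):
        if i == len(counts) or counts[i] != counts[start]:
            groups.append((start, i, counts[start]))
            start = i
    return groups
```

For example, the complexity sequence [2, 2, 4, 4, 4, 1] yields three runs, with candidate boundaries at beats 2 and 5.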
    For data, we selected four Classical-period composers, Haydn, Mozart, Clementi, and Beethoven, and took the first movement of two sonatas by each. Two groups of music teachers manually annotated the chords, keys, cadences, phrases, and periods. Compared with previous work, we advance from cadence analysis to phrase analysis, and for more precise evaluation we measure in "beats" rather than the earlier "bars". Against the annotations, our algorithm already provides a preliminary phrase segmentation.
    Across 1,283 bars in total, the F-measure for identifying the 512 phrases is 41%.
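A boundary F-measure of this kind can be computed as follows: each predicted phrase ending is matched to at most one annotated ending within a tolerance measured in beats. The half-beat tolerance used here is an assumption for illustration, not the thesis's evaluation setting:

```python
def boundary_f_measure(predicted, annotated, tol=0.5):
    """F-measure over phrase-boundary positions (in beats).
    A prediction counts as a true positive if it lies within `tol`
    beats of an annotated boundary not already matched."""
    matched = set()
    tp = 0
    for p in predicted:
        for i, a in enumerate(annotated):
            if i not in matched and abs(p - a) <= tol:
                matched.add(i)
                tp += 1
                break
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(annotated) if annotated else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```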

    A musical phrase is the most basic unit of musical form. In the field of music information retrieval (MIR), there has been extensive research on segmenting the phrases of monophonic and polyphonic music, but phrase segmentation of homophonic music is rarely discussed. This study focuses on sonatas of the 18th and 19th centuries and proposes an automatic tool to analyze harmony, tonality, cadence, rhythm, and phrases.
    The sonata is the most common and most artistically representative musical form of the Classical period. As tonal music, its melody and harmony have clear directionality, with the cadence as their regular goal. Composers often use modulation to set segments in different keys in sharp contrast. Various kinds of cadences are used in tonal music: some give the music a proper, conclusive ending, while others are so unstable that the music seems driven to continue. A cadence usually contains a longer note that slows the music down, and a phrase usually ends with a cadence.
    This study improves on the template-matching chord recognition algorithm proposed in previous work. Moreover, we identify the tonality using the diatonic scale and diatonic chords. Finally, the position of each cadence is inferred from the combined features of tonality, chord, and notes.
    In addition to observing tonality and harmony, phrases can also be identified through repetition and variation in rhythm. This study analyzes the rhythm complexity of the sonata's left and right hands separately. The right hand is treated as the melody, the outer voice of the composition, whose rhythm changes frequently and is complex. The left hand is treated as the accompaniment, providing stable harmonic support or response to the melody, with a fixed and simple rhythm. After quantifying the rhythm complexity of both hands, the beats are grouped by continuity, and this grouping is combined with the detected cadences to find phrase boundaries.
    For the dataset, we selected four Classical-period composers, Haydn, Mozart, Clementi, and Beethoven, and chose the first movement (in sonata form) of two sonatas by each. All the chords, tonalities, cadences, phrases, and periods were manually annotated by two groups of musicians. Compared with the previous study, we put more effort into phrase analysis than cadence analysis. To evaluate more accurately, we use the "beat" rather than the "bar" as the unit. The results show that our algorithm can provide a preliminary segmentation of phrases.
    In a total of 1,283 bars, the F-measure for the 512 phrases is 41%.

    LIST OF TABLES
    LIST OF FIGURES
    Chapter 1 Introduction
      1.1 Motivation
      1.2 Background
    Chapter 2 Related Work
      2.1 Chord Recognition
      2.2 Key Estimation
      2.3 Cadence Detection
      2.4 Syncopation Quantification
    Chapter 3 Dataset
    Chapter 4 Method
      4.1 Chord Recognition – Bass Weight
      4.2 Key Estimation – Nearly Related Key
      4.3 Rhythm Extraction
    Chapter 5 Evaluation
      5.1 Chord Results
      5.2 Key Results
      5.3 Cadence Detection Results
      5.4 Sub-phrase, Phrase and Period Results
    Chapter 6 Conclusion and Future Work
    Chapter 7 References

    Full text available on campus: 2024-08-01
    Full text available off campus: 2024-08-01