
Author: Wu, Chi-Keng (吳季耕)
Title: Emotion Analysis Using Respiratory Activity Signal
Advisor: Chung, Pau-Choo (詹寶珠)
Degree: Doctoral
Department: Institute of Computer & Communication Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2018
Academic year of graduation: 106
Language: English
Number of pages: 102
Keywords: Affective computing, Respiration, PPG (photoplethysmography)

    Human respiration has been reported to be strongly correlated with emotional state. However, the respiratory signal obtained using conventional physiological sensors not only reflects changing emotions and emotion intensities but also contains motion artifacts (MAs). The resulting information ambiguity limits the practical application of the respiratory signal as a means of reliable affective-state appraisal. This dissertation therefore proposes a method for extracting representative Emotion Elicited Segments (EESs) from the respiratory signal so that the affective state of the individual can be determined more reliably. The EES extraction process combines two procedures: (1) Mutual Information Based Emotion Relevance Feature Ranking based on the Dynamic Time Warping distance (MIDTW); and (2) emotion clustering analysis based on Constraint-based Elicited Segment Density (CESD). Owing to the innate regularity of the respiration signal, the respiration pattern under a particular emotional state is relatively quasi-homogeneous over time. Accordingly, a parameter-free Respiration Quasi-Homogeneity Segmentation (RHS) algorithm is proposed for partitioning the respiration signal into quasi-homogeneous segments, from which the EESs can then be extracted. Experimental results obtained for five prototypical emotions ("love", "sadness", "joy", "anger", and "fear") show that the proposed segmentation and extraction methodology enables the EESs to be identified reliably.
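    The MIDTW procedure above ranks respiratory features by mutual information computed over Dynamic Time Warping distances between elicited segments. As a minimal illustration of the distance it relies on, the following is a standard textbook DTW between two 1-D sequences; the function name and absolute-difference cost are illustrative choices, not the thesis's implementation:

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two 1-D sequences.

    Textbook sketch: cost[i, j] is the minimal cumulative cost of
    aligning the first i samples of a with the first j samples of b.
    """
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])          # local cost
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]
```

    Because the warping path may stretch or compress time, two respiration segments with the same waveform but different local pacing (e.g., `[1, 1, 2, 3]` versus `[1, 2, 3]`) obtain a distance of zero.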
    However, ambulatory emotion detection requires a portable respiratory sensor. Recent studies have suggested that the photoplethysmogram (PPG) can be used to monitor respiratory activity, and the relationship between the PPG and the respiratory rate (RR) has been widely investigated. However, the PPG sensor (pulse oximeter) is highly sensitive to MAs, and consequently the RR features derived from the four respiratory-induced variations of the PPG (intensity, frequency, amplitude, and pulse width) may take significantly inconsistent values. To address this issue, an adaptive fusion approach based on the Kalman filter (KF) is proposed to fuse the four RR features extracted from the PPG signal. The model exploits inter-feature coherence and intra-feature statistical changes to construct the measurement process of the KF and the four RR state processes corresponding to the intensity, frequency, amplitude, and pulse-width variations, respectively.
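    As an illustrative sketch of the fusion idea only (not the thesis's exact model), a scalar Kalman filter can sequentially fold in the four RR feature streams at each time step. Here the measurement-noise variances are assumed fixed and known, whereas the dissertation derives them adaptively from inter- and intra-feature statistics; all names and default parameters are hypothetical:

```python
import numpy as np

def fuse_rr(measurements, variances, q=0.01, x0=15.0, p0=4.0):
    """Fuse four noisy RR feature streams with a scalar Kalman filter.

    measurements: sequence of length-4 RR estimates per time step, one
        per respiratory-induced PPG variation (intensity, frequency,
        amplitude, pulse width).
    variances: length-4 measurement-noise variances (assumed known in
        this sketch).
    """
    x, p = x0, p0          # RR state estimate and its variance
    fused = []
    for z in measurements:
        p = p + q          # predict: random-walk RR state, process noise q
        # update: sequential scalar updates, one per feature
        for zi, ri in zip(z, variances):
            k = p / (p + ri)           # Kalman gain for this feature
            x = x + k * (zi - x)
            p = (1.0 - k) * p
        fused.append(x)
    return np.array(fused)
```

    With consistent inputs the fused estimate settles near the common RR; a feature stream assigned a large variance contributes correspondingly little to each update, which is the mechanism the adaptive weighting exploits.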
    The KF model, however, must neglect intervals of low signal quality, which may limit its application. Accordingly, an Adaptive Fusion with Prediction (AFP) model is further proposed for long-term continuous RR estimation from PPG signals. The AFP model incorporates a Recurrent Neural Network (RNN), which performs online prediction learning when the PPG signal quality is good and RR prediction when MA interference occurs. Moreover, the AFP model dynamically adapts its feature fusion strategy, rendering the RR estimation process more robust to the effects of MA interference. Notably, through the real-time learning capability of the RNN, the personal PPG characteristics of the individual are incorporated into the RR estimation process, thereby facilitating individualized RR estimation and providing a promising technique for long-term real-time affective detection.
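    The quality-driven mode switch at the heart of the AFP model can be sketched as follows. This is a caricature under loudly stated assumptions: a simple exponential-smoothing forecaster stands in for the thesis's RNN, the quality threshold of 0.8 is an invented placeholder, and the class and method names are hypothetical:

```python
class AFPSketch:
    """Control-flow sketch of the Adaptive Fusion with Prediction idea.

    An exponential-smoothing forecaster substitutes for the RNN; the
    point illustrated is the learning/prediction mode switch, not the
    predictor itself.
    """

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.level = None  # last learned RR level (stand-in for RNN state)

    def step(self, fused_rr, signal_quality, threshold=0.8):
        if self.level is None:
            self.level = fused_rr
        if signal_quality >= threshold:
            # Learning mode: PPG quality is good, so trust the fused
            # measurement and update the predictor toward it.
            self.level += self.alpha * (fused_rr - self.level)
            return fused_rr
        # Prediction mode: MAs corrupt the PPG, so fall back on the
        # predictor's output instead of the unreliable measurement.
        return self.level
```

    For example, after learning on a clean sample near 14 breaths/min, a corrupted low-quality sample is ignored and the predicted level is returned instead.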

    Abstract IV
    Abstract (in Chinese) VI
    Acknowledgements VIII
    Chapter 1 Introduction 1
    Part I: Representative Emotion Segment Estimation Using Respiratory Signal
    Chapter 2 Emotion Model and Emotion-Eliciting Experiment 10
    Chapter 3 Respiratory Quasi-homogeneity Segmentation (RHS) Algorithm 14
        3.1 Introduction 14
        3.2 Signal Transformation 16
        3.3 Top-Down Splitting 17
        3.4 Quasi-homogeneity Hypothesis Test 19
        3.5 Numerical Simulation for RHS Segmentation Algorithm 22
    Chapter 4 Representative Emotion Elicited Segment Extraction and Classification 26
        4.1 Introduction 26
        4.2 Respiratory Segment Feature Measurement 28
            4.2.1 Respiratory Features 28
            4.2.2 Individual Self-adjustment Feature Standardization 29
        4.3 Mutual Information Based Emotion Relevance Feature Ranking Based on DTW Distance (MIDTW) 30
        4.4 Constraint-based Elicited Segment Density (CESD) Analysis 32
        4.5 Experiment Results 33
            4.5.1 Preprocessing 33
            4.5.2 Automatic Segmentation 34
            4.5.3 Extraction of Representative Emotion Elicited Segments (EESs) 35
            4.5.4 Classification 37
            4.5.5 Evaluation 41
        4.6 Summary 45
    Part II: Respiratory Rate Estimation from PPG Signals with Motion Artifacts
    Chapter 5 Respiratory Rate Estimation from PPG Signals Using Kalman Filter 46
        5.1 Introduction 46
        5.2 Extraction of Respiratory Rate Features in PPG 47
            5.2.1 Pulsatile Component Extraction 47
            5.2.2 Pulse Wave Segmentation 48
            5.2.3 PPG Signal Quality Estimation 48
            5.2.4 Extraction of Respiratory-induced Variations 52
        5.3 Kalman Filter Model for Respiratory Rate Fusion 54
            5.3.1 Process Model Description 54
            5.3.2 Update Equations 56
            5.3.3 Fusion Policy and RR Estimation 57
        5.4 Experiment Results and Discussion 58
            5.4.1 PPG Experiment Data 58
            5.4.2 Evaluation 59
            5.4.3 Results and Discussion 60
    Chapter 6 Respiratory Rate Estimation from PPG Signals Using an Adaptive Fusion with Prediction Model (AFP) 65
        6.1 Introduction 65
        6.2 Adaptive Fusion with Prediction Model 67
            6.2.1 RNN Model 71
        6.3 Results and Discussion 75
            6.3.1 Performance Assessment 75
            6.3.2 Performance 79
            6.3.3 Effects of Parameter Settings 82
            6.3.4 Comparison 84
    Chapter 7 Conclusions 87
    References 92

