
Author: Hsieh, Hsin-Lung (謝欣龍)
Thesis title: Some Studies on Independent Component Analysis and Blind Source Separation (若干獨立成份分析與未知訊號分離之研究)
Advisor: Chien, Jen-Tzung (簡仁宗)
Degree: Doctor of Philosophy
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of publication: 2012
Academic year of graduation: 100 (2011-2012)
Language: English
Number of pages: 107
Keywords: Blind source separation, independent component analysis, nonnegative matrix factorization, speech recognition, online Bayesian learning
    Independent component analysis (ICA) is an important topic in machine learning. In the cocktail-party problem, where several speakers talk at the same time, ICA can separate the voices coming from the different source speakers. ICA is also an unsupervised learning algorithm that uncovers the independent factors hidden in observed signals; for speech, it can extract characteristics such as accent, gender, and even the channel and noise conditions of different environments.
    This dissertation proposes several ICA and blind source separation (BSS) algorithms and applies them to multimedia systems for speech recognition, speech separation, and music separation. For speech recognition, we apply fast ICA to the mean vectors of speaker-dependent hidden Markov models to find independent voices as basis vectors and construct an independent space that represents speaker characteristics. This independent space expresses speaker characteristics more efficiently than the eigenvoices, or the eigenspace, trained by principal component analysis, and speaker adaptation with this method achieves higher speech recognition accuracy in noisy environments.
    For speech separation, we first propose a novel convex divergence to measure the independence between signals. It is derived by substituting a family of convex functions into Jensen's inequality, and different convexity parameters realize different convex divergences. With this measure and a nonparametric probability density function, we develop convex divergence ICA (C-ICA) and use it to estimate the demixing parameters for speech separation. Experiments show that the method reaches parameter convergence in fewer iterations, and the signal-to-interference ratio (SIR) of the separated speech is significantly improved.
    We also develop a Bayesian learning rule for ICA to solve separation problems with nonstationary source signals and mixing systems in the real world. This dissertation proposes nonstationary Bayesian ICA (NB-ICA) based on online learning, which continuously tracks the statistics of the source signals and the mixing matrix from the mixed signals observed online. Through a mechanism in which the prior and posterior distributions are propagated and updated from frame to frame, the method compensates the system parameters in a changing environment and detects the varying sources and their number, overcoming nonstationary conditions in which sources move or suddenly appear and disappear; variational Bayesian inference is used to estimate the model parameters of each frame. In addition, to represent temporally-correlated mixing coefficients and source signals, this dissertation proposes the online Gaussian process ICA (OGP-ICA) algorithm, in which Gaussian processes with Bayesian learning capability characterize, from the mixed signals observed online, the temporal variations of the mixing coefficients and source signals in nonstationary environments. OGP-ICA is realized via variational Bayesian inference and achieves good speech separation performance under various nonstationary scenarios.
    For music separation, this dissertation develops an NMF-ICA algorithm based on nonnegative matrix factorization (NMF) and applies it to the separation of speech and music signals. NMF represents observed data with a linear model based on a parts-based representation. We transform the source signals by their corresponding cumulative distribution functions and build a nonnegative matrix through nonparametric quantization, where each matrix entry represents the joint probability density of the transformed signal values, and NMF is applied to find the demixing matrix. This dissertation further proposes a Bayesian NMF with group sparsity (GS-BNMF) algorithm for single-channel separation, which decomposes music into rhythmic and harmonic sources. Sparse priors based on the Laplacian scale mixture distribution model the combination coefficients of the common basis and the individual basis, which alleviates the model over-estimation problem; the shared (rhythmic) and residual (harmonic) signals are then separated to achieve single-channel music separation. Experiments on several music separation tasks verify the effectiveness of the method.

    Keywords: blind source separation, independent component analysis, nonnegative matrix factorization, speech recognition, online Bayesian learning.

    Independent component analysis (ICA) is one of the most important research topics in machine learning. ICA provides a fundamental mechanism for solving the cocktail-party problem by separating the mixed signals of different source speakers. Basically, ICA is an unsupervised learning algorithm that can discover the latent clusters or independent components in observation data. For example, the latent clusters in speech signals may reflect accent, gender, channel, and noise conditions.
    This dissertation presents several studies on ICA and blind source separation (BSS) for multimedia applications including speech recognition, speech separation, and music separation. In the application of speech recognition, we establish the independent voices, or span the independent space, by finding independent components from a set of speaker-specific hidden Markov model mean vectors through the fast ICA algorithm. Compared with the eigenvoices or eigenspace constructed via principal component analysis, independent voices achieve greater information redundancy reduction when building the speaker space for rapid speaker adaptation, and they obtained higher recognition accuracy than eigenvoices in noisy speech recognition.
    In the application of speech separation, we first propose a new convex divergence as a measure of independence. This measure is derived by substituting the joint distribution and the product of marginal distributions into a general convex function through Jensen's inequality. The convexity parameter of the convex function is adjustable to realize different divergence measures. By further incorporating a nonparametric density function based on the Parzen window, we develop the convex divergence ICA (C-ICA) algorithm and apply it to estimate the demixing matrix for speech separation. We show that the proposed C-ICA with a specialized convexity attains the best convergence property among the compared ICA algorithms, and the signal-to-interference ratios (SIRs) of the demixed signals are significantly improved.
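The Jensen's-inequality construction can be sketched numerically: for a convex f, the gap alpha*f(p) + (1-alpha)*f(q) - f(alpha*p + (1-alpha)*q) is nonnegative and vanishes when p = q, so summing it over the joint distribution p and the product of marginals q yields an independence measure. The toy version below uses f(p) = p log p and a 2-D histogram in place of the dissertation's convex function family and Parzen-window estimator.

```python
# Illustrative Jensen-difference divergence between the joint pmf of two
# signals and the product of their marginals; zero iff the signals look
# independent at the histogram resolution. Assumptions: f(p) = p*log(p)
# and histogram density estimates, which differ from the C-ICA choices.
import numpy as np
from scipy.special import xlogy

def jensen_divergence(x, y, bins=20, alpha=0.5):
    f = lambda p: xlogy(p, p)                      # convex, with f(0) = 0
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    joint = joint / joint.sum()                    # empirical joint pmf
    prod = np.outer(joint.sum(axis=1), joint.sum(axis=0))  # product of marginals
    # By Jensen's inequality every cell of this gap is >= 0.
    gap = alpha * f(joint) + (1 - alpha) * f(prod) - f(alpha * joint + (1 - alpha) * prod)
    return gap.sum()

rng = np.random.RandomState(1)
s = rng.randn(20000)
independent = jensen_divergence(s, rng.randn(20000))
dependent = jensen_divergence(s, s + 0.1 * rng.randn(20000))
print(independent, dependent)  # dependent pair scores much higher
```

With alpha = 0.5 and this choice of f the measure coincides with the Jensen-Shannon divergence; other convexity parameters give the other members of the family.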
    In addition, we develop Bayesian learning for ICA and apply it to speech separation in nonstationary environments. The nonstationary Bayesian ICA (NB-ICA) is built on online learning with a recursive Bayesian algorithm, where the probability model for the mixed signals is based on a noisy ICA model. We adopt conjugate priors in Bayesian learning so that the reproducible prior/posterior pairs enable an online learning mechanism; this yields a never-ending, frame-by-frame learning procedure for speech separation. An automatic relevance determination (ARD) parameter is introduced as an indicator of the number of source signals, so the NB-ICA algorithm handles mixing systems in which sources move or abruptly appear and disappear. The algorithm is implemented via variational Bayesian (VB) inference and demonstrated to be effective in the experiments. On the other hand, we propose an online Gaussian process ICA (OGP-ICA) in which the temporally-correlated source signals and mixing coefficients within a frame are compensated by a Gaussian process (GP), a nonparametric Bayesian method for delicate modeling of temporal information in time-series signals. VB inference is performed to realize OGP-ICA for nonstationary and temporally-correlated source separation, and we obtain significant improvements in the SIRs of the demixed speech signals.
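The prior/posterior recursion at the heart of the online learning mechanism can be shown on a toy conjugate model: each frame's posterior becomes the next frame's prior, so the estimate tracks a drifting parameter. This sketch tracks a drifting mean under Gaussian noise with known variance; NB-ICA applies the same recursion to the mixing-matrix and source parameters through variational Bayes, which is not reproduced here.

```python
# Toy online Bayesian recursion: conjugate Normal prior on a drifting mean,
# posterior of frame t reused (slightly inflated) as prior of frame t+1.
# The drift-inflation constant 0.05 is an illustrative assumption.
import numpy as np

rng = np.random.RandomState(2)
noise_var = 1.0
mu0, var0 = 0.0, 10.0                  # initial prior N(mu0, var0)
true_mean = 0.0
history = []
for frame in range(50):
    true_mean += 0.1                   # nonstationary drift of the true parameter
    x = true_mean + np.sqrt(noise_var) * rng.randn(20)   # one frame of mixed data
    n = len(x)
    # Conjugate Normal update: combine prior precision with likelihood precision.
    post_var = 1.0 / (1.0 / var0 + n / noise_var)
    post_mu = post_var * (mu0 / var0 + x.sum() / noise_var)
    # Posterior (plus a small variance inflation for drift) becomes next prior.
    mu0, var0 = post_mu, post_var + 0.05
    history.append(post_mu)
print(history[-1])  # estimate tracks the drifting mean
```

Without the variance inflation the prior would become overconfident and stop tracking; that role is played in NB-ICA by the frame-wise compensation of the nonstationary parameters.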
    In the application of speech and music separation, we introduce a nonnegative matrix factorization ICA (NMF-ICA) algorithm. Basically, NMF performs a parts-based representation of observed signals through a linear model. Our idea is to transform the source signals by their cumulative distribution functions and perform nonparametric quantization to construct a nonnegative matrix in which each entry represents the joint probability density of two transformed signals; NMF is then realized to carry out the proposed NMF-ICA for speech and music separation. We also develop a solution to single-channel separation and apply it to decompose music signals into rhythmic and harmonic signals. A Bayesian NMF with group sparsity (GS-BNMF) is established, in which music signals are represented by a group of common basis vectors and a group of individual basis vectors. The sensing weights are modeled by a Laplacian scale mixture distribution, a sparse prior used for sparse representation, so that the over-estimation problem is alleviated. We estimate the rhythmic signals from the common basis vectors and the harmonic signals from the individual basis vectors, and the experiments on single-channel separation of music signals show the effectiveness of GS-BNMF.
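The parts-based linear model underlying these methods can be sketched with the classic multiplicative-update NMF of Lee and Seung, which minimizes the Frobenius reconstruction error while keeping both factors nonnegative. This is only the baseline factorization; the quantized joint-density matrix of NMF-ICA and the Bayesian group-sparsity machinery of GS-BNMF are built on top of it and are not shown.

```python
# Sketch of NMF with Lee-Seung multiplicative updates: V ~= W @ H with
# W, H >= 0. Random rank-4 data; sizes and iteration count are assumptions.
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Factor nonnegative V (m x n) into W (m x r) and H (r x n)."""
    rng = np.random.RandomState(seed)
    m, n = V.shape
    W, H = rng.rand(m, r), rng.rand(r, n)
    eps = 1e-12                                  # guards against division by zero
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)     # multiplicative step keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)     # and likewise W >= 0
    return W, H

rng = np.random.RandomState(3)
V = rng.rand(30, 4) @ rng.rand(4, 40)            # exactly rank-4 nonnegative matrix
W, H = nmf(V, r=4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(err)  # relative reconstruction error shrinks toward zero
```

Because the updates are multiplicative, nonnegativity of the initial factors is preserved at every iteration, which is what makes the parts-based interpretation of the basis vectors possible.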

    Keywords: Blind source separation, independent component analysis, nonnegative matrix factorization, speech recognition, online Bayesian learning.

    Chinese Abstract I
    Abstract III
    Acknowledgements VI
    TABLE OF CONTENTS VII
    LIST OF TABLES XI
    LIST OF FIGURES XII
    Chapter 1 INTRODUCTION 1
      1.1 Motivations 9
      1.2 Outline of This Dissertation 10
      1.3 Contributions of This Dissertation 11
    Chapter 2 BACKGROUND SURVEY 15
      2.1 FastICA Algorithm 15
        2.1.1 Sparse Coding 16
      2.2 Divergence Measures for ICA 17
      2.3 Nonstationary Source Separation 19
        2.3.1 Separation Based on Nonstationary Source Model 19
        2.3.2 Separation Based on Temporal Structure 20
      2.4 Nonnegative Matrix Factorization and Quasi-Entropy 21
        2.4.1 Sparseness Measure 21
        2.4.2 Independence Versus Uniformity 23
      2.5 Nonnegative Matrix Factorization for Blind Source Separation 24
        2.5.1 NMF with Sparseness Constraint 24
        2.5.2 Nonnegative Matrix Partial Co-factorization 24
        2.5.3 Group-Based NMF 25
    Chapter 3 INDEPENDENT COMPONENT ANALYSIS FOR NOISY SPEECH RECOGNITION 27
      3.1 PCA Versus ICA 27
      3.2 Adaptation in Independent Voice Space 28
      3.3 Experiments 29
        3.3.1 Experimental Setup 29
        3.3.2 Evaluation for Information Redundancy Reduction 31
        3.3.3 Evaluation for Noisy Speech Recognition 32
    Chapter 4 CONVEX DIVERGENCE INDEPENDENT COMPONENT ANALYSIS 34
      4.1 New Divergence Measure 34
        4.1.1 Evaluation of Divergence Measures 37
      4.2 Learning Algorithms for ICA Procedure 39
      4.3 Convex Divergence ICA Procedure 41
      4.4 Experiments 44
        4.4.1 Sensitivity of Divergence Measures to Demixing Matrix 44
        4.4.2 Evaluation of Convergence Speed of the ICA Procedure 47
        4.4.3 Evaluation of Separation of Speech and Music Signals 50
    Chapter 5 BAYESIAN LEARNING FOR NONSTATIONARY SOURCE SEPARATION 54
      5.1 Online Bayesian Learning 55
        5.1.1 Nonstationary Bayesian ICA 57
        5.1.2 Online Gaussian Process ICA 59
      5.2 Variational Bayesian Inference 62
        5.2.1 Model Inference for NB-ICA 62
        5.2.2 Model Inference for OGP-ICA 65
      5.3 Experiments 67
        5.3.1 Experimental Setup 67
        5.3.2 Effects of the ARD Parameter and Mixing Matrix 70
        5.3.3 Evaluation of Signal-to-Interference Ratios 71
        5.3.4 Evaluation of Signal Predictability 73
    Chapter 6 NMF-ICA ALGORITHM 75
      6.1 Uniformity as a Measure of Independence 75
      6.2 Nonparametric Quantization 76
      6.3 Evaluation of Uniformity Using NMF 77
      6.4 Learning Algorithm 78
      6.5 Experiments 79
    Chapter 7 BAYESIAN NONNEGATIVE MATRIX FACTORIZATION WITH GROUP SPARSITY 82
      7.1 Model Construction 82
      7.2 Bayesian Learning and Sparse Prior 83
      7.3 Model Inference 84
      7.4 Experiments 86
    Chapter 8 CONCLUSIONS AND FUTURE WORKS 89
    APPENDIX 4-A 93
    APPENDIX 4-B 95
    APPENDIX 4-C 96
    APPENDIX 5-A 97
    Bibliography 99
    List of Publications 107


    Full-text availability: on campus, open access from 2022-01-01; off campus, not publicly available.
    The electronic thesis has not yet been authorized for public release; for the print copy, please consult the library catalog.