| Field | Value |
|---|---|
| Graduate Student | 凃瀞珽 Tu, Ching-Ting |
| Thesis Title | Direct Combined Model for Facial Feature Point Detection, Face Sketch Synthesis, and Occluded Face Recovery |
| Advisor | 連震杰 Lien, Jenn-Jier James |
| Degree | Doctor (Ph.D.) |
| Department | Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication | 2010 |
| Graduation Academic Year | 98 (2009–2010) |
| Language | English |
| Pages | 96 |
| Keywords | Active Appearance Model, Active Shape Model, Direct Combined Model, Eigenspace, Facial Feature Point Detection, Face Sketch Synthesis, Occluded Face Recovery |
In this thesis, we present a new learning framework, the direct combined model (DCM), for measuring the relationship between two related multidimensional variables. DCM finds a single combined feature space that optimizes the correlation between the two variables while, at the same time, extracting the significant features of each individual variable. When only one variable is observed (known) and the other is hidden (unknown), a feature transformation, called the DCM transformation, is computed with the aid of the combined model. The DCM transformation estimates the hidden variable from the observed one, directly taking their significant correlations into account. The effectiveness of the DCM approach is demonstrated in three applications: facial feature point detection, face sketch synthesis, and occluded face recovery.
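The combined-eigenspace idea and the estimation of a hidden variable from an observed one can be sketched roughly as follows. This is a minimal illustration, not the thesis implementation: the variable names, dimensions, and synthetic paired data (shapes `S`, textures `T` sharing latent factors) are all assumptions made for the example.

```python
import numpy as np

# Synthetic paired training data: shapes S (n x ds) and textures T (n x dt)
# generated from shared latent factors, so the two variables are correlated.
rng = np.random.default_rng(0)
n, ds, dt = 200, 10, 50
latent = rng.normal(size=(n, 4))
S = latent @ rng.normal(size=(4, ds))
T = latent @ rng.normal(size=(4, dt))

# Build one combined eigenspace over the concatenated variables.
Z = np.hstack([S, T])
mu = Z.mean(axis=0)
U, sv, Vt = np.linalg.svd(Z - mu, full_matrices=False)
k = 4                              # retained significant components
P = Vt[:k]                         # combined eigenvectors, k x (ds + dt)
Ps, Pt = P[:, :ds], P[:, ds:]      # shape / texture sub-blocks

# Transformation step: given an observed texture only, solve for the
# combined-space coefficients through the texture sub-block, then
# reconstruct the hidden shape through the shape sub-block.
t_new = T[0]
c, *_ = np.linalg.lstsq(Pt.T, t_new - mu[ds:], rcond=None)
s_est = mu[:ds] + c @ Ps           # estimated shape for sample 0
```

Because the toy data lies exactly in a low-dimensional subspace, the recovered shape matches the true one; on real images the retained components would capture only the dominant shape–texture correlations.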
In the facial feature point detection framework, DCM efficiently models the correlation between the facial feature points (i.e., the facial shape) and the input texture information (i.e., the facial image) in a single combined eigenspace, which preserves the significant components of their dependency. A DCM estimator is then developed to estimate the facial feature points from the input image, with the learned correlation explicitly taken into account. Previous proposals for reconstructing facial shapes rely heavily on the quality of the texture reconstruction results, which are highly sensitive to occlusion and lighting effects in the input image. Our experiments show that the DCM approach accurately reconstructs the facial shape without the need to restore the texture information lost as a result of unfavorable occlusion or lighting conditions.
Automatically synthesizing the facial sketch of a facial image is highly challenging because facial images typically exhibit a wide range of poses, expressions, and scales, and have differing degrees of illumination and/or occlusion. When facial sketches are synthesized in the unique sketching style of a particular artist, the problem becomes even more complex. The automatic facial sketch synthesis framework based on the DCM algorithm has three major advantages. First, the DCM approach takes both the local details of each facial feature and the global geometric structure of the face into account; thus, the synthesized sketches more accurately mimic the caricatures drawn by the artist. Second, although the training database contains only full-frontal facial images with neutral expressions, sketches with a wide variety of facial poses, gaze directions, and facial expressions can be successfully synthesized. Third, previous synthesis proposals rely heavily on the quality of the texture reconstruction results, which are highly sensitive to occlusion and lighting effects in the input image. The DCM approach produces lifelike synthesized facial sketches without the need to restore the texture information lost as a result of such unfavorable conditions.
With regard to the third application, the occluded face recovery framework, we present a DCM-based particle filter solution for recovering and aligning an occluded facial image without the aid of manual face alignment. We first derive a Bayesian framework that unifies the recovery stage with face alignment, where the complex distribution of the recovery objective function is represented by a particle set. In this work, a particle is an aligned facial image. Its occluded facial regions are first detected and then recovered by two basic DCM modules, namely a shape recovery module and a texture recovery module. Each module models the occluded and non-occluded regions of the facial image in a DCM, which preserves the correlations between the geometry of the facial features and the pixel gray values, respectively, in the two regions. As a result, when shape or texture information is available only for the non-occluded region of the facial image, the optimal shape and texture of the occluded region can be reconstructed via Bayesian inference. To enhance the quality of the reconstructed results, the shape reconstruction module suppresses the effects of biased noise, making it robust to facial feature point labeling errors. Furthermore, the texture reconstruction module recovers the texture of the occluded facial image by synthesizing the global texture image and the local detailed texture image. Our extensive experimental results demonstrate that the recovered images are quantitatively close to the actual images, without manual involvement.
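The particle-set representation of the recovery objective can be sketched as a generic sequential importance resampling loop. This is a toy stand-in, not the thesis system: here each particle is reduced to a 2-D alignment offset, and `recovery_likelihood` is an assumed Gaussian surrogate for the score that would, in the real framework, measure how well a particle's alignment explains the non-occluded pixels.

```python
import numpy as np

rng = np.random.default_rng(1)
true_offset = np.array([3.0, -2.0])      # unknown alignment (tx, ty)

def recovery_likelihood(p):
    # Toy surrogate for the recovery objective function:
    # higher when the particle's alignment is closer to the truth.
    return np.exp(-0.5 * np.sum((p - true_offset) ** 2))

# Initial particle set spread over the alignment search space.
particles = rng.uniform(-10, 10, size=(500, 2))

for _ in range(20):
    # Weight each particle by the recovery objective, then normalize.
    w = np.array([recovery_likelihood(p) for p in particles])
    w /= w.sum()
    # Resample proportionally to weight, then diffuse to keep diversity.
    idx = rng.choice(len(particles), size=len(particles), p=w)
    particles = particles[idx] + rng.normal(scale=0.3, size=particles.shape)

estimate = particles.mean(axis=0)        # posterior-mean alignment
```

In the actual framework each particle carries a full aligned facial image whose occluded regions are recovered by the shape and texture DCM modules before weighting; the loop structure above is the only part this sketch shares with it.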
On-campus access: available to the public from 2020-12-31.