
Graduate Student: Lai, Tzung-Heng (賴宗亨)
Title: Incremental Perspective Motion Model for Rigid Head Motion and Non-Rigid Facial Expression Separation
Advisor: Lien, Jenn-Jier James (連震杰)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2008
Graduation Academic Year: 96 (2007–2008)
Language: English
Pages: 40
Chinese Keywords: incremental perspective model, head rotation, facial expression
English Keywords: local linear regression, region combination, separating rigid and non-rigid motion, incremental perspective motion model, multi-resolution approach
    Given an input image sequence that moves from a frontal view with a neutral expression to a side view with an expression, this study separates rigid head motion from non-rigid facial expression, and removes the rigid head motion, by dividing each image into three sub-regions and applying an incremental perspective motion model to each. After the separation, the results of the three sub-regions must be merged. The three sub-regions share two overlap regions; for each overlap region, every pixel is first initialized by interpolation and then updated with the better candidate value found in the contributing sub-regions, which overcomes the boundary problem when merging the three sub-regions. The separation result contains unknown regions, because the original side-view image does not include that facial information, so a virtual expression image is synthesized by local linear regression. Finally, the local linear regression result is warped onto the separation result with the incremental perspective motion model; using this warped image, the unknown regions of the separation result are replaced and merged with the separated regions, reconstructing a final image free of the unknown-region problem.

    Given an input image sequence that moves from a frontal view with a neutral expression to a side view with an expression, our system separates rigid head motion from non-rigid facial expression and removes the head motion using an incremental perspective motion model applied to three sub-regions. After separating the rigid head motion from the non-rigid facial expression, the system must combine the warped results of the three sub-regions. The three sub-regions share two overlap regions; for each overlap region, the system first interpolates every pixel's gray value and then updates each pixel by choosing the better of the two candidate pixels from the contributing sub-regions, which overcomes the edge problem of sub-region combination. The separation result exhibits a missing-region problem, because the side-view expression image does not contain some of the facial information, so the system synthesizes a virtual expression image by local linear regression. Finally, the local linear regression result is warped onto the separation result by the incremental perspective transformation; with this warped image, the missing region of the separation result is replaced and merged with the separated regions to reconstruct the final result without the missing-region problem.
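The incremental perspective motion model is, at its core, an eight-parameter perspective (projective) warp estimated coarse-to-fine. Below is a minimal sketch of applying such a warp to image coordinates, assuming a standard parameterisation that reduces to the identity at p = 0; the thesis's exact form is not given in this record.

```python
import numpy as np

def perspective_warp(p, x, y):
    """Map image coordinates (x, y) through an 8-parameter perspective
    motion model; p = (p0, ..., p7) and p = 0 is the identity warp.
    The parameterisation is an assumption, not the thesis's exact form."""
    denom = p[6] * x + p[7] * y + 1.0
    xw = ((1.0 + p[0]) * x + p[1] * y + p[2]) / denom
    yw = (p[3] * x + (1.0 + p[4]) * y + p[5]) / denom
    return xw, yw
```

In the incremental, multi-resolution setting, a small parameter update is solved at each pyramid level and composed with the current warp, so large head rotations are absorbed gradually rather than estimated in one step.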
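The overlap-combination step can be illustrated with a small sketch. The abstract says each overlap pixel is initialised by interpolation and then updated with the better candidate from the two sub-regions; since the selection criterion is not stated, this sketch uses a stand-in rule, choosing the candidate closer to a locally smoothed version of the interpolated image.

```python
import numpy as np

def box3(img):
    """3x3 box filter with edge padding (simple local smoothing)."""
    p = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def combine_overlap(a, b):
    """Blend two warped sub-regions over their overlap region.
    1. Initialise each pixel by interpolation (here: the average).
    2. Update each pixel with the better candidate pixel; 'better' is a
       stand-in criterion (closer to the smoothed initial estimate)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    init = (a + b) / 2.0          # interpolation step
    ref = box3(init)              # local reference for candidate selection
    return np.where(np.abs(a - ref) <= np.abs(b - ref), a, b)
```

With `a = np.ones((3, 3))` and `b` identical except for an outlier at the centre, the combined overlap keeps the consistent value, suppressing the outlier that would otherwise produce a visible seam at the sub-region boundary.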
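Local linear regression for virtual-expression synthesis can be sketched as an ordinary least-squares map from source patch vectors to target patch vectors. This is a standard LLR formulation and an assumption about the thesis's setup; the exact patch partitioning, features, and any regularisation are not given in this record.

```python
import numpy as np

def fit_llr(X_src, X_dst):
    """Fit a linear map with bias from source patch vectors to target
    patch vectors by least squares: X_dst ~= W @ [X_src; 1].
    Columns of X_src / X_dst are training samples."""
    n = X_src.shape[1]
    A = np.vstack([X_src, np.ones((1, n))])   # append bias row
    W, *_ = np.linalg.lstsq(A.T, X_dst.T, rcond=None)
    return W.T

def synthesize(W, x):
    """Predict a virtual-expression patch vector for one source patch x."""
    return W @ np.append(x, 1.0)
```

The final reconstruction step can then fill the missing region with the warped synthesis result, e.g. `np.where(missing_mask, warped_llr, separated_result)`, before merging the replaced region with the separated regions.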

    Chapter 1. Introduction ............................................. 1
    Chapter 2. System Description ....................................... 5
    Chapter 3. Incremental Perspective Motion Model for Rigid Head Motion and Non-Rigid Facial Expression Separation ... 9
        3.1. Incremental Perspective Motion Model ....................... 9
        3.2. Rigid Head Motion and Non-Rigid Facial Expression Separation ... 13
        3.3. Sub-Region Combination ..................................... 17
    Chapter 4. Facial Image Synthesis by Local Linear Regression ........ 20
        4.1. Local Linear Regression Model .............................. 20
        4.2. Synthesizing Virtual Expression Image ...................... 22
    Chapter 5. Recovering Missing Region of Reconstructed Frontal View Image ... 27
    Chapter 6. Experimental Results ..................................... 29
        6.1. The Performance of Separating Non-Rigid Facial Expression from Rigid Head Motion ... 29
        6.2. The Local Linear Regression Results and System Results ..... 33
    Chapter 7. Conclusions .............................................. 37
    References .......................................................... 39


    Full-text availability: on campus, immediately public; off campus, public from 2008-08-27.