
Author: Wang, Te-Hsun (王德勳)
Title: Rigid and Non-Rigid Motion Separation for Facial Expression Analysis
Advisor: Lien, Jenn-Jier (連震杰)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2004
Academic Year of Graduation: 92 (ROC calendar, i.e. 2003-2004)
Language: Chinese
Number of Pages: 33
Keywords: Rigid Motion, Non-Rigid Motion
Hits: 108; Downloads: 2
Abstract: This thesis proposes a new method for analyzing facial expressions under large head rotations. In facial expression analysis, the non-rigid motion of the facial expression must first be separated from the rigid motion of the head. To overcome the shortcomings of the commonly used affine model and the 8-parameter perspective projection model, such as out-of-plane motion and the depth variation of the face (most pronounced at the nose tip and eye corners), the proposed method uses a 3D head model to assist the estimation. Optical flow is used to track selected feature points, which fall into two groups: points affected only by rigid motion (head rotation) and points used to estimate expression changes. The tracking results are mapped onto the 3D head model to estimate the head rotation; the 3D head model is then used to synthesize the expression as seen from the frontal view, thereby completing the separation of rigid and non-rigid motion.
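The pipeline in the abstract, estimate the rigid head rotation from rotation-only feature points and then undo it to read the expression off the frontal view, can be sketched numerically. This is not the thesis's implementation: the landmark coordinates and displacements below are invented, and a Kabsch least-squares rotation fit (via SVD) stands in for the thesis's 3D-model-based pose estimation.

```python
import numpy as np

def kabsch(P, Q):
    """Least-squares rotation R (3x3) such that R @ p_i ~= q_i for row sets P, Q."""
    H = P.T @ Q                                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

def rot_y(a):  # head yaw
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_x(a):  # head pitch
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

# Hypothetical, centered 3D head-model landmarks (coordinates invented for the demo).
# Rigid points move only with the head; mouth corners also move with expressions.
rigid_model = np.array([[-3.0,  1.0, 0.5],    # left eye corner
                        [ 3.0,  1.0, 0.5],    # right eye corner
                        [ 0.0,  0.0, 2.0],    # nose tip
                        [ 0.0, -1.0, 1.0]])   # nose base
mouth_model = np.array([[-1.5, -2.5, 0.3],
                        [ 1.5, -2.5, 0.3]])   # mouth corners

# Ground truth: a rigid head rotation plus a non-rigid "smile" displacement.
R_true = rot_y(np.deg2rad(25)) @ rot_x(np.deg2rad(10))
smile  = np.array([[-0.6, 0.2, 0.0],
                   [ 0.6, 0.2, 0.0]])

observed_rigid = rigid_model @ R_true.T             # rigid points: rotation only
observed_mouth = (mouth_model + smile) @ R_true.T   # mouth: rotation + expression

# 1) Estimate the rigid head rotation from the rigid-only points.
R_est = kabsch(rigid_model, observed_rigid)

# 2) Undo the rotation (map back to the frontal view) and read off the
#    non-rigid expression motion as the residual against the neutral model.
frontal_mouth = observed_mouth @ R_est              # inverse rotation for row vectors
non_rigid = frontal_mouth - mouth_model             # recovers the smile displacement

print(np.round(non_rigid, 3))
```

In the actual system the 3D coordinates of the tracked points would come from mapping the optical-flow tracks onto the 3D head model, as the abstract describes; here they are given directly to keep the sketch self-contained.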


Table of Contents:
Chapter 1  Introduction
  1.1  Motivation and Objectives
  1.2  Related Work
Chapter 2  Image Registration
  2.1  Optical Flow
  2.2  Image Pyramid
Chapter 3  The Relationship Between 3D and 2D Coordinates
Chapter 4  System Description
  4.1  Initialization
  4.2  Rigid Motion Estimation for Each Incoming Image
Chapter 5  Experimental Results
  5.1  Pose Estimation Results
  5.2  Comparison of the Proposed Method, the Affine Model, and the Perspective Projection Model for Facial Expression Estimation
Chapter 6  Conclusions and Future Work
References

    [1] M. Black and Y. Yacoob, “Recognizing Facial Expressions in Image
    Sequences Using Local Parameterized Models of Image Motion,” Int.
    Journal of Computer Vision, 25(1), pp. 23-48, 1997.

    [2] F. Dornaika, and J. Ahlberg, “Efficient Active Model for Real-Time
    Head and Facial Feature Tracking,” IEEE International Workshop on
    Analysis and Modeling of Faces and Gestures, pp. 173-180, 17 Oct. 2003.

    [3] I. Essa, T. Darrell, and A. Pentland, “Tracking Facial Motion,” IEEE
    Workshop on Non-rigid and Articulated Motion, pp. 36-42, 1994.

    [4] I. Essa, “Analysis, Interpretation and Synthesis of Facial
    Expressions,” MIT Media Lab. Technical Report #303, 1995.

    [5] B. Guenter, C. Grimm, D. Wood, H. Malvar, and F. Pighin, “Making
    Faces,” SIGGRAPH, pp. 55-66, 1998.

    [6] T. Horprasert, Y. Yacoob and L.S. Davis, “Computing 3-D Head
    Orientation from a Monocular Image Sequence,” IEEE FG, pp. 242-247,
    1996.

    [7] J.J. Lien, T. Kanade, J.F. Cohn, and C.C. Li, “Subtly Different Facial
    Expression Recognition and Expression Intensity Estimation,” IEEE CVPR,
    pp. 853-859, 1998.

[8] J.J. Lien, T. Kanade, J.F. Cohn, and C.C. Li, “Detection, Tracking and
Classification of Action Units in Facial Expression,” Robotics and
Autonomous Systems, Vol. 31, pp. 131-146, 2000.

    [9] Z. Liu, Z. Zhang, C. Jacobs, and M. Cohen, “Rapid Modeling of
    Animated Faces from Video,” Microsoft Research, Technical Report,
    MSR-TR-2000-11, 2000.

[10] B. Lucas and T. Kanade, “An Iterative Image Registration Technique
with an Application to Stereo Vision,” International Joint Conference on
Artificial Intelligence, pp. 674-679, 1981.

[11] F.I. Parke and K. Waters, “Computer Facial Animation,”
A K Peters, Ltd., pp. 229-239, 249-250, 1996.

    [12] M. Rosenblum, Y. Yacoob, and L.S. Davis, “Human Emotion
    Recognition from Motion using a Radial Basis Function Network
    Architecture,” U. of Maryland, CS-TR-3304, 1994.

[13] A. Schodl, A. Haro, and I. Essa, “Head Tracking Using a
Textured Polygonal Model,” Proceedings of the Workshop on Perceptual User
Interfaces, Nov. 1998. Also available as Georgia Tech GVU Center Tech
Report GIT-GVU-TR-98-24.

[14] M.S. Su, C.Y. Chen, and K.Y. Cheng, “The Reconstruction of 3D Head
Model From Two Orthogonal-View 2D Face Images,” National Computer
Symposium 2001, Taiwan, pp. D320-D329, 2001.

    [15] D. Terzopoulos and K. Waters, “Analysis and Synthesis of Facial
    Image Sequences Using Physical and Anatomical Models,” IEEE PAMI, Vol.
    15, No. 6, pp. 569-579, 1993.

[16] Y. Tian, T. Kanade, and J.F. Cohn, “Recognizing Action Units for
Facial Expression Analysis,” IEEE PAMI, Vol. 23, No. 2, pp. 97-115, 2001.

[17] C.S. Wiles, A. Maki, and N. Matsuda, “Hyperpatches for 3D Model
Acquisition and Tracking,” IEEE PAMI, Vol. 23, No. 12, pp. 1391-1403,
Dec. 2001.

    [18] Y. Yacoob and L.S. Davis, “Recognizing Human Facial Expression,”
    U. of Maryland, CS-TR-3265, May 1994.

Full text released to the public: 2004-07-07