
Graduate student: Lin, Ting-Wei (林庭緯)
Thesis title: Research and Implementation for Multi-people Frown and Smile Detection System (多人皺眉笑臉辨識系統之研究與實現)
Advisor: Wang, Jhing-Fa (王駿發)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of publication: 2015
Graduation academic year: 103 (ROC calendar)
Language: English
Pages: 52
Chinese keywords (translated): smile detection, frown detection, facial expression recognition, facial expression unit, Sobel filter, mouth corner feature extraction
English keywords: Smile face detection, Frown detection, Facial Expression Unit, Facial Expression Recognition, Sobel Filter, Mouth Corner Feature
Views: 75; Downloads: 0
  • Orange Technology aims to turn people who do not feel happy into people who do. This thesis treats frowning as a negative expression and smiling as a positive one, and combines the observation of both to meet the observational goals of Orange Technology. A new unit of expression recognition, the Facial Expression Unit (FEU), is proposed; it modularizes the expression recognition system. Facial expression vectors partition the expression space more finely, which helps machines in Human-Machine Interfaces (HMI) better understand human expressions. The system extracts frown features with a Sobel filter and defines a frown using two thresholds. For smile recognition, a new feature extraction algorithm, the Mouth Corner Feature (MCF), is used. Because the system is modular, the results of the two recognizers are fully independent. Smile recognition accuracy exceeds 80% in real-time tests and 88% on the database. Frown recognition accuracy likewise exceeds 80% in real-time tests and 85% on the database. Modularization further reduces the time cost of re-training. The experimental results show that the proposed method effectively detects and recognizes smiles and frowns.

    Orange Technology aims to provide technological support that makes people happier. To assess a person's state of happiness, detecting both negative and positive expressions plays a crucial role and makes such a system more reliable. To this end, this work develops an expression recognition system that treats a frown as a negative expression and a smile as a positive one. The Facial Expression Unit (FEU) is proposed as the basic unit of the facial-expression space, and the recognition system is modularized around FEUs. FEUs can represent the facial-expression space with finer classifications, and this information helps a machine interpret human expressions in a Human-Machine Interface (HMI). The proposed Mouth Corner Feature (MCF) captures the smile, while a Sobel filter is adopted for frown detection. Because the system is modular, the two detectors' results can be evaluated independently: for smile detection, accuracy exceeds 80% in real-time tests and reaches 87% in off-line (database) tests; for frown detection, 80% accuracy is achieved in real-time tests and 85% in off-line tests. The modular design also reduces the time cost when re-training is required. Overall, the proposed system is effective at detecting both smiles and frowns.
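The frown branch described in the abstract (Sobel filtering of the brow region followed by a two-threshold decision) can be sketched roughly as below. The 3×3 kernels are the standard Sobel operator, but the threshold values, the |Gx|+|Gy| magnitude approximation, and the edge-pixel-ratio decision rule are illustrative assumptions, not the thesis's actual parameters.

```python
# Sketch of Sobel-based frown feature extraction with a two-threshold rule.
# Images are plain lists of lists of grayscale values (0-255).

SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]  # horizontal gradient kernel
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]  # vertical gradient kernel

def sobel_magnitude(img):
    """Approximate gradient magnitude |Gx| + |Gy| at each interior pixel."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

def is_frown(brow_region, edge_thresh=100, ratio_thresh=0.3):
    """Two-threshold rule (illustrative): a pixel counts as an edge when its
    Sobel magnitude exceeds edge_thresh; the region counts as a frown when
    the fraction of edge pixels exceeds ratio_thresh."""
    mag = sobel_magnitude(brow_region)
    h, w = len(mag), len(mag[0])
    interior = (h - 2) * (w - 2)
    edges = sum(1 for y in range(1, h - 1) for x in range(1, w - 1)
                if mag[y][x] > edge_thresh)
    return interior > 0 and edges / interior > ratio_thresh
```

On a flat brow patch the edge ratio is zero, while a strongly wrinkled (high-gradient) patch trips both thresholds; in the actual system the brow region would first be localized via the face detection and skin-color preprocessing listed in Chapter 3.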

    Chinese Abstract Ⅰ
    Abstract Ⅱ
    Acknowledgements Ⅲ
    Contents Ⅳ
    List of Tables Ⅵ
    List of Figures Ⅶ
    Chapter 1 Introduction 1
    1.1 Background 1
    1.2 Motivation 2
    1.3 Research objective 3
    1.4 Organization 3
    Chapter 2 Survey and Discussion for Facial Expression Recognition 4
    2.1 History of Facial Expression Recognition 4
    2.1.1 Trend of Face Recognition 5
    2.1.2 Geometric-Based Approaches 5
    2.1.3 Appearance-Based Approaches 7
    2.1.4 Hybrid-Based Approaches 9
    2.2 Facial Expression Units (FEUs) 10
    2.2.1 Facial Action Coding System 10
    2.2.2 Definition of Facial Expression Units (FEUs) 11
    2.2.3 Comparison of FEUs and traditional FER 13
    Chapter 3 Multi-people Frown and Smile Recognition System 16
    3.1 System Overview 16
    3.2 Single-face FEU detection and Multi-face FEU detection 17
    3.2.1 Single-face FEU detection 17
    3.2.2 Multi-face FEU detection 17
    3.3 System Preprocessing 18
    3.3.1 Face detection algorithm 18
    3.3.2 Skin color detection 20
    3.3.3 Connected-component labeling 21
    3.4 Frown detection algorithm 22
    3.4.1 Sobel filter 22
    3.4.2 Grayscale analysis 24
    3.5 Smile detection algorithm 25
    Chapter 4 Experimental Results 27
    4.1.1 Smile database 27
    4.1.2 Multi-people database 28
    4.2 Mouth and brow detection 29
    4.2.1 Face detection 29
    4.2.2 Skin color detection 30
    4.2.3 Mouth and eye detection 30
    4.3 Feature Extraction 31
    4.3.1 Frown extraction by Sobel filter 31
    4.3.2 Frown extraction by grayscale analysis 33
    4.3.3 MCF extraction 35
    4.4 Result of Multi-people Frown and Smile Recognition 37
    4.5 Results and Comparison 42
    Chapter 5 Conclusions and Future Works 44
    5.1 Conclusions 44
    5.2 Future Works 44
    References 45
    Appendix 1 48
    Appendix 2 51


    Full-text availability: on campus, open from 2020-08-31; off campus, not available.
    The electronic thesis has not been authorized for public release; for the print copy, consult the library catalog.