
Author: Yang, Peng-Ying (楊鵬穎)
Title: Development of Image Emotion Identification Technology (影像情感辨識技術研發)
Advisor: Chen, Yuh-Ming (陳裕民)
Degree: Master
Department: Institute of Manufacturing Information and Systems, College of Electrical Engineering and Computer Science
Year of Publication: 2014
Academic Year: 102 (2013-2014)
Language: Chinese
Pages: 43
Keywords: image emotion, image realization, facial expression, color image scale, color psychology, implicit semantic
Views: 162; Downloads: 6
    Images are widely used to record the things that make up everyday life, including events, places, objects, and people, because they are convenient to use and preserve rich implicit information such as in-the-moment emotions and memories. Everyday photographs therefore carry the emotions their creators wished to express, and can even resonate with viewers. Most research, however, focuses on recognizing explicit elements, such as scene recognition and object annotation, and neglects the recognition of implicit image emotion. Emotion is the most central and important meaning in an image, yet it has received little research attention.
    This study proposes a method for identifying image emotion that uses color and human emotion as features, since these two features most directly influence a viewer's emotional response. Color feature extraction is grounded in color psychology: each color family evokes different feelings and, in turn, a variety of emotions. The human-emotion feature is chosen based on how people look at images: research shows (Vonikakis & Winkler, 2012) that when a person appears in an image, viewers immediately focus their attention on the person and the face, largely ignoring the image's other features. Human factors (faces, expressions, poses, activities, and so on) are therefore important features of an image. The proposed method comprises scene emotion classification using color features, facial expression recognition for the human-emotion feature, and a final integration step that combines the two features for image emotion identification. The method classifies images by emotion at a finer granularity than prior work, broadens the types of images it can be applied to, and converts abstract image emotion into more concrete adjectives, which facilitates the development of related applications.
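    The color-based scene feature described above can be sketched as a palette-assignment histogram. This is a minimal illustration, not the thesis's implementation: the six-entry RGB palette, the Euclidean distance metric, and the toy image are simplified stand-ins for the 130 basic colors of the Color Image Scale and its HSV-based processing.

```python
import numpy as np

# Hypothetical palette: a small stand-in for the 130 basic colors of the
# Color Image Scale (RGB triples); the real palette comes from Kobayashi.
PALETTE = np.array([
    [255, 0, 0],      # red
    [0, 128, 0],      # green
    [0, 0, 255],      # blue
    [255, 255, 0],    # yellow
    [255, 255, 255],  # white
    [0, 0, 0],        # black
], dtype=float)

def color_histogram(image):
    """Assign each pixel to its nearest palette color and return the
    normalized frequency of each palette entry as the scene feature."""
    pixels = image.reshape(-1, 3).astype(float)
    # Euclidean distance from every pixel to every palette color
    dists = np.linalg.norm(pixels[:, None, :] - PALETTE[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    hist = np.bincount(nearest, minlength=len(PALETTE)).astype(float)
    return hist / hist.sum()

# Toy image: one near-red column and one near-blue column
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[:, 0] = [250, 5, 5]
img[:, 1] = [5, 5, 250]
feature = color_histogram(img)
```

    In the actual method, the resulting color distribution would then be mapped through the Color Image Scale's three-color combinations to semantic concepts such as "modern" or "natural"; that lookup table is omitted here.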

    This thesis proposes a method for image emotion identification that uses color and human facial expression as features, since these features directly influence observers. The color feature is based on the Color Image Scale proposed by Kobayashi, a system that combines 130 basic colors into three-color combinations, each mapped to a high-level semantic concept such as "modern", "classic", or "natural". These concepts lie in a two-dimensional space and are grouped by perceived similarity. The facial expression feature is based on Ekman's research, which proposed six basic emotions: happiness, sadness, fear, anger, surprise, and disgust. Because the six basic emotions are cross-cultural, this work does not consider the influence of the race of the people appearing in an image. However, even though the basic emotions fit a general model, individuals still differ in how they express emotion; this thesis therefore uses geometric features, which generalize better than appearance features, to classify facial expressions. After the color and facial expression features are extracted, a Self-Organizing Map (SOM) is used to classify image emotion from the two features. The intended applications are image realization and information extraction.
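    The final classification step can be illustrated with a minimal Self-Organizing Map. This is a sketch under illustrative assumptions: the grid size, learning-rate schedule, and the toy five-dimensional features standing in for the combined color and expression vectors are not the thesis's configuration.

```python
import numpy as np

def train_som(data, grid=(4, 4), epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Train a minimal SOM and return the (rows, cols, dim) weight grid."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    # Grid coordinates, used by the Gaussian neighborhood function
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(epochs):
        frac = t / epochs
        lr = lr0 * (1 - frac)              # learning rate decays linearly
        sigma = sigma0 * (1 - frac) + 1e-3  # neighborhood radius shrinks
        x = data[rng.integers(len(data))]   # random training sample
        # Best-matching unit (BMU): node whose weights are closest to x
        d = np.linalg.norm(weights - x, axis=2)
        bmu = np.unravel_index(d.argmin(), d.shape)
        # Pull the BMU and its grid neighbors toward the sample
        g = np.exp(-np.sum((coords - np.array(bmu)) ** 2, axis=2) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

def bmu_of(weights, x):
    """Map a feature vector to its best-matching grid node."""
    d = np.linalg.norm(weights - x, axis=2)
    return np.unravel_index(d.argmin(), d.shape)

# Toy features: two clusters standing in for images whose combined
# color + expression vectors differ (the dimensions are illustrative).
rng = np.random.default_rng(1)
calm = rng.normal(0.1, 0.02, size=(30, 5))
lively = rng.normal(0.9, 0.02, size=(30, 5))
weights = train_som(np.vstack([calm, lively]))
```

    After training, images whose feature vectors land on nearby grid nodes share similar emotions, which is how the SOM's topology-preserving map supports emotion classification.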

    Table of Contents:
    Chapter 1  Introduction
      1.1 Research Background
      1.2 Research Motivation
      1.3 Research Objectives
      1.4 Problem Analysis
      1.5 Research Topics and Methods
      1.6 Research Procedure (Figure 1.1: research flowchart)
      1.7 Thesis Organization
    Chapter 2  Literature Review
      2.1 Human Emotion Recognition
      2.2 Color and Human Emotion (Figure 2.1: examples of the Color Image Scale (Kobayashi & Matsunaga, 1991))
      2.3 Support Vector Machine (Figure 2.2: SVM hyperplane (Y. Chen, Zhou, & Huang, 2001))
      2.4 Self-Organizing Map (Figure 2.3: SOM topological map)
      2.5 Related Studies
    Chapter 3  Method Design and Technology Development
      3.1 Image Emotion Identification Method Design (Figure 3.1: the image emotion identification method)
      3.2 Scene Emotion Classification (Figures 3.2-3.5: classification architecture, HSV color space, Color Image Scale, color/word/concept transformation)
      3.3 Human Emotion Recognition (Figures 3.6-3.8: recognition architecture, Betaface landmark results for single and multiple faces)
      3.4 Image Emotion Feature Integration and Image Emotion Classification
    Chapter 4  Experiments and Validation
      4.1 Human Emotion Recognition: Experiments and Validation (Figures 4.1-4.2: JAFFE facial expression database, web data set)
      4.2 Image Emotion Identification: Experiments and Validation (Figures 4.3-4.5: SOM neighbor weight distances for two feature sets, sample clustering)
    Chapter 5  Conclusions and Future Directions
      5.1 Conclusions
      5.2 Related Applications
      5.3 Future Research Directions
    References

    Acevedo-Rodríguez, J., Maldonado-Bascón, S., Lafuente-Arroyo, S., Siegmann, P., & López-Ferreras, F. (2009). Computational load reduction in decision functions using support vector machines. Signal Processing, 89(10), 2066-2071.
    Betaface. http://www.betaface.com/.
    Boll, S. (2007). Share It, reveal It, reuse It, and push multimedia into a new decade. IEEE MultiMedia, 14(4), 14-19.
    Chang, H.-T., Mastorakis, N., Mladenov, V., Bojkovic, Z., Simian, D., Kartalopoulos, S., . . . Narayanan, S. (2008). Automatic web image annotation for image retrieval systems. Paper presented at the WSEAS International Conference on Mathematics and Computers in Science and Engineering.
    Chen, X., Yuan, X., Yan, S., Tang, J., Rui, Y., & Chua, T.-S. (2011). Towards multi-semantic image annotation with graph regularized exclusive group lasso. Paper presented at the Proceedings of the 19th ACM international conference on Multimedia.
    Chen, Y., Zhou, X. S., & Huang, T. S. (2001). One-class SVM for learning in image retrieval. Paper presented at the 2001 International Conference on Image Processing.
    Cootes, T. F., Edwards, G. J., & Taylor, C. J. (2001). Active appearance models. IEEE Transactions on pattern analysis and machine intelligence, 23(6), 681-685.
    Dellagiacoma, M., Zontone, P., Boato, G., & Albertazzi, L. (2011). Emotion based classification of natural images. Paper presented at the Proceedings of the 2011 international workshop on DETecting and Exploiting Cultural diversiTy on the social web.
    Edwards, J., Jackson, H. J., & Pattison, P. E. (2002). Emotion recognition via facial expression and affective prosody in schizophrenia: a methodological review. Clinical psychology review, 22(6), 789-832.
    Ekman, P. (1971). Universals and cultural differences in facial expressions of emotion. Paper presented at the Nebraska symposium on motivation.
    Ekman, P., Rolls, E., Perrett, D., & Ellis, H. (1992). Facial expressions of emotion: An old controversy and new findings [and discussion]. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences, 335(1273), 63-69.
    Elliot, A. J., & Maier, M. A. (2014). Color psychology: Effects of perceiving color on psychological functioning in humans. Annual review of psychology, 65, 95-120.
    Fei-Fei, L., Iyer, A., Koch, C., & Perona, P. (2007). What do we perceive in a glance of a real-world scene? Journal of Vision, 7(1).
    Gong, Z., Liu, Q., & Zhang, J. (2006). Automatic image annotation by mining the web. In Data Warehousing and Knowledge Discovery (pp. 449-458). Springer.
    Hinde, R. A. (1972). Non-verbal communication: Cambridge University Press.
    Huang, C.-L., & Huang, Y.-M. (1997). Facial expression recognition using model-based feature extraction and action parameters classification. Journal of Visual Communication and Image Representation, 8(3), 278-290.
    Ishak, E. W., & Feiner, S. K. (2006). Content-aware scrolling. Paper presented at the Proceedings of the 19th annual ACM symposium on User interface software and technology.
    Juanjuan, Z., Huijun, L., Yue, L., & Junjie, C. (2012). A Kind of Fuzzy Decision Tree Based on the Image Emotion Classification. Paper presented at the 2012 International Conference on Computing, Measurement, Control and Sensor Network (CMCSN).
    Kobayashi, S. (1981). The aim and method of the color image scale. Color Research & Application, 6(2), 93-107.
    Kobayashi, S., & Matsunaga, L. (1991). Color Image Scale. Tokyo: Kodansha International.
    Kochetkov, A. (2013). Cloud-based biometric services: just a matter of time. Biometric Technology Today, 2013(5), 8-11.
    Kohonen, T. (1990). The self-organizing map. Proceedings of the IEEE, 78(9), 1464-1480.
    Kress, G., & van Leeuwen, T. (2006). The Semiotic Landscape. Images: A Reader, 119.
    Labrecque, L. I., & Milne, G. R. (2012). Exciting red and competent blue: the importance of color in marketing. Journal of the Academy of Marketing Science, 40(5), 711-727.
    Lajevardi, S. M., & Wu, H. R. (2012). Facial expression recognition in perceptual color space. IEEE Transactions on Image Processing, 21(8), 3721-3733.
    Liu, Y., Zhang, D., Lu, G., & Ma, W.-Y. (2007). A survey of content-based image retrieval with high-level semantics. Pattern Recognition, 40(1), 262-282.
    Liu, Z., & Yu, X. (2011). Research on linguistic computing model for image emotion semantic. Paper presented at the 2011 International Conference on Transportation, Mechanical, and Electrical Engineering (TMEE).
    Liu, Z., & Yu, X. (2012). Identification of Image Emotional Semantic Based on Feature Fusion. Paper presented at the 2012 International Conference on Computer Science & Service System (CSSS).
    Lucassen, M. P., Gevers, T., & Gijsenij, A. (2011). Texture affects color emotion. Color Research & Application, 36(6), 426-436.
    Machajdik, J., & Hanbury, A. (2010). Affective image classification using features inspired by psychology and art theory. Paper presented at the Proceedings of the international conference on Multimedia.
    Lyons, M. J., Kamachi, M., & Gyoba, J. (1997). The Japanese Female Facial Expression (JAFFE) database of digital images.
    Ratliff, M. S., & Patterson, E. (2008). Emotion recognition using facial expressions with active appearance models. Paper presented at the Proceedings of the Third IASTED International Conference on Human Computer Interaction,(Innsbruck, Austria).
    Shen, J., Sun, H., Mao, X., Guo, Y., & Jin, X. (2011). Color-Mood-Aware Clothing Re-texturing. Paper presented at the 2011 12th International Conference on Computer-Aided Design and Computer Graphics (CAD/Graphics).
    Shin, Y., & Kim, E. Y. (2010). Affective prediction in photographic images using probabilistic affective model. Paper presented at the Proceedings of the ACM International Conference on Image and Video Retrieval.
    Siriluck, W., Kamolphiwong, S., Kamolphiwong, T., & Sae-Whong, S. (2007). Blink and click. Paper presented at the Proceedings of the 1st international convention on Rehabilitation engineering & assistive technology: in conjunction with 1st Tan Tock Seng Hospital Neurorehabilitation Meeting.
    Solli, M., & Lenz, R. (2010). Color semantics for image indexing. Paper presented at the Conference on Colour in Graphics, Imaging, and Vision.
    Solli, M., & Lenz, R. (2011). Color emotions for multi‐colored images. Color Research & Application, 36(3), 210-221.
    Stonawski, J., & Zelinka, I. (2013). Recommending New Links in Social Networks Using Face Recognition. Paper presented at the 2013 8th International Workshop on Semantic and Social Media Adaptation and Personalization (SMAP).
    Tang, J., Hong, R., Yan, S., Chua, T.-S., Qi, G.-J., & Jain, R. (2011). Image annotation by kNN-sparse graph-based label propagation over noisily tagged web images. ACM Transactions on Intelligent Systems and Technology (TIST), 2(2), 14.
    Vapnik, V. (2000). The nature of statistical learning theory. Springer.
    Venkatraman, A. (2006). enVisage: Face Recognition in Videos. Imperial College, University of London.
    Vicente, M. A., Fernandez, C., & Coves, A. M. (2009). Supervised Face Recognition for Railway Stations Surveillance. Paper presented at the Advanced Concepts for Intelligent Vision Systems.
    Vonikakis, V., & Winkler, S. (2012). Emotion-based sequence of family photos. Paper presented at the Proceedings of the 20th ACM international conference on Multimedia.
    Wang, X., Jia, J., Liao, H., & Cai, L. (2012). Image colorization with an affective word. In Computational Visual Media (pp. 51-58). Springer.
    Xiao, R., Zhao, Q., Zhang, D., & Shi, P. (2011). Facial expression recognition on multiple manifolds. Pattern Recognition, 44(1), 107-116.
    Yang, C.-K., & Peng, L.-K. (2008). Automatic mood-transferring between color images. IEEE Computer Graphics and Applications, 28(2), 52-61.
    Zhang, D., Islam, M. M., & Lu, G. (2012). A review on automatic image annotation techniques. Pattern Recognition, 45(1), 346-362.
    Zhou, X., Shi, Y., Zhang, P., Nie, G., & Jiang, W. (2009). A new classification method for PCA-based face recognition. Paper presented at the 2009 International Conference on Business Intelligence and Financial Engineering (BIFE'09).

    Full-text availability: on campus 2019-09-03; off campus 2019-09-03.