| Graduate Student: | Yang, Sheng-Tang (楊勝棠) |
|---|---|
| Thesis Title: | An Application of Deep Learning for Hair Segmentation and Hair Styles Classification (深度學習在頭髮分割與髮型分類之應用) |
| Advisor: | Wang, Ming-Shi (王明習) |
| Degree: | Master |
| Department: | Department of Engineering Science, College of Engineering |
| Year of Publication: | 2019 |
| Academic Year of Graduation: | 107 |
| Language: | Chinese |
| Number of Pages: | 48 |
| Keywords: | hair segmentation, hair detection, deep learning, convolutional neural networks, texture analysis |
In this study, deep learning techniques are applied to hairstyle classification. The proposed system consists of two phases. The first phase is hair segmentation, whose purpose is to extract the hair region of the input image. An AlexNet convolutional neural network performs texture analysis on the input image, and its output is fed to a random forest classifier that partitions the image into three regions: hair, non-hair, and uncertain. For the uncertain regions, local ternary patterns (LTP) and a support vector machine (SVM) are applied to separate them into hair and non-hair regions, and all hair regions are then combined to complete the hair segmentation of the input image. The second phase is hairstyle classification, which assigns the segmented hair image to one of six hairstyles: curly, straight, braid, ponytail, bun, or afro. The training set for the segmentation phase is the Patch-F1k database, which consists of 1,050 pure-hair and 1,050 non-hair texture images. The training set for the hairstyle classification phase was collected from the Internet and from several hair salons, with the hair portion of every picture extracted manually; each hairstyle has 250 images, with some variety within each style. To evaluate the accuracy of the proposed hairstyle classification system, 30 images per hairstyle (plus a "not applicable" class, meaning no hair) were collected from the Internet without any preprocessing and applied to the proposed system. The experimental results show that the classification accuracy exceeds 83%.
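To make the two-phase pipeline concrete, the sketch below outlines it in Python. It is a minimal illustration only: PyTorch/torchvision is assumed for the AlexNet features and scikit-learn for the random forest and SVM, and the patch size, the LTP threshold, the classifier hyperparameters, and the phase-2 network are placeholders rather than the settings used in the thesis.

```python
# Minimal sketch of the two-phase pipeline described in the abstract.
# All hyperparameters and the phase-2 classifier are illustrative assumptions.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

HAIRSTYLES = ["curly", "straight", "braid", "ponytail", "bun", "afro"]

# ---- Phase 1a: AlexNet texture features feeding a random forest -----------
alexnet = models.alexnet(weights=models.AlexNet_Weights.DEFAULT).eval()
feature_extractor = torch.nn.Sequential(
    alexnet.features, alexnet.avgpool, torch.nn.Flatten())

preprocess = T.Compose([
    T.ToTensor(),
    T.Resize((224, 224), antialias=True),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def alexnet_features(patch_rgb):
    """Flattened AlexNet convolutional features for one RGB patch (H x W x 3, uint8)."""
    with torch.no_grad():
        x = preprocess(patch_rgb).unsqueeze(0)           # (1, 3, 224, 224)
        return feature_extractor(x).squeeze(0).numpy()   # (9216,)

# Labels each patch as "hair", "non-hair", or "uncertain"; it must first be
# fit on AlexNet features of the Patch-F1k training patches.
rf = RandomForestClassifier(n_estimators=100)

# ---- Phase 1b: LTP histogram + SVM for the uncertain patches --------------
def ltp_histogram(gray, t=5):
    """Local ternary pattern: each of the 8 neighbours is compared against a
    +/- t band around the centre pixel; the upper and lower binary codes are
    histogrammed separately and concatenated into a 512-dim descriptor."""
    g = gray.astype(np.int16)
    c = g[1:-1, 1:-1]
    upper = np.zeros_like(c)
    lower = np.zeros_like(c)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for k, (dy, dx) in enumerate(offsets):
        n = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        upper += (n >= c + t).astype(np.int16) << k
        lower += (n <= c - t).astype(np.int16) << k
    hist = np.concatenate([np.histogram(upper, bins=256, range=(0, 256))[0],
                           np.histogram(lower, bins=256, range=(0, 256))[0]])
    return hist / (hist.sum() + 1e-9)

# Separates the uncertain patches into hair / non-hair; it must first be fit
# on LTP histograms of labelled training patches.
svm = SVC(kernel="rbf")

def segment_hair(patches_rgb, patches_gray):
    """Phase 1: label every patch, then resolve the uncertain ones with LTP + SVM."""
    feats = np.stack([alexnet_features(p) for p in patches_rgb])
    labels = rf.predict(feats)
    for i in np.where(labels == "uncertain")[0]:
        ltp = ltp_histogram(patches_gray[i]).reshape(1, -1)
        labels[i] = svm.predict(ltp)[0]
    return labels  # the "hair" patches are combined to form the hair mask

# ---- Phase 2: hairstyle classification on the segmented hair image --------
# The abstract does not name the phase-2 classifier; a fine-tuned AlexNet head
# over the six hairstyle classes is shown here purely as a placeholder.
hairstyle_net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
hairstyle_net.classifier[6] = torch.nn.Linear(4096, len(HAIRSTYLES))

def classify_hairstyle(hair_only_rgb):
    """Map a hair-only image (non-hair pixels masked out) to one of six styles."""
    hairstyle_net.eval()
    with torch.no_grad():
        x = preprocess(hair_only_rgb).unsqueeze(0)
        return HAIRSTYLES[int(hairstyle_net(x).argmax(dim=1))]
```

In this sketch the random forest and SVM stand in for the trained phase-1 classifiers, so both would need to be fit on the labelled texture patches before `segment_hair` is called; `classify_hairstyle` likewise assumes the placeholder network has been fine-tuned on the 250-image-per-class hairstyle set.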