
Author: 蘇敬堯 (Su, Ching-Yao)
Thesis title: 具有燈光變化及對齊誤差強健性之人臉辨識系統之設計 / Designing a Face Recognition System with Illumination Variation and Localization Error Robustness
Advisor: 楊家輝 (Yang, Jar-Ferr)
Degree: Doctor
Department: College of Electrical Engineering and Computer Science - Institute of Computer & Communication Engineering
Publication year: 2014
Graduation academic year: 102
Language: English
Number of pages: 93
Keywords: illumination variation normalization, local descriptor, localization error, histogram of gradient phase, histogram of oriented gradient
    Owing to advances in electronic components such as image sensors, video devices, and multimedia processors, together with the growing demand for video surveillance, face recognition systems have attracted increasing attention over the past decade or so. Face recognition can serve many applications, such as human-machine interaction, video surveillance, and access control. Although many face recognition algorithms have been proposed, most of them are evaluated in relatively ideal, controlled environments, and the many variations present in real environments are ignored. These variations include alignment errors, differences caused by images being captured at different times, and expression and illumination changes, all of which can degrade recognition performance considerably. Developing a face recognition system that can operate in unconstrained, real-world environments is therefore a major challenge for designers.

    This dissertation focuses on developing a face recognition system that is robust against illumination variations and localization errors. To address these problems, we propose three main parts: a gradient-based local descriptor and its extraction method; an adaptive mechanism for illumination variation normalization; and a face recognition system with localization error compensation and two-stage recognition.

    In recent years, gradient-based local descriptors have received growing attention and have been successfully applied to face detection and face recognition. Such descriptors generally resist localization errors and illumination changes well and are relatively robust to expression variations. In the first part, we therefore propose a new local descriptor called the histogram of gradient phases (HGP). Compared with existing local descriptors such as the histogram of oriented gradients (HOG), it offers several attractive advantages, particularly in unconstrained environments. It differs from HOG in how the histogram is accumulated: the HGP descriptor is obtained by accumulating a gradient phase probability density function rather than gradient magnitudes. This density function is approximated by a Gaussian whose mean is the gradient phase at a given pixel and whose standard deviation is derived from the estimated signal-to-noise ratio (SNR) of that pixel's gradient. Because the proposed descriptor is derived from the statistical properties of image gradients, it has a clear physical meaning. Experimental results show that, compared with other local descriptors, it is more discriminative and less demanding on the accuracy of the block normalization of the histogram vectors (the normalization can even be omitted to save computation without much loss in performance), whereas other local descriptors suffer a severe performance drop when block normalization is not done well.
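
    As a notational sketch (the symbols below are ours, not taken from the dissertation), the accumulation just described can be written as

    \[
    h(k) \;=\; \sum_{p \in \mathcal{C}} \int_{B_k}
    \frac{1}{\sqrt{2\pi}\,\sigma_p}
    \exp\!\left(-\frac{(\theta-\theta_p)^2}{2\sigma_p^2}\right) d\theta,
    \qquad k = 1,\dots,K,
    \]

    where C is the set of pixels in a cell, B_k is the k-th orientation bin, θ_p is the gradient phase at pixel p, and σ_p is the standard deviation obtained from the estimated gradient SNR at p (a higher SNR gives a smaller σ_p, so the probability mass concentrates in fewer bins). In HOG, by contrast, pixel p would simply add its gradient magnitude to the single bin containing θ_p.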

    To reduce the effect of illumination variations on recognition performance, the image intensities are usually normalized. In the second part of this dissertation, we point out that although intensity normalization can improve recognition, the normalization process may also damage useful features and thus degrade performance as a side effect; this often happens when the test image and the gallery images were captured under similar conditions. We therefore propose an adaptive mechanism to solve this problem. When computing the distance between the feature vectors of the test image and each gallery image, the mechanism blends, in an appropriate proportion, the distance computed from un-normalized features and the distance computed from normalized features. The mixing proportion is determined by the ratio of two confidence measures: one for the distances of the un-normalized features and one for the distances of the normalized features. Because these two confidence measures reflect how difficult recognition is before and after normalization, their ratio can be used to choose the most suitable mixing proportion and thus obtain better recognition results. The proposed mechanism has several advantages: it is simple and performs well; it can distinguish illumination variations from the effects of other sources of variation (such as expression changes, localization errors, and differences in capture time) without confusing them; and it requires no elaborate training procedure.
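
    In symbols (again our own notation, since the exact formula is not given in this summary), the fused distance to a gallery image g could take the form

    \[
    d_{\text{fused}}(g) \;=\; \alpha\, d_{\text{norm}}(g) + (1-\alpha)\, d_{\text{raw}}(g),
    \qquad
    \alpha \;=\; \frac{c_{\text{norm}}}{c_{\text{norm}} + c_{\text{raw}}},
    \]

    where d_raw and d_norm are the distances computed from the un-normalized and normalized features, and c_raw and c_norm are the corresponding confidence measures derived from the distances between the test image and all gallery images, so that the mixing weight α follows their ratio as described above.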

    Localization error is another challenging problem in face recognition systems. It mainly originates from the estimation error of the face detection stage that precedes recognition. The residual error misaligns the test image with the gallery images, which disturbs both feature extraction and distance computation, causes recognition errors, and can severely degrade performance. Although the test image could be aligned with every gallery image one by one before recognition, doing so usually requires an enormous amount of computation. In the third part, we therefore propose a two-stage recognition system with localization error compensation. The system consists of two recognition modules based on local descriptors. The first stage uses feature vectors generated from larger local regions to tolerate larger localization errors and, from the distances between the test image and all gallery images, selects a most probable subset. Next, an iterative alignment algorithm driven by gradient phase information aligns the test image with each image in the most probable subset and corrects the estimated error. Finally, the second-stage module compares the error-compensated test image with the images in the most probable subset; because it uses feature vectors generated from smaller local regions, which are more discriminative, it achieves better recognition results. We emphasize that the iterative alignment algorithm in our system aligns the test image only with the most probable subset rather than with the entire gallery, so the computational complexity is greatly reduced.
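
    Schematically (our notation, with the subset written as a selection of the K smallest distances purely for illustration), the two stages can be summarized as

    \[
    \mathcal{S} = \{\, g \in \mathcal{G} : d_1(x,g) \text{ is among the } K \text{ smallest} \,\},
    \qquad
    \hat{g} = \arg\min_{g \in \mathcal{S}} d_2\big(\mathcal{W}(x;\hat{p}_g),\, g\big),
    \]

    where G is the gallery, d_1 is the distance based on large-cell features, W(x; p̂_g) is the test image warped by the parameters estimated by the gradient-phase alignment against gallery image g, and d_2 is the more discriminative small-cell distance.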

    To verify the robustness of the three parts described above, we conduct extensive experiments in which appropriate localization errors are deliberately imposed on the original images to simulate the errors produced by face detection, and databases with large illumination variations are used to validate the reliability of the proposed methods.

    This dissertation is organized as follows. Chapter 1 introduces the concepts and challenges of face recognition, reviews existing techniques for building discriminative face subspaces, and surveys approaches for handling illumination variations and localization errors. Chapter 2 covers the first and second parts described above; for conciseness, the detailed theoretical derivations are moved to the appendices. Chapter 3 covers the third part. Finally, Chapter 4 concludes the proposed methods and outlines future work.

    Because of the rapid advances in image sensor, video device, and multimedia processor technologies, face recognition techniques have received increasing attention during the past decade. These techniques have been successfully used in many applications, including human-machine interaction, visual surveillance, and access control. Although many algorithms have been proposed for face recognition, most of them are evaluated under constrained conditions, that is, with well-controlled illumination and localization. In practical situations, however, there is a diversity of variations, including localization errors, aging, expression changes, and illumination variations, which have a significant impact on system performance. It is therefore very challenging for researchers to develop a robust face recognition system under unconstrained conditions.

    In this dissertation, we focus on how to develop a robust face recognition system under unconstrained conditions, especially in the presence of illumination variations and localization errors. To address these problems, three major parts are included: a new robust gradient-based local descriptor, an adaptive mechanism for illumination normalization, and a two-stage face recognition system with localization error compensation.

    Gradient-based local descriptors have received increasing attention in recent years and have been successfully used in many applications such as human detection and face recognition. Their advantages are resistance to local geometric and photometric errors and robustness to expression variations. In the first part, we propose a new local descriptor called the histogram of gradient phases (HGP), which has some attractive properties compared with existing local descriptors such as the histogram of oriented gradients (HOG) for face recognition under unconstrained conditions. In contrast with HOG, the orientation histogram is computed from the estimated gradient phase distributions instead of magnitude-weighted votes. In HGP, each phase distribution is modeled as a Gaussian whose mean is the estimated gradient phase and whose standard deviation is determined from the estimated gradient signal-to-noise ratio (SNR) of a pixel in a local region. Simulation results show that the proposed HGP descriptor, which takes the confidence of the gradient phase into account, is more discriminative and less sensitive to the block normalization process than most existing local descriptors, which generally suffer significant performance loss without proper block normalization.
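
    The following short Python sketch illustrates this idea for a single cell. It is not the dissertation's implementation: the mapping from gradient SNR to the phase standard deviation (the constant kappa) and the neglect of circular wrap-around of the phase pdf are simplifying assumptions, and gx, gy, and snr stand for same-sized arrays of gradient components and estimated per-pixel gradient SNRs.

    import numpy as np
    from scipy.stats import norm  # Gaussian cdf, used to integrate the phase pdf over each bin

    def hgp_cell_histogram(gx, gy, snr, n_bins=8, kappa=1.0):
        """Sketch of an HGP-style cell histogram (illustrative, not the thesis code)."""
        phase = np.arctan2(gy, gx)                      # per-pixel gradient phase in (-pi, pi]
        sigma = kappa / np.sqrt(np.maximum(snr, 1e-6))  # assumed: higher gradient SNR -> sharper pdf
        edges = np.linspace(-np.pi, np.pi, n_bins + 1)  # orientation bin edges

        hist = np.zeros(n_bins)
        for theta, s in zip(phase.ravel(), sigma.ravel()):
            cdf = norm.cdf(edges, loc=theta, scale=s)   # Gaussian phase pdf centered at theta
            hist += np.diff(cdf)                        # probability mass falling in each bin
        return hist                                     # a HOG cell would instead add |gradient| to one bin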

    To alleviate the effect of illumination variations in face recognition, some illumination normalization process is usually required. In the second part, we point out that most illumination normalization methods (especially filtering-based ones) suffer from the side effect of degrading performance when the test image and the gallery images have similar imaging conditions. We therefore propose a simple and robust mechanism to mitigate this side effect and further improve performance. The mechanism adaptively fuses two distances according to the ratio of two proposed confidence measures derived from the distances between the test image and all gallery images. The confidence measures capture the difficulty of distinguishing these images, so the mechanism can adaptively determine suitable proportions for the two distances and achieve better performance. Compared with previous work, the proposed mechanism is simple, robust, and capable of distinguishing illumination variations from other impairments such as localization errors, expression variations, and aging effects. It also needs no complicated training procedure.
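
    A minimal Python sketch of the fusion idea follows. The confidence measure used here (the relative margin between the two smallest gallery distances) is a stand-in of our own, since this summary does not define the exact measure; d_raw and d_norm are arrays of distances from the test image to every gallery image, computed from un-normalized and normalized features respectively.

    import numpy as np

    def fuse_distances(d_raw, d_norm, eps=1e-9):
        """Adaptively fuse un-normalized and normalized feature distances (illustrative sketch)."""
        def confidence(d):
            two = np.sort(d)[:2]                        # two smallest distances to the gallery
            return (two[1] - two[0]) / (two[1] + eps)   # stand-in confidence: relative margin

        def rescale(d):
            return (d - d.min()) / (d.max() - d.min() + eps)  # make the two distance sets comparable

        c_raw, c_norm = confidence(d_raw), confidence(d_norm)
        alpha = c_norm / (c_norm + c_raw + eps)         # mixing weight from the confidence ratio
        return alpha * rescale(d_norm) + (1.0 - alpha) * rescale(d_raw)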

    Another challenging issue for face recognition under unconstrained conditions is the localization error problem, that is, the residual alignment error remaining after the initial geometric normalization performed by the face detection algorithm. If these errors are not taken into account, the recognition performance of most face recognition algorithms is greatly degraded. Although it is possible to apply an iterative alignment algorithm directly to compensate for the error between the test image and every gallery image, doing so increases the computational load significantly. In the third part, we propose an adaptive two-stage face recognition system consisting of two block-based (i.e., using local appearance features) recognition stages. The first stage, which uses a relatively large cell size, copes with larger localization errors and chooses a most probable subset from the gallery. Next, a fine alignment algorithm estimates and compensates the residual localization errors between the test face image and the images in the most probable subset determined in the first stage. Once the residual errors have been compensated, the second-stage recognition, which uses a relatively small cell size, is performed on this error-compensated most probable subset to achieve better performance while saving computation. It should be stressed that the alignment algorithm is applied only to the most probable subset instead of the whole gallery set.
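
    The overall flow can be sketched in Python as below; coarse_feat, fine_feat, and align are placeholders for the large-cell descriptor, the small-cell descriptor, and the fine (iterative) alignment step described above, and the subset size k is arbitrary.

    import numpy as np

    def two_stage_recognition(test_img, gallery, coarse_feat, fine_feat, align, k=5):
        """Illustrative sketch of the two-stage recognition flow (not the thesis implementation)."""
        # Stage 1: large-cell features tolerate larger localization errors;
        # keep only the k most probable gallery candidates.
        f_test = coarse_feat(test_img)
        d1 = np.array([np.linalg.norm(f_test - coarse_feat(g)) for g in gallery])
        subset = np.argsort(d1)[:k]

        # Stage 2: align the test image to each candidate only (not to the whole gallery),
        # then compare with the more discriminative small-cell features.
        d2 = [np.linalg.norm(fine_feat(align(test_img, gallery[i])) - fine_feat(gallery[i]))
              for i in subset]
        return int(subset[int(np.argmin(d2))])          # index of the recognized gallery image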

    To demonstrate the robustness of our methods, extensive simulations are conducted by deliberately imposing localization errors on the images and by using databases with severe illumination variations.

    This dissertation is organized as follows. In Chapter 1, we briefly introduce the concepts and challenges of face recognition technology and review previous work on creating discriminative feature subspaces, performing illumination normalization, and addressing the localization error problem. Chapter 2 covers the first and second parts mentioned above; for clarity, the detailed derivations are deferred to the appendix chapters. Chapter 3 covers the third part. Finally, Chapter 4 gives a short summary and discusses future work.

    Table of Contents:
    Abstract (in Chinese)
    Abstract
    CHAPTER 1 INTRODUCTION TO FACE RECOGNITION TECHNIQUES
      1.1 INTRODUCTION
      1.2 CHALLENGES IN FACE RECOGNITION
      1.3 FACE RECOGNITION USING SUBSPACE APPROACHES
      1.4 FACE RECOGNITION USING LOCAL DESCRIPTORS
      1.5 REFLECTANCE MODEL AND ILLUMINATION NORMALIZATION APPROACHES
        1.5.1 Reflectance Model
        1.5.2 Illumination Insensitive Features
        1.5.3 Illumination Normalization Approaches
      1.6 APPROACHES TO REDUCE THE LOCALIZATION ERROR EFFECT
        1.6.1 Modified Matching Strategy
        1.6.2 Localization Error Robust Feature
        1.6.3 Learning Localization Errors from Augmented Training Set
      1.7 OVERVIEW OF FACE DATABASES
    CHAPTER 2 DEALING WITH THE ILLUMINATION VARIATION PROBLEM
      2.1 REVIEW OF THE HOG LOCAL DESCRIPTOR
      2.2 THE PROPOSED HGP DESCRIPTOR
        2.2.1 Generating Local Histograms using Statistical Information
        2.2.2 Estimation of Gradient SNRs
        2.2.3 Estimation of Variances of Gradient Phases
        2.2.4 Distance Calculation
        2.2.5 Dynamic Range Analyses of Block Norms
      2.3 ADAPTIVE ILLUMINATION NORMALIZATION MECHANISMS
        2.3.1 Learning-Based Adaptive Mechanism
        2.3.2 Proposed Adaptive Mechanism
      2.4 SIMULATION RESULTS AND DISCUSSIONS
        2.4.1 General Settings and Abbreviations
        2.4.2 Evaluation Method
        2.4.3 Parameter Evaluation
        2.4.4 Performance Results for Different Local Descriptors
        2.4.5 Performance Results for Adaptive Illumination Normalization Mechanisms
      2.5 BRIEF CONCLUSIONS
    CHAPTER 3 DEALING WITH THE LOCALIZATION ERROR PROBLEM
      3.1 PROBLEM STATEMENT
      3.2 FACE ALIGNMENT AND RECOGNITION VIA SPARSE REPRESENTATION
      3.3 THE PROPOSED SYSTEM
        3.3.1 Manipulating Local Cell Sizes to Control Localization Error Tolerance and Recognition Performance
        3.3.2 Block-based Face Recognition Approaches
        3.3.3 Confidence Metrics of the First Stage Recognition
        3.3.4 Estimation of Warp Parameters
      3.4 SIMULATION RESULTS AND DISCUSSIONS
      3.5 BRIEF CONCLUSIONS
    CHAPTER 4 DISCUSSIONS AND FUTURE WORK
    REFERENCES
    APPENDIX A
      A.1 Characteristics of Gradient Phases and Its pdf
      A.2 Derivation of Equation (A-4)
    APPENDIX B
    APPENDIX C
    PUBLICATIONS


    Full text availability: on campus from 2016-08-26; off campus from 2017-08-26.