
Graduate Student: Yang, Hung-Yu (楊弘宇)
Thesis Title: The Implementation of Low-Complexity Image Enhancement Methods (低複雜度影像強化技術之設計實現)
Advisor: Chen, Pei-Yin (陳培殷)
Degree: Doctor
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2013
Academic Year of Graduation: 101 (2012-2013)
Language: English
Number of Pages: 90
Keywords: low-complexity, hardware implementation, haze removal, illumination adjustment, real-time processing
    Smart cameras and intelligent surveillance systems are widely used in many applications, but under certain conditions (e.g., bad weather, strong backlight, or dim environments) the camera cannot capture clear and complete image information. To improve system performance, image preprocessing techniques are generally applied to enhance image quality. For intelligent systems that must operate in real time, a fast, low-complexity image enhancement preprocessing technique is indispensable. This dissertation therefore focuses on the design and development of high-performance, low-complexity image enhancement techniques suitable for hardware implementation.
    This dissertation first presents a fast and effective haze removal algorithm. We use an extremum approximation method and a contour-preserving filter to estimate the atmospheric light and the haze depth plane, respectively; this method effectively avoids halo artifacts in the recovered image. Experimental results show that our method achieves results comparable to previous work with the shortest execution time. To meet real-time requirements, we also implement the proposed haze removal algorithm as a pipelined hardware circuit. Synthesized with SYNOPSYS Design Vision and the TSMC 0.13-μm standard cell library, the circuit achieves a throughput of 200 Mpixels/second; for video at a resolution of 2560×2048, it can process frames in real time and produce good results.
    Furthermore, this dissertation proposes a low-complexity illumination adjustment algorithm based on the Retinex theory. Using the concept of bi-dimensional empirical mode decomposition, we develop a novel low-pass filter that quickly extracts a smooth illumination plane from a single image; combined with an adaptive illumination adjustment method, it yields recovered images with higher contrast. Experimental results show that this method achieves results comparable to previous work with the shortest execution time. To meet real-time requirements, we also implement the proposed illumination adjustment algorithm as a pipelined hardware circuit; synthesized with SYNOPSYS Design Vision and the TSMC 0.13-μm standard cell library, it achieves a throughput of 200 Mpixels/second. For lower-spec imaging systems, the circuit can be run at a slower clock frequency to reduce power consumption.

    There is an increasing demand for cameras and intelligent surveillance systems for monitoring private and public areas. Under some conditions, such as bad weather, strong background illumination, or a dark environment, a captured surveillance image shows an irrecoverable loss of visual information. Many image preprocessing algorithms have been proposed to increase the performance of such systems. In a real-time intelligent system, a fast, low-complexity implementation of these preprocessing algorithms is necessary. The main objective of this dissertation is therefore the development of efficient, low-complexity image enhancement methods suitable for hardware implementation.
    In this dissertation, we first propose a fast and efficient haze removal method based on the atmospheric scattering model. An extremum approximation technique is employed to extract the atmospheric light, and a contour-preserving estimation obtains the transmission by applying edge-preserving and mean filters alternately. Our method efficiently avoids the halo artifacts generated in the recovered image. Experimental results demonstrate that the proposed method obtains comparable results with the least execution time among previous algorithms. To meet the requirements of real-time applications, a pipelined hardware architecture for our haze removal method is presented. We use SYNOPSYS Design Vision to synthesize the design with the TSMC 0.13-μm cell library. Synthesis results show that our design yields a processing rate of 200 Mpixels/second, which is fast enough to process QSXGA (2560×2048) video at 30 frames per second in real time.
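The recovery step described above follows the standard atmospheric scattering model, I(x) = J(x)·t(x) + A·(1 − t(x)). The following is a minimal software sketch of that flow, not the thesis's pipelined hardware design: the plain mean filter, the dark-channel-style transmission estimate, and the parameters `omega` and `t0` are illustrative assumptions standing in for the thesis's alternating edge-preserving/mean filtering.

```python
import numpy as np

def mean_filter(x, r):
    """Naive mean filter with edge padding (O(r^2) per pixel); a
    stand-in for the thesis's contour-preserving filtering."""
    p = np.pad(x, r, mode='edge')
    out = np.zeros_like(x)
    h, w = x.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def dehaze(img, omega=0.95, t0=0.1, radius=7):
    """Single-image haze removal sketch based on the atmospheric
    scattering model I = J*t + A*(1 - t); `img` is float RGB in [0, 1]."""
    # 1. Atmospheric light A: extremum approximation (per-channel max).
    A = img.reshape(-1, 3).max(axis=0)

    # 2. Rough transmission t from the smoothed minimum channel.
    dark = (img / A).min(axis=2)
    t = 1.0 - omega * mean_filter(dark, radius)

    # 3. Scene recovery J = (I - A) / t + A, clamping t to avoid
    #    amplifying noise in dense-haze regions.
    t = np.clip(t, t0, 1.0)[..., None]
    return np.clip((img - A) / t + A, 0.0, 1.0)
```

Clamping the transmission to a floor `t0` is the usual safeguard in this model: where the haze is dense, t approaches zero and the division would otherwise blow up sensor noise.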
    Next, we propose a low-complexity illumination adjustment algorithm based on the Retinex theory. A fast illumination estimation using the concept of bi-dimensional empirical mode decomposition (FIECB) is presented to extract a smooth illumination component, and an adaptive gamma correction adjusts this component to obtain a higher-contrast, more pleasing image. Experimental results demonstrate that our method achieves much shorter execution time with comparable visual quality compared with previous algorithms. To meet the requirements of real-time applications, a pipelined hardware architecture implemented as an intellectual property (IP) core is presented. We use SYNOPSYS Design Vision to synthesize the design with the TSMC 0.13-μm cell library. Synthesis results show that the proposed IP core achieves a processing rate of 200 Mpixels/second, meeting real-time requirements at low cost. In low-cost imaging systems, the processing rate can be slowed down so that our hardware core runs at very low power consumption.
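The decompose-adjust-recombine flow above can be sketched as follows. This is a generic Retinex-style illustration, not the thesis's method: a large-kernel mean filter stands in for the FIECB envelope-based illumination estimator, and the adaptive-gamma formula is an assumed example of making the exponent depend on local illumination.

```python
import numpy as np

def mean_filter(x, r):
    """Naive mean filter with edge padding; a low-pass stand-in for
    the thesis's FIECB (BEMD-based) illumination estimator."""
    p = np.pad(x, r, mode='edge')
    out = np.zeros_like(x)
    h, w = x.shape
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += p[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def adjust_illumination(gray, radius=15, eps=1e-6):
    """Retinex-style enhancement sketch for a float grayscale image
    in [0, 1]: split into illumination L and reflectance R = I / L,
    compress L with an illumination-adaptive gamma, recombine."""
    # 1. Smooth illumination plane (low-pass of the input).
    L = mean_filter(gray, radius)

    # 2. Reflectance (detail) layer.
    R = gray / (L + eps)

    # 3. Adaptive gamma: dark regions (small L) get a smaller exponent,
    #    hence stronger brightening; bright regions are barely changed.
    gamma = 0.5 + 0.5 * L
    L_adj = np.power(L, gamma)

    # 4. Recombine and clip to the valid range.
    return np.clip(L_adj * R, 0.0, 1.0)
```

Because only the illumination layer is gamma-corrected, edges and texture in the reflectance layer pass through unchanged, which is what gives Retinex-style methods their contrast advantage over a global gamma curve.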

    Abstract (Chinese) I
    ABSTRACT II
    Acknowledgements IV
    CONTENTS V
    CHAPTER 1 INTRODUCTION 1
      1.1 BACKGROUND 1
      1.2 MOTIVATION 1
      1.3 ORGANIZATION 3
    CHAPTER 2 FAMILIAR COLOR MODELS 4
      2.1 THE RGB COLOR MODEL 4
      2.2 THE CMY AND CMYK COLOR MODELS 5
      2.3 THE HSI COLOR MODEL 6
      2.4 THE HSV COLOR MODEL 9
    CHAPTER 3 HARDWARE IMPLEMENTATION OF A FAST AND EFFICIENT HAZE REMOVAL METHOD 10
      3.1 INTRODUCTION 10
      3.2 RELATED ALGORITHM 11
      3.3 FAST AND EFFICIENT HAZE REMOVAL METHOD 13
        3.3.1 Atmospheric Light Estimation 14
        3.3.2 Transmission Estimation 15
        3.3.3 Scene Recovery 16
      3.4 HARDWARE ARCHITECTURE 18
        3.4.1 ALE unit 19
        3.4.2 TE unit 20
        3.4.3 SRSC unit 21
      3.5 SIMULATION RESULTS AND HARDWARE IMPLEMENTATION 21
      3.6 CONCLUDING REMARKS 36
    CHAPTER 4 HARDWARE IMPLEMENTATION OF A FAST ILLUMINATION ADJUSTMENT METHOD 37
      4.1 INTRODUCTION 37
      4.2 RELATED ALGORITHM 39
      4.3 FAST ILLUMINATION ADJUSTMENT ALGORITHM 42
        4.3.1 Fast illumination estimation by using the concept of bi-dimensional empirical mode decomposition (FIECB) 43
          4.3.1.1 Local extrema detection 44
          4.3.1.2 Envelope Generation 45
        4.3.2 Adaptive Illumination Adjustment 48
      4.4 HARDWARE ARCHITECTURE 50
        4.4.1 Operation Reduction and Hardware Sharing 53
        4.4.2 Parallel Processing Technique 54
        4.4.3 Pipeline Hardware Architecture 55
      4.5 IMPLEMENTATION RESULTS AND COMPARISONS 58
      4.6 CONCLUDING REMARKS 81
    CHAPTER 5 CONCLUSIONS AND FUTURE WORK 82
      5.1 CONCLUSIONS 82
      5.2 FUTURE WORK 83
    REFERENCES 84


    On campus: open access from 2023-12-31
    Off campus: not available
    The electronic thesis has not been authorized for public release; please consult the library catalog for the print copy.