
Author: Tseng, Huan-Kai (曾桓愷)
Title: A Patch-matched Hole Filling Algorithm for Multiview Synthesis and Its VLSI Implementation
Advisors: Liu, Bin-Da (劉濱達); Yang, Jar-Ferr (楊家輝)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of Publication: 2015
Academic Year: 103
Language: English
Pages: 83
Keywords (Chinese): depth image-based rendering, 3D stereoscopic TV, hole filling, patch matching
Keywords (English): DIBR, multiview 3D TV, hole filling, patch match
This thesis proposes a patch-matched hole filling algorithm for virtual view synthesis based on depth image-based rendering (DIBR). Instead of the traditional approach of smoothing the depth map, which introduces geometric distortion into the synthesized virtual views, an alignment algorithm is proposed that matches object edges in the texture image with those in its depth map, improving the correctness of the information used for hole filling. The algorithm fills holes by searching the surrounding region for similar patches with an adaptively adjusted patch size, and uses the depth map to ensure that only background information is filled in. The resulting images are more natural than those produced by filling one pixel at a time or by simply extending the background.
    For the hardware architecture, the algorithm is implemented on an Altera DE3-260 FPGA development board. The patch-matched hole filling module uses 12.7 k logic elements, 10.2 k registers, and 2.39 Mb of memory, reaches a maximum operating frequency of 152.47 MHz, and outputs 1080p (1920 × 1080) video in real time.

    In this thesis, a patch-matched hole filling algorithm for depth image-based rendering (DIBR) is proposed. Instead of adopting traditional depth-smoothing approaches, which introduce geometric distortion into the synthesized virtual views, the proposed algorithm aligns object edges in the texture image with those in the depth map, which increases the correctness of the information available for hole filling. The key of the algorithm is to search for similar patches, with an adaptive window size, in the region surrounding each hole. In addition, the depth map is consulted to ensure that holes are filled with background patches. Compared with pixel-by-pixel hole filling and simple background extension, the proposed algorithm produces more natural results.
    For the hardware architecture, the algorithm is implemented on an Altera DE3-260 FPGA. The patch-matched hole filling design requires 12.7 k logic elements, 10.2 k registers, and 2.39 Mb of RAM, and supports 1080p (1920 × 1080) video in real time at a maximum operating frequency of 152.47 MHz.
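    The depth-aided, patch-matched filling idea described above can be sketched in a few lines of Python. This is a minimal grayscale illustration under stated assumptions, not the thesis's actual design: the function name, the parameters `patch`, `search`, and `tol`, and the convention that smaller depth values mean farther-away (background) pixels are all assumptions of the sketch, and it omits the thesis's adaptive window sizing, edge alignment, and hardware-oriented optimizations.

```python
import numpy as np

def depth_aided_patch_fill(image, depth, holes, patch=3, search=6, tol=10):
    """Toy sketch: fill each hole pixel with the center of the most
    similar fully-known nearby patch, restricted to candidates at
    background depth (here: smaller depth value = farther away)."""
    img = image.astype(np.float64).copy()
    mask = holes.astype(bool).copy()
    h, w = mask.shape
    r = patch // 2
    for y, x in zip(*np.nonzero(mask)):
        if not (r <= y < h - r and r <= x < w - r):
            continue  # this sketch skips holes touching the border
        tgt = img[y - r:y + r + 1, x - r:x + r + 1]
        known_tgt = ~mask[y - r:y + r + 1, x - r:x + r + 1]
        if not known_tgt.any():
            continue  # no known neighbors to match against yet
        # estimate the local background depth from known pixels nearby
        ny0, ny1 = max(0, y - search), min(h, y + search + 1)
        nx0, nx1 = max(0, x - search), min(w, x + search + 1)
        nb_known = ~mask[ny0:ny1, nx0:nx1]
        bg = depth[ny0:ny1, nx0:nx1][nb_known].min()
        best, best_cost = None, np.inf
        for cy in range(max(r, y - search), min(h - r, y + search + 1)):
            for cx in range(max(r, x - search), min(w - r, x + search + 1)):
                if mask[cy - r:cy + r + 1, cx - r:cx + r + 1].any():
                    continue  # candidate patch must be fully known
                if depth[cy, cx] > bg + tol:
                    continue  # reject foreground candidates
                cand = img[cy - r:cy + r + 1, cx - r:cx + r + 1]
                # SSD over the target's known pixels only
                cost = np.sum((tgt - cand)[known_tgt] ** 2)
                if cost < best_cost:
                    best_cost, best = cost, (cy, cx)
        if best is not None:
            img[y, x] = img[best]
            mask[y, x] = False  # filled pixels become known
    return img.astype(image.dtype)
```

    Because filled pixels are marked as known, later hole pixels can match against earlier fills, which mirrors the inward-filling behavior of exemplar-based inpainting; the depth check is what keeps foreground texture from bleeding into the disocclusion region.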

    Abstract (Chinese)
    Abstract (English)
    Acknowledgement
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1 Introduction
      1.1 Motivation
      1.2 Organization of the Thesis
    Chapter 2 Overview of Related Work
      2.1 Basic Concepts of DIBR System
        2.1.1 3D Warping
        2.1.2 Hole Filling
        2.1.3 Depth Map Preprocessing
      2.2 Smooth-Depth-Based DIBR
      2.3 Inpainting Algorithms
        2.3.1 Criminisi's Exemplar-Based Image Inpainting
        2.3.2 Depth-Aided Inpainting Algorithms
    Chapter 3 The Proposed DIBR System
      3.1 Overview of Proposed DIBR System
      3.2 Depth Map Preprocessing
        3.2.1 The Proposed Alignment Algorithm
        3.2.2 Object Edge Detection
        3.2.3 Color Similarity Checking
        3.2.4 Depth Value Correction
      3.3 3D Warping
      3.4 Hole Filling for Interpolation
        3.4.1 Merge
        3.4.2 Background Extension
      3.5 Hole Filling for Extrapolation
        3.5.1 Tiny Holes Filling
        3.5.2 Hole Searching
        3.5.3 Window Size Decision
        3.5.4 Patch Searching
        3.5.5 Matching Cost
        3.5.6 Patch Copying
      3.6 Interlacing of View's Subpixels
      3.7 Hardware Design of Proposed DIBR System
        3.7.1 Stereo-to-Multiview System
        3.7.2 Overview of Hardware Architecture
        3.7.3 Alignment Module
        3.7.4 Warping Module
        3.7.5 Interpolation Module
        3.7.6 Extrapolation Module
    Chapter 4 Simulation Results and Comparison
      4.1 Simulation Results
      4.2 Hardware Resources and Performance
    Chapter 5 Conclusion and Future Work
      5.1 Conclusion
      5.2 Future Work
    References


    Full text availability: on campus 2020-08-20; off campus 2020-08-20.