| Author: | 張峻瑋 Chang, Chun-Wei |
|---|---|
| Thesis Title: | 影像置中深度解封裝之改良及其VLSI實現 Improvements of Centralized Texture Depth Depacking and Their VLSI Implementations |
| Advisor: | 劉濱達 Liu, Bin-Da |
| Co-Advisor: | 楊家輝 Yang, Jar-Ferr |
| Degree: | Master |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science (電機資訊學院 - 電機工程學系) |
| Year of Publication: | 2017 |
| Academic Year: | 105 |
| Language: | English |
| Number of Pages: | 64 |
| Keywords (Chinese): | 色彩轉換 (color transform)、3D彩圖置中景深包裝 (3D centralized texture depth packing)、深度圖 (depth map)、前景擴張 (foreground extension) |
| Keywords (English): | color transform, CTDP, depth map, foreground extension |
This thesis proposes an emulated color transform and foreground extension interpolation algorithm for horizontal 3D centralized texture depth packing (CTDP). Instead of the conventional color-coding approach, the proposed algorithm applies a color transform to eliminate the image overshoot problem that arises during depacking. In addition, to improve the quality of the synthesized virtual views, the algorithm locates foreground boundaries during interpolation-based upscaling and extends them by one to two pixels; the virtual views synthesized in this way retain more complete and natural boundaries than those produced by other image upscaling algorithms.
The hardware architecture of the proposed algorithm is implemented on an Altera DE4-230 FPGA development board. The 3D CTDP depacking architecture uses 4,090 logic elements, 7,406 registers, and about 1.189 Mb of memory, and it reaches a maximum operating frequency of 176.65 MHz while processing 1920 × 1080 video at 60 frames per second.
In this thesis, a texture-11/12 centralized texture depth (CTDP) vertical depacking system with an emulated color transform and a depth foreground extension interpolation algorithm is proposed. Instead of using the traditional YUV conversion, the proposed emulated color transform is adopted to eliminate overshoot problems in the depacking process. In addition, to enhance the quality of the synthesized virtual views, the proposed foreground extension interpolation algorithm detects foreground edges and further extends them by one to two pixels. Compared with other approaches, the proposed method preserves more complete edges without producing unnatural boundaries.
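The foreground extension idea can be pictured with a short sketch. The Python fragment below is only an illustration of the step described above, not the procedure or architecture from the thesis: the function name `extend_foreground`, the threshold `edge_thresh`, and the row-wise scan are assumptions, and an 8-bit depth map in which larger values mean "closer to the camera" is assumed. It simply copies the foreground depth value over one or two neighboring background pixels across each strong horizontal depth discontinuity.

```python
import numpy as np

def extend_foreground(depth, extend_px=2, edge_thresh=10):
    """Grow foreground regions by `extend_px` pixels across strong depth edges.

    Assumptions (not taken from the thesis): 8-bit depth map where larger
    values are closer to the camera, a fixed discontinuity threshold, and a
    simple row-wise scan.
    """
    out = depth.copy()
    h, w = depth.shape
    for y in range(h):
        for x in range(w - 1):
            left, right = int(depth[y, x]), int(depth[y, x + 1])
            if left - right > edge_thresh:
                # Foreground on the left of the edge: grow it rightwards.
                out[y, x + 1:min(x + 1 + extend_px, w)] = depth[y, x]
            elif right - left > edge_thresh:
                # Foreground on the right of the edge: grow it leftwards.
                out[y, max(x + 1 - extend_px, 0):x + 1] = depth[y, x + 1]
    return out

# Toy example: a one-pixel-wide foreground stripe gains one pixel on each side.
depth = np.zeros((3, 8), dtype=np.uint8)
depth[:, 4] = 200
print(extend_foreground(depth, extend_px=1))
```

Reading from the original `depth` while writing into a copy keeps the extension bounded at one to two pixels, so foreground regions do not grow cumulatively along a scan line.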
For the hardware architecture, the algorithm is realized on an Altera DE4-230 FPGA development board. The CTDP depacking design requires 4,090 logic elements, 7,406 registers, and 1.189 Mb of RAM, and it supports 1080p (1920 × 1080) video at 60 frames per second with a maximum operating frequency of 176.65 MHz.
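As a back-of-the-envelope check on the reported figures (assuming the datapath produces one output pixel per clock cycle and ignoring blanking intervals, which are assumptions rather than details stated above), 1920 × 1080 pixels at 60 frames per second correspond to about 124.4 Mpixel/s, comfortably below the 176.65 MHz maximum operating frequency.

```python
# Throughput check, assuming one pixel per clock cycle and no blanking overhead.
width, height, fps = 1920, 1080, 60
required_mpix = width * height * fps / 1e6   # ≈ 124.4 Mpixel/s
max_mhz = 176.65                             # reported maximum operating frequency
print(f"required ≈ {required_mpix:.1f} Mpixel/s, "
      f"fmax = {max_mhz} MHz, margin ≈ {max_mhz / required_mpix:.2f}×")
```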
Campus access: publicly available from 2022-09-01.