| Graduate student: | 孫子益 Sun, Tze-Yi |
|---|---|
| Thesis title: | 基於圖形處理器之立體匹配系統與深度修正 Stereo Matching and Depth Refinement on GPU Platform |
| Advisor: | 楊家輝 Yang, Jar-Ferr |
| Degree: | Master |
| Department: | Institute of Computer & Communication Engineering, College of Electrical Engineering and Computer Science |
| Year of publication: | 2018 |
| Academic year of graduation: | 106 (2017–2018) |
| Language: | English |
| Number of pages: | 53 |
| Chinese keywords: | GPU實現、立體匹配技術、窗型代價合併、深度強化技術、三重濾波器 |
| Foreign keywords: | GPU implementation, stereo matching, window-based aggregation, depth refinement, trilateral filter |
In recent years, naked-eye multi-view 3D displays have become an increasingly popular research topic. If the 3D signal also carries depth information, their performance and realization can be further improved. Therefore, stereo matching, which generates the depth information required by multi-view synthesis techniques, has become an important issue. Stereo matching algorithms can be roughly divided into global and local methods, and each type has its own advantages and drawbacks. Since this thesis mainly targets an implementation on the graphics processing unit (GPU), the local stereo matching methods, which are better suited to parallel processing, are adopted in this work. We therefore first improve the stereo matching algorithm and then adjust the depth refinement algorithm. In the stereo matching system, we propose modifications of the conventional cost computation and cost aggregation based on edge information. For depth refinement, we compare the generated left and right depth maps to locate their inconsistent regions, and then propose a four-step depth refinement and a trilateral filter to correct the resulting depth maps. Finally, the proposed algorithms are implemented on the GPU and the experimental results are compared with those of other methods. Simulation results on the GPU platform show that, while remaining fast, the proposed system obtains better depth map quality than other stereo matching algorithms.
Recently, the development of naked-eye multi-view three-dimensional television (3DTV) systems has become more and more popular. If the 3D signal contains depth information, its performance and realization become more feasible. Therefore, a stereo matching system, which generates accurate depth maps for depth-image-based rendering (DIBR) to produce multiple views, is an important issue. The global and local stereo matching methods each have their pros and cons due to the characteristics of the algorithms. Since the system in this thesis is implemented on a graphics processing unit (GPU), the local stereo matching methods are more suitable for data- and algorithm-parallel designs. Thus, improved stereo matching and depth refinement methods are proposed. For the stereo matching system, we propose improved cost computation and cost aggregation methods based on texture edge information. Furthermore, after checking the consistency of the generated left and right depth images, we propose a four-step refinement method and a trilateral filter to correct the disparity values. Finally, we implement the proposed stereo matching system on a GPU platform, which computes much faster, and compare the experimental results with those achieved by existing methods.
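To make the refinement flow more concrete, the sketch below illustrates the standard left-right consistency (cross-check) step that the abstract refers to before the four-step refinement and trilateral filtering. It is a minimal CUDA example under generic assumptions, not the thesis's actual code: the kernel name, the tolerance `tol`, and the `INVALID_DISP` marker are illustrative choices.

```cuda
// Minimal sketch (not the thesis's code) of a left-right consistency check:
// a disparity in the left map is kept only if the right map agrees at the
// corresponding position; otherwise the pixel is marked invalid so that a
// later refinement stage can fill it.
#include <cuda_runtime.h>
#include <cstdio>
#include <vector>

#define INVALID_DISP (-1)   // illustrative marker for inconsistent/occluded pixels

__global__ void lrConsistencyCheck(const int* dispL, const int* dispR, int* out,
                                   int width, int height, int tol)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;

    int idx = y * width + x;
    int d   = dispL[idx];
    int xr  = x - d;                        // matching column in the right view

    bool consistent = (xr >= 0 && xr < width) &&
                      (abs(d - dispR[y * width + xr]) <= tol);
    out[idx] = consistent ? d : INVALID_DISP;
}

int main()
{
    // Tiny 8x1 example: disparity 1 everywhere, with one inconsistent pixel.
    const int W = 8, H = 1, tol = 1;
    std::vector<int> hL(W * H, 1), hR(W * H, 1), hOut(W * H);
    hL[4] = 3;                              // disagrees with the right map by 2 > tol

    int *dL, *dR, *dOut;
    size_t bytes = W * H * sizeof(int);
    cudaMalloc((void**)&dL, bytes);
    cudaMalloc((void**)&dR, bytes);
    cudaMalloc((void**)&dOut, bytes);
    cudaMemcpy(dL, hL.data(), bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(dR, hR.data(), bytes, cudaMemcpyHostToDevice);

    dim3 block(16, 16);
    dim3 grid((W + block.x - 1) / block.x, (H + block.y - 1) / block.y);
    lrConsistencyCheck<<<grid, block>>>(dL, dR, dOut, W, H, tol);
    cudaMemcpy(hOut.data(), dOut, bytes, cudaMemcpyDeviceToHost);

    // Expected: pixel 0 has no valid match (xr < 0) and pixel 4 mismatches,
    // so both print -1; every other pixel keeps its disparity of 1.
    for (int i = 0; i < W; ++i) printf("%d ", hOut[i]);
    printf("\n");

    cudaFree(dL); cudaFree(dR); cudaFree(dOut);
    return 0;
}
```

Pixels flagged as invalid by such a cross-check are exactly the ones a subsequent stage, such as the four-step refinement and trilateral filter described above, would fill or smooth; the details of those stages are beyond what the abstract states.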