| Graduate Student: | 陳彥宏 Chen, Yan-Hong |
|---|---|
| Thesis Title: | 靜態影片立體化之合成演算法 (Static Stereoscopic Video Generation Algorithms) |
| Advisor: | 楊家輝 Yang, Jar-Ferr |
| Degree: | Master |
| Department: | Institute of Computer & Communication Engineering, College of Electrical Engineering & Computer Science |
| Year of Publication: | 2008 |
| Academic Year of Graduation: | 96 (ROC calendar) |
| Language: | Chinese |
| Number of Pages: | 61 |
| Chinese Keywords: | 靜態影片、立體影片合成 |
| English Keywords: | static video, stereo video synthesis |
As stereoscopic LCD technology continues to advance, viewers no longer need to wear stereo glasses to enjoy 3D images on screen, and a growing body of research addresses how to present video stereoscopically. The most common approach first estimates depth and then synthesizes parallax from that depth, producing the corresponding left- and right-eye images that give the viewer a sense of depth. When the video content is entirely static, such as guided tours of artworks or architecture, we can instead select a suitable frame from the video itself to serve as the other-eye image.
This thesis selects such frames in two different ways: motion-vector-assisted selection and horizontal-displacement estimation. Both methods aim to select a frame whose capture distance from the current frame bears a fixed proportion to the distance between our two eyes. With some further adjustment, the two frames can then form a stereo image pair.
With the availability of 3D LCD display systems, people can perceive stereo images without wearing any special 3D glasses, and much research has focused on 2D-to-3D video conversion. The common conversion technique first estimates a depth map and then uses this depth information to generate a horizontal parallax for each pixel. Each pixel is then shifted according to its corresponding horizontal parallax; after shifting and hole filling, we obtain an approximation of the other-eye image for two-view stereo display. However, object segmentation is the most difficult step preceding depth map estimation. If the video contains only a still scene, meaning that the objects captured by the camera do not move, we can skip the above procedure and instead select a proper image from the original video sequence to achieve similar results.
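The depth-to-parallax pipeline described above can be sketched in a few lines. This is a minimal illustration, not the thesis's implementation: the `max_disp` scaling and the left-to-right hole-filling rule are assumptions made for the sketch.

```python
import numpy as np

def depth_to_parallax(depth, max_disp=16):
    """Map a normalized depth map (0 = far, 1 = near) to per-pixel
    horizontal parallax in pixels. max_disp is an assumed tuning knob."""
    return np.round(depth * max_disp).astype(int)

def synthesize_other_view(image, depth, max_disp=16):
    """Shift each pixel horizontally by its parallax, then fill the
    resulting holes by propagating the nearest valid pixel from the left."""
    h, w = image.shape[:2]
    disp = depth_to_parallax(depth, max_disp)
    out = np.zeros_like(image)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        # forward-warp each pixel to its parallax-shifted position
        for x in range(w):
            nx = x + disp[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
                filled[y, nx] = True
        # simple hole filling: copy the last valid pixel to the right
        for x in range(1, w):
            if not filled[y, x]:
                out[y, x] = out[y, x - 1]
    return out
```

With a flat (all-zero) depth map the synthesized view equals the input; with uniform depth every pixel shifts by the same amount, leaving a one-column hole at the image border.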
In this thesis, we propose two methods for properly selecting such an image to achieve effective 2D-to-3D conversion: horizontal motion vector selection and its related horizontal displacement estimation. The goal is that the baseline between the current frame and the selected frame approximates the distance between the left and right eyes. Finally, we adjust the selected and current frames to form a stereo image pair.
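The selection criterion can be illustrated with a small sketch. The input `shifts` (per-frame horizontal camera displacement, e.g. aggregated from motion vectors) and the tolerance `tol` are assumptions of this illustration, not the thesis's actual estimator.

```python
def select_stereo_frame(shifts, current, target, tol=0.5):
    """shifts[i]: estimated horizontal camera displacement between
    frame i and frame i+1 (assumed precomputed, e.g. from motion
    vectors). Walk backward from `current`, accumulating displacement,
    and return the index of the earlier frame whose cumulative
    displacement best matches the target baseline `target`,
    or None if no frame falls within `tol` of it."""
    acc = 0.0
    best, best_err = None, float('inf')
    for i in range(current - 1, -1, -1):
        acc += shifts[i]
        err = abs(acc - target)
        if err < best_err:
            best, best_err = i, err
    return best if best_err <= tol else None
```

For example, with a camera panning 1 unit per frame and a target baseline of 2 units, the frame two steps before the current one is selected.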