| Author: | Liu, Chein-Lin (劉建麟) |
|---|---|
| Thesis title: | 2D Fixed-View Video Synthesis for Crowd Trajectory-Based Movement (基於人群運動軌跡於固定視角之2D影片上的合成技術) |
| Advisor: | Lee, Tong-Yee (李同益) |
| Degree: | Master |
| Department: | Department of Computer Science and Information Engineering |
| Year of publication: | 2017 |
| Academic year of graduation: | 105 (ROC calendar, i.e., 2016–17) |
| Language: | English |
| Number of pages: | 44 |
| Keywords: | video synthesis, image segmentation and saliency, background subtraction, 2D motion trajectory analysis |
Video synthesis has long been an important topic in video processing. In this thesis, our main goal is to detect and extract a variety of human motion trajectories occurring at different times in a fixed-view video, and to re-synthesize them into the same time frame. Previous work of this kind has focused on avoiding collisions between people appearing at the same time during synthesis and on background subtraction techniques for segmenting people, but it has rarely examined how motion trajectories are described, and few synthesis methods consider the temporal trajectory together with the placement position. This thesis instead focuses on classifying the detected human motion trajectories into different events according to differences in their content; during synthesis, the trajectory selected by the user is matched against the detected motion events so as to produce, as closely as possible, the synthesized video the user wants.
Our system consists of two parts: preprocessing and user-defined processing. First, we detect the people in the video and apply background subtraction to the motion-trajectory frames of each segmented person. We then examine the trajectory content of each segmented person and, based on how the trajectory changes, classify and store each person by the change in motion angle. In the user-defined processing stage, the user draws the desired human motion trajectory on the original background; this curve is matched against the previously built event database to find the people that meet the user's needs, and the corresponding video is synthesized.
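As a rough illustration of the preprocessing stage, the sketch below uses Python with OpenCV (4.x) to subtract the background of a fixed-view video and collect per-frame bounding boxes of moving people. The MOG2 subtractor, the morphological cleanup, and the `MIN_AREA` threshold are assumptions made for this sketch, not the thesis's exact pipeline.

```python
# Hedged sketch: background subtraction + per-frame person detection
# on a fixed-view video. Assumes OpenCV 4.x.
import cv2

MIN_AREA = 500  # hypothetical: minimum blob area to count as a person

def extract_person_detections(video_path):
    cap = cv2.VideoCapture(video_path)
    subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=True)
    detections = []  # list of (frame_index, (x, y, w, h))
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)
        # Drop shadow pixels (value 127 in MOG2 masks), keep foreground.
        _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
        # Morphological opening to remove small noise blobs.
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        for c in contours:
            if cv2.contourArea(c) >= MIN_AREA:
                detections.append((frame_idx, cv2.boundingRect(c)))
        frame_idx += 1
    cap.release()
    return detections
```

Linking detections across consecutive frames into per-person trajectories would follow this step; the abstract does not specify the tracker, so it is omitted here.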
In the preprocessing stage, we optimize the person-detection system and the background subtraction technique through a series of experiments. We also run repeated experiments on converting trajectory content into event categories to find a better classification method, and, in the final tests, identify a better way to convert the user-drawn curve into a data representation that can be matched against events. Finally, our system copies the processed person trajectories and pastes them onto the original video background, using image collision detection to avoid implausible situations such as overlapping people during synthesis, so that it can effectively produce an acceptable synthesized video that matches the user's specification, achieving an effect similar to composited crowd footage in films.
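The collision handling described above can be pictured with a simple test: two pasted person images are treated as colliding when their axis-aligned bounding boxes overlap in the same output frame. The greedy search for a collision-free time offset below is a hypothetical scheduling strategy consistent with the abstract, not the thesis's exact algorithm.

```python
# Hedged sketch: bounding-box collision check and a greedy search for a
# frame delay that places a new trajectory without overlapping others.
def boxes_overlap(a, b):
    """a, b are (x, y, w, h) axis-aligned bounding boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def find_collision_free_delay(new_track, placed, max_delay=300):
    """Return the smallest frame delay at which new_track collides with
    nothing already placed, or None if none exists within max_delay.

    new_track: list of (frame_index, box)
    placed:    dict mapping output frame_index -> list of boxes
    """
    for delay in range(max_delay):
        if all(not boxes_overlap(box, other)
               for t, box in new_track
               for other in placed.get(t + delay, [])):
            return delay
    return None
```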
This thesis presents a synthesis method that extracts different human motion trajectories from a 2D fixed-view video to create a video with a new frame ordering. Most video synthesis research tries to summarize a video by condensing all of its content into a short time. However, less work addresses the interaction, timing, and synthesis position of these motions. We propose a scheduling method that separates different motion curves into different events, then searches for and matches the curves the user wants in order to create a new interaction video.
The three basic ideas of this thesis are matching, scheduling, and synthesis. In preprocessing, we introduce an outer-rectangle video image pasting method that makes the synthesis and scheduling process more efficient. We then present motion trajectory matching that converts the motion state into Munsell Color System coordinate values in order to retrieve specific motions. Finally, we paste all motions onto the same background scene and prevent all possible collisions by scheduling the frame order of the motions based on motion trajectory, collision timing, and location. After reordering the motion frames to create a collision-free video, the interaction effects between crowds and the different synthesis positions demonstrate a clear advantage of the proposed method over related methods for interactive video synthesis.
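To make the matching step concrete: the thesis maps motion states to Munsell Color System coordinates, but as a simpler stand-in the sketch below encodes a trajectory as a sequence of quantized step directions and compares two encodings with a mean circular bin distance. The encoding, the distance, and the bin count `BINS` are all illustrative assumptions, not the thesis's actual representation.

```python
# Hedged sketch: quantized-direction signature for a 2D trajectory and a
# circular distance for matching a user-drawn curve against stored events.
import math

BINS = 8  # hypothetical number of direction bins

def angle_signature(points):
    """Convert a 2D polyline [(x, y), ...] into quantized direction codes."""
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        theta = math.atan2(y1 - y0, x1 - x0)  # step direction in (-pi, pi]
        codes.append(int((theta + math.pi) / (2 * math.pi) * BINS) % BINS)
    return codes

def signature_distance(a, b):
    """Mean circular bin distance between two equally resampled signatures."""
    n = min(len(a), len(b))
    if n == 0:
        return float("inf")
    return sum(min(abs(a[i] - b[i]), BINS - abs(a[i] - b[i]))
               for i in range(n)) / n
```

Under these assumptions, a user-drawn curve would be resampled to a fixed number of points, converted with `angle_signature`, and matched to the stored event whose `signature_distance` is smallest.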
Campus access: full text available from 2022-09-01.