
Graduate Student: 林穎稘 (Lin, Ying-Chi)
Thesis Title: Interactive flow animation system design and applications
Advisor: 李同益 (Lee, Tong-Yee)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Publication Year: 2019
Graduation Academic Year: 108
Language: English
Pages: 60
Chinese Keywords: 單張圖片, 交互式設計, 流動畫
Keywords: Single image, Interactive design, Fluid animation

    In this thesis, we introduce an approach that generates an animation from a single still image in a semi-automatic manner. When we see a beautiful landscape or painting, our perception, drawing on past experience, imagines the scene in flowing motion. The proposed method lets the user draw the imagined flow lines on the target image, after which the system animates the still image directly.
    We use a semi-automatic method to segment the regions of interest in the target image into a set of animation blocks; the user then draws flow lines that describe the imagined motion. The system computes flow consistency and source coherence and warps the target image to generate each animation frame. The warp follows a cyclic path along the specified direction and finally returns to the original target image, yielding a looping video texture. An optimization computed between the target image and the warped frames fills the pixels exposed by warping, and all regions of interest are then recomposited into the final animation frames.
    The animation is generated from a single still image, and the result is a looping video texture. Compared with methods that synthesize animation from video examples or a video database, our approach is more controllable, faster, and not constrained by an input video. Moreover, it produces good results even when the target image is a painting or anime-style artwork rather than a natural photograph. We demonstrate the technique on a variety of images and paintings.
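    The cyclic warping described above can be illustrated with a common looping trick: cross-fade two warped copies of the image whose loop phases are offset by one full cycle, so the last frame wraps seamlessly back to the first. The following is a minimal NumPy sketch, not the thesis's actual implementation; the names `warp`, `loop_frames`, and `flow` (a per-frame displacement field in pixels) are illustrative assumptions, and a nearest-neighbor backward warp stands in for the thesis's warping and fill optimization.

```python
import numpy as np

def warp(image, disp):
    """Backward-warp `image` by a per-pixel displacement field `disp`
    (shape H x W x 2, in pixels), using nearest-neighbor sampling."""
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    src_x = np.clip(np.rint(xs - disp[..., 0]).astype(int), 0, w - 1)
    src_y = np.clip(np.rint(ys - disp[..., 1]).astype(int), 0, h - 1)
    return image[src_y, src_x]

def loop_frames(image, flow, n_frames):
    """Generate a looping sequence: blend two warped copies whose phases
    differ by one full cycle, so frame n_frames wraps back to frame 0."""
    frames = []
    for t in range(n_frames):
        a = t / n_frames                                   # loop phase in [0, 1)
        fwd = warp(image, flow * (a * n_frames))           # displaced forward
        back = warp(image, flow * ((a - 1.0) * n_frames))  # displaced backward
        blended = (1.0 - a) * fwd.astype(float) + a * back.astype(float)
        frames.append(blended.astype(image.dtype))
    return frames
```

    The cross-fade hides the seam where the warp would otherwise expose uncovered pixels; the thesis instead fills those pixels explicitly via its warp-filling optimization, which avoids the ghosting that simple blending can introduce.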

    Table of Contents
    摘要 (Abstract in Chinese) I
    Abstract II
    致謝 (Acknowledgements) III
    Table of Contents IV
    List of Tables VI
    List of Figures VII
    Chapter 1 Introduction 1
    Chapter 2 Related Work 5
    Chapter 3 System Overview 18
    Chapter 4 Method 21
      4.1. Mask Design 21
        4.1.1. Mask Definition 21
        4.1.2. Image Matting Mask 22
          4.1.2.1. Expansion of Known Regions 23
          4.1.2.2. Sample Gathering 23
          4.1.2.3. Local Smoothing 25
        4.1.3. Image Content Segmentation 25
      4.2. Animation Driver 29
        4.2.1. Animation Consistency 31
          4.2.1.1. Temporal Coherence 32
          4.2.1.2. Source Coherence 32
        4.2.2. Animation Speed Setting 34
        4.2.3. Cycle Warping 35
        4.2.4. Warp Filling Optimization 37
        4.2.5. Layer Inpainting and Merge 41
    Chapter 5 Experimental Results 44
      5.1. Results 44
      5.2. Comparisons 47
      5.3. Bad Cases 53
      5.4. Discussion 54
    Chapter 6 Conclusion 57
      6.1. Conclusions 57
      6.2. Constraints and Future Works 57
    References 59


    Full-text availability: on campus, released 2024-11-22; off campus, not available.
    The electronic thesis has not been authorized for public release; for the print copy, please consult the library catalog.