| Graduate Student: | Wang, Ming-Jiun (王明俊) |
|---|---|
| Thesis Title: | Motion Estimation for Visual Coding and Signal Processing (移動估計於視訊編碼及信號處理) |
| Advisor: | Lee, Gwo-Giun (李國君) |
| Degree: | Doctoral |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science (電機資訊學院 - 電機工程學系) |
| Year of Publication: | 2013 |
| Graduation Academic Year: | 101 |
| Language: | English |
| Number of Pages: | 120 |
| Keywords (Chinese): | 移動估計、視訊編碼、視覺信號處理、演算法/架構共同設計 |
| Keywords (English): | motion estimation, video coding, visual signal processing, algorithm/architecture co-design |
本文呈現了一個基於演算法暨架構共同設計方法的新穎移動估計,可應用至視訊編碼及視覺訊號處理。在高品質的即時視訊應用中,移動估計扮演著重要的角色,並決定視訊系統單晶片的成本。本文提出之移動估計演算法具有高效率之時空域移動向量預測、改良逐步搜尋法以及多重更新路徑等啟發自最佳化理論之特色。藉由在早期設計階段對於移動估計演算法複雜度的分析,在演算法暨架構共同設計空間裡可以尋得介於效能與成本間合宜取捨之移動估計設計實例,因而形成有效率之超大型積體電路架構,其內部具有儲存參考資料之快取記憶體。實作結果顯示此移動估計應用在H.264編碼標準中,效能不僅勝過最近提出之研究且相當接近全域搜尋法之表現;其相較於其他方法,以超大型積體電路面積而言具有極低之複雜度。應用此具有真實移動特性之移動估計至視訊處理演算法亦獲得傑出之設計。藉由巧妙地運用移動資訊進行視訊內容分析,本文所提出之移動適應性及移動補償解交錯處理得以選擇適合特定場景之頻譜濾波器進行補點,不僅提供較佳處理後之品質,相較其他最前瞻之研究僅需要較少之超大型積體電路閘數。且演算法與架構共同設計方法亦協助探索設計空間以決定處理精細度,並瞭解不同處理模式之間共通性以設計出極有效率之可重組超大型積體電路架構。在另一方面,本文所提出之移動估計亦在二維至立體視訊轉換中協助分析視訊場景,移動向量伴隨矩陣之特徵可對具有各種移動對於深度關係之視訊場景進行分類,自二維視訊之移動向量中估計深度時該移動對於深度關係相當重要。與其他基於移動之二維至立體視訊轉換演算法相較,本文提出之二維至立體視訊轉換演算法能夠提供更合理之深度以合成立體視角。實驗結果顯示本文提出之移動估計可廣泛應用至視訊編碼及視覺信號處理上,並提供較佳定性、定量之效能以及較低之實現成本。
This dissertation presents novel motion estimation applicable to video coding and visual signal processing based on an algorithm/architecture co-design methodology. Motion estimation plays a significant role in high-quality, real-time video applications and determines the cost of a video system-on-a-chip. The introduced motion estimation algorithm is characterized by efficient spatio-temporal motion vector prediction, a modified one-at-a-time search, and multiple update paths inspired by optimization theory. By analyzing algorithmic complexity at the early design stage, the introduced motion estimation locates a desirable design instance in the co-design space with an effective trade-off between performance and complexity, resulting in an efficient VLSI architecture that features internal caches for reference data. Implementation results show that, in H.264/AVC video coding, the introduced motion estimation not only surpasses recently published designs and achieves performance comparable to full search, but also possesses ultralow complexity in terms of silicon area compared with other approaches. Applying the introduced motion estimation, which exhibits true-motion characteristics, to video processing algorithms also yields outstanding designs. By tactically utilizing motion information for video content analysis, the introduced motion-adaptive and motion-compensated deinterlacing algorithms select a spectrum filter appropriate to the specific video scene, render better interpolation quality, and require a lower gate count than state-of-the-art designs. Moreover, the algorithm/architecture co-design methodology helps explore the design space, determine the processing granularity, and study the commonality between different processing modes, resulting in a cost-efficient reconfigurable architecture. In addition, the introduced motion estimation facilitates the analysis of video scenes for 2D-to-3D video conversion: signatures of the motion-vector co-occurrence matrix classify video scenes with various motion-depth relations, which is important when estimating depth from the motion vectors of 2D video. Compared with other motion-based 2D-to-3D conversion algorithms, the introduced algorithm, which incorporates the introduced motion estimation, provides more reasonable depth for 3D view synthesis. Experimental results indicate that the introduced motion estimation can be widely applied to video coding and visual signal processing, achieving better qualitative and quantitative performance at lower cost.
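To make the summarized approach concrete, the following is a minimal Python sketch, under assumptions not taken from the dissertation, of predictive block matching: candidate motion vectors come from spatially and temporally neighbouring blocks, and the best candidate is then refined with a one-at-a-time search. The function name `predictive_oats_me`, the SAD cost, and all parameters are illustrative only; the dissertation's actual design additionally involves multiple update paths, variable block sizes, and the dedicated VLSI architecture described above.

```python
# Hypothetical sketch of predictive block-matching motion estimation with
# spatio-temporal candidate vectors and one-at-a-time refinement.
# This is NOT the dissertation's implementation; names and parameters are illustrative.

import numpy as np


def sad(cur, ref, bx, by, mvx, mvy, bs):
    """Sum of absolute differences between the current block at (bx, by) and the
    reference block displaced by (mvx, mvy); infinite cost if out of bounds."""
    h, w = ref.shape
    rx, ry = bx + mvx, by + mvy
    if rx < 0 or ry < 0 or rx + bs > w or ry + bs > h:
        return np.inf
    c = cur[by:by + bs, bx:bx + bs].astype(np.int32)
    r = ref[ry:ry + bs, rx:rx + bs].astype(np.int32)
    return int(np.abs(c - r).sum())


def predictive_oats_me(cur, ref, bs=16, prev_field=None):
    """Estimate one motion vector per bs x bs block of the current frame.

    Candidates: the zero vector, the left and top spatial neighbours, and the
    co-located vector from the previous motion field (temporal predictor).
    The best candidate is refined by a one-at-a-time search that walks one
    pixel at a time along x, then along y, until no step lowers the SAD.
    """
    h, w = cur.shape
    mv_field = np.zeros((h // bs, w // bs, 2), dtype=np.int32)

    for by_i in range(h // bs):
        for bx_i in range(w // bs):
            bx, by = bx_i * bs, by_i * bs

            # Spatio-temporal candidate set.
            cands = [(0, 0)]
            if bx_i > 0:
                cands.append(tuple(mv_field[by_i, bx_i - 1]))   # left neighbour
            if by_i > 0:
                cands.append(tuple(mv_field[by_i - 1, bx_i]))   # top neighbour
            if prev_field is not None:
                cands.append(tuple(prev_field[by_i, bx_i]))     # temporal predictor

            best = min(cands, key=lambda mv: sad(cur, ref, bx, by, mv[0], mv[1], bs))
            best_cost = sad(cur, ref, bx, by, best[0], best[1], bs)

            # One-at-a-time refinement: x direction first, then y direction.
            for axis in (0, 1):
                improved = True
                while improved:
                    improved = False
                    for step in (-1, 1):
                        trial = list(best)
                        trial[axis] += step
                        cost = sad(cur, ref, bx, by, trial[0], trial[1], bs)
                        if cost < best_cost:
                            best, best_cost = tuple(trial), cost
                            improved = True

            mv_field[by_i, bx_i] = best
    return mv_field
```

For example, `predictive_oats_me(frame_t, frame_prev, prev_field=field_prev)` would return a block-level motion field whose vectors can in turn serve as the temporal predictors when the next frame is processed, which is the general flavour of the recursive spatio-temporal prediction summarized above.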
On-campus full text: publicly released on 2023-12-31.