| Author: | 翁振育 (Weng, Zhen-Yu) |
|---|---|
| Title: | 利用SURF特徵點匹配進行HDR圖片校準 (HDR Alignment with Matching SURF Feature Points) |
| Advisor: | 賴源泰 (Lai, Yen-Tai) |
| Degree: | Master |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2017 |
| Academic Year: | 105 |
| Language: | English |
| Pages: | 50 |
| Keywords (Chinese): | 高動態範圍、對齊、特徵點擷取、音調映射 |
| Keywords (English): | High Dynamic Range, Alignment, Feature Extraction, Tone Mapping |
High dynamic range (HDR) refers to the range between the highest and lowest radiance values in a scene. The human eye is typically sensitive to a scene contrast of about 100,000,000:1, whereas a camera, constrained by its hardware, typically captures only about 1,000:1. To bring the captured image close to, or even equal to, the perceptual dynamic range observed by the human eye, HDR imaging records the information of multiple images taken at different exposures, and tone mapping is then applied so that the pixel values can be reproduced. However, because the multiple shots are taken at different times, compositing them introduces man-made and natural variations between the images. Alignment and other preprocessing steps are therefore performed before the images are composited.

This thesis focuses on aligning multiple images with different dynamic ranges. Speeded Up Robust Features (SURF) is used to extract and match feature points, and the matched points are then used to build an affine matrix that aligns the images. Finally, a high dynamic range result is composited.
High dynamic range (HDR) refers to the range between the maximum and minimum radiance values in a scene. The sensitivity of the human eye is usually about 100,000,000:1, but a camera, limited by its hardware, usually achieves only about 1,000:1. To make captured images approach what the human eye observes, HDR stores the information of multiple images taken with different exposure times. Finally, tone mapping is applied to reconstruct the pixel values of the image. However, because the multiple images are captured at different times, compositing them introduces human-caused and natural variations. Therefore, the images are aligned before they are composited.
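As a rough illustration of this merge-and-tone-map pipeline (a minimal sketch, not the implementation used in the thesis), the snippet below merges three bracketed exposures into a radiance map and tone maps it using OpenCV's standard HDR functions; the file names and exposure times are hypothetical placeholders.

```python
# Minimal sketch: merge bracketed exposures into an HDR radiance map
# and tone map it with OpenCV. File names and exposure times are placeholders.
import cv2
import numpy as np

# Low, middle, and high exposure shots of the same scene (hypothetical files).
paths = ["exposure_low.jpg", "exposure_mid.jpg", "exposure_high.jpg"]
images = [cv2.imread(p) for p in paths]
exposure_times = np.array([1 / 250.0, 1 / 30.0, 1 / 4.0], dtype=np.float32)

# Recover the camera response curve and merge into a radiance map (Debevec-style).
calibrate = cv2.createCalibrateDebevec()
response = calibrate.process(images, exposure_times)
merge = cv2.createMergeDebevec()
hdr = merge.process(images, exposure_times, response)

# Tone map the radiance map back to a displayable 8-bit image.
tonemap = cv2.createTonemap(gamma=2.2)
ldr = tonemap.process(hdr)
ldr_8bit = np.clip(ldr * 255, 0, 255).astype(np.uint8)
cv2.imwrite("hdr_tonemapped.jpg", ldr_8bit)
```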
In this thesis, we discuss the alignment of images with different dynamic ranges. Using Speeded Up Robust Features (SURF), we extract feature points from the images and match them. Next, we build an affine transform from the matched feature points to align the images. Finally, the aligned images are composited into a high dynamic range result.
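The sketch below illustrates this kind of SURF-based alignment under stated assumptions: it uses OpenCV's contrib SURF module, a ratio test on the matches, and RANSAC-based cv2.estimateAffine2D as stand-ins for the thesis's own matching and affine-estimation steps, and the file names are hypothetical placeholders.

```python
# Minimal sketch: match SURF feature points between two exposures and
# estimate an affine transform that warps one onto the other.
# Requires an opencv-contrib build with the non-free SURF module enabled.
import cv2
import numpy as np

ref = cv2.imread("exposure_mid.jpg")    # reference exposure (hypothetical file)
mov = cv2.imread("exposure_high.jpg")   # exposure to be aligned (hypothetical file)
ref_gray = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
mov_gray = cv2.cvtColor(mov, cv2.COLOR_BGR2GRAY)

# Detect SURF keypoints and compute descriptors.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp_ref, des_ref = surf.detectAndCompute(ref_gray, None)
kp_mov, des_mov = surf.detectAndCompute(mov_gray, None)

# Match descriptors and keep only distinctive matches (Lowe's ratio test).
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des_mov, des_ref, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Estimate a 2x3 affine matrix from the matched points,
# with RANSAC rejecting outlier correspondences.
src = np.float32([kp_mov[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_ref[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
A, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)

# Warp the moving exposure into the reference frame before HDR merging.
h, w = ref_gray.shape
aligned = cv2.warpAffine(mov, A, (w, h))
cv2.imwrite("exposure_high_aligned.jpg", aligned)
```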