
Student: Huang, Ju-Peng (黃如鵬)
Title: Realization of Depth Estimation from Monocular Camera Based on Defocus Algorithm and Reverse Heat Equation
Advisor: Liao, Teh-Lu (廖德祿)
Degree: Master
Department: Department of Engineering Science, College of Engineering
Year of publication: 2017
Academic year of graduation: 105
Language: English
Pages: 65
Keywords: Depth from Defocus, Reverse Heat Equation, Local Scale Control

    Over the last few years, unmanned aerial vehicles and unmanned ground vehicles have attracted considerable attention and research because both are frequently used for military and civilian purposes such as surveillance, reconnaissance, and rescue missions. To enable unmanned systems to travel freely without collision, monitoring of the surrounding environment, particularly depth, is essential. The most common sensors used for environmental monitoring are laser, infrared, and ultrasonic sensors. As intelligent robots have advanced, the available sensors have become more intelligent; the Kinect sensor is one example. Although products based on the somatosensory concept are continually released, their applications on mobile devices remain scarce. While intelligent sensors such as the Kinect can obtain depth information with two infrared cameras, they are expensive for mobile applications and would need to be miniaturized, and installing two cameras in a mobile device is impractical. This thesis therefore proposes a depth-estimation method for images from a single monocular camera, making depth acquisition from images more convenient and widely applicable. To obtain depth information from a single image, a defocus algorithm computes depth from the relationship between the blur-circle radius in the image and the scene depth, so the blur radius is the key quantity in the estimation. Because blur-radius estimation alone is not reliable, this thesis further proposes using the reverse heat equation to estimate relative depth, and the positions of the blur radii are rearranged according to that relative depth. The experimental results show that the method proposed in this thesis improves the accuracy of the depth estimation.
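    The two core ideas in the abstract — recovering depth from the blur-circle radius under a thin-lens model, and sharpening via the reverse heat equation — can be sketched as follows. This is a minimal illustration under assumed conditions (the camera parameters `f`, `s`, `D` and the far-side-of-focus assumption are hypothetical), not the thesis's actual implementation:

    ```python
    import numpy as np

    def depth_from_blur(r, f, s, D):
        """Thin-lens depth from defocus.

        r: blur-circle radius on the sensor, f: focal length,
        s: lens-to-sensor distance, D: aperture diameter (all in metres).
        Assumes the object lies beyond the in-focus plane, in which case
        1/u = 1/f - 1/s - 2r/(D*s) follows from the thin-lens equation.
        """
        return 1.0 / (1.0 / f - 1.0 / s - 2.0 * r / (D * s))

    def reverse_heat_step(img, tau=0.1):
        """One explicit step of the reverse heat equation u_t = -laplacian(u).

        Running diffusion backwards sharpens edges but is numerically
        unstable, so tau must be small and only a few iterations are taken.
        """
        p = np.pad(img, 1, mode="edge")                    # replicate borders
        lap = (p[:-2, 1:-1] + p[2:, 1:-1]                  # 5-point Laplacian
               + p[1:-1, :-2] + p[1:-1, 2:] - 4.0 * img)
        return img - tau * lap

    # With zero blur the formula returns the in-focus distance: a 50 mm lens
    # with the sensor at 52 mm has its focal plane at 1.3 m.
    print(depth_from_blur(0.0, 0.05, 0.052, 0.025))
    ```

    In the thesis, the blur radius fed into this kind of model is estimated per edge via local scale control before the defocus relationship is applied, and the reverse-heat step is only one ingredient of the relative-depth correction.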

    Contents
    Abstract (Chinese) I
    Abstract II
    Acknowledgements IV
    Contents V
    List of Figures VIII
    CHAPTER 1 INTRODUCTION 1
      1.1 Background 1
      1.2 Motivation 2
      1.3 Thesis Organization 2
    CHAPTER 2 DEPTH ESTIMATION METHODS 4
      2.1 Binocular Depth Estimation 4
      2.2 Monocular Depth Estimation 8
        2.2.1 Shape from Shading 8
        2.2.2 Vanishing Point 9
        2.2.3 Depth from Focus 10
    CHAPTER 3 FUNDAMENTAL KNOWLEDGE 12
      3.1 Imaging Principle 12
        3.1.1 Convex Lens Imaging 12
        3.1.2 Point Spread Function 15
        3.1.3 Blurred Radius of Circle 18
      3.2 Depth from Defocus Algorithm 19
    CHAPTER 4 METHOD OF RADIUS ESTIMATION AND CORRECTION 24
      4.1 Local Scale Control for Edge Detection and Blur Estimation 25
      4.2 Relative Depth Estimation 32
        4.2.1 The Heat Equation Algorithm 32
        4.2.2 The Reverse Heat Equation Algorithm 36
        4.2.3 The Pseudo Reverse Heat Equation Algorithm 38
      4.3 Local Scale Correction 42
    CHAPTER 5 EXPERIMENTAL RESULTS 44
      5.1 Implementation and Verification 44
      5.2 System Design and Architecture 46
        5.2.1 Operation Display User Interface 49
        5.2.2 Result of Static Objects 50
        5.2.3 Result of Dynamic Objects 54
      5.3 Implementation in Embedded Systems 56
        5.3.1 Introduction of Jetson TX1 56
        5.3.2 GPU Implementation 57
    CHAPTER 6 CONCLUSION AND FUTURE WORK 60
      6.1 Conclusion 60
      6.2 Future Work 61
    REFERENCES 62


    Full-text availability: on campus, available from 2020-07-06; off campus, not available.
    The electronic thesis has not been authorized for public release; for the print copy, consult the library catalog.