| Graduate Student: | 廖登峰 Liao, Teng-Feng |
|---|---|
| Thesis Title: | 應用於物體重建的多台深度相機校正與融合演算法 (Multiple Depth Cameras Calibration and Fusion Algorithm for Object Reconstruction) |
| Advisor: | 謝明得 Shieh, Ming-Der |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Department of Electrical Engineering |
| Publication Year: | 2019 |
| Academic Year: | 107 |
| Language: | Chinese |
| Pages: | 56 |
| Keywords (Chinese): | 多台深度相機, 相機校正, 3D重建 |
| Keywords (English): | Multiple depth cameras, Camera calibration, 3D reconstruction |
This thesis proposes a calibration and modeling procedure for multiple depth cameras. The calibration procedure accurately estimates the parameters of each depth camera, after which a fusion algorithm merges the measurements into a single model that is seamless from every viewing angle.
Inspired by traditional calibration with known patterns such as checkerboards, this thesis uses a known three-dimensional (3D) model as the reference geometry for calibration. The reference geometry is first segmented out of the measured images; the Iterative Closest Point (ICP) algorithm then computes the coordinate transformation between the measured and actual values, together with the correspondence of all points, and coordinate descent is used to optimize the camera parameters. In addition, because of the camera projection, the density of measured points falls off quadratically with distance along the camera axis, so closer points exert a larger influence on the optimization; a clustering refinement along the axial direction is therefore added during calibration to avoid over-fitting.
To handle the random error of depth cameras, this thesis proposes a two-step reconstruction method that fits the measured surface with two different implicit functions, eliminating high-frequency noise and outliers. Furthermore, because the distortion of a depth camera depends on the geometry and surface material of the captured object, the experiments show that different reference geometries yield different calibration results, and objects more similar to the reference geometry are reconstructed more faithfully. The proposed method allows the shape of the reference geometry to be changed to suit the application, and achieves better calibration and reconstruction results than conventional methods.
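The ICP alignment at the heart of the calibration flow can be sketched in two dimensions. This toy version (all names illustrative; the thesis operates on 3-D point clouds and also refines camera intrinsics, which this sketch omits) alternates nearest-neighbour matching with a closed-form rigid fit:

```python
# Minimal 2-D ICP sketch, assuming a small initial misalignment so that
# nearest-neighbour matching finds the correct correspondences.
import math

def icp_2d(src, dst, iters=20):
    """Rigidly align point set `src` to `dst` by alternating
    nearest-neighbour matching and a closed-form rotation/translation fit."""
    pts = list(src)
    for _ in range(iters):
        # 1. match each source point to its nearest destination point
        matches = [min(dst, key=lambda q: (q[0]-p[0])**2 + (q[1]-p[1])**2)
                   for p in pts]
        # 2. closed-form least-squares rigid transform (2-D Procrustes)
        n = len(pts)
        cx = sum(p[0] for p in pts) / n; cy = sum(p[1] for p in pts) / n
        dx = sum(q[0] for q in matches) / n; dy = sum(q[1] for q in matches) / n
        sxx = sum((p[0]-cx)*(q[0]-dx) + (p[1]-cy)*(q[1]-dy)
                  for p, q in zip(pts, matches))
        sxy = sum((p[0]-cx)*(q[1]-dy) - (p[1]-cy)*(q[0]-dx)
                  for p, q in zip(pts, matches))
        th = math.atan2(sxy, sxx)
        c, s = math.cos(th), math.sin(th)
        # 3. rotate about the source centroid, then translate onto the target
        pts = [(c*(p[0]-cx) - s*(p[1]-cy) + dx,
                s*(p[0]-cx) + c*(p[1]-cy) + dy) for p in pts]
    return pts
```

With exact, noise-free correspondences the closed-form step recovers the transform in one iteration; in practice the loop repeats because the matching improves as the alignment does.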
This thesis presents an efficient calibration flow for multi-depth-camera systems together with an image fusion process for three-dimensional (3-D) object reconstruction. With the proposed three-step calibration scheme, the intrinsic and extrinsic parameters of multiple depth cameras can be accurately estimated. The images captured by the system can then be fused into a seamless 3-D model using the presented object reconstruction process.
Inspired by the traditional calibration scheme using known patterns such as the checkerboard, we adopted a given 3D model as the reference geometry to refine camera parameters. First, by segmenting the images according to the reference geometry and then applying the iterative closest point (ICP) algorithm, we can obtain the transformation and matching relationship between the measured point set and the actual values in the reference model. The intrinsic and extrinsic parameters of the cameras can be further optimized by applying the coordinate descent algorithm. Moreover, since the density of the measured points decreases quadratically with the distance along the camera axis, the closer points have a more pronounced impact on the optimization process than the farther ones. Thus, the axial effect is considered by introducing a clustering refinement during calibration to avoid the over-fitting problem.
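The axial clustering idea can be illustrated with a toy residual average (the function name and binning scheme below are illustrative assumptions, not the thesis's exact formulation): points are grouped into bins along the camera axis and each bin contributes equally to the objective, so the quadratically denser near-range points no longer dominate:

```python
# Hypothetical sketch of axial (z-direction) binning for a calibration
# objective; `binned_residual` and its parameters are illustrative names.
def binned_residual(points, residuals, z_min, z_max, n_bins):
    """Average residuals inside each axial (z) bin, then average the bin
    means, so every depth range contributes equally regardless of how
    many points fall into it."""
    sums = [0.0] * n_bins
    counts = [0] * n_bins
    width = (z_max - z_min) / n_bins
    for (x, y, z), r in zip(points, residuals):
        b = min(int((z - z_min) / width), n_bins - 1)  # clamp top edge
        sums[b] += r
        counts[b] += 1
    bin_means = [s / c for s, c in zip(sums, counts) if c > 0]
    return sum(bin_means) / len(bin_means)
```

For example, nine near points with residual 1.0 and one far point with residual 3.0 give a naive mean of 1.2, but a two-bin axial average of 2.0, so the sparse far range still steers the optimization.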
To mitigate the random error of depth cameras, this thesis adopts a two-step reconstruction method that fits the measured surface with two different implicit functions, eliminating high-frequency noise and outliers. Furthermore, we observed that depth camera distortion is related to the characteristics of the object itself, and experimental results also reveal that the reconstructed results improve as the topology of the reference geometry approaches that of the target objects. The proposed schemes can be adapted to different application requirements by altering the reference geometry, and achieve better calibration and reconstruction results than the conventional approaches.
On-campus access: publicly available from 2024-07-24.