
Graduate Student: 皮安卓 (Pyronnet, Rémi)
Thesis Title: 三維室內點雲邊界最佳化及網路重建
Edge-Preserving Optimisation and Meshing of Indoor Environment Point Cloud
Advisor: 譚俊豪 (Tarn, Jiun-Haur)
Degree: Master
Department: Department of Aeronautics & Astronautics, College of Engineering
Year of Publication: 2019
Graduation Academic Year: 107 (ROC calendar)
Language: English
Number of Pages: 37
Keywords (English): point cloud, edge detection, edge reconstruction, point reduction, meshing
Abstract: This Master's thesis focuses on the processing of indoor point clouds, from their
registration with an RGB-D camera to the color meshing of their optimised version.
On the images coming from the camera, keypoints are robustly localised with the
scale-invariant feature transform (SIFT) algorithm and encoded through the bag-of-words
model. These features are then compared in order to merge the successive images and the
corresponding point clouds, as well as to perform loop closure.
The resulting point cloud is separated into two categories. The edge points are cleaned
and reinforced so that they are preserved throughout the project. The remaining surface
points are reduced in number, again with a focus on edge preservation.
The optimised point cloud is meshed with the ball pivoting algorithm, using two different
ball radii for the differing edge and surface point densities.
The output of this project is the optimised color mesh of a previously registered point
cloud. Illustrative sketches of the keypoint matching, the edge-preserving optimisation,
and the meshing stages follow below.
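
A minimal sketch of the keypoint stage, assuming OpenCV in Python rather than RTAB-Map's
internal pipeline; the frame file names, the 0.75 ratio-test threshold, and the 200-word
vocabulary size are illustrative assumptions, not values taken from the thesis:

    import cv2

    # Detect SIFT keypoints and descriptors on two successive RGB frames
    # (file names are placeholders).
    sift = cv2.SIFT_create()
    img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
    kp1, des1 = sift.detectAndCompute(img1, None)
    kp2, des2 = sift.detectAndCompute(img2, None)

    # Match descriptors between the frames; Lowe's ratio test keeps only
    # distinctive correspondences, which can then drive the transform that
    # merges the corresponding point clouds.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    # A bag-of-visual-words vocabulary clusters the descriptors so that
    # whole frames can be compared compactly, which is what loop-closure
    # detection relies on.
    bow_trainer = cv2.BOWKMeansTrainer(200)
    bow_trainer.add(des1)
    bow_trainer.add(des2)
    vocabulary = bow_trainer.cluster()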
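
A sketch of the edge-preserving optimisation, assuming the registered cloud is handled
with Open3D and that edge points are picked out by a local covariance ("surface
variation") test; the 0.05 threshold, the 30-point neighbourhood, and the 3 cm voxel
size are assumptions for illustration, not the thesis parameters:

    import numpy as np
    import open3d as o3d

    # Load the registered point cloud (file name is a placeholder).
    pcd = o3d.io.read_point_cloud("indoor_scan.ply")
    pts = np.asarray(pcd.points)
    tree = o3d.geometry.KDTreeFlann(pcd)

    # Surface variation of each point: smallest eigenvalue of the local
    # covariance divided by the eigenvalue sum. Flat areas score near 0,
    # sharp edges and corners score higher.
    k = 30
    variation = np.empty(len(pts))
    for i in range(len(pts)):
        _, idx, _ = tree.search_knn_vector_3d(pts[i], k)
        neighbours = pts[np.asarray(idx)]
        eigvals = np.sort(np.linalg.eigvalsh(np.cov(neighbours.T)))
        variation[i] = eigvals[0] / eigvals.sum()

    # Split into edge and surface points, then downsample only the surface
    # part so the edges keep their full density.
    edge_idx = np.where(variation > 0.05)[0]
    surf_idx = np.where(variation <= 0.05)[0]
    edge_pcd = pcd.select_by_index(edge_idx.tolist())
    surf_pcd = pcd.select_by_index(surf_idx.tolist()).voxel_down_sample(0.03)
    optimised = edge_pcd + surf_pcd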
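
The meshing stage, sketched with Open3D's ball pivoting implementation, which is not
necessarily the toolchain used in the thesis; the two radii (2 cm for the denser edge
regions, 6 cm for the sparser surface points) and the 30-neighbour normal estimation
are illustrative assumptions:

    import open3d as o3d

    # Ball pivoting needs oriented normals; estimate them from local
    # neighbourhoods of the optimised cloud (file name is a placeholder).
    optimised = o3d.io.read_point_cloud("optimised_scan.ply")
    optimised.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamKNN(knn=30))

    # Two ball radii: a small one that resolves the dense, reinforced edge
    # regions and a larger one that bridges the sparser surface points.
    radii = o3d.utility.DoubleVector([0.02, 0.06])
    mesh = o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
        optimised, radii)
    o3d.io.write_triangle_mesh("optimised_mesh.ply", mesh)

Passing several radii makes the algorithm pivot with the smallest ball first and fall
back to the larger one where the point spacing is wider, which is the role the two
point densities play here.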

Table of Contents:
Declaration of Authorship ii
Abstract iii
Acknowledgements iv
1 Introduction 1
    1.1 Project Introduction 1
    1.2 Related Work 1
2 Equipment and Software 3
    2.1 RGB-D Camera 3
        2.1.1 RGB-D Camera Components 3
        2.1.2 Operation Principle 4
    2.2 Computer Hardware and Software 7
        2.2.1 Hardware 7
        2.2.2 Software 7
3 RTAB-Map: Creation of a Three Dimensional Point Cloud 9
    3.1 SIFT Detection Algorithm 10
        3.1.1 Difference of Gaussian 10
        3.1.2 Extrema Detection 12
        3.1.3 Keypoint Localization 12
        3.1.4 Elimination of Edge Keypoints 13
        3.1.5 Orientation Assignment 14
        3.1.6 Local Image Descriptor 15
    3.2 Image Merging 16
        3.2.1 Bag of Visual Words Representation 16
        3.2.2 Keypoint Matching 16
        3.2.3 Loop Closure 17
        3.2.4 3D Point Cloud Generation 18
4 Edge-Preserving Optimisation of the Point Cloud 19
    4.1 Edge Detection 20
        4.1.1 Selection of Relevant Points for the Comparison 20
        4.1.2 Edge Point Detection 21
    4.2 Edge Reconstruction 23
        4.2.1 Edge Point Reduction 23
        4.2.2 Edge Reinforcement 24
    4.3 Surface Optimisation 26
        4.3.1 Surface Point Reinforcement around Edges 26
        4.3.2 Surface Point Reduction 27
    4.4 Color Attribution 27
        4.4.1 Point Color Attribution 27
        4.4.2 Point Normal Vector Attribution 28
5 Ball Pivoting Meshing 30
    5.1 Algorithm Explanation 30
        5.1.1 Region-Growing Technique 30
        5.1.2 Ball Pivoting Algorithm 31
    5.2 Integration into the Project 32
        5.2.1 Application to the Project 32
        5.2.2 Results 33
Conclusion 35
Bibliography 36

Full-text availability: on campus: open access; off campus: not available.
The electronic thesis has not yet been authorized for public release; please consult the library catalog for the print copy.