| Graduate Student: | 林偉宏 Lim, Wei Hong |
|---|---|
| Thesis Title: | 基於雙目立體視覺與影像處理的高精度 3D 量測系統開發 (Development of High Precision 3D Measurement System Based on Binocular Stereo Vision and Image Processing) |
| Advisor: | 劉建聖 Liu, Chien-Sheng |
| Degree: | Master |
| Department: | College of Engineering - Department of Mechanical Engineering |
| Year of Publication: | 2023 |
| Academic Year of Graduation: | 111 |
| Language: | Chinese |
| Number of Pages: | 100 |
| Keywords (Chinese): | 雙目立體視覺、高精度3D量測、定位誤差、影像處理、三維掃描 |
| Keywords (English): | Binocular stereo vision, High-precision 3D measurement, Localization error, Image processing, 3D scanning |
In recent years, advances in automation and intelligent technology have made the transition of factories to automated production a clear trend. Nevertheless, many production processes remain difficult for machines to take over, so high-precision measurement and compensation techniques have become increasingly important: they reduce errors, improve product quality and production efficiency, and at the same time drive the transition to automation and strengthen industrial competitiveness. Manufacturing and assembly errors left on workpieces are another major challenge and one of the reasons why automating such systems is difficult. Industry today demands not only higher throughput but also ever-higher product quality; the automotive and motorcycle industries, for example, impose increasingly strict requirements on the precision and strength of component machining and assembly. Measuring and compensating for these errors reduces them, improves accuracy, and thereby enables automated production and machining.
To address this, this study presents a binocular stereo vision system that can measure the three-dimensional coordinates of all feature points on a workpiece at once. The system combines image processing techniques with an edge-enhanced feature algorithm to locate the centers of the workpiece features, and then reconstructs the three-dimensional coordinates of these feature centers to obtain the relative 3D coordinates of the circular features to be machined. The system measures at a distance of 215 mm from the workpiece, covers a measurement range of 450 mm × 200 mm, and achieves an accuracy within 0.1 mm and a repeatability within 0.03 mm. The binocular vision approach effectively avoids the influence of reflections from the workpiece while preserving scanning accuracy and speed, making it suitable for measurements where the workpiece cannot be touched directly. The reconstructed 3D coordinates are transformed into the world coordinate frame of the robot arm, which resolves the machining problems and the error accumulation introduced during workpiece manufacturing, and effectively improves the machining accuracy of the robot arm.
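As a rough illustration of the measurement step described above, the following is a minimal sketch, not the thesis's actual algorithm, of how a circular feature center could be located in each view and triangulated into a 3D point with OpenCV. The projection-matrix files, image names, thresholds, and the "largest contour is the feature" heuristic are all illustrative assumptions.

```python
# Minimal sketch (not the thesis's exact pipeline): locate a circular feature
# centre in each view and triangulate it into a 3D point with OpenCV.
import cv2
import numpy as np

def feature_center(gray):
    """Estimate the centre of the dominant circular feature in a grayscale image."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)                      # edge-enhancement step
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)             # assume largest = feature
    (cx, cy), _, _ = cv2.fitEllipse(contour)                 # ellipse fit -> sub-pixel centre
    return np.array([[cx], [cy]], dtype=np.float64)          # 2x1 point for triangulation

def triangulate(P_left, P_right, pt_left, pt_right):
    """Reconstruct one 3D point from a matched pair of 2x1 image points."""
    pts4d = cv2.triangulatePoints(P_left, P_right, pt_left, pt_right)
    return (pts4d[:3] / pts4d[3]).ravel()                    # homogeneous -> Euclidean

# 3x4 projection matrices from a prior stereo calibration (hypothetical files).
P_left = np.load("P_left.npy")
P_right = np.load("P_right.npy")
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)          # hypothetical image pair
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)
X = triangulate(P_left, P_right, feature_center(left), feature_center(right))
print("feature centre in the left-camera frame (mm):", X)
```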
In recent years, automation and intelligent technology have been increasingly applied in factories, leading to a trend toward automated production. However, machines still cannot replace humans in many production processes, so high-precision measurement and compensation technologies are increasingly important for reducing errors and improving product quality. One major challenge in manufacturing and assembly is the errors left on workpieces, which make automation difficult. Industry now demands not only higher production capacity but also greater precision and strength in component machining and assembly. Measuring and compensating for these errors makes automated production and machining achievable.
To address this challenge, this study proposes a binocular stereo vision system that can simultaneously measure the 3D coordinates of all feature points on a workpiece. The system combines image processing techniques with an edge-enhancement feature algorithm to locate the workpiece features and reconstruct the three-dimensional coordinates of the feature centers. The system measures at 215 mm from the workpiece, with a measurement range of 450 mm × 200 mm, an accuracy within 0.1 mm, and a repeatability within 0.03 mm. The method is suitable for non-contact scanning measurement and avoids the influence of reflections from the workpiece. The proposed system effectively improves the accuracy of the robot arm's machining processes and solves the error accumulation problem in the manufacturing process.
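The step of expressing camera-frame measurements in the robot arm's world frame can be sketched as a standard SVD-based rigid-body fit between corresponding points. The sketch below is a generic illustration under that assumption, not necessarily the thesis's exact procedure, and all point values are made up for the example.

```python
# Minimal sketch, assuming a few reference points are known in both the camera
# frame and the robot's world frame: fit R, t by the SVD-based least-squares
# method, then map a reconstructed feature centre into robot coordinates.
import numpy as np

def rigid_transform(src, dst):
    """Return R (3x3), t (3,) such that dst ≈ R @ src + t for Nx3 point sets."""
    src_mean, dst_mean = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_mean).T @ (dst - dst_mean)                # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                                 # guard against reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_mean - R @ src_mean
    return R, t

# Reference points measured by the stereo system (camera frame, mm) ...
cam_pts = np.array([[0.0,   0.0, 215.0],
                    [120.0, 0.0, 218.0],
                    [0.0,  60.0, 212.0],
                    [120.0, 60.0, 230.0]])
# ... and the corresponding positions taught on the robot (world frame, mm),
# here generated from a 90-degree rotation about z plus a translation.
robot_pts = np.array([[300.0, 120.0, 135.0],
                      [300.0, 240.0, 138.0],
                      [240.0, 120.0, 132.0],
                      [240.0, 240.0, 150.0]])

R, t = rigid_transform(cam_pts, robot_pts)
feature_cam = np.array([45.2, 12.7, 217.4])                  # a reconstructed centre
print("feature centre in robot coordinates:", R @ feature_cam + t)
# expected roughly [287.3, 165.2, 137.4] for these illustrative values
```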
On-campus access: available from 2028-08-07.