
Graduate student: 陳舒哲 (Chen, Shu-Zhe)
Thesis title: 基於雙目立體視覺與影像處理的水車水尾量測系統開發
(Development of Water-Wheel Tail Measurement System Based on Binocular Stereo Vision and Image Processing)
Advisor: 劉建聖 (Liu, Chien-Sheng)
Degree: Master
Department: College of Engineering - Department of Mechanical Engineering
Year of publication: 2024
Academic year of graduation: 112
Language: Chinese
Pages: 137
Chinese keywords: 雙目立體視覺, 影像處理, 水車水尾, 目標檢測, 水產養殖
English keywords: Binocular stereo vision, Image processing, Water-wheel tail, Object detection, Aquaculture
Abstract (Chinese, translated):
    Taiwan's aquaculture industry faces multiple challenges and urgently needs to transform and upgrade. Smart aquaculture has become the key to steering the industry toward green, efficient, and sustainable development. With advances in machine vision, stereo vision technology plays an increasingly important role in aquaculture thanks to its unique advantages. Traditional water quality monitoring methods are accurate but complex to operate and costly, making them difficult to adopt for small-scale fish farmers. Experienced farmers judge water quality by observing changes in the foam produced by paddle-wheel aerators, but this practice lacks a systematic method and quantitative standards. Developing a convenient and accurate water quality monitoring method is therefore an urgent task.
    To address this problem, this study developed a system based on binocular stereo vision that detects the tail end of the spray produced by a paddle-wheel aerator and measures its length. The system applies a series of image processing methods, including an improved bimodal thresholding method to efficiently separate the water surface from the foam, and analyzes candidate points with four texture features derived from the gray-level co-occurrence matrix to accurately locate the 2-D image coordinates of the water-wheel tail endpoint (the break point). Binocular stereo vision then recovers the corresponding 3-D coordinates, from which the tail length is measured precisely. The system measured the water-wheel tail length reliably in fish ponds under several different environmental conditions, which thoroughly validates its feasibility and broad applicability. Accuracy analysis shows that, within the 5-10 m measurement range (the typical tail length range), the measurement error does not exceed 7 cm, corresponding to an error percentage below 0.69%. Applying this system enables real-time monitoring of aquaculture ponds, makes water quality management more scientific and precise, and provides strong support for the modernization and intelligent management of the aquaculture industry.
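The improved bimodal thresholding step described above can be illustrated with a minimal sketch of the classic two-peak method (plain NumPy; the smoothing schedule and function names are illustrative assumptions, not the thesis' actual implementation):

```python
import numpy as np

def bimodal_threshold(image, max_iter=1000):
    """Classic two-peak thresholding: smooth the 256-bin grey-level
    histogram until exactly two local maxima remain, then place the
    threshold at the valley between them."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    for _ in range(max_iter):
        peaks = [i for i in range(1, 255)
                 if hist[i - 1] < hist[i] > hist[i + 1]]
        if len(peaks) == 2:
            lo, hi = peaks
            return lo + int(np.argmin(hist[lo:hi + 1]))  # valley bin
        # 3-tap mean filter smooths away spurious local maxima
        hist = np.convolve(hist, np.ones(3) / 3.0, mode="same")
    raise RuntimeError("histogram did not become bimodal")

# Foam is brighter than open water, so a binary foam mask would be:
#   mask = image > bimodal_threshold(image)
```

The repeated smoothing is what makes the method robust on noisy pond images: spurious histogram bumps disappear long before the water and foam modes merge.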

Abstract (English):
    The aquaculture industry in Taiwan faces challenges that necessitate a transformation toward green, efficient, and sustainable development. Traditional water quality monitoring methods are accurate but costly and complex, making them impractical for small-scale farmers, who often rely instead on subjective observation of the foam produced by paddle-wheel aerators. To address this issue, a system based on binocular stereo vision was developed to measure the length of the water-wheel tail.
    This study developed a detection and measurement system for the water-wheel tail length based on a binocular stereo vision system. The system employs a series of image processing techniques, including an improved bimodal thresholding method that efficiently separates the water surface from the foam. It further analyzes candidate points using four texture features derived from the gray-level co-occurrence matrix (GLCM), enabling precise detection of the endpoint of the water-wheel tail in two-dimensional images and determination of its coordinates. Binocular stereo vision then recovers the corresponding three-dimensional coordinates, from which the tail length is measured accurately. The system was tested under diverse environmental conditions in fish ponds and measured the water-wheel tail length reliably, validating its feasibility and broad applicability. Accuracy assessments indicate that, within the 5-10 m measurement range, the measurement error stays below 7 cm, an error percentage under 0.69%. The system enables real-time monitoring of aquaculture ponds, supports more scientific and precise water quality management, and provides a foundation for modernizing and enabling intelligent management in the aquaculture industry.
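The four GLCM texture features used to screen candidate endpoints are the standard Haralick set: contrast, correlation, energy, and homogeneity. A minimal sketch follows; the quantisation to 8 grey levels and the single horizontal offset are simplifying assumptions for illustration, not the thesis' exact configuration:

```python
import numpy as np

def glcm_features(patch, levels=8):
    """GLCM for the horizontal offset (0, 1) plus four Haralick features."""
    q = np.clip(patch.astype(int) * levels // 256, 0, levels - 1)
    glcm = np.zeros((levels, levels))
    # count co-occurrences of each pixel with its right-hand neighbour
    np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    p = glcm / glcm.sum()                      # normalise to probabilities

    i, j = np.indices((levels, levels))
    mu_i, mu_j = (i * p).sum(), (j * p).sum()
    sd_i = np.sqrt((p * (i - mu_i) ** 2).sum())
    sd_j = np.sqrt((p * (j - mu_j) ** 2).sum())
    return {
        "contrast": (p * (i - j) ** 2).sum(),
        "correlation": (p * (i - mu_i) * (j - mu_j)).sum()
                       / (sd_i * sd_j + 1e-12),
        "energy": (p ** 2).sum(),
        "homogeneity": (p / (1.0 + np.abs(i - j))).sum(),
    }
```

Smooth open water yields high energy and homogeneity with near-zero contrast, while foam texture raises contrast; comparing these features in a window around each candidate point is one way such a discriminator can separate the true tail endpoint from false candidates.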

    Table of contents (translated):
    Abstract (Chinese) / Abstract (English) / Acknowledgements / Table of contents / List of tables / List of figures
    Chapter 1 Introduction
        1-1 Research background
        1-2 Research motivation and objectives
        1-3 Thesis structure
    Chapter 2 Literature review
        2-1 Computer vision
            2-1-1 Development of computer vision
            2-1-2 Computer vision for foam detection and aquaculture
        2-2 Binocular stereo vision
            2-2-1 Research on binocular stereo vision
            2-2-2 Comparison of monocular and binocular vision
            2-2-3 Binocular vision measurement in wide-field scenes
    Chapter 3 Fundamental theory
        3-1 Principles of binocular stereo vision
        3-2 Principles of 3-D reconstruction
            3-2-1 Camera model and geometric relations
            3-2-2 Camera distortion model
            3-2-3 Camera calibration principles
            3-2-4 Stereo rectification of the binocular camera
            3-2-5 Stereo matching algorithms
            3-2-6 3-D reconstruction
        3-3 Image processing
            3-3-1 Grayscale conversion
            3-3-2 Binarization
            3-3-3 Gaussian weighting
            3-3-4 Morphology
            3-3-5 Texture feature extraction
    Chapter 4 System architecture and image processing methods
        4-1 System architecture
        4-2 Experimental equipment
        4-3 Experimental procedure
            4-3-1 System adjustment
            4-3-2 Binocular camera calibration and stereo rectification
            4-3-3 Image processing
            4-3-4 3-D reconstruction and measurement
    Chapter 5 Experimental results and discussion
        5-1 Experimental system calibration
            5-1-1 Mean reprojection error
            5-1-2 Spatial distances of 3-D points generated from the calibration board
        5-2 Image processing results
            5-2-1 Verification of grayscale conversion and binarization
            5-2-2 Obtaining candidate points for the water-wheel tail endpoint
            5-2-3 Verification of best-endpoint discrimination by the gray-level co-occurrence matrix
        5-3 Measurements and results
            5-3-1 Accuracy analysis of system measurements
            5-3-2 Accuracy analysis of system image processing
        5-4 Problems and discussion
    Chapter 6 Conclusions and future work
    References
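As a companion to the 3-D reconstruction and measurement steps outlined above, the following sketch shows how a rectified binocular pair converts a pixel correspondence into a 3-D point and how a length follows from two such points. The intrinsics f, B, cx, cy and the pixel coordinates are hypothetical values for illustration, not the thesis' calibration results:

```python
import numpy as np

def triangulate(uv_left, uv_right, f, baseline, cx, cy):
    """Back-project one rectified stereo correspondence to 3-D.

    For a rectified pair, depth follows from the horizontal
    disparity d = u_left - u_right:
        Z = f * B / d,  X = (u - cx) * Z / f,  Y = (v - cy) * Z / f
    """
    (u_l, v_l), (u_r, _) = uv_left, uv_right
    d = u_l - u_r                      # disparity in pixels
    z = f * baseline / d
    return np.array([(u_l - cx) * z / f, (v_l - cy) * z / f, z])

# Hypothetical intrinsics: 1000 px focal length, 0.5 m baseline,
# principal point at the image origin for simplicity.
f, B, cx, cy = 1000.0, 0.5, 0.0, 0.0
start = triangulate((100.0, 0.0), (50.0, 0.0), f, B, cx, cy)    # near the wheel
end = triangulate((-166.67, 41.67), (-208.33, 41.67), f, B, cx, cy)  # tail endpoint
tail_length = float(np.linalg.norm(end - start))                # metres
```

With these numbers the first point reconstructs to (1, 0, 10) m exactly, which is an easy sanity check on any rectified setup: a larger disparity always means a closer point.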

