| Graduate Student: | 陳歆婷 Chen, Xin-Ting |
|---|---|
| Thesis Title: | 基於影像投影變換與目標檢測的水車水尾量測系統開發 Development of Water-Wheel Tail Measurement System Based on Image Projective Transformation and Object Detection |
| Advisor: | 劉建聖 Liu, Chien-Sheng |
| Degree: | Master |
| Department: | 工學院 College of Engineering - 機械工程學系 Department of Mechanical Engineering |
| Publication Year: | 2023 |
| Graduation Academic Year: | 111 |
| Language: | Chinese |
| Pages: | 106 |
| Chinese Keywords: | 水車水尾, 影像校正, 投影變換, 目標檢測, 水產養殖 |
| English Keywords: | Water-wheel tail, image calibration, projective transformation, object detection, aquaculture |
Fishery is a vital part of Taiwan's economy: over the past decade its annual production value has exceeded NT$60 billion, and more than 40% of fishery products come from aquaculture. Traditional aquaculture has long relied on the experience fishermen accumulate over many years: by visually observing how the length of the water-wheel tail (hereafter, the tail) changes over time, they judge the water quality of a fish pond. However, traditional aquaculture now faces an aging population, a shortage of young labor, and the difficulty of passing this experience on. Aquaculture therefore needs technology that regulates water quality precisely and efficiently, reducing losses caused by human misjudgment. By using image processing and object recognition to measure and record the tail length more precisely, while preserving and passing on the experience of older fishermen, pond water quality can ultimately be assessed in real time from images, improving the quality and yield of aquaculture products and fostering a low-cost, high-efficiency smart monitoring model for aquaculture.
This study therefore proposes a water-wheel tail length measurement system. The input image is rectified by projective transformation to obtain the transformed coordinates of the target in the image, and known conditions such as the length of the water-wheel base are then used to deduce the actual tail length from proportional relationships. Verification with two calibration boards of different specifications shows that the system's mean absolute percentage calibration error is at most 1.19% and as low as 0.19%, and the standard deviation of the measured distance between the two corner points on the long edge of the checkerboard is at most 0.1891 m and as low as 0.0207 m. Data augmentation is used to increase the quantity and diversity of the dataset, and a YOLO (You Only Look Once) v8 deep learning model is trained to recognize water-wheel tail features; the final model reaches a maximum mAP50 of 0.99013 and a maximum mAP50-95 of 0.885.
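As a rough sketch of the measurement step described above, the following Python example uses OpenCV to estimate a projective transformation from four calibration-board corners and then convert a detected tail's pixel endpoints into metres via the known water-wheel base length. All coordinates and the 3.0 m base length are hypothetical placeholders for illustration, not values from the thesis.

```python
import numpy as np
import cv2

# Hypothetical pixel coordinates of four calibration-board corners in the
# input image (clockwise), and their known positions on the ground plane
# in metres. Real values would come from corner detection on the board.
src_pts = np.float32([[412, 310], [868, 325], [905, 640], [380, 655]])
dst_pts = np.float32([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

# Projective (perspective) transformation mapping image pixels onto the
# metric ground plane.
H = cv2.getPerspectiveTransform(src_pts, dst_pts)

def to_ground_plane(points_px):
    """Map Nx2 pixel coordinates into ground-plane coordinates (metres)."""
    pts = np.float32(points_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Hypothetical endpoints of the water-wheel base and of the tail, as they
# might be located in the image by the object-detection stage.
base_px = [(430, 500), (700, 505)]
tail_px = [(700, 505), (1150, 520)]

base_m = to_ground_plane(base_px)
tail_m = to_ground_plane(tail_px)

# Euclidean lengths on the rectified plane; the known physical base length
# (assumed 3.0 m here) supplies the proportional scale factor.
KNOWN_BASE_M = 3.0
scale = KNOWN_BASE_M / np.linalg.norm(base_m[1] - base_m[0])
tail_length = scale * np.linalg.norm(tail_m[1] - tail_m[0])
print(f"Estimated water-wheel tail length: {tail_length:.3f} m")
```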
Fishery is vital to Taiwan's economy, with an annual production value exceeding 60 billion New Taiwan dollars, and over 40% of fishery products come from aquaculture. Traditional aquaculture relies on visual observation of the water-wheel tail length to assess water quality. However, an aging population, a shortage of young labor, and the difficulty of passing down experience pose challenges, so a precise and efficient way to monitor water quality is needed.
Therefore, this study proposes a water-wheel tail length measurement system that corrects input images through projective transformation to obtain the transformed coordinates of the target in the image. Using known conditions of the water-wheel, such as the length of its base, the actual length of the water-wheel tail is deduced from proportional relationships. Validated with two different calibration boards, the system exhibits a mean absolute percentage error ranging from 0.19% to a maximum of 1.19%, and the standard deviation of the measured calibration-grid length ranges from 0.0207 meters to a maximum of 0.1891 meters. Data augmentation techniques are employed to increase the quantity and diversity of the dataset, and a YOLO (You Only Look Once) v8 deep learning model is trained to recognize water-wheel tail features. The model achieves a maximum mAP50 of 0.99013 and a maximum mAP50-95 of 0.885.
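For the detection stage, a minimal training sketch with the Ultralytics YOLOv8 API might look like the following. The dataset YAML name, model size, and hyperparameters are assumptions chosen for illustration, not the thesis's actual training configuration.

```python
from ultralytics import YOLO

# Start from a pretrained YOLOv8 checkpoint; the nano model is an
# assumption, used here only to keep the sketch lightweight.
model = YOLO("yolov8n.pt")

# "watertail.yaml" is a hypothetical dataset description pointing at the
# augmented images and water-wheel-tail labels. Ultralytics also applies
# its built-in augmentations (mosaic, HSV jitter, flips) during training.
model.train(data="watertail.yaml", epochs=100, imgsz=640)

# Validation reports the metrics quoted in the abstract.
metrics = model.val()
print(metrics.box.map50, metrics.box.map)  # mAP50, mAP50-95
```

A dataset YAML of this kind would list the train/val image directories and a single class (the water-wheel tail), with mAP computed over the validation split.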