| Author: | 黃琢雅 Huang, Jhuo-Ya |
|---|---|
| Thesis Title: | 基於深度學習架構之混凝土表面損壞實時辨識系統 (Real-Time Concrete Damage Detection Based on Deep Learning Technique) |
| Advisor: | 胡宣德 Hu, Hsuan-Teh |
| Degree: | Master |
| Department: | College of Engineering - Department of Civil Engineering |
| Year of Publication: | 2020 |
| Academic Year: | 108 |
| Language: | Chinese |
| Pages: | 106 |
| Keywords (Chinese): | 深度學習 (deep learning), YOLOv3, 混凝土損壞檢測 (concrete damage detection), 影像辨識 (image recognition) |
| Keywords (English): | Deep Learning, YOLOv3, Concrete Damage Detection, Object Detection |
| Views / Downloads: | 158 / 3 |
Surface damage that accumulates on concrete structures over years of service can be identified by visual inspection, but such inspection depends on the experience of trained personnel and can be hazardous. In recent years, deep learning has developed rapidly and been applied widely across many fields, and deep learning image recognition models can be used to assist inspection. The goal of this study is therefore to build a deep learning model for real-time detection of surface cracks and exposed rebar on reinforced concrete.
This study consists of two main parts. The first builds a crack image classification model: a six-layer neural network whose accuracy reaches 99.1%. The second builds a concrete damage object detection model using a YOLOv3 network implemented with the Keras library. Training samples were prepared under different conditions and damage classes were added incrementally, covering cracks, exposed rebar, and crack branches. The models were evaluated by mAP, loss, and detection performance on video, and the best-performing model was then tested for real-time detection.
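The Keras YOLOv3 port cited as [41] feeds images to the network at a fixed input resolution, conventionally 416x416, after an aspect-preserving resize with gray padding ("letterboxing"). As a minimal sketch, the function below reproduces only that scale-and-pad arithmetic; the name and defaults are illustrative, not the library's API.

```python
def letterbox_geometry(src_w, src_h, dst_w=416, dst_h=416):
    """Return (new_w, new_h, pad_x, pad_y): the aspect-preserving resize
    size and the left/top padding that centers it on the dst canvas."""
    scale = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = int(src_w * scale), int(src_h * scale)
    pad_x = (dst_w - new_w) // 2  # border columns added on the left/right
    pad_y = (dst_h - new_h) // 2  # border rows added on the top/bottom
    return new_w, new_h, pad_x, pad_y

print(letterbox_geometry(832, 416))  # -> (416, 208, 0, 104)
```

Because the resize preserves aspect ratio, detected box coordinates must be mapped back through the same scale and padding before being drawn on the original frame.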
Four object detection models were trained: a general crack model, a bridge crack model, a two-class damage model, and a three-class damage model. The general crack model was trained under three conditions, each with 3,510 samples containing equal numbers of positive and negative samples, with different types of photos substituted for the same proportion of negative samples. The model in which building photos replaced negative samples achieved the highest crack AP, 83.61%, but performed poorly on video, so this approach was not pursued. The bridge crack model added bridge inspection photos on top of the general crack model; its best crack AP was 69.38%, and its video performance surpassed the general crack model, so this approach was retained and the exposed rebar class was added. The two-class damage model reached a best mAP of 80.57% over all classes and also had the best video performance, so it was adopted for real-time detection testing. The three-class damage model added a crack branch class to examine how that crack taxonomy affects crack detection performance. After quantitative analysis and video comparison of the above models, the two-class damage model was chosen for testing the real-time detection system.
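The AP and mAP figures quoted above are computed in the PASCAL VOC style used by the evaluation tool cited as [39] (see also [40]): detections are sorted by confidence, matched greedily to ground-truth boxes at an IoU threshold (commonly 0.5), and AP is the area under the resulting precision-recall curve. A minimal single-image, single-class sketch with illustrative names:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def average_precision(detections, gts, iou_thr=0.5):
    """detections: list of (confidence, box); gts: list of boxes."""
    matched = set()
    hits = []
    for conf, box in sorted(detections, reverse=True):  # high confidence first
        best, best_i = 0.0, None
        for i, g in enumerate(gts):
            if i in matched:
                continue            # each ground truth matches at most once
            v = iou(box, g)
            if v > best:
                best, best_i = v, i
        if best >= iou_thr:
            matched.add(best_i)
            hits.append(1)          # true positive
        else:
            hits.append(0)          # false positive
    # accumulate precision at each recall increment (non-interpolated AP)
    ap, tp = 0.0, 0
    for k, hit in enumerate(hits, start=1):
        if hit:
            tp += 1
            ap += (tp / k) / len(gts)
    return ap

# One ground truth, one spurious higher-confidence detection: AP = 0.5.
gts = [(0, 0, 10, 10)]
print(average_precision([(0.9, (20, 20, 30, 30)), (0.8, (0, 0, 10, 10))], gts))
```

mAP is then simply the mean of the per-class APs, e.g. over the crack and exposed rebar classes in the two-class model.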
Real-time testing was conducted at bridges along the Yanshui River in Annan District, Tainan City. The test used a mobile phone streaming video to a computer, and additional videos were recorded for offline detection so that real-time and video-based detection could be compared. The results show that both exposed rebar and cracks were detected with very good performance in real-time as well as video-based detection.
Visual inspection is one of the most common approaches in the field of Structural Health Monitoring (SHM). However, such work relies heavily on the inspectors' knowledge and experience, leading to subjective assessments. On the other hand, with the rapid development of convolutional neural networks (CNNs), deep learning techniques have been widely adopted for damage detection. In this study, a real-time concrete surface damage detection system was developed based on the YOLOv3 network. The influence of different types of training datasets on model accuracy was also investigated.
The study is divided into three parts: image, video, and real-time object detection. First, an image classification model was developed to distinguish cracked from uncracked concrete images. Second, for object detection in video, YOLOv3 was trained on different types of datasets to detect cracks and exposed rebar. The model with the best performance was then adopted for real-time surface damage detection.
Finally, four locations in Tainan City were selected to validate the real-time damage detection model. The results show that the model performs very well, with an AP of 79.78% for concrete cracks and 81.35% for exposed rebar.
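The real-time pipeline described above amounts to reading frames from a camera feed and running the trained detector on each one. Below is a minimal, library-agnostic sketch of such a loop; `detect` stands in for the YOLOv3 model's inference call, which is not shown here.

```python
def run_realtime(capture, detect, max_frames=None):
    """Read frames from `capture` (any object with a read() -> (ok, frame)
    method) and yield the detections for each frame in order."""
    n = 0
    while max_frames is None or n < max_frames:
        ok, frame = capture.read()
        if not ok:           # stream ended or camera disconnected
            break
        yield detect(frame)  # e.g. a list of (class, confidence, box) tuples
        n += 1
```

With a live phone-camera feed, one common choice (an assumption about tooling, not stated in the abstract) is OpenCV's `cv2.VideoCapture`, whose `read()` method matches the interface above; the caller would then draw the yielded boxes on each frame before display.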
[1] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proc. IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.
[2] O. Russakovsky et al., "ImageNet Large Scale Visual Recognition Challenge," Int. J. Comput. Vis., vol. 115, no. 3, pp. 211–252, 2015, doi: 10.1007/s11263-015-0816-y.
[3] A. Krizhevsky, I. Sutskever, and G. E. Hinton, "ImageNet classification with deep convolutional neural networks," Adv. Neural Inf. Process. Syst., vol. 2, pp. 1097–1105, 2012.
[4] K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," in Proc. 3rd Int. Conf. Learn. Represent. (ICLR), 2015, pp. 1–14.
[5] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 770–778, doi: 10.1109/CVPR.2016.90.
[6] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2014, pp. 580–587, doi: 10.1109/CVPR.2014.81.
[7] R. Girshick, "Fast R-CNN," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2015, pp. 1440–1448, doi: 10.1109/ICCV.2015.169.
[8] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137–1149, 2017, doi: 10.1109/TPAMI.2016.2577031.
[9] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, "Focal loss for dense object detection," IEEE Trans. Pattern Anal. Mach. Intell., vol. 42, no. 2, pp. 318–327, 2018, doi: 10.1109/TPAMI.2018.2858826.
[10] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, doi: 10.1109/CVPR.2017.106.
[11] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 779–788, doi: 10.1109/CVPR.2016.91.
[12] J. Redmon and A. Farhadi, "YOLOv3: An incremental improvement," arXiv preprint arXiv:1804.02767, 2018.
[13] J. Redmon and A. Farhadi, "YOLO9000: Better, faster, stronger," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017.
[14] A. Ramcharan et al., "Assessing a mobile-based deep learning model for plant disease surveillance," 2018.
[15] L. Gao, Y. He, X. Sun, X. Jia, and B. Zhang, "Incorporating negative sample training for ship detection based on deep learning," Sensors, vol. 19, no. 3, 2019, doi: 10.3390/s19030684.
[16] Y. Li, Z. Han, H. Xu, L. Liu, X. Li, and K. Zhang, "YOLOv3-lite: A lightweight crack detection network for aircraft structure based on depthwise separable convolutions," Appl. Sci., vol. 9, no. 18, 2019, doi: 10.3390/app9183781.
[17] D. Han and G. Tang, "Damage detection of quayside crane structure based on improved Faster R-CNN," Int. J. New Dev. Eng. Soc., vol. 3, no. 1, pp. 284–301, 2019, doi: 10.25236/IJNDES.190238.
[18] 赵庆安, "基于深度学习方法的古建筑砌体结构表层损伤识别与定位 [Surface damage identification and localization for masonry structures of historic buildings based on deep learning]," thesis, 大連理工大學 (Dalian University of Technology), 2017.
[19] Z. Fan, Y. Wu, J. Lu, and W. Li, "Automatic pavement crack detection based on structured prediction with the convolutional neural network," pp. 1–9, 2018.
[20] H.-W. Huang, Q.-T. Li, and D.-M. Zhang, "Deep learning based image recognition for crack and leakage defects of metro shield tunnel," Tunn. Undergr. Sp. Technol., vol. 77, pp. 166–176, 2018, doi: 10.1016/j.tust.2018.04.002.
[21] W. Silva and D. Lucena, "Concrete cracks detection based on deep learning image classification," Proceedings, vol. 2, no. 8, p. 489, 2018, doi: 10.3390/icem18-05387.
[22] L. Yang, B. Li, W. Li, Z. Liu, G. Yang, and J. Xiao, "A robotic system towards concrete structure spalling and crack database," in Proc. IEEE Int. Conf. Robot. Biomimetics (ROBIO), 2017, pp. 1–6, doi: 10.1109/ROBIO.2017.8324593.
[23] L. Yang, B. Li, W. Li, Z. Liu, G. Yang, and J. Xiao, "Deep concrete inspection using unmanned aerial vehicle towards CSSC database," in Proc. Int. Conf. Intell. Robots Syst. (IROS), 2017.
[24] Y.-J. Cha, W. Choi, G. Suh, S. Mahmoudkhani, and O. Büyüköztürk, "Autonomous structural visual inspection using region-based deep learning for detecting multiple damage types," Comput.-Aided Civ. Infrastruct. Eng., vol. 33, no. 9, pp. 731–747, 2018, doi: 10.1111/mice.12334.
[25] C. Zhang, C. C. Chang, and M. Jamshidi, "Bridge damage detection using a single-stage detector and field inspection images," 2018.
[26] 楊松儒, "以深度學習為基礎之路面破損與閥栓檢測系統 [A deep-learning-based pavement distress and valve-cover detection system]," thesis, 國立臺灣師範大學 (National Taiwan Normal University), 2019.
[27] S. Murao, Y. Nomura, H. Furuta, and C. W. Kim, "Concrete crack detection using UAV and deep learning," in Proc. 13th Int. Conf. Appl. Stat. Probab. Civ. Eng. (ICASP13), 2019.
[28] A. Satoshi, Y. Nobuyoshi, and F. Tomohiro, "Comparison of deep learning model precision for detecting concrete deterioration types from digital images," in Computing in Civil Engineering 2019, pp. 196–203, doi: 10.1061/9780784482445.025.
[29] Ç. F. Özgenel, "Concrete crack images for classification," 23-Jul-2019.
[30] 施威銘研究室, tf.keras 技術者們必讀!深度學習攻略手冊 [Essential tf.keras: A Deep Learning Handbook]. Taipei: 旗標, 2020.
[31] 斎藤康毅, Deep Learning:用Python進行深度學習的基礎理論實作 [Deep Learning from Scratch], 1st ed. Taipei: 碁峰資訊, 2017.
[32] N. Srivastava, G. Hinton, A. Krizhevsky, I. Sutskever, and R. Salakhutdinov, "Dropout: A simple way to prevent neural networks from overfitting," J. Mach. Learn. Res., vol. 15, pp. 1929–1958, 2014.
[33] F. Chollet, Deep learning 深度學習必讀:Keras 大神帶你用 Python 實作 [Deep Learning with Python, Chinese ed.]. Taipei: 旗標, 2019.
[34] A. Kathuria, "What's new in YOLO v3?" [Online]. Available: https://towardsdatascience.com/yolo-v3-object-detection-53fb7d3bfe6b. [Accessed: 27-May-2020].
[35] S. Ding, F. Long, H. Fan, L. Liu, and Y. Wang, "A novel YOLOv3-tiny network for unmanned airship obstacle detection," in Proc. IEEE 8th Data Driven Control and Learning Systems Conf. (DDCLS), 2019, pp. 277–281, doi: 10.1109/DDCLS.2019.8908875.
[36] I. Goodfellow, Y. Bengio, and A. Courville, Deep Learning. Cambridge, MA: MIT Press, 2016.
[37] T.-Y. Lin et al., "Microsoft COCO: Common objects in context," in Computer Vision – ECCV 2014, Lecture Notes in Computer Science, vol. 8693, 2014, pp. 740–755, doi: 10.1007/978-3-319-10602-1_48.
[38] 財團法人中華顧問工程司, "混凝土橋常見裂化樣態探討 [A study of common deterioration patterns of concrete bridges]," 2017.
[39] J. Cartucho, "mAP: This code evaluates the performance of your neural net for object recognition," 2019. [Online]. Available: https://github.com/Cartucho/mAP. [Accessed: 02-Oct-2019].
[40] M. Everingham, L. Van Gool, C. K. I. Williams, J. Winn, and A. Zisserman, "The PASCAL visual object classes (VOC) challenge," Int. J. Comput. Vis., vol. 88, no. 2, pp. 303–338, 2010, doi: 10.1007/s11263-009-0275-4.
[41] qqwweee, "keras-yolo3: A Keras implementation of YOLOv3 (Tensorflow backend)," 2018. [Online]. Available: https://github.com/qqwweee/keras-yolo3. [Accessed: 01-Oct-2019].
[42] T. Lin, "labelImg: A graphical image annotation tool to label object bounding boxes in images," 2015. [Online]. Available: https://github.com/tzutalin/labelImg. [Accessed: 25-Sep-2019].