
Author: 劉德儀 (Liu, Te-Yi)
Thesis title: 台灣海域中大型軍艦多環境下的影像自動辨識 (Automatic Image Recognition of Medium and Large Warships in Seas around Taiwan in Multiple Environments)
Advisor: 陳政宏 (Chen, Jeng-Horng)
Co-advisor: 江佩如 (Jiang, Pei-Ru)
Degree: Master
Department: Department of Systems and Naval Mechatronic Engineering, College of Engineering
Year of publication: 2022
Academic year of graduation: 110 (2021-2022)
Language: Chinese
Number of pages: 70
Chinese keywords: 卷積神經網路 (convolutional neural network), 軍艦影像辨識 (warship image recognition)
Foreign-language keywords: convolutional neural network, warship image recognition
Views: 102; downloads: 1
    This study focuses on building an automatic image recognition system for medium and large warships in the seas around Taiwan under multiple environmental conditions. Taiwan's strategic location gives it considerable military and shipping importance to many countries, and the system is motivated by the security of the Taiwan Strait. "Multiple environments" here means images of ships in rain, in simulated fog, in aerial photographs, and so on. Medium and large warships of various countries (displacement greater than 5,000 tonnes) were selected for the database: 4,006 images of actual ships and 1,424 aerial images of actual ships. Because such photographs are not easy to obtain online, the database was expanded by post-processing the images (data augmentation) so that they match these environments; the software used in this study was written in Python. The augmentation methods chosen include Gaussian blur, edge drawing, and Gaussian noise. A convolutional neural network was used to build the system: the thesis first reviews machine learning, deep learning, and the strengths, weaknesses, and architectures of common networks, then derives a modified network model and compares it with the Xception and MobileNet models in both training and image recognition. For image recognition, the results are presented as confusion matrices. Finally, the three models are compared across environments and against the architecture of 陳仕强 (Chen, Shi-Qiang, 2021). The ship recognition system reaches a training accuracy of 98.58% with the modified model; the recognition rate is 96.67% in good weather, 84.44% for aerial images, 96.67% in rain, and 92.77% with a dirty lens.
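
    The thesis code is not published in this record; as a rough illustration of the augmentation step described above, the following is a minimal Python sketch assuming OpenCV and NumPy. The function names, the input path "warship.jpg", and the kernel and noise parameters are illustrative assumptions, not the author's actual implementation.

    import cv2
    import numpy as np

    def gaussian_blur(img, ksize=5, sigma=1.5):
        # Soften the image, mimicking defocus/haze-style degradation.
        return cv2.GaussianBlur(img, (ksize, ksize), sigma)

    def gaussian_noise(img, mean=0.0, std=15.0):
        # Add pixel-wise Gaussian noise (sensor-noise style augmentation).
        noise = np.random.normal(mean, std, img.shape).astype(np.float32)
        return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

    def sobel_edges(img):
        # Edge-emphasis ("edge drawing") augmentation via the Sobel operator.
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
        gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
        return cv2.convertScaleAbs(cv2.magnitude(gx, gy))

    # Example: expand one ship photo (hypothetical path) into augmented variants.
    img = cv2.imread("warship.jpg")
    variants = [gaussian_blur(img), gaussian_noise(img), sobel_edges(img)]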

    This research focuses on image identification of medium and large warships in the seas around Taiwan in multiple environments and on building the corresponding database. Given Taiwan's strategic position, its geographical location is of great military and shipping importance to many countries. Medium and large warships of various countries were selected for the database, and photos of ships on rainy days, with simulated fog, and from aerial photography were added as well. Because these photos are not easy to obtain on the Internet, the database was expanded by post-processing the photos so that they fit these environments; the software used was written in Python. Data augmentation expands the image database by a total of 10,288 images, using edge drawing, Gaussian blur, and Gaussian noise. A convolutional neural network system was built: after reviewing current common models, a modified network model was derived and compared with the Xception and MobileNet models in both training and image recognition. For image recognition, the results are presented as confusion matrices; finally, the three models are compared in each environment and against the model of Chen, Shi-Qiang (2021), and conclusions are drawn. The ship recognition system achieves a training accuracy of 98.58% with the modified model and a recognition rate of 96.67% in good weather. For aerial images, the recognition rate is 84.44%; under degraded conditions, the recognition rate is 96.67% in rain and 92.77% with a dirty lens.
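
    For the model-comparison step, the sketch below assumes TensorFlow/Keras and scikit-learn; the modified network itself is not reproduced, and the class count, input size, and training settings are assumed values rather than the thesis's actual configuration. It only illustrates how Xception and MobileNet baselines could be built and how per-environment results could be summarised as a confusion matrix.

    import numpy as np
    import tensorflow as tf
    from sklearn.metrics import confusion_matrix

    NUM_CLASSES = 6              # assumed number of warship classes
    INPUT_SHAPE = (224, 224, 3)  # assumed input size

    def build_baseline(name):
        # Wrap a standard backbone with a small softmax classification head.
        backbones = {"xception": tf.keras.applications.Xception,
                     "mobilenet": tf.keras.applications.MobileNet}
        base = backbones[name](include_top=False, weights=None,
                               input_shape=INPUT_SHAPE, pooling="avg")
        out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(base.output)
        model = tf.keras.Model(base.input, out)
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

    def evaluate(model, x_test, y_test):
        # Summarise recognition results for one test environment as a confusion matrix.
        y_pred = np.argmax(model.predict(x_test), axis=1)
        return confusion_matrix(y_test, y_pred)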

    Table of contents:
    Abstract (Chinese)
    Extended Abstract
    Acknowledgements
    List of Tables
    List of Figures
    Nomenclature
    Chapter 1  Introduction
    1-1 Research Motivation and Background
    1-2 Literature Review
    1-3 Research Objectives
    1-4 Thesis Structure
    Chapter 2  Research Methods
    2-1 Deep Learning and Database Construction
    2-1-1 Machine Learning
    2-1-2 Deep Learning
    2-1-3 Selection and Classification of Surface Ships
    2-1-4 Collection and Classification of Training Database Images
    2-2 System Architecture
    2-3 Artificial Neural Networks
    2-3-1 Network Architecture
    2-3-2 Choice of Neural Network
    2-3-3 Introduction to Xception
    2-3-4 Introduction to MobileNetV1
    2-3-5 Introduction to the Modified Model
    2-4 Image Processing
    2-4-1 Color Transformation: Gaussian Blur
    2-4-2 Color Transformation: Sobel
    2-4-3 Color Transformation: Gaussian Noise
    2-4-4 Color Transformation: Added Noise
    2-4-5 Environment Transformation: Adding Raindrops
    2-4-6 Environment Transformation: Dirty Lens
    2-5 Network Architecture Adjustment
    2-6 Model Training Equipment and Environment
    Chapter 3  Results and Analysis
    3-1 Training Environments and Results for Each Model
    3-2 Recognition Results on Real Ships
    3-3 Recognition Results on Real Ships under Weather Conditions
    3-3-1 Recognition in Rain
    3-3-2 Recognition with a Dirty Lens
    3-4 Analysis across Environments
    3-5 Comparison with Previous Research Models
    Chapter 4  Conclusions and Future Work
    4-1 Conclusions
    4-2 Future Work
    References

    Duarte, C. C., Naranjo, B. P. D., Lopez, A. A., & Del Campo, A. B. (2007). CWLFM radar for ship detection and identification. IEEE Aerospace and Electronic Systems Magazine, 22(2), 22-26.
    Azzouz, E. E., & Nandi, A. K. (1995). Automatic identification of digital modulation types. Signal Processing, 47(1), 55-69.
    Casella, G., & Berger, R. (2001). Hypothesis testing in statistics.
    Chellapilla, K., Puri, S., & Simard, P. (2006). High performance convolutional neural networks for document processing. In Tenth International Workshop on Frontiers in Handwriting Recognition. Suvisoft.
    Chollet, F. (2017). Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 1251-1258).
    Ciregan, D., Meier, U., & Schmidhuber, J. (2012). Multi-column deep neural networks for image classification. In 2012 IEEE Conference on Computer Vision and Pattern Recognition (pp. 3642-3649). IEEE.
    Elhoseiny, M., Huang, S., & Elgammal, A. (2015). Weather classification with deep convolutional neural networks. In 2015 IEEE International Conference on Image Processing (ICIP) (pp. 3349-3353). IEEE.
    Flusser, J., Farokhi, S., Höschl, C., Suk, T., Zitova, B., & Pedone, M. (2015). Recognition of images degraded by Gaussian blur. IEEE Transactions on Image Processing, 25(2), 790-806.
    Fukushima, K. (1988). Neocognitron: A hierarchical neural network capable of visual pattern recognition. Neural Networks, 1(2), 119-130.
    He, K., Sun, J., & Tang, X. (2010). Single image haze removal using dark channel prior. IEEE Transactions on Pattern Analysis and Machine Intelligence, 33(12), 2341-2353.
    He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 770-778).
    Howard, A. G., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Andreetto, M., & Adam, H. (2017). MobileNets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
    Krishna, S. T., & Kalluri, H. K. (2019). Deep learning and transfer learning approaches for image classification. International Journal of Recent Technology and Engineering (IJRTE), 7(5S4), 427-432.
    Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2012). ImageNet classification with deep convolutional neural networks. Advances in Neural Information Processing Systems, 25.
    Lee, E. R., Kim, P. K., & Kim, H. J. (1994). Automatic recognition of a car license plate using color image processing. In Proceedings of 1st International Conference on Image Processing (Vol. 2, pp. 301-305). IEEE.
    Preacher, K. J., & Leonardelli, G. J. (2001). Calculation for the Sobel test. Retrieved January 20, 2009.
    Shi, Q., Li, W., Zhang, F., Hu, W., Sun, X., & Gao, L. (2018). Deep CNN with multi-scale rotation invariance features for ship classification. IEEE Access, 6, 38656-38668.
    Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
    Skottun, B. C., De Valois, R. L., Grosof, D. H., Movshon, J. A., Albrecht, D. G., & Bonds, A. (1991). Classifying simple and complex cells on the basis of response modulation. Vision Research, 31(7-8), 1078-1086.
    Starik, S., & Werman, M. (2003). Simulation of rain in videos. In Texture Workshop, ICCV (Vol. 2, pp. 406-409).
    Wang, Y., Wang, C., Zhang, H., Dong, Y., & Wei, S. (2019). Automatic ship detection based on RetinaNet using multi-resolution Gaofen-3 imagery. Remote Sensing, 11(5), 531.
    陳仕强 (Chen, S.-Q.). (2021). Construction of a recognition system for common warships in the seas adjacent to Taiwan. Master's thesis, Department of Systems and Naval Mechatronic Engineering, National Cheng Kung University.
    陳建村. (1993). Ship recognition using a fuzzy feature representation. Master's thesis, Institute of Electrical and Information Engineering, Yuan Ze University (academic year 82).
    曾偉銘. (2018). Automatic ship detection and segmentation based on convolutional neural network techniques. Master's thesis, Department of Computer Science and Information Engineering, National Taichung University of Science and Technology (academic year 106).
    鄭富元. (2006). A study of a prototype multi-aspect ship recognition system. Master's thesis, Department of Electrical Engineering, National Chung Hsing University (academic year 94).

    [Web]
    Wikipedia, illustration of the Gaussian (normal) distribution, from the Wikipedia article on the normal distribution.
    https://zh.wikipedia.org/wiki/%E6%AD%A3%E6%80%81%E5%88%86%E5%B8%83, accessed August 2, 2022.

    On campus: available immediately
    Off campus: available from 2027-09-06
    The electronic thesis has not yet been authorized for public release; for the print copy, please consult the library catalog.