
Author: Liao, Chin-Hsuan (廖沁旋)
Title: Optimization of Vehicle Image System in Diverse Weather Conditions Based on YOLOv7 and Image Processing (基於YOLOv7深度學習與影像處理的多樣化天氣下車輛影像系統優化)
Advisor: Liao, Teh-Lu (廖德祿)
Degree: Master
Department: College of Engineering, Department of Engineering Science
Year of Publication: 2023
Academic Year of Graduation: 111 (2022-2023)
Language: Chinese
Pages: 58
Keywords (Chinese): 多樣化天氣, 影像處理, 深度學習, 車燈辨識, 分群法
Keywords (English): Diverse weather conditions, Image processing, Deep learning, Car light recognition, Clustering
Hits: 157; Downloads: 55
Abstract (Chinese):
    With industry giants such as Tesla bringing self-driving cars into public view, and with the broader trend toward industrial automation, ensuring passenger safety while reducing or replacing human intervention has become an important research topic in the automotive field. Some vehicles on the market already provide basic safety systems, such as lane-change assistance, lane-departure control, forward and rear collision prevention, and night-driving assistance. However, these safety features are typically effective only in good weather with sufficient light; in real environments that demand more manpower and precautions, such as dense fog, heavy rain, or low light, they still cannot effectively provide assistance or reduce the human workload.
    This study therefore proposes an optimization of a vehicle image system for diverse weather conditions based on YOLOv7 deep learning and image processing, aiming to improve the image-recognition performance of vehicle image systems under various weather conditions and thereby enhance passenger safety. To achieve this goal, the thesis combines image processing with the YOLOv7 (You Only Look Once version 7) object-detection algorithm to accurately identify car lights. Gaussian smoothing and Canny edge enhancement are introduced as defense mechanisms against FGSM (Fast Gradient Sign Method) attacks, countering the recognition errors caused by malicious perturbations and improving system security. In addition, a clustering method classifies the detected car lights using features such as hue, saturation, value, optical flow, and center position, reducing the computation time required for interaction with the vehicle's other sensors and enabling real-time vehicle tracking.
    Experimental results show that, while defending against FGSM attacks, the composite YOLOv7 system achieves an accuracy of 87.41%, with a maximum computation time of 0.658 seconds per frame. The proposed method can therefore improve the autonomous driving capability of vehicle image systems in diverse weather conditions, increase driving safety, and advance vehicle image system technology.
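The Gaussian-smoothing defense described in the abstract can be sketched in a few lines. This is a minimal numpy-only illustration, not the thesis's actual implementation; the kernel size and sigma are arbitrary example values, and a real pipeline would typically use an optimized routine such as OpenCV's `cv2.GaussianBlur`.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Build a normalized 2-D Gaussian kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return kernel / kernel.sum()

def gaussian_smooth(img, size=5, sigma=1.0):
    """Smooth a 2-D grayscale image. Averaging neighboring pixels
    dampens the high-frequency, pixel-level perturbations that FGSM
    injects, which is why smoothing serves as a simple defense."""
    kernel = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * kernel)
    return out
```

In practice the smoothed frame would then be passed through Canny edge enhancement before detection, as the abstract describes; the parameters for both steps would be tuned against the thesis's dataset.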

Abstract (English):
    The rise of self-driving cars and industrial automation has prompted extensive research into passenger safety and into reducing human intervention in autonomous vehicles. However, the safety systems in commercially available vehicles remain insufficient in adverse weather conditions. This study proposes an optimization method for self-driving car image systems that focuses on car-light detection to enhance image-recognition performance across diverse weather conditions. By integrating image-processing techniques with the YOLOv7 object-detection algorithm, the study achieves precise identification of vehicle lights. Defense mechanisms such as Gaussian smoothing and Canny edge enhancement are introduced to counter FGSM attacks and improve system security. A clustering method utilizing various features is employed to classify car lights and enable real-time vehicle tracking. Experimental results show an accuracy of 87.41% and a computation time of 0.658 seconds per image frame for the improved YOLOv7 system. This approach holds promise for enhancing autonomous driving capabilities in diverse weather conditions, thereby improving driving safety and driving further advances in vehicle image system technology.
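The light-pairing step relies on K-Means clustering over per-light feature vectors (hue, saturation, value, optical flow, and center position). The following is a self-contained sketch, not the thesis's implementation: the farthest-point initialization and the feature layout are illustrative assumptions.

```python
import numpy as np

def kmeans(features, k=2, iters=50):
    """Minimal K-Means over the rows of `features`. Each row is one
    detected light's feature vector (e.g. hue, saturation, value,
    optical flow, center position). Returns labels and centers."""
    # Deterministic farthest-point initialization
    centers = [features[0]]
    for _ in range(1, k):
        dist = np.min([np.linalg.norm(features - c, axis=1) for c in centers], axis=0)
        centers.append(features[dist.argmax()])
    centers = np.array(centers, dtype=float)

    for _ in range(iters):
        # Assign each feature vector to its nearest center
        d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Recompute centers; keep the old center if a cluster is empty
        new_centers = np.array([
            features[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return labels, centers
```

Because hue, optical flow, and pixel coordinates live on very different numeric ranges, the feature channels would normally be normalized to comparable scales before clustering.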

Table of Contents:
    Abstract I
    Acknowledgements XIII
    Table of Contents XIV
    List of Figures XVI
    List of Tables XVIII
    Chapter 1  Introduction 1
      1.1 Background and Motivation 1
      1.2 Research Objectives 2
      1.3 Scope and Limitations 2
      1.4 Chapter Overview 3
    Chapter 2  Literature Review 4
      2.1 Overview of Automotive Safety Technology 4
      2.2 Machine Vision in Vehicle Assistance Systems 5
      2.3 YOLO Object Detection Models 7
        2.3.1 Model Training 8
        2.3.2 Model Inference 9
        2.3.3 YOLOv7 10
      2.4 Data Augmentation 11
      2.5 Adversarial Attacks 11
        2.5.1 FGSM Adversarial Attack 12
        2.5.2 Defense Mechanisms against FGSM Attacks 13
      2.6 Clustering Methods 13
        2.6.1 K-Means Algorithm 13
        2.6.2 Hierarchical Clustering 14
    Chapter 3  Methodology 15
      3.1 Research Workflow 15
      3.2 Dataset 16
        3.2.1 Public Dataset Selection 17
        3.2.2 Data Labeling 18
        3.2.3 Data Augmentation 19
        3.2.4 Data Splitting 20
      3.3 Model Improvement 21
        3.3.1 Defense against Adversarial Attacks: Gaussian Smoothing 21
        3.3.2 Feature Enhancement: Canny Edge Enhancement 23
        3.3.3 Feature Enhancement: Unsharp Masking (USM) 25
        3.3.4 Evaluation Metrics 27
        3.3.5 Baseline Selection 28
      3.4 Car-Light Pairing via Clustering 28
        3.4.1 Feature Extraction 29
        3.4.2 Clustering Comparison: K-Means vs. Hierarchical Clustering 29
        3.4.3 Evaluation Metrics 30
    Chapter 4  Experimental Results and Discussion 33
      4.1 Experimental Environment 33
      4.2 YOLOv7 Parameter Settings 34
      4.3 Discussion of Results 35
        4.3.1 Data Augmentation of the Training Set 35
        4.3.2 Gaussian Smoothing as a Defense against Adversarial Attacks 38
        4.3.3 Improving Model Performance with Canny Edge Enhancement and Unsharp Masking 41
        4.3.4 Car-Light Recognition of the Composite YOLOv7 System under Various Weather Conditions 45
        4.3.5 Comparison of the Composite YOLOv7 System with Current Vehicle Detection Models 50
        4.3.6 Comparison of Clustering Methods for Car-Light Pairing 50
    Chapter 5  Conclusion 54
      5.1 Conclusion 54
      5.2 Future Work 55
    References 56


Full-text availability: On campus: immediate open access; Off campus: immediate open access