
Graduate Student: 顏綸 (Yen, Lun)
Thesis Title: 應用邊緣計算於多類型交通辨識系統之設計與效能分析 (Application of edge computing in the design and performance analysis of multi-type traffic recognition systems)
Advisor: 賴槿峰 (Lai, Chin-Feng)
Degree: Master's
Department: Department of Engineering Science, College of Engineering
Year of Publication: 2025
Graduating Academic Year: 113 (2024-2025)
Language: Chinese
Number of Pages: 112
Keywords (Chinese): 邊緣運算, 影像偵測, 影像辨識處理, 機器學習, 量化, 知識蒸餾, 交通號誌辨識
Keywords (English): edge computing, image detection, image recognition processing, machine learning, quantization, knowledge distillation, traffic sign recognition
Abstract:

    This research designs a multi-type traffic recognition system built on edge computing to address the shortcomings of traditional cloud architectures. With the development of intelligent transportation and advanced driver assistance systems, real-time recognition of traffic signs is increasingly important for road safety. Most current systems rely on cloud computing, which is susceptible to network latency and stability limitations. This study deploys the YOLO11 object detection model on edge devices paired with the Hailo-8 AI accelerator to achieve efficient and stable real-time recognition, optimized in particular for Taiwan's complex surface-road environment. The study first constructs a traffic sign dataset covering Taiwan's diverse weather and lighting conditions, and improves the model's generalization through image preprocessing methods such as grayscale conversion, saturation adjustment, and noise addition. For model design, the YOLO11 series is used as the base and optimized with knowledge distillation and model quantization, so the model preserves recognition accuracy while reducing computing resource requirements and improving deployment efficiency. Experimental results show that the distilled and quantized YOLO11s model performs well on Precision, Recall, and mAP@0.5, successfully transferring knowledge from the large teacher model to the lightweight student model. In edge device tests, the Hailo-8 AI accelerator significantly improves average inference speed (FPS), while in PC tests the YOLO11m model achieves the best accuracy and YOLO11n the fastest speed. The main contributions of this study are: an edge traffic recognition system suited to Taiwanese road scenes, an efficient optimization pipeline combining knowledge distillation and model quantization, and improved robustness to lighting and environmental variation. The experiments confirm that the system is technically feasible for real-time traffic recognition on edge devices and can be extended in the future to intelligent transportation scenarios such as traffic violation detection and vehicle classification.
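    The abstract names grayscale conversion, saturation adjustment, and noise addition as the preprocessing and augmentation steps used to improve generalization. The Python sketch below shows one plausible way to apply these operations with OpenCV and NumPy; the thesis does not publish its exact parameters, so the scale factor, noise level, and file names here are illustrative assumptions only.

```python
# Illustrative sketch only: the thesis lists grayscale conversion, saturation
# adjustment, and noise addition as preprocessing steps but does not publish
# its exact parameters. All values below are assumptions.
import cv2
import numpy as np

def to_grayscale(img: np.ndarray) -> np.ndarray:
    """Convert a BGR image to grayscale, then back to 3 channels so the
    detector still receives a 3-channel input."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)

def adjust_saturation(img: np.ndarray, scale: float = 0.7) -> np.ndarray:
    """Scale the saturation channel in HSV space (scale < 1 desaturates)."""
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 1] = np.clip(hsv[..., 1] * scale, 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0) -> np.ndarray:
    """Add zero-mean Gaussian noise to simulate sensor noise and low light."""
    noise = np.random.normal(0.0, sigma, img.shape)
    return np.clip(img.astype(np.float32) + noise, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    image = cv2.imread("sample_sign.jpg")  # hypothetical input file
    variants = [to_grayscale(image),
                adjust_saturation(image, 0.7),
                add_gaussian_noise(image, 10.0)]
    for i, aug in enumerate(variants):
        cv2.imwrite(f"aug_{i}.jpg", aug)
```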

    Table of Contents:
      Abstract (Chinese)
      Research Methods
      Results and Discussion
      Acknowledgments
      Table of Contents
      List of Figures
      List of Tables
      List of Abbreviations
      Chapter 1  Introduction
        1-1  Research Motivation
        1-2  Research Objectives
        1-3  Contributions of This Study
        1-4  Thesis Organization
      Chapter 2  Literature Review
        2-1  Traffic Sign Recognition Methods and Recent Research
          2-1-1  Traditional Traffic Sign Recognition Methods
          2-1-2  Edge AI Application Cases in Intelligent Transportation
        2-2  Deep Learning-Based Traffic Sign Recognition Methods
          2-2-1  Applications of YOLO-Series Models in Traffic Object Detection
          2-2-2  Trade-offs Between Model Versions in Speed, Accuracy, and Model Size
        2-3  Applications of Model Quantization
        2-4  Applications of Knowledge Distillation
      Chapter 3  Research Methods
        3-1  Research Architecture
        3-2  Data Collection and Preprocessing
          3-2-1  Image Capture and Annotation
          3-2-2  Image Preprocessing and Augmentation
          3-2-3  Dataset Splitting
        3-3  Model Construction and Training Methods
        3-4  Model Optimization Techniques
        3-5  Model Format Conversion
          3-5-1  Edge Device Deployment
        3-6  Chapter Summary
      Chapter 4  Experimental Design
        4-1  Experimental Platform and Equipment Configuration
          4-1-1  Hardware Specifications
          4-1-2  Edge Computing Platform Specifications
          4-1-3  Software Specifications
        4-2  Experimental Design Goals and Rationale
        4-3  Experimental Phases and Workflow
        4-4  Experimental Group Settings
        4-5  Chapter Summary
      Chapter 5  Experimental Results
        5-1  YOLO11 Model Training Results
          5-1-1  Description of Key YOLO11 Training Metrics
          5-1-2  Results for Key YOLO11 Training Metrics
        5-2  YOLO11 Model Tests on the PC Platform
          5-2-1  PC Platform Performance Test Data
          5-2-2  PC Platform Performance Test Conclusions
        5-3  YOLO11 Model Tests on the Edge Device
          5-3-1  Edge Device Performance Test Data
          5-3-2  Edge Device Performance Test Conclusions
          5-3-3  Model Selection Considerations
        5-4  Comprehensive Cross-Platform Inference Performance Comparison and Analysis
          5-4-1  Metric Comparison and Analysis
          5-4-2  Average Confidence Comparison and Analysis
          5-4-3  Average FPS Comparison and Analysis
          5-4-4  Inference Time Comparison and Analysis
          5-4-5  Latency and Power Consumption Comparison and Analysis
          5-4-6  Integrated Performance Analysis of the Hailo-8 AI Accelerator
          5-4-7  Comparison of Model Recognition Results
        5-5  Chapter Summary
      Chapter 6  Conclusion and Future Work
        6-1  Conclusion
        6-2  Future Work
      References
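    The optimization pipeline summarized in the abstract (and covered in Chapter 3-4, Model Optimization Techniques) transfers knowledge from a large teacher model to a lightweight YOLO11 student before quantization. As a rough illustration of the idea only, the PyTorch sketch below implements the classic response-based (soft-label) teacher-student distillation loss; it does not reproduce the thesis's actual distillation objective for YOLO11 detection heads, and the temperature, weighting, and tensor shapes are assumed values.

```python
# Minimal sketch of a response-based knowledge-distillation loss
# (softened teacher vs. student outputs). Shown in the generic classification
# form to illustrate the idea; the thesis's YOLO11-specific objective is not
# reproduced here. Temperature and alpha are assumed values.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      targets: torch.Tensor,
                      temperature: float = 4.0,
                      alpha: float = 0.5) -> torch.Tensor:
    # Hard-label term: ordinary cross-entropy against ground truth.
    hard = F.cross_entropy(student_logits, targets)
    # Soft-label term: KL divergence between temperature-softened distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)  # rescale so gradients keep a comparable magnitude
    return alpha * hard + (1.0 - alpha) * soft

# Usage example with random tensors standing in for one batch of class scores.
student = torch.randn(8, 20, requires_grad=True)  # 8 samples, 20 classes (assumed)
teacher = torch.randn(8, 20)
labels = torch.randint(0, 20, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()
```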


    Full-Text Availability: On campus: available immediately. Off campus: available immediately.