
Graduate Student: Chiang, Chi-Hsun (江奇勳)
Thesis Title: Automatic Traffic Light Recognition for Mobile Robot Applications (紅綠燈自動辨識在自走車之應用)
Advisor: Chou, Jung-Hua (周榮華)
Degree: Master
Department: College of Engineering - Department of Engineering Science
Year of Publication: 2017
Academic Year of Graduation: 105
Language: Chinese
Number of Pages: 77
Chinese Keywords: Traffic Light (紅綠燈), Image Recognition (影像辨識), Mobile Robot (自走車)
English Keywords: Traffic Light Recognition, Automatic Mobile Robot
Hits: 122; Downloads: 29
    The main purpose of this thesis is to propose an algorithm for traffic light detection and recognition that can successfully detect traffic lights and identify their states under different weather conditions and complex backgrounds. The hope is that, applied to Advanced Driver Assistance Systems (ADAS) or blind-guidance aids, it can cope with real-world conditions and bring a more convenient and safer life.
    The experiment uses a notebook computer as the computing center and a webcam to capture the input images. First, image pre-processing combined with optical filters removes the interference caused by light sources; multiple features are then combined to detect the position of the traffic light. In the daytime, HOG feature extraction with an SVM classifier identifies the traffic light state; at night, because HOG is less suited to extracting the edge gradients of traffic lights, color features are used instead. Finally, the recognition result computed on the notebook is transmitted over RS-232 to a microcontroller so that the mobile robot moves or stops accordingly. Under normal conditions the stationary recognition rate exceeds 99%, and the average processing time per image is 76 ms.

    The main purpose of this thesis is to provide a system for traffic light detection and recognition based on image processing. The goal is to bring more convenience and safety to daily life, with the hope that this traffic light recognition system can be applied to ADAS (Advanced Driver Assistance Systems) or blind-aid systems to handle real-world conditions.
    The system uses an on-board notebook as its computing center, with images captured by a webcam. First, image pre-processing combined with optical filters removes the interference caused by varying illumination; a multi-feature fusion step then detects the position of the traffic light. In the daytime, an SVM (Support Vector Machine) trained on HOG (Histogram of Oriented Gradients) features determines the state of the traffic light. At night, however, edge information cannot be extracted reliably even with filters, so a color-feature method is adopted for classification instead. The system successfully detects and recognizes traffic light status irrespective of weather conditions and complicated backgrounds. Finally, the recognition result is transmitted over RS-232 to the control chip, enabling the mobile robot to act appropriately, e.g., move forward or stop. Under normal conditions the recognition rate exceeds 99%, and the average computation time per frame is 76 ms.

    Abstract
    Extended Abstract
    Acknowledgments
    Table of Contents
    List of Figures
    List of Tables
    Chapter 1  Introduction
        1.1 Preface
        1.2 Motivation and Objectives
        1.3 Literature Review
        1.4 Research Methods
        1.5 Thesis Organization
    Chapter 2  Mobile Robot System Design and Hardware/Software Specifications
        2.1 Overall Architecture of the Mobile Robot System
        2.2 Mobile Robot Mechanism
        2.3 Hardware Specifications
            2.3.1 Sensing Element: Webcam
            2.3.2 Polarizer Filters [27]
            2.3.3 Neutral Density Filters [28]
            2.3.4 Core Chip: PIC18F4550
            2.3.5 RS-232 Communication
            2.3.6 DC Motors
            2.3.7 DC Motor Driver Chip TA7291P
            2.3.8 Photocoupler TLP250
            2.3.9 Voltage Regulator LM7805 and U-Shaped Gold Heat Sink
        2.4 Software Specifications
    Chapter 3  Traffic Light Image Processing and Recognition
        3.1 Algorithm Architecture for Traffic Light Images
        3.2 Image Pre-processing
            3.2.1 Traffic Light ROI Position
            3.2.2 RGB Color Space
            3.2.3 HSV Color Space
            3.2.4 Influence of the External Environment on Images
        3.3 Traffic Light Detection
            3.3.1 Image Brightness Thresholding
            3.3.2 HSV Color Thresholding
            3.3.3 Morphology [38]
            3.3.4 Connected-Component Labeling
            3.3.5 Geometric Properties of Traffic Lights [11]
        3.4 Traffic Light Recognition
            3.4.1 Daytime Recognition Method
            3.4.2 Nighttime Recognition Method
        3.5 Complete Traffic Light Image Processing Flow
        3.6 Software Emulation of the Hardware ND Filter
    Chapter 4  Experimental Results and Discussion
        4.1 Stationary Traffic Light Detection and Recognition Experiments
        4.2 Stationary Recognition Results and Discussion
        4.3 Image Recognition with Mobile Robot Driving Experiments
            4.3.1 Experimental Procedure
            4.3.2 Experimental Results
        4.4 Mobile Robot Results and Discussion
    Chapter 5  Conclusions and Suggestions
        5.1 Conclusions
        5.2 Suggestions
    References

    [1] 王宇, 刘泓滨, 吴智恒, 陈启愉, 蔺志敏, and 童季刚, “Obstacle-crossing performance analysis of a machine-vision disaster-rescue robot,” Coal Mine Machinery (煤矿机械), vol. 37, no. 7, pp. 92-94, 2016.
    [2] NASA, Mars Science Laboratory (Curiosity) mission page, https://www.nasa.gov/mission_pages/msl/index.html, accessed 2015/7/15.
    [3] J. Stoev, S. Gillijns, A. Bartic, and W. Symens, “Badminton playing robot - a multidisciplinary test case in mechatronics,” IFAC Proceedings Volumes, vol. 43, no. 18, pp. 725-731, 2010.
    [4] 彭政茂, “A wheeled jumping robot with home surveillance capability,” master's thesis, National Central University, Taoyuan, Taiwan, 2016.
    [5] 吳思蒨, “Image recognition of traffic lights at intersections for a mobile robot,” master's thesis, Institute of Engineering Science, National Cheng Kung University, Tainan, Taiwan, 2015.
    [6] J.-K. Oh, G. Jang, S. Oh, J. H. Lee, B.-J. Yi, Y. S. Moon, J. S. Lee, and Y. Choi, “Bridge inspection robot system with machine vision,” Automation in Construction, vol. 18, no. 7, pp. 929-941, 2009.
    [7] 鄭佳其, “A mobile robot applying real-time image processing to tactile-paving recognition,” master's thesis, Institute of Engineering Science, National Cheng Kung University, Tainan, Taiwan, 2009.
    [8] 郭懿慧, “Design and implementation of a beach-cleaning robot system,” master's thesis, Institute of Engineering Science, National Cheng Kung University, Tainan, Taiwan, 2016.
    [9] 曹舜賢, “Development of a pedestrian traffic-signal recognition aid for the visually impaired,” master's thesis, Graduate Institute of Electrical Engineering, National Taiwan University, pp. 1-77, 2007.
    [10] 余懿真, 林佳秀, 趙偉善, 李阡筠, 陳妍如, and 秦群立, “Elderly mobility care system,” 2011.
    [11] C. Yu, C. Huang, and Y. Lang, “Traffic light detection during day and night conditions by a camera,” Proceedings of the IEEE 10th International Conference on Signal Processing, pp. 821-824, 2010.
    [12] J. Gong, Y. Jiang, G. Xiong, C. Guan, G. Tao, and H. Chen, “The recognition and tracking of traffic lights based on color segmentation and CAMSHIFT for intelligent vehicles,” 2010 IEEE Intelligent Vehicles Symposium, pp. 431-435, 2010.
    [13] H. Moizumi, Y. Sugaya, M. Omachi, and S. Omachi, “Traffic light detection considering color saturation using in-vehicle stereo camera,” Journal of Information Processing, vol. 24, no. 2, pp. 349-357, 2016.
    [14] H.-K. Kim, J. H. Park, and H.-Y. Jung, “Effective traffic lights recognition method for real time driving assistance system in the daytime,” World Academy of Science, Engineering and Technology, no. 59, 2011.
    [15] M. Omachi and S. Omachi, “Traffic light detection with color and edge information,” 2009 2nd IEEE International Conference on Computer Science and Information Technology, pp. 284-287, 2009.
    [16] M. Diaz-Cabrera, P. Cerri, and J. Sanchez-Medina, “Suspended traffic lights detection and distance estimation using color features,” 2012 15th International IEEE Conference on Intelligent Transportation Systems, pp. 1315-1320, 2012.
    [17] M. Diaz-Cabrera, P. Cerri, and P. Medici, “Robust real-time traffic light detection and distance estimation using a single camera,” Expert Systems with Applications, vol. 42, no. 8, pp. 3911-3923, 2015.
    [18] S. Sooksatra and T. Kondo, “Red traffic light detection using fast radial symmetry transform,” 2014 11th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON), pp. 1-6, 2014.
    [19] Y. Shen, U. Ozguner, K. Redmill, and J. Liu, “A robust video based traffic light detection algorithm for intelligent vehicles,” 2009 IEEE Intelligent Vehicles Symposium, pp. 521-526, 2009.
    [20] V. John, K. Yoneda, Z. Liu, and S. Mita, “Saliency map generation by the convolutional neural network for real-time traffic light detection using template matching,” IEEE Transactions on Computational Imaging, vol. 1, no. 3, pp. 159-173, 2015.
    [21] Y. Jie, C. Xiaomin, G. Pengfei, and X. Zhonglong, “A new traffic light detection and recognition algorithm for electronic travel aid,” 2013 Fourth International Conference on Intelligent Control and Information Processing (ICICIP), pp. 644-648, 2013.
    [22] Y. Zhang, J. Xue, G. Zhang, Y. Zhang, and N. Zheng, “A multi-feature fusion based traffic light recognition algorithm for intelligent vehicles,” Proceedings of the 33rd Chinese Control Conference, pp. 4924-4929, 2014.
    [23] J. Zhu, H. Zou, S. Rosset, and T. Hastie, “Multi-class AdaBoost,” Statistics and Its Interface, vol. 2, no. 3, pp. 349-360, 2009.
    [24] R. de Charette and F. Nashashibi, “Real time visual traffic lights recognition based on spot light detection and adaptive traffic lights templates,” 2009 IEEE Intelligent Vehicles Symposium, pp. 358-363, 2009.
    [25] R. de Charette and F. Nashashibi, “Traffic light recognition using image processing compared to learning processes,” 2009 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 333-338, 2009.
    [26] F. Lindner, U. Kressel, and S. Kaelberer, “Robust recognition of traffic signals,” 2004 IEEE Intelligent Vehicles Symposium, pp. 49-53, 2004.
    [27] P. Beyersdorf, “Polarization of Light,” lecture notes, San Jose State University, http://www.sjsu.edu/faculty/beyersdorf/Archive/Phys158F06/11-9%20polarization%20of%20light.pdf, 2006.
    [28] R. Robilotto, and Q. Zaidi, “Perceived transparency of neutral density filters across dissimilar backgrounds,” Journal of Vision, vol. 4, no. 3, pp. 5-5, 2004.
    [29] “PIC18F4550 Datasheet,” Microchip Technology Inc., http://ww1.microchip.com/downloads/en/devicedoc/39632c.pdf, 2006.
    [30] 毛星雲, OpenCV Programming Reference Manual (OpenCV 程式設計參考手冊), 松崗資產有限公司, 2015.
    [31] K. v. d. Sande, T. Gevers, and C. Snoek, “Evaluating Color Descriptors for Object and Scene Recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 9, pp. 1582-1596, 2010.
    [32] A. R. Smith, “Color gamut transform pairs,” ACM Siggraph Computer Graphics, vol. 12, no. 3, pp. 12-19, 1978.
    [33] “HSV color space,” web page, http://wikipedia.qwika.com/en2zh/HSV_color_space, 1978.
    [34] 鍾國亮, Image Processing and Computer Vision (影像處理與電腦視覺), 東華書局, 2015.
    [35] F. Meyer, and S. Beucher, “Morphological segmentation,” Journal of Visual Communication and Image Representation, vol. 1, no. 1, pp. 21-46, 1990/09/01, 1990.
    [36] F. Meyer, “Contrast feature extraction in Quantitative Analysis of Microstructures in Material Sciences,” Biology and Medicine,(JL Chermant, ed.),(Stuttgart, FRG), Riederer Verlag, Special issue of Practical Metallography, 1978.
    [37] N. Otsu, “A threshold selection method from gray-level histograms,” IEEE Transactions on Systems, Man, and Cybernetics, vol. 9, no. 1, pp. 62-66, 1979.
    [38] F. Shih, Image processing and mathematical morphology: CRC press Boca Raton, 2009.
    [39] 蔡坤佑, “A sign language translation system on mobile devices,” master's thesis, Department of Engineering Science, National Cheng Kung University, pp. 1-51, 2016.
    [40] F. Chang, and C.-J. Chen, “A Component-Labeling Algorithm Using Contour Tracing Technique,” ICDAR, pp. 741-745, 2003.
    [41] F. Chang, C.-J. Chen, and C.-J. Lu, “A linear-time component-labeling algorithm using contour tracing technique,” Computer Vision and Image Understanding, vol. 93, no. 2, pp. 206-220, 2004.
    [42] T. Barbu, “Pedestrian detection and tracking using temporal differencing and HOG features,” Computers & Electrical Engineering, vol. 40, no. 4, pp. 1072-1079, 2014.
    [43] R. Kadota, H. Sugano, M. Hiromoto, H. Ochi, R. Miyamoto, and Y. Nakamura, “Hardware architecture for HOG feature extraction,” pp. 1330-1333.
    [44] Web page, http://blog.csdn.net/xl890727/article/details/7920199, 2012.
    [45] C.-C. Chang, and C.-J. Lin, “LIBSVM: a library for support vector machines,” ACM Transactions on Intelligent Systems and Technology (TIST), vol. 2, no. 3, pp. 27, 2011.

    Full text available on campus: 2022-05-01
    Full text available off campus: 2022-05-01