
Graduate Student: Huang, Ya-Wen (黃雅雯)
Thesis Title: An Intelligent Driver Assistance System Based on Spatiotemporal Tracking from Car Image Sequence (利用時間與空間資訊追蹤汽車影像序列之智慧型駕駛輔助系統)
Advisor: Sun, Yung-Nien (孫永年)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2004
Graduation Academic Year: 92 (ROC calendar)
Language: English
Number of Pages: 75
Chinese Keywords: 駕駛輔助系統 (driver assistance system), 追蹤汽車 (vehicle tracking)
English Keywords: Vehicle Tracking, Driver Assistance System
    In recent years, developing intelligent vehicle systems and applying them to reduce increasingly serious traffic accidents has been regarded as an important research field. In this thesis, we propose an intelligent driver assistance system whose functions include automatic detection of the vehicle ahead, vehicle tracking, and reporting parameters of the front vehicle to the driver, such as its distance and height. The driver's own direction of travel is also obtained from the spatio-temporal patterns of the lane lines.
    Because the apparent angle, length, and even width of the lane are affected by factors such as the mounting parameters of the camera, we use inverse perspective mapping to transform the perspective image into a ground-plane coordinate system. Through this geometric transformation, information such as the distance and height of the front vehicle can be obtained.
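    As a rough illustration of this geometric transformation, the sketch below maps an image point to ground-plane coordinates under a flat-road, pinhole-camera assumption; the function name, parameters, and sign conventions are illustrative and are not taken from the thesis.

        import math

        def image_to_ground(u, v, f_px, cam_height, pitch_rad):
            """Map an image point to flat-ground coordinates (lateral X, forward Z).

            u, v       : pixel offsets from the principal point (u right, v down)
            f_px       : focal length in pixels
            cam_height : camera height above the road plane, in meters
            pitch_rad  : downward tilt of the camera, in radians
            """
            # Rotate the viewing ray (u, v, f) from the camera frame into a
            # road-aligned frame (X right, Y down toward the road, Z forward).
            ray_y = v * math.cos(pitch_rad) + f_px * math.sin(pitch_rad)
            ray_z = -v * math.sin(pitch_rad) + f_px * math.cos(pitch_rad)
            if ray_y <= 0:              # the ray points at or above the horizon
                return None
            s = cam_height / ray_y      # scale at which the ray meets the road
            return u * s, ray_z * s     # lateral offset and forward distance (m)

    For example, with an 800-pixel focal length, a camera 1.2 m above the road, and a 3-degree pitch, a point 40 pixels below the principal point maps to a ground point roughly 11.7 m ahead; applying the same mapping to the bottom edge of a detected vehicle would yield its distance.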
    In the proposed spatio-temporal vehicle tracking algorithm, a two-dimensional rectangle is used to delineate the contour of the vehicle and to track it. A posterior probability model is used to track the vehicle in the lane ahead; the model combines spatial and temporal information to maximize the posterior probability of the front vehicle's motion parameters. For the temporal information, the state of the front vehicle is predicted from its known past trajectory and from the driver's own movement. For the spatial information, shape and intensity features are combined to measure similarity. By integrating the temporal and spatial information, a more stable tracking result is obtained. Furthermore, to improve the accuracy of the tracking result, a Markov model is adopted to fine-tune the position of the rectangle so that it approaches the actual edges of the vehicle. In addition, with prior training based on principal component analysis, the system can recognize and classify the vehicle ahead to determine its type, such as a sedan, a truck, or a recreational vehicle.
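    The posterior-probability formulation can be sketched as follows: a temporal motion prior scores how consistent a candidate rectangle is with the predicted state, a spatial likelihood scores how well it matches the appearance features, and the candidate with the largest product is kept. The Gaussian prior, the candidate generation, and the likelihood function below are illustrative assumptions rather than the thesis's exact model.

        import numpy as np

        def track_step(candidates, predicted_box, prior_sigma, likelihood_fn):
            """Pick the candidate box maximizing posterior ~ motion prior * spatial likelihood.

            candidates    : list of boxes (x, y, w, h) sampled around the prediction
            predicted_box : box predicted from the past trajectory and ego-motion
            prior_sigma   : spread of the Gaussian motion prior, in pixels
            likelihood_fn : returns a spatial matching score (> 0) for a box
            """
            pred = np.asarray(predicted_box, dtype=float)
            best_box, best_log_post = None, -np.inf
            for box in candidates:
                diff = np.asarray(box, dtype=float) - pred
                log_prior = -0.5 * np.sum((diff / prior_sigma) ** 2)   # temporal term
                log_like = np.log(likelihood_fn(box) + 1e-9)           # spatial term
                if log_prior + log_like > best_log_post:
                    best_log_post, best_box = log_prior + log_like, box
            return best_box

    In log space the product of prior and likelihood becomes a sum, which is why the two terms are simply added before comparison.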
    The image acquisition equipment used in this system is simple: only a digital CCD camera and a laptop computer are required. The system has been tested on many image sequences of real roads, captured during the daytime, at dusk, and at night, on both highways and ordinary roads. Evaluation of the computed vehicle parameters confirms that the system can successfully track the vehicle ahead over long periods and obtain highly accurate information about it. In the future, we will work toward making the system more complete and adaptive, so that it also performs well under various weather conditions, at night, and in various traffic situations, thereby improving driving safety.

       In recent years, intelligent vehicle (IV) systems have been regarded as an important research field and applied to alleviate growing traffic problems. In this thesis, an intelligent and automatic driver assistance system is proposed. Its functions include automatic vehicle detection, spatio-temporal vehicle tracking, and estimation of front-vehicle information such as the relative distance and height. The driver's own movement is also measured from the spatio-temporal patterns generated by the lane lines. The relative distance and height are estimated by using inverse perspective mapping (IPM) to obtain ground-plane coordinates.
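      As a rough illustration of how the driver's movement could be read off such a spatio-temporal lane-line pattern, the sketch below records the detected lane-line position on a fixed image row in each frame and fits a line over time, so that the slope of the trace approximates the lateral drift. The function name, units, and the linear fit are illustrative assumptions, not the thesis's exact procedure.

        import numpy as np

        def lateral_drift_from_lane_trace(lane_x_per_frame, fps, meters_per_pixel):
            """Estimate lateral drift (m/s) from a lane line's spatio-temporal trace.

            lane_x_per_frame : detected lane-line x position (pixels) on a fixed
                               image row, one value per frame
            fps              : frame rate of the sequence
            meters_per_pixel : ground-plane scale of that image row
            """
            t = np.arange(len(lane_x_per_frame)) / fps      # time axis in seconds
            x = np.asarray(lane_x_per_frame, dtype=float)   # lane position in pixels
            slope_px_per_s, _ = np.polyfit(t, x, 1)         # fit x = a*t + b
            return slope_px_per_s * meters_per_pixel        # lateral drift in m/s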
      In the proposed spatio-temporal vehicle tracking algorithm, the front vehicle is tracked with a bounding box, and its state is estimated by maximizing a posterior probability that combines temporal and spatial information. The temporal information predicts the states of the front vehicle while taking the driver's movement into account, and the spatial information measures the likelihood of these predicted states by integrating shape and color features. To increase the accuracy, a Markov random field (MRF) is also adopted to refine the location of the tracked bounding box. In addition, the proposed system can recognize and classify the vehicle type, such as sedan, truck, or recreational vehicle, by learning various vehicle patterns with the principal component analysis (PCA) technique.
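      A minimal sketch of the PCA-based recognition step is given below: vectorized vehicle patches are projected onto a learned subspace, and a patch is assigned to the class whose projected mean is nearest. The patch representation, number of components, and nearest-mean classifier are illustrative assumptions and may differ from the models trained in the thesis.

        import numpy as np

        def train_pca(train_patches, n_components):
            """Learn a PCA subspace from vectorized, equally sized vehicle patches."""
            X = np.asarray(train_patches, dtype=float)   # shape (n_samples, n_pixels)
            mean = X.mean(axis=0)
            # Principal axes are the right singular vectors of the centered data.
            _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
            return mean, Vt[:n_components]

        def classify_vehicle(patch, mean, components, class_means):
            """Project a patch into the subspace and return the nearest class label."""
            coeffs = components @ (np.asarray(patch, dtype=float) - mean)
            distances = {label: np.linalg.norm(coeffs - m)
                         for label, m in class_means.items()}
            return min(distances, key=distances.get)

      Here class_means would hold, for each vehicle type (e.g. sedan, truck, recreational vehicle), the mean PCA coefficient vector of that type's training patches.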
      In the proposed system, only a single CCD camera is required; it is mounted on the vehicle and connected to a laptop to capture the view ahead. The system was tested on image sequences captured at different times of day and in different environments. It is mostly successful in identifying and tracking the front vehicle in these video sequences over long periods of time, and the estimated front-vehicle information is highly accurate.

    Chapter 1 Introduction
      1.1 Motivation
      1.2 Previous Work
      1.3 Outline
    Chapter 2 Theoretical Background
      2.1 Inverse Perspective Mapping
      2.2 Principal Components Analysis
        2.2.1 Concept
        2.2.2 Method
    Chapter 3 Driver Assistance Functions
      3.1 Lane Detection
      3.2 Vehicle Detection
        Scheme 1: Daytime
        Scheme 2: Dusk
        Scheme 3: Nighttime
      3.3 Vehicle Recognition and Classification
      3.4 Driver's Movement Measurement
      3.5 Spatio-temporal Vehicle Tracking
        Step 1: Tracking the vehicle with a fixed-ratio bounding box
        Step 2: Adjusting the edges of the bounding box
      3.6 Distance and Height of the Front Vehicle
    Chapter 4 Experimental Results and Discussion
      4.1 Experimental Environment
      4.2 Experimental Results
      4.3 Discussion
    Chapter 5 Conclusions and Future Work
      5.1 Conclusions
      5.2 Future Work
    References


    Full-text access: on campus from 2009-08-19; off campus from 2009-08-19.