
Graduate student: 蕭東榕
Hsiao, Tung-Jung
Thesis title: 智慧型攝影機網路射頻與視覺定位之實現與分析
Investigation of RF-Vision Based Localization in Smart Camera Network
Advisor: 莊智清
Juang, Jyh-Ching
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Electrical Engineering
Year of publication: 2010
Graduation academic year: 98 (ROC calendar, 2009-2010)
Language: Chinese
Pages: 110
Keywords (Chinese): 射頻定位, 視覺定位, 智慧型攝影機
Keywords (English): RF-Vision Based Localization, Vision Localization, Smart Camera
  • Hybrid sensing systems have developed rapidly in recent years. Their strength lies in combining different kinds of sensors so that the weaknesses of one are compensated by the others, broadening the range of possible applications. For example, an intelligent vehicle may simultaneously carry a camera system, a GPS receiver, an ultrasonic ranging system, and a wireless communication system; a context-aware living environment may combine RF technology, camera systems, and body-monitoring sensors; and a robot may use cameras, wireless sensors, infrared, and ultrasonic devices together.
    The goal of this thesis is to build a smart camera network for indoor localization of people. Wireless sensing devices assist the smart cameras, so that image data can be transmitted back to the host wirelessly, while RF localization information improves the completeness of the wireless-camera indoor localization system and removes the blind spots caused by insufficient camera coverage. The thesis first describes the vision-based human detection system and its implementation on an embedded vision sensor module, then briefly introduces the characteristics of RF signals and basic RF localization algorithms, followed by the hardware platform, the data transmission scheme, and the combined vision-RF localization system; finally, experiments in an indoor environment verify the localization performance of the system.
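The RF component of the localization system described above turns received signal quality into range estimates and then fixes a position from several anchors. As a hedged illustration (the path-loss constants, anchor layout, and function names here are invented for the example, not taken from the thesis), a log-distance path-loss inversion followed by linear least-squares trilateration can be sketched as:

```python
import numpy as np

def lqi_to_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exp=2.5):
    """Invert the log-distance path-loss model
    RSSI(d) = RSSI(1m) - 10*n*log10(d), so d = 10^((RSSI(1m) - RSSI)/(10n)).
    The reference power and exponent are illustrative values only."""
    return 10.0 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exp))

def trilaterate(anchors, distances):
    """Least-squares trilateration from >= 3 anchors: subtracting the first
    sphere equation from the others removes the quadratic terms, leaving a
    linear system A x = b."""
    anchors = np.asarray(anchors, dtype=float)
    d = np.asarray(distances, dtype=float)
    p0, d0 = anchors[0], d[0]
    A = 2.0 * (anchors[1:] - p0)
    b = (d0**2 - d[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(p0**2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Noise-free check: ranges measured from a known point are recovered exactly.
anchors = [(0.0, 0.0), (6.0, 0.0), (0.0, 6.0)]
true_pos = np.array([2.0, 3.0])
dists = [np.linalg.norm(true_pos - np.array(a)) for a in anchors]
print(trilaterate(anchors, dists))  # close to [2. 3.]
```

In practice the ranges derived from LQI are noisy, which is one reason the thesis fuses them with vision information rather than relying on RF alone.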

    Heterogeneous sensor networks are an important research subject today. The approach combines different sensors so that each compensates for the others' disadvantages, making sensor applications more widespread and robust. In intelligent-vehicle research, cameras, GPS receivers, and ultrasonic sensors are combined, and vehicles communicate with one another through wireless network technology. A context-aware system may integrate several subsystems, such as visual surveillance, wireless sensing, and monitoring. In robotics, cameras, RF sensors, IR sensors, or ultrasonic sensors can be adopted to meet different requirements.
    The goal of this thesis is to implement a human localization system using a smart camera network and an RF-vision based localization algorithm in an indoor environment. The smart camera network is constructed from embedded vision sensors and RF sensors: the embedded vision sensors implement the human detection system, and the RF sensors transmit the human detection results and the RF link quality indicator (LQI) to a computer. In addition, the RF-vision localization algorithm resolves camera-coverage problems, both when a person is observed by two or more cameras and when the vision information alone is insufficient for localization; the system fuses vision and RF information to locate the person. The thesis proposes a kernel particle filter for human tracking, and experimental results demonstrate the performance of the proposed algorithm in the human localization system.
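The kernel particle filter mentioned in the abstract combines a standard particle filter with mean-shift refinement over a kernel density estimate of the weighted particle set. The following is a minimal 2D sketch under assumed models (random-walk motion, Gaussian measurement likelihood, invented noise levels and bandwidths), not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian_kernel_weights(particles, x, bandwidth):
    """Gaussian kernel evaluated at each particle, centered at x."""
    d2 = np.sum((particles - x) ** 2, axis=1)
    return np.exp(-0.5 * d2 / bandwidth**2)

def kernel_particle_filter_step(particles, weights, measurement,
                                meas_std=0.5, bandwidth=0.4, n_ms_iters=3):
    """One predict-update-refine cycle: random-walk prediction, Gaussian
    measurement update, then a few mean-shift iterations that move each
    particle toward a mode of the weighted kernel density estimate."""
    # Predict: random-walk motion model (illustrative).
    particles = particles + rng.normal(0.0, 0.2, size=particles.shape)
    # Update: weight by the Gaussian measurement likelihood.
    d2 = np.sum((particles - measurement) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * d2 / meas_std**2)
    weights /= weights.sum()
    # Refine: mean-shift over the weighted KDE.
    for _ in range(n_ms_iters):
        shifted = np.empty_like(particles)
        for i, p in enumerate(particles):
            k = weights * gaussian_kernel_weights(particles, p, bandwidth)
            shifted[i] = (k[:, None] * particles).sum(axis=0) / k.sum()
        particles = shifted
    return particles, weights

# Track a stationary target at (2, 3) from noisy position measurements.
particles = rng.uniform(0.0, 6.0, size=(200, 2))
weights = np.full(200, 1.0 / 200)
for _ in range(10):
    z = np.array([2.0, 3.0]) + rng.normal(0.0, 0.3, size=2)
    particles, weights = kernel_particle_filter_step(particles, weights, z)
estimate = (weights[:, None] * particles).sum(axis=0)
print(estimate)  # typically close to (2, 3)
```

The mean-shift refinement is what distinguishes this from a plain particle filter: it concentrates particles near modes of the posterior, so fewer particles are needed for the same tracking accuracy.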

    Table of Contents:
    Abstract (Chinese)
    Abstract (English)
    Acknowledgements
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1  Introduction
      1.1 Preface
      1.2 Motivation and Objectives
      1.3 Literature Review
      1.4 Main Contributions
      1.5 Thesis Organization
    Chapter 2  Vision-Based Human Detection and Computer Vision Localization
      2.1 Color Spaces
      2.2 Target Detection in Images
        2.2.1 Background Modeling and Foreground Segmentation
          2.2.1.1 Adaptive Background Modeling
          2.2.1.2 Human Shadow Detection
        2.2.2 Morphological Processing
        2.2.3 Cluster Growing Algorithm
      2.3 Computer Vision
        2.3.1 Camera Model
        2.3.2 Reconstruction of 3D Coordinate Points
    Chapter 3  Wireless RF Networks
      3.1 Overview of Wireless Sensor Networks
      3.2 ZigBee Wireless Communication Technology
      3.3 Signal Quality Analysis
        3.3.1 Electromagnetic Wave Properties
        3.3.2 Factors Affecting Signal Quality
        3.3.3 Signal Attenuation Model
      3.4 RF Localization Algorithms
        3.4.1 Trilateration with Wireless Signals
        3.4.2 Fingerprint-Matching Localization
      3.5 Signal Quality to Distance Conversion
    Chapter 4  Combined Vision and RF Localization System
      4.1 Vision-Based Human Detection System Architecture
        4.1.1 Image Region Segmentation
      4.2 Wireless Communication System Architecture
      4.3 Host-Side Localization System Architecture
        4.3.1 Camera FOV Analysis
        4.3.2 Image Data Matching and Localization Algorithm
      4.4 Kernel Particle Filter
        4.4.1 Filter Overview
        4.4.2 Particle Filter
        4.4.3 Kernel Density Estimation
        4.4.4 Mean Shift
        4.4.5 Simulation and Analysis
    Chapter 5  System Implementation and Experimental Results
      5.1 Vision Sensor Module Overview
      5.2 Wireless Communication Module Overview
      5.3 Localization Experiments
        5.3.1 Experimental Setup
        5.3.2 Experimental Results and Analysis
    Chapter 6  Conclusions and Future Work
      6.1 Conclusions
      6.2 Future Work
    References
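Chapter 2 of the outline above covers adaptive background modeling and foreground segmentation. As a hedged sketch of that general technique (an exponential running-average background with illustrative parameters, not the thesis's exact model):

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Adaptive background model: exponential running average
    B <- (1 - alpha) * B + alpha * F, so slow scene changes are absorbed."""
    return (1.0 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, threshold=25.0):
    """Pixels whose absolute difference from the background exceeds the
    threshold are marked as foreground."""
    return np.abs(frame.astype(float) - bg) > threshold

# Toy 8x8 grayscale scene: uniform background with one bright 2x2 blob.
bg = np.full((8, 8), 50.0)
frame = np.full((8, 8), 50.0)
frame[2:4, 2:4] = 200.0           # the "person" entering the scene
mask = foreground_mask(bg, frame)
print(mask.sum())                 # prints 4 (the 2x2 foreground blob)
bg = update_background(bg, frame)
```

A real system would follow this with shadow suppression and morphological cleanup, as the thesis's Sections 2.2.1.2 and 2.2.2 indicate.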


    Full-text access - on campus: available immediately; off campus: available from 2012-08-03.