| Graduate Student: | Sung, Cheng-Wei (宋正威) |
|---|---|
| Thesis Title: | Image Interactive Remote Car with Object Recognition and Distance Measurement Based on Kinect (基於Kinect之物件辨識及測距之影像互動遙控車) |
| Advisor: | Tai, Cheng-Chi (戴政祺) |
| Co-advisor: | Luo, Ching-Hsing (羅錦興) |
| Degree: | Master |
| Department: | Department of Electrical Engineering, College of Electrical Engineering and Computer Science |
| Year of Publication: | 2017 |
| Academic Year of Graduation: | 105 (ROC calendar) |
| Language: | English |
| Pages: | 70 |
| Keywords: | Object Recognition, Distance Measurement, Obstacle Detection, Embedded System, Kinect Sensor, Wireless Transmission |
With the growing number of elderly and disabled people in recent years, many assistive devices have appeared on the market. For people with severe disabilities, however, power wheelchairs and electric scooters remain difficult to operate and cannot adequately guarantee their safety. Because they are highly dependent on others and cannot go outside by themselves, people with severe disabilities lack opportunities for self-care and social participation. This system is designed for the household environment: through real-time images it enhances the interaction between bedridden patients and their family or neighbors, building an exclusive remote interaction mode for the user.
The system mainly consists of embedded systems, a remote-controlled car, a personal computer (PC), and a Kinect sensor, and transmits images and control commands wirelessly over Wi-Fi and Bluetooth. The Kinect sensor provides RGB images and depth information simultaneously, which improves the efficiency of the system. A server program on the PC serves as the user interface, displaying the safety information and operating instructions needed while driving. The system's tasks are divided between two embedded microcontrollers: the Odroid-XU3 handles the computationally heavy parts, namely image processing and returning images over Wi-Fi, while the STM32F407 development board controls the motors of the remote car. The two embedded systems exchange commands over Bluetooth.
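The abstract does not specify the command format used on the PC-to-STM32F407 motor link, so the following is only an illustrative sketch: the frame layout, the opcode table, and the XOR checksum are assumptions, not the author's actual protocol.

```python
# Hypothetical command frame for the motor-control link.
# The real protocol is not given in the abstract; the byte layout,
# opcode values, and XOR checksum below are illustrative assumptions.

START = 0xAA  # frame delimiter (assumed)

# Assumed opcode table for the remote car's motions.
OPCODES = {"forward": 0x01, "backward": 0x02, "left": 0x03,
           "right": 0x04, "stop": 0x00}

def build_frame(command: str, speed: int) -> bytes:
    """Pack a motor command as [START, opcode, speed, checksum]."""
    if command not in OPCODES:
        raise ValueError(f"unknown command: {command}")
    if not 0 <= speed <= 255:
        raise ValueError("speed must fit in one byte")
    opcode = OPCODES[command]
    checksum = opcode ^ speed  # simple XOR integrity check (assumed)
    return bytes([START, opcode, speed, checksum])

frame = build_frame("forward", 128)
```

On the PC side such a frame would then be written to the Bluetooth serial port (for example with pyserial's `Serial.write`); on the STM32F407 side, the firmware would resynchronize on the start byte and verify the checksum before driving the motors.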
For image processing, this research retrieves depth data through the Kinect sensor. After optimization, the distance to an object can be accurately calculated from the depth map, and obstacles are then located by binarization, achieving both distance measurement and obstacle avoidance. In addition, two object recognition methods are implemented and switched according to the requirement: when objects must be recognized precisely, the SIFT (Scale-Invariant Feature Transform) algorithm is used; when objects must be recognized rapidly, a Haar classifier model is used. For image transmission and control, the user receives the images processed by the Odroid-XU3 in the control interface on the PC over the network and, based on those images, sends control commands to the STM32F407 board that handles motor control. The remote car can thus be operated freely in the household environment while remaining safe.
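A minimal sketch of the depth-binarization step is shown below, using NumPy in place of the thesis code. The raw-to-metres conversion is the widely cited OpenKinect approximation for the first-generation Kinect; the thesis's own optimized calibration may differ, and the 1 m threshold is an assumed example value.

```python
import numpy as np

def raw_depth_to_metres(raw: np.ndarray) -> np.ndarray:
    """Convert 11-bit Kinect raw depth values to metres.

    Uses the common OpenKinect approximation; the thesis's optimized
    calibration is not reproduced in the abstract and may differ.
    """
    raw = raw.astype(np.float64)
    metres = 0.1236 * np.tan(raw / 2842.5 + 1.1863)
    metres[raw >= 2047] = np.inf  # 2047 marks invalid / no-reading pixels
    return metres

def obstacle_mask(depth_m: np.ndarray, threshold_m: float = 1.0) -> np.ndarray:
    """Binarize the depth map: True where something is nearer than the threshold."""
    return np.isfinite(depth_m) & (depth_m < threshold_m)

# Toy 2x2 raw depth frame: near pixel, far pixel, invalid pixel, near pixel.
raw = np.array([[500, 900], [2047, 700]], dtype=np.uint16)
depth = raw_depth_to_metres(raw)
mask = obstacle_mask(depth, threshold_m=1.0)
```

In a full pipeline, connected regions of the binary mask would be grouped into obstacle candidates, and the minimum depth inside each region gives the obstacle distance.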
In the experiments, the depth information obtained from the Kinect sensor successfully measured the distance to obstacles, with an error of 1.71% in a bright indoor area. For objects in the database, the SIFT algorithm achieved a 100% recognition rate in a bright indoor area, but with a 1.4-second delay, whereas the Haar classifier model achieved a 65% recognition rate with almost no delay; switching between the two modes allows objects to be recognized effectively. In addition, images are compressed before transmission, so the display on the PC sustains 8 FPS. Together with the Wi-Fi and Bluetooth wireless links and the data communication between the two embedded systems, these results show that the system achieves stability, safety, and convenience.
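To see why compression matters for sustaining 8 FPS, a back-of-envelope bandwidth comparison helps. The 640x480 24-bit frame size is the Kinect's standard color mode; the 10:1 compression ratio below is an illustrative assumption, since the abstract does not state the codec or its settings.

```python
# Back-of-envelope bandwidth for streaming Kinect RGB frames over Wi-Fi.
# 640x480 24-bit RGB is the Kinect's standard color mode; the 10:1
# compression ratio is an illustrative assumption, not a measured value.

WIDTH, HEIGHT, BYTES_PER_PIXEL = 640, 480, 3
FPS = 8

raw_frame_bytes = WIDTH * HEIGHT * BYTES_PER_PIXEL   # 921,600 bytes/frame
raw_mbps = raw_frame_bytes * FPS * 8 / 1e6           # megabits per second, ~59

COMPRESSION_RATIO = 10  # assumed typical JPEG ratio for indoor scenes
compressed_mbps = raw_mbps / COMPRESSION_RATIO       # ~6 Mbps after compression
```

Uncompressed, the stream would need roughly 59 Mbps, which strains a typical indoor Wi-Fi link once protocol overhead and motor-command traffic share it; compressing each frame before transmission brings it into a comfortable range.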
This research builds on past results and improves practicality through different modules and image processing algorithms. Besides serving as an assistive device for household interaction, it could in the future be extended to an outdoor electric wheelchair for small-range community activities, increasing the social participation opportunities of people with severe disabilities while reducing the burden on caregivers.