| Graduate Student: | 陳政國 Chen, Cheng-Guo |
|---|---|
| Thesis Title: | 自動化奉茶機器人 An Automatic Tea Serving Robot |
| Advisor: | 王宗一 Wang, Zong-Yi |
| Degree: | Master (碩士) |
| Department: | Department of Engineering Science, College of Engineering (工學院 工程科學系) |
| Publication Year: | 2015 |
| Graduation Academic Year: | 103 (ROC calendar, 2014–2015) |
| Language: | Chinese |
| Pages: | 76 |
| Chinese Keywords: | 手勢辨識、影像辨識、機械手臂、雙眼視覺系統、PID控制 |
| English Keywords: | Hand gesture recognition, Image recognition, Robot arm, Binocular vision, PID control |
This thesis presents the design and construction of an automatic tea serving robot that acts as a helper for serving tea to guests, allowing robotic products to blend into everyday human life. The robot is driven by high-torque brushed DC motors and is equipped with a five-axis robot arm supported by a binocular vision system for recognition and positioning, as well as a hand gesture recognition system for human-robot interaction. The user signals the task and target to the robot through hand gestures; once a gesture is recognized, the robot loads the environment map, the path, and the image pattern database of the target objects. The image recognition stage uses the SURF algorithm to extract feature points from the input image and match them against the loaded patterns, computes the position of the target object in three-dimensional space, and uses that position information to drive the robot and its arm to the required location to complete the water-pouring service.
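To make the recognition step concrete, the following is a minimal sketch of SURF feature extraction and matching using OpenCV's xfeatures2d contrib module; the file names, Hessian threshold, and match-count cutoff are illustrative assumptions rather than the values used in this thesis.

```python
import cv2

# Load a stored pattern from the object/gesture database and a camera frame.
# The file names are placeholders; the thesis's actual database is not specified.
pattern = cv2.imread("pattern_mug.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("camera_frame.png", cv2.IMREAD_GRAYSCALE)

# SURF (Speeded-Up Robust Features) detector; requires an opencv-contrib build.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(pattern, None)
kp2, des2 = surf.detectAndCompute(frame, None)

# Match descriptors with a brute-force matcher and Lowe's ratio test,
# keeping only distinctive correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# Simple decision rule: enough good matches means the pattern was found.
MIN_MATCHES = 10  # hypothetical threshold
if len(good) >= MIN_MATCHES:
    print(f"Pattern recognized with {len(good)} matched feature points")
else:
    print("Pattern not found in the frame")
```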
This work uses two cameras and coordinate transformation formulas to derive three-dimensionally reconstructed coordinates, converting the two-dimensional pixel coordinates of an object in the image into three-dimensional coordinates in space, so that the object's true position in the real world can be identified. Hand gesture recognition is used to achieve human-robot interaction. The brushed DC motors on the robot chassis are regulated by PID speed control, which stabilizes the robot's travel speed and keeps it on its path.
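For a rectified, parallel stereo pair, the pixel-to-world coordinate transformation reduces to the standard triangulation relation Z = f·B/d, where f is the focal length in pixels, B is the baseline between the two cameras, and d is the horizontal disparity. The sketch below illustrates this under those assumptions; the calibration values in the example are hypothetical and not taken from the thesis.

```python
def pixel_to_3d(u_left, v_left, u_right, f_px, baseline_m, cx, cy):
    """Convert a matched pixel pair from a rectified stereo rig into a
    3D point expressed in the left-camera coordinate frame.

    u_left, v_left : pixel coordinates of the object in the left image
    u_right        : pixel column of the same object in the right image
    f_px           : focal length in pixels
    baseline_m     : distance between the two camera centers in meters
    cx, cy         : principal point (image center) in pixels
    """
    disparity = u_left - u_right
    if disparity <= 0:
        raise ValueError("Disparity must be positive for a valid depth")

    z = f_px * baseline_m / disparity   # depth along the optical axis
    x = (u_left - cx) * z / f_px        # lateral offset
    y = (v_left - cy) * z / f_px        # vertical offset
    return x, y, z

# Example with hypothetical calibration values (not the thesis's numbers):
print(pixel_to_3d(u_left=420, v_left=250, u_right=380,
                  f_px=700.0, baseline_m=0.12, cx=320.0, cy=240.0))
```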
This study designs and implements an automatic water (tea) serving robot intended to fit into daily human life as a handy helper. The service robot uses high-torque brushed DC motors to drive its motion. For human interaction, it is equipped with a binocular vision system to identify hand gestures and object locations, and a five-axis robot arm to perform the service. Guests use hand gestures to signal the robot to start its service. Upon receiving a hand gesture from a guest, the robot loads the target image patterns, the environment map, and the path information. It then extracts SURF feature points from the input hand gesture images, matches them against the stored image patterns, and, on a successful match, uses the loaded map and path information to move to a fixed position facing the desk in front of the guest. The robot then uses its binocular vision system to identify the mug on the desk, calculates the position of the mug in three-dimensional space, and drives its five-axis robot arm, carrying a bottle of water or tea, to the correct position to pour water or tea into the mug. The robot can serve three guests in a single run.
Using the binocular vision system, this study identifies an object's true position in the real world by applying coordinate transformation formulas to reconstruct three-dimensional coordinates from the two-dimensional pixel coordinates of the object's image. Hand gesture recognition provides the human-robot interaction. The brushed DC motors on the robot's chassis are regulated by PID controllers, allowing the robot to travel stably along the planned path. Several experiments were conducted to verify the functionality of the robot.
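As an illustration of the speed loop, the sketch below implements a textbook discrete PID controller, u[k] = Kp·e[k] + Ki·Σ e·Δt + Kd·(e[k] − e[k−1])/Δt; the gains, sampling period, and simplified motor response are assumed for demonstration and are not the tuned values reported in the thesis.

```python
class PIDController:
    """Discrete PID controller for wheel-speed regulation."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        """Return the control effort (e.g., a PWM duty cycle) for one sample."""
        error = setpoint - measurement
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative


# Hypothetical usage: regulate a wheel toward 1.0 m/s with gains chosen for illustration.
pid = PIDController(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
speed = 0.0
for _ in range(5):
    effort = pid.update(setpoint=1.0, measurement=speed)
    speed += 0.02 * effort  # crude stand-in for the motor/encoder response
    print(f"effort={effort:.3f}, speed={speed:.3f}")
```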