| Graduate Student: | Li, You-Rong (李侑蓉) |
|---|---|
| Thesis Title: | Robotic Arm Pose Simulation and Imitation using LSTM in Unity3D |
| Advisor: | Su, Wen-Yu (蘇文鈺) |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science, Department of Computer Science and Information Engineering |
| Year of Publication: | 2023 |
| Academic Year: | 111 |
| Language: | English |
| Pages: | 41 |
| Keywords (Chinese): | 機械人模仿、虛擬模擬、姿態相似性 |
| Keywords (English): | Robotic imitation, Virtual simulation, Posture similarity |
In recent years, robotic arms have played a significant role not only in the industrial sector but also in areas involving the replication of human arm functionality, such as medical surgery, artistic painting, service robots, and even human-robot dance performances, demonstrating the diverse domains in which robotic arms are employed. As a result, robotic arms are transitioning from factory settings into the lives of ordinary consumers, and their range of applications continues to expand. However, operating robotic arms in real-world scenarios involves significant risks: improper handling can damage the robotic system itself and cause destruction to the surrounding environment. Establishing virtual environments to simulate and design robotic arm operations therefore becomes all the more crucial. Through such virtual simulations, we can mitigate the risks associated with manipulating robotic arms and conduct exploration and experimentation in a safer environment.
By collecting data and training robotic arm models in a virtual environment, we can achieve our desired application goals. In this work, the virtual environment is built on Unity3D, a three-dimensional animation and gaming platform. In addition to the commonly used inverse kinematics control method, we utilize a machine learning model to learn a control approach that not only drives the end-effector to the position of the imitated target but also makes the overall posture of the robotic arm similar to that target. This allows the robotic arm to closely mimic the imitated target's motion through its overall movement and posture, simplifying the steps and challenges of operating the arm while achieving real-time control. This study demonstrates posture imitation with a 6-degree-of-freedom (DOF) robotic arm for both a 5-DOF robotic arm and a human arm. The human arm posture is captured through real-time pose detection using Google MediaPipe and then transformed into control commands for the 6-DOF robotic arm.
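To make the learning setup concrete, the pipeline above can be sketched as a sequence model that maps a time series of imitated-arm pose features to the six joint angles of the robotic arm. The following is a minimal, hypothetical PyTorch sketch, not the thesis's actual network: the class name, layer sizes, and the assumption of a 9-D input feature (the 3D positions of the shoulder, elbow, and wrist landmarks that MediaPipe could provide per frame) are all illustrative choices.

```python
import torch
import torch.nn as nn

class PoseImitationLSTM(nn.Module):
    """Hypothetical sketch: map a sequence of imitated-arm pose features
    to per-frame 6-axis joint angles, so the end-effector reaches the
    target while the overall posture stays similar to the imitated arm."""

    def __init__(self, in_dim=9, hidden=128, out_dim=6):
        super().__init__()
        # LSTM over the pose sequence captures the temporal motion context
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        # Linear head regresses one joint-angle vector per time step
        self.head = nn.Linear(hidden, out_dim)

    def forward(self, x):
        # x: (batch, time, in_dim) pose features, e.g. shoulder/elbow/wrist
        # 3D positions (3 landmarks x 3 coords = 9 dims) from MediaPipe
        h, _ = self.lstm(x)
        return self.head(h)  # (batch, time, 6) joint angles per frame

model = PoseImitationLSTM()
demo = torch.randn(1, 30, 9)   # 30 frames of a hypothetical pose sequence
angles = model(demo)           # (1, 30, 6) predicted joint-angle trajectory
```

At inference time, such a model could be queried once per rendered frame in Unity3D, which is what makes the real-time control goal plausible: a single forward pass replaces an iterative inverse-kinematics solve.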