
Graduate Student: 邱郁智 (Chiu, Yu-Chih)
Thesis Title: 基於AI人體骨架追蹤技術研製高齡有氧運動評估系統之前期研究
(A Pilot Study of an AI Human Skeleton Tracking System for Elderly Aerobic Exercise Evaluation)
Advisor: 杜翌群 (Du, Yi-Chun)
Degree: Master
Department: College of Engineering - Department of BioMedical Engineering
Year of Publication: 2024
Academic Year of Graduation: 112 (2023-2024)
Language: English
Number of Pages: 61
Chinese Keywords: 肌少症, 人體姿態估計, 關節資訊轉換, 3D關節運動偵測, 有氧運動關節偵測
English Keywords: Sarcopenia, human posture estimation (HPE), joint information transformation, 3D joint motion detection, aerobic exercise joint detection

    Sarcopenia is one of the primary causes of disability in the elderly. Numerous studies have shown that aerobic exercise and strength training can effectively alleviate its symptoms in older adults. Reducing the risk of exercise-related injury and minimizing the influence of environmental factors have therefore become crucial issues, and a home exercise detection system equipped with human posture estimation (HPE) technology can effectively address them. Previous research proposed comparing the relative positions of Kinect joint coordinates for various movements and developed a scoring method accordingly. However, joint coordinate data are large, redundant, and sensitive to posture variation, and that scoring method requires manual analysis of each movement, leading to inefficiency, subjectivity, and inconsistency. To overcome these challenges, this system integrates HPE technology, joint information transformation, and 3D joint motion detection into a method for detecting joint movements with an RGB camera. The approach is based on OpenPose and VideoPose3D and uses joint angles to assess the completeness of aerobic movements. The experiment consists of two parts. The first evaluates the accuracy of HPE joint angle calculation by comparing its measurements against those of inertial sensors using root mean square error (RMSE) analysis. The second detects joint movements from 3D joint angle variations, assessing the detection results for single joint movements, compound joint movements, and aerobic exercises, and evaluating each subject's exercise performance.
    The experimental results indicate that the accuracy of 3D joint motion detection exceeds 90% for single and compound joint movements and 80% for aerobic exercise joint detection, demonstrating the feasibility of using HPE for 3D motion recognition.
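The thesis text itself includes no code; as a minimal sketch of the pipeline the abstract describes (3D joint coordinates from VideoPose3D, converted to joint angles, then checked against thresholds to detect motion and scored against inertial-sensor references by RMSE), the core computations might look as follows. The function names, the 60°/150° thresholds, and the repetition-counting logic are illustrative assumptions, not the thesis's actual implementation:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle in degrees at joint b, formed by 3D points a-b-c
    (e.g. shoulder-elbow-wrist for elbow flexion)."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    # Clip guards against floating-point values slightly outside [-1, 1].
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def count_repetitions(angles, low=60.0, high=150.0):
    """Count flexion/extension cycles from a joint-angle time series:
    a repetition is registered each time the angle drops below `low`
    after having exceeded `high` (hypothetical thresholds)."""
    reps, extended = 0, False
    for ang in angles:
        if ang > high:
            extended = True
        elif ang < low and extended:
            reps += 1
            extended = False
    return reps

def rmse(estimated, reference):
    """Root mean square error between camera-estimated joint angles
    and a reference series (e.g. from inertial sensors)."""
    err = np.asarray(estimated, float) - np.asarray(reference, float)
    return float(np.sqrt(np.mean(err ** 2)))
```

In practice the angle series would be computed per frame from the 17-joint 3D skeleton that VideoPose3D lifts from OpenPose's 2D keypoints, and per-joint thresholds would be tuned per exercise; the dynamic-threshold variant mentioned in the table of contents is not reproduced here.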

    Abstract (Chinese) i
    Abstract ii
    Acknowledgements iv
    Contents v
    List of Tables vii
    List of Figures viii
    Chapter 1. Introduction 1
      1.1. Preface 1
      1.2. Research Background 4
        1.2.1. Development of HPE Technology 5
        1.2.2. Applications Related to Calculating Joint Angles in Motion 6
        1.2.3. Image-Based Motion Recognition Technology 8
      1.3. Research Motivation and Purpose 9
    Chapter 2. Literature Review 10
      2.1. Research Related to Sarcopenia 10
      2.2. Research on HPE 11
        2.2.1. Two-Dimensional HPE 11
        2.2.2. Three-Dimensional HPE 13
      2.3. Applications for Calculating Joint Angles in Movement 14
    Chapter 3. Materials and Methods 16
      3.1. System Architecture 16
      3.2. HPE Methods 16
        3.2.1. The Components and Application Methods of Wearable Devices 17
        3.2.2. Image Recognition Methods and Processes 18
      3.3. Joint Information Transformation 20
      3.4. Joint Motion Recognition Techniques 22
    Chapter 4. Experimental Design and Results 25
      4.1. Verification of Joint Angle Accuracy in Image Recognition 25
      4.2. Single Joint Movement Detection 31
      4.3. Composite Joint Movement Detection 34
      4.4. Aerobic Exercise Joint Movement Detection 38
      4.5. Detection of Changes in Accuracy Dynamic Thresholds during Aerobic Exercise 42
    Chapter 5. Discussion and Conclusion 45
    References 49


    Full-text access: on-campus, immediately available; off-campus, immediately available.