| Graduate Student: | 莊智宇 (Chuang, Chih-Yu) |
|---|---|
| Thesis Title: | 基於YOLOv8實現工作場域人員動作辨識-以某光電公司為例 (Realization of Workplace Human Action Recognition Based on YOLOv8 - A Case of an Opto-electronic Corporation) |
| Advisor: | 朱威達 (Chu, Wei-Ta) |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering |
| Publication Year: | 2025 |
| Graduation Academic Year: | 113 (ROC calendar) |
| Language: | Chinese |
| Pages: | 53 |
| Chinese Keywords: | action detection, YOLO, personnel behavior management |
| English Keywords: | Image recognition, object detection, action detection, YOLO, personnel behavior management |
| Access Counts: | Views: 16, Downloads: 4 |
Factory personnel management is a daily challenge for front-line supervisors. Even when a company provides thorough training and regulations, the plant area is extensive and difficult to patrol comprehensively, so considerable manpower must still be devoted to on-site auditing and monitoring. To improve management efficiency and reduce occupational safety risks, this study adopts the YOLO (You Only Look Once) deep learning model and captures real-time images of the work site through webcams to build a system that recognizes personnel safety behaviors and reports anomalies, helping managers grasp on-site conditions in real time while lowering both safety risks and labor costs.

This study uses the YOLOv8 model to classify personnel actions, realizing automated monitoring of workplace safety and rule violations. The model classifies eight common behaviors: standing, moving, dangerous action, sitting on the floor, sitting on a chair, squatting, other, and marking (圈記). The results show that only the "moving" class has an accuracy of 84%, while all other actions reach accuracies above 95%, indicating stable overall recognition performance. After the model was deployed on site, dangerous and non-compliant actions showed a clear improving trend; the system helps managers raise management efficiency, effectively reduce auditing manpower, and lower occupational safety risks, demonstrating its practical value in management automation and economic benefit.
Factory personnel management faces challenges such as large plant areas and the difficulty of comprehensive manual inspection. This study introduces the YOLOv8 deep learning model, integrated with real-time image recognition, to establish an automated system for monitoring personnel behavior and reporting anomalies. The system identifies eight common actions, achieving over 95% accuracy for all actions except movement, demonstrating consistently stable recognition. In practical deployment, it effectively enhances management efficiency, reduces workplace safety risks, and minimizes the need for manual audits, highlighting the practical advantages of management automation.
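The pipeline described in the abstract — webcam capture, YOLOv8 inference on each frame, and notification when a risky action is recognized — can be sketched as follows. This is a minimal illustration, not the thesis's actual implementation: the class index order, the weights filename `actions_yolov8.pt`, and the rule that "dangerous action" and "sitting on the floor" trigger alerts are all assumptions; the YOLO inference loop (via the `ultralytics` package) is shown only in comments so the sketch stays self-contained.

```python
from collections import Counter

# The eight action classes reported in the abstract; the index order
# assigned here is an assumption, not the thesis's actual label map.
ACTIONS = [
    "standing", "moving", "dangerous_action", "sitting_on_floor",
    "sitting_on_chair", "squatting", "other", "marking",
]

# Classes assumed to require an anomaly notification to the manager.
ALERT_ACTIONS = {"dangerous_action", "sitting_on_floor"}

def frame_alerts(class_ids):
    """Map per-person class ids detected in one frame to the sorted
    list of actions that should trigger an anomaly report."""
    seen = Counter(ACTIONS[i] for i in class_ids)
    return sorted(a for a in seen if a in ALERT_ACTIONS)

# In the deployed system a loop like the following would feed webcam
# frames to a custom-trained YOLOv8 model (hypothetical weights file):
#
#   from ultralytics import YOLO
#   import cv2
#   model = YOLO("actions_yolov8.pt")     # assumed custom weights
#   cap = cv2.VideoCapture(0)
#   while cap.isOpened():
#       ok, frame = cap.read()
#       if not ok:
#           break
#       result = model(frame)[0]
#       ids = [int(c) for c in result.boxes.cls]
#       for action in frame_alerts(ids):
#           print("ALERT:", action)       # e.g. push a notification

if __name__ == "__main__":
    # Two workers standing, one performing a dangerous action.
    print(frame_alerts([0, 0, 2]))  # -> ['dangerous_action']
```

Keeping the alert rule in a pure function like `frame_alerts` separates the (easily testable) management policy from the model inference, so the set of flagged actions can be changed without retraining.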
[1] Chen, S., Dong, F., and Demachi, K. (2023). Hybrid visual information analysis for on-site occupational hazards identification: a case study on stairway safety. Safety Science, 159, 106043.
[2] Chern, W.-C., Hyeon, J., Nguyen, T. V., Asari, V. K., and Kim, H. (2023). Context-aware safety assessment system for far-field monitoring. Automation in Construction, 149, 104779.
[3] Dong, X., Wang, X., Li, B., Wang, H., Chen, G., and Cai, M. (2024). YH-Pose: Human pose estimation in complex coal mine scenarios. Engineering Applications of Artificial Intelligence, 127, 107338.
[4] Du, J., Dang, M., Qiao, L., Wei, M., and Hao, L. (2023). Drill pipe counting method based on improved spatial-temporal graph convolution neural network. Journal of Mine Automation, 49(01), 90-98.
[5] Gupta, D., Artacho, B., and Savakis, A. (2022). HandyPose: Multi-level framework for hand pose estimation. Pattern Recognition, 128, 108674.
[6] Jiang, Z., Fang, D., and Zhang, M. (2015). Understanding the causation of construction workers' unsafe behaviors based on system dynamics modeling. Journal of Management in Engineering, 31(6), 04014099.
[7] Lee, S.-K., and Yu, J.-H. (2023). Ontological inference process using AI-based object recognition for hazard awareness in construction sites. Automation in Construction, 153, 104961.
[8] Park, M., Kulinan, A. S., Dai, T. Q., Bak, J., and Park, S. (2024). Preventing falls from floor openings using quadrilateral detection and construction worker pose-estimation. Automation in Construction, 165, 105536.
[9] Redmon, J., Divvala, S., Girshick, R., and Farhadi, A. (2016). You only look once: Unified, real-time object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
[10] Sapkota, R., Qureshi, R., Calero, M. F., Badjugar, C., Nepal, U., Poulose, A., . . . Shoman, M. (2024). YOLOv10 to its genesis: a decadal and comprehensive review of the you only look once (YOLO) series. arXiv preprint.
[11] Yan, J., and Wang, Z. (2022). YOLO V3+ VGG16-based automatic operations monitoring and analysis in a manufacturing workshop under Industry 4.0. Journal of Manufacturing Systems, 63, 134-142.
[12] Yang, M., Wu, C., Guo, Y., He, Y., Jiang, R., Jiang, J., and Yang, Z. (2024). A teacher–student deep learning strategy for extreme low resolution unsafe action recognition in construction projects. Advanced Engineering Informatics, 59, 102294.
[13] Yang, T., Guo, Y., Li, D., and Wang, S. (2025). Vision-based obstacle detection in dangerous region of coal mine driverless rail electric locomotives. Measurement, 239, 115514.
[14] Yin, W., Chen, L., Huang, X., Huang, C., Wang, Z., Bian, Y., . . . Yi, M. (2024). A self-supervised spatio-temporal attention network for video-based 3D infant pose estimation. Medical Image Analysis, 96, 103208.
[15] Zhang, P., Zhao, X., Dong, L., Lei, W., Zhang, W., and Lin, Z. (2024). A framework for detecting fighting behavior based on key points of human skeletal posture. Computer Vision and Image Understanding, 248, 104123.