
Student: Ou, Tzu-Yu (歐子毓)
Thesis title: Process Efficiency Analysis and Surgical Workflow Recognition in Operating Room (手術室流程效率分析與手術階段識別)
Advisors: Lee, Chia-Yen (李家岩); Wang, Hung-Kai (王宏鍇)
Degree: Master
Department: Institute of Manufacturing Information and Systems, College of Electrical Engineering and Computer Science
Year of publication: 2022
Academic year of graduation: 110 (2021–2022)
Language: English
Number of pages: 52
Keywords: Operating Room, Efficiency Indicator, Surgical Workflow Recognition, Deep Learning

    Operating rooms are the core of any hospital: they generate substantial revenue but also incur considerable costs, so attention to operating room efficiency is essential. In this context, the overall operating room effectiveness (ORE) indicator, based on the concept of lean healthcare, has been proposed to measure operating room performance and identify time losses. Building on the ORE results, this study further characterizes the steps of the surgical process with the aim of reducing the time spent on non-value-added steps. We also collaborate with a teaching hospital in Taiwan to validate the improved indicator using actual operating room usage data.
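    As a rough illustration of how an OEE-style effectiveness indicator can be computed from operating room time records, the following sketch factors a session into availability, performance, and quality rates. The factor definitions, function name, and figures are illustrative assumptions, not the exact formulas used in this thesis.

```python
# Hypothetical sketch of an OEE-style operating-room effectiveness (ORE) score.
# The three factors below are illustrative assumptions, not the thesis's exact definitions.

def ore_score(scheduled_min, used_min, value_added_min, cases_done, cases_planned):
    """Return (availability, performance, quality, ORE) as fractions in [0, 1]."""
    availability = used_min / scheduled_min      # share of scheduled OR time actually in use
    performance  = value_added_min / used_min    # share of used time spent on value-added steps
    quality      = cases_done / cases_planned    # share of planned cases completed as scheduled
    return availability, performance, quality, availability * performance * quality

# Example: an 8-hour (480 min) session with 400 min in use,
# 300 min of value-added surgical time, and 5 of 6 planned cases completed.
a, p, q, ore = ore_score(480, 400, 300, 5, 6)
print(f"availability={a:.2f} performance={p:.2f} quality={q:.2f} ORE={ore:.2f}")
```

    Under these assumptions, a low performance factor points to time lost in non-value-added steps, which is the kind of loss the process analysis in this study aims to reduce.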
    With the development of modern medical technology, robotic and computer-assisted systems are widely used in the operating room to assist surgeons in performing surgery. Without interfering with the surgical process, these devices can automatically collect large amounts of surgical video for subsequent analysis. Surgical workflow recognition is one of the key tasks in this setting, as it provides the most essential information for computer-assisted systems. However, most current surgical workflow recognition methods trade computational efficiency for accuracy, which keeps them from practical deployment. This study therefore proposes a novel spatial-temporal network that makes more efficient use of visual and temporal information; it achieves excellent performance on public datasets and outperforms other workflow recognition methods.
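    To make the spatial-temporal idea concrete, the following is a minimal sketch of a frame-wise surgical phase classifier that pairs a CNN feature extractor (spatial model) with a recurrent network (temporal model). The ResNet-50 backbone, GRU, hidden size, and seven-phase output are assumptions chosen for illustration, not the thesis's exact architecture.

```python
# Minimal sketch of a CNN + GRU phase-recognition model (assumed architecture):
# a ResNet-50 extracts per-frame features, a GRU models temporal context,
# and a linear head predicts one of 7 surgical phases for every frame.
import torch
import torch.nn as nn
from torchvision.models import resnet50

class SpatialTemporalNet(nn.Module):
    def __init__(self, num_phases=7, hidden=256):
        super().__init__()
        backbone = resnet50(weights=None)        # spatial model: frame-level feature extractor
        backbone.fc = nn.Identity()              # keep the 2048-d pooled features
        self.backbone = backbone
        self.gru = nn.GRU(2048, hidden, batch_first=True)  # temporal model over the frame sequence
        self.head = nn.Linear(hidden, num_phases)           # per-frame phase classifier

    def forward(self, clips):                    # clips: (batch, time, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.backbone(clips.flatten(0, 1))  # (b*t, 2048)
        feats = feats.view(b, t, -1)
        out, _ = self.gru(feats)                    # (b, t, hidden)
        return self.head(out)                       # (b, t, num_phases) frame-wise logits

# Smoke test on random frames: 2 clips of 8 frames at 224x224.
logits = SpatialTemporalNet()(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 8, 7])
```

    Sharing one backbone pass per frame and keeping the temporal model lightweight is one common way to hold inference cost down, which is the efficiency concern raised above.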
    Keywords: Operating Room, Efficiency Indicator, Surgical Workflow Recognition, Deep Learning

    Chinese Abstract i
    Abstract ii
    Acknowledgements iii
    Table of Content iv
    List of Tables vi
    List of Figures vii
    Chapter 1. Introduction 1
      1.1 Background and Motivation 1
      1.2 Research Scope and Aims 3
      1.3 Thesis Framework 4
    Chapter 2. Literature Review 6
      2.1 Operating Room Efficiency 6
        2.1.1 Operating Room Performance Measurement 6
        2.1.2 Operating Room Effectiveness 8
      2.2 Surgical Workflow Recognition 11
        2.2.1 Non-video-based Workflow Recognition 12
        2.2.2 Video-based Workflow Recognition 12
      2.3 Deep Learning 14
        2.3.1 Convolutional Neural Network 14
        2.3.2 Recurrent Neural Network 16
    Chapter 3. Process Efficiency Analysis 18
      3.1 Overall Operating Room Effectiveness 18
        3.1.1 Surgical Process in Operating Room 18
        3.1.2 Value-Added Time in Surgical Process 20
      3.2 Case Study 21
        3.2.1 Dataset 21
        3.2.2 Data Preprocessing 23
        3.2.3 Results and Discussion 23
    Chapter 4. Surgical Workflow Recognition 28
      4.1 Multi-class Classification Model 28
        4.1.1 Spatial Model 29
        4.1.2 Temporal Model 29
        4.1.3 Configuration 30
      4.2 Case Study 31
        4.2.1 Dataset 31
        4.2.2 Data Preprocessing and Evaluation Metrics 35
        4.2.3 Comparison of Different Convolutional Networks 35
        4.2.4 Comparison of Different Temporal Models 37
        4.2.5 Comparison with State-of-the-art Methods 41
        4.2.6 Summary of Case Study 43
    Chapter 5. Conclusion and Future Research 45
      5.1 Summary and Contribution 45
      5.2 Future Research 46
    References 47

