
Graduate Student: Shih, Yao-Kai (石曜愷)
Thesis Title: Periprosthetic Joint Infection Prediction by Leading with Saliency Map (基於顯著圖引導之人工關節感染預測)
Advisor: Chiang, Jung-Hsien (蔣榮先)
Degree: Master
Department: Department of Computer Science and Information Engineering, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Academic Year of Graduation: 110
Language: English
Number of Pages: 51
Keywords: Periprosthetic Joint Infection, Nuclear Medicine, Bone Scan, Convolutional Neural Networks, Weakly Supervised Learning, Classification Model
    Total knee and total hip arthroplasty are currently among the most effective treatments for severe degenerative arthritis. Periprosthetic joint infection is a serious complication of joint replacement surgery: once a deep infection of the prosthesis develops, it is difficult to eradicate with antibiotics or surgery, and the prosthesis ultimately often has to be removed. Studies indicate that demand for joint arthroplasty will grow year by year, so the number of infected patients may also continue to rise.
    Many clinical tools are available for diagnosing periprosthetic joint infection, such as X-ray, blood tests, and bone scan images, but each has its own limitations, and no single tool can provide a definitive diagnosis on its own. In the diagnostic workflow, nuclear medicine physicians write reports based on their interpretation of bone scan images, and orthopedic surgeons combine these reports with other laboratory tests and patient symptoms to make the clinical diagnosis of infection. Because no single test is a decisive indicator of infection and every tool takes long-term experience to use well, we aim to integrate this information with deep learning to predict periprosthetic joint infection more comprehensively. With the rise of deep learning in recent years, CNN models have achieved excellent results in image classification, and the literature shows that AI applications in medicine can speed up the diagnostic process and improve diagnostic quality. To our knowledge, however, no study has yet applied AI and imaging data to the diagnosis of periprosthetic joint infection.
    Accordingly, this study applies deep learning so that the model learns to recognize the uptake patterns and locations of periprosthetic joint infection on bone scans, incorporates information from other sources such as clinical blood tests, and builds a cross-specialty diagnostic model for periprosthetic joint infection, giving clinicians a new reference index to assist diagnosis. We collected 802 records of patients with periprosthetic joint infection from Kaohsiung Chang Gung Memorial Hospital. During our experiments we found that the model failed to attend to the important regions, so we propose a saliency-map-based position-guidance loss, built on weakly supervised learning, to assist model training. We also adopt an attention module that lets the model enhance or suppress relevant features, which improves its performance. Comparing orthopedic and nuclear medicine physicians with the deep learning model on the same test set, we found that the model can correctly classify cases that may confuse physicians. Based on this observation, we hope this work can serve as a diagnostic aid for physicians and as a foundation for future related research.
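    The attention module mentioned above is not specified in detail in the abstract; since the thesis cites CBAM (Woo et al., 2018) and describes feature enhancement along both the channel and spatial axes, the following is a minimal CBAM-style sketch for illustration. The class name, reduction ratio, and kernel size are assumptions taken from the CBAM paper defaults, not the thesis implementation.

```python
import torch
import torch.nn as nn

class ChannelSpatialAttention(nn.Module):
    """CBAM-style block: channel attention followed by spatial attention.
    Illustrative sketch only; hyperparameters follow common CBAM defaults."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        # Channel attention: shared MLP over global avg- and max-pooled descriptors
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over channel-wise avg and max maps
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        # Channel attention weights, applied multiplicatively
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)
        # Spatial attention over concatenated avg/max channel summaries
        s = torch.cat([x.mean(dim=1, keepdim=True), x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))
```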

    Total hip and knee arthroplasty are among the most effective treatments for advanced osteoarthritis. Periprosthetic joint infection (PJI) is a severe complication of total joint arthroplasty (TJA). Once PJI has developed, failure rates remain high despite intravenous antibiotics and surgical debridement, and removal of the prosthesis is often needed to eradicate the infection. A prior study (Runner et al., 2019) indicated that the number of TJAs will continue to increase in the coming years, so the importance of diagnosing and treating PJI will also grow.
    Several diagnostic tools can be used for PJI, such as X-ray, laboratory tests, and bone scans, but none of them alone serves as a gold standard for an accurate diagnosis. In the clinical setting, bone scans are interpreted by nuclear medicine physicians, and orthopedic surgeons make the clinical diagnosis based on the bone scan report and other laboratory tests. Because no single test is a decisive indicator of PJI and each tool requires long-term experience to use well, we aim to integrate these sources of information through deep learning to predict PJI more comprehensively. With the rise of deep learning in recent years, CNN models have achieved excellent results in image classification, and AI applications in medicine have been shown to increase both the speed and the quality of diagnosis. To our knowledge, however, AI and clinical imaging have not yet been applied to the diagnosis of PJI.
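    As an illustration of what integrating these sources through deep learning might look like, the sketch below fuses CNN features from a bone scan with a small laboratory-data vector before a joint classifier. The backbone choice (ResNet-50), the number of laboratory features, the layer sizes, and the assumption that grayscale scans are replicated to three channels are all illustrative, not taken from the thesis.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class ImageLabFusionClassifier(nn.Module):
    """Sketch of a cross-domain PJI classifier: a CNN encodes the bone scan,
    laboratory values are embedded by a small MLP, and the two feature
    vectors are concatenated for the infected / not-infected prediction."""

    def __init__(self, num_lab_features: int = 8):
        super().__init__()
        backbone = resnet50(weights=None)          # image branch (assumed backbone)
        backbone.fc = nn.Identity()                # keep the 2048-d pooled features
        self.image_branch = backbone
        self.lab_branch = nn.Sequential(           # tabular branch for lab tests
            nn.Linear(num_lab_features, 32),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Linear(2048 + 32, 2)

    def forward(self, image: torch.Tensor, labs: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W) bone scan (grayscale replicated to 3 channels)
        # labs:  (B, num_lab_features) laboratory values
        feats = torch.cat([self.image_branch(image), self.lab_branch(labs)], dim=1)
        return self.classifier(feats)
```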
    This study uses deep learning to interpret bone scan images and incorporates other clinical data, such as laboratory tests, to build a cross-domain model for PJI prediction, providing physicians with a new reference index to assist in diagnosis. We collected a total of 802 cases with bone scan images and laboratory data from Kaohsiung Chang Gung Memorial Hospital. We propose a position-guided loss function based on a weakly supervised learning approach to assist model training, and we adopt an attention module that enhances the important features along both the spatial and channel axes to further improve performance.
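    The exact form of the position-guided loss is not given in the abstract; the following is a minimal sketch of one plausible form, assuming a non-negative class activation map `cam` from the backbone and a coarse binary mask `region_mask` marking the prosthetic joint region (both hypothetical names). It penalizes activation mass that leaks outside the region and would be added to the ordinary classification loss with a weight λ.

```python
import torch

def lead_position_loss(cam: torch.Tensor, region_mask: torch.Tensor) -> torch.Tensor:
    """Weakly supervised position guidance: penalize class-activation energy
    that falls outside the annotated joint region.

    cam:         (B, H, W) non-negative class activation map
    region_mask: (B, H, W) binary mask, 1 inside the prosthetic joint region
    """
    # Normalize each map to unit mass so the penalty is scale-invariant
    cam = cam / (cam.flatten(1).sum(dim=1, keepdim=True).unsqueeze(-1) + 1e-8)
    leaked = cam * (1.0 - region_mask)             # activation outside the region
    return leaked.flatten(1).sum(dim=1).mean()     # average leaked mass per image

# Training objective (sketch): classification loss plus weighted position guidance.
# total_loss = cls_loss + lam * lead_position_loss(cam, region_mask)
```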
    We collected interpretations from ten physicians and compared their F1-score, sensitivity, and other metrics with those of our model on the same test set. The results show that our model outperforms the physicians on this test set. We also conducted several experiments demonstrating that the proposed method indeed improves performance. We hope the model can assist orthopedic surgeons in diagnosing PJI quickly and accurately, and that more clinical information can be integrated into this model in the future to further improve PJI prediction.
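    For the physician-versus-model comparison, F1-score and sensitivity can be computed against the shared ground-truth labels as sketched below with scikit-learn; the variable names and the positive-label convention (1 = infected) are assumptions for illustration.

```python
from sklearn.metrics import f1_score, recall_score

def evaluate(y_true, y_pred):
    """F1-score and sensitivity (recall of the infected class, label = 1)."""
    return {
        "f1": f1_score(y_true, y_pred, pos_label=1),
        "sensitivity": recall_score(y_true, y_pred, pos_label=1),
    }

# Score the model and each physician on the same test labels (illustrative).
# scores = {"model": evaluate(y_true, model_preds),
#           **{f"physician_{i}": evaluate(y_true, p)
#              for i, p in enumerate(physician_preds)}}
```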

    Chinese Abstract I
    Abstract III
    Acknowledgements V
    Contents VII
    List of Tables IX
    List of Figures X
    Chapter 1 Introduction 1
    1.1 Background 1
    1.2 Motivation 2
    1.3 Research Objectives 3
    1.4 Thesis Organization 3
    Chapter 2 Related Work 5
    2.1 Development of CNN in Classification 5
    2.2 Application of Deep Learning in Nuclear Medicine 5
    2.3 Weakly-Supervised Methods 6
    2.4 Visualization of Model Attention 8
    Chapter 3 Preliminary Study 10
    3.1 Investigate Performance of Bone Scan 10
    3.2 Investigate Performance of Laboratory Data 12
    3.3 Investigate Performance of Integrating Both Data 12
    Chapter 4 PJI Prediction Model 14
    4.1 Data Preprocessing 14
    4.2 Lead Position Model Architecture 15
    4.3 Operation Loss 17
    4.4 Lead Position Loss 19
    Chapter 5 Experiments 21
    5.1 Experimental Design 21
    5.2 Dataset and Setting 21
    5.3 Ablation Study 27
    5.4 Investigating RandAugment Transformations 33
    5.5 Clinical Validation 36
    Chapter 6 Conclusion and Future Work 43
    6.1 Conclusion 43
    6.2 Future Work 44
    Reference 46

    Arden, N., & Hawker, G. (2018). The OARSI white paper explained. Osteoarthritis and Cartilage, 26. https://doi.org/10.1016/j.joca.2018.02.008
    Bearman, A., Russakovsky, O., Ferrari, V., & Fei-Fei, L. (2016). What’s the point: Semantic segmentation with point supervision. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 9911 LNCS. https://doi.org/10.1007/978-3-319-46478-7_34
    Bochkovskiy, A., Wang, C.-Y., & Liao, H.-Y. M. (2020). YOLOv4: Optimal speed and accuracy of object detection. ArXiv Preprint ArXiv:2004.10934.
    Carion, N., Massa, F., Synnaeve, G., Usunier, N., Kirillov, A., & Zagoruyko, S. (2020). End-to-End Object Detection with Transformers. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 12346 LNCS. https://doi.org/10.1007/978-3-030-58452-8_13
    Chattopadhay, A., Sarkar, A., Howlader, P., & Balasubramanian, V. N. (2018). Grad-CAM++: Generalized gradient-based visual explanations for deep convolutional networks. Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018, 2018-January. https://doi.org/10.1109/WACV.2018.00097
    Chaudhari, A. S., Mittra, E., Davidzon, G. A., Gulaka, P., Gandhi, H., Brown, A., Zhang, T., Srinivas, S., Gong, E., Zaharchuk, G., & Jadvar, H. (2021). Low-count whole-body PET with deep learning in a multicenter and externally validated study. Npj Digital Medicine, 4(1). https://doi.org/10.1038/s41746-021-00497-2
    Cheng, D. C., Hsieh, T. C., Yen, K. Y., & Kao, C. H. (2021). Lesion-based bone metastasis detection in chest bone scintigraphy images of prostate cancer patients using pre-train, negative mining, and deep learning. Diagnostics, 11(3). https://doi.org/10.3390/diagnostics11030518
    Cheng, D. C., Liu, C. C., Hsieh, T. C., Yen, K. Y., & Kao, C. H. (2021). Bone metastasis detection in the chest and pelvis from a whole-body bone scan using deep learning and a small dataset. Electronics (Switzerland), 10(10). https://doi.org/10.3390/electronics10101201
    Cubuk, E. D., Zoph, B., Shlens, J., & Le, Q. V. (2020). RandAugment: Practical automated data augmentation with a reduced search space. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2020-June. https://doi.org/10.1109/CVPRW50498.2020.00359
    Fu, R., Hu, Q., Dong, X., Guo, Y., Gao, Y., & Li, B. (2020). Axiom-based Grad-CAM: Towards accurate visualization and explanation of CNNs. ArXiv Preprint ArXiv:2008.02312.
    He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016-December. https://doi.org/10.1109/CVPR.2016.90
    Huang, G., Liu, Z., van der Maaten, L., & Weinberger, K. Q. (2017). Densely connected convolutional networks. Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, 2017-January. https://doi.org/10.1109/CVPR.2017.243
    James, S. L., Abate, D., Abate, K. H., Abay, S. M., Abbafati, C., Abbasi, N., Abbastabar, H., Abd-Allah, F., Abdela, J., Abdelalim, A., Abdollahpour, I., Abdulkader, R. S., Abebe, Z., Abera, S. F., Abil, O. Z., Abraha, H. N., Abu-Raddad, L. J., Abu-Rmeileh, N. M. E., Accrombessi, M. M. K., … Murray, C. J. L. (2018). Global, regional, and national incidence, prevalence, and years lived with disability for 354 Diseases and Injuries for 195 countries and territories, 1990-2017: A systematic analysis for the Global Burden of Disease Study 2017. The Lancet, 392(10159). https://doi.org/10.1016/S0140-6736(18)32279-7
    Jiang, P. T., Hou, Q., Cao, Y., Cheng, M. M., Wei, Y., & Xiong, H. (2019). Integral object mining via online attention accumulation. Proceedings of the IEEE International Conference on Computer Vision, 2019-October. https://doi.org/10.1109/ICCV.2019.00216
    Kolesnikov, A., Dosovitskiy, A., Weissenborn, D., Heigold, G., Uszkoreit, J., Beyer, L., Minderer, M., Dehghani, M., Houlsby, N., Gelly, S., Unterthiner, T., & Zhai, X. (2021). An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. 9th International Conference on Learning Representations, ICLR 2021.
    Krizhevsky, A., Sutskever, I., & Hinton, G. E. (2017). ImageNet classification with deep convolutional neural networks. Communications of the ACM, 60(6). https://doi.org/10.1145/3065386
    Li, K., Wu, Z., Peng, K. C., Ernst, J., & Fu, Y. (2020). Guided Attention Inference Network. IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(12). https://doi.org/10.1109/TPAMI.2019.2921543
    Li, Z., Liu, F., Yang, W., Peng, S., & Zhou, J. (2021). A Survey of Convolutional Neural Networks: Analysis, Applications, and Prospects. IEEE Transactions on Neural Networks and Learning Systems. https://doi.org/10.1109/tnnls.2021.3084827
    Liu, Y., Jain, A., Eng, C., Way, D. H., Lee, K., Bui, P., Kanada, K., de Oliveira Marinho, G., Gallegos, J., Gabriele, S., Gupta, V., Singh, N., Natarajan, V., Hofmann-Wellenhof, R., Corrado, G. S., Peng, L. H., Webster, D. R., Ai, D., Huang, S. J., … Coz, D. (2020). A deep learning system for differential diagnosis of skin diseases. Nature Medicine, 26(6). https://doi.org/10.1038/s41591-020-0842-3
    Love, C., Tomas, M. B., Marwin, S. E., Pugliese, P. V., & Palestro, C. J. (2001). Role of nuclear medicine in diagnosis of the infected joint replacement. Radiographics, 21(5). https://doi.org/10.1148/radiographics.21.5.g01se191229
    Maksoud, S., Zhao, K., Hobson, P., Jennings, A., & Lovell, B. C. (2020). SOS: Selective objective switch for rapid immunofluorescence whole slide image classification. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. https://doi.org/10.1109/CVPR42600.2020.00392
    Omeiza, D., Speakman, S., Cintas, C., & Weldermariam, K. (2019). Smooth Grad-CAM++: An enhanced inference level visualization technique for deep convolutional neural network models. ArXiv Preprint ArXiv:1908.01224.
    Papandrianos, N., Papageorgiou, E., Anagnostis, A., & Papageorgiou, K. (2020). Efficient bone metastasis diagnosis in bone scintigraphy using a fast convolutional neural network architecture. Diagnostics, 10(8). https://doi.org/10.3390/diagnostics10080532
    Pucar, D., Jankovic, Z., Bascarevic, Z., Starcevic, S., Cizmic, M., & Radulovic, M. (2017). The role of three-phase 99mTc-MDP bone scintigraphy in the diagnosis of periprosthetic joint infection of the hip and knee. Vojnosanitetski Pregled, 74(10). https://doi.org/10.2298/vsp160303152p
    Reader, A. J., Corda, G., Mehranian, A., Costa-Luis, C. da, Ellis, S., & Schnabel, J. A. (2020). Deep Learning for PET Image Reconstruction. IEEE Transactions on Radiation and Plasma Medical Sciences, 5(1). https://doi.org/10.1109/trpms.2020.3014786
    Ren, S., He, K., Girshick, R., & Sun, J. (2017). Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 39(6). https://doi.org/10.1109/TPAMI.2016.2577031
    Runner, R. P., Mener, A., Roberson, J. R., Bradbury, T. L., Guild, G. N., Boden, S. D., & Erens, G. A. (2019). Prosthetic Joint Infection Trends at a Dedicated Orthopaedics Specialty Hospital. Advances in Orthopedics, 2019. https://doi.org/10.1155/2019/4629503
    Russakovsky, O., Deng, J., Su, H., Krause, J., Satheesh, S., Ma, S., Huang, Z., Karpathy, A., Khosla, A., Bernstein, M., Berg, A. C., & Fei-Fei, L. (2015). ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115(3). https://doi.org/10.1007/s11263-015-0816-y
    Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2020). Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization. International Journal of Computer Vision, 128(2). https://doi.org/10.1007/s11263-019-01228-7
    Shah, R. F., Bini, S. A., Martinez, A. M., Pedoia, V., & Vail, T. P. (2020). Incremental inputs improve the automated detection of implant loosening using machine-learning algorithms. Bone and Joint Journal, 102-B(6). https://doi.org/10.1302/0301-620X.102B6.BJJ-2019-1577.R1
    Signore, A., Sconfienza, L. M., Borens, O., Glaudemans, A. W. J. M., Cassar-Pullicino, V., Trampuz, A., Winkler, H., Gheysens, O., Vanhoenacker, F. M. H. M., Petrosillo, N., & Jutte, P. C. (2019). Consensus document for the diagnosis of prosthetic joint infections: a joint paper by the EANM, EBJIS, and ESR (with ESCMID endorsement). European Journal of Nuclear Medicine and Molecular Imaging, 46(4). https://doi.org/10.1007/s00259-019-4263-9
    Simonyan, K., & Zisserman, A. (2015). Very deep convolutional networks for large-scale image recognition. 3rd International Conference on Learning Representations, ICLR 2015 - Conference Track Proceedings.
    Szegedy, C., Ioffe, S., Vanhoucke, V., & Alemi, A. A. (2017). Inception-v4, inception-ResNet and the impact of residual connections on learning. 31st AAAI Conference on Artificial Intelligence, AAAI 2017.
    Wang, H., Wang, Z., Du, M., Yang, F., Zhang, Z., Ding, S., Mardziel, P., & Hu, X. (2020). Score-CAM: Score-weighted visual explanations for convolutional neural networks. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2020-June. https://doi.org/10.1109/CVPRW50498.2020.00020
    Woo, S., Park, J., Lee, J. Y., & Kweon, I. S. (2018). CBAM: Convolutional block attention module. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 11211 LNCS. https://doi.org/10.1007/978-3-030-01234-2_1
    Yan, H., Li, Z., Li, W., Wang, C., Wu, M., & Zhang, C. (2021). ConTNet: Why not use convolution and transformer at the same time? ArXiv Preprint ArXiv:2104.13497.
    Yang, J., Shi, L., Wang, R., Miller, E. J., Sinusas, A. J., Liu, C., Gullberg, G. T., & Seo, Y. (2021). Direct attenuation correction using deep learning for cardiac SPECT: A feasibility study. Journal of Nuclear Medicine, 62(11). https://doi.org/10.2967/jnumed.120.256396
    Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning Deep Features for Discriminative Localization. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2016-December. https://doi.org/10.1109/CVPR.2016.319
