
Graduate Student: He, Chun-Yi (何俊逸)
Thesis Title: MTB and NTM Detection Based on Deep Learning Algorithm (應用深度學習演算法之結核菌與非結核分枝桿菌偵測)
Advisor: Sun, Yung-Nien (孫永年)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2020
Academic Year of Graduation: 108 (2019-2020)
Language: English
Number of Pages: 82
Keywords (Chinese): acid-fast staining, sputum smear, detection, classification, Mycobacterium tuberculosis (MTB), Non-Tuberculosis Mycobacteria (NTM), convolutional neural network
Keywords (English): acid-fast stain, sputum smear, detection, classification, Mycobacterium tuberculosis (MTB), Non-Tuberculosis Mycobacteria (NTM), convolutional neural network
    According to the World Health Organization, tuberculosis (TB) is currently one of the ten leading causes of death worldwide. In 2018, an estimated 1.45 million people died of TB, including 250,000 HIV-positive patients. TB is caused by Mycobacterium tuberculosis (MTB); it is especially severe in underdeveloped and developing countries and has become a global public health problem. Compared with TB, infections caused by Non-Tuberculosis Mycobacteria (NTM) have a lower mortality rate but tend to occur in immunocompromised populations. As modern society ages, the impact of NTM on the healthcare system has become increasingly significant.
    Acid-fast sputum smear microscopy is currently the most widely used screening method: an ordinary optical microscope is used to check whether a smear sample contains bacteria that retain the acid-fast stain. Compared with bacterial culture, which typically takes 2 to 8 weeks, microscopy is fast, economical, and effective. Nevertheless, because MTB and NTM look almost identical, it is nearly impossible to distinguish the two under the microscope, which can easily lead to errors in treatment and medication.
    This thesis proposes a deep-learning-based method to detect MTB and NTM. The system processes the captured sputum smear microscopy images, locates MTB and NTM, and assigns each detection a confidence score; a higher score indicates a higher likelihood. The system consists of two stages: candidate bacteria detection and a second-stage classifier. In the candidate bacteria detection stage, the detection model proposes regions of the whole image that may contain bacteria, and almost all suspicious candidate regions are found at this stage. The candidate regions are then passed to the second-stage classifier, which classifies them as MTB, NTM, or background. This two-stage approach classifies the candidate regions more precisely and substantially improves the overall precision.
    The experimental samples were sputum smears collected from three hospitals and imaged with a fully automatic image acquisition system; more than 400,000 images were used in the experiments. The results show that the proposed two-stage method performs well, with the second-stage classifier clearly improving overall system performance. For MTB detection we achieve a recall of 83.9% and a precision of 82.26%; for NTM, a recall of 86.81% and a precision of 88.64%; the average sensitivity and specificity reach 90%. In terms of novelty, the results demonstrate that deep neural networks can overcome the inability of manual microscopy to distinguish MTB from NTM; to our knowledge, this thesis is the first published work to successfully address this problem. In terms of application, the system provides an automatic microscopy examination method for TB that improves accuracy and greatly reduces the time spent by medical technologists. For each smear, the system also provides an auxiliary reference as to whether it contains MTB or NTM. We expect this system to effectively assist laboratory personnel and TB control in the future.

    According to the report of the World Health Organization, tuberculosis (TB) is one of the ten leading causes of death worldwide. An estimated 1.45 million people died from tuberculosis in 2018, including 250,000 HIV-positive patients. TB is an airborne disease caused by Mycobacterium tuberculosis (MTB); it is especially prevalent in developing countries and has become a worldwide public health problem. In addition to MTB, Non-Tuberculosis Mycobacteria (NTM) tend to infect immunocompromised groups despite their lower mortality rate. As society ages, the impact of NTM on the medical system has become increasingly significant in recent years.
    For the diagnosis of TB, the most widely applied method is acid-fast sputum smear microscopy, in which an optical microscope is used to directly observe whether acid-fast bacteria exist in a smear slide. Compared with bacterial culture, which requires 2 to 8 weeks, sputum smear microscopy is fast, efficient, and low-cost. However, MTB and NTM are at present nearly indistinguishable under the microscope by manual examination, which may lead to incorrect treatment of patients.
    In this thesis, we propose a novel method based on deep learning algorithms to detect MTB and NTM simultaneously. Microscopy images are captured from each sputum smear by an auto-focusing acquisition system, and the proposed detection system then processes each image. The system finds the regions that contain MTB or NTM and assigns each of them a confidence score; the higher the score, the greater the model's confidence in the classification result. The system consists of two stages: candidate bacteria detection and a second-stage classifier. In the candidate bacteria detection stage, the detection model proposes suspected regions that may contain bacteria, and almost all candidate regions are successfully found at this stage. The candidate regions are then passed to the second-stage classifier, which performs a more accurate classification. The experimental results show that adding the second stage greatly improves the system precision.
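    To make the data flow concrete, the sketch below outlines such a two-stage pipeline in Python. It is written for this summary and is not taken from the thesis: detect_candidates and classify_candidate are placeholder stubs standing in for the trained detection model and the second-stage CNN classifier, and only the flow of candidate regions between the two stages is illustrated.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class Candidate:
    box: Tuple[int, int, int, int]  # (x1, y1, x2, y2) in image coordinates
    score: float                    # detector confidence in [0, 1]
    label: str = "candidate"        # filled in by the second stage


def detect_candidates(image) -> List[Candidate]:
    """Stage 1 (stub): propose regions suspected of containing bacteria.

    A real implementation would run the trained detection model here and keep
    almost every suspicious region (high recall, modest precision).
    """
    # Placeholder output standing in for the detector's proposals.
    return [Candidate((10, 10, 42, 42), 0.91),
            Candidate((100, 80, 130, 112), 0.35)]


def classify_candidate(image, cand: Candidate) -> Candidate:
    """Stage 2 (stub): re-classify one candidate as MTB, NTM, or background.

    A real implementation would crop cand.box from the image and feed the
    patch to the second-stage CNN classifier.
    """
    cand.label = "MTB" if cand.score > 0.5 else "background"
    return cand


def run_pipeline(image, keep_threshold: float = 0.3) -> List[Candidate]:
    """Detection followed by second-stage classification, as described above."""
    candidates = [c for c in detect_candidates(image) if c.score >= keep_threshold]
    results = [classify_candidate(image, c) for c in candidates]
    # Regions the second stage labels as background are discarded.
    return [c for c in results if c.label != "background"]


if __name__ == "__main__":
    for det in run_pipeline(image=None):  # image loading is omitted in this sketch
        print(det.label, det.box, f"{det.score:.2f}")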
    The sputum smears were collected from three hospitals, and the images captured from them form the experimental material of this study; the dataset contains over 400,000 images. The experimental results show that the proposed two-stage system achieves remarkable performance. For MTB detection, we reach a recall of 83.9% and a precision of 82.26%; for NTM, a recall of 86.81% and a precision of 88.64%. The average sensitivity and specificity of the detection system for both MTB and NTM are higher than 90%. To the best of our knowledge, this is the first study to investigate the automatic detection of MTB and NTM in microscopy images, a breakthrough for both pathology and computer-assisted diagnosis. The system not only significantly reduces the time needed for manual microscopic examination, but also provides an auxiliary suggestion on whether a smear contains MTB or NTM. We expect our system to assist pathologists in the diagnosis and control of TB in the future.
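    The recall, precision, sensitivity, and specificity figures quoted above follow the standard definitions in terms of true/false positives and negatives; the small helper below (added here for clarity, not part of the thesis, with purely illustrative counts) spells them out.

def detection_metrics(tp: int, fp: int, fn: int, tn: int):
    """Standard detection metrics in terms of confusion-matrix counts."""
    precision = tp / (tp + fp)    # fraction of reported detections that are correct
    recall = tp / (tp + fn)       # sensitivity: fraction of true bacteria that are found
    specificity = tn / (tn + fp)  # fraction of negative regions correctly rejected
    return precision, recall, specificity


# Illustrative counts only, not results from the thesis.
print(detection_metrics(tp=84, fp=18, fn=16, tn=900))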

    Abstract (Chinese) I
    Abstract (English) III
    Acknowledgements V
    LIST OF TABLES VIII
    LIST OF FIGURES IX
    CHAPTER 1 INTRODUCTION 1
      1.1 Motivation 1
      1.2 Related Works 4
        1.2.1 Mycobacterium Tuberculosis 4
        1.2.2 Deep Learning 5
      1.3 Overview of the Thesis 8
    CHAPTER 2 MTB and NTM Detection System 9
      2.1 Overview 9
      2.2 Experimental Instruments and Data Preprocess 10
      2.3 Candidate Bacteria Detection 11
        2.3.1 Feature Fusion vs. FPN [28] 11
        2.3.2 RPN Center Condition 19
        2.3.3 Cascade R-CNN [30] 29
        2.3.4 R-CNN Target Conditions 31
        2.3.5 RoI Pooling 32
        2.3.6 Non-Maximum Suppression (NMS) 33
        2.3.7 Candidate Bacteria Detection Architecture 34
        2.3.8 Loss Functions 36
      2.4 Second Stage Classifier (SSC) 38
        2.4.1 Classification Models 41
        2.4.2 Offset Method 42
        2.4.3 Extension of SSC Target Conditions 44
    CHAPTER 3 EXPERIMENTAL RESULTS 47
      3.1 Candidate Bacteria Detection 47
        3.1.1 CBD Dataset and Experiment Environment 47
        3.1.2 Training Details 48
        3.1.3 Evaluation Metrics 48
        3.1.4 Results of CBD 49
      3.2 SSC 59
        3.2.1 SSC Dataset and Experiment Environment 59
        3.2.2 Training Details 59
        3.2.3 Evaluation Metrics 59
        3.2.4 Results of SSC 61
      3.3 CBD + SSC 62
      3.4 Slide Auxiliary Judgment 70
      3.5 Practical Discussions 71
    CHAPTER 4 CONCLUSIONS 75
      4.1 Conclusions 75
      4.2 Future Works 76
    Reference 77

    [1] Annabel, B., D. Anna, and M. Hannah. "Global tuberculosis report 2019." Geneva: World Health Organization (2019).
    [2] 盧柏樑 (Lu, Po-Liang). "認識結核病抗酸菌染色檢驗" [Understanding the acid-fast stain test for tuberculosis], 高醫醫訊月刊, Vol. 28, No. 10, 2009.
    [3] Sadaphal, P., et al. "Image processing techniques for identifying Mycobacterium tuberculosis in Ziehl-Neelsen stains." The International Journal of Tuberculosis and Lung Disease 12.5 (2008): 579-582.
    [4] Costa Filho, Cicero Ferreira Fernandes, et al. "Automatic identification of tuberculosis mycobacterium." Research on biomedical engineering 31.1 (2015): 33-43.
    [5] Panicker, Rani Oomman, et al. "Automatic detection of tuberculosis bacilli from microscopic sputum smear images using deep learning methods." Biocybernetics and Biomedical Engineering 38.3 (2018): 691-699.
    [6] Lopez-Garnier, Santiago, Patricia Sheen, and Mirko Zimic. "Automatic diagnostics of tuberculosis using convolutional neural networks analysis of MODS digital images." PloS one 14.2 (2019): e0212094.
    [7] Kuok, Chan‐Pang, et al. "An effective and accurate identification system of Mycobacterium tuberculosis using convolution neural networks." Microscopy research and technique 82.6 (2019): 709-719.
    [8] Cortes, Corinna, and Vladimir Vapnik. "Support-vector networks." Machine learning 20.3 (1995): 273-297.
    [9] Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in neural information processing systems. 2015.
    [10] Zeiler, Matthew D., and Rob Fergus. "Visualizing and understanding convolutional networks." European conference on computer vision. Springer, Cham, 2014.
    [11] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "ImageNet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012.
    [12] Russakovsky, Olga, et al. "ImageNet large scale visual recognition challenge." International journal of computer vision 115.3 (2015): 211-252.
    [13] Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014).
    [14] He, Kaiming, et al. "Deep residual learning for image recognition." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
    [15] Huang, Gao, et al. "Densely connected convolutional networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
    [16] Xie, Saining, et al. "Aggregated residual transformations for deep neural networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
    [17] Tan, Mingxing, and Quoc V. Le. "EfficientNet: Rethinking model scaling for convolutional neural networks." arXiv preprint arXiv:1905.11946 (2019).
    [18] Liu, Wei, et al. "SSD: Single shot multibox detector." European conference on computer vision. Springer, Cham, 2016.
    [19] Fu, Cheng-Yang, et al. "DSSD: Deconvolutional single shot detector." arXiv preprint arXiv:1701.06659 (2017).
    [20] Redmon, Joseph, and Ali Farhadi. "YOLOv3: An incremental improvement." arXiv preprint arXiv:1804.02767 (2018).
    [21] Bochkovskiy, Alexey, Chien-Yao Wang, and Hong-Yuan Mark Liao. "YOLOv4: Optimal Speed and Accuracy of Object Detection." arXiv preprint arXiv:2004.10934 (2020).
    [22] Lin, Tsung-Yi, et al. "Microsoft COCO: Common objects in context." European conference on computer vision. Springer, Cham, 2014.
    [23] Girshick, Ross. "Fast R-CNN." Proceedings of the IEEE international conference on computer vision. 2015.
    [24] Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in neural information processing systems. 2015.
    [25] He, Kaiming, et al. "Mask R-CNN." Proceedings of the IEEE international conference on computer vision. 2017.
    [26] Huang, Zhaojin, et al. "Mask Scoring R-CNN." Proceedings of the IEEE conference on computer vision and pattern recognition. 2019.
    [27] 陸坤泰. "結核菌檢驗手冊" [Manual of Tuberculosis Laboratory Testing], 2004. [Online]. Available: https://www.cdc.gov.tw/File/Get/b9caTsXjd5ay1gzk-XaNow
    [28] Lin, Tsung-Yi, et al. "Feature pyramid networks for object detection." Proceedings of the IEEE conference on computer vision and pattern recognition. 2017.
    [29] Zhu, Chenchen, Yihui He, and Marios Savvides. "Feature selective anchor-free module for single-shot object detection." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2019.
    [30] Cai, Zhaowei, and Nuno Vasconcelos. "Cascade R-CNN: Delving into high quality object detection." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
    [31] T. Grel, "Region of interest pooling explained" 2017. [Online]. Available: https://deepsense.ai/region-of-interest-pooling-explained/.
    [32] Tommy Huang, "機器/深度學習:物件偵測 Non-Maximum Suppression (NMS)" [Machine/Deep Learning: Object Detection Non-Maximum Suppression (NMS)], 2018. [Online]. Available: https://reurl.cc/V6brYR
    [33] Hu, Mengying, et al. "Automatic Detection of Tuberculosis Bacilli in Sputum Smear Scans Based on Subgraph Classification." 2019 International Conference on Medical Imaging Physics and Engineering (ICMIPE). IEEE, 2019.
    [34] Szegedy, Christian, et al. "Rethinking the inception architecture for computer vision." Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.
