
Graduate Student: Lee, Chih-Hsien (李致賢)
Thesis Title: Conditional Cycle-Consistent GAN-Based Stain Style Translation: Application to H&E-Stained Liver Histopathological Image Analysis
Advisor: Chung, Pau-Choo (詹寶珠)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Institute of Computer & Communication Engineering
Year of Publication: 2021
Graduation Academic Year: 110 (ROC calendar, i.e., 2021–2022)
Language: English
Number of Pages: 69
Keywords: Domain adaptation, Stain normalization, Computer-aided detection and diagnosis, Convolutional neural networks, Deep learning, Lymphocyte detection, Liver cancer detection, Digital histopathological images
ORCID: 0000-0001-9072-0578
    The demand for digital pathology image analysis is rising both in Taiwan and worldwide, and the market accordingly offers slide scanners from many different vendors, each with its own color rendering style. Neural network models, however, are highly sensitive to color variation, so a model trained on one scanner cannot simply be shared across scanners. Domain adaptation is one line of research that addresses this problem, but some methods constrain only the feature extraction layers without examining the quality of the transformed images, while image style transfer approaches cannot guarantee that a transformed image retains the same semantic information as the original. This thesis therefore proposes a stain style translation model based on a conditional cycle-consistent generative adversarial network. The model both guarantees the quality of the translated images and generates stain styles tailored to a specific neural network model. We further use this GAN to assist the training of the neural network model, so that pathology images from different scanners can share a single detection model. The method is evaluated on liver cancer and lymphocyte detection tasks in H&E-stained liver pathology images. Experimental results show that it achieves an IOU of 78% for liver cancer cell detection and an F1 score of 90% for lymphocyte detection across the different scanners.
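The abstract reports tumor segmentation performance as an IOU score and lymphocyte detection performance as an F1 score. For reference, a minimal NumPy sketch of how these two metrics are commonly computed; the masks and counts below are toy values, not the thesis data:

```python
import numpy as np

def iou(pred, gt):
    """Intersection over Union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, gt).sum() / union

def f1_score(tp, fp, fn):
    """F1 from detection counts: harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

# Toy example: a 2x2 predicted region shifted one column from the ground truth,
# so 2 pixels overlap out of 6 in the union.
gt = np.zeros((4, 4), dtype=int)
gt[1:3, 1:3] = 1
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 2:4] = 1
mask_iou = iou(pred, gt)            # 2 / 6
det_f1 = f1_score(tp=9, fp=1, fn=1)  # precision = recall = 0.9
```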

    Digital pathology diagnosis is being increasingly applied in hospitals and clinics, both in Taiwan and around the world. Typically, digital pathology involves scanning biological tissue slides and then examining the resulting digital images rather than viewing the slides under a microscope. As computer technology advances, these images are increasingly analyzed by neural network models. However, different scanners have different color rendering styles, and thus a neural network trained on images from one scanner cannot necessarily be applied to analyze images obtained from another. This type of problem is usually handled through domain adaptation methods. However, feature-wise adaptation methods neglect pixel-level differences between the source and target domains, while image style conversion methods usually lose some of the semantic information of the source images during the adaptation process. This thesis therefore proposes a cycle-consistent GAN-based stain style translation method conditioned on an additional label from the source domain, which not only guarantees the pixel-level quality of the transformed images but also retains the semantic features required by the neural network models. We further update our neural network model using this well-trained GAN. The feasibility of the proposed method is demonstrated on liver tumor segmentation and inflammatory cell detection in liver whole slide images scanned by three different devices. The results show that the proposed method achieves an IOU score of 78% for tumor segmentation and an F1 score of more than 90% for lymphocyte detection, irrespective of the scanner used.
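The abstract describes combining adversarial stain translation with a cycle-consistency constraint and a label-driven semantic constraint from the source domain. Below is a minimal NumPy sketch of the three loss terms such an objective typically combines; the function names, the least-squares adversarial form, and the weights are illustrative assumptions, not the thesis's exact formulation:

```python
import numpy as np

def lsgan_loss(disc_scores, target):
    """Least-squares adversarial loss: push discriminator scores toward target."""
    return np.mean((disc_scores - target) ** 2)

def cycle_loss(x, x_rec):
    """L1 cycle-consistency: translating A -> B -> A should reconstruct the input."""
    return np.mean(np.abs(x - x_rec))

def semantic_loss(seg_prob, label):
    """Binary cross-entropy between the frozen segmentation model's prediction
    on the translated image and the source-domain label, so that the
    translation preserves the semantics the segmentation model relies on."""
    p = np.clip(seg_prob, 1e-7, 1 - 1e-7)
    return -np.mean(label * np.log(p) + (1 - label) * np.log(1 - p))

def generator_objective(disc_fake, x, x_rec, seg_prob, label,
                        lambda_cyc=10.0, lambda_sem=1.0):
    """Total generator loss: adversarial term plus weighted cycle and semantic terms."""
    return (lsgan_loss(disc_fake, 1.0)
            + lambda_cyc * cycle_loss(x, x_rec)
            + lambda_sem * semantic_loss(seg_prob, label))

# Toy check: the generator fools the discriminator and reconstruction is
# perfect, so only the semantic term contributes; with the segmentation model
# maximally unsure (p = 0.5 everywhere), that term equals log(2).
disc_fake = np.ones(4)
x = np.zeros((2, 2))
loss = generator_objective(disc_fake, x, x, np.full((2, 2), 0.5), np.ones((2, 2)))
```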

    Table of Contents:
    Abstract (Chinese)
    Abstract (English)
    Table of Contents
    List of Tables
    List of Figures
    Chapter 1 Introduction
    Chapter 2 Related Works
        2.1 Transfer Learning for Medical Images
        2.2 Color Normalization and Augmentation
        2.3 Neural-Network-Based Domain Adaptation Methods
        2.4 CNN Methods for Medical Image Segmentation
    Chapter 3 Proposed Method
        3.1 Training the Basic Segmentation Model
            3.1.1 Histopathology Image Acquisition
            3.1.2 Data Augmentation
            3.1.3 Loss Functions of the Segmentation Models
        3.2 Training the cCycleGAN Model with the Pretrained Segmentation Model
            3.2.1 CycleGAN Training Details
            3.2.2 cCycleGAN Training Details
        3.3 Fine-Tuning the Pretrained Segmentation Model with cCycleGAN
        3.4 Implementation Details
    Chapter 4 Experimental Results and Discussions
        4.1 H&E-Stained Liver Tissue Dataset
            4.1.1 Lymphocyte Dataset
            4.1.2 Tumor Dataset
        4.2 Evaluation Criteria
        4.3 Inference with the Basic Segmentation Model
        4.4 Detection Results of the Basic Lymphocyte Segmentation Model
        4.5 Detection Results of the Basic Tumor Segmentation Model
        4.6 Inference with CycleGAN
        4.7 Comparison of CycleGAN in RGB and LAB Color Spaces
        4.8 Inference with cCycleGAN
        4.9 Inference with the Segmentation Model Fine-Tuned by cCycleGAN
        4.10 Lymphocyte Cell Detection Results with cCycleGAN
        4.11 Tumor Cell Segmentation Trained on the Leica Dataset with cCycleGAN
        4.12 Tumor Cell Segmentation Trained on the 3D-Histech Dataset with cCycleGAN
        4.13 Processing Time
    Chapter 5 Conclusion and Future Work
    References
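The outline of Chapter 3 implies a three-stage pipeline: train a basic segmentation model on source-domain data (3.1), train the cCycleGAN with that model held fixed (3.2), then fine-tune the segmentation model on GAN-translated images (3.3). A schematic Python sketch of that control flow; every function below is a hypothetical stand-in, not the thesis implementation:

```python
def train_segmentation(images, labels):
    """Stage 1 (Sec. 3.1): fit a base segmentation model on source-domain data."""
    return {"stage": 1, "samples": len(images)}

def train_ccyclegan(source_images, target_images, frozen_seg):
    """Stage 2 (Sec. 3.2): learn source<->target stain translation; the frozen
    segmentation model scores translated images so semantics are preserved."""
    return {"stage": 2, "seg_samples": frozen_seg["samples"]}

def finetune_segmentation(seg, gan, source_images, labels):
    """Stage 3 (Sec. 3.3): translate source images to the target stain style and
    fine-tune the segmentation model on them, reusing the source labels."""
    translated = [img + "->target_style" for img in source_images]
    return dict(seg, stage=3, finetuned_on=len(translated))

source_images, labels = ["a1", "a2", "a3"], ["l1", "l2", "l3"]
target_images = ["b1", "b2"]

seg = train_segmentation(source_images, labels)
gan = train_ccyclegan(source_images, target_images, seg)
seg = finetune_segmentation(seg, gan, source_images, labels)
```

The design point the ordering captures is that the segmentation model must exist before the GAN is trained, because its (frozen) predictions define the semantic constraint during translation.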


    Full text not available for download.
    On-campus access: to be released 2027-01-20
    Off-campus access: to be released 2027-01-20
    The electronic thesis has not been authorized for public release; for the print copy, please consult the library catalog.