
Graduate student: Hung, Cheng-Kai (洪政凱)
Thesis title: Domain Adaptation of MRI Datasets Using Deep Learning
Advisor: Wu, Ming-Long (吳明龍)
Degree: Master
Department: Institute of Medical Informatics, College of Electrical Engineering and Computer Science
Year of publication: 2020
Graduation academic year: 108
Language: English
Pages: 54
Keywords: Domain Adaptation, CycleGAN, Covariate Shift, Magnetic Resonance Imaging
Hits: 111; Downloads: 0
  • (Chinese abstract) Computer-aided diagnosis (CAD) reduces physicians' workload, minimizes interobserver variability, and improves diagnostic accuracy and precision. Deep learning has become the mainstream approach for CAD. However, training a deep convolutional neural network from scratch requires a sufficiently large labelled medical dataset. A CNN pre-trained on a large labelled dataset from a non-medical domain and then fine-tuned may be applicable to CAD. Once data become severely insufficient, neither training from scratch nor fine-tuning remains feasible. Moreover, datasets from different institutions exhibit covariate shift, which prevents deep learning models from generalizing, so we use CycleGAN to address both insufficient data and covariate shift. BraTS 2019 contains the TCIA and CBICA sub-datasets; with CycleGAN, the data distributions of these two datasets can be translated into each other without prior pairing. Finally, U-nets are used to evaluate brain tumour segmentation on the test datasets: TCIA data not translated to the CBICA distribution achieve a mean Dice similarity coefficient of only 0.80, whereas translated data reach 0.86. Hence, when sufficient labelled data are unavailable, CycleGAN can achieve domain adaptation and yield better results.
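The unpaired translation between the TCIA and CBICA distributions relies on CycleGAN's cycle-consistency constraint: a generator G maps domain A to B, a second generator F maps B back to A, and the L1 distance between F(G(x)) and x is penalized so that no paired images are needed. A minimal sketch of that cycle term, with toy stand-in "generators" (a simple intensity shift; all names here are illustrative, not the thesis's implementation):

```python
import numpy as np

def l1(a, b):
    """Mean absolute difference between two arrays."""
    return np.mean(np.abs(a - b))

def cyclegan_cycle_loss(G, F, x_a, x_b, lam=10.0):
    """Cycle-consistency term of the CycleGAN objective:
    lam * (||F(G(x_a)) - x_a||_1 + ||G(F(x_b)) - x_b||_1)."""
    return lam * (l1(F(G(x_a)), x_a) + l1(G(F(x_b)), x_b))

# Toy "generators": a perfectly invertible intensity shift between domains.
G = lambda x: x + 0.5   # A -> B
F = lambda x: x - 0.5   # B -> A
x_a = np.zeros((4, 4))  # a stand-in image from domain A
x_b = np.ones((4, 4))   # a stand-in image from domain B
print(cyclegan_cycle_loss(G, F, x_a, x_b))  # 0.0: the cycle reconstructs exactly
```

In the full objective this term is added to the adversarial losses of the two discriminators; because the cycle term only compares an image with its own reconstruction, the two domains never need to be paired.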

    Computer aided diagnosis (CAD) helps reduce doctors' workload, minimize interobserver variability, and increase diagnostic accuracy and precision. Deep learning approaches have become a mainstream technique for CAD. However, training a deep convolutional neural network (CNN) from scratch requires sufficient labelled medical training data. Fine-tuning a CNN pre-trained on a large-scale labelled dataset from a different application might be applicable. Nevertheless, when the available training dataset is excessively small, neither fine-tuning nor training from scratch works. Covariate shift among datasets results from many factors, such as differing imaging parameters across institutes, and makes generalization of deep learning models very challenging in medical applications. We present a deep learning approach that uses CycleGAN to achieve domain adaptation and thereby overcome covariate shift. The Multimodal Brain Tumor Segmentation Challenge (BraTS) 2019 dataset from the Medical Image Computing and Computer Assisted Intervention Society (MICCAI) contains two sub-datasets: The Cancer Imaging Archive (TCIA) and the Center for Biomedical Image Computing and Analytics (CBICA). We transfer TCIA data to the CBICA domain and vice versa. The adapted datasets are used as testing data for U-net models pre-trained in the pre-adaptation domain to evaluate brain tumour segmentation performance. Domain adaptation by CycleGAN improves U-net performance over the naïve cases: Dice coefficients (DCs) for full tumour segmentation on TCIA testing data are 0.80 without domain adaptation (i.e., the naïve case) and 0.86 with adaptation. Experiments with reduced training data for the U-net segmentation models and for CycleGAN show that, when sufficient labelled data are lacking, domain adaptation using CycleGAN achieves better performance.
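The 0.80 vs. 0.86 comparison above uses the Dice similarity coefficient, which measures overlap between a predicted segmentation mask and the ground truth as 2|A∩B| / (|A| + |B|). A minimal sketch of that metric on binary masks (toy 3×3 "tumour" masks; the empty-mask convention is an assumption, not stated in the thesis):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    if total == 0:
        return 1.0  # convention: two empty masks count as perfect agreement
    return 2.0 * intersection / total

# Toy example: predicted vs. ground-truth binary tumour masks.
pred  = [[1, 1, 0], [0, 1, 0], [0, 0, 0]]
truth = [[1, 0, 0], [0, 1, 1], [0, 0, 0]]
print(dice_coefficient(pred, truth))  # 2*2 / (3+3) ≈ 0.667
```

A Dice coefficient of 1.0 indicates perfect overlap and 0.0 no overlap, so the reported improvement from 0.80 to 0.86 reflects substantially better agreement with the expert labels.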

    Contents
    Abstract i
    中文摘要 (Chinese Abstract) ii
    Acknowledgement iii
    Contents iv
    Figures vi
    Tables vii
    Chapter 1 Introduction 1
    Chapter 2 Materials and Methods 6
      2.1 Overview 6
      2.2 Materials 6
      2.3 Experimental environment 7
      2.4 Methods 8
        2.4.1 Preprocessing 8
        2.4.2 Deep learning models 9
          2.4.2.1 Fully Convolutional Networks (FCN) 9
          2.4.2.2 ResNet 10
          2.4.2.3 U-net 11
          2.4.2.4 CycleGAN 13
        2.4.3 Loss function 19
        2.4.4 Postprocessing 21
        2.4.5 Evaluation 21
    Chapter 3 Result 23
      3.1 Parameters of CycleGAN 23
      3.2 Comparison of generator models for CycleGAN 23
      3.3 Training curves of CycleGAN 25
      3.4 Pre-trained U-net brain tumour segmentation models 27
      3.5 KDE 32
      3.6 Post-processing of CycleGAN outputs 38
      3.7 Reduced training data for CycleGAN 40
      3.8 Reduced training data for U-net brain tumour segmentation models 42
    Chapter 4 Discussion 44
      4.1 Model selection for generators 44
      4.2 Training curves of CycleGAN 44
      4.3 Computation time for CycleGAN 45
      4.4 KDE 46
      4.5 Post-processing of CycleGAN outputs 46
      4.6 Reduced training data for CycleGAN 47
      4.7 Reduced training data for U-net brain tumour segmentation models 47
    Chapter 5 Conclusion 49
    Reference 51


    Full-text availability: on campus, public from 2025-07-28; off campus, not available.
    The electronic thesis has not been authorized for public release; for the print copy, please consult the library catalog.