
Author: Tseng, Shau-Ming (曾紹銘)
Thesis Title: Segmentation of Brain Tumors from Brain MRI Using Enhanced U-Net (使用增強型U-Net分割大腦MRI中的腦腫瘤)
Advisor: Tai, Shen-Chuan (戴顯權)
Degree: Master
Department: College of Electrical Engineering & Computer Science - Institute of Computer & Communication Engineering
Year of Publication: 2021
Graduating Academic Year: 109
Language: English
Number of Pages: 58
Keywords (Chinese): 醫學影像、核磁造影、腦腫瘤分割、深度學習
Keywords (English): medical image, MRI, brain tumor segmentation, deep learning
Views: 131; Downloads: 0
  • An automated brain tumor segmentation algorithm can assist physicians and experts by quickly producing a preliminary delineation of potential tumor locations, saving the manual effort and time spent annotating brain tumors.
    This thesis proposes a deep learning method for segmenting brain tumors in brain MRI images. The model combines U-Net and MobileNet to segment multimodal 3D brain MRI volumes and produce the region that requires surgical resection, and it uses channel fusion to generate a per-channel spatial attention map to improve accuracy. The trained model uses less than 5% of the parameters of the comparison methods. The proposed method won second place in the 2021 Brain Tumor Segmentation Asian Cup Championship Challenge.

    Automated brain tumor segmentation algorithms can assist neuroradiologists and other experts by quickly producing a preliminary delineation of potential tumor locations, thereby saving the manual effort and time required to annotate brain tumors.
    In this thesis, a multimodal 3D MRI brain tumor segmentation method based on Mobile U-Net is proposed. The aim is to automatically delineate the brain tumor area in a 3D volume and then generate the region that needs to be surgically resected. Channel fusion is used to generate a spatial attention map for each channel, which improves accuracy. Compared with other methods, the proposed model uses less than 5% of the parameters, and its accuracy is competitive on the BraTS2020 validation set. The proposed algorithm won second place (Runner-Up) in the 2021 Brain Tumor Segmentation Asian Cup Championship Challenge.
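The parameter savings described above come from MobileNet-style depth-wise separable convolutions in place of standard convolutions. As a back-of-the-envelope sketch (with hypothetical channel and kernel sizes, not the thesis's exact configuration), the parameter counts compare as follows:

```python
def standard_conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Parameters of a standard 3D convolution (bias omitted):
    one k*k*k filter spanning all input channels, per output channel."""
    return c_in * c_out * k ** 3

def depthwise_separable_conv3d_params(c_in: int, c_out: int, k: int) -> int:
    """Depth-wise separable 3D convolution (bias omitted):
    one k*k*k filter per input channel, then a 1x1x1 point-wise mix."""
    return c_in * k ** 3 + c_in * c_out

if __name__ == "__main__":
    std = standard_conv3d_params(64, 64, 3)   # 110592 parameters
    sep = depthwise_separable_conv3d_params(64, 64, 3)  # 5824 parameters
    print(std, sep, sep / std)  # the separable block needs only ~5% as many
```

With 64 input/output channels and 3x3x3 kernels, the separable block carries roughly 5% of the parameters of a standard convolution, which is consistent in spirit with the abstract's under-5% claim, though the thesis's actual layer shapes may differ.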

    Contents:
    Abstract (Chinese) / Abstract (English) / Acknowledgements / Contents / List of Tables / List of Figures
    Chapter 1 Introduction
      1.1 Overview
    Chapter 2 Background and Related Works
      2.1 Magnetic Resonance Imaging (MRI)
        2.1.1 Principles of nuclear magnetic resonance (NMR)
        2.1.2 Imaging
        2.1.3 The relationship between T1, T2, and image contrast
      2.2 U-Net
      2.3 Efficient Convolution
        2.3.1 Standard Convolution
        2.3.2 Depth-wise Separable Convolution
      2.4 Attention and gating mechanisms
        2.4.1 Attention Gate
        2.4.2 Squeeze-and-Excitation block
      2.5 Convolutional Block in MobileNet
        2.5.1 MobileNet V1
        2.5.2 MobileNet V2
        2.5.3 MobileNet V3
    Chapter 3 The Proposed Algorithm
      3.1 Proposed Network Architecture
      3.2 Proposed Convolutional Block
      3.3 Swish Activation
      3.4 Loss Function
        3.4.1 Focal Loss
        3.4.2 Dice Loss
      3.5 Instance Normalization
    Chapter 4 Experiment
      4.1 Experimental Dataset
      4.2 Preprocessing
      4.3 Training Strategies
        4.3.1 Patch-based training
        4.3.2 Data Augmentation
        4.3.3 Loss function and Region-based training
        4.3.4 Ranger Optimizer
        4.3.5 Hyper-parameters Setting
        4.3.6 Automatic Mixed Precision Training
      4.4 Post-processing
      4.5 Evaluation
        4.5.1 Dice Similarity Coefficient
        4.5.2 Hausdorff Distance
    Chapter 5 Result
      5.1 BraTS2020 Dataset
      5.2 Mobile U-Net
      5.3 Ablation Study
      5.4 Qualitative Results
      5.5 2021 Brain Tumor Segmentation Asian Cup Championship Challenge
    Chapter 6 Discussion, Conclusion, and Future Work
      6.1 Discussion
      6.2 Conclusion
      6.3 Future Work
    References
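Section 4.5.1 of the outline names the Dice Similarity Coefficient as an evaluation metric. As a minimal sketch (binary masks flattened to 0/1 lists rather than full 3D volumes, with a small epsilon for numerical stability), it can be computed as:

```python
def dice_coefficient(pred, target, eps=1e-7):
    """Dice Similarity Coefficient for binary masks:
    DSC = 2|P ∩ T| / (|P| + |T|), in [0, 1], 1 = perfect overlap."""
    intersection = sum(p * t for p, t in zip(pred, target))
    total = sum(pred) + sum(target)
    # eps keeps the ratio well-defined when both masks are empty
    return (2.0 * intersection + eps) / (total + eps)
```

For example, a prediction overlapping the ground truth on one of two labeled voxels, with one false positive, scores 2/3; identical masks score 1.0.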


    Full-text availability:
    On campus: publicly available from 2026-06-28.
    Off campus: publicly available from 2026-06-28.
    The electronic thesis has not yet been authorized for public release; for the print copy, please consult the library catalog.