| Author: | 謝庭瑜 Hsieh, Ting-Yu |
|---|---|
| Thesis Title: | 利用 U-Net 分割胸腔電腦斷層攝影的肺部腫瘤 Segmentation of Lung Tumors from Chest CT Using U-Net |
| Advisor: | 戴顯權 Tai, Shen-Chuan |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Institute of Computer & Communication Engineering |
| Year of Publication: | 2020 |
| Graduation Academic Year: | 108 |
| Language: | English |
| Number of Pages: | 44 |
| Keywords (Chinese): | 醫學影像, 電腦斷層, 肺部腫瘤分割, 深度學習 |
| Keywords (English): | medical image, computed tomography, lung tumor segmentation, deep learning |
According to the 2018 cause-of-death statistics report, the incidence and mortality rates of lung cancer have kept climbing in recent years. Studies have shown that when tumor regions are detected early with X-ray or computed tomography (CT) imaging and followed up with further examinations, mortality among high-risk lung cancer populations can be reduced; compared with X-ray images, CT reveals lesion regions more clearly.
This thesis proposes using deep learning to segment the tumor regions in chest CT images. The deep learning model is built mainly on U-Net; it takes the 3D CT volume as input for segmentation and produces 2D results. An auxiliary classifier is trained jointly to filter out slices that contain no tumor, which improves the segmentation accuracy over all slices of a single patient. The model adopts the Tversky index as the loss function for the segmentation output, which allows better optimization during training on data such as medical images, where the foreground-to-background ratio is highly imbalanced.
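To make the described structure concrete, the following is a minimal 2D-slice sketch, assuming a small U-Net-style encoder-decoder with an auxiliary slice-level classifier attached to the bottleneck; the layer widths, depth, and exact placement of the classifier are illustrative assumptions, not the network actually used in the thesis.

```python
# A minimal sketch of a U-Net-style segmentation model with an auxiliary
# slice-level classifier. Layer widths and classifier placement are assumptions.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3x3 convolutions, each followed by batch normalization and ReLU."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )


class UNetWithAuxClassifier(nn.Module):
    def __init__(self, in_ch=1, base=32):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.enc1 = conv_block(in_ch, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        # Auxiliary classifier: predicts whether the slice contains any tumor.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(base * 4, 1)
        )
        self.up2 = nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2)
        self.dec2 = conv_block(base * 4, base * 2)
        self.up1 = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = conv_block(base * 2, base)
        self.seg_head = nn.Conv2d(base, 1, 1)  # per-pixel tumor logit

    def forward(self, x):                      # x: (N, 1, H, W), H and W divisible by 4
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        cls_logit = self.classifier(b)         # slice-level "contains tumor" logit
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.seg_head(d1), cls_logit    # segmentation logits + classifier logit
```

In such a setup, the classifier can be supervised with a slice-level tumor/no-tumor label derived from the ground-truth mask, and at test time its prediction can be used to discard masks on slices judged to contain no tumor, matching the filtering role described in the abstract.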
According to Taiwan's 2018 cause-of-death statistics report, the morbidity and mortality rates of lung cancer have been rising in recent years. A study in the United States showed that early-stage diagnosis with CT reduces the risk of death from lung cancer for high-risk populations.
In this thesis, a method for segmenting lung tumors from chest CT based on U-Net is proposed. The goal is to automatically segment the lung tumor region in the 3D volume and then produce a mask for each slice of the CT scan. The system combines a U-Net-based fully convolutional network with an additional classifier block in order to eliminate non-tumor slices as much as possible. In addition, since data imbalance is a common issue in medical image segmentation, the Tversky loss is used to optimize the model for better segmentation performance.
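As a concrete reference for the loss mentioned above, the following is a minimal PyTorch sketch of the soft Tversky loss of Salehi et al. [19]; the α/β weights and the smoothing constant are illustrative assumptions rather than values taken from this thesis.

```python
# A minimal sketch of a soft Tversky loss (Salehi et al. [19]). The alpha/beta
# weights and the eps smoothing term are illustrative assumptions.
import torch


def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-6):
    """pred: predicted tumor probabilities in [0, 1] (apply sigmoid to logits first);
    target: binary ground-truth mask of the same shape."""
    pred = pred.reshape(pred.shape[0], -1)
    target = target.reshape(target.shape[0], -1)

    tp = (pred * target).sum(dim=1)            # soft true positives
    fp = (pred * (1.0 - target)).sum(dim=1)    # soft false positives
    fn = ((1.0 - pred) * target).sum(dim=1)    # soft false negatives

    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky_index).mean()        # average over the batch
```

Choosing β > α penalizes false negatives more than false positives, which helps when tumor pixels are only a small fraction of each slice; with α = β = 0.5 the Tversky index reduces to the Dice coefficient.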
[1] American Cancer Society, “Cancer Facts & Figures 2019”, p. 19, 2020.
[2] Health Promotion Administration, Ministry of Health and Welfare, “2018 Cause of Death Statistics”, December 2018.
[3] The National Lung Screening Trial Research Team, “Reduced Lung-Cancer Mortality with Low-Dose Computed Tomographic Screening”, The New England Journal of Medicine, vol. 365, pp. 395-409, 2011.
[4] Selin Uzelaltinbulat, Buse Ugur, “Lung tumor segmentation algorithm”, Procedia Computer Science, Volume 120, 2017, pp. 140-147.
[5] Mohammad Havaei, Axel Davy, David Warde-Farley, Antoine Biard, Aaron Courville, Yoshua Bengio, Chris Pal, Pierre-Marc Jodoin, Hugo Larochelle, “Brain tumor segmentation with Deep Neural Networks”, Medical Image Analysis, Volume 35, January 2017, pp. 18-31.
[6] Özgün Çiçek, Ahmed Abdulkadir, Soeren S. Lienkamp, Thomas Brox, Olaf Ronneberger, “3D U-Net: Learning Dense Volumetric Segmentation from Sparse Annotation”, in MICCAI, 2016.
[7] Uday Kamal, Abdul Muntakim Rafi, Rakibul Hoque, Md. Kamrul Hasan, “Lung Cancer Tumor Region Segmentation Using Recurrent 3D-DenseUNet”, arXiv:1812.01951, 2018
[8] Urata, M., Kijima, Y., Hirata, M. et al., “Computed tomography Hounsfield units can predict breast cancer metastasis to axillary lymph nodes.”, BMC Cancer, 2014;14:730
[9] Olaf Ronneberger, Philipp Fischer, Thomas Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation”, in MICCAI, 2015.
[10] J. Long, E. Shelhamer and T. Darrell, “Fully convolutional networks for semantic segmentation,” 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 3431-3440.
[11] Ozan Oktay, Jo Schlemper, Loic Le Folgoc, Matthew Lee, Mattias Heinrich, Kazunari Misawa, Kensaku Mori, Steven McDonagh, Nils Y Hammerla, Bernhard Kainz, Ben Glocker, Daniel Rueckert, “Attention U-Net: Learning Where to Look for the Pancreas”, in MIDL, 2018.
[12] C. Szegedy et al., “Going deeper with convolutions”, 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, 2015, pp. 1-9.
[13] C. Szegedy, V. Vanhoucke, S. Ioffe, J. Shlens and Z. Wojna, “Rethinking the Inception Architecture for Computer Vision,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 2818-2826.
[14] Christian Szegedy, Sergey Ioffe, Vincent Vanhoucke, Alex Alemi, “Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning”, in AAAI, 2017.
[15] K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition”, 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, 2016, pp. 770-778.
[16] X. Li, W. Wang, X. Hu and J. Yang, “Selective Kernel Networks,” 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 2019, pp. 510-519.
[17] S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”, in ICML, 2015.
[18] Yuxin Wu, Kaiming He, “Group Normalization”, in ECCV, 2018.
[19] Seyed Sadegh Mohseni Salehi, Deniz Erdogmus, Ali Gholipour, “Tversky loss function for image segmentation using 3D fully convolutional deep networks”, in MICCAI, 2017.
[20] Arash Mohammadi, Parnian Afshar, Amir Asif, Keyvan Farahani, Justin Kirby, Anastasia Oikonomou, and Konstantinos N. Plataniotis, “Lung Cancer Radiomics, Highlights from the IEEE Video and Image Processing Cup 2018 Student Competition”, 2018.
[21] Aerts, H. J. W. L., Wee, L., Rios Velazquez, E., Leijenaar, R. T. H., Parmar, C., Grossmann, P., … Lambin, P. (2019). Data From NSCLC-Radiomics [Data set]. The Cancer Imaging Archive.
[22] Aerts, H. J. W. L., Velazquez, E. R., Leijenaar, R. T. H., Parmar, C., Grossmann, P., Cavalho, S., … Lambin, P. (2014, June 3). Decoding tumour phenotype by noninvasive imaging using a quantitative radiomics approach. Nature Communications. Nature Publishing Group.
[23] Clark K, Vendt B, Smith K, Freymann J, Kirby J, Koppel P, Moore S, Phillips S, Maffitt D, Pringle M, Tarbox L, Prior F. The Cancer Imaging Archive (TCIA): Maintaining and Operating a Public Information Repository, Journal of Digital Imaging, Volume 26, Number 6, December 2013, pp. 1045-1057.