
Graduate Student: Xiao, Qi-En (蕭祺恩)
Thesis Title: H&E Stained Liver Portal Area Segmentation Using Multi-Scale Receptive Field Convolutional Neural Network
Advisor: Chung, Pau-Choo (詹寶珠)
Degree: Master
Department: Institute of Computer & Communication Engineering, College of Electrical Engineering and Computer Science
Publication Year: 2018
Graduation Academic Year: 106 (ROC calendar)
Language: English
Pages: 60
Keywords: liver portal area, convolutional neural network, segmentation, multi-scale receptive field, small object sensitive loss
Views: 56; Downloads: 0
    Identifying each portal area in the liver is an important step in the quantitative histological analysis process for automated hepatitis grading. However, variations in staining, tissue appearance, portal area size, and the sectioning procedure make the identification process extremely challenging. Convolutional neural networks are capable of extracting complex features and achieve good segmentation results. This thesis therefore proposes a multi-scale receptive field convolutional neural network for automatically segmenting liver portal areas in hematoxylin and eosin (H&E) stained whole slide images (WSIs). The architecture combines the respective strengths of DeepLab v3 and U-Net, namely the atrous spatial pyramid pooling (ASPP) block and the symmetric encoder-decoder with feature concatenation. In addition, to handle the multi-scale objects in portal areas, atrous convolution layers with specific atrous rates provide meaningful receptive fields that extract different tissue features in parallel. Finally, a loss function with a small object sensitive penalty is proposed to emphasize slim and tiny portal areas.

    The results show that the model achieves an IoU of 0.87 and a sensitivity of 0.92. Compared with segmentation-oriented convolutional neural networks such as FCN, U-Net, and SegNet, the proposed architecture achieves the best IoU and sensitivity. Moreover, the meaningful atrous spatial pyramid pooling block indeed assists feature extraction in portal areas, and, compared with the original cross entropy, the improved small object sensitive loss markedly improves the segmentation result through its better recognition of small portal areas.

    Identification of the individual portal areas in the liver is an important step in automating the quantitative histological analysis process for hepatitis grading. However, the identification process is extremely challenging due to differences in staining, tissue appearance, portal area size, and sectioning procedures. Convolutional neural networks (CNNs) have a proven ability to extract complex features and achieve good segmentation results. Accordingly, this thesis proposes a multi-scale receptive field convolutional neural network to automatically segment the liver portal areas in hematoxylin and eosin (H&E) stained whole slide images (WSIs). The proposed network combines the respective advantages of DeepLab v3 and U-Net, namely atrous spatial pyramid pooling (ASPP) and a symmetric encoder-decoder architecture with feature concatenation. Furthermore, to deal with objects of multiple scales in the portal area, atrous convolution is applied with specific atrous rates so that meaningful receptive fields extract tissue features at diverse levels in parallel. Finally, a small object sensitive penalty in the loss function is proposed to emphasize slim and tiny portal areas with fibrosis.
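The ASPP idea described above — several atrous (dilated) convolutions applied to the same feature map in parallel, each with a different rate — can be sketched in plain NumPy. The rates, kernel, and input size below are illustrative only, not the configuration actually used in the thesis:

```python
import numpy as np

def dilated_conv2d(image, kernel, rate):
    """Valid-mode 2D convolution with a dilated (atrous) 3x3 kernel.
    A rate of r spreads the 3x3 taps over a (2r+1) x (2r+1) window,
    enlarging the receptive field without adding parameters."""
    k = kernel.shape[0]
    eff = rate * (k - 1) + 1              # effective kernel size
    h, w = image.shape
    out = np.zeros((h - eff + 1, w - eff + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # sample the input with stride `rate` inside the window
            patch = image[i:i + eff:rate, j:j + eff:rate]
            out[i, j] = np.sum(patch * kernel)
    return out

def aspp(feature_map, kernels, rates):
    """Atrous spatial pyramid pooling: the same input is filtered by
    several dilated convolutions in parallel, and the branch outputs
    are cropped to a common size and concatenated channel-wise."""
    branches = [dilated_conv2d(feature_map, k, r)
                for k, r in zip(kernels, rates)]
    smallest = min(b.shape[0] for b in branches)
    cropped = [b[:smallest, :smallest] for b in branches]
    return np.stack(cropped, axis=0)      # (num_rates, H', W')

feature = np.random.rand(32, 32)
kernels = [np.ones((3, 3)) / 9.0] * 3     # simple averaging filters
pyramid = aspp(feature, kernels, rates=[1, 2, 4])
print(pyramid.shape)                      # (3, 24, 24)
```

The key property is that a larger rate widens the receptive field at no parameter cost, which is what lets the parallel branches respond to tissue structures of different scales.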
    The results show that the proposed model achieves an IoU of 0.87 and a sensitivity of 0.92. Compared with recent segmentation networks such as FCN, U-Net, and SegNet, the proposed network achieves the best overall IoU and sensitivity. Moreover, the designed ASPP block genuinely assists feature extraction, and the ability of the proposed small object sensitive loss to identify small objects significantly improves the segmentation result compared to the original cross entropy loss.
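The small object sensitive loss is described only qualitatively in this record, so the sketch below makes an assumption: each connected foreground region's pixels are up-weighted by the inverse of the region's area before the pixel-wise cross entropy is averaged, so that slim and tiny regions contribute more to the loss. The function names and the `strength` parameter are hypothetical, not taken from the thesis:

```python
import numpy as np

def small_object_weights(mask, base=1.0, strength=5.0):
    """Per-pixel weight map that up-weights small foreground objects.
    Assumed form: each connected region's weight is base + strength/area,
    so a tiny region outweighs a large one per pixel."""
    h, w = mask.shape
    labels = np.zeros((h, w), dtype=int)
    cur = 0
    # naive 4-connected component labeling via iterative flood fill
    for si in range(h):
        for sj in range(w):
            if mask[si, sj] and labels[si, sj] == 0:
                cur += 1
                stack = [(si, sj)]
                while stack:
                    i, j = stack.pop()
                    if 0 <= i < h and 0 <= j < w and mask[i, j] and labels[i, j] == 0:
                        labels[i, j] = cur
                        stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    weights = np.full((h, w), base)
    for lbl in range(1, cur + 1):
        area = np.sum(labels == lbl)
        weights[labels == lbl] = base + strength / area
    return weights

def weighted_cross_entropy(pred, mask, weights, eps=1e-7):
    """Binary cross entropy per pixel, scaled by the weight map."""
    pred = np.clip(pred, eps, 1 - eps)
    ce = -(mask * np.log(pred) + (1 - mask) * np.log(1 - pred))
    return float(np.mean(weights * ce))

mask = np.zeros((8, 8))
mask[0, 0] = 1          # a single-pixel object
mask[4:8, 4:8] = 1      # a 16-pixel object
w = small_object_weights(mask)
print(w[0, 0] > w[4, 4])  # True: the tiny object gets the larger weight
```

Under this assumed weighting, misclassifying a one-pixel portal area costs several times more than misclassifying a pixel of a large one, which is the qualitative behavior the abstract attributes to the proposed loss.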

    摘要 (Chinese Abstract) .......... I
    Abstract .......... II
    Table of Content .......... V
    List of Tables .......... VII
    List of Figures .......... VIII
    Chapter 1 Introduction .......... 1
    1-1 Motivation .......... 1
    1-2 Purpose .......... 4
    1-3 Challenges .......... 5
    1-4 Overview of the Proposed Method .......... 7
    1-5 Conceptual Frameworks .......... 7
    Chapter 2 Related Works .......... 8
    2-1 Image Processing .......... 8
    2-2 Machine Learning .......... 9
    2-3 Convolutional Neural Network .......... 11
    Chapter 3 Materials and Methods .......... 14
    3-1 Preprocessing .......... 15
    3-1-1 Patch Generation .......... 15
    3-1-2 Blank Patch Exclusion .......... 16
    3-1-3 Data Augmentation .......... 17
    3-2 Multi-Scale Receptive Field Convolutional Neural Network .......... 19
    3-2-1 Fundamental Operation .......... 21
    3-2-2 Encoder and Decoder .......... 25
    3-2-3 Atrous Spatial Pyramid Pooling Block .......... 26
    3-2-4 Small Object Sensitive Loss Function .......... 28
    Chapter 4 Experimental Results and Discussion .......... 31
    4-1 Portal Area Segmentation Dataset .......... 31
    4-2 Evaluation Criteria .......... 32
    4-3 Results of Portal Area Segmentation .......... 33
    4-3-1 Performance of the Proposed Network .......... 33
    4-3-2 Performance of Small Object Sensitive Loss Function .......... 41
    4-3-3 Performance of Hue Augmentation .......... 48
    Chapter 5 Conclusion .......... 56
    5-1 Conclusion .......... 56
    5-2 Future Work .......... 56
    Reference .......... 57

    [1] World Health Organization, Global Hepatitis Report 2017, 2017.
    [2] K. Ishak et al., “Histological Grading and Staging of Chronic Hepatitis,” J. Hepatol., pp. 696-699, 1995.
    [3] A. Janowczyk and A. Madabhushi, “Deep Learning for Digital Pathology Image Analysis: A Comprehensive Tutorial with Selected Use Cases,” J. Pathol. Inform., 2016.
    [4] Hamamatsu Photonics, “NanoZoomer S210 Digital Slide Scanner.”
    [5] L. C. Chen, G. Papandreou, F. Schroff, and H. Adam, “Rethinking Atrous Convolution for Semantic Image Segmentation,” arXiv:1706.05587, 2017.
    [6] O. Ronneberger, P. Fischer, and T. Brox, “U-Net: Convolutional Networks for Biomedical Image Segmentation,” in MICCAI, pp. 234-241, 2015.
    [7] H. S. Wu, R. Xu, N. Harpaz, D. Burstein, and J. Gil, “Segmentation of Microscopic Images of Small Intestinal Glands with Directional 2-D Filters,” Anal. Quant. Cytol. Histol., pp. 291-300, 2005.
    [8] H. S. Wu, R. Xu, N. Harpaz, D. Burstein, and J. Gil, “Segmentation of Intestinal Gland Images with Iterative Region Growing,” J. Microsc., pp. 190-204, 2005.
    [9] S. Naik, S. Doyle, M. Feldman, J. Tomaszewski, and A. Madabhushi, “Gland Segmentation and Computerized Gleason Grading of Prostate Histology by Integrating Low-, High-Level and Domain Specific Information,” Proc. 2nd Workshop on Microscopic Image Analysis with Applications in Biology, pp. 1-8, 2007.
    [10] R. Farjam, H. Soltanian-Zadeh, K. Jafari-Khouzani, and R. A. Zoroofi, “An Image Analysis Approach for Automatic Malignancy Determination of Prostate Pathological Images,” Cytom. Part B - Clin. Cytom., pp. 227-240, 2007.
    [11] C. Gunduz-Demir, M. Kandemir, A. B. Tosun, and C. Sokmensuer, “Automatic Segmentation of Colon Glands Using Object-Graphs,” Med. Image Anal., pp. 1-12, 2010.
    [12] K. Nguyen, B. Sabata, and A. Jain, “Prostate Cancer Detection: Fusion of Cytological and Textural Features,” J. Pathol. Inform., 2011.
    [13] Y. Peng, Y. Jiang, L. Eisengart, M. A. Healy, F. H. Straus, and X. J. Yang, “Computer-Aided Identification of Prostatic Adenocarcinoma: Segmentation of Glandular Structures,” J. Pathol. Inform., 2011.
    [14] H. Mousavi, G. Rao, A. K. Rao, and V. Monga, “Automated Discrimination of Lower and Higher Grade Gliomas Based on Histopathological Image Analysis,” J. Pathol. Inform., 2015.
    [15] B. Barry, K. Buch, J. A. Soto, H. Jara, A. Nakhmani, and S. W. Anderson, “Quantifying Liver Fibrosis Through the Application of Texture Analysis to Diffusion Weighted Imaging,” Magn. Reson. Imaging, pp. 84-90, 2014.
    [16] L. Moraru, S. Moldovanu, A. L. Culea-Florescu, D. Bibicu, A. S. Ashour, and N. Dey, “Texture Analysis of Parasitological Liver Fibrosis Images,” Microsc. Res. Tech., pp. 862-869, 2017.
    [17] J. Long, E. Shelhamer, and T. Darrell, “Fully Convolutional Networks for Semantic Segmentation,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 3431-3440, 2015.
    [18] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” Int. Conf. Learn. Represent. (ICLR), 2015.
    [19] V. Badrinarayanan, A. Kendall, and R. Cipolla, “SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation,” IEEE Trans. Pattern Anal. Mach. Intell., 2017.
    [20] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
    [21] K. Sirinukunwattana et al., “Gland Segmentation in Colon Histology Images: The GlaS Challenge Contest,” Med. Image Anal., pp. 489-502, 2017.
    [22] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Adv. Neural Inf. Process. Syst., pp. 1097-1105, 2012.
    [23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-Based Learning Applied to Document Recognition,” Proc. IEEE, pp. 2278-2324, 1998.
    [24] H. Chen, X. Qi, L. Yu, Q. Dou, J. Qin, and P. A. Heng, “DCAN: Deep Contour-Aware Networks for Object Instance Segmentation from Histology Images,” Med. Image Anal., pp. 135-146, 2017.
    [25] Y. Xu, Y. Li, M. Liu, Y. Wang, M. Lai, and E. I. C. Chang, “Gland Instance Segmentation by Deep Multichannel Side Supervision,” in Lecture Notes in Computer Science, pp. 496-504, 2016.
    [26] S. Ioffe and C. Szegedy, “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift,” arXiv:1502.03167, 2015.
    [27] D. A. Clevert, T. Unterthiner, and S. Hochreiter, “Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs),” arXiv:1511.07289, 2015.
    [28] V. Nair and G. E. Hinton, “Rectified Linear Units Improve Restricted Boltzmann Machines,” Proc. 27th Int. Conf. Mach. Learn., pp. 807-814, 2010.
    [29] D. P. Kingma and J. L. Ba, “Adam: A Method for Stochastic Optimization,” Int. Conf. Learn. Represent. (ICLR), 2015.
    [30] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger, “Densely Connected Convolutional Networks,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.

    Campus access: public from 2023-08-23
    Off-campus access: not public
    The electronic thesis has not been authorized for public release; for the print copy, please consult the library catalog.