
Author: 陳致嘉 (Chen, Chih-Chia)
Thesis Title: 利用雙相面X光影像與卷積神經網路進行脊椎立體結構之重建 (Using Bi-planar X-ray Images to Reconstruct the Spine Structure by the Convolutional Neural Network)
Advisor: 方佑華 (Fang, Yu-Hua)
Degree: Master
Department: College of Engineering - Department of Biomedical Engineering
Year of Publication: 2019
Academic Year of Graduation: 107
Language: English
Number of Pages: 51
Keywords (Chinese): 深度學習 (deep learning), 卷積神經網路 (convolutional neural network), 脊椎分割 (spine segmentation), 雙相面X光影像 (bi-planar X-ray images)
Keywords (English): Deep Learning, Convolutional Neural Network, Reconstruct 3D Model, Bi-planar X-ray Image
    Background:
    Computed tomography (CT) reconstructs tomographic images from X-ray projections acquired at multiple angles; its advantage is that more detailed structural information can be obtained. However, the radiation dose received in a single CT scan is roughly 150 to 1100 times that of a conventional X-ray, so using CT for routine health examinations may increase the risk of cancer. This is a particular concern for scoliosis, which most often develops in teenagers: the radiation received during long-term follow-up and treatment has a greater impact on adolescents whose bodies are still developing. To address this issue, the French company EOS imaging developed a product called the EOS system. With its hardware, a patient can be imaged simultaneously from the frontal and lateral views, producing a pair of X-ray images known as "bi-planar X-ray images". A radiologic technologist uses the software in the EOS system to mark skeletal feature points in the bi-planar images; the system then calculates the related parameters from these marked points and, based on its internal statistical model, selects the most plausible three-dimensional model as the output. However, marking the feature points in the EOS system is very time-consuming: even an experienced technologist needs nearly two hours to finish one patient. Motivated by this, we also use bi-planar X-ray images as the input, but replace the manual marking step with deep learning to shorten the reconstruction time and finally reconstruct the three-dimensional spine model.

    Methods:
    To train the neural network, we obtained data from The Cancer Imaging Archive (TCIA), an open-source medical image database in which all data have been de-identified. We currently use 45 CT datasets for training and 3 for testing. The TIGRE MATLAB toolbox is used to forward-project the CT data and generate the corresponding bi-planar images, which serve as the input for training the network. We then designed an algorithm for separating the spine from the ribs; it semi-automatically segments the spinal structure from the CT data, and the result serves as the reference standard for training. The network itself is designed similarly to an autoencoder and can be divided into an encoder and a decoder. The two bi-planar images are stacked on top of each other as the input, compressed by the encoder into a 1024×1 vector, and then gradually reconstructed back into a three-dimensional model by the decoder. Finally, we use the Dice coefficient and the structural similarity index (SSIM) to evaluate the reconstruction produced by the network.
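    The following is a minimal Python sketch of the idea of deriving the two simulated views from a CT volume. It uses a simplified parallel-beam summation along two orthogonal axes rather than the cone-beam forward projector of the TIGRE toolbox used in this work; the axis ordering and normalization are illustrative assumptions.

    ```python
    # Simplified stand-in for the forward projection step (not the TIGRE API):
    # derive frontal (AP) and lateral views from one CT volume by summing the
    # attenuation values along two orthogonal axes.
    import numpy as np

    def simulate_biplanar(ct_volume):
        """ct_volume: 3D array assumed to be ordered (slice, row, column)."""
        ap_view = ct_volume.sum(axis=1)   # integrate front-to-back -> frontal view
        lat_view = ct_volume.sum(axis=2)  # integrate left-to-right -> lateral view
        # Rescale each projection to [0, 1] so both views share the same range.
        ap_view = (ap_view - ap_view.min()) / (ap_view.max() - ap_view.min() + 1e-8)
        lat_view = (lat_view - lat_view.min()) / (lat_view.max() - lat_view.min() + 1e-8)
        return ap_view, lat_view
    ```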

    Results:
    We designed an accurate and fast spine segmentation algorithm. Applied to the SpineWeb database, in which the spine positions have been annotated by experts, the segmentation results reach a Dice coefficient of 0.91 and an SSIM of 0.98, showing that the algorithm can effectively separate the spine from the ribs. In our tests, using gamma-corrected images as the input gives better results than using the original images, and the sigmoid activation function is the most suitable output activation for this network architecture. The reconstructions generated by the current network reach a Dice coefficient of 0.82 and an SSIM of 0.97.
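    As a point of reference for the gamma correction mentioned above, the sketch below shows a standard power-law correction applied to a projection image scaled to [0, 1]; the gamma value is illustrative and not the value chosen in this work.

    ```python
    # Power-law (gamma) correction of a projection image; gamma < 1 brightens
    # darker regions. The value 0.5 is only an example, not the thesis setting.
    import numpy as np

    def gamma_correct(image, gamma=0.5):
        scaled = (image - image.min()) / (image.max() - image.min() + 1e-8)
        return scaled ** gamma
    ```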

    Conclusion:
    In this study, we developed a deep learning method that takes only bi-planar X-ray images as input and outputs a three-dimensional spine model. At the current stage, our network has successfully achieved the conversion from 2D planar images to a 3D model.

    Background:
    Computed tomography (CT) is a three-dimensional imaging modality that reconstructs tomographic data from multi-angle X-ray images. CT has the advantage of providing more detailed anatomical and structural information. However, the radiation dose received in a CT scan is about 150 to 1100 times that of a conventional X-ray, increasing the risk of cancer. Scoliosis is a spinal disease that usually occurs in adolescents, and the radiation received during long-term treatment and follow-up is a particular concern for patients who are still growing. To address this issue, the French company EOS imaging developed a product called the "EOS system". With this system, a patient can be imaged simultaneously in the anterior-posterior and lateral views, producing a pair of bi-planar X-ray images. A radiologist or technologist uses the software in the EOS system to mark the feature points of the vertebrae in the bi-planar X-ray images. The EOS system then calculates the related parameters from the marked points and selects the best-matched three-dimensional model as the output from its patient database. However, the process of marking feature points in the EOS software is time-consuming: even an experienced technologist may spend about one to two hours on a single patient. In this study, we aim to use deep learning to eliminate the manual marking process and accelerate the reconstruction. The deep learning network is designed as a generator that takes the bi-planar X-ray images as input and generates the corresponding three-dimensional spinal model.

    Material and Methods:
    To train the generator, we acquired training and testing data from The Cancer Imaging Archive (TCIA), a large, open-source medical image database. Currently, we use 45 CT datasets for training and 3 for testing. The TIGRE MATLAB toolbox was applied to the CT data to generate the corresponding bi-planar images, which serve as the training input for the generator. We then designed an algorithm for delineating the spine from the ribs; it semi-automatically segments the spinal part of the CT data, and the result serves as the ground truth for training the generator. The neural network structure of the generator is designed to be similar to an autoencoder, and the whole generator can be divided into an encoder and a decoder. The bi-planar images are stacked together as the input data and compressed by the encoder into a 1024×1 vector; the decoder then gradually reconstructs this vector back into the three-dimensional model. Finally, we use the Dice coefficient and the structural similarity index (SSIM) to evaluate the reconstructed results.
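    A minimal PyTorch sketch of such an autoencoder-like generator is shown below: a 2D convolutional encoder compresses the stacked bi-planar views into a 1024-dimensional vector, and a 3D transposed-convolution decoder expands it back into a volume with a sigmoid output. The input size, channel counts, layer depths, and the use of channel stacking are assumptions for illustration, not the exact architecture described in the thesis.

    ```python
    # Sketch of an encoder-decoder generator mapping two stacked bi-planar views
    # to a 3D volume. Sizes and depths are illustrative assumptions.
    import torch
    import torch.nn as nn

    class BiplanarToVolume(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: 2-channel 128x128 input (AP + lateral) -> 1024-d code.
            self.encoder = nn.Sequential(
                nn.Conv2d(2, 32, 4, stride=2, padding=1),    # 128 -> 64
                nn.ReLU(inplace=True),
                nn.Conv2d(32, 64, 4, stride=2, padding=1),   # 64 -> 32
                nn.ReLU(inplace=True),
                nn.Conv2d(64, 128, 4, stride=2, padding=1),  # 32 -> 16
                nn.ReLU(inplace=True),
                nn.Conv2d(128, 256, 4, stride=2, padding=1), # 16 -> 8
                nn.ReLU(inplace=True),
                nn.Flatten(),
                nn.Linear(256 * 8 * 8, 1024),                # latent 1024x1 vector
            )
            # Decoder: 1024-d code -> 64x64x64 volume via 3D transposed convolutions.
            self.fc = nn.Linear(1024, 256 * 4 * 4 * 4)
            self.decoder = nn.Sequential(
                nn.ConvTranspose3d(256, 128, 4, stride=2, padding=1),  # 4 -> 8
                nn.ReLU(inplace=True),
                nn.ConvTranspose3d(128, 64, 4, stride=2, padding=1),   # 8 -> 16
                nn.ReLU(inplace=True),
                nn.ConvTranspose3d(64, 32, 4, stride=2, padding=1),    # 16 -> 32
                nn.ReLU(inplace=True),
                nn.ConvTranspose3d(32, 1, 4, stride=2, padding=1),     # 32 -> 64
                nn.Sigmoid(),  # voxel values in [0, 1]; sigmoid worked best here
            )

        def forward(self, x):
            code = self.encoder(x)                      # (N, 1024)
            vol = self.fc(code).view(-1, 256, 4, 4, 4)  # reshape to a small 3D grid
            return self.decoder(vol)                    # (N, 1, 64, 64, 64)

    # Example: one AP + lateral pair at 128x128 resolution.
    out = BiplanarToVolume()(torch.randn(1, 2, 128, 128))
    print(out.shape)  # torch.Size([1, 1, 64, 64, 64])
    ```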

    Results:
    We designed an accurate and fast spine segmentation algorithm and applied it to the expert-annotated data from SpineWeb. The Dice coefficient of the segmentation results is 0.91 and the SSIM is 0.98, showing that the algorithm can effectively separate the spine from the ribs. We also found that using gamma-corrected bi-planar images as the input yields better results than using the original images, and that the sigmoid activation function is the most suitable output activation layer for the generator. Currently, the spinal structures reconstructed from the test datasets reach a Dice coefficient of 0.82 and an SSIM of 0.97.
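    For reference, the sketch below computes the two metrics reported above, assuming NumPy arrays and the scikit-image implementation of SSIM; the 0.5 binarization threshold for the Dice coefficient is an illustrative assumption.

    ```python
    # Dice coefficient on binarized volumes and SSIM via scikit-image.
    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def dice_coefficient(pred, truth, threshold=0.5):
        """Dice = 2 * |A and B| / (|A| + |B|) after thresholding both volumes."""
        a = pred > threshold
        b = truth > threshold
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

    def evaluate(pred_volume, truth_volume):
        d = dice_coefficient(pred_volume, truth_volume)
        s = ssim(pred_volume, truth_volume,
                 data_range=float(truth_volume.max() - truth_volume.min()))
        return d, s
    ```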

    Conclusion:
    In this study, we propose a generator that takes only two bi-planar X-ray images as input and satisfactorily generates the corresponding three-dimensional spinal model. Our data indicate that this generator can successfully convert two-dimensional images into a three-dimensional model.

    Chapter 1 Introduction 1
    1.1 Spine 1
    1.1.1 Function and structure of the spine 1
    1.1.2 Scoliosis 2
    1.1.3 Cobb's angle 3
    1.2 Medical imaging 4
    1.2.1 X-ray image 5
    1.2.2 Computed tomography (CT) 6
    1.2.3 X-ray vs. CT 7
    1.2.4 Current problems with diagnosing scoliosis by medical images 7
    1.3 The EOS imaging system 8
    1.3.1 Advantage of the EOS imaging system 8
    1.3.2 The hardware part of the EOS system 9
    1.3.3 The software part of the EOS system 10
    1.3.4 The problems of the EOS imaging system 16
    1.4 Deep learning 17
    1.4.1 Autoencoder 17
    1.4.2 Reconstruct 3D model from 2D images with deep learning 19
    1.4.3 Cases of using deep learning in medical imaging in recent years 20
    1.5 Specific Aims 20
    Chapter 2 Material and Methods 22
    2.1 Data Acquisition 22
    2.1.1 The reason why we choose a public dataset 22
    2.1.2 The Cancer Imaging Archive (TCIA) 22
    2.1.3 SpineWeb 23
    2.2 Data preprocessing and the preparation for training 23
    2.2.1 Data selection and zero padding 23
    2.2.2 Normalize the CT data into the same size by linear interpolation 24
    2.2.3 Get the simulated bi-planar X-ray images by the forward projection 25
    2.2.4 Image enhance by the gamma correction 26
    2.2.5 Separate the spine from the ribs 27
    2.2.5.1 Pick out the bone in the CT data 27
    2.2.5.2 Semi-auto segmenting algorithm 29
    2.3 The neural network structure of the generator 31
    2.3.1 Encoder part 31
    2.3.2 Decoder part 32
    2.3.3 The basic architecture of the generator 33
    2.3.4 Some adjustments for the generator 34
    2.3.4.1 Activation function at the end of the generator 34
    2.3.4.2 Input data of the generator 35
    2.4 Method of validation 36
    2.4.1 Dice coefficient 36
    2.4.2 Structural similarity (SSIM) 36
    2.5 Operating Environment 37
    Chapter 3 Results and Discussion 38
    3.1 Simulated bi-planar X-ray images 38
    3.1.1 Gamma correction 39
    3.2 Semi-auto spinal segmenting algorithm 40
    3.2.1 Valid by the data from SpineWeb 40
    3.2.2 Apply our segmenting algorithm on the training data 41
    3.3 The result of the spine reconstruction 42
    3.3.1 Compare with different activation function 42
    3.3.2 Compare with different input data 44
    3.4 Summary 46
    3.5 Future works 47
    Chapter 4 Conclusion 48
    References 49

    Full text availability: on campus, open access from 2024-01-01; off campus, not available.
    The electronic thesis has not been authorized for public release; please consult the library catalog for the print copy.