
Author: 陳香君 (Chen, Hsiang-Chun)
Thesis Title: 利用先驗知識導向的生成對抗式網路增強醫療影像辨識品質:以錐束電腦斷層掃描為例 (REGAN for Medical Image Diagnostic Quality Enhancement: A Case Study of CBCT)
Advisor: 蔣榮先 (Chiang, Jung-Hsien)
Degree: Master
Department: Institute of Medical Informatics, College of Electrical Engineering and Computer Science
Year of Publication: 2022
Academic Year of Graduation: 110
Language: English
Number of Pages: 52
Keywords (Chinese): 電腦斷層、生成對抗式網路、影像品質增強、非監督式學習
Keywords (English): Computed Tomography, Generative Adversarial Network, Image Quality Enhancement, Unsupervised Learning
Hits: 86; Downloads: 15
    Radiation therapy is a medical treatment for tumors in cancer patients. A radiation therapy plan comprises multiple treatment fractions, and during each fraction the radiation oncologist consults both computed tomography (CT) and cone-beam computed tomography (CBCT) images. The CBCT image is acquired at the time of that fraction; it is representative of the patient's anatomy at the moment of treatment and therefore offers higher positioning accuracy. The CT image, in contrast, is acquired before the plan begins and has lower positioning accuracy; the physician uses it to design the patient's treatment plan and to simulate the course of radiation therapy. By consulting the CBCT image, the radiation oncologist raises the positioning accuracy of each fraction and thus the effectiveness of the treatment.
    CBCT is used in a wide range of clinical settings, including orthopedics, dentistry, and surgery. Because CBCT collects information about the body differently from conventional CT, it offers the advantages of low cost and a short scan time, but it suffers from poorer image quality and a tendency to produce artifacts, which reduce the interpretability of the images. These shortcomings affect many clinical applications and also compromise the effectiveness of radiation therapy.
    Thanks to advances in hardware computing power and the success of deep learning, many studies have tried to improve CBCT image quality with deep convolutional neural networks. Depending on whether paired images are available, these studies fall into supervised and unsupervised learning. Unsupervised learning has a lower training barrier and can cope with the varying degrees of anatomical variation across body regions, but its designs usually carry a large number of parameters, which makes deployment difficult. Although research on CBCT image enhancement is flourishing, there is at present no adequate metric that quantitatively evaluates the quality of these deep learning models, and the existing metrics fail to reflect how physicians judge image quality. Previous studies therefore remain some distance from clinical adoption, even when their results score remarkably well on those metrics.
    The goal of this research is to design a lightweight unsupervised framework to enhance CBCT image quality. The framework adopts a generative adversarial network as its primary architecture and incorporates prior knowledge of CT images (the constrained nature of artifacts, the high contrast of CT, and the continuity between CT slices) to guide model learning. In addition to quantifying model performance with metrics, this research adds a subjective scoring experiment by physicians to compensate for the limitations of those metrics, which cannot effectively reflect experts' views of the images. Because the pelvic cavity contains the most diverse tissues in the human body, pelvic CBCT images were chosen as the validation target. The enhancement results of this research are clearly superior to previous designs, and in the qualitative evaluation they received higher subjective satisfaction than the original CBCT images, confirming the clinical effectiveness of the proposed method. In addition, this research found that well-designed prior knowledge clearly benefits model training.

    Radiation therapy is a cancer treatment that kills cancer cells or shrinks tumors with high-dose radiation. A radiation therapy plan comprises multiple fractions, and both computed tomography (CT) and cone-beam computed tomography (CBCT) are used for each treatment fraction. A CBCT scan is obtained right before the radiation treatment and provides high positioning accuracy for the patient's anatomic structures; a CT scan, on the other hand, is acquired before the radiation therapy plan is designed, and the oncologist uses it to design the plan. During each fraction, the oncologist adapts the plan with CBCT to improve the positioning accuracy of the anatomic structures and make the treatment more effective.
    CBCT has various clinical applications, including dental pathology assessment and image-guided surgery. Because CBCT can acquire enough information for image reconstruction with a low radiation dose, it offers the advantages of a rapid scan time and a lower dose than conventional CT. Unfortunately, its downsides are degraded image quality and artifacts. This overall degradation hinders the diagnostic performance of CBCT and has significant negative impacts on its clinical applications.
    Owing to the development of GPUs and the great success of deep convolutional neural networks (DCNNs) at image-related tasks such as recognition and object detection, DCNN-based approaches for improving CBCT image quality have sprung up. These approaches can be categorized as supervised learning if they use paired data and unsupervised learning if they use unpaired data. Given how impractical it is to acquire paired CT and CBCT scans in real-world scenarios, unsupervised learning is the better choice. However, current unsupervised approaches rely on heavy training frameworks that are unfriendly to deployment. In addition, the available metrics only partially reflect experts' assessment of image quality, so even though previous research has achieved high metric scores, the enhanced CBCT images are not yet qualified for clinical use.
    Motivated by the clinical need for CBCT image quality improvement and the limited achievements of previous work, this research proposes REGAN, a lightweight unsupervised GAN-based framework with a prior-oriented auxiliary Regularizer and a sharpness Enhancer, to enhance CBCT image quality. REGAN uses a vanilla image-to-image translation GAN as its primary architecture and encodes CT priors. Three characteristics of CT (the consistency of air and bone regions, the high sharpness, and the sequential information across slices) are integrated into the primary architecture in different forms and guide the training process.
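    To make the framework description above more concrete, here is a minimal sketch, in PyTorch, of how a generator objective of this kind could combine a vanilla adversarial term with prior-oriented terms for air/bone consistency and sharpness. The thresholds, loss weights, and helper names (air_bone_regularizer, sharpness_term, generator_loss) are illustrative assumptions, not the actual REGAN implementation or its hyperparameters.

```python
# Hedged sketch: combining a vanilla adversarial loss with CT-prior terms.
# All thresholds and weights below are illustrative assumptions.
import torch
import torch.nn.functional as F

def air_bone_regularizer(cbct, enhanced, air_thr=-0.8, bone_thr=0.6):
    """Penalize changes inside air and bone regions, which should stay
    consistent between the input CBCT and the enhanced output.
    Intensities are assumed to be normalized to [-1, 1]."""
    mask = (cbct < air_thr).float() + (cbct > bone_thr).float()
    return (mask * (enhanced - cbct).abs()).mean()

def sharpness_term(img):
    """Mean absolute spatial gradient, a simple proxy for edge sharpness."""
    dx = (img[..., :, 1:] - img[..., :, :-1]).abs().mean()
    dy = (img[..., 1:, :] - img[..., :-1, :]).abs().mean()
    return dx + dy

def generator_loss(d_fake, cbct, enhanced, lam_reg=10.0, lam_sharp=1.0):
    """Adversarial term plus the two prior-oriented terms."""
    adv = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
    reg = air_bone_regularizer(cbct, enhanced)
    # Only penalize outputs that are blurrier than the input CBCT.
    sharp = F.relu(sharpness_term(cbct) - sharpness_term(enhanced))
    return adv + lam_reg * reg + lam_sharp * sharp

# Dummy usage: a batch of 2 single-channel 64x64 slices.
cbct = torch.rand(2, 1, 64, 64) * 2 - 1
enhanced = torch.rand(2, 1, 64, 64) * 2 - 1
d_fake = torch.randn(2, 1)  # discriminator logits for the enhanced slices
print(generator_loss(d_fake, cbct, enhanced))
```

    In this sketch the sharpness term only discourages blurring relative to the input; the thesis's Sharpness Enhancer and Sequence Fusion Module are separate network components and are not reproduced here.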
    The experiments are designed to validate the aims of this research. They show that REGAN outperforms other unsupervised frameworks for CBCT image quality enhancement with fewer parameters, and the ablation study demonstrates the efficacy of the CT priors. Since traditional metrics cannot adequately reflect experts' subjective assessment, experts' assessments are also conducted to complement the metrics. The results show that REGAN greatly improves the image quality of CBCT.
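    As a concrete example of the traditional reference-based metrics whose limitations motivate the experts' assessment, the sketch below computes PSNR and SSIM between an enhanced CBCT slice and a registered CT slice with scikit-image. The array names and the normalized [0, 1] intensity range are assumptions for illustration, not the thesis's evaluation pipeline.

```python
# Hedged sketch: reference-based image quality metrics on synthetic data.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

rng = np.random.default_rng(0)
ct_slice = rng.random((256, 256))  # stand-in for a registered CT slice
enhanced_slice = np.clip(ct_slice + 0.05 * rng.standard_normal((256, 256)), 0, 1)

psnr = peak_signal_noise_ratio(ct_slice, enhanced_slice, data_range=1.0)
ssim = structural_similarity(ct_slice, enhanced_slice, data_range=1.0)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")
```

    Higher PSNR and SSIM indicate closer pixel-wise and structural agreement with the reference CT, but as noted above, such scores do not necessarily track physicians' judgments of diagnostic quality.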

    Abstract (Chinese)
    Abstract
    Acknowledgements
    Content
    List of Tables
    List of Figures
    Chapter 1. Introduction
        1.1 Background
        1.2 Motivation
        1.3 Research Objectives
        1.4 Thesis Organization
    Chapter 2. Related Works
        2.1 Radiation Therapy
            2.1.1 Computed Tomography
            2.1.2 Cone-beam Computed Tomography
            2.1.3 CBCT Image Degradation: Artifacts
            2.1.4 Image-guided Radiation Therapy
        2.2 Image-to-image Translation
            2.2.1 Obstructions Removal with GAN
        2.3 CT Image Quality Improvement
            2.3.1 Metal Artifact Reduction on CT
            2.3.2 General Image Quality Improvement on CBCT
    Chapter 3. Preliminary Study
        3.1 Unet for CBCT Enhancement
        3.2 Preliminary Results
        3.3 The Failure of Preliminary Study
    Chapter 4. GAN with Auxiliary Regularizer and Sharpness Enhancer
        4.1 Data Preprocessing
        4.2 REGAN
        4.3 Primary Architecture
        4.4 Auxiliary Regularizers
            4.4.1 Bone Regularizer Refinement
        4.5 Sharpness Enhancer
        4.6 Sequence Fusion Module
    Chapter 5. Experiments
        5.1 Experimental Designs
            5.1.1 Datasets
            5.1.2 Implementation Details of Experiments
            5.1.3 Evaluation
        5.2 Compare with Unsupervised Framework
        5.3 Framework Variants
            5.3.1 Sequence Fusion Mechanism
            5.3.2 Auxiliary Regularizer
            5.3.3 Merge Discriminator and Sharpness Enhancer
            5.3.4 PatchGAN Discriminator
            5.3.5 Multiscale Discriminator
        5.4 Ablation Studies
            5.4.1 Auxiliary Regularizer
            5.4.2 Sharpness Enhancer
            5.4.3 Sequence Fusion Mechanism
            5.4.4 Identity Mapping
        5.5 Case Study
            5.5.1 Experts' Assessment
            5.5.2 Slices with Tumor
        5.6 Research Limitations
    Chapter 6. Conclusions
        6.1 Conclusions
        6.2 Future Works
    Reference

    Full-text availability: on campus, open access immediately; off campus, open access immediately.