
Graduate Student: Yang, Ching-Juei (楊境睿)
Thesis Title: 應用對抗生成網路建立脊椎二維至三維影像自動化轉換建模
Implementation of Generative Adversarial Network (GAN) for Automatic 2D/3D Registration Modeling of Spine
Advisor: Su, Fong-Chin (蘇芳慶)
Degree: Doctoral (Ph.D.)
Department: College of Engineering - Department of BioMedical Engineering
Year of Publication: 2024
Academic Year of Graduation: 112 (ROC calendar)
Language: English
Number of Pages: 99
Keywords (Chinese): 三維脊椎, 生成對抗網路, 雙平面X光
Keywords (English): 3-dimensional (3D) spine, generative adversarial network (GAN), bi-planar X-ray
ORCID: 0000-0001-5539-6899
ResearchGate: Ching-Juei Yang

    It is hypothesized that human understanding of an image stems from the amalgamation of several differently weighted features, which together enable an object to be sensed and discerned. Traditional imaging engineering technology primarily processes feature signals in the time domain and frequency domain and draws on previous experience to build mapping models for interpretation and analysis. Recently, deep learning has been put forward as an optimal theoretical technology, which, when applied to visual imaging, can automatically extract features from a set of objects recognizable to humans, and feed this information into neural networks for further analysis and modeling. Deep learning is currently being applied to three main aspects of visual imaging, namely: (1) object detection; (2) object segmentation; and (3) object generation. The first critical step common to all three of these tasks is the proper selection and collection of characteristic preimage and image sets that can meet the requirements dictated by the goals of the application. The second critical step involves the design of mapping models, including the design of the architecture and the choice of the feature extraction backbone. In this study, the spine was selected as the target of research, and a fully automated method of generating three-dimensional (3D) spine images from two-dimensional (2D) bi-planar images was realized. Traditional 3D imaging of skeletal structures requires computed tomography (CT), which exposes patients to elevated levels of radiation. A review of the published literature over the past four decades revealed that the conversion of 2D images to 3D representations has consistently been a major topic in visual imaging research, with the earliest methods utilizing linear conversion, later followed by statistical modeling, and more recently, deep learning. 
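The preimage/image pairing described above (orthogonal 2D projections as the input set, the 3D volume as the target) can be illustrated with a minimal sketch. This is an assumption-laden toy example, not the thesis's actual pipeline: the function name and the projection-by-summation shortcut are illustrative only.

```python
import numpy as np

# Illustrative sketch (hypothetical): each training sample pairs two
# orthogonal 2D projections (the "preimage") with the ground-truth
# 3D volume (the "image"). Real simulated X-rays would use a proper
# projection model; plain axis sums stand in for that here.
def make_sample_pair(ct_volume: np.ndarray):
    """Collapse a 3D volume into simulated frontal and lateral projections."""
    frontal = ct_volume.sum(axis=0)  # collapse the anteroposterior axis
    lateral = ct_volume.sum(axis=1)  # collapse the mediolateral axis
    return (frontal, lateral), ct_volume

# Example: a toy 64^3 volume yields two 64x64 projections.
vol = np.random.rand(64, 64, 64)
(x_front, x_lat), y = make_sample_pair(vol)
```

A dataset of such pairs is what the mapping model is then trained on: the network must learn the inverse of the projection, recovering the 3D image from its 2D preimages.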
These approaches all sought to establish a generalized mapping model, but most existing models still rely on the manual identification of key bony landmarks to achieve acceptable accuracy in 3D spinal modeling. This thesis utilized the experience of imaging experts to adjust spinal source-domain signals and leveraged engineering techniques to develop an automated method of inferring 3D images from orthogonal 2D images, with generative adversarial networks (GANs) serving as the underlying architecture. This study hypothesized that a ground truth preimage of the 3D spinal structure exists in the latent space, and that current technology cannot fully reach it: both the empirical adjustment of the raw 2D signals by imaging experts and the choice of deep learning architecture and feature extraction backbone may affect how closely a model approaches this ground truth preimage, thereby influencing the training outcomes of the mapping model. Based on this hypothesis, this research used a GAN as the basic architecture and designed different input signal states and feature extraction backbones to establish an automated framework for inferring 3D spinal structure from bi-planar X-ray images. The expertise of clinical imaging specialists and the reasoning of engineers were integrated, and potential challenges and corresponding solutions encountered when building deep learning models for clinical applications were described. This study provides proof of concept, from theory to practice, for the conversion of spinal images from 2D to 3D; the key techniques involved not only theoretical engineering models but also the domain knowledge of imaging experts in screening and adjusting the source-domain signals to achieve the desired application goals. This study does not aim to replace current imaging modalities for 3D spinal assessment, but rather seeks to supplement routine X-ray examination by offering additional diagnostic information.
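The adversarial setup described above pits a generator (2D projections in, 3D volume out) against a discriminator. As a hedged NumPy sketch of the generic GAN objectives only: the `bce` helper, the reconstruction weight `lam`, and the exact loss forms are assumptions for illustration, not the thesis's actual D_loss and G_loss.

```python
import numpy as np

# Generic binary cross-entropy, clipped for numerical safety.
def bce(pred, target, eps=1e-7):
    pred = np.clip(pred, eps, 1 - eps)
    return -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))

def d_loss(d_real, d_fake):
    # Discriminator objective: push D(real) toward 1 and D(G(x)) toward 0.
    return bce(d_real, np.ones_like(d_real)) + bce(d_fake, np.zeros_like(d_fake))

def g_loss(d_fake, fake_vol, real_vol, lam=10.0):
    # Generator objective: fool the discriminator, plus a reconstruction
    # term tying the generated 3D volume to the ground truth
    # (the weight lam is illustrative, not a value from the thesis).
    adv = bce(d_fake, np.ones_like(d_fake))
    recon = np.mean(np.abs(fake_vol - real_vol))
    return adv + lam * recon
```

In a conditional setup like this, the reconstruction term anchors the generator to the paired ground truth, while the adversarial term encourages realistic volumetric detail; training alternates between minimizing `d_loss` and `g_loss`.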

    中文摘要 (Chinese Abstract); Abstract; 致謝 (Acknowledgements); List of Tables; List of Figures
    CHAPTER 1 INTRODUCTION
      1.1 Early related work
      1.2 Fluoroscopy imaging system
      1.3 Bi-planar 2D/3D registration imaging system of spine
      1.4 Deep learning technology
    CHAPTER 2 APPLIED MATHEMATICS OF MODELING
      2.1 Definition of sets by features
      2.2 Establishment of mapping relationships
      2.3 Optimization function
      2.4 Supervised machine learning
      2.5 Image and preimage
    CHAPTER 3 MATERIALS
      3.1 Data collection
      3.2 Ground truth (GT) segmentation of thoracic spine
        3.2.1 Semi-automatic segmentation of thoracic spine
      3.3 Generation of simulated chest X-ray images
      3.4 Cropping of lumbar spine CT and real X-ray images
    CHAPTER 4 METHODS
      4.1 GAN modeling
        4.1.1 Generator design
        4.1.2 Discriminator design
      4.2 Loss function of GAN modeling
        4.2.1 U-Net validation loss (Val_loss)
        4.2.2 Discriminator loss (D_loss)
        4.2.3 Generator loss (G_loss)
      4.3 Data binarization for output signals of GAN
      4.4 Training based on pre-trained models
      4.5 Statistics
        4.5.1 Evaluation metrics
        4.5.2 Dataset splitting
        4.5.3 5-fold cross validation
      4.6 3D inference method
      4.7 Hardware and software equipment
    CHAPTER 5 RESULTS
      5.1 Trend of loss
      5.2 Prediction of 3D thoracic spine from simulated X-ray images
        5.2.1 Densenet121 with enhanced and nonenhanced groups
        5.2.2 Densenet121 with different training sizes
        5.2.3 Comparison of different feature extraction backbones: Densenet121, ResNet101, and ResNet50
      5.3 Prediction of 3D lumbar spine from real X-ray images
      5.4 3D morphological assessment
    CHAPTER 6 DISCUSSIONS AND CONCLUSIONS
      6.1 Discussions
        6.1.1 Clinical demands of medical imaging of spine
        6.1.2 Applicability of clinical deployment
        6.1.3 Theoretical considerations of GAN
      6.2 Study limitations
        6.2.1 Framework of research
        6.2.2 Validation on real X-ray of different segments of spine
        6.2.3 Optimization of bone-enhanced X-ray images
      6.3 Future studies
        6.3.1 Fundamental mathematical theory
        6.3.2 Applied science considerations
      6.4 Conclusions
    References
    Appendix
      Appendix A. MATLAB codes of XrayEnhance
      Appendix B. Python codes of evaluation metrics
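The table of contents lists Python code for evaluation metrics (Appendix B) applied to the binarized 3D output. As one illustrative example of such a metric, not the thesis's actual code, a Dice coefficient for binarized volumes might look like this:

```python
import numpy as np

# Illustrative sketch: Dice coefficient, a common overlap metric for
# binarized 3D volumes (2 * intersection / total foreground voxels).
def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / denom if denom else 1.0

# Two 4x4x4 masks of 32 voxels each, overlapping in 16 voxels.
a = np.zeros((4, 4, 4), dtype=bool); a[:2] = True
b = np.zeros((4, 4, 4), dtype=bool); b[1:3] = True
print(dice_coefficient(a, b))  # 2*16 / (32+32) = 0.5
```

Dice ranges from 0 (no overlap) to 1 (identical masks), which makes it convenient for comparing binarized GAN output against the ground truth segmentation across cross-validation folds.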

