
Graduate Student: 陳智凱 (Chen, Chih-Kai)
Thesis Title: 多模影像融合技術應用於板機指經皮解離手術之超音波導引訓練系統
Multimodality Image Fusion for Ultrasound Guided Surgery Trainer: An Application to Percutaneous Trigger Finger Release
Advisor: 孫永年 (Sun, Yung-Nien)
Degree: Master
Department: College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering
Year of Publication: 2012
Graduation Academic Year: 100 (ROC calendar)
Language: English
Number of Pages: 96
Chinese Keywords: 多模態影像, 影像融合, 鏈結式機制, 輻狀基底函數
English Keywords: multimodality images, image fusion, joint mechanism, radial basis function
    Because multimodality image fusion can integrate the anatomical information provided by images of multiple modalities and present the complete three-dimensional geometry of internal body tissues, it has come to play an important role in clinical diagnosis and treatment. However, images of different modalities usually differ in pose and in intensity characteristics, so image fusion remains technically challenging. In this thesis, we propose a novel image fusion system that integrates magnetic resonance and ultrasound images of the finger joints and applies the fused result to a training system for percutaneous trigger finger release surgery.
    The proposed image fusion system consists of three main steps: construction of a three-dimensional finger model from magnetic resonance images, registration of the magnetic resonance and ultrasound images, and construction of image data for the hand phantom. In the finger model construction step, a registration-based segmentation approach is used to build, from hand magnetic resonance images acquired in different postures, a three-dimensional finger model comprising the bones, tendons, skin, and the joint motion system. Next, an articulated registration technique aligns the previously constructed three-dimensional finger model to the three-dimensional ultrasound images, and the resulting transformations are used to fuse the magnetic resonance and ultrasound images. Finally, a two-stage radial basis function interpolation constructs image data that match the hand phantom. Through these three steps, the hand phantom carries both magnetic resonance and ultrasound image information and can simulate the surgical environment of the training system. By further incorporating an optical tracking system to integrate the coordinate systems of the images, the phantom, and the surgical instruments, a training system is built that lets surgeons practice the surgical workflow. As the experimental results show, appropriately introducing structural information and anatomical models in the proposed method overcomes the pose and intensity differences between image modalities and enables successful image fusion. The technique can also be applied to fuse images of other joint structures.
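    The joint mechanism mentioned above can be made concrete with a minimal Python sketch: per-joint rigid transforms are chained along the finger so that a rotation estimated at a proximal joint is inherited by the distal phalanges, which is the constraint exploited by the articulated registration. All joint centers, axes, angles, and bone vertices below are placeholder values, not data from the thesis.

    import numpy as np

    def rigid_about_joint(axis, angle_deg, center):
        # 4x4 homogeneous rotation of angle_deg about `axis` through the point `center`.
        axis = np.asarray(axis, dtype=float)
        axis = axis / np.linalg.norm(axis)
        a = np.deg2rad(angle_deg)
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        R = np.eye(3) + np.sin(a) * K + (1.0 - np.cos(a)) * (K @ K)  # Rodrigues' formula
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = center - R @ center  # rotate about the joint center, not the origin
        return T

    def apply(T, pts):
        # Apply a 4x4 homogeneous transform to an (N, 3) array of points.
        return (np.c_[pts, np.ones(len(pts))] @ T.T)[:, :3]

    # Hypothetical joint centers (mm), flexion axes, and flexion angles.
    T_mcp = rigid_about_joint(axis=[1, 0, 0], angle_deg=20.0, center=np.array([0.0, 0.0, 0.0]))
    T_pip = rigid_about_joint(axis=[1, 0, 0], angle_deg=35.0, center=np.array([0.0, 0.0, 40.0]))

    # Toy stand-ins for the proximal and middle phalanx surface vertices.
    proximal = np.random.rand(200, 3) * [10, 10, 40]
    middle = np.random.rand(200, 3) * [8, 8, 25] + [0, 0, 40]

    # The proximal phalanx moves with the MCP joint alone; the middle phalanx
    # inherits that motion and adds its own PIP rotation, so adjusting one joint
    # parameter at a time keeps the whole finger model anatomically connected.
    proximal_posed = apply(T_mcp, proximal)
    middle_posed = apply(T_mcp @ T_pip, middle)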

    Multimodality image fusion has become important in clinical diagnosis and treatment because it can integrate anatomical information acquired from various image modalities to provide the complete 3D geometry of human internal structures. However, fusion is usually challenging due to pose (i.e., orientation and position) and intensity variations among different image modalities. In this thesis, we propose a new system for fusing magnetic resonance (MR) and ultrasound (US) images of finger joints, and apply it to construct a surgical trainer for ultrasound-guided percutaneous trigger finger release (UGPR).
    The proposed system consists of three main steps: three-dimensional (3D) finger model construction from MR images, registration of the MR and US images, and hand phantom image construction. First, a drivable 3D finger model, containing the bone, tendon, and skin surfaces together with the finger joint mechanism, is constructed from multi-postural hand MR images using a registration-based segmentation strategy. Then, the 3D finger model is aligned to the 3D US images via articulated registration, and the resulting transformations are used to fuse the 3D MR and US images. Finally, the fused images are warped to match the spatial configuration of the hand phantom using a two-staged radial basis function (RBF) warping. As a result, the hand phantom can be provided with both MR and US image contents to realistically simulate the surgical environment of UGPR. Combined with an optical tracking system that integrates the coordinate systems of the images, the phantom, and the surgical tools, the system forms a surgical trainer that improves clinicians' proficiency in UGPR. Experimental results show that the proposed method overcomes the pose and intensity variations among different image modalities and fuses the images by referring to the structural information of the anatomical models (e.g., bone, tendon, and skin). It also has great potential for fusing multimodality images of other joint structures.
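    As a rough illustration of the two-staged RBF warping, the following Python sketch first interpolates a coarse displacement field from bone-landmark correspondences and then refines it with skin landmarks fitted to the residual error. The landmark sets are synthetic, and the use of SciPy's RBFInterpolator with a thin-plate-spline kernel is an assumption for illustration rather than the thesis's actual implementation.

    import numpy as np
    from scipy.interpolate import RBFInterpolator

    rng = np.random.default_rng(0)

    # Hypothetical landmark correspondences (mm): positions in the fused MR/US
    # volume and their counterparts on the hand phantom (e.g., sampled from the
    # bone and skin surfaces).
    bone_src = rng.uniform(0.0, 50.0, (40, 3))
    bone_dst = bone_src + rng.normal(0.0, 1.0, (40, 3))
    skin_src = rng.uniform(0.0, 50.0, (60, 3))
    skin_dst = skin_src + rng.normal(0.0, 2.0, (60, 3))

    # Stage 1: coarse displacement field driven by the bone landmarks only.
    stage1 = RBFInterpolator(bone_src, bone_dst - bone_src, kernel="thin_plate_spline")

    # Stage 2: refine with skin landmarks, interpolating only the residual error
    # that remains after the first stage has been applied.
    skin_after_stage1 = skin_src + stage1(skin_src)
    stage2 = RBFInterpolator(skin_src, skin_dst - skin_after_stage1, kernel="thin_plate_spline")

    def warp(points):
        # Map coordinates from fused-image space to phantom space (stage 1 + stage 2).
        points = np.asarray(points, dtype=float)
        return points + stage1(points) + stage2(points)

    # e.g., warp voxel-center coordinates before resampling the fused volume onto the phantom.
    print(warp(rng.uniform(0.0, 50.0, (5, 3))))

    Fitting the second stage to residual displacements keeps the skin-driven refinement from undoing the bone-driven alignment established in the first stage.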

    摘要 (Chinese Abstract) / ABSTRACT / ACKNOWLEDGEMENTS / CONTENTS / LIST OF FIGURES / LIST OF TABLES
    CHAPTER 1 INTRODUCTION
      1.1 Background and Motivation
      1.2 Related Works
        1.2.1 Image Fusion
        1.2.2 Trigger Finger Release Surgery
      1.3 Contributions
      1.4 Overview of the Proposed Method and Thesis Organization
    CHAPTER 2 IMAGE PREPARATION
      2.1 Ultrasound
      2.2 Magnetic Resonance
      2.3 Computed Tomography
    CHAPTER 3 FINGER MODEL CONSTRUCTION FROM MR IMAGES
      3.1 Hand Bone Segmentation
        3.1.1 Bone Segmentation
        3.1.2 Coordinate System Definition
      3.2 Finger Joint System
        3.2.1 Joint Mechanism
        3.2.2 Joint Parameter Estimation
      3.3 Tendon Segmentation
      3.4 Skin Surface Segmentation
    CHAPTER 4 MODEL-BASED FUSION OF MR AND US IMAGES
      4.1 Volar Bone Surface Extraction
      4.2 Position Initialization
      4.3 Transformation Optimization
        4.3.1 Bone Boundary Enhancement
        4.3.2 Articulated Registration of MR and US
    CHAPTER 5 IMAGE CONSTRUCTION AND SURGICAL TRAINER DESIGN
      5.1 Image Construction for Hand Phantom
        5.1.1 Bone and Skin Segmentation from CT Images
        5.1.2 Two-staged Radial Basis Function Warping
      5.2 Trainer Implementation
    CHAPTER 6 EXPERIMENTAL RESULTS
      6.1 Accuracy in Finger Model Construction
      6.2 Registration Accuracy in Image Fusion
      6.3 Accuracy Evaluation in Two-staged RBF Warping
    CHAPTER 7 CONCLUSIONS AND FUTURE WORKS
      7.1 Conclusions
      7.2 Future Works
    REFERENCES
    VITA


    On-campus access: available from 2017-08-31
    Off-campus access: not available
    The electronic thesis has not been authorized for public release; please consult the library catalog for the printed copy.