| Graduate Student: | 王宥盛 Wang, You-Sheng |
|---|---|
| Thesis Title: | 基於FDI牙位表示的全景及根尖X光片之牙周骨損失自動評估 (FDI-based Deep Learning Method in Automatic Bone Loss Evaluation for Panoramic and Periapical Radiographs) |
| Advisor: | 洪昌鈺 Horng, Ming-Huwi |
| Co-advisor: | 孫永年 Sun, Yung-Nien |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Department of Computer Science and Information Engineering |
| Year of Publication: | 2024 |
| Academic Year: | 112 |
| Language: | English |
| Pages: | 76 |
| Chinese Keywords: | 卷積神經網路, 變換器, 全景X光片, 根尖X光片, 牙周骨損失, 牙周病 |
| English Keywords: | Convolutional Neural Network, Transformer, Panoramic Radiographs, Periapical Radiographs, Alveolar Bone Loss, Periodontal Disease |
Periodontal disease is a common chronic condition in modern society, and early prevention and diagnosis are crucial for effectively addressing the health problems it causes. However, traditional methods for detecting and evaluating periodontal disease rely on trained professionals performing manual observation and judgment, making the detection process not only tedious but also time-consuming and labor-intensive.
Clinically, patients typically first undergo panoramic radiography; a doctor then assesses the periodontal severity of the teeth in the images to determine whether further periapical radiography is needed. In the traditional workflow, doctors manually calculate the periodontal bone loss rate or directly assess the stage of bone loss. This is not only time-consuming but also demands substantial experience, and results vary with each doctor's diagnostic background.
Therefore, this study utilizes deep learning techniques to assist doctors in calculating periodontal bone loss by detecting dental keypoints. The first part of the study detects the position and category of each tooth in both panoramic and periapical radiographs and uses them to calculate the periodontal bone loss rate. The second part detects three types of keypoints on each tooth in panoramic and periapical radiographs, from which the bone loss rate is computed. The third part analyzes the relationship between teeth of the same category in panoramic and periapical radiographs, providing dentists with stronger diagnostic references. Experimental results indicate that the proposed method outperforms most doctors in diagnostic precision, demonstrating its significant value in assisting clinical diagnosis.
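The abstract does not spell out how the bone loss rate is derived from the three detected keypoints per tooth. A minimal sketch, assuming the common radiographic definition (bone loss measured from the cemento-enamel junction to the alveolar bone level, normalized by root length from CEJ to apex) and hypothetical keypoint names and pixel coordinates:

```python
import math

def dist(p, q):
    """Euclidean distance between two (x, y) image coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def bone_loss_percentage(cej, bone_level, apex):
    """Radiographic bone loss as a percentage of root length.

    cej        -- cemento-enamel junction keypoint (x, y), assumed
    bone_level -- alveolar bone crest keypoint (x, y), assumed
    apex       -- root apex keypoint (x, y), assumed
    """
    root_length = dist(cej, apex)
    if root_length == 0:
        raise ValueError("CEJ and apex coincide; cannot normalize")
    return 100.0 * dist(cej, bone_level) / root_length

# Hypothetical keypoints on one tooth, in pixel coordinates:
# CEJ (10, 20), bone level (10, 26), apex (10, 50)
# -> 6 / 30 * 100 = 20% bone loss
print(bone_loss_percentage((10, 20), (10, 26), (10, 50)))
```

This is only an illustration of the ratio computation; the thesis's actual keypoint definitions and measurement protocol may differ.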
On-campus access: available from 2029-08-30.