| Graduate Student: | 李東璋 Li, Dong-Zhang |
|---|---|
| Thesis Title: | 發展肝臟纖維化分類方法基於遷移學習與卷積神經網路於超音波影像 Development of liver fibrosis classification methods based on transfer learning and convolutional neural network for ultrasound images |
| Advisor: | 王士豪 Wang, Shyh-Hau |
| Degree: | Master |
| Department: | College of Electrical Engineering and Computer Science - Institute of Medical Informatics |
| Year of Publication: | 2019 |
| Academic Year of Graduation: | 107 |
| Language: | English |
| Pages: | 51 |
| Chinese Keywords: | 超音波影像、纖維化肝臟、深度學習、卷積神經網路、可視化 |
| English Keywords: | liver fibrosis, deep learning, convolutional neural network, visualization, ultrasound image |
According to World Health Organization statistics, about 300 million people worldwide suffer from liver-related diseases, and roughly one million die of liver disease each year. In Taiwan, statistics from the Ministry of Health and Welfare show that more than ten thousand people die of liver disease annually, making it the ninth leading cause of death overall and the second leading cause of cancer death. Liver diseases include liver fibrosis, hepatitis, cirrhosis, and liver cancer. When chronic inflammation destroys hepatocytes, fibroblasts in the liver are stimulated to produce collagen that fills the space left by the necrotic cells; if this accumulation continues, the liver progresses toward irreversible cirrhosis. In recent years, the medical concept of early detection and treatment has become widespread: rather than bearing the enormous medical cost after disease onset, detecting and treating the disease early places a smaller burden on both the healthcare system and the individual. Abdominal ultrasound imaging is an effective tool for examining the liver, and this study uses a non-invasive ultrasound system to acquire liver images from Sprague Dawley rats for analysis. In deep learning, the convolutional neural network is currently the most effective tool for extracting information from data; representative architectures include LeNet, AlexNet, VGGNet, GoogLeNet, ResNet, and SENet. Each of these networks has its own strengths and weaknesses, and strong discriminative ability demands substantial computing resources and time, so the ability to choose a network suited to the data is essential. After classifying the degree of fibrosis, the features the network has learned, that is, the key features in the images, are visualized to open the black box of deep learning and push the research a step further.
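The feature extraction that a convolutional layer performs, as described above, can be sketched in plain NumPy. This is an illustrative toy, not code from the thesis: `conv2d`, `edge_kernel`, and the synthetic 8×8 patch are all hypothetical stand-ins showing how a small kernel slides over an image and responds to local texture, the kind of low-level cue a trained network builds on.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation: slide the kernel over the image
    and take a weighted sum at every position."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: keep positive responses, zero out the rest."""
    return np.maximum(x, 0.0)

# A hand-made vertical-edge kernel: responds where intensity changes
# from left to right across the window.
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)

# Toy image patch: bright left half, dark right half.
patch = np.zeros((8, 8))
patch[:, :4] = 1.0

feature_map = relu(conv2d(patch, edge_kernel))
print(feature_map.shape)  # (6, 6): valid convolution shrinks each side by kernel size - 1
```

A real CNN learns many such kernels per layer instead of fixing them by hand, and stacks layers so later kernels respond to combinations of earlier features.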
According to the World Health Organization, around 300 million people worldwide have liver-related diseases, and approximately one million people die from them each year. According to the Health Promotion Administration, Ministry of Health and Welfare, liver-related diseases are the ninth leading cause of death in Taiwan and the second leading cause of cancer death. Liver diseases include liver fibrosis, hepatitis, cirrhosis, and liver cancer. When hepatocytes are destroyed by chronic inflammation, hepatic fibroblasts are stimulated to produce collagen that fills the space left by the necrotic cells; continued accumulation drives the liver toward irreversible cirrhosis. In recent years, the idea of timely diagnosis has become popular: timely diagnosis and therapy are practical and place less burden on both society and patients. Abdominal ultrasound imaging is an effective examination tool for the liver, and this study uses a non-invasive ultrasound system to acquire liver images from Sprague Dawley rats. In the deep learning field, the convolutional neural network is a powerful tool for extracting information from data, with representative architectures including LeNet, AlexNet, VGGNet, GoogLeNet, ResNet, and SENet. However, their strong discriminative abilities require large amounts of computation power and time, so choosing a suitable network and finding the trade-off between accuracy and computation time are important. After optimizing the classification accuracy for liver fibrosis, visualizing the key features that the network learns is essential to break open the black box of deep learning and advance the study further.
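The abstract does not spell out which visualization method the thesis uses, but one common way to expose what a classification network attends to is class activation mapping: weight each feature map of the last convolutional layer by its contribution to one class score and sum them into a heatmap. The sketch below uses random NumPy tensors as stand-ins for the network's outputs; `feature_maps`, `fc_weights`, and the five-stage class count are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the last conv layer's output: C feature maps of size H x W.
C, H, W = 4, 7, 7
feature_maps = rng.random((C, H, W))

# Stand-in for classifier weights tying each feature map to a class
# (hypothetically, five fibrosis stages).
n_classes = 5
fc_weights = rng.standard_normal((n_classes, C))

def class_activation_map(feature_maps, fc_weights, class_idx):
    """Weight each feature map by its contribution to one class score,
    sum them, and rescale to [0, 1] to get a coarse heatmap."""
    cam = np.tensordot(fc_weights[class_idx], feature_maps, axes=1)  # (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

heatmap = class_activation_map(feature_maps, fc_weights, class_idx=2)
print(heatmap.shape)  # (7, 7); upsampled to image size, it marks influential regions
```

Upsampling such a heatmap onto the original ultrasound image would highlight which regions drove the network's fibrosis prediction, which is the sense in which visualization "breaks the black box".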