| Graduate Student: | 張瓊文 Zhang, Qiong-Wen |
|---|---|
| Thesis Title: | 基於自我訓練無源模型適應之新穎進步教師模型:應用於組織病理影像以偵測肝臟腫瘤 (A Novel Progressive Teacher Model with the Self-Training-Based Source-Free Model Adaptation: Application in Histopathological Image for HCC Tumor Detection) |
| Advisor: | 鄭國順 Cheng, Kuo-Sheng |
| Co-Advisor: | 詹寶珠 Chung, Pau-Choo |
| Degree: | Master |
| Department: | College of Engineering, Department of BioMedical Engineering |
| Year of Publication: | 2022 |
| Academic Year of Graduation: | 110 |
| Language: | English |
| Number of Pages: | 60 |
| Keywords (Chinese): | 肝臟腫瘤偵測、老師學生模型、無源域資料域適應模型 (liver tumor detection, teacher-student model, source-free domain adaptation model) |
| Keywords (English): | source-free domain adaptation, tumor detection, teacher-student framework, progressive teacher model |
In recent years, many computer-aided methods have been developed for digital pathology image analysis, and approaches based on artificial intelligence models in particular have been widely studied. However, digital pathology images are prone to color variations caused by differences in hospital staining protocols and slide-scanner vendors, which prevents AI models from generalizing across images with different color appearances. Previous architectures proposed for this problem apply domain adaptation so that target-domain data can achieve performance on the source-domain model similar to that of the source-domain data. To avoid manual annotation of the target-domain data, and to avoid using the source-domain data for privacy reasons, this study proposes a domain adaptation model that requires neither the source-domain data nor target-domain labels. First, because target-domain labels are unavailable, this study performs knowledge distillation with a well-trained source-domain model: based on a teacher-student framework, the source-domain model serves as the teacher and provides pseudo labels for the target-domain data, while the student model uses these pseudo labels to learn target-domain features. In addition, this study observes that the target-domain data exhibit a more dispersed feature distribution on the source-domain model; this distribution degrades the correctness of the model's predictions and cannot be remedied by an ordinary teacher-student model. Therefore, this study argues that only a continuously improving teacher can teach a better student, and proposes two loss functions that allow the teacher model to keep improving the accuracy of its pseudo labels throughout the adaptation process, so that the student model learns more accurate target-domain features. Following the idea of consistency learning, the student model is further encouraged to produce similar features for differently augmented versions of the same data, so that it learns the intrinsic features of the target-domain data, reduces its dependence on the teacher's pseudo labels, and ultimately achieves better performance on the target domain. The method is applied to H&E-stained liver pathology image analysis and evaluated under two scenarios, namely different hospitals and different scanners. The experiments show that, in both scenarios, the proposed framework effectively improves target-domain performance, raising the IoU by 7% and 22%, respectively, without any additional annotation cost or any involvement of the source-domain data, and surpasses the performance of a supervised model trained on the target domain. In summary, the proposed method performs domain adaptation while respecting privacy and annotation-cost constraints, and achieves performance comparable to a supervised target-domain model.
Histopathology image analysis suffers from the challenge of stain variance across different hospitals and scanning systems due to differing staining protocols and image acquisition systems. In clinical practice, data-driven deep neural networks are sensitive to such variance and suffer a performance degradation as a result. Prior works on unsupervised domain adaptation usually assume the source dataset to be available when learning an adapted target model. In reality, however, such an assumption rarely holds due to privacy concerns and storage limitations. To solve this problem, the present study proposes a self-training-based source-free domain adaptation framework embedded with a progressive teacher model to handle an unlabeled target dataset in the absence of the source dataset. In particular, a well-trained source model generates pseudo labels for a target-specific student model based on a teacher-student framework. To deal with noisy pseudo labels and dispersed target features caused by distribution discrepancies, two newly proposed loss functions, referred to as the domain center loss and the self-labeling loss, respectively, are used to update the teacher model so that it produces more reliable pseudo labels for the student model. Furthermore, a consistency regularization loss function is used to further improve performance on the target dataset. The feasibility of the proposed approach is demonstrated experimentally for the tumor detection task under two scenarios: stain variance across different hospitals and stain variance across different scanners. In the first case, the target IoU score improves from 0.67 to 0.74 compared with the source-only model. In the second case, the target IoU score improves from 0.64 to 0.83 compared with the source-only model. Overall, the results show that the proposed method has significant potential for unsupervised source-free domain adaptation in histopathological applications.
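To make the adaptation loop described in the abstract concrete, the following is a minimal PyTorch sketch of a generic self-training, source-free adaptation step with a teacher-student pair and consistency regularization. It is not the thesis's implementation: the exponential-moving-average teacher update, the confidence threshold, and all names (`ema_update`, `adapt`, `target_loader`, `conf_thresh`, `lambda_cons`) are illustrative assumptions, and the stand-in losses below do not reproduce the proposed domain center and self-labeling losses, whose exact forms are not given here.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.99):
    # Move teacher weights slowly toward the student (exponential moving average),
    # so the teacher "progresses" as the student adapts to the target domain.
    with torch.no_grad():
        for t_p, s_p in zip(teacher.parameters(), student.parameters()):
            t_p.mul_(momentum).add_(s_p, alpha=1.0 - momentum)

def adapt(source_model, target_loader, epochs=10, conf_thresh=0.9,
          lambda_cons=1.0, device="cuda"):
    # Source-free: only the trained source model and unlabeled target data are used.
    teacher = copy.deepcopy(source_model).to(device).eval()
    student = copy.deepcopy(source_model).to(device).train()
    optimizer = torch.optim.Adam(student.parameters(), lr=1e-4)

    for _ in range(epochs):
        for weak_view, strong_view in target_loader:  # two augmentations of the same patch
            weak_view = weak_view.to(device)
            strong_view = strong_view.to(device)

            # Teacher produces pseudo labels on the weakly augmented view.
            with torch.no_grad():
                probs = torch.softmax(teacher(weak_view), dim=1)
                conf, pseudo = probs.max(dim=1)
                mask = (conf >= conf_thresh).float()  # keep only confident predictions

            logits_weak = student(weak_view)
            logits_strong = student(strong_view)

            # (1) Self-training loss: the student fits the confident pseudo labels.
            ce = F.cross_entropy(logits_weak, pseudo, reduction="none")
            loss_pseudo = (ce * mask).sum() / mask.sum().clamp(min=1.0)

            # (2) Consistency regularization: predictions on the two views should agree,
            # reducing the student's dependence on the teacher's pseudo labels.
            loss_cons = F.mse_loss(torch.softmax(logits_strong, dim=1),
                                   torch.softmax(logits_weak, dim=1).detach())

            loss = loss_pseudo + lambda_cons * loss_cons
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # Progressive teacher: refresh the teacher from the improving student.
            ema_update(teacher, student)

    return student
```

In this sketch the teacher improves only through the EMA update; in the thesis, the teacher is additionally driven by the domain center and self-labeling losses so that its pseudo labels become more reliable over the course of adaptation.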
On-campus access: full text available from 2027-08-12.