| Graduate Student: | 蔡沛蓁 Tsai, Pei-Chen |
|---|---|
| Thesis Title: | 可信人工智慧之多任務預測與公平性推論:結合智慧代理與統計推論之方法論框架 A Framework for Trustworthy Multi-Task Prediction and Fairness Inference: Integrating Agentic AI and Statistical Inference |
| Advisor: | 蔣榮先 Chiang, Jung-Hsien |
| Degree: | 博士 Doctor |
| Department: | 電機資訊學院 - 資訊工程學系 Department of Computer Science and Information Engineering |
| Year of Publication: | 2026 |
| Academic Year: | 114 |
| Language: | English |
| Pages: | 92 |
| Chinese Keywords: | 公平性、代理式人工智慧、統計推論、多任務學習、視覺化分析 |
| English Keywords: | Fairness, Agentic AI, Statistical Inference, Multi-Task Prediction, Visualization |
| ORCID: | 0000-0003-4691-8979 |
| ResearchGate: | https://www.researchgate.net/profile/Pei-Chen-Tsai-5 |
As artificial intelligence is increasingly applied in critical domains such as healthcare, finance, and public decision-making, performance disparities and biases across demographic groups have become a central concern. However, existing research typically treats fairness, statistical inference, and automated learning as separate problems and lacks an integrated system architecture. This dissertation therefore proposes a trustworthy multi-task learning framework that integrates fairness-aware learning, agentic AI mechanisms, statistical uncertainty analysis, and a visualization module to improve fairness and robustness across demographic groups and to strengthen the trustworthiness and clinical value of AI systems.
Methodologically, this dissertation proposes a fairness-oriented contrastive representation learning strategy that combines supervised contrastive learning with a non-discrimination regularizer, allowing the model to retain task-relevant information while reducing its reliance on sensitive demographic features. Bootstrapping, the DeLong test, and multiple-comparison correction then provide statistically supported inference on fairness and predictive performance, evaluating performance differences across demographic groups. In addition, a learning policy agent performs automated hyperparameter search and a decision aggregation agent performs model ensembling, improving stability and generalization while keeping fairness evaluation separate from model training.
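As a purely illustrative sketch of the fairness-oriented contrastive objective described above — not the dissertation's actual implementation — the following combines a supervised-contrastive (SupCon-style) term, which pulls same-label embeddings together, with a penalty that discourages embeddings from clustering by the sensitive attribute. All function names, the temperature `tau`, and the weight `lam` are assumptions for illustration.

```python
# Hypothetical sketch of a fairness-aware supervised contrastive objective.
import math

def _sim(u, v, tau=0.1):
    """Exponentiated cosine similarity with temperature tau."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.exp(dot / (nu * nv * tau))

def supcon_loss(embs, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings."""
    loss, n_terms = 0.0, 0
    for i, (ei, yi) in enumerate(zip(embs, labels)):
        pos = [j for j, yj in enumerate(labels) if j != i and yj == yi]
        if not pos:
            continue
        denom = sum(_sim(ei, embs[j], tau) for j in range(len(embs)) if j != i)
        for j in pos:
            loss += -math.log(_sim(ei, embs[j], tau) / denom)
            n_terms += 1
    return loss / max(n_terms, 1)

def fairness_penalty(embs, groups, tau=0.1):
    """Penalize similarity concentrated within sensitive groups:
    mean within-group similarity minus mean between-group similarity."""
    within, between = [], []
    for i in range(len(embs)):
        for j in range(i + 1, len(embs)):
            s = _sim(embs[i], embs[j], tau)
            (within if groups[i] == groups[j] else between).append(s)
    if not within or not between:
        return 0.0
    return sum(within) / len(within) - sum(between) / len(between)

def total_loss(embs, labels, groups, lam=0.5):
    """Task-aligned contrastive term plus weighted non-discrimination term."""
    return supcon_loss(embs, labels) + lam * fairness_penalty(embs, groups)
```

A lower `fairness_penalty` indicates embeddings that do not separate by the sensitive attribute; minimizing `total_loss` trades off task structure against demographic invariance via `lam`.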
The framework is instantiated in computational pathology, performing multi-task prediction from whole-slide pathology images, including cancer diagnosis, genetic biomarker prediction, and survival analysis, and is validated on multiple internal and external datasets. Experimental results show that the method mitigates 74% of statistically significant biases on the training datasets and 44% on external validation datasets while maintaining overall predictive performance, demonstrating strong robustness and generalizability. Future work can extend the framework to multimodal learning, interactive fairness analysis, causal inference, and real-world clinical deployment, toward AI systems with greater social responsibility and clinical value.
As artificial intelligence (AI) is applied in critical domains such as healthcare, finance, and public decision-making, performance disparities across demographic groups have become a major concern. However, previous research often treats fairness, statistical inference, and automated learning as separate problems. This dissertation proposes TrustMi, a trustworthy multi-task learning framework that integrates fairness-aware learning, agentic AI mechanisms, statistical uncertainty analysis, and visualization to improve fairness, robustness, and reliability in real-world AI systems.
TrustMi introduces a fairness-aware contrastive representation learning strategy, enabling models to preserve task-relevant information while reducing reliance on demographic-related features. We further incorporate bootstrapping, the DeLong test, and the Benjamini–Hochberg procedure to provide statistically rigorous evaluation of predictive performance and subgroup disparities. In addition, a Learning Policy Agent performs adaptive hyperparameter optimization, and a Decision Aggregation Agent enhances ensemble prediction, improving model stability and generalization.
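Two pieces of the statistical pipeline above — percentile-bootstrap confidence intervals for a subgroup AUC gap and Benjamini–Hochberg false-discovery-rate control — can be sketched as below. This is a minimal illustration under assumed details (sample handling, resample count, FDR level), not the dissertation's implementation, and it omits the DeLong variance computation.

```python
# Illustrative sketch: bootstrap CI for a subgroup AUC gap + BH correction.
import random

def auc(scores_pos, scores_neg):
    """Mann-Whitney AUC: probability a positive outscores a negative."""
    wins = sum((p > n) + 0.5 * (p == n) for p in scores_pos for n in scores_neg)
    return wins / (len(scores_pos) * len(scores_neg))

def bootstrap_gap_ci(group_a, group_b, n_boot=1000, alpha=0.05, seed=0):
    """Percentile CI for AUC(group_a) - AUC(group_b); each group is a
    (positive_scores, negative_scores) pair."""
    rng = random.Random(seed)
    gaps = []
    for _ in range(n_boot):
        def resample(xs):
            return [xs[rng.randrange(len(xs))] for _ in xs]
        ga = auc(resample(group_a[0]), resample(group_a[1]))
        gb = auc(resample(group_b[0]), resample(group_b[1]))
        gaps.append(ga - gb)
    gaps.sort()
    lo = gaps[int((alpha / 2) * n_boot)]
    hi = gaps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

def benjamini_hochberg(pvals, q=0.05):
    """Return the set of hypothesis indices rejected at FDR level q."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m, k = len(pvals), 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= q * rank / m:
            k = rank  # largest rank passing the step-up criterion
    return set(order[:k])
```

In a workflow like the one described, one subgroup-gap p-value would be computed per demographic comparison (e.g. via the DeLong test) and the resulting list passed through `benjamini_hochberg` before any disparity is declared significant.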
TrustMi is instantiated on whole-slide histopathology images for multi-task prediction, including cancer diagnosis, biomarker prediction, and survival analysis. Across multiple internal and external cohorts, the proposed method mitigated approximately 74% of statistically significant disparities in the training datasets and 44% in external validation, while preserving overall predictive performance.
Overall, TrustMi provides a scalable and statistically grounded framework for trustworthy AI. Future research will extend this approach to multimodal learning, causal modeling, and real-world clinical deployment to further enhance fairness and clinical impact.
[88] Harvard T.H. Chan School of Public Health, "Health Professionals Follow-Up Study Questionnaires," Harvard T.H. Chan School of Public Health, [Online]. Available: https://sites.sph.harvard.edu/hpfs/hpfs-questionnaires/. [Accessed: Sep. 27, 2018].