
Graduate Student: Chiu, Huan-Jung (邱煥榮)
Thesis Title: Design and Implementation of Intelligent Learning Detection Systems for Breast Cancer and Hepatitis C Diseases
Advisor: Li, Tzuu-Hseng S. (李祖聖)
Degree: Doctor
Department: College of Electrical Engineering and Computer Science, Department of Electrical Engineering
Year of Publication: 2022
Graduation Academic Year: 110
Language: English
Number of Pages: 92
Keywords: Multilayer perceptron network, Intelligent algorithm, Principal component analysis, Random forest, Intelligent learning system

    This dissertation proposes intelligent learning systems for breast cancer and hepatitis C virus (HCV) diseases. Each system is an automatic classifier for detecting the probable incidence of breast cancer or HCV infection. For breast cancer detection, principal component analysis (PCA) first identifies the valuable parts of the data and reduces their dimensionality. A multilayer perceptron (MLP) network then extracts the characteristic features of the data; its structure is designed to explore and exploit the examined data by increasing or decreasing the dimensions, so the model first explores and then exploits the data. After training, the model isolates the representative attributes and features, and the resulting feature data are passed through transfer learning to a support vector machine classifier. For HCV detection, the dissertation proposes a cascade two-stage intelligent learning system that combines the random forest, logistic regression, and artificial bee colony algorithms. The method is verified with a 10-fold Monte Carlo cross-validation repeated 50 times on data collected during the recent pandemic, and various performance indicators are compared against the latest algorithms in related fields. The simulation results indicate that the proposed model can detect the multiclass probabilities of HCV incidence, thereby improving the effectiveness of the appropriate treatments. Finally, the validation results demonstrate the feasibility and effectiveness of both intelligent learning systems.
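    The breast-cancer pipeline begins by projecting the clinical features onto their principal components before the MLP stage. A minimal NumPy sketch of that dimension-reduction step is given below; the sample count, feature count, and component count `k` are illustrative placeholders, not the dissertation's exact configuration:

```python
import numpy as np

def pca_reduce(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                        # center each feature
    # SVD of centered data: rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                           # scores in the reduced k-dim space

rng = np.random.default_rng(0)
X = rng.normal(size=(116, 9))                      # placeholder data: 116 samples, 9 features
Z = pca_reduce(X, k=4)
print(Z.shape)                                     # (116, 4)
```

    The reduced scores `Z` would then feed the MLP, whose learned hidden representation is in turn handed to the support vector machine classifier via transfer learning.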

    Abstract
    Acknowledgment
    Contents
    List of Figures
    List of Tables
    List of Abbreviations
    Chapter 1. Introduction
      1.1 Motivation and Literature Survey
      1.2 Contributions of Dissertation
      1.3 Dissertation Organization
    Chapter 2. Intelligent Learning Systems for Breast Cancer Detection
      2.1 Introduction
      2.2 System Architecture
      2.3 The Proposed Model
        2.3.1 Dimension Reduction Using Principal Component Analysis
        2.3.2 Multilayer Perceptron Model
        2.3.3 Combination of PCA and MLP Model
        2.3.4 Transferring Multilayer Perceptron Network Model
        2.3.5 Classifier Using Support Vector Machine
      2.4 Verification Results
      2.5 Discussion and Future Work
    Chapter 3. Intelligent Learning Systems for Hepatitis C Virus Detection
      3.1 Introduction
      3.2 Related Work
      3.3 Method
        3.3.1 Data Used to Create Model 1 by RF Algorithm
        3.3.2 Data Used to Create Model 2 by MLR Algorithm
        3.3.3 Cascade RF–MLR by ABC Algorithm
      3.4 Verification Results
        3.4.1 Database
        3.4.2 Verification Setup
        3.4.3 Performance Measurements
        3.4.4 K-Fold Monte Carlo Cross-Validation
        3.4.5 Comparison with Other Datasets
        3.4.6 Ablation Analysis
      3.5 Discussion and Future Work
    Chapter 4. Conclusion
      4.1 Conclusions
    Appendix
    References
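    The validation strategy for the HCV system (10-fold Monte Carlo cross-validation repeated 50 times) can be sketched in plain Python as below; the fold generator and the 615-record sample size (the size of the UCI HCV dataset, used here only as an example) are illustrative, not the dissertation's implementation:

```python
import random

def monte_carlo_kfold(n_samples, k=10, repeats=50, seed=0):
    """Yield (train, test) index splits: k-fold CV with a fresh shuffle per repeat."""
    rng = random.Random(seed)
    indices = list(range(n_samples))
    for _ in range(repeats):
        rng.shuffle(indices)                 # new random partition each repeat
        for fold in range(k):
            test = indices[fold::k]          # every k-th shuffled index forms one fold
            held_out = set(test)
            train = [i for i in indices if i not in held_out]
            yield train, test

splits = list(monte_carlo_kfold(615))        # 615: number of records in the UCI HCV dataset
print(len(splits))                           # 500 evaluations = 10 folds x 50 repeats
```

    Each split would train the cascade model on `train` and score it on `test`; averaging each performance indicator over the 500 runs yields the reported comparisons.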


    Full-text availability:
    On campus: available immediately
    Off campus: available immediately