
Author: Tsai, Ying-Hsiu
Title: Evaluating the composition of concentrated Chinese medicines (CCMs) using small HPLC data: an AI-based approach (II)
Advisor: Lan, Kun-Chan
Degree: Master
Department: Institute of Medical Informatics, College of Electrical Engineering and Computer Science
Year of Publication: 2025
Academic Year of Graduation: 113
Language: English
Number of Pages: 134
Keywords: high-performance liquid chromatography (HPLC), point cloud data, Point Cloud Transformer, small data

    With the rapid growth of the global traditional Chinese medicine (TCM) market and the increasing demand for high-quality formulations, the TCM industry faces increasingly stringent quality-monitoring requirements. Modern TCM preparations are characterized by complex chemical compositions and significant batch-to-batch variation, which traditional quality control methods cannot handle with the efficiency and accuracy that modern standards demand.
    This study is application-oriented in nature, integrating established high-performance liquid chromatography (HPLC) fingerprinting techniques with advanced AI model architectures to develop an automated system for identifying components in TCM formulations. Specifically, the research converts one-dimensional HPLC chromatograms into three-dimensional point cloud data and introduces the Point Cloud Transformer model to classify excipient ratios in TCM preparations. Although point cloud models were originally designed for spatial perception and 3D vision applications, this study applies them as analytical tools within the TCM data-analysis workflow, thereby achieving cross-domain technological integration.
    Overall, this research does not focus on developing entirely new model architectures; rather, it leverages existing techniques to address practical problems in TCM quality control. By improving classification accuracy, automating processing workflows, and reducing human judgment error, this study helps strengthen the consistency of TCM formulation quality and the potential for standardized production.
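    The 1D-to-3D conversion described above can be sketched in code. The abstract does not specify the exact coordinate mapping used in the thesis, so the choice below (normalized retention time, normalized intensity, local slope as the third axis) and the function name are illustrative assumptions, not the author's method:

    ```python
    import numpy as np

    def chromatogram_to_point_cloud(times, intensities, n_points=1024):
        """Lift a 1D HPLC chromatogram into a 3D point cloud.

        Illustrative mapping (an assumption, not the thesis's exact scheme):
        x = normalized retention time, y = normalized intensity,
        z = local slope of the signal, so peak shape becomes geometry.
        """
        times = np.asarray(times, dtype=float)
        intensities = np.asarray(intensities, dtype=float)

        # Normalize both axes to [0, 1] so clouds from different runs are comparable.
        x = (times - times.min()) / (np.ptp(times) or 1.0)
        y = (intensities - intensities.min()) / (np.ptp(intensities) or 1.0)

        # Third coordinate: local slope, capturing how sharply each peak rises and falls.
        z = np.gradient(y, x)
        z = z / (np.abs(z).max() or 1.0)

        cloud = np.stack([x, y, z], axis=1)  # shape (N, 3)

        # Resample to a fixed point count, since point cloud models such as the
        # Point Cloud Transformer expect a fixed number of input points.
        idx = np.linspace(0, len(cloud) - 1, n_points).round().astype(int)
        return cloud[idx]
    ```

    The resulting fixed-size (n_points, 3) array has the same shape contract as the 3D object clouds these models were designed for, which is what makes the cross-domain reuse possible.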

    摘要 II
    ABSTRACT III
    CONTENTS IV
    LIST OF FIGURES VII
    LIST OF TABLES XIII
    CHAPTER 1 INTRODUCTION 1
    1.1 BACKGROUND 1
    1.2 WHAT IS CONCENTRATED CHINESE MEDICINE 2
    1.3 PROBLEM IN PRIOR WORK ON CHEMICAL IDENTIFICATION AND OUR SOLUTION 4
    1.4 OUR CONTRIBUTION 5
    CHAPTER 2 RELATED WORK 8
    2.1 PREVIOUS WORK ON HPLC DATA ANALYSIS FOR QUALITY CONTROL 8
    2.2 PREVIOUS WORK ON POINT CLOUD AUGMENTATION 10
    2.3 PREVIOUS WORK ON POINT CLOUD CLASSIFICATION 15
    2.4 PREVIOUS WORK ON AI FOR SMALL DATA (ZERO-SHOT OR ONE-SHOT LEARNING) 21
    CHAPTER 3 METHOD 27
    3.1 HPLC DATA COLLECTION 27
    3.2 HIERARCHICAL CLASSIFICATION MODEL ARCHITECTURE 29
    3.3 POINT CLOUD TRANSFORMER 31
    3.3.1 Point Cloud Transformer Architecture 32
    3.3.2 Model Components 32
    3.3.3 Motivation and Adaptation for HPLC Data 33
    3.4 EXPERIMENT 35
    3.4.1 Synthetic Data Generation 35
    3.4.2 Data Filtering and Downsampling 39
    3.4.3 Data Augmentation 45
    3.4.4 Data Visualization 49
    CHAPTER 4 RESULTS 50
    4.1 ASTRAGALUS MEMBRANACEUS (黃耆) CLASSIFICATION 52
    4.2 SCUTELLARIA BAICALENSIS (黃芩) CLASSIFICATION 61
    4.3 SALVIA MILTIORRHIZA (丹參) CLASSIFICATION 71
    CHAPTER 5 DISCUSSION 79
    5.1 ABLATION STUDY - EFFECT OF DIFFERENT DATA AUGMENTATION 79
    5.1.1 Astragalus membranaceus (黃耆) Different Data Augmentation 86
    5.1.2 Scutellaria baicalensis (黃芩) Different Data Augmentation 87
    5.1.3 Salvia miltiorrhiza (丹參) Different Data Augmentation 88
    5.2 ABLATION STUDY - EFFECT OF DIFFERENT CLASSIFICATION MODELS 93
    5.2.1 Astragalus membranaceus (黃耆) Different Classification Model 97
    5.2.2 Scutellaria baicalensis (黃芩) Different Classification Model 98
    5.2.3 Salvia miltiorrhiza (丹參) Different Classification Model 99
    5.3 ABLATION STUDY - EFFECT OF DIFFERENT FEW-SHOT METHODS 102
    5.3.1 Astragalus membranaceus (黃耆) Different Few-Shot Method 103
    5.3.2 Scutellaria baicalensis (黃芩) Different Few-Shot Method 105
    5.3.3 Salvia miltiorrhiza (丹參) Different Few-Shot Method 106
    5.4 ABLATION STUDY - EFFECT OF DIFFERENT BASELINES 107
    5.5 ABLATION STUDY - EFFECT OF DIFFERENT FILTERING 109
    CHAPTER 6 CONCLUSION 112
    CHAPTER 7 LIMITATIONS AND FUTURE WORK 114
    REFERENCES 115

