| Graduate Student: | 夏至賢 Hsia, Chih-Hsien |
|---|---|
| Thesis Title: | iEduTech: Study on Intelligent Biometric Image Techniques for Online Learning Device |
| Advisor: | 賴槿峰 Lai, Chin-Feng |
| Degree: | Doctoral |
| Department: | College of Engineering, Department of Engineering Science |
| Year of Publication: | 2023 |
| Academic Year of Graduation: | 111 (2022-2023) |
| Language: | English |
| Pages: | 64 |
| Chinese Keywords: | 後疫情時代、學習者參與度、生物特徵、線上學習、學習者身份辨識 |
| Keywords: | Post-pandemic era, Learner engagement, Biometric features, Online learning, Learner identification |
In recent years, the post-pandemic era has driven the steady development of intelligent education technologies that assist learning, and the use of artificial intelligence (AI) for biometric detection and identification in online learning environments has also drawn attention. In precision-education applications, the concept of the artificial intelligence of things (AIoT) has gradually taken shape, with sensor-based detection connecting it to future trends. Using AI to detect learners' engagement in online courses and to verify their identity is an important research area, because it helps teachers understand how learners are doing and thereby improve learning outcomes. In online courses, learner engagement can be measured in many ways, including participation in forum discussions, completion of assignments, and participation in simulated experiments; such techniques can automatically analyze learners' behavioral data, assess engagement and learning effectiveness from the results, and also help verify learners' identities to prevent the misuse of academic records.
However, the facial-emotion datasets used for learner engagement commonly suffer from imbalanced label distributions, so classes with few samples cannot reach high accuracy. In addition, learner identification raises issues such as increased system response time and reduced recognition stability. To address these problems, this thesis proposes a deep identity-invariant engagement network (DIIEN) for learner engagement recognition on the public datasets DAiSEE and EmotiW-EP. We propose the DIIEN model and a variant that cascades several one-dimensional convolutions and a bidirectional long short-term memory network onto the DIIEN architecture; by learning identity features from the data samples and using them to remove inter-individual facial differences from the facial features, the models obtain identity-invariant features while also mitigating the imbalanced label distribution. For learner identification, we propose an embedded finger-vein recognition method that uses semantic segmentation together with an adaptive two-dimensional symmetric mask-based discrete wavelet transform and adaptive image contrast enhancement as preprocessing, then extracts features with the repeated line tracking method and trains a classifier with a support vector machine.
Experimental results show that, compared with previous related work, the proposed models achieve better recognition accuracy on the two public datasets DAiSEE and EmotiW-EP. Finally, we implement the proposed finger-vein method on a low-end embedded system using both a self-collected dataset and the public FVUSM database; the proposed system offers low equipment cost, a high recognition rate, and a short response time, improving on the three major problems of embedded vein-image recognition. This thesis is therefore well suited to analyzing learner engagement in online courses and to learner identification.
Recently, the application of intelligent education techniques (iEduTech) to assist learning has been developing rapidly in the post-pandemic era, and the use of artificial intelligence (AI) for biometric detection and identification in online learning has attracted attention. In education-related applications, the concept of the artificial intelligence of things (AIoT) has gradually taken shape, connecting sensor-based detection with future trends. Learners' engagement is a decisive prerequisite for effective teaching and learning. Therefore, using AI to detect learners' engagement in online courses and thereby evaluate learning effectiveness has become a topic of keen interest.
Because the labels in facial-expression databases for learner engagement are imbalanced, classes with few samples cannot achieve high recognition accuracy. In addition, as more learners are registered in a biometric identification system, the expanding database lowers identification stability and increases the system's response time. Hence, this thesis introduces the deep identity-invariant engagement network (DIIEN) and a DIIEN + one-dimensional convolution (Conv1D) + bidirectional long short-term memory (Bi-LSTM) model to perform learner engagement recognition on the public databases DAiSEE and EmotiW-EP. By learning the identity features of the samples in the database and using them to remove identity-related differences from the facial features, the DIIEN models obtain identity-invariant features and mitigate the imbalanced distribution of labels in the database.

For learner identification, we propose an embedded finger-vein recognition technique. First, semantic segmentation is used to identify and separate the finger from the background, filtering out background noise and improving stability. Next, the adaptive symmetric mask-based discrete wavelet transform (A-SMDWT) and adaptive image contrast enhancement are applied as preprocessing, and features are extracted with the repeated line tracking (RLT) method; together, the A-SMDWT and RLT improve image quality and clarity and extract salient features for further analysis. Finally, the histogram of oriented gradients (HOG) of each image is computed, and a support vector machine (SVM) classifier is trained on the resulting descriptors.
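To make the engagement branch concrete, the following is a minimal PyTorch sketch of a Conv1D + Bi-LSTM head with a simple identity-removal step. The frame-feature extractor, layer sizes, and the subtraction-based identity removal are illustrative assumptions, not the exact DIIEN architecture from the thesis.

```python
# Minimal sketch of a Conv1D + Bi-LSTM engagement head. All layer sizes and
# the subtraction-based identity-removal step are illustrative assumptions.
import torch
import torch.nn as nn

class EngagementHead(nn.Module):
    def __init__(self, feat_dim=512, hidden=128, num_classes=4):
        super().__init__()
        # Identity branch: estimates a per-clip identity embedding so it
        # can be removed from the per-frame facial features.
        self.identity_fc = nn.Linear(feat_dim, feat_dim)
        # Temporal modeling: stacked 1-D convolutions over the time axis,
        # followed by a bidirectional LSTM.
        self.conv = nn.Sequential(
            nn.Conv1d(feat_dim, 256, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, 256, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.bilstm = nn.LSTM(256, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (batch, frames, feat_dim)
        identity = self.identity_fc(x.mean(dim=1))   # (batch, feat_dim)
        x = x - identity.unsqueeze(1)            # remove identity component
        x = self.conv(x.transpose(1, 2))         # Conv1d expects (B, C, T)
        out, _ = self.bilstm(x.transpose(1, 2))  # back to (B, T, C)
        return self.classifier(out[:, -1])       # engagement logits per clip

logits = EngagementHead()(torch.randn(2, 30, 512))  # 2 clips, 30 frames each
print(logits.shape)                                 # torch.Size([2, 4])
```

Four output classes match DAiSEE's four engagement levels; the mean-pooled identity embedding is only one simple way to approximate an identity-invariant representation.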
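For the identification stage, this is a minimal sketch of the closing HOG + SVM step, using scikit-image and scikit-learn in place of the thesis implementation. The image size, HOG parameters, and linear kernel are assumptions, and the segmentation, A-SMDWT, contrast-enhancement, and RLT stages are taken as already applied.

```python
# Minimal HOG + SVM classification sketch; in practice the descriptors
# would be computed from preprocessed RLT vein-pattern images, not noise.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

def hog_descriptor(vein_img):
    # vein_img: 2-D grayscale array (assumed already preprocessed).
    return hog(vein_img, orientations=9, pixels_per_cell=(8, 8),
               cells_per_block=(2, 2), block_norm='L2-Hys')

# Toy data: 20 images of 64x128 pixels, 2 identities (placeholder only).
rng = np.random.default_rng(0)
images = rng.random((20, 64, 128))
labels = np.repeat([0, 1], 10)

X = np.stack([hog_descriptor(img) for img in images])
clf = LinearSVC(C=1.0).fit(X, labels)   # one-vs-rest linear SVM
print(clf.predict(X[:3]))               # predicted identity labels
```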
Experimental results show that, compared with other models on the two public databases DAiSEE and EmotiW-EP, the proposed models perform best in terms of unweighted average recall (UAR), accuracy, and mean squared error (MSE). Finally, the finger-vein method was implemented on a low-cost embedded system and evaluated on both a self-established finger-vein database and the public FVUSM database. The results show that the algorithm offers a low-cost device, a high accuracy rate, and a low response time, successfully mitigating the three major issues previously encountered in embedded vein identification systems. We believe the presented approach can serve as a novel way of analyzing learner engagement and verifying learner identity in online learning.
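As a worked illustration of the three reported metrics, the sketch below computes UAR, accuracy, and MSE with scikit-learn on made-up toy predictions; the numbers are not results from the thesis.

```python
# UAR, accuracy, and MSE on toy engagement predictions (4 levels, 0-3).
import numpy as np
from sklearn.metrics import recall_score, accuracy_score, mean_squared_error

y_true = np.array([0, 0, 1, 1, 2, 2, 3, 3])
y_pred = np.array([0, 1, 1, 1, 2, 3, 3, 3])

# UAR = recall averaged over classes with equal weight, so it is not
# dominated by the majority class under an imbalanced label distribution.
uar = recall_score(y_true, y_pred, average='macro')
acc = accuracy_score(y_true, y_pred)
mse = mean_squared_error(y_true, y_pred)   # treats levels as ordinal values
print(f"UAR={uar:.3f}  accuracy={acc:.3f}  MSE={mse:.3f}")
```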