| Author: | 劉睿宏 Liu, Jui-Hung |
|---|---|
| Thesis title: | 創客教育虛擬互動機器人於國小學童的學習體驗之探討 (Investigating Learning Experience by Maker-based Virtual Agent Robotic on Elementary Student) |
| Advisor: | 黃悅民 Huang, Yueh-Min |
| Degree: | Master |
| Department: | Department of Engineering Science, College of Engineering |
| Year of publication: | 2020 |
| Academic year of graduation: | 108 (2019–2020) |
| Language: | Chinese |
| Pages: | 71 |
| Keywords (Chinese): | 創客教育、虛擬代理人、教育機器人、語音情感辨識 |
| Keywords (English): | Maker Education, Virtual Agent, Educational Robot, Speech Emotion Recognition |
In recent years a wave of maker education has swept across the world, and the Taiwanese government has been actively promoting the maker movement. Maker education is now implemented at every level of schooling, and many schools have established their own maker bases or makerspaces. However, because maker education is interdisciplinary and spans many domains, there are as yet no dedicated maker teachers, which gives rise to several teaching challenges. First, maker tasks can be so complex and varied that students fail to understand or complete their content and steps, or cannot operate the maker tools on their own, leading to poor learning outcomes. Second, when students run into hands-on problems, they immediately seek the teacher's help; this frequent reliance on the teacher weakens their problem-solving ability and aggravates the shortage of teaching staff.
With the rapid progress of information technology, intelligent educational robots and deep learning have gradually matured. To alleviate the shortage of teachers in the classroom, this study developed an educational robot system, the Maker-based Virtual Agent Robotic, introduced it into maker activities, and examined its effects on learners' learning achievement, task understanding, and task completion, as well as whether students' self-efficacy and engagement improved significantly. The maker activity was themed on a "Micro:bit self-propelled car", and 25 fifth- and sixth-grade elementary school students volunteered to participate. The virtual agent robot assisted students with each task, and students' interactions with the robot and related learning processes were recorded for later analysis. Each group's discussions were also collected, and a speech emotion recognition system was used to gauge the atmosphere of the group discussions. The results show that the robot improved learners' learning achievement, task understanding, task completion, self-efficacy, and engagement, with the effect most pronounced for low-achieving students. The emotion recognition system, however, achieved a low recognition rate owing to differences in language and culture; this finding can serve as a reference for future research and improvement.
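The "Micro:bit self-propelled car" activity boils down to a sense-decide-act loop: read a distance from a front-facing sensor and steer away when an obstacle is close. The thesis does not publish its code, so the function, threshold, and motor-speed convention below are purely illustrative; a minimal sketch of the decision step as a pure function:

```python
def decide(distance_cm, threshold_cm=15):
    """Obstacle-avoidance decision for a two-motor car.
    Returns (left_speed, right_speed) in the range -1..1.
    The threshold and speeds are hypothetical values, not from the thesis."""
    if distance_cm is None:           # sensor timeout: stop the car
        return (0.0, 0.0)
    if distance_cm < threshold_cm:    # obstacle ahead: pivot right
        return (0.5, -0.5)
    return (1.0, 1.0)                 # path clear: drive straight ahead

# Path clear, obstacle close, sensor failure:
print(decide(100), decide(5), decide(None))
```

On real hardware this function would be called in a loop that reads an ultrasonic sensor and writes the returned pair to the motor driver.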
In recent years, maker education has been implemented throughout Taiwan's schools, many of which now have their own makerspaces. Because maker education is interdisciplinary, there are currently no full-time maker teachers, which raises new challenges. First, the complexity of maker tasks can leave students unable to understand or complete them, or unsure how to use the maker tools, resulting in ineffective learning. Second, students tend to seek a teacher's assistance as soon as they face a problem, which aggravates the shortage of teachers.
This study addresses the teacher shortage by developing a Maker-based Virtual Agent Robotic and introducing it into maker activities to examine its impact on learners' learning performance, task understanding, task completion, self-efficacy, and engagement. The maker activity was themed on a "Micro:bit obstacle-avoidance car", and 25 fifth- and sixth-grade elementary school students volunteered to participate. The virtual agent robot assisted students with the various maker tasks, while students' behavior and learning processes in using the robot were recorded for analysis. The robot also collected group discussions and analyzed the students' emotions through a speech emotion recognition system. The results show that the robot improved learners' engagement in making, especially for low-achieving students.
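Speech emotion recognition systems like the one described above typically extract acoustic features from each utterance (short-time energy, pitch, MFCCs) and pass them to a classifier. A stdlib-only sketch of a simplified feature-extraction step, assuming the common 25 ms frames with a 10 ms hop; the feature set and names are illustrative, not taken from the thesis:

```python
import math

def prosodic_features(signal, sr, frame_ms=25, hop_ms=10):
    """Mean short-time log energy and zero-crossing rate over frames --
    a simplified stand-in for the MFCC/pitch features used in speech
    emotion recognition pipelines."""
    frame_len = int(sr * frame_ms / 1000)   # 25 ms -> 400 samples at 16 kHz
    hop = int(sr * hop_ms / 1000)           # 10 ms -> 160 samples
    energies, zcrs = [], []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energies.append(math.log(sum(x * x for x in frame) + 1e-10))
        crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
        zcrs.append(crossings / len(frame))
    return (sum(energies) / len(energies), sum(zcrs) / len(zcrs))

# A louder, higher-pitched synthetic "utterance" yields higher energy
# and a higher zero-crossing rate than a quiet, low-pitched one.
sr = 16000
t = [i / sr for i in range(sr)]
calm = [0.3 * math.sin(2 * math.pi * 150 * x) for x in t]
excited = [0.9 * math.sin(2 * math.pi * 400 * x) for x in t]
print(prosodic_features(calm, sr))
print(prosodic_features(excited, sr))
```

A real system would feed such per-utterance features to a trained classifier (e.g. an SVM or a neural network); the cross-language recognition gap reported in the abstract would show up at that classification stage.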
On campus: publicly available from 2025-08-31.