Pedagogical Alignment of Large Language Models (LLM) for Personalized Learning: A Survey, Trends and Challenges


Authors: Mahefa Abel Razafinirina; William Germain Dimbisoa; Thomas Mahatody (School of Computer Science, University of Fianarantsoa, Fianarantsoa, Madagascar)

Affiliation: [1] School of Computer Science, University of Fianarantsoa, Fianarantsoa, Madagascar

Source: Journal of Intelligent Learning Systems and Applications, 2024, Issue 4, pp. 448-480 (33 pages)

Abstract: This survey paper investigates how personalized learning offered by Large Language Models (LLMs) could transform educational experiences. We explore Knowledge Editing Techniques (KME), which help LLMs maintain current knowledge and are essential for providing accurate and up-to-date information. The datasets analyzed in this article are intended to evaluate LLM performance on educational tasks such as error correction and question answering. We acknowledge the limitations of LLMs while highlighting their fundamental educational capabilities in writing, math, programming, and reasoning. We also explore two promising system architectures for LLM-based education: a Mixture-of-Experts (MoE) framework and a unified LLM approach. The MoE approach uses specialized LLMs for different subjects under the direction of a central controller. We also discuss the use of LLMs for individualized feedback and their potential in content creation, including videos, quizzes, and plans. In the final section, we discuss the difficulties of incorporating LLMs into educational systems and potential solutions, highlighting the importance of factual accuracy, bias reduction, and fostering critical thinking skills. The purpose of this survey is to show both the promise of LLMs and the issues that still need to be resolved in order to facilitate their responsible and successful integration into the educational ecosystem.
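The MoE architecture summarized in the abstract can be pictured as a thin routing layer placed in front of subject-specialist models. The sketch below is purely illustrative and is not taken from the paper; the names `CentralController` and `SubjectExpert`, and the keyword-based routing, are assumptions introduced here only to make the idea of a central controller directing subject-specific LLMs concrete.

```python
# Illustrative sketch only: the survey does not publish code. A deployed
# controller would likely use an LLM classifier or embedding similarity
# instead of keyword matching, and each expert would wrap a real model.

from dataclasses import dataclass


@dataclass
class SubjectExpert:
    """A subject-specialist model behind a common text-in/text-out interface."""
    subject: str

    def answer(self, question: str) -> str:
        # Stand-in for a call to a fine-tuned, subject-specific LLM.
        return f"[{self.subject} expert] answer to: {question}"


class CentralController:
    """Routes each student query to the most relevant subject expert."""

    def __init__(self, experts: dict[str, SubjectExpert]):
        self.experts = experts

    def classify_subject(self, question: str) -> str:
        # Naive routing: pick the first expert whose subject name appears
        # in the question; fall back to a general-purpose expert.
        lowered = question.lower()
        for subject in self.experts:
            if subject in lowered:
                return subject
        return "general"

    def route_query(self, question: str) -> str:
        subject = self.classify_subject(question)
        expert = self.experts.get(subject, self.experts["general"])
        return expert.answer(question)


if __name__ == "__main__":
    controller = CentralController({
        "math": SubjectExpert("math"),
        "programming": SubjectExpert("programming"),
        "general": SubjectExpert("general"),
    })
    print(controller.route_query("How do I factor this math expression?"))
```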

Keywords: Chain of Thought; Education; IA; LLM; Machine Learning; NLP; Personalized Learning; Prompt Optimization; Video Generation

Classification Code: H31 [Linguistics: English]

 
