Towards efficient and effective unlearning of large language models for recommendation  


Authors: Hangyu WANG, Jianghao LIN, Bo CHEN, Yang YANG, Ruiming TANG, Weinan ZHANG, Yong YU

Affiliations: [1] Computer Science and Technology, Shanghai Jiao Tong University, Shanghai 200240, China; [2] Huawei Noah’s Ark Lab, Shenzhen 518129, China

Source: Frontiers of Computer Science, 2025, Issue 3, pp. 119-121 (3 pages)

Funding: Supported by the National Natural Science Foundation of China (Grant No. 62177033) and sponsored by the Huawei Innovation Research Program.

Abstract: 1 Introduction. Large Language Models (LLMs) possess massive numbers of parameters and are trained on vast datasets, demonstrating exceptional proficiency across a wide variety of tasks. These remarkable advances have also inspired the exploration of LLMs as recommenders (LLMRec), whose effectiveness stems from the extensive open-world knowledge and reasoning ability of LLMs [1]. LLMRec acquires its recommendation ability through instruction tuning on user interaction data. However, in many cases it is also crucial for LLMRec to forget specific user data, a requirement referred to as recommendation unlearning [2], as shown in Fig. 1.
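The abstract describes two mechanisms: instruction tuning on user interaction data to give the LLM its recommendation ability, and unlearning to remove specific user data afterwards. As a rough, generic illustration only (not the method proposed in this paper), the Python sketch below pairs standard training on retained interactions with gradient ascent on a forget set, a common unlearning baseline. The model, data tensors, and hyperparameters are all hypothetical stand-ins.

import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy stand-in for an instruction-tuned LLM recommender: it scores
# whether a user will like an item from a fused (user, item) feature
# vector. A real LLMRec model would be a full LLM tuned on prompts such
# as "User 42 watched [A, B, C]. Will they enjoy D? Answer Yes or No."
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

# Hypothetical splits: interactions to keep serving vs. interactions a
# user has asked to be forgotten (the "forget set").
x_retain = torch.randn(64, 16)
y_retain = torch.randint(0, 2, (64, 1)).float()
x_forget = torch.randn(8, 16)
y_forget = torch.randint(0, 2, (8, 1)).float()

for step in range(200):
    opt.zero_grad()
    # Gradient descent on retained data preserves recommendation quality...
    loss_retain = F.binary_cross_entropy_with_logits(model(x_retain), y_retain)
    # ...while gradient ASCENT on the forget set (note the minus sign)
    # pushes its loss up, so the model stops fitting the forgotten data.
    loss_forget = F.binary_cross_entropy_with_logits(model(x_forget), y_forget)
    (loss_retain - loss_forget).backward()
    opt.step()

One caveat worth noting: plain gradient ascent can degrade overall model quality if run for too long, which is exactly the tension between efficiency and effectiveness that the paper's title points to.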

Keywords: large language models (LLMs); user interaction data; instruction tuning; recommendation unlearning

Classification: TP391.3 [Automation and Computer Technology / Computer Application Technology]

 
