Authors: Hangyu WANG, Jianghao LIN, Bo CHEN, Yang YANG, Ruiming TANG, Weinan ZHANG, Yong YU
Affiliations: [1] Computer Science and Technology, Shanghai Jiao Tong University, Shanghai 200240, China; [2] Huawei Noah's Ark Lab, Shenzhen 518129, China
Source: Frontiers of Computer Science, 2025, No. 3, pp. 119-121 (3 pages)
Funding: Supported by the National Natural Science Foundation of China (Grant No. 62177033) and sponsored by the Huawei Innovation Research Program.
Abstract: 1 Introduction. Large Language Models (LLMs) possess massive parameters and are trained on vast datasets, demonstrating exceptional proficiency in various tasks. The remarkable advancements in LLMs have also inspired the exploration of leveraging LLMs as recommenders (LLMRec), whose effectiveness stems from the extensive open-world knowledge and reasoning ability of LLMs [1]. LLMRec obtains its recommendation ability through instruction tuning on user interaction data. In many cases, however, it is also crucial for LLMRec to forget specific user data, which is referred to as recommendation unlearning [2], as shown in Fig. 1.
Keywords: large language models; instruction tuning; user interaction data; recommendation unlearning
Classification: TP391.3 [Automation and Computer Technology / Computer Application Technology]
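The abstract notes that LLMRec gains its recommendation ability via instruction tuning on user interaction data, and that unlearning means forgetting specific user data. A minimal sketch of those two steps is below; the record schema (`user`, `history`, `candidate`, `label`) and both function names are illustrative assumptions, not the paper's actual pipeline.

```python
def build_example(record):
    """Format one interaction record (hypothetical schema) as an
    instruction-tuning (prompt, response) pair for an LLM recommender."""
    prompt = (
        "A user has interacted with: " + ", ".join(record["history"])
        + f". Would the user enjoy '{record['candidate']}'? Answer Yes or No."
    )
    return {
        "user": record["user"],
        "prompt": prompt,
        "response": "Yes" if record["label"] == 1 else "No",
    }

def split_forget_set(examples, users_to_forget):
    """Recommendation unlearning starts by splitting the tuning data into a
    forget set (data to be removed) and a retain set (data to keep)."""
    forget = [e for e in examples if e["user"] in users_to_forget]
    retain = [e for e in examples if e["user"] not in users_to_forget]
    return forget, retain
```

The forget/retain split is only the setup; how the model's parameters are then updated to remove the forget set's influence is the subject of the paper itself.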