Authors: Xu Wei[1]; Wei Hongmei
Affiliation: [1] Network Rule of Law Research Center, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
Source: Journal of Modern Information (《现代情报》), 2025, No. 5, pp. 89-98 (10 pages)
Fund: Major Project of the National Social Science Fund of China, "Research on Improving the Comprehensive Network Governance System" (Project No. 23ZDA086).
Abstract: [Purpose/Significance] The performance of generative artificial intelligence models depends on the security of their training data, and frequent training-data security incidents have become an obstacle to the development of artificial intelligence technology. Ensuring the security of training data is therefore of great importance to the healthy development of the technology. [Method/Process] Through literature review, empirical analysis, and comparative analysis, this paper reveals the security risks of generative artificial intelligence training data and, drawing on the EU's governance experience and China's practice, proposes countermeasures. [Result/Conclusion] The study finds that current training data suffers from opaque data sources, non-standard labeling, unsafe content, and risks of leakage. The EU has established a regulatory system centered on securing data sources, labeling, and content, and on leakage prevention and control. Going forward, China should strengthen data source management, unify labeling standards, improve content security rules, and reinforce data protection technologies to ensure the security of training data and promote the healthy development of the technology.