Author: Su Yu [1]
Source: Exploration and Free Views (《探索与争鸣》), 2025, No. 3, pp. 107-116, 179 (11 pages)
Funding: National Social Science Fund of China, general project "Research on the Systematic Construction of the Algorithmic Explanation System" (22BFX016).
Abstract: The rise of large models is driving a fundamental change in the basic information tools used in the legal governance of artificial intelligence. Owing to limitations in computational cost, randomness, and comprehensibility, algorithmic explanation alone is insufficient to serve as the basic information tool of AI governance, while system evaluation is playing an increasingly important role. System evaluation helps address shifting governance demands such as the comprehensive expansion of algorithmic risks, heightened concerns about model security, special requirements for key performance, and the integrated consideration of multi-dimensional objectives. However, its principles and practices are not yet mature, and its institutionalization may face challenges such as training specificity, the scientific soundness of benchmarks, and interest entanglement. In response, the application scenarios of evaluation should be reasonably defined; mechanisms for benchmark selection, benchmark quality management, and fairness assurance in the evaluation process should be scientifically established; and institutional compatibility between system evaluation and algorithmic explanation should be achieved through multiple channels.