Authors: Jin Qingwen; Chao Lemen [1,2]; Meng Gang
Affiliations: [1] Key Laboratory of Data Engineering and Knowledge Engineering, Ministry of Education (Renmin University of China), Beijing 100872; [2] School of Information Resource Management, Renmin University of China, Beijing 100872; [3] Institute of Finance and Economics, Zhengzhou University of Science and Technology, Zhengzhou 450064
Source: Information and Documentation Services (《情报资料工作》), 2022, No. 5, pp. 16-23 (8 pages)
Abstract: [Purpose/significance] Algorithm explanation is the technical basis of AI governance. Correctly understanding the difference between algorithm explanation in AI governance and interpretable machine learning in the general sense is the key to realizing responsible artificial intelligence (RAI). [Method/process] The requirement characteristics of algorithm explanation in AI governance are expounded from three aspects: post-hoc explanation of AI accidents, local explanation of AI fairness, and human-centered algorithm explanation. The application of algorithm explanation methods is then analyzed in light of the governance practices of IBM, Microsoft, Google, and Alibaba. [Result/conclusion] Algorithm explanation methods in AI governance include post-hoc explanation methods represented by feature importance analysis and visual explanation, local explanation methods represented by local perturbation and counterfactual explanation, and explanation methods based on multi-objective evolutionary optimization. Commonly used evaluation methods for algorithm explanations include human-centered qualitative evaluation, quantitative evaluation based on statistical indicators, and fuzzy evaluation based on fuzzy cognition. Finally, the main open problems of algorithm explanation in AI governance and the directions that future research should emphasize are summarized.
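To make the "local perturbation" family of methods named in the abstract concrete, the following is a minimal LIME-style sketch: it perturbs one input, queries a black-box model, and fits a distance-weighted linear surrogate whose coefficients act as local feature importances. This sketch is illustrative only and is not taken from the paper; the function name, kernel choice, and all parameters are assumptions.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque "black-box" classifier on synthetic data (illustrative setup).
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_perturbation_explanation(model, x, n_samples=1000, scale=0.3):
    """Fit an interpretable linear surrogate to the model's behavior near x.

    Perturbed points closer to x receive higher weight, so the surrogate's
    coefficients approximate each feature's local importance around x.
    (Hypothetical helper, not an API from the paper.)
    """
    rng = np.random.default_rng(0)
    # Sample points in a neighborhood of x.
    Z = x + rng.normal(0.0, scale, size=(n_samples, x.shape[0]))
    # Query the black box for its predicted probability of class 1.
    preds = model.predict_proba(Z)[:, 1]
    # Gaussian locality kernel: nearer perturbations count more.
    dists = np.linalg.norm(Z - x, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * scale ** 2))
    # Weighted linear surrogate; its coefficients are the local explanation.
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_

importances = local_perturbation_explanation(black_box, X[0])
for i, w in enumerate(importances):
    print(f"feature {i}: local weight {w:+.3f}")

The same perturb-and-refit pattern underlies counterfactual explanation as well, where one instead searches the neighborhood for the smallest change to x that flips the model's decision.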
Keywords: AI governance; algorithm explanation; interpretable machine learning; responsible artificial intelligence
Classification code: TP18 [Automation and Computer Technology / Control Theory and Control Engineering]