Quantitative evaluation method for interpretability of XAI based on surrogate model  (Cited by: 2)

Authors: LI Yao; WANG Chun-lu [1,3]; ZUO Xing-quan; HUANG Hai [2]; DING Yi-ning [2,3]; ZHANG Xiu-jian

Affiliations: [1] School of Cyberspace Security, Beijing University of Posts and Telecommunications, Beijing 100876, China; [2] School of Computer Science, Beijing University of Posts and Telecommunications, Beijing 100876, China; [3] Key Laboratory of Trustworthy Distributed Computing and Service of the Ministry of Education, Beijing 100876, China; [4] Beijing Aerospace Institute for Metrology and Measurement Technology, Beijing 100076, China; [5] Key Laboratory of Artificial Intelligence Measurement and Standards for State Market Regulation, Beijing 100076, China

Source: Control and Decision, 2024, Issue 2, pp. 680-688 (9 pages)

Abstract: Explainable artificial intelligence (XAI) has developed rapidly in recent years, and many interpretability techniques have emerged, but quantitative evaluation approaches for XAI interpretability are still lacking. Most existing evaluation methods rely on user experiments, which are time-consuming and costly. For surrogate-model-based XAI, we propose a quantitative evaluation approach for interpretability. First, we design indices for this kind of XAI and give their computation methods, constructing an index system of 10 quantitative indices that evaluates interpretability along five dimensions: consistency, user comprehension, causality, effectiveness, and stability. Then, for each dimension with multiple indices, a comprehensive evaluation model combining the entropy weight method with TOPSIS is established to evaluate interpretability on that dimension. Finally, the approach is applied to evaluate the interpretability of six XAI methods based on rule surrogate models. Experimental results show that the proposed approach reveals the interpretability level of XAI methods on different dimensions, so that users can choose a suitable XAI method according to their needs.
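The per-dimension aggregation described in the abstract (entropy weights combined with TOPSIS) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the matrix `X` of index scores and its values are hypothetical, and all indices are assumed to be benefit-type (larger is better).

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method: indices with more dispersion across
    alternatives carry more information and get larger weights."""
    P = X / X.sum(axis=0)                    # column-wise proportions
    m = X.shape[0]                           # number of alternatives
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)   # treat 0*log(0) as 0
    e = -(P * logs).sum(axis=0) / np.log(m)  # entropy of each index
    d = 1.0 - e                              # degree of divergence
    return d / d.sum()                       # normalized weights

def topsis(X, w):
    """TOPSIS closeness to the ideal solution, in [0, 1]; higher is better."""
    R = X / np.sqrt((X ** 2).sum(axis=0))    # vector normalization
    V = R * w                                # weighted normalized matrix
    best, worst = V.max(axis=0), V.min(axis=0)
    d_best = np.sqrt(((V - best) ** 2).sum(axis=1))
    d_worst = np.sqrt(((V - worst) ** 2).sum(axis=1))
    return d_worst / (d_best + d_worst)

# Hypothetical scores of 3 XAI methods on 4 indices of one dimension
X = np.array([[0.8, 0.6, 0.9, 0.7],
              [0.5, 0.9, 0.4, 0.8],
              [0.7, 0.7, 0.6, 0.6]])
w = entropy_weights(X)
scores = topsis(X, w)
ranking = np.argsort(-scores)  # methods ordered best-first on this dimension
```

The entropy weights make the aggregation data-driven (no subjective weighting), and the TOPSIS closeness score gives a single comparable number per XAI method per dimension, matching the abstract's use of the combined model.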

Keywords: explainable artificial intelligence; interpretability evaluation; evaluation model; surrogate model; rule model; quantitative evaluation

CLC number: TP18 (Automation and Computer Technology / Control Theory and Control Engineering)

 
