Research Advances in the Interpretability of Deep Learning    Cited by: 73


Authors: Cheng Keyang (成科扬)[1,2], Wang Ning (王宁), Shi Wenxi (师文喜), Zhan Yongzhao (詹永照)

Affiliations: [1] School of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, Jiangsu 212013; [2] National Engineering Laboratory for Public Safety Risk Perception and Control by Big Data (China Academy of Electronic Sciences), Beijing 100041; [3] Xinjiang Lianhaichuangzhi Information Technology Co., Ltd., Urumqi 830001

Source: Journal of Computer Research and Development (《计算机研究与发展》), 2020, No. 6, pp. 1208-1217 (10 pages)

Funding: National Natural Science Foundation of China (61972183, 61672268); Director Fund of the National Engineering Laboratory for Public Safety Risk Perception and Control by Big Data.

Abstract: Research on the interpretability of deep learning is a cross-disciplinary topic spanning artificial intelligence, machine learning, cognitive psychology, logic, and other fields, and it has important theoretical significance and practical value in areas such as information push, medical research, finance, and information security. Although a great deal of work has been published in this area in recent years, many issues remain open. This paper reviews the history of research on deep learning interpretability from three aspects: the origin of interpretable deep learning, the research exploration stage, and the model construction stage. The current state of interpretability analysis for existing deep learning models is then presented from three aspects: visual analysis, robustness perturbation analysis, and sensitivity analysis. Research on constructing interpretable deep learning models is examined from four aspects: model agents, logical reasoning, network node association analysis, and the improvement of traditional machine learning models. The limitations of current research are then analyzed and discussed, typical applications of interpretable deep learning are presented, and possible future research directions are outlined.
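The following is a minimal, illustrative sketch (not taken from the paper) of gradient-based sensitivity analysis, one of the interpretability techniques the abstract surveys: the gradient of a class score with respect to the input indicates which input pixels the model is most sensitive to, yielding a saliency map. It assumes PyTorch/torchvision; the untrained ResNet-18 and the random input tensor are stand-ins for a real trained model and image.

# Gradient-based saliency: sensitivity of the top class score to each input pixel.
import torch
import torchvision.models as models

model = models.resnet18()                              # any trained image classifier would do
model.eval()

x = torch.randn(1, 3, 224, 224, requires_grad=True)    # stand-in for a real preprocessed image

score = model(x)[0].max()                              # logit of the top-scoring class
score.backward()                                       # gradient of that score w.r.t. the input

saliency = x.grad.abs().max(dim=1)[0]                  # per-pixel sensitivity map, shape (1, 224, 224)
print(saliency.shape)

Large values in the saliency map mark pixels whose perturbation would change the class score the most, which is also the intuition behind the robustness perturbation analysis mentioned above.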

Keywords: artificial intelligence; deep learning; interpretability; neural network; visualization

Classification: TP391 [Automation and Computer Technology - Computer Application Technology]
