Deep Neural Network Visualization Based on Interpretable Basis Decomposition and Knowledge Graph

Cited by: 8


Authors: RUAN Li [1][2]; WEN Sha-Sha [2]; NIU Yi-Ming; LI Shao-Ning; XUE Yun-Zhi [3]; RUAN Tao; XIAO Li-Min [1][2]

Affiliations: [1] State Key Laboratory of Software Development Environment, Beijing 100191, China; [2] School of Computer Science and Engineering, Beihang University, Beijing 100191, China; [3] Institute of Software, Chinese Academy of Sciences, Beijing 100190, China; [4] China Patent Information Center, Beijing 100088, China

Source: Chinese Journal of Computers (《计算机学报》), 2021, No. 9, pp. 1786-1805 (20 pages)

Funding: National Key Research and Development Program of China (2017YFB0202004); State Key Laboratory of Software Development Environment (SKLSDE-2020ZX-15); Young Scientists Fund of the National Natural Science Foundation of China (11701545, 61772053).

Abstract: In recent years, deep learning models, represented by convolutional neural networks (CNNs), have been applied ever more widely in fields such as image recognition, owing to advantages including deep layered learning and label-free learning. However, because of their intrinsic black-box nature, explaining the internal working mechanisms of deep neural networks remains a major challenge, and their interpretability has become a frontier research topic in both academia and industry. Most existing studies analyze the decision-making process of deep learning models only from a local perspective and lack a graph-based representation grounded in an overall understanding. The interpretable basis decomposition (IBD) model, by contrast, produces interpretation results that form a strict scene-to-feature correspondence and constitute semi-structured data, which makes it well suited to knowledge graph construction. Exploiting this advantage, and addressing the lack of graph-based interpretability methods in existing work, this paper proposes a deep neural network visualization method based on interpretable basis decomposition and knowledge graphs. First, a knowledge graph construction method oriented to the feature-decomposition structure of the IBD model is adopted, building graph information such as interpretation relations and parallel relations between scenes and interpretation features. Second, using the scene-feature interpretation relation network, a Jaccard-coefficient-based method for clustering scenes by similarity is proposed. Third, to address the problem that existing IBD models may produce highly overlapping interpretation features for similar scenes, a scene-based discriminative feature extraction method is proposed: from the feature-decomposition results, it extracts for each class the decomposed features that distinguish that class from the others while carrying equal importance (i.e., discriminative features). Finally, to address the lack of fidelity testing in existing IBD-based deep network visualization evaluation, a fidelity test method suited to deep neural networks is proposed. Both the fidelity test and the human-confidence test show that the proposed method achieves excellent results.
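The Jaccard-based scene similarity and the per-class discriminative feature extraction described in the abstract can be sketched as follows. This is a minimal illustration only, not the paper's implementation: the scene names, the feature sets, and the assumption that each scene maps to a flat set of interpretation features are all hypothetical stand-ins for the IBD decomposition output.

```python
def jaccard(a, b):
    """Jaccard coefficient |A ∩ B| / |A ∪ B| between two feature sets."""
    a, b = set(a), set(b)
    union = a | b
    if not union:
        return 0.0
    return len(a & b) / len(union)

# Toy scene -> interpretation-feature mapping (illustrative only; a real
# IBD decomposition would supply these sets for each recognized scene).
scene_features = {
    "kitchen":     {"stove", "cabinet", "sink", "countertop"},
    "kitchenette": {"stove", "sink", "microwave", "countertop"},
    "beach":       {"sand", "sea", "sky", "umbrella"},
}

# Similar scenes share many interpretation features, so their Jaccard
# similarity is high; clustering can then group scenes by this score.
sim = jaccard(scene_features["kitchen"], scene_features["kitchenette"])
# intersection = {stove, sink, countertop}, union has 5 elements -> 0.6

def discriminative(target, mapping):
    """Features in `target`'s decomposition that no other scene shares,
    i.e. the features that distinguish this class from all others."""
    others = set().union(*(f for s, f in mapping.items() if s != target))
    return mapping[target] - others
```

For example, `discriminative("kitchen", scene_features)` yields `{"cabinet"}`: the overlapping features (stove, sink, countertop) explain both kitchen-like scenes equally well, so only the non-shared feature discriminates between them, which is the overlap problem the paper's discriminative feature extraction targets.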

Keywords: deep neural network; visualization; interpretable basis decomposition model; knowledge graph; interpreting deep learning models

CLC Classification: TP391 (Automation and Computer Technology / Computer Application Technology)

 
