A Review of Knowledge Distillation in Deep Neural Networks

Cited by: 1

Author: Han Yu [1]

Affiliation: [1] First Research Institute of the Ministry of Public Security, Beijing, China

Source: Computer Science and Application (《计算机科学与应用》), 2020, No. 9, pp. 1625-1630 (6 pages)

Abstract: Deep neural networks have achieved great success in computer vision, natural language processing, speech recognition, and other fields. However, as network architectures grow more complex, these models consume large amounts of computing resources and storage space, which severely restricts their deployment in resource-constrained environments and in real-time online applications. It is therefore necessary to compress deep neural networks while sacrificing as little model performance as possible. This article introduces neural network model compression methods based on knowledge distillation, surveys and summarizes representative work in the field of knowledge distillation for deep neural networks, and discusses likely future directions for knowledge distillation.
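The core technique this review surveys, soft-target knowledge distillation, trains a small student network to match the temperature-softened output distribution of a large teacher in addition to the ground-truth labels. Below is a minimal sketch of that combined loss, assuming PyTorch; the function name distillation_loss and the values of T and alpha are illustrative choices, not taken from this paper.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    """Soft-target distillation loss in the style of Hinton et al. (2015).

    Blends the KL divergence between the temperature-softened teacher and
    student distributions with ordinary cross-entropy on hard labels.
    T and alpha here are illustrative defaults, not values from this review.
    """
    # Soften both output distributions with temperature T.
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    # The T*T factor keeps the soft-target gradients on the same scale
    # as the hard-label term across different temperatures.
    kd_term = F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)
    ce_term = F.cross_entropy(student_logits, labels)
    return alpha * kd_term + (1.0 - alpha) * ce_term

In a typical training loop the teacher is frozen (eval mode, no gradient tracking) and only the student's parameters are updated against this combined loss.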

Keywords: neural networks; deep learning; knowledge distillation

Classification: TP3 [Automation and Computer Technology - Computer Science and Technology]

 
