Memory management in deep learning: a survey

Cited by: 7


Authors: MA Weiliang, PENG Xuan, XIONG Qian, SHI Xuanhua [1,2], JIN Hai [1,2]

Affiliations: [1] School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, Hubei, China; [2] National Engineering Research Center for Big Data Technology and System, Key Laboratory of Services Computing Technology and System (Ministry of Education), Huazhong University of Science and Technology, Wuhan 430074, Hubei, China

Source: Big Data Research, 2020, No. 4, pp. 56-68 (13 pages)

Funding: National Natural Science Foundation of China (No. 61772218).

Abstract: In recent years, deep learning has achieved great success in many fields. As deep neural networks develop in deeper and wider directions, both training and deploying deep neural network models face enormous memory pressure. The limited memory space of accelerator devices has become an important factor restricting the rapid development of neural network models, and achieving efficient memory management has become a key problem in the development of deep learning. To this end, this survey introduces the basic characteristics of deep neural networks, analyzes the memory bottleneck in the deep learning training process, classifies representative research works and discusses their respective advantages and disadvantages, and explores future trends of memory management techniques in deep learning.
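To make one of the technique families the survey classifies concrete, the following is a minimal NumPy sketch of re-computation (also known as gradient checkpointing): the forward pass keeps only every k-th activation, and the backward pass recomputes the discarded ones from the nearest checkpoint. The toy `tanh` layer, checkpoint interval, and shapes are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def layer(x):
    # A toy layer standing in for one network layer's forward pass.
    return np.tanh(x)

def layer_grad(inp, upstream):
    # Backward pass of the toy layer, given its saved input.
    return upstream * (1.0 - np.tanh(inp) ** 2)

def forward_checkpointed(x, n_layers, every):
    """Forward pass that stores only every `every`-th activation."""
    checkpoints = {0: x}
    for i in range(n_layers):
        x = layer(x)
        if (i + 1) % every == 0:
            checkpoints[i + 1] = x
    return x, checkpoints

def backward_checkpointed(checkpoints, n_layers, every, upstream):
    """Backward pass that recomputes discarded activations on the fly."""
    for i in reversed(range(n_layers)):
        start = (i // every) * every      # nearest checkpoint at or before layer i
        x = checkpoints[start]
        inputs = [x]
        for _ in range(start, i):         # recompute the segment's activations
            x = layer(x)
            inputs.append(x)
        upstream = layer_grad(inputs[i - start], upstream)
    return upstream

x0 = np.array([0.5, -0.2])
out, ckpts = forward_checkpointed(x0, n_layers=8, every=4)
grad = backward_checkpointed(ckpts, n_layers=8, every=4,
                             upstream=np.ones_like(x0))
# Only 3 activations (layers 0, 4, 8) are kept instead of 9, at the cost
# of recomputing each 4-layer segment once during the backward pass.
```

This is the memory-for-compute trade-off at the heart of re-computation: stored activations drop from O(n) to roughly O(n / k) checkpoints plus one transient segment, while the forward work is paid a second time in the backward pass.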

Keywords: memory management; deep learning; memory swapping; re-computation; memory sharing; compression

Classification: TP183 [Automation and Computer Technology / Control Theory and Control Engineering]

 
