Author: Yang Ning (杨宁) [1]
Affiliation: [1] School of Information Engineering, Nanjing Xiaozhuang University, Nanjing, Jiangsu 211171, China
Source: Journal of Nanjing Xiaozhuang University, 2015, No. 6, pp. 21-25 (5 pages)
Abstract: Deep Neural Networks (DNNs) have become a research focus in machine learning in recent years. A DNN is a neural network with many layers and tens of millions of parameters to learn, so training is computationally expensive and very time-consuming. GPUs, by contrast, offer powerful computing capability and are well suited to accelerating DNN training. This paper therefore proposes a multi-GPU parallel framework for DNN training and describes its implementation and performance optimization. By exploiting the cooperative parallel computing power of multiple GPUs together with data parallelism, the framework achieves fast and efficient DNN training. In a speech recognition application, both model convergence speed and model quality improve markedly: the framework achieves a 4.6x speedup over a single GPU and reduces the word error rate by about 10%.
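The core technique named in the abstract, synchronous data parallelism across several GPUs (model replicated per device, each mini-batch split across devices, gradients combined before one shared weight update), can be illustrated with a short sketch. This is not the paper's own framework; it is a minimal, hypothetical example using PyTorch's nn.DataParallel, with layer sizes chosen only to suggest a speech-recognition acoustic model.

    # Minimal sketch of data-parallel DNN training on multiple GPUs.
    # Assumption: PyTorch is used here purely for illustration; the paper
    # describes its own multi-GPU framework, not this API.
    import torch
    import torch.nn as nn

    # A simple feed-forward DNN, e.g. an acoustic model for speech recognition.
    # Layer sizes are illustrative, not taken from the paper.
    model = nn.Sequential(
        nn.Linear(440, 2048), nn.ReLU(),
        nn.Linear(2048, 2048), nn.ReLU(),
        nn.Linear(2048, 8000),
    )

    # Data parallelism: each GPU holds a replica of the model; every mini-batch
    # is split across the GPUs, gradients are summed, and a single shared set of
    # weights is updated.
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)   # replicate across all visible GPUs
    model = model.cuda()

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    def train_step(features, labels):
        """One synchronous data-parallel SGD step."""
        features, labels = features.cuda(), labels.cuda()
        optimizer.zero_grad()
        loss = criterion(model(features), labels)  # forward pass split across GPUs
        loss.backward()                            # gradients gathered and summed
        optimizer.step()                           # one update to the shared weights
        return loss.item()

Because every GPU processes a different slice of the same mini-batch and the weights stay synchronized, throughput scales with the number of devices, which is the effect the paper quantifies as a 4.6x speedup over a single GPU.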
Classification code: TP183 [Automation and Computer Technology — Control Theory and Control Engineering]