Convergence of Gradient Descent Algorithm with Momentum

Cited by: 2


Authors: PENG Xianlun (彭先伦) and XIE Gang (谢纲), School of Mathematics, East China University of Science and Technology, Shanghai 200237, China

Affiliation: [1] School of Mathematics, East China University of Science and Technology, Shanghai 200237, China

Source: Journal of East China University of Science and Technology (Natural Science Edition), 2021, Issue 6, pp. 779-786 (8 pages)

Abstract: Neural networks are now widely used and have achieved notable success in many fields, but relatively little theoretical analysis of them is available. This paper analyzes the convergence of the back-propagation algorithm with momentum for three-layer feed-forward neural networks. In the model considered, the learning rate is a constant and the momentum coefficient is an adaptive variable chosen to accelerate and stabilize the training of the network parameters. The corresponding convergence results and detailed proofs are given; compared with existing results, these conclusions are more general.
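Below is a minimal sketch of the kind of update the abstract refers to: gradient descent with a momentum term and a constant learning rate, w_{k+1} = w_k - η∇E(w_k) + α_k(w_k - w_{k-1}). The adaptive rule for the momentum coefficient α_k used in the sketch is only a hypothetical illustration; the abstract does not specify the rule actually analyzed in the paper or its convergence conditions.

```python
import numpy as np

def train_with_momentum(grad_E, w0, eta=0.01, alpha_max=0.9, n_steps=1000):
    """Gradient descent with a momentum term and constant learning rate eta.

    Update: w_{k+1} = w_k - eta * grad_E(w_k) + alpha_k * (w_k - w_{k-1}).
    The adaptive choice of alpha_k below is a hypothetical placeholder,
    not the rule analyzed in the paper.
    """
    w_prev = np.asarray(w0, dtype=float)
    w = w_prev.copy()
    for _ in range(n_steps):
        g = grad_E(w)
        step = w - w_prev                      # previous parameter increment
        # Hypothetical adaptive momentum coefficient: damp it when the
        # gradient is large, and never let it exceed alpha_max.
        alpha_k = alpha_max / (1.0 + np.linalg.norm(g))
        w_next = w - eta * g + alpha_k * step  # momentum update
        w_prev, w = w, w_next
    return w

# Usage example: minimize E(w) = 0.5 * ||w||^2, whose gradient is w.
w_star = train_with_momentum(lambda w: w, w0=np.array([1.0, -2.0]))
print(w_star)  # iterates approach the minimizer at the origin
```

In the paper's setting, E would be the training error of a three-layer feed-forward network and w its weight vector; the scalar quadratic above is used only to keep the demonstration runnable and self-contained.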

Keywords: neural network; back-propagation algorithm; momentum; momentum coefficient; convergence

Classification: O29 (Applied Mathematics)

 
