Gradient Convergence of Deep Learning-Based Numerical Methods for BSDEs  


Authors: Zixuan WANG, Shanjian TANG

Affiliations: [1] Department of Finance and Control Sciences, Shanghai Center for Mathematical Science, Fudan University, Shanghai 200433, China; [2] Department of Finance and Control Sciences, School of Mathematical Sciences, Fudan University, Shanghai 200433, China

Source: Chinese Annals of Mathematics, Series B, 2021, Issue 2, pp. 199-216 (18 pages)

Funding: This work was supported by the National Key R&D Program of China (No. 2018YFA0703900) and the National Natural Science Foundation of China (No. 11631004).

Abstract: The authors prove gradient convergence of the deep learning-based numerical method for high-dimensional parabolic partial differential equations and backward stochastic differential equations, which is based on the time discretization of stochastic differential equations (SDEs for short) and the stochastic approximation method for nonconvex stochastic programming problems. They use the stochastic gradient descent method, a quadratic loss function, and the sigmoid activation function in the neural network setting. Combining classical techniques of randomized stochastic gradients, the Euler scheme for SDEs, and the convergence of neural networks, they obtain an O(K^(-1/4)) rate of gradient convergence, where K is the total number of iterative steps.
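The pipeline the abstract describes (simulate the forward SDE by the Euler scheme, roll the discretized BSDE forward from a parametrized initial value, and minimize the quadratic terminal loss by stochastic gradient descent) can be illustrated on a toy example. The sketch below is not the authors' method: it treats the trivial BSDE dY = Z dW with driver f = 0, X a Brownian motion, and terminal condition g(x) = x^2, and it replaces the sigmoid networks for Z with one constant parameter per time step so that the gradients of the quadratic loss can be written analytically. All variable names and parameter values are illustrative choices.

```python
import numpy as np

# Toy deep-BSDE-style scheme (illustrative, not the authors' code):
# dY = Z dW, Y_T = g(X_T), X = W, f = 0. We parametrize Y_0 by a scalar
# y0 and Z on each time step by a constant z[n] (a linear stand-in for
# the sigmoid networks used in the paper), then run SGD on the quadratic
# terminal loss E|Y_N - g(X_N)|^2 over minibatches of simulated paths.

rng = np.random.default_rng(0)
T, N, M, lr, steps = 1.0, 20, 256, 0.05, 2000
dt = T / N

g = lambda x: x ** 2          # terminal condition; E[g(W_T)] = T, so Y_0 = T

y0 = 0.0                      # parametrized initial value Y_0
z = np.zeros(N)               # per-step constant parametrization of Z

for _ in range(steps):
    dW = rng.normal(0.0, np.sqrt(dt), size=(M, N))   # Brownian increments
    X = dW.cumsum(axis=1)                            # Euler scheme for X = W
    Y = y0 + dW @ z                                  # Y_N = y0 + sum_n z[n] dW_n
    resid = Y - g(X[:, -1])                          # terminal mismatch
    # analytic gradients of the quadratic loss; one SGD step per minibatch
    y0 -= lr * 2.0 * resid.mean()
    z -= lr * 2.0 * (resid[:, None] * dW).mean(axis=0)

print(y0)  # should be close to the true value Y_0 = E[W_T^2] = T = 1.0
```

In this degenerate example the optimal constant z[n] is zero, so the scheme reduces to a Monte Carlo estimate of E[g(W_T)]; the point is only to show the forward rollout and the gradient step that the nonconvex analysis in the paper studies for genuine neural parametrizations.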

Keywords: PDEs, BSDEs, Deep learning, Nonconvex stochastic programming, Convergence result

Classification: TP18 [Automation and Computer Technology: Control Theory and Control Engineering]; O211.63 [Automation and Computer Technology: Control Science and Engineering]
