Authors: Arnulf Jentzen, Adrian Riekert
Affiliations: [1] School of Data Science and Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen, People's Republic of China; [2] Applied Mathematics: Institute for Analysis and Numerics, University of Münster, Münster, Germany
Source: 《Communications in Mathematics and Statistics》, 2024, No. 3, pp. 385-434 (50 pages)
Funding: Funded by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure.
Abstract: Although deep learning-based approximation algorithms have been applied very successfully to numerous problems, at the moment the reasons for their performance are not entirely understood from a mathematical point of view. Recently, estimates for the convergence of the overall error have been obtained in the situation of deep supervised learning, but with an extremely slow rate of convergence. In this note, we partially improve on these estimates. More specifically, we show that the depth of the neural network only needs to increase much more slowly in order to obtain the same rate of approximation. The results hold in the case of an arbitrary stochastic optimization algorithm with i.i.d. random initializations.
Keywords: Deep learning; Artificial intelligence; Empirical risk minimization; Optimization
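To make the setting described in the abstract concrete, the following is a minimal sketch (not taken from the paper) of empirical risk minimization with a stochastic optimizer restarted from i.i.d. random initializations. Plain mini-batch SGD stands in for the "arbitrary stochastic optimization algorithm"; the target function, network width, step size, sample sizes, and all names are illustrative assumptions.

```python
# Minimal sketch (NumPy only): empirical risk minimization with SGD restarted
# from several i.i.d. random initializations, keeping the run with the
# smallest empirical risk. All problem sizes and hyperparameters below are
# illustrative assumptions, not values from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised-learning data: inputs in [0, 1], target f(x) = sin(2*pi*x).
n_train = 256
x = rng.uniform(0.0, 1.0, size=(n_train, 1))
y = np.sin(2.0 * np.pi * x)

def init_params(width):
    """One i.i.d. random initialization of a one-hidden-layer ReLU network."""
    return {
        "W1": rng.normal(0.0, 1.0, size=(1, width)),
        "b1": rng.normal(0.0, 1.0, size=(width,)),
        "W2": rng.normal(0.0, 1.0 / np.sqrt(width), size=(width, 1)),
        "b2": np.zeros(1),
    }

def forward(p, x):
    h = np.maximum(x @ p["W1"] + p["b1"], 0.0)   # ReLU hidden layer
    return h @ p["W2"] + p["b2"], h

def empirical_risk(p, x, y):
    pred, _ = forward(p, x)
    return float(np.mean((pred - y) ** 2))

def sgd(p, x, y, steps=2000, lr=0.05, batch=32):
    """Plain mini-batch SGD on the mean-squared empirical risk."""
    for _ in range(steps):
        idx = rng.integers(0, len(x), size=batch)
        xb, yb = x[idx], y[idx]
        pred, h = forward(p, xb)
        g = 2.0 * (pred - yb) / batch            # gradient of risk w.r.t. predictions
        grad_W2 = h.T @ g
        grad_b2 = g.sum(axis=0)
        gh = (g @ p["W2"].T) * (h > 0.0)         # backprop through the ReLU layer
        grad_W1 = xb.T @ gh
        grad_b1 = gh.sum(axis=0)
        p["W1"] -= lr * grad_W1
        p["b1"] -= lr * grad_b1
        p["W2"] -= lr * grad_W2
        p["b2"] -= lr * grad_b2
    return p

# Run the optimizer from several independent random initializations and
# keep the realization with the smallest empirical risk.
n_inits, width = 8, 64
best_risk, best_params = np.inf, None
for k in range(n_inits):
    params = sgd(init_params(width), x, y)
    risk = empirical_risk(params, x, y)
    if risk < best_risk:
        best_risk, best_params = risk, params

print(f"best empirical risk over {n_inits} i.i.d. initializations: {best_risk:.4f}")
```

Selecting the run with the smallest empirical risk is one natural way to combine the i.i.d. random initializations mentioned in the abstract; any other stochastic optimizer could replace the SGD routine in this sketch without changing the overall scheme.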