Authors: Yu Jie; Meng Wenhui (School of Mathematics, Northwest University, Xi'an 710127, China)
Source: Pure and Applied Mathematics, 2022, No. 1, pp. 116-126 (11 pages)
Funding: National Natural Science Foundation of China (11201373); Natural Science Foundation of the Shaanxi Provincial Department of Education (14JK747).
Abstract: Aiming at the problem of optimizing over large-scale data sets, this paper proposes an algorithm for minimizing a quadratic loss function with the conjugate gradient method in a distributed environment. The algorithm uses first-derivative information of each sub-machine's local loss function to update the iterate, performs two rounds of communication in each iteration, and minimizes the sum of the local loss functions on the master machine through communication and cooperation. Theoretical analysis proves that the algorithm converges linearly under appropriate conditions. Compared with the distributed alternating direction method of multipliers (ADMM) on simulated data sets, the results show that the distributed conjugate gradient algorithm matches the centralized performance in fewer iterations than distributed ADMM. Experiments also show that increasing the sample size on each sub-machine not only speeds up convergence but also reduces the computational error.
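The paper's exact algorithm is not reproduced here, but the abstract's description (local first-derivative information, two communication rounds per iteration, quadratic loss) can be sketched as distributed conjugate gradient on a least-squares objective: one round aggregates local gradients, the other aggregates local Hessian-vector products. The data generation, machine count, and loss form below are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setting: quadratic loss f(w) = 1/(2n) * ||X w - y||^2,
# with rows of (X, y) partitioned across K sub-machines.
K, n_k, d = 4, 50, 10
Xs = [rng.standard_normal((n_k, d)) for _ in range(K)]
w_true = rng.standard_normal(d)
ys = [X @ w_true + 0.01 * rng.standard_normal(n_k) for X in Xs]
n = K * n_k

def global_grad(w):
    # Communication round 1: each sub-machine sends its local gradient,
    # the master sums them.
    return sum(X.T @ (X @ w - y) for X, y in zip(Xs, ys)) / n

def hess_vec(p):
    # Communication round 2: each sub-machine sends a local
    # Hessian-vector product X_k^T X_k p, the master sums them.
    return sum(X.T @ (X @ p) for X in Xs) / n

# Conjugate gradient iteration driven only by the two aggregated quantities.
w = np.zeros(d)
r = -global_grad(w)       # residual = negative gradient
p = r.copy()
for _ in range(d):        # CG converges in at most d steps on a quadratic
    Ap = hess_vec(p)
    alpha = (r @ r) / (p @ Ap)
    w = w + alpha * p
    r_new = r - alpha * Ap
    if np.linalg.norm(r_new) < 1e-10:
        r = r_new
        break
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new

print(np.linalg.norm(w - w_true))
```

With low-noise simulated data, the recovered `w` is close to `w_true` after at most `d` iterations, consistent with the linear (in fact finite-step, for exact quadratics) convergence the abstract claims.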
Keywords: big data; distributed optimization; conjugate gradient method; quadratic loss function; linear convergence
Classification: O224 [Science — Operations Research and Cybernetics]