Distributed network traffic optimization based on stochastic accelerated dual descent (ADD) algorithms  (Cited by: 2)


Authors: LIU Lin [1], YANG Lifang [1]

Affiliation: [1] School of Computer Science, Chongqing College of Electronic Engineering, Chongqing 401331, China

Source: Journal of Chongqing University of Posts and Telecommunications (Natural Science Edition), 2014, No. 6, pp. 838-844 (7 pages)

Abstract: Traditional distributed network traffic optimization problems are usually solved by the dual gradient descent algorithm, which, although it can be implemented in a distributed manner, has a slow convergence rate. The accelerated dual descent (ADD) algorithm improves the convergence rate of dual gradient descent through distributed computation of approximate Newton steps. Owing to the uncertainty of communication networks, however, its convergence cannot be guaranteed under uncertain constraints. To address this, a stochastic version of the ADD algorithm is proposed to solve the network optimization problem under uncertainty. It is proved theoretically that the stochastic ADD algorithm converges almost surely to an error neighborhood of the optimal value when the mean square error of the uncertainty is bounded, and that under a stricter uncertainty constraint it converges almost surely to the optimal value itself. Numerical results show that the stochastic ADD algorithm converges in about two orders of magnitude fewer iterations than the stochastic gradient descent algorithm.
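The abstract only describes the method at a high level, so the following is a minimal sketch of the idea, not the authors' code: dual descent on a toy quadratic network flow problem, contrasting the plain dual gradient step with an ADD-style approximate Newton step built from a truncated Neumann-series inverse of the dual Hessian, with artificial noise on the dual gradient standing in for the paper's constraint uncertainty. The problem data, step size, noise level, and truncation depth N are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: min 0.5*x'Qx + c'x  subject to  A x = b  (network flow).
# 4-node graph, 5 arcs; one node's conservation constraint is dropped
# ("grounded") so the reduced dual Hessian below is positive definite.
arcs = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n, m = 4, len(arcs)
A_full = np.zeros((n, m))
for j, (u, v) in enumerate(arcs):
    A_full[u, j], A_full[v, j] = 1.0, -1.0
A = A_full[:-1]                          # reduced node-arc incidence matrix
b = np.array([1.0, -0.5, -0.5])          # external injections at nodes 0..2
Q = np.diag(rng.uniform(1.0, 2.0, m))    # strictly convex quadratic arc costs
c = rng.standard_normal(m)
Qinv = np.linalg.inv(Q)
H = A @ Qinv @ A.T                       # dual Hessian (a reduced weighted Laplacian)
D = np.diag(np.diag(H))                  # diagonal splitting H = D - B
B = D - H

def primal(lam):
    """Flow that minimizes the Lagrangian for the dual variable lam."""
    return -Qinv @ (c + A.T @ lam)

def noisy_grad(lam, sigma=0.05):
    """Dual gradient A x(lam) - b, observed with additive zero-mean noise
    of bounded variance (stand-in for the paper's constraint uncertainty)."""
    return A @ primal(lam) - b + sigma * rng.standard_normal(n - 1)

def add_direction(g, N=2):
    """Approximate Newton direction H^{-1} g via the truncated Neumann
    series H^{-1} ~ sum_{k=0..N} (D^{-1} B)^k D^{-1}."""
    term = np.linalg.solve(D, g)
    d = term.copy()
    for _ in range(N):
        term = np.linalg.solve(D, B @ term)
        d += term
    return d

lam_gd = np.zeros(n - 1)                 # stochastic dual gradient ascent
lam_add = np.zeros(n - 1)                # stochastic approximate-Newton (ADD-style)
for _ in range(200):
    lam_gd = lam_gd + 0.1 * noisy_grad(lam_gd)
    lam_add = lam_add + add_direction(noisy_grad(lam_add))

print("||Ax - b||, gradient: ", np.linalg.norm(A @ primal(lam_gd) - b))
print("||Ax - b||, ADD-style:", np.linalg.norm(A @ primal(lam_add) - b))
```

The Neumann truncation is what makes the Newton direction distributable: the k-th term of the series only mixes information from k-hop neighbors in the network, which is the mechanism the ADD line of work exploits; with noisy gradients, both iterations settle into a noise-floor neighborhood of the optimum, consistent with the bounded mean-square-error regime the abstract describes.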

Keywords: accelerated dual descent algorithm; stochastic accelerated dual descent (ADD) algorithm; network optimization; convergence rate

Classification codes: TN92 [Electronics and Telecommunications: Communication and Information Systems]; TP393 [Electronics and Telecommunications: Information and Communication Engineering]

 
