Second-Order Nonconvex Sparse Optimization Method Based on Stochastic L-BFGS (Cited by: 1)

Authors: LIU Guang-yu [1]; ZHANG Ling-wei; HANG Ren-long (Jiangsu Key Laboratory of Big Data Analysis Technology, Nanjing University of Information Science and Technology, Nanjing, Jiangsu 210044, China)

Affiliation: [1] Jiangsu Key Laboratory of Big Data Analysis Technology, Nanjing University of Information Science and Technology, Nanjing 210044, Jiangsu, China

Source: Computer Simulation (《计算机仿真》), 2022, No. 10, pp. 359-363 (5 pages)

Funding: Jiangsu Provincial Youth Science Foundation (BK20180786); National Natural Science Foundation of China (61906096).

Abstract: First-order optimization methods are widely used for learning sparse models; the common strategy is to combine the Iterative Hard Thresholding (IHT) algorithm with a standard first-order optimizer. By contrast, second-order methods are rarely applied to sparse optimization problems, because computing the Hessian matrix and its inverse consumes enormous computational resources. To exploit second-order information both effectively and efficiently, this paper proposes a novel Stochastic L-BFGS Hard Thresholding method for nonconvex sparse learning problems. The core idea is to incorporate the IHT step into the stochastic L-BFGS algorithm, which significantly accelerates convergence while maintaining model performance. Experimental results on linear regression and logistic regression demonstrate the superiority of the proposed method.
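The paper's own pseudocode is not reproduced on this page, but the idea stated in the abstract, projecting a stochastic L-BFGS step onto the set of k-sparse vectors via hard thresholding, can be sketched as follows. This is a minimal illustration for sparse linear regression under our own assumptions: the function names (hard_threshold, two_loop, stochastic_lbfgs_iht), the step size, memory size, and batch size are hypothetical choices for the sketch, not values taken from the paper.

```python
# Minimal sketch: stochastic L-BFGS with an iterative hard-thresholding (IHT)
# projection, applied to sparse least squares.  Illustrative only; names and
# hyperparameters are assumptions, not the paper's implementation.
import numpy as np
from collections import deque

def hard_threshold(x, k):
    """Keep the k largest-magnitude entries of x, zero out the rest."""
    z = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    z[idx] = x[idx]
    return z

def two_loop(grad, pairs):
    """Standard L-BFGS two-loop recursion: approximates H^{-1} @ grad."""
    q = grad.copy()
    alphas = []
    for s, y in reversed(pairs):          # newest pair first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        q -= a * y
        alphas.append((a, rho))
    if pairs:                             # scale by the newest pair
        s, y = pairs[-1]
        q *= (s @ y) / (y @ y)
    for (a, rho), (s, y) in zip(reversed(alphas), pairs):  # oldest first
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def stochastic_lbfgs_iht(A, b, k, eta=0.1, m=5, batch=32, iters=500, seed=0):
    """Sparse least squares: min ||Ax - b||^2 / (2n)  s.t.  ||x||_0 <= k."""
    rng = np.random.default_rng(seed)
    n, d = A.shape
    x = np.zeros(d)
    pairs = deque(maxlen=m)               # stored (s, y) correction pairs
    for _ in range(iters):
        idx = rng.choice(n, batch, replace=False)
        Ab, bb = A[idx], b[idx]
        g = Ab.T @ (Ab @ x - bb) / batch
        # Quasi-Newton step followed by the IHT projection onto k-sparse vectors.
        x_new = hard_threshold(x - eta * two_loop(g, list(pairs)), k)
        # Curvature pair from the SAME minibatch, so y is a consistent gradient
        # difference (a common trick in stochastic quasi-Newton methods).
        g_new = Ab.T @ (Ab @ x_new - bb) / batch
        s, y = x_new - x, g_new - g
        if s @ y > 1e-10:                 # skip pairs with non-positive curvature
            pairs.append((s, y))
        x = x_new
    return x

# Example: recover a 10-sparse signal from noisy random measurements.
rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 100))
x_true = np.zeros(100); x_true[:10] = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(2000)
x_hat = stochastic_lbfgs_iht(A, b, k=10)
```

Skipping correction pairs with non-positive curvature keeps the two-loop recursion well defined even though the hard-thresholding projection makes the problem nonconvex; whether the paper uses this safeguard or another one is not stated in the abstract.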

Keywords: first-order optimization methods; second-order information; iterative hard thresholding; sparse learning

Classification: TP311 [Automation and Computer Technology: Computer Software and Theory]

 
