Authors: LIU Guang-yu, ZHANG Ling-wei, HANG Ren-long
Affiliation: Jiangsu Key Laboratory of Big Data Analysis Technology, Nanjing University of Information Science and Technology, Nanjing, Jiangsu 210044, China
Source: Computer Simulation (《计算机仿真》), 2022, No. 10, pp. 359-363 (5 pages)
Funding: Jiangsu Provincial Youth Fund Project (BK20180786); National Natural Science Foundation of China (61906096)
Abstract: First-order optimization methods have been widely applied to the learning of sparse models; the common idea behind these methods is to combine the Iterative Hard Thresholding (IHT) algorithm with a traditional optimization method. Compared with first-order methods, second-order methods have rarely been applied to sparse optimization problems because of the tremendous computational cost of obtaining the Hessian matrix and its inverse. To exploit second-order information effectively and efficiently, this paper proposes a novel optimization method, Stochastic L-BFGS Hard Thresholding, for solving nonconvex sparse learning problems. Its core idea is to incorporate the IHT method into the stochastic L-BFGS algorithm, which significantly accelerates convergence while maintaining the effectiveness of the model. Experimental results on linear regression and logistic regression demonstrate the superiority of the proposed approach.
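The abstract does not give the algorithm's exact update rules, so the following minimal Python sketch only illustrates how the two ingredients it names can be combined: a stochastic L-BFGS step followed by an IHT projection. It assumes a squared-loss linear regression objective, plain mini-batch gradients, the standard two-loop recursion, and a simple top-k hard-thresholding projection; all function names and hyperparameters here are illustrative, not the paper's.

import numpy as np

def hard_threshold(w, k):
    # IHT projection: keep the k largest-magnitude entries, zero the rest
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-k:]
    out[keep] = w[keep]
    return out

def two_loop_direction(grad, s_hist, y_hist):
    # Standard L-BFGS two-loop recursion: approximates H^{-1} @ grad
    # from the stored curvature pairs (s, y) without forming the Hessian
    q = grad.copy()
    alphas = []
    for s, yv in zip(reversed(s_hist), reversed(y_hist)):
        a = (s @ q) / (yv @ s)
        alphas.append(a)
        q -= a * yv
    if s_hist:  # scale by gamma = s'y / y'y as the initial inverse Hessian
        s, yv = s_hist[-1], y_hist[-1]
        q *= (s @ yv) / (yv @ yv)
    for (s, yv), a in zip(zip(s_hist, y_hist), reversed(alphas)):
        b = (yv @ q) / (yv @ s)
        q += (a - b) * s
    return q

def stochastic_lbfgs_iht(X, y, k, lr=0.1, memory=5, batch=32, iters=500, seed=0):
    # Sparse least squares: min ||Xw - y||^2 subject to ||w||_0 <= k
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    s_hist, y_hist = [], []
    grad_fn = lambda w, Xb, yb: Xb.T @ (Xb @ w - yb) / len(yb)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        Xb, yb = X[idx], y[idx]
        g = grad_fn(w, Xb, yb)
        # quasi-Newton step, then the IHT projection back to the sparse set
        w_new = hard_threshold(w - lr * two_loop_direction(g, s_hist, y_hist), k)
        # curvature pair evaluated on the same mini-batch, a common device
        # in stochastic quasi-Newton methods to keep (s, y) consistent
        s_vec, y_vec = w_new - w, grad_fn(w_new, Xb, yb) - g
        if s_vec @ y_vec > 1e-10:  # keep the pair only if curvature is positive
            s_hist.append(s_vec); y_hist.append(y_vec)
            if len(s_hist) > memory:
                s_hist.pop(0); y_hist.pop(0)
        w = w_new
    return w

With an empty history the two-loop recursion returns the raw gradient, so the sketch degrades gracefully to projected stochastic gradient descent (the plain stochastic IHT baseline) on the first iterations.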
Classification Code: TP311 [Automation and Computer Technology / Computer Software and Theory]