Feature-aware regularization for sparse online learning  (Cited by: 2)


Authors: OIWA Hidekazu, MATSUSHIMA Shin, NAKAGAWA Hiroshi

Affiliations: [1] Graduate School of Information Science and Technology, The University of Tokyo; [2] Information Technology Center, The University of Tokyo

Source: Science China (Information Sciences), 2014, Issue 5, pp. 61-81 (21 pages)

Funding: Supported by JSPS KAKENHI, Grant-in-Aid for JSPS Fellows for Hidekazu Oiwa

Abstract: Learning a compact predictive model in an online setting has recently gained a great deal of attention. The combination of online learning with sparsity-inducing regularization enables faster learning with a smaller memory footprint than previous learning frameworks. Many optimization methods and learning algorithms have been developed on the basis of online learning with L1-regularization. L1-regularization tends to truncate some types of parameters, such as those that rarely occur or have a small range of values, unless they are emphasized in advance. However, the inclusion of a pre-processing step would make it very difficult to preserve the advantages of online learning. We propose a new regularization framework for sparse online learning. We focus on regularization terms, and we enhance the state-of-the-art regularization approach by integrating information on all previous subgradients of the loss function into a regularization term. The resulting algorithms enable online learning to adjust the intensity of each feature's truncation without pre-processing and eventually eliminate the bias of L1-regularization. We show theoretical properties of our framework, including its computational complexity and an upper bound on regret. Experiments demonstrated that our algorithms outperformed previous methods in many classification tasks.
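The abstract describes per-feature truncation whose intensity is adapted online from accumulated subgradient information. The sketch below illustrates that idea in a generic FOBOS-style loop (a subgradient step on the loss followed by soft-thresholding); the hinge loss, the parameters eta and lam, and the normalized cumulative-subgradient scaling rule are illustrative assumptions, not the paper's exact regularizer.

```python
import numpy as np

def soft_threshold(w, tau):
    """Element-wise soft-thresholding: the proximal operator of the L1 norm."""
    return np.sign(w) * np.maximum(np.abs(w) - tau, 0.0)

def feature_aware_sparse_online_learning(stream, dim, eta=0.1, lam=0.01, eps=1e-8):
    """Minimal sketch of sparse online learning with a feature-aware L1 penalty.

    Each round: take a subgradient step on the hinge loss, then apply a
    per-feature truncation whose strength is scaled by the cumulative
    subgradient magnitude observed for that feature.  The scaling rule
    (normalized cumulative |g_j|) is an illustrative assumption.
    """
    w = np.zeros(dim)          # weight vector
    g_acc = np.zeros(dim)      # accumulated |subgradient| per feature
    for x, y in stream:        # x: feature vector, y in {-1, +1}
        margin = y * np.dot(w, x)
        g = -y * x if margin < 1.0 else np.zeros(dim)   # hinge-loss subgradient
        g_acc += np.abs(g)
        w -= eta * g                                     # subgradient step
        # Feature-aware truncation: features with larger accumulated
        # subgradients receive proportionally stronger L1 shrinkage.
        scale = g_acc / (g_acc.max() + eps)
        w = soft_threshold(w, eta * lam * scale)
    return w
```

Under this assumed scaling, a feature that rarely receives subgradients accumulates little mass and is shrunk only gently, while frequently updated features face the full L1 truncation. This reproduces, qualitatively, the behaviour the abstract attributes to feature-aware regularization: rarely occurring features are not eliminated merely because they were not emphasized in a pre-processing step.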

Keywords: online learning; supervised learning; sparsity-inducing regularization; feature selection; sentiment analysis

Classification: TP301.6 [Automation and Computer Technology / Computer System Architecture]

 
