Estimation of convergence rate for multi-regression learning algorithm

Cited by: 1


Authors: XU ZongBen, ZHANG YongQuan, CAO FeiLong

Affiliations: [1] Institute for Information and System Sciences, Xi'an Jiaotong University, Xi'an 710049, China; [2] MOE Key Laboratory for Intelligent Networks and Network Security, Xi'an Jiaotong University, Xi'an 710049, China; [3] Department of Information and Mathematics Sciences, China Jiliang University, Hangzhou 310018, China

Source: Science China (Information Sciences), 2012, No. 3, pp. 701-713 (13 pages)

Funding: supported by the National Basic Research Program of China (Grant No. 2007CB311002) and the National Natural Science Foundation of China (Grant Nos. 90818020, 60873206)

Abstract: In many applications, prior information about the regression function is unavailable, so the function must be learned from samples by suitable tools. In this paper we investigate the regression problem in learning theory, i.e., the convergence rate of a regression learning algorithm based on least-square schemes in a multi-dimensional polynomial space. Our main aim is to analyze the generalization error for multi-regression problems in learning theory. Using the Jackson operators from approximation theory, covering numbers, entropy numbers and related probability inequalities, we obtain upper and lower bounds for the convergence rate of the learning algorithm. In particular, we show that for multi-variable smooth regression functions, the estimates achieve the almost optimal rate of convergence up to a logarithmic factor. Our results are significant for the study of the convergence, stability and complexity of regression learning algorithms.
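To make the scheme concrete, the following is a minimal sketch (not the paper's construction) of the kind of least-squares regression learning the abstract describes: the estimator is the empirical risk minimizer over a multivariate polynomial hypothesis space, f_z = argmin_{f in P_n} (1/m) sum_i (f(x_i) - y_i)^2, and its L2 distance to the regression function is estimated on fresh samples. The target function f_rho, the noise level, the sample size and the polynomial degree are illustrative assumptions.

# A minimal sketch of least-squares regression learning in a
# multivariate polynomial space; all concrete choices below are
# illustrative assumptions, not taken from the paper.
import numpy as np
from itertools import combinations_with_replacement

def poly_features(X, degree):
    """Map samples in R^d to all monomials of total degree <= degree."""
    n, d = X.shape
    cols = [np.ones(n)]  # constant term
    for k in range(1, degree + 1):
        for idx in combinations_with_replacement(range(d), k):
            cols.append(np.prod(X[:, idx], axis=1))
    return np.column_stack(cols)

rng = np.random.default_rng(0)
d, m, degree = 2, 200, 3                             # dimension, sample size, degree
X = rng.uniform(-1, 1, size=(m, d))                  # sample inputs
f_rho = lambda x: np.sin(np.pi * x[:, 0]) * x[:, 1]  # hypothetical smooth regression function
y = f_rho(X) + 0.1 * rng.standard_normal(m)          # noisy observations

# Empirical least-squares minimizer over the polynomial space.
Phi = poly_features(X, degree)
coef, *_ = np.linalg.lstsq(Phi, y, rcond=None)

# Monte Carlo estimate of the squared L2 error against f_rho on fresh samples.
X_test = rng.uniform(-1, 1, size=(10000, d))
err = np.mean((poly_features(X_test, degree) @ coef - f_rho(X_test)) ** 2)
print(f"estimated squared L2 error of the polynomial estimator: {err:.4f}")

Rerunning this with growing m (and a degree that grows slowly with m) is the empirical counterpart of the convergence-rate question the paper studies analytically.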

Keywords: learning theory; covering number; rate of convergence; entropy number

Classification: O212.1 [Science: Probability Theory and Mathematical Statistics]; TP18 [Science: Mathematics]

 
