Adaptive learning with guaranteed stability for discrete-time recurrent neural networks (Cited by: 1)


Authors: 邓华, 吴义虎, 段吉安

Affiliations: [1] School of Mechanical and Electrical Engineering, Central South University; [2] School of Automobile and Mechanical Engineering, Changsha University of Science and Technology

Source: Journal of Central South University of Technology, 2007, No. 5, pp. 685-689 (5 pages)

Funding: Project (50276005) supported by the National Natural Science Foundation of China; Projects (2006CB705400, 2003CB716206) supported by the National Basic Research Program of China

Abstract: To avoid unstable learning, a stable adaptive learning algorithm was proposed for discrete-time recurrent neural networks. Unlike dynamic gradient methods such as backpropagation through time and real-time recurrent learning, the proposed algorithm updates the network weights online according to Lyapunov stability theory, so the stability of the learning process is guaranteed. By using the inverse of the activation function of the recurrent neural networks, the algorithm can be easily implemented for various nonlinear adaptive learning problems, and fast convergence of the adaptive learning process is achieved. Simulation experiments in pattern recognition show that, with the proposed algorithm, only 5 iterations are needed to store a 15×15 binary image pattern and only 9 iterations are needed to realize an analog vector exactly as an equilibrium state.
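
The abstract does not spell out the update rule, but the combination of activation-function inversion and a Lyapunov-based online weight update suggests an equilibrium-training scheme of the following kind. The sketch below is a minimal illustration under assumptions of ours, not the paper's exact algorithm: it assumes a tanh activation, a discrete-time network x(k+1) = tanh(W x(k)), and a normalized error-driven update; the function name train_equilibrium and all parameter values are hypothetical.

import numpy as np

def train_equilibrium(x_star, eta=1.0, c=1e-6, tol=1e-10, max_iters=100):
    # Adapt W of the network x(k+1) = tanh(W x(k)) so that the target
    # pattern x_star becomes an equilibrium state: x_star = tanh(W x_star).
    # Using the inverse activation, this is equivalent to W x_star = arctanh(x_star),
    # so W is updated online to drive e = arctanh(x_star) - W x_star to zero.
    # The normalized step (0 < eta < 2) admits a Lyapunov-style argument that
    # the error norm is non-increasing, which is the kind of stability
    # guarantee the abstract refers to (details here are an assumption).
    n = x_star.size
    target = np.arctanh(x_star)          # inverse activation of the target pattern
    W = np.zeros((n, n))                 # initial weights
    for k in range(max_iters):
        e = target - W @ x_star          # equilibrium error
        if np.linalg.norm(e) < tol:
            return W, k                  # converged: x_star is an equilibrium
        # normalized online update; dividing by ||x_star||^2 + c bounds the step
        W += eta * np.outer(e, x_star) / (c + x_star @ x_star)
    return W, max_iters

# Example: store a scaled +/-1 pattern (entries inside (-1, 1) so arctanh is defined),
# e.g. a 15x15 binary image flattened to a 225-dimensional vector.
rng = np.random.default_rng(0)
pattern = 0.9 * np.sign(rng.standard_normal(225))
W, iters = train_equilibrium(pattern)
residual = np.linalg.norm(np.tanh(W @ pattern) - pattern)
print(f"converged in {iters} iterations, equilibrium residual {residual:.2e}")

In this sketch the stored pattern is recovered by iterating the network from a nearby initial state until it settles at the equilibrium; the reported iteration counts (5 and 9) are the paper's experimental results and are not reproduced by this illustrative code.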

Keywords: recurrent neural networks; adaptive learning; nonlinear discrete-time systems; pattern recognition

CLC number: TP183 [Automation and Computer Technology: Control Theory and Control Engineering]
