Evolution Equation Learning Algorithm in the Interconnection Weight (Synaptic Coupling) Space of Artificial Neural Networks

EVOLUTION EQUATION LEARNING ALGORITHM OF ARTIFICIAL NEURAL NETWORKS


Authors: Gu Yuqiao (顾玉巧)[1], Zhou Changsong (周昌松)[1], Huang Wuqun (黄五群)[1], Chen Tianlun (陈天伦)

Affiliation: [1] Department of Physics, Nankai University, Tianjin 300071, China

Source: Acta Scientiarum Naturalium Universitatis Nankaiensis (Journal of Nankai University, Natural Science Edition), 1997, No. 2, pp. 31-35, 47 (6 pages)

Funding: Supported by the National Natural Science Foundation of China and the National Climbing Project on Nonlinear Science

Abstract: In this paper the interconnection weights (synaptic couplings) of an artificial neural network are viewed as generalized spin variables, so that learning becomes an optimization problem in the coupling space. The continuous-time dynamical equations usually written on the configuration (pattern) space of the network are extended to the coupling space, and a mechanism similar to the Metropolis Monte Carlo algorithm is introduced into these equations to improve their optimization ability, yielding an evolution equation learning algorithm for artificial neural networks. With this algorithm the system largely escapes the trap of local extrema of the energy function and reaches optimal or near-optimal couplings. The paper concentrates on single-layer recurrent (feedback) networks and takes the energy function on the coupling space to be quadratic. The critical storage capacity α_c of networks whose couplings are obtained with the algorithm is studied. Computer simulations show that when the number of neurons N is small the results are close to those of the pseudo-inverse model, and that the storage capacity decreases slowly as N increases.
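The following is a minimal, illustrative sketch (in Python) of the kind of coupling-space search described in the abstract: the interconnection weights J_ij are treated as generalized spin variables and updated with a Metropolis-like acceptance rule so as to lower a quadratic energy function on the coupling space. The particular energy E(J) = sum over mu,i of (sum_j J_ij xi^mu_j - xi^mu_i)^2, the discrete single-coupling moves, and all parameter values (N, p, beta, step_size) are assumptions made for illustration; they are not the paper's actual evolution equations.

import numpy as np

# Hedged sketch: Metropolis-like search over the coupling (synaptic) space of a
# single-layer recurrent network with a quadratic energy function.
# The energy form, move proposal, and parameters below are illustrative assumptions.

rng = np.random.default_rng(0)

N = 20           # number of neurons (illustrative)
p = 4            # number of stored patterns (illustrative)
beta = 5.0       # inverse "temperature" of the Metropolis-like acceptance
steps = 20000
step_size = 0.05

xi = rng.choice([-1.0, 1.0], size=(p, N))   # random binary patterns to store

def energy(J):
    """Quadratic cost on coupling space: squared error of the local fields."""
    fields = xi @ J.T            # h^mu_i = sum_j J_ij * xi^mu_j
    return np.sum((fields - xi) ** 2)

J = rng.normal(scale=0.1, size=(N, N))
np.fill_diagonal(J, 0.0)         # no self-coupling
E = energy(J)

for _ in range(steps):
    # propose a small random change of one coupling (a "generalized spin")
    i, j = rng.integers(N), rng.integers(N)
    if i == j:
        continue
    dJ = step_size * rng.normal()
    J[i, j] += dJ
    E_new = energy(J)
    # Metropolis-like rule: always accept downhill moves, sometimes uphill ones
    if E_new > E and rng.random() >= np.exp(-beta * (E_new - E)):
        J[i, j] -= dJ            # reject: undo the move
    else:
        E = E_new

print("final energy:", E)

In the paper itself the search is formulated as continuous-time evolution equations on the coupling space rather than as discrete Monte Carlo moves; the sketch only conveys the Metropolis-like mechanism for escaping local minima.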

Keywords: neural networks; evolution equations; interconnection weight space; learning algorithm

Classification: TP18 [Automation and Computer Technology - Control Theory and Control Engineering]

 
