A Hybrid Algorithm Based on Attention Model  (Cited by: 5)

Authors: 杨博 (Yang Bo) [1], 苏小红 (Su Xiaohong) [1], 王亚东 (Wang Yadong) [1]

Affiliation: [1] School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, Heilongjiang, China

Source: Journal of Software (软件学报), 2005, No. 6, pp. 1073-1080 (8 pages)

Funding: National Natural Science Foundation of China

Abstract: To address the slow convergence of the conventional BP (back-propagation) algorithm and the poor performance of the networks it trains, a fast hybrid algorithm based on the attention model (HAAM) is proposed. Drawing on the selective-attention model from physiology, it combines the genetic algorithm (GA) with a BP algorithm that uses a magnified error signal. The key idea is to partition a single BP training run into many small chips, each trained by magnified-error BP and then subjected to competitive elimination: chips in the same iteration are optimized by GA operators, while chips in successive iterations make up the whole training process. By identifying individuals that converge quickly and filtering out individuals trapped in local minima, the algorithm maintains a high training success rate, approaches the globally optimal region quickly, and is easy to parallelize. Simulation results show that the algorithm effectively avoids the training failures caused by random initialization of weights and thresholds, overcomes the slow late-stage training caused by flat spots (saturation regions) where the error signal becomes too small, and improves the generalization of the BP network by raising training precision rather than by adding hidden-layer neurons, giving neural networks broader applicability to many practical classification problems.
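The abstract describes the algorithm only at a high level; the following Python sketch shows one possible reading of it on a toy XOR task, not the authors' implementation. A population of small networks is trained in short BP "chips" with a magnified output error, and after each chip a GA-style competition keeps the individuals with the lowest (and fastest-falling) error while replacing stagnating ones with perturbed copies of the current best. All names and hyperparameters here (chip length, magnification factor, population size, mutation noise) are illustrative assumptions rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy XOR task standing in for the paper's benchmarks.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def init_net(hidden=4):
    # One GA individual: the weights and biases of a 2-input, 1-output MLP.
    return {"W1": rng.normal(0, 1, (2, hidden)), "b1": np.zeros(hidden),
            "W2": rng.normal(0, 1, (hidden, 1)), "b2": np.zeros(1)}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(net, X):
    h = sigmoid(X @ net["W1"] + net["b1"])
    return h, sigmoid(h @ net["W2"] + net["b2"])

def mse(net):
    return float(np.mean((Y - forward(net, X)[1]) ** 2))

def chip_train(net, epochs=50, lr=0.5, magnification=2.0):
    # One training "chip": a short run of BP with a magnified error signal.
    # The magnification factor scales the output-layer delta to push units out
    # of the flat (saturated) part of the sigmoid; it is an assumed stand-in
    # for the paper's magnified-error BP rule.
    for _ in range(epochs):
        h, y = forward(net, X)
        delta_out = magnification * (Y - y) * y * (1 - y)  # plain BP omits the factor
        delta_hid = (delta_out @ net["W2"].T) * h * (1 - h)
        net["W2"] += lr * h.T @ delta_out
        net["b2"] += lr * delta_out.sum(axis=0)
        net["W1"] += lr * X.T @ delta_hid
        net["b1"] += lr * delta_hid.sum(axis=0)
    return net

def hybrid_train(population_size=8, chips=20):
    # Alternate short BP chips with a GA-style competition: individuals whose
    # error is lowest (and fell fastest within the chip) survive; stagnating
    # ones, taken as stuck in local minima or flat spots, are replaced by
    # perturbed copies of the current best individual.
    pop = [init_net() for _ in range(population_size)]
    for _ in range(chips):
        before = [mse(net) for net in pop]
        pop = [chip_train(net) for net in pop]
        after = [mse(net) for net in pop]
        order = sorted(range(population_size),
                       key=lambda i: (after[i], after[i] - before[i]))
        best = pop[order[0]]
        for i in order[population_size // 2:]:  # eliminate the worse half
            pop[i] = {k: v + rng.normal(0, 0.1, v.shape) for k, v in best.items()}
    return min(pop, key=mse)

if __name__ == "__main__":
    net = hybrid_train()
    print("final MSE:", mse(net))
    print("predictions:", forward(net, X)[1].ravel().round(3))
```

The competition step is what distinguishes this hybrid from simply restarting BP with new random weights: an individual that has fallen into a flat spot or a local minimum is detected by its lack of progress within a chip and is recycled near a better solution instead of being trained further.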

Keywords: BP algorithm; artificial neural network; attention model; genetic algorithm; saturation region (flat spot); local minimum

Classification code: TP18 [Automation and Computer Technology: Control Theory and Control Engineering]

 
