Affiliations: [1] College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124, China; [2] School of Electronic and Information Engineering, Liaoning Technical University, Huludao, Liaoning 125105, China
Source: CAAI Transactions on Intelligent Systems (《智能系统学报》), 2011, No. 4, pp. 312-317 (6 pages)
Funding: National Natural Science Foundation of China (60873043); National "863" Program (2009AA04Z155); Beijing Natural Science Foundation (4092010); Doctoral Fund of the Ministry of Education (PHR201006103)
Abstract: Most feed-forward neural network structure design algorithms adopt a greedy search strategy and are therefore prone to becoming trapped in locally optimal structures. To address this problem, an adaptive structure design algorithm for feed-forward neural networks is proposed. During network training, the algorithm applies an adaptive optimization strategy that merges and splits hidden nodes in order to arrive at an optimal network structure. In the merge operation, hidden nodes whose outputs are linearly correlated are merged according to a mutual information criterion; in the split operation, a mutation coefficient is introduced to help the network escape locally optimal structures. The weight adjustment performed after merging and splitting is combined with the network's learning of the training samples, which reduces the number of passes over the samples, increases the learning speed, and improves the generalization performance. Results on non-linear function approximation show that the proposed algorithm achieves smaller testing errors and yields a compact final network structure.
Classification: TP183 [Automation and Computer Technology: Control Theory and Control Engineering]
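The abstract does not give implementation details of the merge step, but the idea of collapsing hidden units with redundant (linearly correlated) outputs can be sketched roughly as below. This is a minimal illustration, not the paper's algorithm: it uses plain linear correlation as a stand-in for the mutual-information criterion, and the names `H`, `W_out`, `merge_correlated_hidden_units`, and the threshold value are assumptions introduced here; the split operation, the mutation coefficient, and the joint weight adjustment during training are not covered.

```python
import numpy as np

def merge_correlated_hidden_units(H, W_out, threshold=0.95):
    """Illustrative sketch: fold each hidden unit into an earlier unit whose
    output on the training set is (almost) proportional to it, then drop the
    redundant unit. Correlation is used here as a simple proxy for the
    mutual-information criterion described in the paper.

    H     : (n_samples, n_hidden) hidden-layer outputs on the training set
    W_out : (n_hidden, n_outputs) hidden-to-output weights
    """
    corr = np.corrcoef(H, rowvar=False)        # pairwise linear correlation of unit outputs
    std = H.std(axis=0) + 1e-12
    keep, W_new = [], W_out.copy()

    for j in range(H.shape[1]):
        # Find an already-kept unit that carries (nearly) the same information.
        target = next((i for i in keep if abs(corr[i, j]) >= threshold), None)
        if target is None:
            keep.append(j)                     # no redundant partner: keep unit j
        else:
            # h_j ≈ a * h_i with a = corr * std_j / std_i (ignoring any constant
            # offset, which could be absorbed into the output bias), so routing
            # unit j's outgoing weights through unit i roughly preserves the output.
            a = corr[target, j] * std[j] / std[target]
            W_new[target, :] += a * W_new[j, :]

    return keep, W_new[keep, :]
```

In the paper itself, the merge criterion is mutual information rather than correlation, and merging is interleaved with training so that the weight adjustment after each merge is part of the normal learning of the samples; the sketch above only shows the structural reduction step in isolation.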