Authors: 张池平 [1], 唐蕾 [1], 苏小红 [2], 马培军 [2]
Affiliations: [1] Department of Mathematics, Harbin Institute of Technology, Harbin 150001, Heilongjiang, China; [2] School of Computer Science, Harbin Institute of Technology, Harbin 150001, Heilongjiang, China
Source: Computer Simulation (《计算机仿真》), 2008, No. 4, pp. 172-174 and 209 (4 pages)
Fund: Supported by the National Natural Science Foundation of China (Grant No. 10672044)
Abstract: The traditional BP neural network learning algorithm suffers from slow convergence, low accuracy, susceptibility to local minima, and instability. The DFP algorithm, a classical quasi-Newton method in optimization theory, offers superlinear convergence speed and global convergence. However, the plain DFP algorithm is numerically unstable and tends to fail on large-scale network training problems; when the algorithm enters a saturated (flat-spot) region close to a minimum, it can produce overflow errors. By magnifying the weight-update vector and the gradient-update vector, improving the computation of the approximate inverse Hessian matrix, and combining the method with a line search and the L-M (Levenberg-Marquardt) algorithm, the improved method gains stability and no longer fails, while retaining fast learning and high accuracy. Compared with the L-M algorithm, currently the most widely used BP learning algorithm, the improved DFP algorithm achieves the same learning speed with less computation and higher accuracy, and is better suited to large-residual problems.
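For reference, the baseline the abstract builds on is the standard (textbook) DFP inverse-Hessian update; a minimal sketch of that update is given below, assuming NumPy. The function name dfp_update and the small-denominator guard are illustrative assumptions only; they are not the paper's modification (magnified update vectors, line search, and the L-M combination), and merely show where the plain update becomes fragile.

```python
import numpy as np

def dfp_update(H, s, y, eps=1e-12):
    """Standard DFP update of the inverse-Hessian approximation H.

    s : change in the weight vector   (w_new - w_old)
    y : change in the gradient vector (g_new - g_old)
    The guard on the two denominators only illustrates where the plain
    update is numerically fragile (e.g. in flat regions near a minimum);
    it is not the remedy proposed in the paper.
    """
    sy = float(s @ y)          # curvature term s^T y
    Hy = H @ y
    yHy = float(y @ Hy)        # y^T H y
    if abs(sy) < eps or abs(yHy) < eps:
        return H               # skip the update when a denominator vanishes
    # H_{k+1} = H_k + s s^T / (s^T y) - (H y)(H y)^T / (y^T H y)
    return H + np.outer(s, s) / sy - np.outer(Hy, Hy) / yHy
```

In a training loop the search direction would be -H @ g with a step size chosen by a line search, which is the point at which the paper couples the DFP update with the line search and the L-M algorithm.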