Author: 张代远 (Zhang Daiyuan) [1]
Affiliation: [1] College of Computer, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210003, China
Source: Systems Engineering and Electronics, 2006, No. 6, pp. 929-932 (4 pages)
Abstract: To improve learning speed, a novel method is proposed for initializing the parameters (weights) of complex-valued neural networks. The initial weights are not randomly preassigned but computed. Specifically, a class of hidden-layer activation functions (quasi-compact-support functions) is chosen, and the complex-valued weights between the input and hidden layers are calculated so that the output matrix of the hidden layer is guaranteed to be full rank; the existence of such a full-rank matrix is proved theoretically. Using this full-rank matrix, the complex-valued weights between the hidden and output layers are found by the least mean square algorithm. These weights serve as the initial weights, and the network is then trained with the steepest descent method. Because the initial weights are optimized, the algorithm significantly improves both the training speed and the computational accuracy of complex-valued neural networks. In the special case where the number of hidden neurons equals the number of training patterns, a global minimum with zero cost function value is obtained. Computer simulations confirm the effectiveness of the algorithm.
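The initialization pipeline described in the abstract (build a full-rank hidden output matrix, solve a least-squares problem for the hidden-to-output weights, then use those as starting weights) can be sketched in Python. This is a minimal illustration under stated assumptions: the paper derives the input-to-hidden weights from a special class of activation functions to guarantee full rank, whereas this sketch draws them at random (generically full rank) as a stand-in; the function name `init_complex_weights` and the complex `tanh` activation are illustrative choices, not the paper's exact construction.

```python
import numpy as np

def init_complex_weights(X, Y, n_hidden, rng=None):
    """Least-squares initialization sketch for a complex-valued network.

    X: (n_samples, n_in) complex inputs; Y: (n_samples, n_out) complex targets.
    NOTE: the paper computes the input->hidden weights from a special class of
    activation functions so the hidden output matrix H is provably full rank;
    random complex weights are used here only as a placeholder (assumption).
    """
    rng = np.random.default_rng(rng)
    n_in = X.shape[1]
    # Random complex input->hidden weights (placeholder for the paper's construction).
    W_in = (rng.standard_normal((n_in, n_hidden))
            + 1j * rng.standard_normal((n_in, n_hidden)))
    # Hidden output matrix; np.tanh supports complex arguments.
    H = np.tanh(X @ W_in)
    # Least-squares solve for the hidden->output weights, as in the paper.
    W_out, *_ = np.linalg.lstsq(H, Y, rcond=None)
    return W_in, W_out
```

These two weight matrices would then seed steepest-descent training. When `n_hidden` equals the number of training patterns and `H` is nonsingular, the least-squares solve is exact, matching the abstract's zero-cost global-minimum special case.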
Classification codes: TN183 [Electronics and Telecommunications — Physical Electronics]; TP181 [Automation and Computer Technology — Control Theory and Control Engineering]