Author: 戴宪华 [1]
Source: Journal of Electronics & Information Technology (《电子与信息学报》), 2002, No. 1, pp. 45-53 (9 pages)
Funding: National Natural Science Foundation of China (69872021); Natural Science Foundation of Guangdong Province (980438)
Abstract: Because the output of a nonlinear system is a nonlinear function of its parameters, it is generally difficult to identify a two-layer feedforward neural network (FNN) directly from higher-order cumulants. To address this, the paper proposes two FNN identification methods based on fourth-order cumulants. In the first method, each hidden unit of the FNN is approximated by multiple linear systems over its input space, so that the whole FNN can be re-expressed as a statistical model, a mixture-of-experts (ME) network; with this ME model, the FNN parameters are estimated by the expectation-maximization (EM) algorithm. In the second method, hidden observations are introduced to simplify the statistical model of the two-layer FNN: based on estimates of these hidden observations, training the whole FNN decomposes into training a set of single hidden neurons, and the network is remodeled as a simplified two-level hierarchical ME. Building on the single-neuron parameter estimates, the whole FNN is then estimated by a simplified algorithm with a faster convergence speed.
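The two ingredients the abstract names, fourth-order cumulants and EM estimation of a mixture model, can be illustrated with a minimal sketch. This is not the paper's algorithm: the sketch below estimates only the zero-lag fourth-order cumulant of a scalar signal, and its EM routine fits a two-component mixture of linear regressions with a constant mixing weight rather than the input-dependent gate of a full ME network; all data and variable names are synthetic assumptions.

```python
import numpy as np

def fourth_order_cumulant(x):
    """Zero-lag fourth-order cumulant of a signal (mean removed):
    c4 = E[x^4] - 3 (E[x^2])^2.  Vanishes for Gaussian data."""
    x = np.asarray(x) - np.mean(x)
    m2 = np.mean(x ** 2)
    m4 = np.mean(x ** 4)
    return m4 - 3.0 * m2 ** 2

def em_linear_mixture(x, y, n_iter=30):
    """EM for a two-component mixture of linear regressions
    y ~ w_k * x + b_k + noise.  A cut-down stand-in for an ME model:
    the gate here is a constant mixing weight pi, not input-dependent."""
    b = np.percentile(y, [25.0, 75.0])       # crude intercept init
    w = np.zeros(2)
    pi = np.array([0.5, 0.5])
    s2 = np.full(2, y.var())
    for _ in range(n_iter):
        # E-step: posterior responsibility of each expert for each sample
        resid = y[:, None] - (np.outer(x, w) + b)          # shape (n, 2)
        dens = pi * np.exp(-0.5 * resid ** 2 / s2) / np.sqrt(2 * np.pi * s2)
        r = dens / (dens.sum(axis=1, keepdims=True) + 1e-300)
        # M-step: weighted least squares for each expert
        for k in range(2):
            rk, sw = r[:, k], r[:, k].sum()
            xm, ym = (rk * x).sum() / sw, (rk * y).sum() / sw
            w[k] = (rk * (x - xm) * (y - ym)).sum() / (rk * (x - xm) ** 2).sum()
            b[k] = ym - w[k] * xm
            s2[k] = (rk * (y - w[k] * x - b[k]) ** 2).sum() / sw
        pi = r.mean(axis=0)
    return w, b, pi

# Synthetic demo: Gaussian data has (near-)zero fourth-order cumulant,
# and EM recovers two linear regimes from their mixture.
rng = np.random.default_rng(0)
c4_gauss = fourth_order_cumulant(rng.standard_normal(200_000))
x = rng.uniform(-1.0, 1.0, 500)
labels = rng.integers(0, 2, 500)
y = np.where(labels == 0, 2 * x + 3, -x - 3) + 0.1 * rng.standard_normal(500)
w, b, pi = em_linear_mixture(x, y)
```

For well-separated regimes like these, the quantile-based intercept init lets the responsibilities separate in the first E-step, after which the per-expert weighted least-squares fits converge in a few iterations.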
Keywords: cumulants; feedforward neural network; blind identification; ME model; EM algorithm
Classification: TP183 (Automation and Computer Technology: Control Theory and Control Engineering)