Affiliation: [1] School of Mechanical Engineering, Nanjing University of Science and Technology, Nanjing 210094, China; [2] Ordnance Technology Research Institute, Shijiazhuang 050000, China
Source: Journal of Electronics & Information Technology, 2014, No. 6, pp. 1307-1311 (5 pages)
Fund: Supported by the National Natural Science Foundation of China (51175266/E050604)
Abstract: To solve the model-fitting problem with samples of different confidence levels, this paper proposes a Neural-Network (NN)-based twice-learning method. It points out that the real model is a variation of the experimental model, and that the neural network approximating the mathematical expectation of the real model is the best network for fusing the information of the prior samples and the real samples. In the first learning stage, the neural network is trained on the prior samples only, and the error capacity intervals of the soft points, which are determined by the information of the hard points, are calculated. In the second stage, both the prior samples and the real samples are used as training samples, and the errors of the input/target pairs during NN training are modified using the soft-point error capacity intervals and the hard-point error-sensitivity coefficients. The second learning yields a combined network that fits the real samples accurately while making maximal use of the information in the prior samples. Compared with the Knowledge-Based Neural Network (KBNN), this method is simpler, more controllable, and has a clearer logical interpretation.
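The abstract outlines a two-stage ("twice") learning procedure: a first pass trained on the prior (soft) samples only, computation of soft-point error capacity intervals from hard-point information, and a second pass over both sample sets with modified per-sample errors. The sketch below is a minimal, hypothetical illustration of that flow in plain NumPy; the rule used to derive the capacity intervals, the dead-band treatment of soft-point errors, and the hard-point sensitivity coefficient beta are assumptions standing in for the paper's actual definitions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the twice-learning idea, assuming:
#   - soft points = prior/experimental samples (lower confidence)
#   - hard points = real samples (higher confidence)
#   - a soft point's error capacity interval widens with its distance to the
#     nearest hard point (hypothetical rule)
#   - hard-point errors are amplified by a sensitivity coefficient `beta`
import numpy as np

rng = np.random.default_rng(0)

def init_net(n_hidden=16):
    """One-hidden-layer MLP with tanh activation."""
    return {"W1": rng.normal(0, 0.5, (1, n_hidden)), "b1": np.zeros(n_hidden),
            "W2": rng.normal(0, 0.5, (n_hidden, 1)), "b2": np.zeros(1)}

def forward(net, x):
    h = np.tanh(x @ net["W1"] + net["b1"])
    return h @ net["W2"] + net["b2"], h

def train(net, x, y, err_fn, lr=0.05, epochs=3000):
    """Backpropagation driven by (possibly modified) per-sample errors."""
    for _ in range(epochs):
        y_hat, h = forward(net, x)
        err = err_fn(y_hat, y)                       # raw or modified error
        dh = (err @ net["W2"].T) * (1 - h ** 2)      # backprop through tanh
        net["W2"] -= lr * h.T @ err / len(x)
        net["b2"] -= lr * err.mean(axis=0)
        net["W1"] -= lr * x.T @ dh / len(x)
        net["b1"] -= lr * dh.mean(axis=0)
    return net

# Soft (prior/experimental) and hard (real) samples of a toy 1-D model.
x_soft = np.linspace(-1, 1, 40).reshape(-1, 1)
y_soft = np.sin(3 * x_soft) + rng.normal(0, 0.05, x_soft.shape)   # experimental model
x_hard = np.array([[-0.7], [0.0], [0.8]])
y_hard = np.sin(3 * x_hard) + 0.3                                  # "real" model variation

# First learning: prior samples only.
net = train(init_net(), x_soft, y_soft, err_fn=lambda yh, y: yh - y)

# Error capacity interval of each soft point, derived from hard-point positions
# (hypothetical stand-in for the paper's hard-point-based definition).
d = np.min(np.abs(x_soft - x_hard.T), axis=1, keepdims=True)
capacity = 0.1 + 0.5 * d                     # allowed |error| band per soft point

# Second learning: both sample sets, with modified input/target-pair errors.
x_all, y_all = np.vstack([x_soft, x_hard]), np.vstack([y_soft, y_hard])
n_soft, beta = len(x_soft), 5.0

def modified_error(y_hat, y):
    err = y_hat - y
    soft = err[:n_soft]
    # penalize only the part of a soft-point error that exceeds its capacity band
    err[:n_soft] = np.sign(soft) * np.maximum(np.abs(soft) - capacity, 0.0)
    err[n_soft:] *= beta                     # hard-point error-sensitivity coefficient
    return err

net = train(net, x_all, y_all, err_fn=modified_error)
```

In this sketch, zeroing soft-point errors that stay inside their capacity band lets the second pass pull the fit toward the hard (real) samples wherever the prior data disagree with them, while the prior samples still anchor the network elsewhere; this is one plausible reading of "modifying the input/target-pair errors" in the abstract.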
Keywords: neural network; model fitting; Knowledge-Based Neural Network (KBNN); prior knowledge
CLC Number: TP183 [Automation and Computer Technology - Control Theory and Control Engineering]