Authors: 于薇薇 (Yu Weiwei) [1], 闫杰 (Yan Jie) [1], C. Sabourin, K. Madani
Affiliations: [1] Institute of Flight Control and Simulation, School of Astronautics, Northwestern Polytechnical University, Xi'an, Shaanxi 710072, China; [2] LISSI Laboratory, University of Paris XII
Source: Journal of Northwestern Polytechnical University (《西北工业大学学报》), 2008, No. 6, pp. 732-737 (6 pages)
Abstract: The CMAC neural network offers a simple learning algorithm, fast convergence, and local generalization, and is widely applied in robot control, signal processing, pattern recognition, and adaptive control. However, training the network requires a large number of memory cells, so the selection of optimal structural parameters is an important problem in CMAC network design. Through a study of function approximation problems, this paper shows how the quantization resolution and the generalization parameter affect the network's approximation quality. Simulation results indicate that adjusting the structural parameters achieves the minimum approximation error, and that optimizing the network structure not only saves training time but also greatly reduces the number of memory cells.

Extended abstract (English): Aim. To our knowledge, no papers in the open literature address optimizing structural parameters in order to reduce memory size and save training time; we present the results of such an optimization study. The full paper explains our research in detail; this abstract adds some pertinent remarks on its first two sections. Section 1 describes the CMAC neural network structure. Section 2 covers the CMAC structural parameters and some function approximation problems. In subsection 2.1, we study the two structural parameters, quantization step length and generalization, and discuss how they influence the approximation quality of the CMAC neural network. In subsection 2.2, we study some function approximation problems and error measures: sub-subsection 2.2.1 gives two function approximation examples, and sub-subsection 2.2.2 calculates the corresponding approximation error measures. Finally, we performed computer simulations, whose results are given in Tables 1 through 3 and Figs. 6 and 7. These results show preliminarily that our optimization method can not only greatly decrease memory size but also save training time.
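To make the two structural parameters concrete, the following is a minimal sketch of a one-dimensional CMAC approximating a test function. It is not the paper's implementation: the names `n_levels` (quantization resolution) and `c` (generalization parameter, i.e. number of cells activated per input), the learning rate, and the test function sin(x) are all illustrative assumptions. The sketch does show the trade-off the abstract describes: the weight memory grows with `n_levels + c - 1`, so coarser quantization shrinks memory at the cost of approximation accuracy.

```python
import numpy as np

class CMAC1D:
    """Illustrative 1-D CMAC sketch (not the paper's implementation).

    n_levels : quantization resolution (number of input levels)
    c        : generalization parameter (cells activated per input)
    """
    def __init__(self, x_min, x_max, n_levels, c, lr=0.5):
        self.x_min, self.x_max = x_min, x_max
        self.n_levels = n_levels
        self.c = c
        self.lr = lr                      # LMS learning rate
        # memory size depends on both structural parameters
        self.w = np.zeros(n_levels + c - 1)

    def _active_cells(self, x):
        # quantize x to an integer level, then activate c consecutive cells
        q = int((x - self.x_min) / (self.x_max - self.x_min) * (self.n_levels - 1))
        q = min(max(q, 0), self.n_levels - 1)
        return range(q, q + self.c)

    def predict(self, x):
        # output is the sum of the c active weights (local generalization)
        return sum(self.w[i] for i in self._active_cells(x))

    def train(self, x, target):
        # classic CMAC/LMS update: spread the correction over the c cells
        err = target - self.predict(x)
        for i in self._active_cells(x):
            self.w[i] += self.lr * err / self.c

# approximate f(x) = sin(x) on [0, 2*pi]
net = CMAC1D(0.0, 2 * np.pi, n_levels=64, c=8)
xs = np.linspace(0.0, 2 * np.pi, 200)
for _ in range(50):
    for x in xs:
        net.train(x, np.sin(x))
max_err = max(abs(net.predict(x) - np.sin(x)) for x in xs)
```

Because neighboring inputs share `c - 1` of their `c` active cells, training one point also adjusts the output for nearby points, which is the "local generalization" property the abstract credits for the network's fast convergence.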
CLC number: TP183 [Automation and Computer Technology — Control Theory and Control Engineering]