Authors: 李明爱[1], 焦利芳[1], 郝冬梅[1], 乔俊飞[1]
Affiliation: [1] College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100022, China
Source: Journal of System Simulation (《系统仿真学报》), 2008, Issue 24, pp. 6683-6685, 6690 (4 pages)
Funding: National Natural Science Foundation of China (60674066; 3067054); start-up fund (科博启动基金) (52002011200702)
Abstract: To address the slow convergence of the standard Q-learning algorithm, a reinforcement learning method based on multiple parallel Cerebellar Model Articulation Controller (CMAC) neural networks was proposed. The input state variables were partitioned so that the number of quantization levels per variable is reduced without lowering the state resolution, which effectively shrinks the CMAC storage space. The reduced-storage CMACs were combined with Q-learning, and their outputs were used to approximate the Q-values of the corresponding state variables. As a result, the learning speed and control precision of Q-learning were improved simultaneously, and generalization over continuous states was achieved. The method was applied to the balance control of a linear inverted pendulum, and simulation results demonstrate its correctness and effectiveness.
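The following is a minimal sketch (not the authors' code) of the idea described in the abstract: Q-learning whose value function is approximated by parallel CMAC (tile-coding) networks, each covering only a subset of the state variables. The state-variable split into (cart position, cart velocity) and (pole angle, pole angular velocity), the class names, and parameters such as n_bins and n_tilings are illustrative assumptions, not taken from the paper.

```python
import numpy as np

class CMAC:
    """One CMAC over a subset of state variables: n_tilings offset tilings,
    each quantizing its inputs into n_bins cells per dimension."""
    def __init__(self, lows, highs, n_bins=8, n_tilings=4):
        self.lows = np.asarray(lows, dtype=float)
        self.highs = np.asarray(highs, dtype=float)
        self.n_bins = n_bins
        self.n_tilings = n_tilings
        self.dim = len(lows)
        # One weight table per tiling, n_bins^dim cells each.
        self.w = np.zeros((n_tilings,) + (n_bins,) * self.dim)

    def _cells(self, s):
        """Index of the active cell in each tiling for state subset s."""
        ratio = (np.asarray(s, dtype=float) - self.lows) / (self.highs - self.lows)
        cells = []
        for t in range(self.n_tilings):
            offset = t / self.n_tilings              # diagonal tiling offset
            idx = np.floor(ratio * self.n_bins + offset).astype(int)
            idx = np.clip(idx, 0, self.n_bins - 1)
            cells.append((t,) + tuple(idx))
        return cells

    def value(self, s):
        return sum(self.w[c] for c in self._cells(s))

    def update(self, s, delta, alpha):
        for c in self._cells(s):
            self.w[c] += alpha / self.n_tilings * delta

class ParallelCMACQ:
    """Q(s, a) approximated by the sum of two parallel CMACs, one per
    state-variable group, with separate weights for each discrete action."""
    def __init__(self, n_actions=2):
        self.nets = [
            [CMAC([-2.4, -2.0], [2.4, 2.0]),         # cart position, velocity
             CMAC([-0.21, -2.0], [0.21, 2.0])]       # pole angle, angular velocity
            for _ in range(n_actions)
        ]

    def q(self, s, a):
        return self.nets[a][0].value(s[:2]) + self.nets[a][1].value(s[2:])

    def learn(self, s, a, r, s_next, done, alpha=0.1, gamma=0.99):
        """One Q-learning backup written through the CMAC weights."""
        target = r if done else r + gamma * max(self.q(s_next, b)
                                                for b in range(len(self.nets)))
        delta = target - self.q(s, a)
        self.nets[a][0].update(s[:2], delta, alpha)
        self.nets[a][1].update(s[2:], delta, alpha)
```

In this sketch, s would be the 4-dimensional cart-pole state and the two discrete actions would push the cart left or right. Splitting the state into two 2-D CMACs keeps each weight table at n_tilings x n_bins^2 entries instead of one 4-D table, which reflects the storage reduction the abstract describes.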
Classification code: TP391 [Automation and Computer Technology - Computer Application Technology]