Affiliations: [1] College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100022, China; [2] Department of Computer Science, Hefei University of Technology, Hefei 230009, Anhui, China; [3] Department of Automation, University of Science and Technology of China, Hefei 230027, Anhui, China
Source: Control Theory & Applications, 2006, No. 4, pp. 547-551 (5 pages)
Funding: National Natural Science Foundation of China (60274012); Doctoral Research Startup Fund of Beijing University of Technology (00194)
Abstract: Based on the theory of performance potentials and the method of equivalent Markov processes, a simulation optimization algorithm is proposed for a class of semi-Markov decision processes (SMDPs) under parameterized randomized stationary policies, and its convergence is briefly analyzed. First, a uniformized Markov chain is defined through the equivalent Markov process of the SMDP. Second, the gradient of the average-cost performance criterion with respect to the policy parameters is estimated from a single sample path of this uniformized Markov chain, so that an optimal (or suboptimal) randomized stationary policy can be found by iterating on the parameters. The algorithm uses an artificial neural network to approximate the parameterized randomized stationary policies, which saves memory, avoids the "curse of dimensionality", and makes the method suitable for performance optimization of systems with large state spaces. Finally, convergence of the algorithm with probability one along an infinite sample path is considered, and a simulation example is provided to illustrate its application.
Keywords: randomized stationary policy; equivalent Markov process; uniformized Markov chain; neuro-dynamic programming; simulation optimization
Classification: TP391.9 [Automation and Computer Technology — Computer Application Technology]
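
The abstract describes a policy-gradient simulation optimization scheme: simulate a single sample path of the uniformized Markov chain, estimate the gradient of the average cost with respect to the policy parameters, and update a randomized stationary policy parameterized by a neural network. The Python sketch below illustrates that general technique only; it is not the paper's algorithm. The SMDP transition rates, costs, network sizes, step sizes, and the likelihood-ratio (score-function) gradient estimator with an eligibility trace are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# --- Hypothetical SMDP data (illustrative only, not from the paper) ---
S, A = 4, 2                                  # number of states, actions
q = rng.uniform(0.1, 1.0, size=(S, A, S))    # transition rates q(s'|s,a)
for s in range(S):
    q[s, :, s] = 0.0                         # no self-loops in the rate matrix
c = rng.uniform(0.0, 1.0, size=(S, A))       # per-stage cost c(s,a)

# Uniformization: choose Lam >= every total exit rate, so the SMDP can be
# simulated as a discrete-time Markov chain with fictitious self-transitions.
Lam = 1.1 * q.sum(axis=2).max()

def step(s, a):
    """One transition of the uniformized Markov chain."""
    p = q[s, a] / Lam
    p[s] += 1.0 - p.sum()                    # leftover mass stays in s
    return rng.choice(S, p=p)

# --- Randomized stationary policy: small neural net + softmax ---
H = 8                                        # hidden width (assumed)
W1 = 0.1 * rng.standard_normal((H, S))
W2 = 0.1 * rng.standard_normal((A, H))

def policy(s):
    """Action probabilities pi(.|s) from a one-hidden-layer network."""
    x = np.eye(S)[s]                         # one-hot state encoding
    h = np.tanh(W1 @ x)
    z = W2 @ h
    p = np.exp(z - z.max())
    return p / p.sum(), h, x

def grad_log_pi(a, p, h, x):
    """Score function: gradient of log pi(a|s) w.r.t. (W1, W2)."""
    dz = -p
    dz[a] += 1.0                             # d log softmax / d logits
    gW2 = np.outer(dz, h)
    dh = (W2.T @ dz) * (1.0 - h ** 2)        # backprop through tanh
    gW1 = np.outer(dh, x)
    return gW1, gW2

# --- Single-sample-path policy-gradient loop (likelihood-ratio style) ---
alpha, beta, gamma = 0.01, 0.9, 0.01         # step size, trace decay, averaging
s, eta = 0, 0.0                              # current state, running average cost
zW1, zW2 = np.zeros_like(W1), np.zeros_like(W2)  # eligibility traces
for t in range(100_000):
    p, h, x = policy(s)
    a = rng.choice(A, p=p)                   # sample an action from pi(.|s)
    gW1, gW2 = grad_log_pi(a, p, h, x)
    zW1 = beta * zW1 + gW1                   # accumulate scores along the path
    zW2 = beta * zW2 + gW2
    d = c[s, a] - eta                        # cost relative to the baseline
    eta += gamma * d                         # online average-cost estimate
    W1 -= alpha * d * zW1                    # descend the estimated gradient
    W2 -= alpha * d * zW2
    s = step(s, a)

print("estimated average cost:", eta)

The eligibility trace accumulates score-function terms along the single path, the running average eta acts as a baseline that reduces the variance of the cost signal, and uniformization lets the continuous-time SMDP be simulated as a discrete-time chain with fictitious self-transitions, in the spirit of the equivalent-Markov-process construction the abstract refers to.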