Authors: Zhuo JIN, Hai-liang YANG, G. YIN
Affiliations: [1] Centre for Actuarial Studies, Department of Economics, The University of Melbourne; [2] Department of Statistics and Actuarial Science, The University of Hong Kong; [3] Department of Mathematics, Wayne State University
Source: Acta Mathematicae Applicatae Sinica (English Series), 2017, Issue 1, pp. 221-238 (18 pages)
Funding: Supported in part by an Early Career Research Grant and a Faculty Research Grant from The University of Melbourne; in part by the Research Grants Council of the Hong Kong Special Administrative Region (Project No. HKU 17330816) and a Society of Actuaries' Centers of Actuarial Excellence Research Grant; and in part by the U.S. Army Research Office under Grant W911NF-15-1-0218.
Abstract: This work focuses on numerical methods for finding optimal dividend payment and capital injection policies that maximize the present value of the difference between the cumulative dividend payments and the possible capital injections. By the dynamic programming principle, the value function satisfies a quasi-variational inequality (QVI). The state constraint of the impulsive control gives rise to a capital injection region with a free boundary. Since closed-form solutions are virtually impossible to obtain, we use Markov chain approximation techniques to construct a discrete-time controlled Markov chain that approximates the value function and the optimal controls. Convergence of the approximation algorithms is proved.
Keywords: control; singular control; dividend policy; capital injection; free boundary; Markov chain approximation
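To give a feel for the Markov chain approximation technique the abstract describes, the sketch below runs value iteration on a locally consistent discrete chain for a one-dimensional toy surplus process with dividend payments and costly capital injections at the zero boundary. All model parameters (drift `mu`, volatility `sigma`, discount rate `r`, proportional injection cost `K`) and the simplified boundary treatment are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def solve_dividend_qvi(mu=0.5, sigma=1.0, r=0.05, K=1.1,
                       x_max=10.0, N=50, tol=1e-6, max_iter=20_000):
    """Toy Markov chain approximation for a dividend/capital-injection
    problem. Illustrative sketch only: parameter values and boundary
    rules are assumptions, not the paper's model."""
    h = x_max / N
    # Locally consistent transition probabilities for dX = mu dt + sigma dW
    # (Kushner-Dupuis-style construction, upwind in the drift).
    Q = sigma**2 + h * abs(mu)
    dt = h**2 / Q                                  # interpolation interval
    p_up = (sigma**2 / 2 + h * max(mu, 0.0)) / Q
    p_dn = (sigma**2 / 2 + h * max(-mu, 0.0)) / Q
    disc = 1.0 / (1.0 + r * dt)                    # per-step discount factor

    V = np.zeros(N + 1)
    for _ in range(max_iter):
        V_new = np.empty_like(V)
        # Interior states: either diffuse (continuation) or pay one grid
        # increment h of dividends (singular control as a jump down).
        cont = disc * (p_up * V[2:] + p_dn * V[:-2])
        pay = h + V[:-2]
        V_new[1:-1] = np.maximum(cont, pay)
        # x = 0: inject capital at proportional cost K, or abandon (value 0).
        V_new[0] = max(0.0, V[1] - K * h)
        # x = x_max: surplus above the grid is paid out as dividends.
        V_new[-1] = h + V[-2]
        if np.max(np.abs(V_new - V)) < tol:
            V = V_new
            break
        V = V_new
    return V
```

Starting from zero, the iterates increase monotonically and stay below the supersolution `x + mu/r`, so the scheme converges; the computed value function is nondecreasing in the surplus level, as expected.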