Parametrized actor-critic algorithms for finite-horizon MDPs
Date(s) | 2007 |
---|---|
Abstract | Due to their non-stationarity, finite-horizon Markov decision processes (FH-MDPs) have one probability transition matrix per stage, so the curse of dimensionality affects FH-MDPs more severely than infinite-horizon MDPs. We propose two parametrized 'actor-critic' algorithms to compute optimal policies for FH-MDPs. Both algorithms use the two-timescale stochastic approximation technique, simultaneously performing gradient search in the parametrized policy space (the 'actor') on a slower timescale and learning the policy gradient (the 'critic') via a faster recursion. This is in contrast to methods whose critic recursions learn the cost-to-go proper. We show convergence w.p. 1 to a set satisfying the necessary conditions for constrained optima. The proposed parametrization is for FH-MDPs with compact action sets, although certain exceptions can be handled. Further, a third algorithm for stochastic control of stopping-time processes is presented. We explain why current policy-evaluation methods do not work as a critic for the proposed actor recursion. Simulation results from flow control in communication networks attest to the performance advantages of all three algorithms. (An illustrative sketch of the two-timescale update follows this record.) |
Format | application/pdf |
Identifier | http://eprints.iisc.ernet.in/26811/1/yas.pdf Abdulla, Mohammed Shahid and Bhatnagar, Shalabh (2007) Parametrized actor-critic algorithms for finite-horizon MDPs. In: American Control Conference 2007, JUL 09-13, 2007, New York. |
Publisher | IEEE |
Relation | http://ieeexplore.ieee.org/search/srchabstract.jsp?tp=&arnumber=4282587&queryText%3DParametrized+actor-critic+algorithms++for+finite-horizon++MDPs%26openedRefinements%3D*%26searchField%3DSearch+All http://eprints.iisc.ernet.in/26811/ |
Keywords | Computer Science & Automation (Formerly, School of Automation) |
Type | Conference Paper (peer-reviewed) |
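
The abstract describes the mechanics only at a high level, so the following is a purely illustrative Python sketch of a two-timescale stochastic-approximation actor-critic loop on a toy scalar FH-MDP. Everything concrete in it is an assumption: the dynamics `step`, the quadratic cost, the linear per-stage policy `u_h = theta[h] * x`, the SPSA-style finite-difference surrogate standing in for the critic, and the step-size schedules. It is not the paper's algorithm; it only shows the characteristic pattern of a fast recursion tracking a gradient estimate while a slower recursion updates the projected policy parameters.

```python
import numpy as np

# Illustrative two-timescale actor-critic sketch for a toy finite-horizon MDP.
# All problem details (dynamics, costs, policy form, step sizes) are
# hypothetical stand-ins; the paper's actual recursions and projection differ.

H = 5  # horizon (number of stages)
rng = np.random.default_rng(0)

def step(x, u, h):
    # Hypothetical stage-dependent scalar dynamics with noise; the drift
    # changes with the stage h, mimicking an FH-MDP's non-stationarity.
    return x + 0.1 * (h + 1) * u + 0.05 * rng.normal()

def cost(x, u):
    # Hypothetical per-stage cost, quadratic in state and action.
    return x ** 2 + 0.1 * u ** 2

def rollout(theta):
    # Simulate one trajectory under the parametrized policy u_h = theta[h] * x
    # (one parameter per stage) and return the accumulated cost.
    x, total = 1.0, 0.0
    for h in range(H):
        u = theta[h] * x
        total += cost(x, u)
        x = step(x, u, h)
    return total

theta = np.zeros(H)      # actor: one policy parameter per stage
grad_est = np.zeros(H)   # critic: running estimate of the policy gradient

for n in range(1, 20000):
    a_n = 1.0 / n            # slow (actor) step size
    b_n = 1.0 / n ** 0.6     # fast (critic) step size; a_n / b_n -> 0

    # Critic (fast timescale): track a policy-gradient estimate via
    # simultaneous +/-1 perturbations of all stage parameters (an SPSA-style
    # surrogate, used here only as a stand-in critic).
    delta = rng.choice([-1.0, 1.0], size=H)
    eps = 0.1
    dJ = rollout(theta + eps * delta) - rollout(theta - eps * delta)
    grad_est += b_n * (dJ / (2 * eps) * delta - grad_est)

    # Actor (slow timescale): descend along the estimated gradient, then
    # project back onto a compact parameter set (here, a simple box).
    theta = np.clip(theta - a_n * grad_est, -2.0, 2.0)

print("learned stage-wise gains:", np.round(theta, 3))
```

The essential two-timescale property in this sketch is that the actor step size decays faster than the critic's (a_n / b_n = n^{-0.4} -> 0), so the gradient estimate effectively equilibrates between successive actor moves, which is what lets the critic recursion learn the policy gradient rather than the cost-to-go.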