Dynamic programming for a Markov-switching jump–diffusion


Author(s): Azevedo, Nuno; Pinheiro, D.; Weber, G.-W.
Date(s)

09/01/2015

2014

Abstract

We consider an optimal control problem with a deterministic finite horizon and state variable dynamics given by a Markov-switching jump–diffusion stochastic differential equation. Our main results extend the dynamic programming technique to this larger family of stochastic optimal control problems. More specifically, we provide a detailed proof of Bellman’s optimality principle (or dynamic programming principle) and obtain the corresponding Hamilton–Jacobi–Bellman equation, which turns out to be a partial integro-differential equation due to the extra terms arising from the Lévy process and the Markov process. As an application of our results, we study a finite-horizon consumption–investment problem for a jump–diffusion financial market consisting of one risk-free asset and one risky asset whose coefficients are assumed to depend on the state of a continuous-time finite-state Markov process. We provide a detailed study of the optimal strategies for this problem for the economically relevant families of power utilities and logarithmic utilities.
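For orientation, the partial integro-differential form of the Hamilton–Jacobi–Bellman equation mentioned in the abstract can be sketched as follows. The notation here (value function $V$, drift $b$, volatility $\sigma$, jump amplitude $\gamma$, Lévy measure $\nu$, chain generator entries $q_{ij}$, running gain $f$, terminal gain $g$) is an illustrative assumption for a one-dimensional state, not the paper's own notation.

```latex
% Hedged sketch: HJB equation for a controlled Markov-switching
% jump-diffusion, state x, regime i of a finite-state chain with
% generator (q_{ij}), control u in U. All symbols are assumptions.
\[
\begin{aligned}
0 = \sup_{u \in U} \Big\{ \,
  & \partial_t V(t,x,i)
    + b(t,x,u,i)\,\partial_x V(t,x,i)
    + \tfrac{1}{2}\,\sigma^2(t,x,u,i)\,\partial_{xx} V(t,x,i)
    + f(t,x,u,i) \\
  % Integral term contributed by the Levy (jump) process:
  & + \int_{\mathbb{R}} \big[ V(t,\, x + \gamma(t,x,u,i,z),\, i)
        - V(t,x,i)
        - \gamma(t,x,u,i,z)\,\partial_x V(t,x,i) \big]\, \nu(\mathrm{d}z) \\
  % Coupling term contributed by the Markov switching:
  & + \sum_{j \neq i} q_{ij}\, \big[ V(t,x,j) - V(t,x,i) \big]
  \Big\},
  \qquad V(T,x,i) = g(x,i).
\end{aligned}
\]
```

The integral term is the "extra" contribution of the Lévy process and the sum over $j \neq i$ couples the equations across regimes, which is why one obtains a system of partial integro-differential equations rather than a single PDE.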

Identifier

In "Journal of Computational and Applied Mathematics". ISSN 0377-0427. 267 (2014) 1-19

0377-0427

http://hdl.handle.net/10400.22/5367

10.1016/j.cam.2014.01.021

Language(s)

eng

Publisher

Elsevier

Relation

http://www.sciencedirect.com/science/article/pii/S0377042714000491

Rights

openAccess

Keywords

Stochastic optimal control; Jump–diffusion; Markov-switching; Optimal consumption–investment

Type

article