987 results for Stochastic Approximation Algorithms
Abstract:
National inflation rates reflect domestic and international (regional and global) influences. The relative importance of these components remains a controversial empirical issue. We extend the literature on inflation co-movement by utilising a dynamic factor model with stochastic volatility to account for shifts in the variance of inflation and endogenously determined regional groupings. We find that most of the variability of inflation is explained by the country-specific disturbance term. Nevertheless, the contribution of the global component in explaining industrialised countries' inflation rates has increased over time.
Abstract:
Less is known about social welfare objectives when it is costly to change prices, as in Rotemberg (1982), than in Calvo-type models. We derive a quadratic approximate welfare function around a distorted steady state for the costly-price-adjustment model and highlight the similarities to, and differences from, the Calvo setup. Both models imply inflation and output stabilisation goals. We explain why the degree of distortion in the economy influences inflation aversion in the Rotemberg framework in a way that differs from the Calvo setup.
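The abstract does not report the coefficients of the derived welfare function, but a second-order approximation to welfare in both frameworks is, schematically, of the familiar form below; the weights and the wedge term are model-specific and are only placeholders here:

```latex
% Stylized period loss from a quadratic approximation to welfare.
% lambda_pi, lambda_x and the wedge x^* are model-specific placeholders:
% around a distorted steady state a term linear in the output gap also
% appears, and in the Rotemberg model lambda_pi typically scales with
% the price-adjustment-cost parameter.
L_t = \lambda_\pi \, \pi_t^2 + \lambda_x \,(x_t - x^*)^2
```

The difference highlighted in the abstract shows up in how the steady-state distortion enters \(\lambda_\pi\) and \(x^*\) in the two models.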
Abstract:
"See the abstract at the beginning of the document in the attached file."
Abstract:
We propose a non-equidistant Q rate matrix formula and an adaptive numerical algorithm for a continuous-time Markov chain to approximate jump-diffusions with affine or non-affine functional specifications. Our approach also accommodates state-dependent jump intensity and jump distribution, a flexibility that is very hard to achieve with other numerical methods. The Kolmogorov-Smirnov test shows that the proposed Markov chain transition density converges to the one given by the likelihood expansion formula as in Aït-Sahalia (2008). We provide numerical examples for European stock option pricing in Black and Scholes (1973), Merton (1976) and Kou (2002).
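The abstract does not give the rate-matrix formula itself, so the following is only a minimal sketch of the general idea on a toy case: a birth-death generator on a non-equidistant price grid whose rates match the local drift and variance of a jump-free Black-Scholes diffusion, with the European call priced through the matrix exponential of the generator. All parameter values are illustrative.

```python
import numpy as np
from math import exp, log, sqrt, erf
from scipy.linalg import expm

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes call, used only as a benchmark."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda z: 0.5 * (1.0 + erf(z / sqrt(2.0)))
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n = 401
x = np.linspace(-1.5, 1.5, n)      # uniform in log-price ...
s = S0 * np.exp(x)                 # ... hence non-equidistant in price
Q = np.zeros((n, n))
for i in range(1, n - 1):          # boundary states left absorbing
    hm, hp = s[i] - s[i - 1], s[i + 1] - s[i]
    mu, var = r * s[i], (sigma * s[i]) ** 2
    # down/up rates chosen so the chain matches the local mean and
    # variance of the diffusion on the unequal spacings hm, hp
    qd = (var - mu * hp) / (hm * (hm + hp))
    qu = (var + mu * hm) / (hp * (hm + hp))
    qd, qu = max(qd, 0.0), max(qu, 0.0)
    Q[i, i - 1], Q[i, i + 1], Q[i, i] = qd, qu, -(qd + qu)

payoff = np.maximum(s - K, 0.0)
price = exp(-r * T) * (expm(Q * T) @ payoff)[n // 2]  # chain starts at S0
```

The same moment-matching recipe extends to state-dependent coefficients, and adding off-tridiagonal rates accommodates jumps, which is where the flexibility mentioned in the abstract comes in.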
Abstract:
We model a boundedly rational agent who suffers from limited attention. The agent considers each feasible alternative with a given (unobservable) probability, the attention parameter, and then chooses the alternative that maximises a preference relation within the set of considered alternatives. We show that this random choice rule is the only one for which the impact of removing an alternative on the choice probability of any other alternative is asymmetric and menu independent. Both the preference relation and the attention parameters are identified uniquely by stochastic choice data.
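The described rule (independent consideration probabilities followed by preference maximisation) is easy to sketch; the paper's identification argument is more general, and the menu, preference, and attention values below are made up for illustration. An alternative is chosen iff it is considered and every preferred rival is overlooked, which makes the effect of removing an alternative menu independent and lets the attention parameters be read off the choice data.

```python
def choice_prob(a, menu, pref, gamma):
    """P(a | menu) under the random-attention rule: a is chosen iff a is
    considered and every strictly preferred rival in the menu is not."""
    p = gamma[a]
    for b in menu:
        if pref.index(b) < pref.index(a):   # b strictly preferred to a
            p *= 1.0 - gamma[b]
    return p

pref = ['x', 'y', 'z']                       # preference: x > y > z
gamma = {'x': 0.5, 'y': 0.8, 'z': 0.3}       # attention parameters

p_z_full = choice_prob('z', ['x', 'y', 'z'], pref, gamma)  # 0.3*0.5*0.2
p_z_no_x = choice_prob('z', ['y', 'z'], pref, gamma)       # 0.3*0.2
# Removing x (preferred to z) scales P(z) by 1/(1 - gamma['x']), so the
# attention parameter of x is identified from the choice data alone:
gamma_x_recovered = 1.0 - p_z_full / p_z_no_x
```

Removing an alternative that z beats would leave P(z) unchanged, which is the asymmetry the abstract refers to.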
Abstract:
In this paper, we develop numerical algorithms that use small requirements of storage and operations for the computation of invariant tori in Hamiltonian systems (exact symplectic maps and Hamiltonian vector fields). The algorithms are based on the parameterization method and follow closely the proof of the KAM theorem given in [LGJV05] and [FLS07]. They essentially consist of solving, by a Newton method, a functional equation satisfied by the invariant tori. Using some geometric identities, it is possible to perform a Newton step using little storage and few operations. In this paper we focus on the numerical issues of the algorithms (speed, storage and stability) and we refer to the mentioned papers for the rigorous results. We show how to compute efficiently both maximal invariant tori and whiskered tori, together with the associated invariant stable and unstable manifolds of whiskered tori. Moreover, we present fast algorithms for the iteration of the quasi-periodic cocycles and the computation of the invariant bundles, which is a preliminary step for the computation of invariant whiskered tori. Since quasi-periodic cocycles appear in other contexts, this section may be of independent interest. The numerical methods presented here make it possible to compute, in a unified way, primary and secondary invariant KAM tori. Secondary tori are invariant tori which can be contracted to a periodic orbit. We present some preliminary results that ensure that the methods are indeed implementable and fast. We postpone to a future paper optimized implementations and results on the breakdown of invariant tori.
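A core ingredient of such Newton steps is the cohomological (small-divisor) equation phi(theta + omega) - phi(theta) = eta(theta), which is solved diagonally in Fourier space. The sketch below shows only this single building block on a toy right-hand side, not the full parameterization-method algorithm of the cited papers; the golden-mean rotation number is chosen because it is Diophantine.

```python
import numpy as np

N = 256
omega = (np.sqrt(5.0) - 1.0) / 2.0      # golden-mean rotation (Diophantine)
theta = np.arange(N) / N
eta = np.sin(2 * np.pi * theta)         # zero-average right-hand side

k = np.fft.fftfreq(N, d=1.0 / N)        # integer Fourier modes
eta_hat = np.fft.fft(eta)
denom = np.exp(2j * np.pi * k * omega) - 1.0
phi_hat = np.zeros_like(eta_hat)
nz = k != 0
phi_hat[nz] = eta_hat[nz] / denom[nz]   # division by the small divisors
phi = np.fft.ifft(phi_hat).real

# verify phi(theta + omega) - phi(theta) = eta(theta) by a Fourier shift
phi_shift = np.fft.ifft(phi_hat * np.exp(2j * np.pi * k * omega)).real
residual = np.max(np.abs(phi_shift - phi - eta))
```

The FFT-based diagonalisation is what keeps the storage and operation counts small: each Newton step reduces to a few such solves plus pointwise algebra on the grid.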
Abstract:
Classical definitions of complementarity are based on cross price elasticities, and so they do not apply, for example, when goods are free. This context includes many relevant cases such as online newspapers and public attractions. We look for a complementarity notion that does not rely on price variation and that is both behavioural (based only on observable choice data) and model-free (valid whether the agent is rational or not). We uncover a conflict between properties that complementarity should intuitively possess. We discuss three ways out of the impossibility.
Abstract:
The neutral rate of allelic substitution is analyzed for a class-structured population subject to a stationary stochastic demographic process. The substitution rate is shown to be generally equal to the effective mutation rate, and under overlapping generations it can be expressed as the effective mutation rate in newborns when measured in units of average generation time. With a uniform mutation rate across classes, the substitution rate reduces to the mutation rate.
Abstract:
Defining an efficient training set is one of the most delicate phases for the success of remote sensing image classification routines. The complexity of the problem, the limited temporal and financial resources, as well as the high intraclass variance can make an algorithm fail if it is trained with a suboptimal dataset. Active learning aims at building efficient training sets by iteratively improving the model performance through sampling. A user-defined heuristic ranks the unlabeled pixels according to a function of the uncertainty of their class membership, and the user is then asked to provide labels for the most uncertain pixels. This paper reviews and tests the main families of active learning algorithms: committee, large margin, and posterior probability-based. For each of them, the most recent advances in the remote sensing community are discussed and some heuristics are detailed and tested. Several challenging remote sensing scenarios are considered, including very high spatial resolution and hyperspectral image classification. Finally, guidelines for choosing a suitable architecture are provided for new and/or inexperienced users.
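As a minimal illustration of the posterior-probability family (the specific heuristics tested in the paper may differ), the sketch below ranks unlabeled samples by the entropy, or alternatively the margin, of their class posteriors; the posterior values are made up for illustration.

```python
import numpy as np

def rank_by_uncertainty(posteriors, heuristic="entropy"):
    """Rank unlabeled samples (rows) from most to least uncertain.
    posteriors: (n_samples, n_classes) array of rows summing to 1."""
    p = np.asarray(posteriors, dtype=float)
    if heuristic == "entropy":
        # Shannon entropy of the class posterior; high = uncertain
        score = -np.sum(np.where(p > 0, p * np.log(p), 0.0), axis=1)
    elif heuristic == "margin":
        # difference between the two largest posteriors; small = uncertain
        part = np.sort(p, axis=1)
        score = -(part[:, -1] - part[:, -2])
    else:
        raise ValueError(f"unknown heuristic: {heuristic}")
    return np.argsort(score)[::-1]           # most uncertain first

posteriors = np.array([[0.98, 0.01, 0.01],   # confidently classified
                       [0.40, 0.35, 0.25],   # highly ambiguous
                       [0.70, 0.20, 0.10]])  # mildly ambiguous
order = rank_by_uncertainty(posteriors)      # queried in this order
```

In an active-learning loop, the top-ranked pixels are shown to the user, their labels are added to the training set, the classifier is retrained, and the ranking is recomputed.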
Abstract:
There are far-reaching conceptual similarities between bi-static surface georadar and post-stack, "zero-offset" seismic reflection data, which are expressed in largely identical processing flows. One important difference is, however, that standard deconvolution algorithms routinely used to enhance the vertical resolution of seismic data are notoriously problematic or even detrimental to the overall signal quality when applied to surface georadar data. We have explored various options for alleviating this problem and have tested them on a geologically well-constrained surface georadar dataset. Standard stochastic and direct deterministic deconvolution approaches proved to be largely unsatisfactory. While least-squares-type deterministic deconvolution showed some promise, the inherent uncertainties involved in estimating the source wavelet introduced some artificial "ringiness". In contrast, we found spectral balancing approaches to be effective, practical and robust means for enhancing the vertical resolution of surface georadar data, particularly, but not exclusively, in the uppermost part of the georadar section, which is notoriously plagued by the interference of the direct air- and groundwaves. For the data considered in this study, it can be argued that band-limited spectral blueing may provide somewhat better results than standard band-limited spectral whitening, particularly in the uppermost part of the section affected by the interference of the air- and groundwaves. Interestingly, this finding is consistent with the fact that the amplitude spectrum resulting from least-squares-type deterministic deconvolution is characterized by a systematic enhancement of higher frequencies at the expense of lower frequencies and hence is blue rather than white. It is also consistent with increasing evidence that spectral "blueness" is a seemingly universal, albeit enigmatic, property of the distribution of reflection coefficients in the Earth.
Our results therefore indicate that spectral balancing techniques in general and spectral blueing in particular represent simple, yet effective means of enhancing the vertical resolution of surface georadar data and, in many cases, could turn out to be a preferable alternative to standard deconvolution approaches.
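As a toy illustration of band-limited spectral whitening (the authors' actual processing parameters and blueing operator are not reported in the abstract), the sketch below flattens the amplitude spectrum of a synthetic trace inside a passband while keeping the phase; the trace is a random reflectivity series convolved with an assumed Ricker wavelet, and all frequencies and values are illustrative.

```python
import numpy as np

def bandlimited_whiten(trace, dt, fmin, fmax, eps=1e-8):
    """Set the amplitude spectrum to unity inside [fmin, fmax] while
    preserving the phase; zero the spectrum outside the band."""
    n = len(trace)
    F = np.fft.rfft(trace)
    f = np.fft.rfftfreq(n, d=dt)
    band = (f >= fmin) & (f <= fmax)
    out = np.zeros_like(F)
    out[band] = F[band] / (np.abs(F[band]) + eps)  # unit amplitude, same phase
    return np.fft.irfft(out, n)

dt = 1e-3                                   # 1 ms sampling (illustrative)
rng = np.random.default_rng(0)
refl = rng.standard_normal(1024)            # synthetic reflectivity series
tw = np.arange(-64, 65) * dt                # assumed 80 Hz Ricker wavelet
w = (1 - 2 * (np.pi * 80.0 * tw) ** 2) * np.exp(-(np.pi * 80.0 * tw) ** 2)
trace = np.convolve(refl, w, mode="same")   # band-limited "georadar" trace

flat = bandlimited_whiten(trace, dt, 20.0, 200.0)
spec = np.abs(np.fft.rfft(flat))
f = np.fft.rfftfreq(1024, d=dt)
band = (f >= 20.0) & (f <= 200.0)
```

Spectral blueing would replace the flat target spectrum with one that rises gently with frequency; in practice the target is also smoothed rather than applied bin by bin.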
Abstract:
To determine the complete spatio-temporal dynamics of a three-dimensional quantum system of N particles, the Schrödinger equation must be integrated in 3N dimensions. Current computers allow this for at most 3 dimensions. To reduce the computation time needed to integrate the multidimensional Schrödinger equation, a series of approximations is usually made, such as the Born–Oppenheimer or mean-field approximations. In general, the price paid for these approximations is the loss of quantum correlations (or entanglement). It is therefore necessary to develop numerical methods that make it possible to integrate and study the dynamics of mesoscopic systems (systems of between three and roughly ten particles) while taking into account, even if only approximately, the quantum correlations between particles. Recently, in the context of electron transport by tunnelling in semiconductor materials, X. Oriols has developed a new method [Phys. Rev. Lett. 98, 066803 (2007)] for the treatment of quantum correlations in mesoscopic systems. This proposal is based on the de Broglie–Bohm formulation of quantum mechanics. We stress that the approach taken by X. Oriols, which we intend to follow here, is pursued not as an interpretative tool but as a numerical tool for integrating more efficiently the Schrödinger equation of few-particle quantum systems. Within this doctoral thesis project, we aim to extend the algorithms developed by X. Oriols to quantum systems composed of both fermions and bosons, and to apply these algorithms to various mesoscopic quantum systems in which quantum correlations play an important role.
Specifically, the problems to be studied are the following: (i) photoionisation of the helium and lithium atoms by an intense laser; (ii) the relation between X. Oriols's formulation and the Born–Oppenheimer approximation; (iii) quantum correlations in bi- and tripartite systems in the configuration space of the particles, using the de Broglie–Bohm formulation.
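As a minimal illustration of the trajectory-based picture underlying the de Broglie–Bohm formulation (not of Oriols's algorithm itself), the sketch below integrates one Bohmian trajectory of a freely spreading 1-D Gaussian wave packet, for which the velocity field is known in closed form; units with hbar = m = 1 and all values are illustrative.

```python
# Free 1-D Gaussian packet centred at x = 0 with zero mean momentum and
# initial width sigma0 = 1 (hbar = m = 1): the width spreads as
# sigma(t) = sqrt(1 + t**2/4), and the Bohmian velocity field is
# v(x, t) = x * sigma'(t)/sigma(t) = x * t / (4 + t**2).
def v(x, t):
    return x * t / (4.0 + t ** 2)

dt, t = 1e-4, 0.0
x = 1.0                        # trajectory launched at x(0) = 1
while t < 2.0:
    x += v(x, t) * dt          # explicit Euler step along the velocity field
    t += dt
# exact result: x(t) = x(0) * sigma(t)/sigma(0), i.e. sqrt(2) at t = 2
```

In an interacting many-particle system the velocity field is no longer available in closed form; it must be obtained from the (conditional) wave function at each step, which is precisely where the numerical method described above comes in.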
Advanced mapping of environmental data: Geostatistics, Machine Learning and Bayesian Maximum Entropy
Abstract:
This book combines geostatistics and global mapping systems to present an up-to-the-minute study of environmental data. Featuring numerous case studies, the reference covers model-dependent (geostatistics) and data-driven (machine learning) analysis techniques such as risk mapping, conditional stochastic simulations, descriptions of spatial uncertainty and variability, artificial neural networks (ANN) for spatial data, Bayesian maximum entropy (BME), and more.
Gaussian estimates for the density of the non-linear stochastic heat equation in any space dimension
Abstract:
In this paper, we establish lower and upper Gaussian bounds for the probability density of the mild solution to the stochastic heat equation with multiplicative noise and in any space dimension. The driving perturbation is a Gaussian noise which is white in time with some spatially homogeneous covariance. These estimates are obtained using tools of the Malliavin calculus. The most challenging part is the lower bound, which is obtained by adapting a general method developed by Kohatsu-Higa to the underlying spatially homogeneous Gaussian setting. Both lower and upper estimates have the same form: a Gaussian density with a variance which is equal to that of the mild solution of the corresponding linear equation with additive noise.
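Schematically, and with all constants left as unspecified placeholders (the abstract does not report them), the two-sided estimate described above takes the form:

```latex
% Schematic Gaussian bounds on the density p_{t,x}(y) of the mild solution.
% C_1, C_2, c_1, c_2 > 0 and the centring term m_{t,x} are placeholders;
% \sigma_t^2 is the variance of the mild solution of the corresponding
% linear equation with additive noise.
\frac{C_1}{\sigma_t}\exp\!\left(-\frac{|y - m_{t,x}|^2}{C_2\,\sigma_t^2}\right)
\;\le\; p_{t,x}(y) \;\le\;
\frac{c_1}{\sigma_t}\exp\!\left(-\frac{|y - m_{t,x}|^2}{c_2\,\sigma_t^2}\right)
```

Both bounds share the same Gaussian shape, differing only in the constants, which is the sense in which the estimates "have the same form".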