938 results for "Price dynamics model with memory"
Abstract:
We study the joint determination of the lag length, the dimension of the cointegrating space and the rank of the matrix of short-run parameters of a vector autoregressive (VAR) model using model selection criteria. We consider model selection criteria which have data-dependent penalties for a lack of parsimony, as well as the traditional ones. We suggest a new procedure which is a hybrid of traditional criteria and criteria with data-dependent penalties. In order to compute the fit of each model, we propose an iterative procedure to compute the maximum likelihood estimates of the parameters of a VAR model with short-run and long-run restrictions. Our Monte Carlo simulations measure the improvements in forecasting accuracy that can arise from the joint determination of lag length and rank, relative to the commonly used procedure of selecting the lag length only and then testing for cointegration.
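A minimal sketch of the criterion-based selection idea, simplified to choosing only the lag length of a reduced-form VAR estimated by OLS (the joint lag/rank search, the restricted maximum likelihood step and the data-dependent penalties of the paper are not reproduced; the BIC penalty and the helper names are illustrative assumptions):

```python
import numpy as np

def var_bic(y, p):
    """BIC of a VAR(p) fitted by equation-wise OLS (intercept included)."""
    T, K = y.shape
    # Regressors: intercept plus p lags of all K variables.
    X = np.ones((T - p, 1 + K * p))
    for lag in range(1, p + 1):
        X[:, 1 + K * (lag - 1): 1 + K * lag] = y[p - lag: T - lag]
    Y = y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)
    U = Y - X @ B                              # residuals
    n = T - p                                  # effective sample size
    sigma_u = U.T @ U / n                      # residual covariance
    n_params = K * (1 + K * p)                 # coefficients across all equations
    # In practice a common estimation sample is used so criteria are comparable.
    return np.log(np.linalg.det(sigma_u)) + n_params * np.log(n) / n

def select_lag(y, p_max):
    """Return the lag length minimising the BIC over 1..p_max."""
    return min(range(1, p_max + 1), key=lambda p: var_bic(y, p))
```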
Abstract:
When the joint assumption of optimal risk sharing and coincidence of beliefs is added to the collective model of Browning and Chiappori (1998), income pooling and symmetry of the pseudo-Hicksian matrix are shown to be restored. Because these are also the features of the unitary model usually rejected in empirical studies, one may argue that these assumptions are at odds with the evidence. We argue that this need not be the case. The use of cross-section data to generate price and income variation is based on a definition of income pooling or symmetry suitable for testing the unitary model, but not the collective model with risk sharing. Also, by relaxing assumptions on beliefs, we show that symmetry and income pooling are lost. However, with the usual assumptions on the existence of assignable goods, we show that beliefs are identifiable. More importantly, if differences in beliefs are not too extreme, the risk sharing hypothesis is still testable.
Abstract:
The United States Department of Agriculture (USDA) publishes monthly reports releasing data on crop conditions, global supply and demand, and stock levels, which serve as a reference for all participants in the agricultural commodity markets. These markets show markedly higher volatility around the release of the reports. A stochastic volatility model with jumps is used for the price dynamics of corn and soybeans. There is no 'ideal' model for this purpose; each of the existing ones has its advantages and disadvantages. The model chosen is that of Oztukel and Wilmott (1998), an empirical stochastic volatility model augmented with deterministic jumps. Empirically, it is shown that a stochastic volatility model can be fitted well to the commodity market, and that a jump-diffusion process can adequately represent the jumps the market exhibits during the release of the reports. Exchange-traded agricultural commodity options are American-style, so several available methods could be used to price options under the dynamics of the proposed model. Since the chosen model is a multi-factor model, the appropriate pricing method is the least-squares Monte Carlo (LSM) approach proposed by Longstaff and Schwartz (2001). The options priced by the model are used in a hedging strategy for a physical position in corn and soybeans, and the efficiency of this strategy is compared with strategies using instruments available in the market.
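A minimal sketch of least-squares Monte Carlo (Longstaff-Schwartz) pricing of an American put, shown here under plain geometric Brownian motion rather than the stochastic-volatility-with-jumps dynamics of the thesis; all parameter values and function names are illustrative assumptions:

```python
import numpy as np

def lsm_american_put(s0=100.0, k=100.0, r=0.05, sigma=0.3,
                     t=1.0, n_steps=50, n_paths=20000, seed=0):
    """Least-squares Monte Carlo price of an American put under GBM."""
    rng = np.random.default_rng(seed)
    dt = t / n_steps
    # Simulate GBM paths (columns are exercise dates dt, 2*dt, ..., t).
    z = rng.standard_normal((n_paths, n_steps))
    log_s = np.log(s0) + np.cumsum((r - 0.5 * sigma**2) * dt
                                   + sigma * np.sqrt(dt) * z, axis=1)
    s = np.exp(log_s)
    cashflow = np.maximum(k - s[:, -1], 0.0)          # payoff at maturity
    for i in range(n_steps - 2, -1, -1):              # backward induction
        cashflow *= np.exp(-r * dt)                   # discount one step
        itm = k - s[:, i] > 0.0                       # in-the-money paths
        if itm.sum() > 0:
            x = s[itm, i]
            # Regress discounted continuation values on a polynomial basis.
            coeffs = np.polyfit(x, cashflow[itm], 2)
            continuation = np.polyval(coeffs, x)
            exercise = k - x
            ex_now = exercise > continuation
            cashflow[np.where(itm)[0][ex_now]] = exercise[ex_now]
    return np.exp(-r * dt) * cashflow.mean()

print(lsm_american_put())
```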
Abstract:
This thesis studies price-setting models and their macroeconomic implications. In the first two chapters I analyze models in which firms' pricing decisions take into account menu costs and information costs. In Chapter 1 I estimate such models using price-change statistics for the United States and conclude that information costs are significantly larger than menu costs, and that the data clearly favor the model in which information about aggregate conditions is costly while idiosyncratic information is free. In Chapter 2 I investigate the consequences of monetary shocks and disinflation announcements using the previously estimated models, and show that the degree of monetary non-neutrality is larger in the model in which part of the information is free. Chapter 3 is a joint paper with Carlos Carvalho (PUC-Rio) and Antonella Tutino (Federal Reserve Bank of Dallas). In it we examine a price-setting model in which firms face a Shannon-type information-flow constraint. We calibrate the model and study impulse-response functions to idiosyncratic and aggregate shocks. We show that firms prefer to process aggregate and idiosyncratic information jointly rather than investigating them separately. This type of processing generates more frequent price adjustments, reducing the persistence of the real effects of monetary shocks.
Abstract:
The paper analyzes a two-period general equilibrium model with individual risk and moral hazard. Each household faces two individual states of nature in the second period. These states differ solely in the household's vector of initial endowments, which is strictly larger in the first state (good state) than in the second state (bad state). In the first period households choose a non-observable action. Higher levels of action give a higher probability of the good state of nature occurring, but lower levels of utility. Households have access to an insurance market that allows transfer of income across states of nature. I consider two models of financial markets, the price-taking behavior model and the nonlinear pricing model. In the price-taking behavior model, suppliers of insurance have a belief about each household's action and take asset prices as given. A variation of standard arguments shows the existence of a rational expectations equilibrium. For a generic set of economies every equilibrium is constrained sub-optimal: there are commodity prices and a reallocation of financial assets satisfying the first-period budget constraint such that, at each household's optimal choice given those prices and asset reallocation, markets clear and every household's welfare improves. In the nonlinear pricing model, suppliers of insurance behave strategically, offering nonlinear pricing contracts to the households. I provide sufficient conditions for the existence of equilibrium and investigate the optimality properties of the model. If there is a single commodity, then every equilibrium is constrained optimal. If there is more than one commodity, then for a generic set of economies every equilibrium is constrained sub-optimal.
Abstract:
We characterize optimal policy in a two-sector growth model with fixed coefficients and with no discounting. The model is a specialization, to a single type of machine, of a general vintage capital model originally formulated by Robinson, Solow and Srinivasan, and its simplicity is not mirrored in its rich dynamics, which seem to have been missed in earlier work. Our results are obtained by viewing the model as a specific instance of the general theory of resource allocation initiated by Ramsey and von Neumann and brought to completion by McKenzie. In addition to the more recent literature on chaotic dynamics, we relate our results to the older literature on optimal growth with one state variable: specifically, to the one-sector setting of Ramsey, Cass and Koopmans, as well as to the two-sector setting of Srinivasan and Uzawa. The analysis is purely geometric, and from a methodological point of view our work can be seen as an argument, at least in part, for the rehabilitation of geometric methods as an engine of analysis.
Abstract:
Who was the cowboy in Washington? What is the land of sushi? Most people would have answers to these questions readily available; yet modern search engines, arguably the epitome of technology in finding answers to most questions, are completely unable to do so. It seems that people need only a few information items to rapidly converge to a seemingly 'obvious' solution. We will study approaches to this problem, with two additional hard demands that constrain the space of possible theories: the sought model must be both psychologically and neuroscientifically plausible. Building on top of the mathematical model of memory called Sparse Distributed Memory, we will see how some well-known methods in cryptography can point toward a promising, comprehensive solution that preserves four crucial properties of human psychology.
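A minimal sketch of the core read/write cycle of Kanerva's Sparse Distributed Memory, the model the abstract builds on (the dimensions, activation radius and random hard locations below are illustrative assumptions; the cryptographic extensions discussed in the thesis are not shown):

```python
import numpy as np

class SparseDistributedMemory:
    """Toy SDM: random hard locations, Hamming-radius activation, counter storage."""

    def __init__(self, n_bits=256, n_locations=2000, radius=111, seed=0):
        rng = np.random.default_rng(seed)
        self.addresses = rng.integers(0, 2, size=(n_locations, n_bits))  # hard locations
        self.counters = np.zeros((n_locations, n_bits), dtype=int)
        self.radius = radius

    def _active(self, address):
        # Locations whose Hamming distance to the address is within the radius.
        dist = np.count_nonzero(self.addresses != address, axis=1)
        return dist <= self.radius

    def write(self, address, data):
        # Increment counters where the data bit is 1, decrement where it is 0.
        self.counters[self._active(address)] += 2 * np.asarray(data) - 1

    def read(self, address):
        # Sum counters over active locations and threshold at zero.
        total = self.counters[self._active(address)].sum(axis=0)
        return (total > 0).astype(int)

# Usage: store a random pattern at its own address and recall it from a noisy cue.
rng = np.random.default_rng(1)
pattern = rng.integers(0, 2, 256)
sdm = SparseDistributedMemory()
sdm.write(pattern, pattern)
cue = pattern.copy()
cue[:20] ^= 1                      # flip 20 bits of the cue
print((sdm.read(cue) == pattern).mean())
```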
Abstract:
The real exchange rate is an important macroeconomic price and affects economic activity, interest rates, domestic prices, and trade and investment flows, among other variables. Methodologies have been developed in empirical exchange rate misalignment studies to evaluate whether a real effective exchange rate is overvalued or undervalued. There is a vast body of literature on the determinants of long-term real exchange rates and on empirical strategies to implement the equilibrium norms obtained from theoretical models. This study seeks to contribute to this literature by showing that it is possible to calculate the misalignment from a mixed-frequency cointegrated vector error correction framework. An empirical exercise using United States real exchange rate data is performed. The results suggest that the model with mixed-frequency data is preferred to the models with same-frequency variables.
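A minimal sketch of the misalignment calculation in the spirit described above, simplified to a single-frequency static long-run regression rather than the mixed-frequency VECM of the paper: estimate the long-run relation between the real exchange rate and its fundamentals, then take the deviation of the observed rate from the fitted equilibrium as the misalignment. The variable names and simulated data are illustrative assumptions.

```python
import numpy as np

def misalignment(rer, fundamentals):
    """Deviation of the (log) real exchange rate from a fitted long-run relation.

    rer          : (T,) array of log real exchange rates
    fundamentals : (T, k) array of long-run determinants (e.g. terms of trade,
                   net foreign assets, relative productivity)
    """
    T = rer.shape[0]
    X = np.column_stack([np.ones(T), fundamentals])
    beta, *_ = np.linalg.lstsq(X, rer, rcond=None)   # static long-run regression
    equilibrium = X @ beta                            # fitted equilibrium rate
    # Deviation from equilibrium; the sign convention depends on the RER definition.
    return rer - equilibrium

# Usage with simulated data: a cointegrated pair plus noise.
rng = np.random.default_rng(0)
fund = np.cumsum(rng.standard_normal((200, 1)), axis=0)   # random-walk fundamental
rer = 0.8 * fund[:, 0] + rng.standard_normal(200) * 0.2
print(misalignment(rer, fund)[-5:])
```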
Abstract:
In this study we analysed the theoretical population dynamics of C. megacephala, an exotic blowfly, kept at 25 and 30 °C, using a density-dependent mathematical model with parametric estimates of survival and fecundity in the laboratory. No change in oscillation pattern was found between the two temperatures: the populations exhibited a two-point limit cycle, i.e. oscillations between two fixed points, at both 25 and 30 °C. However, a quantitative change was observed, indicating that the number of immatures in equilibrium is 1176 at 25 °C and 1944 at 30 °C. The implications of this difference in equilibrium for the population dynamics of C. megacephala are discussed.
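A minimal sketch of the kind of density-dependent recursion used in this literature, in which per-capita fecundity and immature survival decline exponentially with immature density; the functional form is an assumption and the parameter values below are purely illustrative, not those estimated in the study:

```python
import numpy as np

def simulate(n0, f_max, s_max, f_slope, s_slope, generations=60):
    """Iterate n_{t+1} = 0.5 * F(n_t) * S(n_t) * n_t with exponential density dependence."""
    n = [n0]
    for _ in range(generations):
        nt = n[-1]
        fecundity = f_max * np.exp(-f_slope * nt)   # eggs per female at density nt
        survival = s_max * np.exp(-s_slope * nt)    # immature survival at density nt
        n.append(0.5 * fecundity * survival * nt)   # factor 0.5: only females reproduce
    return np.array(n)

# Illustrative parameters in the regime that produces a two-point limit cycle.
traj = simulate(n0=100, f_max=25, s_max=0.8, f_slope=0.0008, s_slope=0.0008,
                generations=200)
print(traj[-4:])   # the trajectory settles into alternation between two values
```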
Abstract:
The leading-twist pion distribution amplitude is obtained at a low normalization scale of order 1/ρc (the inverse average instanton size). Pion dynamics, consistent with gauge invariance and low-energy theorems, is considered within the instanton vacuum model. The results are QCD-evolved to higher momentum-transfer values and are in agreement with recent CLEO data on the pion transition form factor. It is also shown that some previous calculations violate the axial Ward-Takahashi identity. © 2001 MAIK Nauka/Interperiodica.
Abstract:
We analyse the production of multileptons in the simplest supergravity model with bilinear violation of R parity at the Fermilab Tevatron. Despite the small R-parity violating couplings needed to generate the neutrino masses indicated by current atmospheric neutrino data, the lightest supersymmetric particle is unstable and can decay inside the detector. This leads to a phenomenology quite distinct from that of the R-parity conserving scenario. We quantify by how much the supersymmetric multilepton signals differ from the R-parity conserving expectations, displaying our results in the m0 ⊗ m1/2 plane. We show that the presence of bilinear R-parity violating interactions enhances the supersymmetric multilepton signals over most of the parameter space, especially at moderate and large m0. © SISSA/ISAS 2003.
Abstract:
Simulations of overshooting, tropical deep convection using a Cloud Resolving Model with bulk microphysics are presented in order to examine the effect on the water content of the TTL (Tropical Tropopause Layer) and lower stratosphere. This case study is a subproject of the HIBISCUS (Impact of tropical convection on the upper troposphere and lower stratosphere at global scale) campaign, which took place in Bauru, Brazil (22° S, 49° W), from the end of January to early March 2004. Comparisons between 2-D and 3-D simulations suggest that the use of 3-D dynamics is vital in order to capture the mixing between the overshoot and the stratospheric air, which caused evaporation of ice and resulted in an overall moistening of the lower stratosphere. In contrast, a dehydrating effect was predicted by the 2-D simulation due to the extra time, allowed by the lack of mixing, for the ice transported to the region to precipitate out of the overshoot air. Three different strengths of convection are simulated in 3-D by applying successively lower heating rates (used to initiate the convection) in the boundary layer. Moistening is produced in all cases, indicating that convective vigour is not a factor in whether moistening or dehydration is produced by clouds that penetrate the tropopause, since the weakest case only just did so. An estimate of the moistening effect of these clouds on an air parcel traversing a convective region is made based on the domain-mean simulated moistening and the frequency of convective events observed by the IPMet (Instituto de Pesquisas Meteorológicas, Universidade Estadual Paulista) radar (S-band type at 2.8 GHz) to have the same 10 dBZ echo top height as those simulated. These suggest a fairly significant mean moistening of 0.26, 0.13 and 0.05 ppmv in the strongest, medium and weakest cases, respectively, for heights between 16 and 17 km. Since the cold point and WMO (World Meteorological Organization) tropopause in this region lie at ∼15.9 km, this is likely to represent direct stratospheric moistening. Much more moistening is predicted for the 15-16 km height range, with increases of 0.85-2.8 ppmv. However, this air would need to be lofted through the tropopause by the Brewer-Dobson circulation in order to have a stratospheric effect. Whether this is likely is uncertain and, in addition, the dehydration of air as it passes through the cold trap and the number of times that trajectories sample convective regions need to be taken into account to gauge the overall stratospheric effect. Nevertheless, the results suggest a potentially significant role for convection in determining the stratospheric water content. Sensitivity tests exploring the impact of increased aerosol numbers in the boundary layer suggest that a corresponding rise in cloud droplet numbers at cloud base would increase the number concentrations of the ice crystals transported to the TTL, which had the effect of reducing the fall speeds of the ice and causing a ∼13% rise in the mean vapour increase in both the 15-16 and 16-17 km height ranges, when compared to the control case. Increases in the total water were much larger, being 34% and 132% higher for the same height ranges, but it is unclear whether the extra ice will be able to evaporate before precipitating from the region. These results suggest a possible impact of natural and anthropogenic aerosols on how convective clouds affect stratospheric moisture levels.