944 results for Variable pricing model
Abstract:
New Keynesian models rely heavily on two workhorse models of nominal inertia - price contracts of random duration (Calvo, 1983) and price adjustment costs (Rotemberg, 1982) - to generate a meaningful role for monetary policy. These alternative descriptions of price stickiness are often used interchangeably since, to a first-order approximation, they imply an isomorphic Phillips curve and, if the steady state is efficient, identical objectives for the policy maker, and hence, in an LQ framework, the same policy conclusions. In this paper we compute time-consistent optimal monetary policy in benchmark New Keynesian models containing each form of price stickiness. Using global solution techniques, we find that the inflation bias problem under Calvo contracts is significantly greater than under Rotemberg pricing, despite the fact that the former typically exhibits far greater welfare costs of inflation. The rates of inflation observed under this policy are non-trivial and suggest that the model can comfortably generate the rates of inflation at which the problematic issues highlighted in the trend inflation literature emerge, as well as the movements in trend inflation emphasized in empirical studies of the evolution of inflation. Finally, we consider the response to cost-push shocks across both models and find that these, too, can be significantly different. The choice of which form of nominal inertia to adopt is not innocuous.
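For readers who want the first-order isomorphism in symbols, a textbook sketch follows. The slope formulas assume the standard benchmark model with CRRA consumption utility (parameter σ), inverse Frisch elasticity φ, linear-in-labour production, and Dixit-Stiglitz elasticity ε; they are given here as an illustrative assumption, not as this paper's exact specification.

```latex
% Illustrative first-order Phillips curves (benchmark model, linear-in-labour production)
% Calvo: a fraction \theta of firms leaves prices unchanged each period
\pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa_C\,\hat{y}_t,
\qquad
\kappa_C = \frac{(1-\theta)(1-\beta\theta)}{\theta}\,(\sigma+\varphi)

% Rotemberg: quadratic price-adjustment cost with scale parameter \phi_p
\pi_t = \beta\,\mathbb{E}_t \pi_{t+1} + \kappa_R\,\hat{y}_t,
\qquad
\kappa_R = \frac{\varepsilon-1}{\phi_p}\,(\sigma+\varphi)

% Observationally equivalent to first order when \kappa_C = \kappa_R, i.e.
% \phi_p = \dfrac{\theta\,(\varepsilon-1)}{(1-\theta)(1-\beta\theta)}
```

To first order the two models differ only through the mapping between θ and φ_p, which is why the higher-order and global-solution differences emphasized in the abstract are the interesting object.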
Abstract:
Bayesian model averaging (BMA) methods are regularly used to deal with model uncertainty in regression models. This paper shows how to introduce BMA methods into quantile regression, allowing different predictors to affect different quantiles of the dependent variable. I show that quantile-regression BMA methods can help reduce uncertainty about future inflation outcomes by providing superior predictive densities compared with mean regression models, with and without BMA.
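A minimal sketch of the averaging mechanics is given below. It is not the paper's algorithm: posterior model probabilities are approximated with BIC-style weights built from the check-loss, and the data-generating process, predictors, and quantile are invented purely to show how quantile forecasts from different predictor subsets can be averaged.

```python
# Illustrative model averaging for quantile regression (NOT the paper's method).
import itertools
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300
X = rng.normal(size=(n, 3))                                          # candidate predictors
y = 1.0 + 0.8 * X[:, 0] + (1 + 0.5 * X[:, 1]) * rng.normal(size=n)   # heteroskedastic toy DGP

def fit_quantile(y, X_sub, q):
    exog = sm.add_constant(X_sub)
    res = sm.QuantReg(y, exog).fit(q=q)
    u = y - res.predict(exog)
    check_loss = np.sum(u * (q - (u < 0)))                            # pinball (check) loss
    k = exog.shape[1]
    pseudo_bic = 2 * len(y) * np.log(check_loss / len(y)) + k * np.log(len(y))
    return res, pseudo_bic

q = 0.9
models, bics = [], []
for subset in itertools.chain.from_iterable(
        itertools.combinations(range(3), r) for r in range(1, 4)):    # all predictor subsets
    res, bic = fit_quantile(y, X[:, list(subset)], q)
    models.append((subset, res))
    bics.append(bic)

bics = np.array(bics)
weights = np.exp(-0.5 * (bics - bics.min()))
weights /= weights.sum()                                              # rough model probabilities

# Model-averaged forecast of the 0.9 quantile at a new point
x_new = np.array([0.5, -1.0, 0.2])
forecast = 0.0
for w, (subset, res) in zip(weights, models):
    exog_new = np.concatenate(([1.0], x_new[list(subset)]))
    forecast += w * res.predict(exog_new.reshape(1, -1))[0]
print("model-averaged 0.9-quantile forecast:", round(float(forecast), 3))
```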
Abstract:
Abstract. Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Because conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. It is shown that as the number of simulations diverges, the estimator is consistent and a higher-order expansion reveals the stochastic difference between the infeasible GMM estimator based on the same moment conditions and the simulated version. In particular, we show how to adjust standard errors to account for the simulations. Monte Carlo results show how the estimator may be applied to a range of dynamic latent variable (DLV) models, and that it performs well in comparison to several other estimators that have been proposed for DLV models.
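A schematic sketch of the idea follows: simulate the model once at a trial parameter, estimate the conditional moments by kernel regression on the simulation, and plug them into a standard method-of-moments objective. The toy model (a latent AR(1) observed with noise), instruments, bandwidth, and grid search are illustrative assumptions, not the estimator studied in the paper.

```python
# Kernel-smoothed simulated moments, schematic version.
import numpy as np

rng = np.random.default_rng(1)

def simulate(theta, T, rng):
    rho, sigma_h, sigma_y = theta
    h = np.zeros(T)
    for t in range(1, T):
        h[t] = rho * h[t - 1] + sigma_h * rng.normal()     # latent state, never observed
    return h + sigma_y * rng.normal(size=T)                # observed series

def kernel_cond_mean(x_sim, y_sim, x_eval, bandwidth):
    # Nadaraya-Watson estimate of E[y | x = x_eval] from simulated (x, y) pairs
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / bandwidth) ** 2)
    return (w @ y_sim) / w.sum(axis=1)

def moment_conditions(theta, data, T_sim=20_000, bandwidth=0.3):
    sim = simulate(theta, T_sim, np.random.default_rng(123))   # common random numbers
    x, y = data[:-1], data[1:]
    cond_mean = kernel_cond_mean(sim[:-1], sim[1:], x, bandwidth)  # Ê_theta[y_t | y_{t-1}]
    resid = y - cond_mean
    Z = np.column_stack([np.ones_like(x), x, x ** 2])           # instruments
    return resid @ Z / len(y)

def gmm_objective(theta, data):
    g = moment_conditions(theta, data)
    return float(g @ g)                                         # identity weighting for the sketch

data = simulate((0.8, 0.5, 0.3), 400, rng)                      # "observed" series
rhos = np.linspace(0.5, 0.95, 10)
best_rho = min(rhos, key=lambda r: gmm_objective((r, 0.5, 0.3), data))
print("best rho on the grid:", round(float(best_rho), 3))
```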
Abstract:
It is generally accepted that most plant populations are locally adapted. Yet understanding how environmental forces give rise to adaptive genetic variation is a challenge in conservation genetics and crucial to the preservation of species under rapidly changing climatic conditions. Environmental variation, phylogeographic history, and population demographic processes all contribute to spatially structured genetic variation; however, few current models attempt to separate these confounding effects. To illustrate the benefits of using a spatially explicit model for identifying potentially adaptive loci, we compared outlier locus detection methods with a recently developed landscape genetic approach. We analyzed 157 loci from samples of the alpine herb Gentiana nivalis collected across the European Alps. Principal coordinates of neighbor matrices (PCNM), eigenvectors that quantify multi-scale spatial variation present in a data set, were incorporated into a landscape genetic approach relating AFLP frequencies to 23 environmental variables. Four major findings emerged. 1) Fifteen loci were significantly correlated with at least one predictor variable (adjusted R² > 0.5). 2) Models including PCNM variables identified eight more potentially adaptive loci than models run without spatial variables. 3) When compared to outlier detection methods, the landscape genetic approach detected four of the same loci plus 11 additional loci. 4) Temperature, precipitation, and solar radiation were the three major environmental factors driving potentially adaptive genetic variation in G. nivalis. The techniques presented in this paper offer an efficient method for identifying potentially adaptive genetic variation and the associated environmental forces of selection, providing an important step forward for the conservation of non-model species under global change.
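The sketch below shows the shape of such an analysis: PCNM-style eigenvectors are built from site coordinates, and each locus is regressed on environmental predictors with and without the spatial eigenvectors. The toy data, truncation rule, number of retained eigenvectors, and the R² cutoff are illustrative assumptions, not the study's actual choices.

```python
# Schematic PCNM + locus-environment regression on invented data.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(2)
n_sites, n_loci = 60, 20
coords = rng.uniform(0, 100, size=(n_sites, 2))            # site locations
env = rng.normal(size=(n_sites, 3))                         # e.g. temperature, precipitation, radiation
freq = rng.beta(2, 2, size=(n_sites, n_loci))               # AFLP band frequencies (toy data)

# PCNM: truncate the distance matrix, double-centre, keep positive eigenvectors
D = squareform(pdist(coords))
t = np.percentile(D[D > 0], 10)                              # truncation threshold (assumption)
Dt = np.where(D <= t, D, 4 * t)
J = np.eye(n_sites) - np.ones((n_sites, n_sites)) / n_sites
G = J @ (-0.5 * Dt ** 2) @ J
eigval, eigvec = np.linalg.eigh(G)
keep = eigval > 1e-8
pcnm = (eigvec[:, keep] * np.sqrt(eigval[keep]))[:, ::-1][:, :15]   # 15 broadest-scale vectors

def adj_r2(y, X):
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - resid.var() / y.var()
    p = X1.shape[1] - 1
    return 1 - (1 - r2) * (len(y) - 1) / (len(y) - p - 1)

for j in range(n_loci):
    r2_env = adj_r2(freq[:, j], env)
    r2_both = adj_r2(freq[:, j], np.column_stack([env, pcnm]))
    flag = "candidate" if r2_both > 0.5 else ""              # cutoff shown only for illustration
    print(f"locus {j:2d}: adj R2 env-only={r2_env:5.2f}, env+PCNM={r2_both:5.2f} {flag}")
```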
Abstract:
This thesis focuses on theoretical asset pricing models and their empirical applications. I aim to investigate the following noteworthy problems: i) whether the relationship between asset prices and investors' propensities to gamble and to fear disaster is time-varying; ii) whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk; iii) whether costly learning drives liquidity risk. Moreover, empirical tests support the above assumptions and provide novel findings in asset pricing, investment decisions, and firms' funding liquidity. The first chapter considers a partial equilibrium model where investors have heterogeneous propensities to gamble and to fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of extreme returns. Using US data from 1988 to 2012, my model demonstrates that in bad times risk aversion is higher, more people fear disaster, and fewer people gamble than in good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. The second chapter consists of two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major upward and downward market movements can be well predicted by the asymmetric comovement of betas, which is characterized by an indicator called "Systematic Downside Risk" (SDR). We find that SDR can effectively forecast future stock market movements, and we obtain out-of-sample R-squared values (compared with a strategy using the historical mean) of more than 2.27% with monthly data. The second essay reconciles a well-known empirical fact: aggregating positively skewed firm returns leads to a negatively skewed market return. We reconcile this fact through firms' greater response to negative market news than to positive market news. We also propose several market return predictors, such as downside idiosyncratic skewness. The third chapter studies funding liquidity risk in a general equilibrium model featuring two agents: an entrepreneur and an external investor. Only the investor needs to acquire information to estimate the unobservable fundamentals driving economic output. The novelty is that information acquisition is more costly in bad times than in good times, i.e. the information cost is counter-cyclical, as supported by previous empirical evidence. We then show that liquidity risk is principally driven by costly learning.

Résumé: This thesis presents theoretical asset pricing models and their empirical applications. My objective is to study the following problems: whether the relationship between asset prices and investors' propensities to gamble and to fear disaster varies over time; whether the conflicting evidence on firm- and market-level skewness can be explained by downside risk; and whether costly learning drives liquidity risk. In addition, empirical tests support the above assumptions and provide new findings on asset pricing, investment decisions, and firms' funding liquidity. The first chapter examines an equilibrium model in which investors have heterogeneous propensities to gamble and to fear disaster. Skewness preference represents the desire to gamble, while kurtosis aversion represents fear of disaster. Using US data from 1988 to 2012, my model shows that in bad times risk aversion is higher, more people fear disaster, and fewer people gamble than in good times. This leads to a new empirical finding: gambling preference has a greater impact on asset prices during market downturns than during booms. Exploiting this relation alone generates an annual excess return of 7.74% that is not explained by popular factor models. The second chapter comprises two essays. The first essay introduces a formula based on the conditional CAPM for decomposing market skewness. We find that major upward and downward market movements can be predicted by the comovement of betas. An indicator called Systematic Downside Risk (SDR) is created to characterize this asymmetry in the comovement of betas. We find that SDR forecasts future stock market movements effectively, and we obtain out-of-sample R-squared values (compared with a strategy using the historical mean) of more than 2.272% with monthly data. An investor timing the market with SDR would have obtained a substantial increase of 0.206 in the ratio. The second essay reconciles a well-known empirical fact about firm- and market-level skewness: aggregating positively skewed firm returns leads to a negatively skewed market return. We decompose market return skewness at the firm level and reconcile this fact through firms' greater response to negative market news than to positive market news. This decomposition reveals several effective market return predictors, such as volatility-weighted idiosyncratic skewness and downside idiosyncratic skewness. The third chapter provides a new theoretical foundation for time-varying liquidity problems in an incomplete-market environment. We propose a general equilibrium model with two agents: an entrepreneur and an external investor. Only the investor needs to learn the true state of the firm; consequently, payoff information is costly. The novelty is that information acquisition is more expensive in bad times than in good times, as supported by previous empirical evidence. When a recession begins, costly learning raises liquidity premia, causing liquidity to evaporate, as also supported by previous empirical evidence.
Abstract:
Human schistosomiasis develops extensive and dense fibrosis in the portal space, together with congested new blood vessels. This study demonstrates that Calomys callosus infected with Schistosoma mansoni also develops fibrovascular lesions, which are found in the intestinal subserosa. Animals were percutaneously infected with 70 cercariae and necropsied at 42, 45, 55, 80, 90 and 160 days after infection. Intestinal sections were prepared for brightfield and polarization microscopy, confocal laser scanning microscopy, and transmission and scanning electron microscopy. Immunohistological analysis was also performed, and some nodules were aseptically collected for cell culture. Numerous intestinal nodules, appearing from 55 up to 160 days after infection, were localized at the interface between the external muscular layer and the intestinal serosa, consisting of fibrovascular tissue forming a shell around central granuloma(s). Intranodular new vessels were derived from the vasculature of the external vascular layer and were positive for laminin, chondroitin-sulfate, smooth muscle alpha-actin and FVIII-RA. Fibroblastic cells and extracellular matrix components (collagens I, III and VI, fibronectin and tenascin) comprised the stroma. Intermixed with the fibroblasts and vessels there was a variable number of eosinophils, macrophages and haemorrhagic foci. In conclusion, the nodules constitute an excellent and accessible model to study fibrogenesis and angiogenesis dependent on S. mansoni eggs. The fibrogenic activity is fibroblast-dependent rather than myofibroblast-dependent. The angiogenesis is so prominent that it causes haemorrhagic ascites.
Abstract:
Analyzing the relationship between the baseline value and subsequent change of a continuous variable is a frequent matter of inquiry in cohort studies. These analyses are surprisingly complex, particularly if only two waves of data are available. It is unclear for non-biostatisticians where the complexity of this analysis lies and which statistical method is adequate. With the help of simulated longitudinal data of body mass index in children, we review statistical methods for the analysis of the association between the baseline value and subsequent change, assuming linear growth with time. Key issues in such analyses are mathematical coupling, measurement error, variability of change between individuals, and regression to the mean. Ideally, one relies on multiple repeated measurements at different times, and a linear random-effects model is a standard approach if more than two waves of data are available. If only two waves of data are available, our simulations show that Blomqvist's method - which consists of adjusting the estimated regression coefficient of observed change on baseline value for the measurement error variance - provides accurate estimates. The adequacy of the methods to assess the relationship between the baseline value and subsequent change depends on the number of data waves, the availability of information on measurement error, and the variability of change between individuals.
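A minimal simulation sketch of a Blomqvist-type correction for two-wave data is given below, assuming the measurement error variance is known; the data-generating values and the simple linear-growth process are illustrative, not those from the paper.

```python
# Two-wave baseline-change analysis: naive slope vs. measurement-error-corrected slope.
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
true_slope = -0.30                       # true regression of change on true baseline
var_true, var_err = 4.0, 1.0             # variance of true baseline and of measurement error

baseline_true = rng.normal(0, np.sqrt(var_true), n)
change_true = true_slope * baseline_true + rng.normal(0, 1.0, n)
baseline_obs = baseline_true + rng.normal(0, np.sqrt(var_err), n)
followup_obs = baseline_true + change_true + rng.normal(0, np.sqrt(var_err), n)
change_obs = followup_obs - baseline_obs

# Naive regression of observed change on observed baseline is biased downwards
b_naive = np.cov(change_obs, baseline_obs)[0, 1] / np.var(baseline_obs)

# Correction using the reliability of the baseline measurement (assumed known here)
reliability = var_true / (var_true + var_err)
b_corrected = (b_naive + (1 - reliability)) / reliability

print(f"naive: {b_naive:.3f}, corrected: {b_corrected:.3f}, true: {true_slope}")
```

With these values the naive slope is about -0.44 while the corrected slope recovers -0.30, which is the attenuation-plus-coupling bias the abstract refers to.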
Abstract:
This thesis consists of four essays in equilibrium asset pricing. The main topic is investor heterogeneity: I investigate the equilibrium implications for financial markets when investors have different attitudes toward risk. The first chapter studies why expected risk and remuneration on the aggregate market are negatively related, even though intuition and standard theory suggest a positive relation. I show that the negative trade-off can obtain in equilibrium if investors' beliefs about economic fundamentals are procyclically biased and the market Sharpe ratio is countercyclical. I verify that such conditions hold in real markets, and I find empirical support for the risk-return dynamics predicted by the model. The second chapter consists of two essays. The first essay studies how heterogeneity in risk preferences interacts with other sources of heterogeneity and how this affects asset prices in equilibrium. Using perceived macroeconomic uncertainty as the source of heterogeneity, the model helps to explain some patterns of financial returns, even if heterogeneity is small, as suggested by survey data. The second essay determines conditions such that equilibrium prices have analytical solutions when investors have heterogeneous risk attitudes and macroeconomic fundamentals feature latent uncertainty. This approach provides additional insights relative to the previous literature, where models require numerical solutions. The third chapter studies why equity claims (i.e. assets paying a single future dividend) feature premia and risk decreasing with the horizon, even though standard models imply the opposite shape. I show that labor relations help to explain the puzzle. When workers have bargaining power to exploit partial income insurance within the firm, wages are smoother and dividends are riskier than in a standard economy. Distributional risk between workers and shareholders provides a rationale for short-term equity risk, which leads to downward-sloping term structures of premia and risk for equity claims.

Résumé: This thesis consists of four essays in equilibrium asset pricing. The main topic is investor heterogeneity: I study the equilibrium implications for financial markets when investors have different attitudes toward risk. The first chapter studies why expected risk and remuneration on the aggregate market are negatively related, even though intuition and standard theory suggest a positive relation. I show that the negative trade-off can obtain in equilibrium if investors' beliefs about economic fundamentals are procyclically biased and the market Sharpe ratio is countercyclical. I verify that these conditions hold in real markets, and I find empirical support for the risk-return dynamics predicted by the model. The second chapter consists of two essays. The first essay studies how heterogeneity in risk preferences interacts with other sources of heterogeneity and how this affects asset prices in equilibrium. Using perceived macroeconomic uncertainty as the source of heterogeneity, the model helps to explain some patterns of financial returns, even if heterogeneity is small, as suggested by survey data. The second essay determines conditions under which equilibrium prices have analytical solutions when investors have heterogeneous risk attitudes and macroeconomic fundamentals feature latent uncertainty. This approach provides additional insight relative to the previous literature, where models require numerical solutions. The third chapter studies why equity claims (assets paying a single future dividend) have premia and risk that decrease with the horizon, even though standard models imply the opposite shape. I show that labor relations help to explain the puzzle. When workers have the bargaining power to exploit partial income insurance within the firm, wages are smoother and dividends are riskier than in a standard economy. Distributional risk between workers and shareholders provides a rationale for short-term equity risk, which leads to downward-sloping term structures of premia and risk for equity claims.
Abstract:
Given the very large amount of data obtained every day through population surveys, much new research could use this information instead of collecting new samples. Unfortunately, relevant data are often disseminated across different files obtained through different sampling designs. Data fusion is a set of methods used to combine information from different sources into a single dataset. In this article, we are interested in a specific problem: the fusion of two data files, one of which is quite small. We propose a model-based procedure combining a logistic regression with an Expectation-Maximization algorithm. Results show that, despite the lack of data, this procedure can perform better than standard matching procedures.
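A much-simplified, illustrative stand-in for the fusion idea is sketched below (it omits the EM step entirely): a logistic model fitted on the small donor file, which observes both the common covariates and the target variable, is used to impute the target into the large recipient file. All variable names and data are invented.

```python
# Simplified model-based data fusion of a small donor file into a large recipient file.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

def make_file(n):
    age = rng.uniform(20, 70, n)
    income = rng.normal(30 + 0.3 * age, 8, n)
    p = 1 / (1 + np.exp(-(-6 + 0.05 * age + 0.08 * income)))
    target = rng.binomial(1, p)               # observed only in the small file in practice
    return np.column_stack([age, income]), target

X_small, y_small = make_file(400)             # small donor file (covariates + target)
X_large, _ = make_file(20_000)                # large recipient file (target unobserved)

model = LogisticRegression().fit(X_small, y_small)
p_hat = model.predict_proba(X_large)[:, 1]
imputed = rng.binomial(1, p_hat)              # stochastic imputation preserves variability

print("imputed prevalence in the large file:", round(float(imputed.mean()), 3))
```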
Abstract:
In CoDaWork’05, we presented an application of discriminant function analysis (DFA) to 4 different compositional datasets and modelled the first canonical variable using a segmented regression model, based solely on an observation about the scatter plots. In this paper, multiple linear regressions are applied to different datasets to confirm the validity of our proposed model. In addition to dating the unknown tephras by calibration, as discussed previously, another method is proposed for mapping the unknown tephras onto samples of the reference set or onto missing samples between consecutive reference samples. The application of these methodologies is demonstrated with both simulated and real datasets. This new proposed methodology provides an alternative, more acceptable approach for geologists, as their focus is on mapping the unknown tephra to relevant eruptive events rather than estimating the age of the unknown tephra. Key words: Tephrochronology; Segmented regression
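For concreteness, a minimal sketch of a two-segment (broken-stick) regression fitted by grid-searching the breakpoint follows; the data and the single-breakpoint assumption are invented and are not the compositional tephra data used in the paper.

```python
# Two-segment continuous piecewise-linear regression via breakpoint grid search.
import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, 200))
y = np.where(x < 4, 1 + 0.5 * x, 3 - 1.2 * (x - 4)) + rng.normal(0, 0.3, 200)

def fit_segmented(x, y, breakpoint):
    # basis: intercept, x, hinge term (x - c)+ so the fit is continuous at the breakpoint
    X = np.column_stack([np.ones_like(x), x, np.maximum(x - breakpoint, 0)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    return beta, rss

candidates = np.linspace(1, 9, 81)
best_c = min(candidates, key=lambda c: fit_segmented(x, y, c)[1])
beta, _ = fit_segmented(x, y, best_c)
print(f"estimated breakpoint: {best_c:.2f}, slopes: {beta[1]:.2f} and {beta[1] + beta[2]:.2f}")
```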
Abstract:
The presence of the wolf in Catalonia has been documented since 2000. Since then, at least 14 different wolves have entered and left Catalan territory, although none of them has settled permanently. The study analyses the Catalan environment using GIS, building a habitat suitability model that takes into account the following variables: distance to the nearest road, available biomass in the area, altitude, and land cover type and percentage. The model is based on information obtained by consulting experts on both the wolf and the Catalan territory, as well as on a literature review of wolf habitat suitability. The questionnaire addressed to the experts considers the values each variable can take within the study area, establishes ranges for the values of each variable, and asks the experts how each range may affect habitat suitability for the wolf. The results show that much of northern Catalonia offers suitable conditions for the wolf to eventually breed there. An analysis of potential human-wolf conflict points is also carried out, together with an overlay of protected areas and the areas most suitable for wolf establishment.
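A schematic numpy sketch of this kind of expert-based suitability overlay is given below; the rasters, score tables, and equal weighting are invented for illustration and are not the study's actual values.

```python
# Expert-score habitat-suitability overlay on toy raster layers.
import numpy as np

rng = np.random.default_rng(6)
shape = (100, 100)                                    # toy raster grid
road_dist = rng.uniform(0, 5000, shape)               # metres to nearest road
biomass = rng.uniform(0, 300, shape)                  # available prey biomass (arbitrary units)
altitude = rng.uniform(200, 3000, shape)              # metres a.s.l.
forest_cover = rng.uniform(0, 100, shape)             # percent cover

def score(values, bins, scores):
    # map each cell to an expert suitability score (0-1) according to its value range
    return np.asarray(scores)[np.digitize(values, bins)]

suitability = np.mean([
    score(road_dist, [500, 2000], [0.1, 0.6, 1.0]),    # farther from roads is better
    score(biomass, [50, 150], [0.2, 0.6, 1.0]),        # more prey is better
    score(altitude, [800, 2200], [0.4, 1.0, 0.5]),     # mid-altitudes preferred
    score(forest_cover, [25, 60], [0.2, 0.7, 1.0]),    # more cover is better
], axis=0)

print("share of highly suitable cells:", round(float((suitability > 0.8).mean()), 3))
```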
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter-dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at www.flow-r.org), and has been successfully applied to different case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations in the DEM and avoids over-channelization, and so produces more realistic extents. The choice of datasets and algorithms is open to the user, which makes the model adaptable to various applications and levels of dataset availability. Amongst the possible datasets, the DEM is the only one that is really needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
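To illustrate the kind of spreading rule involved, the sketch below implements the classic Holmgren-style multiple-flow-direction weighting on a 3x3 window, where flow is shared among downslope neighbours in proportion to tan(slope) raised to an exponent. This is the original rule, not Flow-R's modified version, and the exponent and cell size are illustrative assumptions.

```python
# Holmgren-style multiple-flow-direction weighting on a single 3x3 DEM window.
import numpy as np

def holmgren_weights(window, cell_size=10.0, exponent=4.0):
    """window: 3x3 array of elevations; returns a 3x3 array of flow proportions."""
    center = window[1, 1]
    weights = np.zeros((3, 3))
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue
            dist = cell_size * np.hypot(di, dj)
            tan_slope = (center - window[1 + di, 1 + dj]) / dist
            if tan_slope > 0:                       # only downslope neighbours receive flow
                weights[1 + di, 1 + dj] = tan_slope ** exponent
    total = weights.sum()
    return weights / total if total > 0 else weights  # flat or pit cell: no outflow

window = np.array([[102.0, 101.0, 100.5],
                   [103.0, 101.5, 100.0],
                   [104.0, 102.0, 101.0]])
print(np.round(holmgren_weights(window), 3))
```

A larger exponent concentrates the flow into the steepest direction; a smaller one spreads it, which is the lever the over-channelization discussion above is about.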
Abstract:
Using data from the Spanish household budget survey, we investigate life-cycle effects on several product expenditures. A latent-variable model approach is adopted to evaluate the impact of income on expenditures, controlling for the number of members in the family. Two latent factors underlying repeated measures of monetary and non-monetary income are used as explanatory variables in the expenditure regression equations, thus avoiding possible bias associated with measurement error in income. The proposed methodology also handles the case in which product expenditures exhibit a pattern of infrequent purchases. Multiple-group analysis is used to assess the variation of key parameters of the model across various household life-cycle typologies. The analysis discloses significant life-cycle effects on the mean levels of expenditures; it also detects significant life-cycle effects on the way expenditures are affected by income and family size. Asymptotically robust methods are used to account for possible non-normality of the data.
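A tiny illustration of why repeated income measures help is given below: using one noisy income indicator as an instrument for another removes the attenuation bias that a single error-ridden measure would cause. This is a stand-in for the latent-factor approach, not the paper's model, and all numbers are invented.

```python
# Attenuation bias from measurement error, and its removal with a second indicator.
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
true_income = rng.normal(0, 1, n)
income_a = true_income + rng.normal(0, 0.7, n)      # first noisy indicator
income_b = true_income + rng.normal(0, 0.7, n)      # second noisy indicator, independent error
expenditure = 2.0 * true_income + rng.normal(0, 1, n)

beta_ols = np.cov(expenditure, income_a)[0, 1] / np.var(income_a)                  # attenuated
beta_iv = np.cov(expenditure, income_b)[0, 1] / np.cov(income_a, income_b)[0, 1]   # consistent

print(f"OLS on one indicator: {beta_ols:.2f}, IV with second indicator: {beta_iv:.2f}, true: 2.0")
```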
Abstract:
We reformulate the Smets-Wouters (2007) framework by embedding the theory of unemployment proposed in Galí (2011a,b). We estimate the resulting model using postwar U.S. data, while treating the unemployment rate as an additional observable variable. Our approach overcomes the lack of identification of wage markup and labor supply shocks highlighted by Chari, Kehoe and McGrattan (2008) in their criticism of New Keynesian models, and allows us to estimate a "correct" measure of the output gap. In addition, the estimated model can be used to analyze the sources of unemployment fluctuations.
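As a pointer to why observing unemployment helps identification, the key relation from Galí's framework is sketched below. It is stated from memory under log consumption utility and should be read as background on the cited theory rather than as an equation reproduced from this paper.

```latex
% Participation condition of the marginal household member vs. wage-setting condition,
% in log deviations, with \varphi the curvature of labour disutility (log utility assumed):
w_t - p_t = c_t + \varphi\, l_t               % labour-force participation margin
w_t - p_t = \mu^{w}_t + c_t + \varphi\, n_t   % real wage = markup over the MRS of the employed
% Subtracting, with u_t \equiv l_t - n_t the unemployment rate (log approximation):
\mu^{w}_t = \varphi\, u_t
% Observed unemployment therefore pins down the wage markup, separating wage markup
% shocks from labour supply shocks in estimation.
```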