892 results for "estimating conditional probabilities"
Abstract:
This paper provides a method to estimate time-varying coefficient structural VARs that are non-recursive and potentially overidentified. The procedure allows for linear and non-linear restrictions on the parameters, maintains the multi-move structure of standard algorithms, and can be used to estimate structural models with different identification restrictions. We study the transmission of monetary policy shocks and compare the results with those obtained with traditional methods.
Abstract:
In many areas of economics there is a growing interest in how expertise and preferences drive individual and group decision making under uncertainty. Increasingly, we wish to estimate such models to quantify which of these drive decision making. In this paper we propose a new channel through which we can empirically identify expertise and preference parameters by using variation in decisions over heterogeneous priors. Relative to existing estimation approaches, our "Prior-Based Identification" extends the possible environments which can be estimated, and also substantially improves the accuracy and precision of estimates in those environments which can be estimated using existing methods.
Abstract:
Intraspecific genetic variation for morphological traits is observed in many organisms. In Arabidopsis thaliana, alleles responsible for intraspecific morphological variation are increasingly being identified. However, the fitness consequences remain unclear in most cases. Here, the fitness effects of alleles of the BRX gene are investigated. A brx loss-of-function allele, which was found in a natural accession, results in a highly branched but poorly elongated root system. Comparison between the control accession Sav-0 and an introgression of brx into this background (brxS) indicated that, surprisingly, brx loss of function did not negatively affect fitness in pure stands. However, in mixed, well-watered stands brxS performance and reproductive output decreased significantly, as the proportion of Sav-0 neighbors increased. Additional comparisons between brxS and a brxS line that was complemented by a BRX transgene confirmed a direct effect of the loss-of-function allele on plant performance, as indicated by restored competitive ability of the transgenic genotype. Further, because plant height was very similar across genotypes and because the experimental setup largely excluded shading effects, the impaired competitiveness of the brx loss-of-function genotype likely reflects below-ground competition. In summary, these data reveal conditional fitness effects of a single gene polymorphism in response to intraspecific competition in Arabidopsis.
Abstract:
This paper shows that the distribution of observed consumption is not a good proxy for the distribution of heterogeneous consumers when the current tariff is an increasing block tariff. We use a two-step method to recover the "true" distribution of consumers. First, we estimate the demand function induced by the current tariff. Second, using the demand system, we specify the distribution of consumers as a function of observed consumption to recover the true distribution. Finally, we design a new two-part tariff that allows us to evaluate the equity implications of the existing increasing block tariff.
Abstract:
This paper explores three aspects of strategic uncertainty: its relation to risk, the predictability of behavior, and the subjective beliefs of players. In a laboratory experiment we measure subjects' certainty equivalents for three coordination games and one lottery. Behavior in coordination games is related to risk aversion, experience seeking, and age. From the distribution of certainty equivalents we estimate probabilities for successful coordination in a wide range of games. For many games, success of coordination is predictable with a reasonable error rate. The best response to observed behavior is close to the global-game solution. Comparing choices in coordination games with revealed risk aversion, we estimate subjective probabilities for successful coordination. In games with a low coordination requirement, most subjects underestimate the probability of success. In games with a high coordination requirement, most subjects overestimate this probability. Estimating probabilistic decision models, we show that the quality of predictions can be improved when individual characteristics are taken into account. Subjects' behavior is consistent with probabilistic beliefs about the aggregate outcome, but inconsistent with probabilistic beliefs about individual behavior.
Abstract:
We analyze how unemployment, job finding, and job separation rates react to neutral and investment-specific technology shocks. Neutral shocks increase unemployment and explain a substantial portion of its volatility; investment-specific shocks expand employment and hours worked and contribute to hours worked volatility. Movements in the job separation rate are responsible for the impact response of unemployment, while job finding rates drive movements along its adjustment path. The evidence warns against using models with exogenous separation rates and challenges the conventional way of modelling technology shocks in search and sticky price models.
Abstract:
We propose a new econometric estimation method for analyzing the probability of leaving unemployment using uncompleted spells from repeated cross-section data, which can be especially useful when panel data are not available. The proposed method-of-moments-based estimator has two important features: (1) it estimates the exit probability at the individual level, and (2) it does not rely on the stationarity assumption of the inflow composition. We illustrate and gauge the performance of the proposed estimator using the Spanish Labor Force Survey data, and analyze the changes in the distribution of unemployment between the 1980s and 1990s during a period of labor market reform. We find that the relative probability of leaving unemployment of the short-term unemployed versus the long-term unemployed becomes significantly higher in the 1990s.
Abstract:
We use CEX repeated cross-section data on consumption and income to evaluate the nature of increased income inequality in the 1980s and 90s. We decompose unexpected changes in family income into transitory and permanent, and idiosyncratic and aggregate components, and estimate the contribution of each component to total inequality. The model we use is a linearized incomplete markets model, enriched to incorporate risk sharing while maintaining tractability. Our estimates suggest that taking risk sharing into account is important for the model fit; that the increase in inequality in the 1980s was mainly permanent; and that inequality is driven almost entirely by idiosyncratic income risk. In addition, we find no evidence for cyclical behavior of consumption risk, casting doubt on Constantinides and Duffie's (1995) explanation for the equity premium puzzle.
Abstract:
Background: Alcohol is a major risk factor for burden of disease and injuries globally. This paper presents a systematic method to compute the 95% confidence intervals of alcohol-attributable fractions (AAFs) with exposure and risk relations stemming from different sources. Methods: The computation was based on previous work done on modelling drinking prevalence using the gamma distribution and the inherent properties of this distribution. The Monte Carlo approach was applied to derive the variance for each AAF by generating random sets of all the parameters. A large number of random samples was thus created for each AAF to estimate variances. The derivation of the distributions of the different parameters is presented, as well as sensitivity analyses which give an estimation of the number of samples required to determine the variance with predetermined precision, and determine which parameter had the most impact on the variance of the AAFs. Results: The analysis of the five Asian regions showed that 150 000 samples gave a sufficiently accurate estimation of the 95% confidence intervals for each disease. The relative risk functions accounted for most of the variance in the majority of cases. Conclusions: Within reasonable computation time, the method yielded very accurate values for the variances of AAFs.
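The Monte Carlo step described in this abstract can be sketched as follows. All numerical values, the three exposure categories, and the normal sampling of parameters below are hypothetical illustrations; the paper itself models drinking prevalence with a gamma distribution and uses disease-specific relative-risk functions.

```python
import numpy as np

rng = np.random.default_rng(0)

def aaf(prevalence, rel_risks):
    """Alcohol-attributable fraction for one disease:
    AAF = sum(p_i*(RR_i - 1)) / (sum(p_i*(RR_i - 1)) + 1)."""
    excess = np.sum(prevalence * (rel_risks - 1.0), axis=-1)
    return excess / (excess + 1.0)

# Hypothetical point estimates and uncertainties for three
# exposure categories (e.g. low/medium/high consumption).
p_hat = np.array([0.20, 0.10, 0.05])   # category prevalences
rr_hat = np.array([1.2, 1.8, 3.0])     # relative risks
p_se, rr_se = 0.1 * p_hat, 0.1 * rr_hat

n_samples = 150_000  # the sample count the paper found sufficient
# Draw random parameter sets, clipped to their valid ranges,
# and compute one AAF per draw.
p = np.clip(rng.normal(p_hat, p_se, size=(n_samples, 3)), 0.0, 1.0)
rr = np.maximum(rng.normal(rr_hat, rr_se, size=(n_samples, 3)), 1.0)
draws = aaf(p, rr)

# Percentile-based 95% confidence interval for the AAF.
lo, hi = np.percentile(draws, [2.5, 97.5])
print(f"AAF = {aaf(p_hat, rr_hat):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

The same loop, run once per disease, yields the variances discussed in the abstract; the sensitivity analyses mentioned there correspond to varying `n_samples` and the per-parameter uncertainties.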
Abstract:
Although dispersal is recognized as a key issue in several fields of population biology (such as behavioral ecology, population genetics, metapopulation dynamics or evolutionary modeling), these disciplines focus on different aspects of the concept and often make different implicit assumptions regarding migration models. Using simulations, we investigate how such assumptions translate into effective gene flow and fixation probability of selected alleles. Assumptions regarding migration type (e.g. source-sink, resident pre-emption, or balanced dispersal) and patterns (e.g. stepping-stone versus island dispersal) have large impacts when demes differ in size or selective pressures. The effects of fragmentation, as well as the spatial localization of newly arising mutations, also strongly depend on migration type and patterns. Migration rate also matters: depending on the migration type, fixation probabilities at an intermediate migration rate may lie outside the range defined by the low- and high-migration limits when demes differ in size. Given the extreme sensitivity of fixation probability to characteristics of dispersal, we underline the importance of making explicit (and documenting empirically) the crucial ecological/behavioral assumptions underlying migration models.
Abstract:
Any electoral system has an electoral formula that converts vote proportions into parliamentary seats. Pre-electoral polls usually focus on estimating vote proportions and then applying the electoral formula to give a forecast of the parliament's composition. We describe the problems arising from this approach: there is always a bias in the forecast. We study the origin of the bias and some methods to evaluate and reduce it. We propose rules to compute the sample size required for a given forecast accuracy. We show by Monte Carlo simulation the performance of the proposed methods using data from recent Spanish elections. We also propose graphical methods to visualize how electoral formulae and parliamentary forecasts work (or fail).
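The mechanism this abstract describes can be illustrated with a minimal sketch: apply one common electoral formula (the D'Hondt highest-averages rule) to poll estimates and compare the average seat forecast with the seats implied by the true shares. The vote shares, district size, and poll size below are made up for illustration, and the paper is not limited to this particular formula.

```python
import numpy as np

def dhondt(votes, seats):
    """Allocate `seats` by the D'Hondt highest-averages formula:
    repeatedly give a seat to the party with the largest votes/(s+1)."""
    alloc = np.zeros(len(votes), dtype=int)
    for _ in range(seats):
        quotients = votes / (alloc + 1)
        alloc[np.argmax(quotients)] += 1
    return alloc

# Hypothetical true vote shares for four parties, 10-seat district.
true_shares = np.array([0.42, 0.31, 0.17, 0.10])
true_seats = dhondt(true_shares, 10)

# Forecast bias: apply the formula to poll estimates (multinomial
# samples of size n) and compare the mean forecast with the truth.
rng = np.random.default_rng(1)
n, reps = 1000, 5000
forecasts = np.array([
    dhondt(rng.multinomial(n, true_shares) / n, 10) for _ in range(reps)
])
bias = forecasts.mean(axis=0) - true_seats
print("seats from true shares:", true_seats)
print("mean forecast bias per party:", np.round(bias, 2))
```

Because the seat-allocation map is discontinuous in the vote shares, the expected value of `dhondt(poll)` generally differs from `dhondt(true_shares)`, which is the bias the abstract refers to.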
Abstract:
We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimates in samples of the typical length of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced. That is, rather than estimating the parameters for each individual, the individual parameters are treated as random variables, and the distribution of those random variables is estimated.
Abstract:
Many dynamic revenue management models divide the sale period into a finite number of periods T and assume, invoking a fine-enough grid of time, that each period sees at most one booking request. These Poisson-type assumptions restrict the variability of the demand in the model, but researchers and practitioners have been willing to overlook this for the benefit of tractability. In this paper, we criticize this model from another angle. Estimating the discrete finite-period model poses problems of indeterminacy and non-robustness: arbitrarily fixing T leads to arbitrary control values, while estimating T from data adds an additional layer of indeterminacy. To counter this, we first propose an alternate finite-population model that avoids the problem of fixing T and allows a wider range of demand distributions, while retaining the useful marginal-value properties of the finite-period model. The finite-population model still requires jointly estimating market size and the parameters of the customer purchase model without observing no-purchases. Estimation of market size when no-purchases are unobservable has rarely been attempted in the marketing or revenue management literature. Indeed, we point out that it is akin to the classical statistical problem of estimating the parameters of a binomial distribution with unknown population size and success probability, and hence likely to be challenging. However, when the purchase probabilities are given by a functional form such as a multinomial-logit model, we propose an estimation heuristic that exploits the specification of the functional form, the variety of the offer sets in a typical RM setting, and qualitative knowledge of arrival rates. Finally, we perform simulations to show that the estimator is very promising in obtaining unbiased estimates of population size and the model parameters.
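The classical problem this abstract points to, estimating Binomial(N, p) with both N and p unknown, can be illustrated with a simple method-of-moments sketch. The market size, purchase probability, and sample size below are hypothetical, and this is not the paper's heuristic, which additionally exploits the multinomial-logit structure and the variety of offer sets.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate daily booking counts: an unknown market of N_true customers,
# each purchasing with probability p_true; only the counts are observed.
N_true, p_true = 200, 0.3
counts = rng.binomial(N_true, p_true, size=5000)

# Method of moments for Binomial(N, p) with both parameters unknown:
#   mean = N*p,  variance = N*p*(1-p)
# => p = 1 - variance/mean,  N = mean/p.
m, v = counts.mean(), counts.var()
p_est = 1.0 - v / m
N_est = m / p_est
print(f"estimated N ~ {N_est:.1f}, p ~ {p_est:.3f}")
```

The estimator of N is known to be highly unstable when p is small or the sample is short (small perturbations of the variance swing N wildly), which is consistent with the abstract's remark that the problem is likely to be challenging.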
Abstract:
International industry data permits testing whether the industry-specific impact of cross-country differences in institutions or policies is consistent with economic theory. Empirical implementation requires specifying the industry characteristics that determine impact strength. Most of the literature has been using US proxies of the relevant industry characteristics. We show that using industry characteristics in a benchmark country as a proxy for the relevant industry characteristics can result in an attenuation bias or an amplification bias. We also describe circumstances allowing for an alternative approach that yields consistent estimates. As an application, we reexamine the influential conjecture that financial development facilitates the reallocation of capital from declining to expanding industries.