869 results for estimating equations
Abstract:
This paper provides a method to estimate time-varying coefficient structural VARs which are non-recursive and potentially overidentified. The procedure allows for linear and non-linear restrictions on the parameters, maintains the multi-move structure of standard algorithms, and can be used to estimate structural models with different identification restrictions. We study the transmission of monetary policy shocks and compare the results with those obtained with traditional methods.
Abstract:
In many areas of economics there is a growing interest in how expertise and preferences drive individual and group decision making under uncertainty. Increasingly, we wish to estimate such models to quantify which of these drive decision making. In this paper we propose a new channel through which we can empirically identify expertise and preference parameters by using variation in decisions over heterogeneous priors. Relative to existing estimation approaches, our "Prior-Based Identification" extends the possible environments which can be estimated, and also substantially improves the accuracy and precision of estimates in those environments which can be estimated using existing methods.
Abstract:
This paper shows that the distribution of observed consumption is not a good proxy for the distribution of heterogeneous consumers when the current tariff is an increasing block tariff. We use a two-step method to recover the "true" distribution of consumers. First, we estimate the demand function induced by the current tariff. Second, using the demand system, we specify the distribution of consumers as a function of observed consumption to recover the true distribution. Finally, we design a new two-part tariff which allows us to evaluate the equity of the existing increasing block tariff.
Abstract:
We propose a new econometric estimation method for analyzing the probability of leaving unemployment using uncompleted spells from repeated cross-section data, which can be especially useful when panel data are not available. The proposed method-of-moments-based estimator has two important features: (1) it estimates the exit probability at the individual level, and (2) it does not rely on the stationarity assumption of the inflow composition. We illustrate and gauge the performance of the proposed estimator using the Spanish Labor Force Survey data, and analyze the changes in the distribution of unemployment between the 1980s and 1990s during a period of labor market reform. We find that the relative probability of leaving unemployment of the short-term unemployed versus the long-term unemployed becomes significantly higher in the 1990s.
Abstract:
We use CEX repeated cross-section data on consumption and income to evaluate the nature of increased income inequality in the 1980s and 90s. We decompose unexpected changes in family income into transitory and permanent, and into idiosyncratic and aggregate, components, and estimate the contribution of each component to total inequality. The model we use is a linearized incomplete-markets model, enriched to incorporate risk sharing while maintaining tractability. Our estimates suggest that taking risk sharing into account is important for the model fit; that the increase in inequality in the 1980s was mainly permanent; and that inequality is driven almost entirely by idiosyncratic income risk. In addition, we find no evidence of cyclical behavior of consumption risk, casting doubt on Constantinides and Duffie's (1995) explanation for the equity premium puzzle.
Abstract:
This paper describes a methodology to estimate the coefficients, to test specification hypotheses, and to conduct policy exercises in multi-country VAR models with cross-unit interdependencies, unit-specific dynamics, and time variations in the coefficients. The framework of analysis is Bayesian: a prior flexibly reduces the dimensionality of the model and puts structure on the time variations; MCMC methods are used to obtain posterior distributions; and marginal likelihoods are used to check the fit of various specifications. Impulse responses and conditional forecasts are obtained from the output of the MCMC routine. The transmission of certain shocks across countries is analyzed.
Abstract:
Background: Alcohol is a major risk factor for the burden of disease and injuries globally. This paper presents a systematic method to compute the 95% confidence intervals of alcohol-attributable fractions (AAFs) with exposure and risk relations stemming from different sources. Methods: The computation was based on previous work on modelling drinking prevalence using the gamma distribution and the inherent properties of this distribution. A Monte Carlo approach was applied to derive the variance for each AAF by generating random sets of all the parameters; a large number of random samples were thus created for each AAF to estimate variances. The derivation of the distributions of the different parameters is presented, as well as sensitivity analyses which give an estimate of the number of samples required to determine the variance with predetermined precision, and which determine which parameter had the most impact on the variance of the AAFs. Results: The analysis of the five Asian regions showed that 150 000 samples gave a sufficiently accurate estimation of the 95% confidence intervals for each disease. The relative risk functions accounted for most of the variance in the majority of cases. Conclusions: Within reasonable computation time, the method yielded very accurate values for the variances of AAFs.
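The Monte Carlo variance procedure described in that abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the prevalence, mean-intake, and log-relative-risk values (and their standard errors) are hypothetical, and the gamma-distributed consumption model is collapsed to Levin's single-category AAF formula.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical point estimates and standard errors (illustrative only,
# not values from the paper):
p_hat, p_se = 0.30, 0.02      # drinking prevalence
m_hat, m_se = 20.0, 2.0       # mean intake (g/day)
b_hat, b_se = 0.015, 0.003    # log relative-risk slope per g/day

n = 150_000                   # sample count the paper found sufficient

# Draw random sets of all parameters, then compute one AAF per draw
# via Levin's formula: AAF = p*(RR - 1) / (p*(RR - 1) + 1).
p = rng.normal(p_hat, p_se, n)
m = rng.normal(m_hat, m_se, n)
b = rng.normal(b_hat, b_se, n)
excess = p * (np.exp(b * m) - 1.0)
aaf = excess / (excess + 1.0)

# Empirical variance and 95% confidence interval of the AAF.
var_aaf = aaf.var()
lo95, hi95 = np.percentile(aaf, [2.5, 97.5])
```

The percentile interval directly mirrors the paper's idea of estimating AAF uncertainty from many random parameter sets rather than from a closed-form variance.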
Abstract:
Any electoral system has an electoral formula that converts vote proportions into parliamentary seats. Pre-electoral polls usually focus on estimating vote proportions and then applying the electoral formula to give a forecast of the parliament's composition. We describe the problems arising from this approach: there is always a bias in the forecast. We study the origin of the bias and some methods to evaluate and reduce it. We propose some rules to compute the sample size required for a given forecast accuracy. We show by Monte Carlo simulation the performance of the proposed methods using data from recent Spanish elections. We also propose graphical methods to visualize how electoral formulae and parliamentary forecasts work (or fail).
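The nonlinearity of the electoral formula is what turns small vote-share errors into seat-forecast bias. As a sketch (party names and vote counts below are made up), here is the D'Hondt highest-averages rule used in Spanish general elections:

```python
def dhondt(votes, seats):
    """Allocate `seats` by the D'Hondt highest-averages formula.

    votes: dict mapping party -> vote count.
    Returns a dict mapping party -> seats won.
    """
    alloc = {party: 0 for party in votes}
    for _ in range(seats):
        # The next seat goes to the party with the largest quotient
        # votes / (seats already won + 1).
        winner = max(votes, key=lambda p: votes[p] / (alloc[p] + 1))
        alloc[winner] += 1
    return alloc

# Hypothetical district: shifting a few thousand votes between parties
# can flip the last seat, which is why seat forecasts are biased even
# when vote-share estimates are unbiased.
result = dhondt({"A": 340_000, "B": 280_000, "C": 160_000, "D": 60_000}, 7)
```

Simulating noisy vote shares through a formula like this is exactly the kind of Monte Carlo exercise the abstract describes.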
Abstract:
We study the statistical properties of three estimation methods for a model of learning that is often fitted to experimental data: quadratic deviation measures without unobserved heterogeneity, and maximum likelihood with and without unobserved heterogeneity. After discussing identification issues, we show that the estimators are consistent and provide their asymptotic distribution. Using Monte Carlo simulations, we show that ignoring unobserved heterogeneity can lead to seriously biased estimates in samples which have the typical length of actual experiments. Better small-sample properties are obtained if unobserved heterogeneity is introduced. That is, rather than estimating the parameters for each individual, the individual parameters are considered random variables, and the distribution of those random variables is estimated.
Abstract:
International industry data permit testing whether the industry-specific impact of cross-country differences in institutions or policies is consistent with economic theory. Empirical implementation requires specifying the industry characteristics that determine impact strength. Most of the literature has been using US proxies for the relevant industry characteristics. We show that using industry characteristics in a benchmark country as a proxy for the relevant industry characteristics can result in an attenuation bias or an amplification bias. We also describe circumstances allowing for an alternative approach that yields consistent estimates. As an application, we reexamine the influential conjecture that financial development facilitates the reallocation of capital from declining to expanding industries.
Abstract:
The treatments for ischemic stroke can only be administered in a narrow time-window. However, the ischemia onset time is unknown in ~30% of stroke patients (wake-up strokes). The objective of this study was to determine whether MR spectra of ischemic brains might allow the precise estimation of cerebral ischemia onset time. We modeled ischemic stroke in male ICR-CD1 mice using a permanent middle cerebral artery filament occlusion model with laser Doppler control of the regional cerebral blood flow. Mice were then subjected to repeated MRS measurements of the ipsilateral striatum at 14.1 T. A striking initial increase in γ-aminobutyric acid (GABA) and no increase in glutamine were observed. A steady decline was observed for taurine (Tau) and N-acetyl-aspartate (NAA), and similarly for the sum of NAA+Tau+glutamate, which mimicked an exponential function. The estimation of the time of onset of permanent ischemia within 6 hours in a blinded experiment with mice showed an accuracy of 33±10 minutes. A plot of GABA, Tau, and neuronal marker concentrations against the ratio of acetate/NAA allowed precise separation of mice whose ischemia onset lay within arbitrarily chosen time-windows. We conclude that ¹H-MRS has the potential to detect the clinically relevant time of onset of ischemic stroke.
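The onset-time estimate rests on inverting a fitted exponential decay of the metabolite signal. A minimal sketch, assuming a single-exponential model; the baseline concentration and time constant below are hypothetical, not the values fitted in the mouse experiments:

```python
import math

# Hypothetical decay parameters for the NAA+Tau+glutamate sum
# (illustrative only, not the paper's fitted values):
C0 = 20.0    # pre-ischemic concentration, arbitrary units
TAU = 6.0    # fitted decay time constant, hours

def onset_hours(measured):
    """Estimate hours since ischemia onset from one MRS measurement,
    assuming the metabolite sum follows C(t) = C0 * exp(-t / TAU)."""
    return TAU * math.log(C0 / measured)
```

Inverting the fitted curve this way propagates measurement noise into the time estimate, which is why a blinded accuracy figure (here, 33±10 minutes) is the natural way to validate the approach.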
Abstract:
The identification of associations between interleukin-28B (IL-28B) variants and the spontaneous clearance of hepatitis C virus (HCV) raises the issues of causality and the net contribution of host genetics to the trait. To estimate more precisely the net effect of IL-28B genetic variation on HCV clearance, we optimized genotyping and compared the host contributions in multiple- and single-source cohorts to control for viral and demographic effects. The analysis included individuals with chronic or spontaneously cleared HCV infections from a multiple-source cohort (n = 389) and a single-source cohort (n = 71). We performed detailed genotyping in the coding region of IL-28B and searched for copy number variations to identify the genetic variant or haplotype carrying the strongest association with viral clearance. This analysis was used to compare the effects of IL-28B variation in the two cohorts. Haplotypes characterized by carriage of the major alleles at IL-28B single-nucleotide polymorphisms (SNPs) were highly overrepresented in individuals with spontaneous clearance versus those with chronic HCV infections (66.1% versus 38.6%, P = 6 × 10⁻⁹). The odds ratios for clearance were 2.1 [95% confidence interval (CI) = 1.6-3.0] and 3.9 (95% CI = 1.5-10.2) in the multiple- and single-source cohorts, respectively. Protective haplotypes were in perfect linkage (r² = 1.0) with a nonsynonymous coding variant (rs8103142). Copy number variants were not detected. We identified IL-28B haplotypes highly predictive of spontaneous HCV clearance. The high linkage disequilibrium between IL-28B SNPs indicates that association studies need to be complemented by functional experiments to identify single causal variants. The point estimate for the genetic effect was higher in the single-source cohort, which was used to effectively control for viral diversity, sex, and coinfections and, therefore, offered a precise estimate of the net host genetic contribution.
Abstract:
Polychlorinated biphenyls (PCBs) are carcinogenic. Estimating PCB half-life in the body based on levels in sera from exposed workers is complicated by the fact that occupational exposure to PCBs was to commercial PCB products (such as Aroclors 1242 and 1254) comprised of varying mixtures of PCB congeners. Half-lives were estimated using sera donated by 191 capacitor manufacturing plant workers in 1976 during PCB use (1946-1977), and post-exposure (1979, 1983, and 1988). Our aims were to: (1) determine the role of covariates such as gender on the half-life estimates, and (2) compare our results with other published half-life estimates based on exposed workers. All serum PCB levels were adjusted for PCB background levels. A linear spline model with a single knot was used to estimate two separate linear equations for the first two serum draws (Equation A) and the latter two (Equation B). Equation A gave half-life estimates of 1.74 years and 6.01 years for Aroclor 1242 and Aroclor 1254, respectively. Estimates were 21.83 years for Aroclor 1242 and 133.33 years for Aroclor 1254 using Equation B. High initial body burden was associated with rapid PCB elimination in workers at or shortly after the time they were occupationally exposed and slowed down considerably when the dose reached background PCB levels. These concentration-dependent half-life estimates had a transition point of 138.57 and 34.78 ppb for Aroclor 1242 and 1254, respectively. This result will help in understanding the toxicological and epidemiological impact of exposure to PCBs in humans.
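The spline regression behind these estimates can be sketched as follows: fit log-concentration as a linear spline with a single knot, then convert each segment's slope into a half-life via ln(2)/|slope|. The code below is a generic illustration under that model; the knot location and data are synthetic, not the paper's.

```python
import numpy as np

def halflives_spline(t, log_conc, knot):
    """Fit a linear spline in log-concentration with one knot at `knot`
    (years since first draw) and convert the two slopes into
    elimination half-lives.

    Model: log C = b0 + b1*t + b2*max(t - knot, 0),
    so the slope is b1 before the knot and b1 + b2 after it.
    """
    t = np.asarray(t, float)
    X = np.column_stack([np.ones_like(t), t, np.maximum(t - knot, 0.0)])
    b0, b1, b2 = np.linalg.lstsq(X, np.asarray(log_conc, float),
                                 rcond=None)[0]
    hl_before = np.log(2) / -b1          # half-life before the knot
    hl_after = np.log(2) / -(b1 + b2)    # half-life after the knot
    return hl_before, hl_after
```

A short half-life before the knot and a much longer one after is exactly the concentration-dependent elimination pattern the abstract reports.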
Abstract:
Hydrodynamical equations act as a link between the locally observed magnitudes of galactic motion and the general ones accounting for the behaviour of the Galaxy as a whole. Constraints are usually set in order to use them even in the lower orders of the hierarchy. The authors present in this paper the complete expressions up to fourth order. These equations will be used in the near future in their general form, taking into account both the expected increase in kinematic data that the astrometric mission Hipparcos will provide and some recent results indicating the possibility of obtaining estimates for the gradients of the moments.