926 results for Approximate Bayesian computation


Relevance:

20.00%

Publisher:

Abstract:

In many areas of economics there is a growing interest in how expertise and preferences drive individual and group decision making under uncertainty. Increasingly, we wish to estimate such models to quantify which of these drive decision making. In this paper we propose a new channel through which we can empirically identify expertise and preference parameters: variation in decisions over heterogeneous priors. Relative to existing estimation approaches, our "Prior-Based Identification" extends the set of environments which can be estimated, and also substantially improves the accuracy and precision of estimates in those environments which can be estimated using existing methods.
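As a rough illustration of the mechanism, the sketch below simulates an agent whose signal precision ("expertise") and action threshold ("preferences") jointly shape choices, and recovers both parameters by maximum likelihood from decisions taken under heterogeneous priors. The model, parameter names and values are all illustrative assumptions, not the paper's specification.

```python
# Stylized sketch: decisions observed across heterogeneous priors identify
# both expertise (signal precision tau) and preferences (action threshold c).
# All names and values are hypothetical, not taken from the paper.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
tau_true, c_true, n = 4.0, 0.5, 5000
mu0 = rng.normal(0.0, 1.0, n)                    # heterogeneous prior means
s0 = rng.uniform(0.5, 2.0, n)                    # heterogeneous prior spreads

def act_prob(tau, c):
    """P(act) for each decision problem, with the private signal integrated out."""
    w = tau / (tau + 1.0 / s0**2)                # posterior weight on the signal
    cutoff = (c - (1.0 - w) * mu0) / w           # act iff the signal exceeds this
    return norm.sf(cutoff, loc=mu0, scale=np.sqrt(s0**2 + 1.0 / tau))

# Simulate decisions, then recover (tau, c) by grid maximum likelihood.
theta = rng.normal(mu0, s0)
signal = rng.normal(theta, 1.0 / np.sqrt(tau_true))
w = tau_true / (tau_true + 1.0 / s0**2)
act = (w * signal + (1.0 - w) * mu0 > c_true).astype(float)

taus, cs = np.linspace(1.0, 8.0, 29), np.linspace(-0.5, 1.5, 41)
ll = np.array([[np.sum(act * np.log(act_prob(t, cc) + 1e-12)
                       + (1 - act) * np.log(1 - act_prob(t, cc) + 1e-12))
                for cc in cs] for t in taus])
i, j = np.unravel_index(ll.argmax(), ll.shape)
print(f"estimated expertise tau = {taus[i]:.2f}, preference c = {cs[j]:.2f}")
```

With a single fixed prior the two parameters would be confounded in the choice probability; the variation in (mu0, s0) across problems is what separates them.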

Relevance:

20.00%

Publisher:

Abstract:

A new algorithm, the parameterized expectations approach (PEA), for solving dynamic stochastic models under rational expectations is developed, and its advantages and disadvantages are discussed. This algorithm can, in principle, approximate the true equilibrium arbitrarily well. Also, the algorithm works from the Euler equations, so the equilibrium does not have to be cast in the form of a planner's problem. Monte Carlo integration and the absence of grids on the state variables mean that computation costs do not grow exponentially when the number of state variables or exogenous shocks in the economy increases. As an application, we analyze an asset pricing model with endogenous production and study its implications for the time dependence of the volatility of stock returns and for the term structure of interest rates. We argue that this model can generate hump-shaped term structures.
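A minimal sketch of the PEA iteration on a textbook stochastic growth model (not the paper's asset pricing application): the conditional expectation in the Euler equation is replaced by an exponentiated log-linear function of the states, fitted by regression on a simulated path and iterated to a fixed point. All parameter values are illustrative.

```python
# PEA sketch for a standard stochastic growth model: iterate between
# simulating the economy given the parameterized expectation and refitting
# that expectation on the simulated path.
import numpy as np

rng = np.random.default_rng(0)
alpha, beta, delta, gamma, rho, sigma = 0.36, 0.95, 0.10, 1.0, 0.95, 0.02
T = 5000
lntheta = np.zeros(T)
for t in range(1, T):
    lntheta[t] = rho * lntheta[t-1] + rng.normal(0.0, sigma)
theta = np.exp(lntheta)

b = np.zeros(3)                             # ln E_t[.] regressed on (1, ln k, ln theta)
kss = (alpha * beta / (1 - beta * (1 - delta))) ** (1 / (1 - alpha))

for it in range(200):
    k = np.empty(T + 1); k[0] = kss
    c = np.empty(T)
    for t in range(T):
        psi = np.exp(b[0] + b[1] * np.log(k[t]) + b[2] * lntheta[t])
        c[t] = (beta * psi) ** (-1 / gamma)
        y = theta[t] * k[t] ** alpha + (1 - delta) * k[t]
        c[t] = min(c[t], 0.99 * y)          # keep the simulated path feasible
        k[t+1] = y - c[t]
    # realized Euler-equation term whose conditional expectation psi targets
    e = c[1:] ** (-gamma) * (alpha * theta[1:] * k[1:T] ** (alpha - 1) + 1 - delta)
    X = np.column_stack([np.ones(T - 1), np.log(k[:T-1]), lntheta[:T-1]])
    b_new, *_ = np.linalg.lstsq(X, np.log(e), rcond=None)
    if np.max(np.abs(b_new - b)) < 1e-6:
        break
    b = 0.7 * b + 0.3 * b_new               # damped update for stability
print("fitted expectation coefficients:", b)
```

Note how the simulation itself supplies the integration nodes: no grid on (k, theta) is ever built, which is the source of the favorable scaling the abstract describes.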

Relevance:

20.00%

Publisher:

Abstract:

We address the problem of scheduling a multiclass $M/M/m$ queue with Bernoulli feedback on $m$ parallel servers to minimize time-average linear holding costs. We analyze the performance of a heuristic priority-index rule, which extends Klimov's optimal solution for the single-server case: servers preemptively select the customers with larger Klimov indices. We present closed-form suboptimality bounds (approximate optimality) for Klimov's rule, which imply that its suboptimality gap is uniformly bounded above with respect to (i) external arrival rates, as long as they stay within system capacity; and (ii) the number of servers. It follows that its relative suboptimality gap vanishes in a heavy-traffic limit, as external arrival rates approach system capacity (heavy-traffic optimality). We obtain simpler expressions for the special no-feedback case, where the heuristic reduces to the classical $c \mu$ rule. Our analysis is based on comparing the expected cost of Klimov's rule to the value of a strong linear programming (LP) relaxation of the system's region of achievable performance of mean queue lengths. In order to obtain this relaxation, we derive and exploit a new set of work decomposition laws for the parallel-server system. We further report on the results of a computational study on the quality of the $c \mu$ rule for parallel scheduling.
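The no-feedback special case is easy to simulate. The sketch below runs the $c \mu$ priority rule on a three-class $M/M/3$ system as a continuous-time Markov chain (preemption is harmless here because exponential services are memoryless) and estimates the time-average holding cost; all rates and costs are illustrative, not taken from the paper.

```python
# Simulate the c-mu priority rule (no-feedback special case of Klimov's
# index rule) for a multiclass M/M/m queue; estimate time-average cost.
import numpy as np

rng = np.random.default_rng(1)
m = 3                                   # parallel servers
lam = np.array([0.8, 0.9, 0.7])         # class arrival rates
mu = np.array([1.0, 1.2, 0.8])          # class service rates
c = np.array([3.0, 1.0, 2.0])           # holding cost per customer per unit time
order = np.argsort(-c * mu)             # serve larger c_i * mu_i first

def in_service(n):
    """Allocate the m servers preemptively by the c-mu index."""
    served = np.zeros_like(n)
    free = m
    for i in order:
        served[i] = min(n[i], free)
        free -= served[i]
    return served

n = np.zeros(3, dtype=int)              # queue-length state per class
t, horizon, cost_area = 0.0, 50000.0, 0.0
while t < horizon:
    s = in_service(n)
    rates = np.concatenate([lam, s * mu])   # arrivals, then service completions
    total = rates.sum()
    dt = rng.exponential(1.0 / total)
    cost_area += (c @ n) * dt               # accumulate integral of holding cost
    t += dt
    j = rng.choice(6, p=rates / total)
    if j < 3:
        n[j] += 1                            # class-j arrival
    else:
        n[j - 3] -= 1                        # class-(j-3) departure
print("time-average holding cost:", cost_area / horizon)
```

The offered load here is sum(lam/mu) = 2.425 < 3 servers, so the system is stable and the long-run average is well defined.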

Relevance:

20.00%

Publisher:

Abstract:

Consider the problem of testing k hypotheses simultaneously. In this paper, we discuss finite and large sample theory of stepdown methods that provide control of the familywise error rate (FWE). In order to improve upon the Bonferroni method or Holm's (1979) stepdown method, Westfall and Young (1993) make effective use of resampling to construct stepdown methods that implicitly estimate the dependence structure of the test statistics. However, their methods depend on an assumption called subset pivotality. The goal of this paper is to construct general stepdown methods that do not require such an assumption. In order to accomplish this, we take a close look at what makes stepdown procedures work; a key component is a monotonicity requirement on critical values. By imposing such monotonicity on estimated critical values (which is not an assumption on the model but an assumption on the method), it is demonstrated that the problem of constructing a valid multiple test procedure which controls the FWE can be reduced to the problem of constructing a single test which controls the usual probability of a Type 1 error. This reduction allows us to draw upon an enormous resampling literature as a general means of test construction.
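A minimal sketch of a resampling-based stepdown procedure in this spirit, testing k means against zero with bootstrap critical values; it illustrates the general construction, not the paper's exact method. Monotonicity of the estimated critical values, the key requirement discussed above, is imposed explicitly.

```python
# Bootstrap stepdown sketch for FWE control: at each step the critical value
# is the (1-alpha) quantile of the max |t|-statistic over the hypotheses not
# yet rejected, recomputed on centered bootstrap resamples.
import numpy as np

rng = np.random.default_rng(2)
n, k, B, alpha = 200, 10, 2000, 0.05
X = rng.normal(0.0, 1.0, (n, k))
X[:, :3] += 0.4                           # three false null hypotheses

tstat = np.sqrt(n) * X.mean(0) / X.std(0, ddof=1)
Xc = X - X.mean(0)                        # centering imposes the null in resampling
boot = np.empty((B, k))
for b in range(B):
    Xb = Xc[rng.integers(0, n, n)]
    boot[b] = np.sqrt(n) * Xb.mean(0) / Xb.std(0, ddof=1)

order = np.argsort(-np.abs(tstat))        # most significant hypothesis first
rejected = np.zeros(k, dtype=bool)
prev_crit = np.inf
for j in range(k):
    remaining = order[j:]                 # hypotheses not yet rejected
    crit = np.quantile(np.abs(boot[:, remaining]).max(1), 1 - alpha)
    crit = min(crit, prev_crit)           # critical values must not grow as the set shrinks
    prev_crit = crit
    if np.abs(tstat[order[j]]) > crit:
        rejected[order[j]] = True
    else:
        break                             # stepdown stops at the first failure
print("rejected hypotheses:", np.where(rejected)[0])
```

For max statistics the monotonicity is automatic up to simulation noise, since the maximum over a subset is never larger than over the full set; the explicit `min` enforces it exactly, as an assumption on the method rather than on the model.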

Relevance:

20.00%

Publisher:

Abstract:

This paper analyses and discusses arguments that emerge from a recent discussion about the proper assessment of the evidential value of correspondences observed between the characteristics of a crime stain and those of a sample from a suspect when (i) the suspect is found as a result of a database search and (ii) the remaining database members are excluded as potential sources (because of different analytical characteristics). Using a graphical probability approach (i.e., Bayesian networks), the paper clarifies that there is no need to (i) introduce a correction factor equal to the size of the searched database (i.e., to reduce a likelihood ratio), nor to (ii) adopt a propositional level not directly related to the suspect matching the crime stain (i.e., a proposition of the kind 'some person in (outside) the database is the source of the crime stain' rather than 'the suspect (some other person) is the source of the crime stain'). The present research thus confirms existing literature on the topic, which has repeatedly demonstrated that requirements (i) and (ii) are not a cause of concern.
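A small numerical check makes the point concrete. Under a uniform prior over N potential sources with characteristic frequency gamma, the posterior odds that the suspect is the source after a database search (one match, n-1 exclusions) are essentially the same as, indeed slightly larger than, after a single-suspect match, whereas dividing by the database size would understate them by orders of magnitude. All figures are illustrative.

```python
# Numeric sketch of the point the abstract confirms: excluding the other
# n-1 database members removes alternative sources, so the database search
# does not weaken the evidence against the matching suspect.
N = 1_000_000        # potential sources in the relevant population
n = 10_000           # database size
gamma = 1e-5         # frequency of the matching characteristics

# Likelihood of the findings (suspect matches, n-1 members excluded) given:
L_suspect = 1.0 * (1 - gamma) ** (n - 1)       # the suspect is the source
L_outside = gamma * (1 - gamma) ** (n - 1)     # an outside person is the source
# (an excluded database member would have matched: likelihood 0)

post_odds_search = L_suspect / ((N - n) * L_outside)
post_odds_plain = 1.0 / ((N - 1) * gamma)      # single-suspect match, no search
post_odds_naive = post_odds_plain / n          # the disputed 1/n "correction"

print(f"posterior odds after database search: {post_odds_search:.3f}")
print(f"posterior odds, single-suspect match: {post_odds_plain:.3f}")
print(f"odds if divided by database size:     {post_odds_naive:.6f}")
```

The first two figures differ only through N-n versus N-1 alternative sources, which is the formal content of the claim that no correction factor of size n is needed.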

Relevance:

20.00%

Publisher:

Abstract:

The generalization of simple correspondence analysis, for two categorical variables, to multiple correspondence analysis, where there may be three or more variables, is not straightforward from either a mathematical or a computational point of view. In this paper we detail the exact computational steps involved in performing a multiple correspondence analysis, including the special aspects of adjusting the principal inertias to correct the percentages of inertia, supplementary points and subset analysis. Furthermore, we give the algorithm for joint correspondence analysis, where the cross-tabulations of all unique pairs of variables are analysed jointly. The code in the R language for every step of the computations is given, as well as the results of each computation.
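The sketch below traces the basic computational steps in Python (the paper itself gives R code): build the indicator matrix, run a correspondence analysis on it via the SVD, and apply the standard adjustment of the principal inertias; supplementary points, subset analysis and joint correspondence analysis are omitted, and the data are simulated.

```python
# MCA sketch via the indicator matrix, with adjusted principal inertias.
import numpy as np

rng = np.random.default_rng(3)
n, levels = 300, [3, 4, 2]                # three categorical variables
data = np.column_stack([rng.integers(0, J, n) for J in levels])

# Step 1: build the indicator (disjunctive) matrix Z.
Z = np.concatenate([np.eye(J)[data[:, q]] for q, J in enumerate(levels)], axis=1)

# Step 2: correspondence analysis of Z.
P = Z / Z.sum()
r, c = P.sum(1), P.sum(0)                 # row and column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))   # standardized residuals
U, sv, Vt = np.linalg.svd(S, full_matrices=False)
inertias = sv ** 2                        # principal inertias of the indicator CA

# Step 3: adjust the inertias (only axes with inertia > 1/Q are retained).
Q = len(levels)
keep = inertias > 1.0 / Q
adjusted = (Q / (Q - 1.0)) ** 2 * (inertias[keep] - 1.0 / Q) ** 2

# Step 4: principal coordinates of the categories on the retained axes.
col_coords = (Vt.T / np.sqrt(c)[:, None])[:, keep] * sv[keep]

print("raw inertias:     ", np.round(inertias[:4], 4))
print("adjusted inertias:", np.round(adjusted, 4))
```

The adjustment step is exactly the correction of the percentages of inertia the abstract mentions: raw indicator-matrix inertias badly understate the explained variance, and the rescaling repairs this.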

Relevance:

20.00%

Publisher:

Abstract:

We provide methods for forecasting variables and predicting turning points in panel Bayesian VARs. We specify a flexible model which accounts for both interdependencies in the cross section and time variations in the parameters. Posterior distributions for the parameters are obtained for a particular type of diffuse prior, for Minnesota-type priors and for hierarchical priors. Formulas for multistep, multiunit point and average forecasts are provided. An application to the problem of forecasting the growth rate of output and of predicting turning points in the G-7 illustrates the approach. A comparison with alternative forecasting methods is also provided.
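A minimal single-unit sketch of the Minnesota-type ingredient: each VAR equation gets a normal prior shrinking own first lags toward one and everything else toward zero, the conjugate posterior mean is computed in closed form, and a one-step-ahead point forecast follows. The panel interdependencies, time-varying parameters and turning-point machinery of the paper are omitted; all data and hyperparameters are illustrative.

```python
# Single-unit Bayesian VAR sketch with a Minnesota-type prior and a
# one-step-ahead point forecast.
import numpy as np

rng = np.random.default_rng(4)
T, k, p, lam, th = 200, 2, 2, 0.2, 0.5     # lam: overall tightness, th: cross-variable

# Simulated data standing in for, e.g., two G-7 growth series.
Y = np.zeros((T, k))
A1 = np.array([[0.5, 0.1], [0.05, 0.4]])
for t in range(1, T):
    Y[t] = Y[t-1] @ A1.T + rng.normal(0, 0.5, k)

s = Y.std(0)                               # rough scale proxies per series
yall = Y[p:]
Xl = np.hstack([Y[p-l:T-l] for l in range(1, p+1)])
X = np.column_stack([Xl, np.ones(T - p)])  # lags plus intercept
xT = np.concatenate([Y[T-l] for l in range(1, p+1)] + [[1.0]])

forecast = np.empty(k)
for i in range(k):
    b0 = np.zeros(k * p + 1); b0[i] = 1.0  # own first lag shrunk toward 1
    sd = np.empty(k * p + 1)
    for l in range(p):
        for j in range(k):
            sd[l*k + j] = lam / (l+1) if j == i else lam * th * s[i] / ((l+1) * s[j])
    sd[-1] = 10.0 * s[i]                   # loose prior on the intercept
    V0inv = np.diag(1.0 / sd**2)
    s2 = s[i] ** 2                         # crude residual-variance proxy
    Vpost = np.linalg.inv(V0inv + X.T @ X / s2)
    bpost = Vpost @ (V0inv @ b0 + X.T @ yall[:, i] / s2)
    forecast[i] = xT @ bpost
print("one-step-ahead point forecast:", forecast)
```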

Relevance:

20.00%

Publisher:

Abstract:

To provide quantitative support for handwriting evidence evaluation, a new method was developed based on the computation of a likelihood ratio within a Bayesian framework. In the present paper, the methodology is briefly described and applied to data collected within a simulated case involving a threatening letter. Fourier descriptors are used to characterise the shape of loops of handwritten characters "a" from the threatening letter, which are then compared: 1) with reference characters "a" of the true writer of the threatening letter, and 2) with characters "a" of a writer who did not write the threatening letter. The findings show that the probabilistic methodology correctly supports either the hypothesis of authorship or the alternative hypothesis. Further developments will enable the handwriting examiner to use this methodology as helpful assistance in assessing the strength of evidence in handwriting casework.
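The feature-extraction step can be sketched compactly: Fourier descriptors of a closed loop contour, normalized to be invariant to position, scale, rotation and starting point. The Bayesian likelihood-ratio model described above is built on top of such features and is not reproduced here; the contours below are synthetic.

```python
# Fourier descriptors of a closed contour, the shape features the method
# uses to characterise loops of handwritten characters.
import numpy as np

def fourier_descriptors(x, y, n_desc=10):
    """Return normalized Fourier descriptors of a closed contour."""
    z = np.asarray(x) + 1j * np.asarray(y)    # contour as a complex signal
    F = np.fft.fft(z)
    F[0] = 0.0                                 # drop DC term: translation invariance
    F = F / np.abs(F[1])                       # scale invariance
    mag = np.abs(F)                            # magnitudes: rotation/start invariance
    return mag[1:n_desc + 1]

# Two sample "loops": an ellipse and a slightly deformed copy of it.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
x1, y1 = 2.0 * np.cos(t), 1.0 * np.sin(t)
x2, y2 = 2.0 * np.cos(t) + 0.05 * np.cos(3 * t), 1.05 * np.sin(t)

d1 = fourier_descriptors(x1, y1)
d2 = fourier_descriptors(x2, y2)
print("descriptor distance (same shape family):", np.linalg.norm(d1 - d2))
```

In a likelihood-ratio setting, distances or descriptor vectors like these would be scored under within-writer and between-writer distributions estimated from reference material.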

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a common and tractable framework for analyzing different definitions of fixed and random effects in a constant-slope variable-intercept model. It is shown that, regardless of whether effects (i) are treated as parameters or as an error term, (ii) are estimated in different stages of a hierarchical model, or (iii) are allowed to be correlated with the regressors, when the same information on effects is introduced into all estimation methods, the resulting slope estimator is also the same across methods. If different methods produce different results, it is ultimately because different information is being used for each method.
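A quick numerical illustration of the claim: in a constant-slope variable-intercept panel, treating the effects as parameters (least squares with individual dummies) and sweeping them out (the within estimator) use the same information and return the same slope. Data are simulated.

```python
# Dummy-variable (LSDV) and within estimators coincide when they use the
# same information on the effects.
import numpy as np

rng = np.random.default_rng(5)
N, T, beta = 50, 8, 1.5
alpha_i = rng.normal(0, 2, N)                          # individual intercepts
x = rng.normal(0, 1, (N, T)) + 0.5 * alpha_i[:, None]  # regressor correlated with effects
y = alpha_i[:, None] + beta * x + rng.normal(0, 1, (N, T))

# Method 1: effects as parameters (least squares with individual dummies).
D = np.kron(np.eye(N), np.ones((T, 1)))
Z = np.column_stack([x.ravel(), D])
b_lsdv = np.linalg.lstsq(Z, y.ravel(), rcond=None)[0][0]

# Method 2: effects swept out (within / demeaned estimator).
xd = x - x.mean(1, keepdims=True)
yd = y - y.mean(1, keepdims=True)
b_within = (xd * yd).sum() / (xd ** 2).sum()

print(f"LSDV slope:   {b_lsdv:.10f}")
print(f"within slope: {b_within:.10f}")   # identical up to numerical precision
```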

Relevance:

20.00%

Publisher:

Abstract:

Introduction: As imatinib pharmacokinetics are highly variable, plasma levels differ largely between patients under the same dosage. Retrospective studies in chronic myeloid leukemia (CML) patients showed significant correlations between low levels and suboptimal response, as well as between high levels and poor tolerability. Monitoring of trough plasma levels, targeting 1000 μg/L and above, is thus increasingly advised. Our study was launched to assess prospectively the clinical usefulness of systematic imatinib TDM in CML patients. This preliminary analysis addresses the appropriateness of the dosage adjustment approach applied in this study, which targets the recommended trough level and allows an interval of 4-24 h after the last drug intake for blood sampling.

Methods: Blood samples from the first 15 patients undergoing a 1st TDM were obtained 1.5-25 h after the last dose. Imatinib plasma levels were measured by LC-MS/MS, and the concentrations were extrapolated to trough based on a Bayesian approach using a population pharmacokinetic model. Trough levels were predicted to differ significantly from the target in 12 patients (10 <750 μg/L; 2 >1500 μg/L along with poor tolerance), and individual dose adjustments were proposed. 8 patients underwent a 2nd TDM cycle. Trough levels of the 1st and 2nd TDM were compared; one sample drawn 1.5 h after the last dose (during the distribution phase) was excluded from the analysis.

Results: Individual dose adjustments were applied in 6 patients. Observed concentrations extrapolated to trough ranged from 360 to 1832 μg/L (median 725; mean 810; CV 52%) on the 1st TDM and from 720 to 1187 μg/L (median 950; mean 940; CV 18%) on the 2nd TDM cycle.

Conclusions: These preliminary results suggest that TDM of imatinib using a Bayesian interpretation is able to target the recommended trough level of 1000 μg/L and to reduce the considerable differences in trough exposure between patients (with the CV decreasing from 52% to 18%). While this may simplify blood collection in daily practice, since samples do not have to be drawn exactly at trough, the largest possible interval after the last drug intake remains preferable, as sampling during the distribution phase leads to biased extrapolation. These results encourage the evaluation of the clinical benefit of a routine TDM intervention in CML patients, which the randomized Swiss I-COME trial aims to provide.
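A stylized sketch of the Bayesian extrapolation step: one imatinib level drawn at an arbitrary time after dosing is combined with population pharmacokinetic priors by MAP estimation under a one-compartment oral model at steady state, and the 24 h trough is then predicted. The model structure and all numbers are illustrative assumptions, not the study's actual population model.

```python
# MAP extrapolation of a single imatinib level to the 24 h trough under a
# one-compartment oral model at steady state; illustrative parameters only.
import numpy as np

dose, tau, F, ka = 400.0, 24.0, 1.0, 0.6           # mg, h, bioavailability, 1/h
pop_CL, pop_V = 14.0, 250.0                        # population means (L/h, L)
cv_CL, cv_V, cv_err = 0.35, 0.30, 0.20             # lognormal CVs

def css(t, CL, V):
    """Steady-state concentration in ug/L at time t after a dose."""
    ke = CL / V
    a = F * dose * ka / (V * (ka - ke))
    return 1000 * a * (np.exp(-ke*t) / (1 - np.exp(-ke*tau))
                       - np.exp(-ka*t) / (1 - np.exp(-ka*tau)))

t_obs, c_obs = 6.0, 900.0                          # sample 6 h post-dose, ug/L

# MAP over a grid of individual CL and V (coarse stand-in for a real optimizer).
CLg = pop_CL * np.exp(np.linspace(-1, 1, 201) * cv_CL * 2)
Vg = pop_V * np.exp(np.linspace(-1, 1, 201) * cv_V * 2)
CLm, Vm = np.meshgrid(CLg, Vg)
logpost = (-0.5 * (np.log(CLm / pop_CL) / cv_CL) ** 2
           - 0.5 * (np.log(Vm / pop_V) / cv_V) ** 2
           - 0.5 * (np.log(c_obs / css(t_obs, CLm, Vm)) / cv_err) ** 2)
i, j = np.unravel_index(np.argmax(logpost), logpost.shape)
CL_map, V_map = CLm[i, j], Vm[i, j]

print(f"MAP CL = {CL_map:.1f} L/h, V = {V_map:.0f} L")
print(f"extrapolated trough (24 h): {css(tau, CL_map, V_map):.0f} ug/L")
```

The same logic explains the caveat in the conclusions: a sample drawn during the absorption/distribution phase violates the model's assumptions at that time point, so the extrapolated trough becomes biased.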