890 results for BOOTSTRAP CONFIDENCE-INTERVALS
Abstract:
Industrial recurrent event data, in which an event of interest can be observed more than once in a single sample unit, arise in several areas, such as engineering, manufacturing and industrial reliability. Such data provide information about the number of events, the times of their occurrence and also their costs. Nelson (1995) presents a methodology to obtain asymptotic confidence intervals for the cost and the number of cumulative recurrent events. Although this is a standard procedure, it may not perform well in some situations, in particular when the available sample size is small. In this context, computer-intensive methods such as the bootstrap can be used to construct confidence intervals. In this paper, we propose a technique based on the bootstrap method to obtain interval estimates for the cost and the number of cumulative events. Among the advantages of the proposed methodology are its applicability across several areas and its easy computational implementation. In addition, according to Monte Carlo simulations, it can be a better alternative than asymptotic methods for calculating confidence intervals. An example from the engineering area illustrates the methodology.
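The core resampling idea behind bootstrap interval estimates like those in this abstract can be sketched with a plain percentile bootstrap. This is not Nelson's procedure or the paper's exact method; the cost data and the `percentile_bootstrap_ci` helper are illustrative assumptions only.

```python
import random

def percentile_bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap CI: resample with replacement, recompute the
    statistic each time, and read off the empirical quantiles."""
    rng = random.Random(seed)
    n = len(data)
    reps = sorted(stat([data[rng.randrange(n)] for _ in range(n)])
                  for _ in range(n_boot))
    return reps[int(alpha / 2 * n_boot)], reps[int((1 - alpha / 2) * n_boot) - 1]

def sample_mean(xs):
    return sum(xs) / len(xs)

# Hypothetical per-unit repair costs (illustrative values, not from the paper)
costs = [12.0, 7.5, 30.2, 9.8, 15.1, 22.4, 11.0, 8.3, 19.7, 13.6]
lo, hi = percentile_bootstrap_ci(costs, sample_mean)
```

The same `stat` argument could be swapped for any cost or count summary, which is what makes this scheme easy to implement across application areas.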
Abstract:
The aim of many genetic studies is to locate the genomic regions (called quantitative trait loci, QTLs) that contribute to variation in a quantitative trait (such as body weight). Confidence intervals for the locations of QTLs are particularly important for the design of further experiments to identify the gene or genes responsible for the effect. Likelihood support intervals are the most widely used method to obtain confidence intervals for QTL location, but the non-parametric bootstrap has also been recommended. Through extensive computer simulation, we show that bootstrap confidence intervals are poorly behaved and so should not be used in this context. The profile likelihood (or LOD curve) for QTL location has a tendency to peak at genetic markers, and so the distribution of the maximum likelihood estimate (MLE) of QTL location has the unusual feature of point masses at genetic markers; this contributes to the poor behavior of the bootstrap. Likelihood support intervals and approximate Bayes credible intervals, on the other hand, are shown to behave appropriately.
Abstract:
Confidence intervals in econometric time series regressions suffer from notorious coverage problems. This is especially true when the dependence in the data is noticeable and sample sizes are small to moderate, as is often the case in empirical studies. This paper suggests using the studentized block bootstrap and discusses practical issues, such as the choice of the block size. A particular data-dependent procedure is proposed to automate this choice. As a side note, it is pointed out that symmetric confidence intervals are preferred over equal-tailed ones, since they exhibit improved coverage accuracy. The improvements in small-sample performance are supported by a simulation study.
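A minimal sketch of a symmetric studentized moving-block bootstrap interval for the mean of a dependent series, in the spirit of the abstract. Assumptions to note: the block length is fixed by hand rather than chosen by a data-dependent rule, the naive iid standard error is used inside each resample (a production version would use a block-based variance estimate), and the AR(1) series is simulated stand-in data.

```python
import math
import random

def block_bootstrap_t_ci(x, block_len, n_boot=1000, alpha=0.05, seed=0):
    """Symmetric studentized moving-block bootstrap CI for the mean."""
    rng = random.Random(seed)
    n = len(x)
    xbar = sum(x) / n
    se = math.sqrt(sum((v - xbar) ** 2 for v in x) / (n - 1) / n)
    # Overlapping blocks preserve short-range dependence within each block
    blocks = [x[i:i + block_len] for i in range(n - block_len + 1)]
    abs_t = []
    for _ in range(n_boot):
        resample = []
        while len(resample) < n:
            resample.extend(rng.choice(blocks))
        resample = resample[:n]
        m = sum(resample) / n
        s = math.sqrt(sum((v - m) ** 2 for v in resample) / (n - 1) / n)
        abs_t.append(abs((m - xbar) / s))
    abs_t.sort()
    # Symmetric interval: a single quantile of |t|, as the abstract recommends
    q = abs_t[min(int((1 - alpha) * n_boot), n_boot - 1)]
    return xbar - q * se, xbar + q * se

# Simulated AR(1) series standing in for dependent regression data
rng = random.Random(1)
series, prev = [], 0.0
for _ in range(120):
    prev = 0.6 * prev + rng.gauss(0.0, 1.0)
    series.append(prev)
lo, hi = block_bootstrap_t_ci(series, block_len=8)
```

Using one quantile of |t| is what makes the interval symmetric about the point estimate; an equal-tailed version would instead use the two tail quantiles of the signed t-statistics.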
Abstract:
The hierarchical linear growth model (HLGM), as a flexible and powerful analytic method, has played an increasingly important role in psychology, public health and the medical sciences in recent decades. Mostly, researchers who conduct HLGM analyses are interested in the treatment effect on individual trajectories, which can be indicated by cross-level interaction effects. However, the statistical hypothesis test for a cross-level interaction effect in HLGM only shows whether there is a significant group difference in the average rate of change, rate of acceleration or a higher-order polynomial effect; it fails to convey information about the magnitude of the difference between the group trajectories at a specific time point. Thus, reporting and interpreting effect sizes has received increasing emphasis in HLGM in recent years, owing to the limitations of, and growing criticism directed at, statistical hypothesis testing. However, most researchers fail to report these model-implied effect sizes for group trajectory comparisons, and their corresponding confidence intervals, in HLGM analyses, because appropriate standard functions for estimating effect sizes associated with the model-implied difference between group trajectories are lacking, as are computing packages in popular statistical software to calculate them automatically. The present project is the first to establish appropriate computing functions to assess the standardized difference between group trajectories in HLGM. We propose two functions to estimate effect sizes for the model-based difference between group trajectories at a specific time, and we also suggest robust effect sizes to reduce the bias of the estimated effect sizes.
We then applied the proposed functions to estimate the population effect sizes (d) and robust effect sizes (du) for the cross-level interaction in HLGM using three simulated datasets; we also compared three methods of constructing confidence intervals around d and du and recommended the best one for application. Finally, we constructed 95% confidence intervals with the most suitable method for the effect sizes obtained from the three simulated datasets. The effect sizes between group trajectories for the three simulated longitudinal datasets indicated that even when the statistical hypothesis test shows no significant difference between group trajectories, the effect sizes between these trajectories can still be large at some time points. Therefore, effect sizes between group trajectories in an HLGM analysis provide additional, meaningful information for assessing the group effect on individual trajectories. In addition, we compared the three methods of constructing 95% confidence intervals around the corresponding effect sizes, which address the uncertainty of the effect sizes as estimates of the population parameter. We suggest the noncentral t-distribution-based method when its assumptions hold, and the bootstrap bias-corrected and accelerated (BCa) method when they are not met.
Abstract:
This thesis proposes some confidence intervals for the mean of a positively skewed distribution. The following confidence intervals are considered: Student-t, Johnson-t, median-t, mad-t, bootstrap-t, BCA, T1, T3 and six new confidence intervals, the median bootstrap-t, mad bootstrap-t, median T1, mad T1, median T3 and the mad T3. A simulation study has been conducted and average widths, coefficient of variation of widths, and coverage probabilities were recorded and compared across confidence intervals. To compare confidence intervals, the width and coverage probabilities were compared so that smaller widths indicated a better confidence interval when coverage probabilities were the same. Results showed that the median T1 and median T3 outperformed other confidence intervals in terms of coverage probability and the mad bootstrap-t, mad-t, and mad T3 outperformed others in terms of width. Some real-life data are considered to illustrate the findings of the thesis.
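Of the intervals listed, the bootstrap-t is the most standard, and a minimal sketch shows why it suits skewed data: the bootstrap distribution of the studentized statistic replaces the symmetric t-table, so the interval's two arms can differ in length. The right-skewed sample below is illustrative only, not data from the thesis.

```python
import math
import random

def bootstrap_t_ci(data, n_boot=2000, alpha=0.05, seed=7):
    """Bootstrap-t (studentized) CI for the mean."""
    rng = random.Random(seed)
    n = len(data)
    mean = sum(data) / n
    se = math.sqrt(sum((v - mean) ** 2 for v in data) / (n - 1) / n)
    ts = []
    for _ in range(n_boot):
        s = [data[rng.randrange(n)] for _ in range(n)]
        m = sum(s) / n
        sb = math.sqrt(sum((v - m) ** 2 for v in s) / (n - 1) / n)
        if sb > 0:  # skip degenerate resamples
            ts.append((m - mean) / sb)
    ts.sort()
    k = len(ts)
    t_lo, t_hi = ts[int(alpha / 2 * k)], ts[int((1 - alpha / 2) * k) - 1]
    # The quantiles swap roles: the upper t-quantile sets the lower endpoint
    return mean - t_hi * se, mean - t_lo * se

# Right-skewed illustrative sample
data = [0.2, 0.5, 0.3, 0.8, 1.1, 0.4, 3.9, 0.6, 5.2, 0.7, 0.9, 2.4]
lo, hi = bootstrap_t_ci(data)
```

The median-t and mad-t variants in the thesis replace the mean or the standard-deviation-based standard error with more robust counterparts; the overall resampling skeleton stays the same.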
Abstract:
We propose a novel, simple, efficient and distribution-free re-sampling technique for developing prediction intervals for returns and volatilities following ARCH/GARCH models. In particular, our key idea is to employ a Box–Jenkins linear representation of an ARCH/GARCH equation and then to adapt a sieve bootstrap procedure to the nonlinear GARCH framework. Our simulation studies indicate that the new re-sampling method provides sharp and well calibrated prediction intervals for both returns and volatilities while reducing computational costs by up to 100 times, compared to other available re-sampling techniques for ARCH/GARCH models. The proposed procedure is illustrated by an application to Yen/U.S. dollar daily exchange rate data.
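To convey the flavor of residual-based resampling for prediction intervals, here is a heavily simplified sketch. It is not the paper's procedure: a genuine sieve bootstrap fits an AR approximation whose order grows with the sample size (and the paper applies it to the linear representation of an ARCH/GARCH equation), whereas this sketch fixes the order at one and works on a simulated stationary series.

```python
import random

def ar1_sieve_prediction_interval(x, n_boot=1000, alpha=0.05, seed=11):
    """One-step-ahead prediction interval from a residual-resampling
    bootstrap of an AR(1) approximation (sketch only)."""
    rng = random.Random(seed)
    n = len(x)
    # OLS slope of the AR(1) approximation (no intercept)
    phi = (sum(x[t] * x[t - 1] for t in range(1, n))
           / sum(x[t - 1] ** 2 for t in range(1, n)))
    resid = [x[t] - phi * x[t - 1] for t in range(1, n)]
    # Attach resampled residuals to the point forecast to mimic future values
    future = sorted(phi * x[-1] + rng.choice(resid) for _ in range(n_boot))
    return (future[int(alpha / 2 * n_boot)],
            future[int((1 - alpha / 2) * n_boot) - 1])

# Simulated stationary series standing in for daily returns
rng = random.Random(2)
returns, prev = [], 0.0
for _ in range(200):
    prev = 0.3 * prev + rng.gauss(0.0, 1.0)
    returns.append(prev)
lo, hi = ar1_sieve_prediction_interval(returns)
```

Because the residuals are resampled rather than drawn from a fitted parametric distribution, the interval inherits any asymmetry or heavy tails present in the data, which is part of what makes the approach distribution-free.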
Abstract:
PowerPoint slides for Confidence Intervals. Examples are taken from the medical literature.
Abstract:
lecture for COMP6235
Abstract:
The calculation of interval forecasts for highly persistent autoregressive (AR) time series based on the bootstrap is considered. Three methods are examined for countering the small-sample bias of least-squares estimation for processes with roots close to the unit circle: a bootstrap bias-corrected OLS estimator; the use of the Roy–Fuller estimator in place of OLS; and the use of the Andrews–Chen estimator in place of OLS. All three methods of bias correction yield superior results to the bootstrap without bias correction. Of the three correction methods, the bootstrap prediction intervals based on the Roy–Fuller estimator are generally superior to the other two. The small-sample performance of bootstrap prediction intervals based on the Roy–Fuller estimator is also investigated when the order of the AR model is unknown and has to be determined using an information criterion.
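The first of the three corrections, a bootstrap bias-corrected OLS estimator, can be sketched as follows: regenerate many series from the fitted AR(1) model with resampled residuals, measure how far the average refitted coefficient falls below the original estimate, and shift the estimate by that amount. The near-unit-root series is simulated and the helper names are assumptions, not the paper's code.

```python
import random

def ols_ar1(x):
    """OLS slope of x_t = phi * x_{t-1} + e_t (no intercept)."""
    num = sum(x[t] * x[t - 1] for t in range(1, len(x)))
    den = sum(x[t - 1] ** 2 for t in range(1, len(x)))
    return num / den

def bias_corrected_ar1(x, n_boot=500, seed=3):
    """Bootstrap bias-corrected OLS estimator for a persistent AR(1)."""
    rng = random.Random(seed)
    phi = ols_ar1(x)
    resid = [x[t] - phi * x[t - 1] for t in range(1, len(x))]
    boots = []
    for _ in range(n_boot):
        # Regenerate a series from the fitted model with resampled residuals
        xb = [x[0]]
        for _ in range(len(x) - 1):
            xb.append(phi * xb[-1] + rng.choice(resid))
        boots.append(ols_ar1(xb))
    bias = sum(boots) / n_boot - phi  # OLS bias is negative near the unit root
    return phi - bias

# Simulated near-unit-root series with a small sample
rng = random.Random(4)
x, prev = [], 0.0
for _ in range(80):
    prev = 0.9 * prev + rng.gauss(0.0, 1.0)
    x.append(prev)
phi_hat = ols_ar1(x)
phi_bc = bias_corrected_ar1(x)
```

Prediction intervals built from the corrected coefficient then avoid the systematic undercoverage that the downward-biased OLS estimate would otherwise induce.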
Abstract:
Recently, in order to accelerate drug development, trials that use adaptive seamless designs such as phase II/III clinical trials have been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial that is conducted in two stages. Using stage 1 data, an interim analysis is performed to answer phase II objectives and after collection of stage 2 data, a final confirmatory analysis is performed to answer phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared to a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control and do not have an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. These methods are recent and so there is little literature on extensive comparisons of their characteristics. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.
Abstract:
In the present work a comparative quantitative evaluation of the differential effects of neuromuscular blockers on twitches and tetani was performed, encompassing: atracurium, cisatracurium, mivacurium, pancuronium, rocuronium and vecuronium. The sciatic nerve-extensor digitorum longus muscle of the rat was used, in vitro. Twitches were evoked at 0.1 Hz and tetani at 50 Hz. The differential effects of the studied compounds on twitches and tetani were statistically compared using simultaneous confidence intervals for the ratios between the mean IC(50) for the block of twitches and the mean IC(50) for the block of tetani. The ratios of mean IC(50) together with their corresponding 95% simultaneous confidence intervals were: vecuronium: 2.5 (1.8-3.5); mivacurium: 3.8 (3.0-4.9); pancuronium: 3.9 (2.0-7.6); rocuronium: 6.1 (3.8-9.9); atracurium: 9.0 (6.4-12.6); cisatracurium: 13.1 (6.0-28.4). Using the criterion that neuromuscular blockers displaying disjoint confidence intervals for the ratios of mean IC(50) differ statistically with regard to differential effects on twitches and tetani, significant differences in ratios of IC(50) were detected in the following cases: vecuronium vs. rocuronium, vs. atracurium and vs. cisatracurium; and mivacurium vs. cisatracurium and vs. atracurium. The results show that the magnitude of the differential effects of neuromuscular blockers on twitches and tetani, as evaluated in the present work in the form of ratios of mean IC(50), does not depend on the chemical structure (comparing steroidal and isoquinolinic compounds), but seems to depend on differential pre- and post-synaptic effects of the compounds. It is also suggested that the greater the ability of a compound to block twitches and tetani in a differential manner, the safer the compound is from the clinical anesthesiology viewpoint.
Abstract:
Using data from the United States, Japan, Germany, the United Kingdom and France, Sims (1992) found that positive innovations to short-term interest rates led to sharp, persistent increases in the price level. The result was confirmed by other authors and, as a consequence of its unexpected nature, was given the name "price puzzle" by Eichenbaum (1992). In this paper I investigate the existence of a price puzzle in Brazil using the same type of estimation and benchmark identification scheme employed by Christiano et al. (2000). In a methodological improvement over these studies, I qualify the results with the construction of bias-corrected bootstrap confidence intervals. Even though the data do show the existence of a statistically significant price puzzle in Brazil, it lasts for only one quarter and is quantitatively immaterial.
Abstract:
Theory recently developed to construct confidence regions based on the parametric bootstrap is applied to add inferential information to graphical displays of sample centroids in canonical variate analysis. Problems of morphometric differentiation among subspecies and species are addressed using numerical resampling procedures.