776 results for simultaneous confidence intervals
Abstract:
In the present work, a comparative quantitative evaluation of the differential effects of neuromuscular blockers on twitches and tetani was performed, encompassing atracurium, cisatracurium, mivacurium, pancuronium, rocuronium and vecuronium. The sciatic nerve-extensor digitorum longus muscle preparation of the rat was used in vitro. Twitches were evoked at 0.1 Hz and tetani at 50 Hz. The differential effects of the studied compounds on twitches and tetani were statistically compared using simultaneous confidence intervals for the ratios between the mean IC50 for the block of twitches and the mean IC50 for the block of tetani. The ratios of mean IC50, together with their corresponding 95% simultaneous confidence intervals, were: vecuronium: 2.5 (1.8-3.5); mivacurium: 3.8 (3.0-4.9); pancuronium: 3.9 (2.0-7.6); rocuronium: 6.1 (3.8-9.9); atracurium: 9.0 (6.4-12.6); cisatracurium: 13.1 (6.0-28.4). Using the criterion that neuromuscular blockers displaying disjoint confidence intervals for the ratios of mean IC50 differ statistically with regard to differential effects on twitches and tetani, significant differences in IC50 ratios were detected in the following cases: vecuronium vs. rocuronium, vs. atracurium and vs. cisatracurium; and mivacurium vs. cisatracurium and vs. atracurium. The results show that the magnitude of the differential effects of neuromuscular blockers on twitches and tetani, as evaluated here in the form of ratios of mean IC50, does not depend on chemical structure (comparing steroidal and isoquinolinic compounds), but seems to depend on differential pre- and postsynaptic effects of the compounds. It is also suggested that the greater the ability of a compound to block twitches and tetani differentially, the safer the compound is from the clinical anesthesiology viewpoint.
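The abstract does not spell out how the simultaneous intervals were constructed; as an illustration only, a percentile-bootstrap sketch of a single ratio-of-means interval plus the disjoint-interval criterion could look like the following (function names and IC50 samples are hypothetical, not the paper's data or its simultaneous procedure):

```python
import numpy as np

rng = np.random.default_rng(0)

def ratio_of_means_ci(ic50_twitch, ic50_tetanus, n_boot=5000, alpha=0.05):
    """Percentile-bootstrap CI for mean(IC50 twitch) / mean(IC50 tetanus)."""
    tw = np.asarray(ic50_twitch, dtype=float)
    te = np.asarray(ic50_tetanus, dtype=float)
    ratios = np.empty(n_boot)
    for b in range(n_boot):
        # resample each group with replacement, recompute the ratio of means
        ratios[b] = rng.choice(tw, tw.size).mean() / rng.choice(te, te.size).mean()
    lo, hi = np.quantile(ratios, [alpha / 2, 1 - alpha / 2])
    return lo, hi

def disjoint(ci_a, ci_b):
    """The paper's criterion: non-overlapping intervals are read as a
    statistically significant difference in IC50 ratios."""
    return ci_a[1] < ci_b[0] or ci_b[1] < ci_a[0]
```

With the reported intervals, `disjoint((1.8, 3.5), (6.4, 12.6))` reproduces the vecuronium vs. atracurium difference, while pancuronium (2.0-7.6) overlaps rocuronium (3.8-9.9), so no difference is declared there.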
Abstract:
In this work, we investigate an alternative bootstrap approach, based on a result of Ramsey [F.L. Ramsey, Characterization of the partial autocorrelation function, Ann. Statist. 2 (1974), pp. 1296-1301] and on the Durbin-Levinson algorithm, to obtain a surrogate series from linear Gaussian processes with long-range dependence. We compare this bootstrap method with other existing procedures in a wide Monte Carlo experiment by estimating, parametrically and semi-parametrically, the memory parameter d. We consider Gaussian and non-Gaussian processes to demonstrate the robustness of the method to deviations from normality. The approach is also useful for estimating confidence intervals for the memory parameter d, improving the coverage level of the interval.
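As a rough sketch of the machinery the abstract names (not the authors' exact procedure): the Durbin-Levinson recursion turns an autocovariance sequence into one-step prediction coefficients and innovation variances, from which a Gaussian path with that autocovariance can be generated. The ARFIMA(0, d, 0) autocovariances below are a standard long-memory example; the function names are mine.

```python
import numpy as np
from math import gamma

def arfima_acov(d, n, sigma2=1.0):
    """Autocovariances of ARFIMA(0, d, 0):
    gamma(0) = sigma2 * Gamma(1-2d) / Gamma(1-d)^2,
    gamma(k) = gamma(k-1) * (k-1+d) / (k-d)."""
    g = np.empty(n)
    g[0] = sigma2 * gamma(1 - 2 * d) / gamma(1 - d) ** 2
    for k in range(1, n):
        g[k] = g[k - 1] * (k - 1 + d) / (k - d)
    return g

def durbin_levinson_path(acov, rng):
    """One Gaussian sample path with the given autocovariance sequence,
    built from the Durbin-Levinson one-step prediction coefficients."""
    n = acov.size
    x = np.empty(n)
    phi = np.zeros(n)          # phi[j] = phi_{t,j} at the current order t
    v = acov[0]                # one-step innovation variance
    x[0] = rng.normal(0.0, np.sqrt(v))
    for t in range(1, n):
        # reflection coefficient = partial autocorrelation at lag t
        k = (acov[t] - phi[1:t] @ acov[1:t][::-1]) / v
        phi[1:t] = phi[1:t] - k * phi[1:t][::-1]
        phi[t] = k
        v *= 1.0 - k * k
        mean = phi[1:t + 1] @ x[t - 1::-1]   # best linear predictor of x[t]
        x[t] = mean + rng.normal(0.0, np.sqrt(v))
    return x
```

Ramsey's characterization gives the partial autocorrelations of this process in closed form, phi_kk = d / (k - d), so the reflection coefficients produced by the recursion can be checked against it directly.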
Abstract:
Purpose: The previous literature on Bland-Altman analysis describes only approximate methods for calculating confidence intervals for 95% limits of agreement (LoAs). This paper describes exact methods for calculating such confidence intervals, based on the assumption that differences between measurement pairs are normally distributed. Methods: Two basic situations are considered for calculating LoA confidence intervals: the first, where LoAs are considered individually (i.e. using one-sided tolerance factors for a normal distribution); and the second, where LoAs are considered as a pair (i.e. using two-sided tolerance factors for a normal distribution). The equations underlying the calculation of exact confidence limits are briefly outlined. Results: To assist in determining confidence intervals for LoAs (considered individually and as a pair), tables of coefficients are included for degrees of freedom between 1 and 1000. Numerical examples show the use of the tables for calculating confidence limits for Bland-Altman LoAs. Conclusions: Exact confidence intervals for LoAs can differ considerably from Bland and Altman's approximate method, especially for sample sizes that are not large. There are better, more precise methods for calculating confidence intervals for LoAs than Bland and Altman's approximate method, although even an approximate calculation is likely to be better than none at all. Reporting confidence limits for LoAs considered as a pair is appropriate for most situations; however, there may be circumstances where it is appropriate to report confidence limits for LoAs considered individually.
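In the standard formulation, the one-sided tolerance factors the paper tabulates are quantiles of a noncentral t distribution. The sketch below computes a confidence interval for the upper LoA that way and sets it against Bland and Altman's approximate interval; it follows the textbook tolerance-factor result and may differ in detail from the paper's own tables and parameterization.

```python
import numpy as np
from scipy.stats import nct, norm, t as t_dist

def exact_ci_upper_loa(d, conf=0.95, p=0.975):
    """CI for the upper limit of agreement mean(d) + z_p * sd(d),
    via one-sided normal tolerance factors (noncentral t quantiles)."""
    d = np.asarray(d, dtype=float)
    n, m, s = d.size, d.mean(), d.std(ddof=1)
    q = (1 + conf) / 2
    nc = norm.ppf(p) * np.sqrt(n)            # noncentrality parameter
    k_hi = nct.ppf(q, n - 1, nc) / np.sqrt(n)
    k_lo = nct.ppf(1 - q, n - 1, nc) / np.sqrt(n)
    return m + k_lo * s, m + k_hi * s

def approx_ci_upper_loa(d, conf=0.95, p=0.975):
    """Bland and Altman's approximation:
    LoA +/- t * s * sqrt(1/n + z^2 / (2(n-1)))."""
    d = np.asarray(d, dtype=float)
    n, m, s = d.size, d.mean(), d.std(ddof=1)
    z = norm.ppf(p)
    loa = m + z * s
    half = t_dist.ppf((1 + conf) / 2, n - 1) * s * np.sqrt(1 / n + z**2 / (2 * (n - 1)))
    return loa - half, loa + half
```

For moderate n the two intervals nearly coincide; for small n the exact interval is wider and asymmetric about the sample LoA, which is the paper's point.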
Abstract:
We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often show excessive variation, known as overdispersion, compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for the overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, which we refer to as beta-multinomial, is derived, and hence the maximum likelihood estimates can be evaluated. Simulations show that, in the presence of extra variation in the data, the confidence intervals have been substantially underestimated by previous models (Leslie-DeLury, Moran) and that the new model provides more reliable confidence intervals. The performance of these methods is also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
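For context, here is a minimal sketch of the classical constant-catchability multinomial removal model that the beta-multinomial generalizes (the new model instead lets the catchability p vary across samplings via a Beta distribution). The grid-search MLE and the catch data are illustrative only.

```python
import numpy as np
from math import lgamma, log

def removal_loglik(N, p, catches):
    """Log-likelihood of the classical multinomial removal model with
    population size N and constant catchability p: sample i (0-indexed)
    has cell probability p * (1-p)^i, and N - T animals are never caught."""
    c = np.asarray(catches)
    k, T = c.size, int(c.sum())
    if N < T or not 0.0 < p < 1.0:
        return -np.inf
    # multinomial coefficient N! / (c_1! ... c_k! (N-T)!)
    ll = lgamma(N + 1) - sum(lgamma(ci + 1) for ci in c) - lgamma(N - T + 1)
    ll += sum(ci * (log(p) + i * log(1 - p)) for i, ci in enumerate(c))
    ll += (N - T) * k * log(1 - p)          # the never-caught cell
    return ll

def removal_mle(catches, n_max=2000):
    """Profile out p (catch total / total at-risk exposures for a given N)
    and grid-search the integer N maximizing the likelihood."""
    c = np.asarray(catches)
    T = int(c.sum())
    best_ll, best = -np.inf, (None, None)
    for N in range(T, n_max + 1):
        exposures = sum(N - int(c[:i].sum()) for i in range(c.size))
        p = T / exposures
        ll = removal_loglik(N, p, c)
        if ll > best_ll:
            best_ll, best = ll, (N, p)
    return best
```

With declining catches such as 150, 75, 40, the fit recovers a catchability near one half and a population a little above the 265 animals actually removed. The abstract's point is that when catchability fluctuates between passes, intervals based on this fixed-p likelihood are too narrow.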
Abstract:
Existing point estimates of half-life deviations from purchasing power parity (PPP), around 3-5 years, suggest that the speed of convergence is extremely slow. This article assesses the degree of uncertainty around these point estimates by using local-to-unity asymptotic theory to construct confidence intervals that are robust to high persistence in small samples. The empirical evidence suggests that the lower bound of the confidence interval is between four and eight quarters for most currencies, which is not inconsistent with traditional price-stickiness explanations. However, the upper bounds are infinity for all currencies, so we cannot provide conclusive evidence in favor of PPP either. © 2005 American Statistical Association.
Abstract:
¹⁴C wiggle-match dating (WMD) of peat deposits uses the non-linear relationship between ¹⁴C age and calendar age to match the shape of a sequence of closely spaced peat ¹⁴C dates with the ¹⁴C calibration curve. A numerical approach to WMD enables the quantitative assessment of various possible wiggle-match solutions and of calendar-year confidence intervals for sequences of ¹⁴C dates. We assess the assumptions, advantages, and limitations of the method. Several case studies show that WMD results in more precise chronologies than when individual ¹⁴C dates are calibrated. WMD is most successful during periods with major excursions in the ¹⁴C calibration curve (e.g., in one case WMD could narrow confidence intervals from 230 to 36 yr).
Abstract:
PowerPoint slides for confidence intervals. Examples are taken from the medical literature.
Abstract:
Lecture for COMP6235.
Abstract:
Recently, in order to accelerate drug development, trials that use adaptive seamless designs, such as phase II/III clinical trials, have been proposed. Phase II/III clinical trials combine traditional phases II and III into a single trial conducted in two stages. Using stage 1 data, an interim analysis is performed to answer phase II objectives, and after collection of stage 2 data, a final confirmatory analysis is performed to answer phase III objectives. In this paper we consider phase II/III clinical trials in which, at stage 1, several experimental treatments are compared with a control and the apparently most effective experimental treatment is selected to continue to stage 2. Although these trials are attractive because the confirmatory analysis includes phase II data from stage 1, the inference methods used for trials that compare a single experimental treatment to a control without an interim analysis are no longer appropriate. Several methods for analysing phase II/III clinical trials have been developed. These methods are recent, so there is little literature comparing their characteristics extensively. In this paper we review and compare the various methods available for constructing confidence intervals after phase II/III clinical trials.
Abstract:
Industrial recurrent event data, in which an event of interest can be observed more than once in a single sample unit, arise in several areas, such as engineering, manufacturing and industrial reliability. This type of data provides information about the number of events, the times of their occurrence and their costs. Nelson (1995) presents a methodology for obtaining asymptotic confidence intervals for the cost and the number of cumulative recurrent events. Although this is a standard procedure, it may not perform well in some situations, in particular when the available sample size is small. In this context, computer-intensive methods such as the bootstrap can be used to construct confidence intervals. In this paper, we propose a technique based on the bootstrap method to obtain interval estimates for the cost and the number of cumulative events. Advantages of the proposed methodology include its applicability in several areas and its easy computational implementation. In addition, according to our Monte Carlo simulations, it can be a better alternative to asymptotic methods for calculating confidence intervals. An example from the engineering area illustrates the methodology.
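A minimal sketch of the kind of unit-resampling percentile bootstrap described, for the mean cumulative function (MCF, the expected number of events per unit by time t). It assumes, for simplicity, that every unit is observed over the whole window (no censoring), which the paper's method would handle more generally; all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)

def mcf_at(t, unit_histories):
    """Mean cumulative number of events per unit up to time t,
    with every unit assumed observed through t."""
    return float(np.mean([np.sum(np.asarray(ev) <= t) for ev in unit_histories]))

def bootstrap_mcf_ci(t, unit_histories, n_boot=2000, alpha=0.05):
    """Percentile bootstrap: resample whole units (not individual events)
    with replacement, recompute the MCF at t, take empirical quantiles."""
    n = len(unit_histories)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)
        stats[b] = mcf_at(t, [unit_histories[i] for i in idx])
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```

The cumulative cost per unit is handled the same way: sum each unit's event costs up to t instead of counting events, and bootstrap that statistic.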