65 results for Variable sample size
Abstract:
The behavior of the Asian summer monsoon is documented and compared using the European Centre for Medium-Range Weather Forecasts (ECMWF) Reanalysis (ERA) and the National Centers for Environmental Prediction-National Center for Atmospheric Research (NCEP-NCAR) Reanalysis. In terms of seasonal mean climatologies the results suggest that, in several respects, the ERA is superior to the NCEP-NCAR Reanalysis. The overall better simulation of the precipitation and hence the diabatic heating field over the monsoon domain in ERA means that the analyzed circulation is probably nearer reality. In terms of interannual variability, inconsistencies in the definition of weak and strong monsoon years based on typical monsoon indices such as All-India Rainfall (AIR) anomalies and the large-scale wind shear based dynamical monsoon index (DMI) still exist. Two dominant modes of interannual variability have been identified that together explain nearly 50% of the variance. Individually, they have many features in common with the composite flow patterns associated with weak and strong monsoons, when defined in terms of regional AIR anomalies and the large-scale DMI. The reanalyses also show a common dominant mode of intraseasonal variability that describes the latitudinal displacement of the tropical convergence zone from its oceanic-to-continental regime and essentially captures the low-frequency active/break cycles of the monsoon. The relationship between interannual and intraseasonal variability has been investigated by considering the probability density function (PDF) of the principal component of the dominant intraseasonal mode. Based on the DMI, there is an indication that in years with a weaker monsoon circulation, the PDF is skewed toward negative values (i.e., break conditions). Similarly, the PDFs for El Niño and La Niña years suggest that El Niño predisposes the system to more break spells, although the sample size may limit the statistical significance of the results.
Abstract:
The Representative Soil Sampling Scheme (RSSS) has monitored the soil of agricultural land in England and Wales since 1969. Here we describe the first spatial analysis of the data from these surveys using geostatistics. Four years of data (1971, 1981, 1991 and 2001) were chosen to examine the nutrient (available K, Mg and P) and pH status of the soil. At each farm, four fields were sampled; however, for the earlier years, coordinates were available for the farm only and not for each field. The averaged data for each farm were used for spatial analysis and the variograms showed spatial structure even with the smaller sample size. These variograms provide a reasonable summary of the larger scale of variation identified from the data of the more intensively sampled National Soil Inventory. Maps of kriged predictions of K generally show larger values in the central and southeastern areas (above 200 mg L-1) and an increase in values in the west over time, whereas Mg is fairly stable over time. The kriged predictions of P show a decline over time, particularly in the east, and those of pH show an increase in the east over time. Disjunctive kriging was used to examine temporal changes in available P using probabilities less than given thresholds of this element. The RSSS was not designed for spatial analysis, but the results show that the data from these surveys are suitable for this purpose. The results of the spatial analysis, together with those of the statistical analyses, provide a comprehensive view of the RSSS database as a basis for monitoring the soil. These data should be taken into account when future national soil monitoring schemes are designed.
Abstract:
It has been generally accepted that the method of moments (MoM) variogram, which has been widely applied in soil science, requires about 100 sites at an appropriate interval apart to describe the variation adequately. This sample size is often larger than can be afforded for soil surveys of agricultural fields or contaminated sites. Furthermore, it might be a much larger sample size than is needed where the scale of variation is large. A possible alternative in such situations is the residual maximum likelihood (REML) variogram because fewer data appear to be required. The REML method is parametric and is considered reliable where there is trend in the data because it is based on generalized increments that filter trend out and only the covariance parameters are estimated. Previous research has suggested that fewer data are needed to compute a reliable variogram using a maximum likelihood approach such as REML; however, the results can vary according to the nature of the spatial variation. There remain issues to examine: how many fewer data can be used, how should the sampling sites be distributed over the site of interest, and how do different degrees of spatial variation affect the data requirements? The soil of four field sites of different size, physiography, parent material and soil type was sampled intensively, and MoM and REML variograms were calculated for clay content. The data were then sub-sampled to give different sample sizes and distributions of sites and the variograms were computed again. The model parameters for the sets of variograms for each site were used for cross-validation. Predictions based on REML variograms were generally more accurate than those from MoM variograms with fewer than 100 sampling sites.
A sample size of around 50 sites at an appropriate distance apart, possibly determined from variograms of ancillary data, appears adequate to compute REML variograms for kriging soil properties for precision agriculture and contaminated sites. (C) 2007 Elsevier B.V. All rights reserved.
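The method-of-moments variogram at the centre of this comparison is straightforward to compute. Below is a minimal sketch of the Matheron estimator on a synthetic one-dimensional transect; the function name, data and binning choices are illustrative, not taken from the study:

```python
import numpy as np

def mom_variogram(coords, values, lags, tol):
    """Method-of-moments (Matheron) variogram estimator:
    gamma(h) = sum over pairs ~h apart of (z_i - z_j)^2 / (2 N(h))."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    d = np.abs(coords[:, None] - coords[None, :])   # pairwise separations
    sq = (values[:, None] - values[None, :]) ** 2   # squared differences
    iu = np.triu_indices(len(values), k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    return np.array([sq[np.abs(d - h) <= tol].mean() / 2.0 for h in lags])

# synthetic 60-site transect: smooth spatial signal plus noise
rng = np.random.default_rng(0)
x = np.arange(60.0)
z = np.sin(x / 10.0) + 0.1 * rng.standard_normal(60)
gam = mom_variogram(x, z, lags=[1, 5, 10], tol=0.5)
```

For spatially structured data like this, the semivariance rises with lag distance, which is the structure the sub-sampling experiments above probe with ever fewer sites.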
Abstract:
Cost-sharing, which involves government-farmer partnership in the funding of agricultural extension service, is one of the reforms aimed at achieving sustainable funding for extension systems. This study examined the perceptions of farmers and extension professionals on this reform agenda in Nigeria. The study was carried out in six geopolitical zones of Nigeria. A multi-stage random sampling technique was applied in the selection of respondents. A sample size of 268 farmers and 272 Agricultural Development Programme (ADP) extension professionals participated in the study. Both descriptive and inferential statistics were used in analysing the data generated from this research. The results show that the majority of farmers (80.6%) and extension professionals (85.7%) had favourable perceptions towards cost-sharing. Furthermore, the overall difference in their perceptions was not significant (t = 0.03). The study concludes that the strong favourable perception held by the respondents is a pointer towards acceptance of the reform. It therefore recommends that government, extension administrators and policymakers should design and formulate effective strategies and regulations for the introduction and use of cost-sharing as an alternative approach to financing agricultural technology transfer in Nigeria.
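The perception comparison reported above rests on a two-sample t statistic. A minimal sketch of Welch's t on made-up Likert-type scores (not the study's data) shows how a near-zero statistic arises when group means coincide:

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic (no equal-variance assumption)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    return (ma - mb) / math.sqrt(va / na + vb / nb)

# illustrative 1-5 Likert-type perception scores, not the study's data
farmers = [4, 5, 4, 4, 3, 5, 4, 4]
professionals = [4, 4, 5, 4, 4, 4, 5, 3]
t = welch_t(farmers, professionals)
```

Here both groups average 4.125, so t is zero, mirroring the study's non-significant difference.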
Abstract:
This paper introduces a simple futility design that allows a comparative clinical trial to be stopped due to lack of effect at any of a series of planned interim analyses. Stopping due to apparent benefit is not permitted. The design is for use when any positive claim should be based on the maximum sample size, for example to allow subgroup analyses or the evaluation of safety or secondary efficacy responses. A final frequentist analysis can be performed that is valid for the type of design employed. Here the design is described and its properties are presented. Its advantages and disadvantages relative to the use of stochastic curtailment are discussed. Copyright (C) 2003 John Wiley & Sons, Ltd.
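A futility-only stopping rule of this kind cannot inflate the type I error rate, since abandoning a trial early can only remove rejections. The simulation below sketches a generic two-look version of the idea; the cutoffs and sample sizes are illustrative and are not the paper's design:

```python
import numpy as np

rng = np.random.default_rng(2)

def type1_with_futility(futility_cut, n1=50, n2=50, reps=40000):
    """Monte Carlo type I error under H0 (mean 0) for a two-look design
    that may stop only for futility: abandon the trial at the interim
    if Z1 < futility_cut; otherwise test at the full sample size with
    a one-sided 5% critical value of 1.645."""
    x = rng.standard_normal((reps, n1 + n2))
    z1 = x[:, :n1].sum(axis=1) / np.sqrt(n1)    # interim Z statistic
    z_final = x.sum(axis=1) / np.sqrt(n1 + n2)  # final Z statistic
    carried = z1 >= futility_cut                # trials not stopped early
    return (carried & (z_final > 1.645)).mean()

no_stopping = type1_with_futility(-10.0)  # futility never triggered: ~5%
with_futility = type1_with_futility(1.0)  # stop unless interim Z >= 1
```

Any positive claim is still made only at the maximum sample size, consistent with the design's stated purpose.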
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
Abstract:
Pharmacogenetic trials investigate the effect of genotype on treatment response. When there are two or more treatment groups and two or more genetic groups, investigation of gene-treatment interactions is of key interest. However, calculation of the power to detect such interactions is complicated because this depends not only on the treatment effect size within each genetic group, but also on the number of genetic groups, the size of each genetic group, and the type of genetic effect that is both present and tested for. The scale chosen to measure the magnitude of an interaction can also be problematic, especially for the binary case. Elston et al. proposed a test for detecting the presence of gene-treatment interactions for binary responses, and gave appropriate power calculations. This paper shows how the same approach can also be used for normally distributed responses. We also propose a method for analysing and performing sample size calculations based on a generalized linear model (GLM) approach. The power of the Elston et al. and GLM approaches are compared for the binary and normal case using several illustrative examples. While more sensitive to errors in model specification than the Elston et al. approach, the GLM approach is much more flexible and in many cases more powerful. Copyright © 2005 John Wiley & Sons, Ltd.
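The dependence of interaction power on cell sizes can be sketched with a normal-approximation calculation for the simplest 2x2 case (two treatments, two genetic groups, normally distributed response). This is a generic illustration, not the Elston et al. or GLM procedure from the paper:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def interaction_power(delta, sigma, n_per_cell):
    """Normal-approximation power, at two-sided 5% significance, to detect
    a gene-by-treatment interaction of size delta in a 2x2 layout: the
    interaction contrast (difference of treatment effects across the two
    genetic groups) has standard error sigma * sqrt(4 / n_per_cell)."""
    se = sigma * math.sqrt(4.0 / n_per_cell)
    z_crit = 1.959963984540054  # Phi^{-1}(0.975)
    nc = delta / se             # noncentrality of the contrast
    return norm_cdf(nc - z_crit) + norm_cdf(-nc - z_crit)

power = interaction_power(0.5, 1.0, 100)
```

With more genetic groups, unequal group sizes, or binary responses the calculation becomes model dependent, which is exactly the complication the paper addresses.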
Abstract:
Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment, must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. 
Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
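The group-sequential control of the type I error rate described above can be checked by simulation. The sketch below is a generic two-look design with a Pocock-type constant boundary, not the specific designs compared in the paper; the constants and stage sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def overall_type1(crit, k=2, n_per_stage=50, reps=40000):
    """Monte Carlo overall type I error under H0 for a k-look
    group-sequential test with equally spaced looks, rejecting at
    any look where |Z| > crit."""
    x = rng.standard_normal((reps, k * n_per_stage))
    cums = x.cumsum(axis=1)
    ns = np.arange(1, k + 1) * n_per_stage
    z = cums[:, ns - 1] / np.sqrt(ns)   # Z statistic at each look
    return (np.abs(z) > crit).any(axis=1).mean()

naive = overall_type1(1.96)    # unadjusted critical value: error inflated
pocock = overall_type1(2.178)  # Pocock constant for two looks: ~5%
```

Testing twice at the fixed-sample critical value inflates the error well above 5%, which is why the stopping rule must be set in advance as the abstract notes.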
Abstract:
The proportional odds model provides a powerful tool for analysing ordered categorical data and setting sample size, although for many clinical trials its validity is questionable. The purpose of this paper is to present a new class of constrained odds models which includes the proportional odds model. The efficient score and Fisher's information are derived from the profile likelihood for the constrained odds model. These results are new even for the special case of proportional odds where the resulting statistics define the Mann-Whitney test. A strategy is described involving selecting one of these models in advance, requiring assumptions as strong as those underlying proportional odds, but allowing a choice of such models. The accuracy of the new procedure and its power are evaluated.
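As noted above, under proportional odds the efficient score yields the Mann-Whitney test. A minimal sketch of the Mann-Whitney U statistic on made-up ordered categorical data, with ties counted as half (the midrank convention):

```python
def mann_whitney_u(a, b):
    """Mann-Whitney U statistic: the number of pairs (x in a, y in b)
    with x > y, counting tied pairs as 1/2."""
    u = 0.0
    for x in a:
        for y in b:
            if x > y:
                u += 1.0
            elif x == y:
                u += 0.5
    return u

# illustrative ordered categorical outcomes (1 = worst ... 4 = best)
treat = [3, 4, 4, 2, 3]
control = [2, 3, 1, 2, 2]
u = mann_whitney_u(treat, control)
```

Out of 25 possible pairs, u = 21.5 favour the treated group, the kind of ordinal comparison the constrained odds models generalize.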
Abstract:
A score test is developed for binary clinical trial data, which incorporates patient non-compliance while respecting randomization. It is assumed in this paper that compliance is all-or-nothing, in the sense that a patient either accepts all of the treatment assigned as specified in the protocol, or none of it. Direct analytic comparisons of the adjusted test statistic for both the score test and the likelihood ratio test are made with the corresponding test statistics that adhere to the intention-to-treat principle. It is shown that no gain in power is possible over the intention-to-treat analysis, by adjusting for patient non-compliance. Sample size formulae are derived and simulation studies are used to demonstrate that the sample size approximation holds. Copyright © 2003 John Wiley & Sons, Ltd.
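For orientation, the standard normal-approximation sample size for comparing two proportions in the plain intention-to-treat setting is easy to compute; the paper's compliance-adjusted formulae are not reproduced here. A sketch with illustrative proportions:

```python
import math

def n_per_arm(p0, p1):
    """Standard normal-approximation sample size per arm for detecting
    p0 vs p1 at two-sided 5% significance with 90% power (the plain
    intention-to-treat setting, no compliance adjustment)."""
    z_a = 1.959963984540054   # Phi^{-1}(0.975)
    z_b = 1.2815515655446004  # Phi^{-1}(0.90)
    pbar = (p0 + p1) / 2.0
    num = (z_a * math.sqrt(2.0 * pbar * (1.0 - pbar))
           + z_b * math.sqrt(p0 * (1.0 - p0) + p1 * (1.0 - p1))) ** 2
    return math.ceil(num / (p0 - p1) ** 2)

n = n_per_arm(0.30, 0.45)
```

Non-compliance dilutes the observed treatment difference, so adjusted designs inflate n relative to this baseline rather than reduce it, in line with the paper's finding that no power is gained over intention to treat.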
Abstract:
While planning the GAIN International Study of gavestinel in acute stroke, a sequential triangular test was proposed but not implemented. Before the trial commenced it was agreed to evaluate the sequential design retrospectively to evaluate the differences in the resulting analyses, trial durations and sample sizes in order to assess the potential of sequential procedures for future stroke trials. This paper presents four sequential reconstructions of the GAIN study made under various scenarios. For the data as observed, the sequential design would have reduced the trial sample size by 234 patients and shortened its duration by 3 or 4 months. Had the study not achieved a recruitment rate that far exceeded expectation, the advantages of the sequential design would have been much greater. Sequential designs appear to be an attractive option for trials in stroke. Copyright 2004 S. Karger AG, Basel
Abstract:
Proportion estimators are quite frequently used in many application areas. The conventional proportion estimator (number of events divided by sample size) encounters a number of problems when the data are sparse, as will be demonstrated in various settings. The problem of estimating its variance when sample sizes become small is rarely addressed in a satisfying framework. Specifically, we have in mind applications like the weighted risk difference in multicenter trials or stratifying risk ratio estimators (to adjust for potential confounders) in epidemiological studies. It is suggested to estimate p using the parametric family (see PDF for character) and p(1 - p) using (see PDF for character), where (see PDF for character). We investigate the estimation problem of choosing c ≥ 0 from various perspectives including minimizing the average mean squared error of (see PDF for character), average bias and average mean squared error of (see PDF for character). The optimal value of c for minimizing the average mean squared error of (see PDF for character) is found to be independent of n and equals c = 1. The optimal value of c for minimizing the average mean squared error of (see PDF for character) is found to be dependent on n with limiting value c = 0.833. This might justify using a near-optimal value of c = 1 in practice, which also turns out to be beneficial when constructing confidence intervals of the form (see PDF for character).
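The estimator family itself is garbled in this record ("see PDF for character"). A common family of this form is the shrinkage estimator (X + c) / (n + 2c), and under that assumption (ours, not confirmed by the record) the average mean squared error comparison can be computed exactly:

```python
def mse(p, n, c):
    """Exact MSE of (X + c) / (n + 2c) with X ~ Binomial(n, p):
    variance n p (1 - p) / (n + 2c)^2 plus the squared bias
    (c (1 - 2p) / (n + 2c))^2."""
    bias = c * (1.0 - 2.0 * p) / (n + 2.0 * c)
    var = n * p * (1.0 - p) / (n + 2.0 * c) ** 2
    return var + bias ** 2

def avg_mse(n, c, grid=99):
    """MSE averaged over an even grid of p values in (0, 1)."""
    ps = [(i + 1.0) / (grid + 1.0) for i in range(grid)]
    return sum(mse(p, n, c) for p in ps) / grid

shrunk, plain = avg_mse(10, 1.0), avg_mse(10, 0.0)
```

Under this assumed family, c = 1 beats both the conventional estimator (c = 0) and heavier shrinkage (c = 2) on average MSE, consistent with the abstract's claim that c = 1 is optimal independently of n.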
Abstract:
We consider the case of a multicenter trial in which the center-specific sample sizes are potentially small. Under homogeneity, the conventional procedure is to pool information using a weighted estimator where the weights are the inverses of the estimated center-specific variances. Whereas this procedure is efficient for conventional asymptotics (e.g. center-specific sample sizes become large, number of centers fixed), it is commonly believed that the efficiency of this estimator holds true also for meta-analytic asymptotics (e.g. center-specific sample size bounded, potentially small, and number of centers large). In this contribution we demonstrate that this estimator fails to be efficient. In fact, it shows a persistent bias with increasing number of centers, showing that it is not meta-consistent. In addition, we show that the Cochran and Mantel-Haenszel weighted estimators are meta-consistent and, in more generality, provide conditions on the weights such that the associated weighted estimator is meta-consistent.
Abstract:
Background and Purpose-Clinical research into the treatment of acute stroke is complicated, is costly, and has often been unsuccessful. Developments in imaging technology based on computed tomography and magnetic resonance imaging scans offer opportunities for screening experimental therapies during phase II testing so as to deliver only the most promising interventions to phase III. We discuss the design and the appropriate sample size for phase II studies in stroke based on lesion volume. Methods-Determination of the relation between analyses of lesion volumes and of neurologic outcomes is illustrated using data from placebo trial patients from the Virtual International Stroke Trials Archive. The size of an effect on lesion volume that would lead to a clinically relevant treatment effect in terms of a measure, such as modified Rankin score (mRS), is found. The sample size to detect that magnitude of effect on lesion volume is then calculated. Simulation is used to evaluate different criteria for proceeding from phase II to phase III. Results-The odds ratios for mRS correspond roughly to the square root of odds ratios for lesion volume, implying that for equivalent power specifications, sample sizes based on lesion volumes should be about one fourth of those based on mRS. Relaxation of power requirements, appropriate for phase II, leads to further sample size reductions. For example, a phase III trial comparing a novel treatment with placebo with a total sample size of 1518 patients might be motivated from a phase II trial of 126 patients comparing the same 2 treatment arms. Discussion-Definitive phase III trials in stroke should aim to demonstrate significant effects of treatment on clinical outcomes. However, more direct outcomes such as lesion volume can be useful in phase II for determining whether such phase III trials should be undertaken in the first place. (Stroke. 2009;40:1347-1352.)
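The "one fourth" result follows from the usual proportionality of sample size to 1/log(OR)^2: if the clinical odds ratio is roughly the square root of the lesion-volume odds ratio, the log odds ratio doubles and the required n falls by a factor of four. A quick check of this arithmetic:

```python
import math

def lesion_to_clinical_n_ratio(or_clinical):
    """Ratio n_lesion / n_clinical when sample size is proportional to
    1 / log(OR)^2 and OR_clinical ~ sqrt(OR_lesion)."""
    or_lesion = or_clinical ** 2                   # squaring the odds ratio
    return (math.log(or_clinical) / math.log(or_lesion)) ** 2

ratio = lesion_to_clinical_n_ratio(1.4)
```

The ratio is exactly one fourth whatever the odds ratio, since log(OR^2) = 2 log(OR); the further drop from about 380 to the 126 patients quoted comes from relaxing the power requirement for phase II.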
Abstract:
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail. Copyright (C) 2008 John Wiley & Sons, Ltd.
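The sample size review motivated above amounts to recomputing the standard normal-approximation formula with the interim estimate of the standard deviation. A sketch with illustrative numbers (a target difference of 1.0 units, a design-stage SD guess of 2.0 revised upward to 2.5), not the diabetic neuropathic pain trial's values:

```python
import math

def n_per_group(delta, sd):
    """Normal-approximation sample size per group for a two-arm comparison
    of means: n = 2 (z_0.975 + z_0.90)^2 sd^2 / delta^2
    (two-sided 5% significance, 90% power)."""
    z = 1.959963984540054 + 1.2815515655446004
    return math.ceil(2.0 * z * z * sd * sd / (delta * delta))

planned = n_per_group(1.0, 2.0)  # design-stage guess of the SD
revised = n_per_group(1.0, 2.5)  # SD re-estimated at the interim look
```

A modest upward revision of the SD inflates the required n substantially (it scales with sd squared), which is why a mid-course review can be worthwhile when the SD is uncertain.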