905 results for interim
Abstract:
This paper reflects on the challenges facing the effective implementation of the new EU fundamental rights architecture that emerged from the Lisbon Treaty. Particular attention is paid to the role of the Court of Justice of the European Union (CJEU) and its ability to function as a ‘fundamental rights tribunal’. The paper first analyses the praxis of the European Court of Human Rights in Strasbourg and its long-standing experience in overseeing the practical implementation of the European Convention for the Protection of Human Rights and Fundamental Freedoms. Against this background, it then examines the readiness of the CJEU to live up to its consolidated and strengthened mandate on fundamental rights as one of the prime guarantors of the effective implementation of the EU Charter of Fundamental Rights. We specifically review the role of ‘third-party interventions’ by non-governmental organisations and international and regional human rights actors, as well as that of ‘interim relief measures’, in ensuring effective judicial protection of vulnerable individuals in cases of alleged violations of fundamental human rights. To flesh out our arguments, we rely on examples within the scope of a relatively new and complex domain of EU legislation, the Area of Freedom, Security and Justice (AFSJ), and its immigration, external border and asylum policies. In view of the fundamental rights-sensitive nature of these domains, which often encounter shifts of accountability and responsibility in their practical application, and the Lisbon Treaty’s expansion of the jurisdiction of the CJEU to interpret and review EU AFSJ legislation, this area can be seen as an excellent test case for the analyses at hand. The final section puts forth a set of policy suggestions that can assist the CJEU in the process of adjusting itself to the new fundamental rights context in a post-Lisbon Treaty setting.
Abstract:
To identify the causes of population decline in migratory birds, researchers must determine the relative influence of environmental changes on population dynamics while the birds are on breeding grounds, wintering grounds, and en route between the two. This is problematic when the wintering areas of specific populations are unknown. Here, we first identified the putative wintering areas of Common House-Martin (Delichon urbicum) and Common Swift (Apus apus) populations breeding in northern Italy as those areas, within the wintering ranges of these species, where the winter Normalized Difference Vegetation Index (NDVI), which may affect winter survival, best predicted annual variation in population indices observed in the breeding grounds in 1992–2009. In these analyses, we controlled for the potentially confounding effects of rainfall in the breeding grounds during the previous year, which may affect reproductive success; the North Atlantic Oscillation Index (NAO), which may account for climatic conditions faced by birds during migration; and the linear and squared term of year, which account for nonlinear population trends. The areas thus identified ranged from Guinea to Nigeria for the Common House-Martin, and were located in southern Ghana for the Common Swift. We then regressed annual population indices on mean NDVI values in the putative wintering areas and on the other variables, and used Bayesian model averaging (BMA) and hierarchical partitioning (HP) of variance to assess their relative contribution to population dynamics. We re-ran all the analyses using NDVI values at different spatial scales, and consistently found that our population of Common House-Martin was primarily affected by spring rainfall (43%–47.7% explained variance) and NDVI (24%–26.9%), while the Common Swift population was primarily affected by the NDVI (22.7%–34.8%). Although these results must be further validated, currently they are the only hypotheses about the wintering grounds of the Italian populations of these species, as no Common House-Martin and Common Swift ringed in Italy have been recovered in their wintering ranges.
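As an informal illustration of the variance-partitioning step described above, the following Python sketch regresses a synthetic annual population index on winter NDVI, spring rainfall, the NAO and a year term, and apportions the explained variance among the predictors by hierarchical partitioning. The variable names and data are illustrative assumptions, not the study's data or code.

```python
# Hierarchical partitioning of explained variance (Chevan & Sutherland style),
# sketched on synthetic data resembling the predictors named in the abstract.
from itertools import combinations
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1992, 2010)
X = {
    "ndvi":     rng.normal(size=years.size),   # winter NDVI in the putative wintering area
    "rainfall": rng.normal(size=years.size),   # spring rainfall on the breeding grounds
    "nao":      rng.normal(size=years.size),   # North Atlantic Oscillation index
    "year":     (years - years.mean()) / years.std(),
}
y = 0.5 * X["rainfall"] + 0.3 * X["ndvi"] + rng.normal(scale=0.5, size=years.size)

def r2(predictors):
    """R^2 of an OLS fit of y on an intercept plus the named predictors."""
    A = np.column_stack([np.ones_like(y)] + [X[p] for p in predictors])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    return 1.0 - resid.var() / y.var()

def hierarchical_partitioning(names):
    """Average independent contribution of each predictor to the explained variance."""
    contrib = {}
    for name in names:
        others = [n for n in names if n != name]
        gains_by_size = []
        for k in range(len(others) + 1):
            gains = [r2(list(S) + [name]) - r2(list(S)) for S in combinations(others, k)]
            gains_by_size.append(np.mean(gains))
        contrib[name] = np.mean(gains_by_size)
    return contrib

for name, share in hierarchical_partitioning(list(X)).items():
    print(f"{name:>8s}: independent contribution = {share:.3f}")
```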
Abstract:
Variability in aspects of the hydrological cycle over the Europe-Atlantic region during the summer season is analysed for the period 1979-2007, using observational estimates, reanalyses and climate model simulations. Warming and moistening trends are evident in observations and models, although decadal changes in water vapour are not well represented by reanalyses, including the new European Centre for Medium-Range Weather Forecasts (ECMWF) Interim reanalysis. Over the north Atlantic and northern Europe, observed water vapour trends are close to those expected from the temperature trends and the Clausius-Clapeyron equation (7% K⁻¹), and larger than in the model simulations. Precipitation over Europe is dominated by large-scale dynamics, with positive phases of the North Atlantic Oscillation coinciding with drier conditions over north Europe and wetter conditions over the Mediterranean region. Evaporation trends over Europe are positive in reanalyses and models, especially for the Mediterranean region (1-3% per decade in reanalyses and climate models). Over the north Atlantic, declining precipitation combined with increased moisture contributed to an apparent rise in water vapour residence time. Maximum precipitation minus evaporation over the north Atlantic occurred during summer 1991, declining thereafter.
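For readers unfamiliar with the quoted ~7% K⁻¹ figure, the short sketch below (an illustration, not part of the study) evaluates the Clausius-Clapeyron scaling of saturation vapour pressure, d(ln e_s)/dT = L / (R_v T²), at a few temperatures to show where that rate comes from.

```python
# Fractional increase of saturation vapour pressure per kelvin from the
# Clausius-Clapeyron relation, using approximate constants.
L_v = 2.5e6      # latent heat of vaporisation, J kg^-1 (approximate)
R_v = 461.5      # specific gas constant for water vapour, J kg^-1 K^-1

for T in (273.15, 288.15, 303.15):            # 0, 15 and 30 degrees C
    rate = L_v / (R_v * T**2)                 # d(ln e_s)/dT, per kelvin
    print(f"T = {T - 273.15:4.1f} C: {100 * rate:.1f}% per K")
```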
Abstract:
Particle size distribution (psd) is one of the most important features of the soil because it affects many of its other properties, and it determines how soil should be managed. To understand the properties of chalk soil, psd analyses should be based on the original material (including carbonates), and not just the acid-resistant fraction. Laser-based methods rather than traditional sedimentation methods are being used increasingly to determine particle size to reduce the cost of analysis. We give an overview of both approaches and the problems associated with them for analyzing the psd of chalk soil. In particular, we show that it is not appropriate to use the widely adopted 8 µm boundary between the clay and silt size fractions for samples determined by laser to estimate proportions of these size fractions that are equivalent to those based on sedimentation. We present data from field and national-scale surveys of soil derived from chalk in England. Results from both types of survey showed that laser methods tend to over-estimate the clay-size fraction compared to sedimentation for the 8 µm clay/silt boundary, and we suggest reasons for this. For soil derived from chalk, either the sedimentation methods need to be modified or it would be more appropriate to use a 4 µm threshold as an interim solution for laser methods. Correlations between the proportions of sand- and clay-sized fractions, and other properties such as organic matter and volumetric water content, were the opposite of what one would expect for soil dominated by silicate minerals. For water content, this appeared to be due to the predominance of porous chalk fragments in the sand-sized fraction rather than quartz grains, and the abundance of fine (<2 µm) calcite crystals rather than phyllosilicates in the clay-sized fraction. This was confirmed by scanning electron microscope (SEM) analyses. "Of all the rocks with which I am acquainted, there is none whose formation seems to tax the ingenuity of theorists so severely, as the chalk, in whatever respect we may think fit to consider it." Thomas Allan, FRS Edinburgh 1823, Transactions of the Royal Society of Edinburgh. (C) 2009 Natural Environment Research Council (NERC). Published by Elsevier B.V. All rights reserved.
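The following sketch, using a synthetic cumulative particle size distribution rather than the survey data, illustrates why the choice of clay/silt boundary matters when interpreting laser-derived distributions: the clay-size fraction read off at 8 µm is necessarily larger than at 4 µm or 2 µm.

```python
# Estimated clay-size fraction as a function of the assumed clay/silt boundary,
# interpolated from a synthetic cumulative distribution of the kind a laser
# granulometer reports.
import numpy as np

diameters = np.array([0.5, 1, 2, 4, 8, 16, 31, 63, 125, 250, 500, 1000, 2000])  # micrometres
cum_finer = np.array([6, 11, 18, 27, 38, 52, 64, 75, 84, 91, 96, 99, 100])      # % finer

def clay_fraction(boundary_um):
    """Per cent of material finer than the given boundary, interpolated on a log scale."""
    return np.interp(np.log10(boundary_um), np.log10(diameters), cum_finer)

for boundary in (2, 4, 8):
    print(f"clay/silt boundary {boundary} um -> clay-size fraction {clay_fraction(boundary):.1f}%")
```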
Abstract:
This paper introduces a simple futility design that allows a comparative clinical trial to be stopped due to lack of effect at any of a series of planned interim analyses. Stopping due to apparent benefit is not permitted. The design is for use when any positive claim should be based on the maximum sample size, for example to allow subgroup analyses or the evaluation of safety or secondary efficacy responses. A final frequentist analysis can be performed that is valid for the type of design employed. Here the design is described and its properties are presented. Its advantages and disadvantages relative to the use of stochastic curtailment are discussed. Copyright (C) 2003 John Wiley & Sons, Ltd.
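A minimal simulation sketch of a futility-only monitoring rule of the general kind described, in which stopping is permitted only for lack of effect and any positive claim waits for the full sample size, is given below. The boundary value, number of looks and data are assumptions for illustration, not the paper's design, and the naive final critical value of 1.96 ignores the (conservative) effect of futility stopping.

```python
# Simulate a two-arm trial with futility-only interim analyses: stop early if the
# accumulating z-statistic falls below a futility boundary; efficacy can only be
# claimed at the final analysis on the maximum sample size.
import numpy as np

rng = np.random.default_rng(1)

def run_trial(delta, n_per_arm=200, looks=(0.25, 0.5, 0.75), futility_z=0.0):
    treat = rng.normal(delta, 1.0, n_per_arm)
    ctrl = rng.normal(0.0, 1.0, n_per_arm)
    for frac in looks:
        n = int(frac * n_per_arm)
        z = (treat[:n].mean() - ctrl[:n].mean()) / np.sqrt(2.0 / n)
        if z < futility_z:
            return "stopped for futility"
    z_final = (treat.mean() - ctrl.mean()) / np.sqrt(2.0 / n_per_arm)
    return "efficacy claimed" if z_final > 1.96 else "completed, no effect shown"

results = [run_trial(delta=0.0) for _ in range(2000)]
print("under no treatment effect:", {r: results.count(r) for r in set(results)})
```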
Abstract:
In clinical trials, situations often arise where more than one response from each patient is of interest, and it is required that any decision to stop the study be based upon some or all of these measures simultaneously. Theory for the design of sequential experiments with simultaneous bivariate responses is described by Jennison and Turnbull (Jennison, C., Turnbull, B. W. (1993). Group sequential tests for bivariate response: interim analyses of clinical trials with both efficacy and safety endpoints. Biometrics 49:741-752) and Cook and Farewell (Cook, R. J., Farewell, V. T. (1994). Guidelines for monitoring efficacy and toxicity responses in clinical trials. Biometrics 50:1146-1152) in the context of one efficacy and one safety response. These expositions are in terms of normally distributed data with known covariance. The methods proposed require specification of the correlation, ρ, between test statistics monitored as part of the sequential test. It can be difficult to quantify ρ, and previous authors have suggested simply taking the lowest plausible value, as this will guarantee power. This paper begins with an illustration of the effect that inappropriate specification of ρ can have on the preservation of trial error rates. It is shown that both the type I error and the power can be adversely affected. As a possible solution to this problem, formulas are provided for the calculation of correlation from data collected as part of the trial. An adaptive approach is proposed and evaluated that makes use of these formulas, and an example is provided to illustrate the method. Attention is restricted to the bivariate case for ease of computation, although the formulas derived are applicable in the general multivariate case.
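The sketch below illustrates, under assumed bivariate-normal responses, the basic idea of estimating the correlation from accumulating trial data rather than fixing a "lowest plausible" value in advance; the estimate could then feed back into the sequential boundary calculation. It is not the authors' formulas.

```python
# Interim estimation of the correlation between an efficacy and a safety measure
# from the patient-level data accrued so far.
import numpy as np

rng = np.random.default_rng(2)
true_rho = 0.6
cov = np.array([[1.0, true_rho], [true_rho, 1.0]])

# (efficacy, safety) responses observed so far in the trial
data = rng.multivariate_normal(mean=[0.3, 0.1], cov=cov, size=80)
efficacy, safety = data[:, 0], data[:, 1]

rho_hat = np.corrcoef(efficacy, safety)[0, 1]
print(f"interim estimate of rho: {rho_hat:.2f} (true value {true_rho})")
```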
Abstract:
A number of authors have proposed clinical trial designs involving the comparison of several experimental treatments with a control treatment in two or more stages. At the end of the first stage, the most promising experimental treatment is selected, and all other experimental treatments are dropped from the trial. Provided it is good enough, the selected experimental treatment is then compared with the control treatment in one or more subsequent stages. The analysis of data from such a trial is problematic because of the treatment selection and the possibility of stopping at interim analyses. These aspects lead to bias in the maximum-likelihood estimate of the advantage of the selected experimental treatment over the control and to inaccurate coverage for the associated confidence interval. In this paper, we evaluate the bias of the maximum-likelihood estimate and propose a bias-adjusted estimate. We also propose an approach to the construction of a confidence region for the vector of advantages of the experimental treatments over the control based on an ordering of the sample space. These regions are shown to have accurate coverage, although they are also shown to be necessarily unbounded. Confidence intervals for the advantage of the selected treatment are obtained from the confidence regions and are shown to have more accurate coverage than the standard confidence interval based upon the maximum-likelihood estimate and its asymptotic standard error.
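The following short simulation (an illustration, not the paper's method) shows the selection bias in question: when the best of several truly equivalent experimental arms is chosen at an interim look, the naive maximum-likelihood estimate of its advantage over control is biased upward.

```python
# Bias of the naive estimate of the selected treatment's advantage over control
# when the most promising of three (truly identical) experimental arms is chosen
# at the end of stage 1.
import numpy as np

rng = np.random.default_rng(3)
true_effects = np.array([0.0, 0.0, 0.0])   # all experimental arms truly equal to control
n_stage1, n_sims = 50, 20000

bias_samples = []
for _ in range(n_sims):
    control = rng.normal(0.0, 1.0, n_stage1).mean()
    arm_means = rng.normal(true_effects, 1.0 / np.sqrt(n_stage1))  # stage-1 arm means
    best = arm_means.argmax()                                      # treatment selection
    naive_estimate = arm_means[best] - control                     # MLE of the advantage
    bias_samples.append(naive_estimate - true_effects[best])

print(f"mean bias of the naive estimate: {np.mean(bias_samples):.3f}")
```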
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of the sample size required to give specified power. Copyright © 2003 John Wiley & Sons, Ltd.
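For concreteness, the sketch below shows one common formulation of the efficient score and Fisher information for binary data (the "observed minus expected" form, with the log odds ratio estimated by Z/V); it is an assumed illustration of the general idea rather than the paper's exact derivation.

```python
# Efficient score Z and Fisher information V for the log odds ratio comparing
# an experimental arm with a control arm on a binary response.
def binary_score_statistics(s_exp, n_exp, s_ctl, n_ctl):
    """Return (Z, V) for the log odds ratio under the null hypothesis."""
    s, n = s_exp + s_ctl, n_exp + n_ctl
    z = s_exp - n_exp * s / n                      # observed minus expected successes
    v = n_exp * n_ctl * s * (n - s) / n**3         # null variance of Z
    return z, v

z, v = binary_score_statistics(s_exp=30, n_exp=50, s_ctl=20, n_ctl=50)
print(f"Z = {z:.2f}, V = {v:.2f}, approximate log odds ratio = {z / v:.2f}")
```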
The sequential analysis of repeated binary responses: a score test for the case of three time points
Abstract:
In this paper a robust method is developed for the analysis of data consisting of repeated binary observations taken at up to three fixed time points on each subject. The primary objective is to compare outcomes at the last time point, using earlier observations to predict this for subjects with incomplete records. A score test is derived. The method is developed for application to sequential clinical trials, as at interim analyses there will be many incomplete records occurring in non-informative patterns. Motivation for the methodology comes from experience with clinical trials in stroke and head injury, and data from one such trial is used to illustrate the approach. Extensions to more than three time points and to allow for stratification are discussed. Copyright © 2005 John Wiley & Sons, Ltd.
Abstract:
Sequential methods provide a formal framework by which clinical trial data can be monitored as they accumulate. The results from interim analyses can be used either to modify the design of the remainder of the trial or to stop the trial as soon as sufficient evidence of either the presence or absence of a treatment effect is available. The circumstances under which the trial will be stopped with a claim of superiority for the experimental treatment must, however, be determined in advance so as to control the overall type I error rate. One approach to calculating the stopping rule is the group-sequential method. A relatively recent alternative to group-sequential approaches is the adaptive design method. This latter approach provides considerable flexibility in changes to the design of a clinical trial at an interim point. However, a criticism is that the method by which evidence from different parts of the trial is combined means that a final comparison of treatments is not based on a sufficient statistic for the treatment difference, suggesting that the method may lack power. The aim of this paper is to compare two adaptive design approaches with the group-sequential approach. We first compare the form of the stopping boundaries obtained using the different methods. We then focus on a comparison of the power of the different trials when they are designed so as to be as similar as possible. We conclude that all methods acceptably control the type I error rate and power when the sample size is modified based on a variance estimate, provided no interim analysis is so small that the asymptotic properties of the test statistic no longer hold. In the latter case, the group-sequential approach is to be preferred. Provided that asymptotic assumptions hold, the adaptive design approaches control the type I error rate even if the sample size is adjusted on the basis of an estimate of the treatment effect, showing that the adaptive designs allow more modifications than the group-sequential method.
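The sketch below illustrates the evidence-combination step that distinguishes many adaptive designs from the group-sequential approach: stage-wise statistics are combined with pre-specified weights (here an inverse-normal combination rule), so the second-stage sample size can be changed at the interim without inflating the type I error rate, at the cost of the final test not being based on a sufficient statistic. The weights, sample sizes and data are illustrative assumptions, not those of the paper.

```python
# Inverse-normal combination of stage-wise z-statistics with pre-specified weights,
# a standard building block of adaptive two-stage designs.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def stage_z(n_per_arm, delta):
    """z-statistic comparing treatment and control means for one stage."""
    t = rng.normal(delta, 1.0, n_per_arm)
    c = rng.normal(0.0, 1.0, n_per_arm)
    return (t.mean() - c.mean()) / np.sqrt(2.0 / n_per_arm)

# Pre-planned weights based on the originally intended 50/50 information split.
w1 = w2 = np.sqrt(0.5)

z1 = stage_z(n_per_arm=60, delta=0.3)
# Suppose the interim results prompt a larger second stage; the weights are NOT
# changed, which is what preserves the type I error rate.
z2 = stage_z(n_per_arm=120, delta=0.3)

z_combined = w1 * z1 + w2 * z2
print(f"combined z = {z_combined:.2f}, one-sided p = {1 - norm.cdf(z_combined):.4f}")
```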
Abstract:
This report describes the concept for a clinical trial that uses carbamazepine as the gold-standard active control for a study of newly diagnosed patients. The authors describe an endpoint including efficacy and tolerability, and a stopping rule that uses a series of interim analyses in order to reach a conclusion as efficiently as possible without sacrificing reliability.
Abstract:
The International Citicoline Trial in acUte Stroke is a sequential phase III study of the use of the drug citicoline in the treatment of acute ischaemic stroke, which was initiated in 2006 in 56 treatment centres. The primary objective of the trial is to demonstrate improved recovery of patients randomized to citicoline relative to those randomized to placebo after 12 weeks of follow-up. The primary analysis will take the form of a global test combining the dichotomized results of assessments on three well-established scales: the Barthel Index, the modified Rankin scale and the National Institutes of Health Stroke Scale. This approach was previously used in the analysis of the influential National Institute of Neurological Disorders and Stroke trial of recombinant tissue plasminogen activator in stroke. The purpose of this paper is to describe how this trial was designed, and in particular how the simultaneous objectives of taking into account three assessment scales, performing a series of interim analyses and conducting treatment allocation and adjusting the analyses to account for prognostic factors, including more than 50 treatment centres, were addressed. Copyright (C) 2008 John Wiley & Sons, Ltd.
Abstract:
Two-stage designs offer substantial advantages for early phase II studies. The interim analysis following the first stage allows the study to be stopped for futility, or more positively, it might lead to early progression to the trials needed for late phase II and phase III. If the study is to continue to its second stage, then there is an opportunity for a revision of the total sample size. Two-stage designs have been implemented widely in oncology studies in which there is a single treatment arm and patient responses are binary. In this paper the case of two-arm comparative studies in which responses are quantitative is considered. This setting is common in therapeutic areas other than oncology. It will be assumed that observations are normally distributed, but that there is some doubt concerning their standard deviation, motivating the need for sample size review. The work reported has been motivated by a study in diabetic neuropathic pain, and the development of the design for that trial is described in detail. Copyright (C) 2008 John Wiley & Sons, Ltd.
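A minimal sketch of the kind of sample size review described follows: the standard deviation is re-estimated after the first stage and the total sample size for a two-arm comparison of means is recalculated. The numbers are illustrative assumptions, not those of the diabetic neuropathic pain trial.

```python
# Interim sample size re-estimation for a two-arm comparison of normally
# distributed responses, driven by a revised estimate of the standard deviation.
import numpy as np
from scipy.stats import norm

def n_per_arm(delta, sigma, alpha=0.05, power=0.9):
    """Per-arm sample size for a two-sided z-test comparing two means."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    return int(np.ceil(2 * ((z_a + z_b) * sigma / delta) ** 2))

delta = 1.0                                  # clinically relevant difference
planned = n_per_arm(delta, sigma=2.0)        # design assumption about the SD

rng = np.random.default_rng(5)
stage1 = rng.normal(0.0, 2.6, size=2 * 40)   # pooled first-stage observations
sigma_hat = stage1.std(ddof=1)               # interim estimate of the SD

revised = n_per_arm(delta, sigma_hat)
print(f"planned n/arm = {planned}, interim SD = {sigma_hat:.2f}, revised n/arm = {revised}")
```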
Abstract:
New high technology products usher in novel possibilities to transform the design, production and use of buildings. The high technology companies which design, develop and introduce these new products by generating and applying novel scientific and technical knowledge are faced with significant market uncertainty, technological uncertainty and competitive volatility. These characteristics present unique innovation challenges compared to low- and medium-technology companies. This paper reports on an ongoing Construction Knowledge Exchange funded project which is tracking, in real time, the new product development process of a new family of light-emitting diode (LED) technologies. LEDs offer significant functional and environmental performance improvements over incumbent tungsten and halogen lamps. Hitherto, the use of energy-efficient, low-maintenance LEDs has been constrained by technical limitations. Rapid improvements in basic science and technology mean that for the first time LEDs can provide realistic general and accent lighting solutions. Interim results will be presented on the complex, emergent new high technology product development processes which are being revealed by the integrated supply chain of an LED module manufacturer, a luminaire (light fitting) manufacturer and an end user involved in the project.
Abstract:
A series of model experiments with the coupled Max-Planck-Institute ECHAM5/OM climate model have been investigated and compared with microwave measurements from the Microwave Sounding Unit (MSU) and re-analysis data for the period 1979–2008. The evaluation is carried out by computing the Temperature in the Lower Troposphere (TLT) and Temperature in the Middle Troposphere (TMT) using the MSU weights from both the University of Alabama in Huntsville (UAH) and Remote Sensing Systems (RSS), and restricting the study primarily to the tropical oceans. When forced by analysed sea surface temperature the model reproduces accurately the time-evolution of the mean outgoing tropospheric microwave radiation, especially over tropical oceans, but with a minor bias towards higher temperatures in the upper troposphere. The latest reanalysis data from the 25-year Japanese re-analysis (JRA25) and the European Centre for Medium-Range Weather Forecasts Interim reanalysis are in very close agreement with the time-evolution of the MSU data, with correlations of 0.98 and 0.96, respectively. The re-analysis trends are similar to the trends obtained from UAH but smaller than the trends from RSS. Comparison of TLT, computed from observations from UAH and RSS, with sea surface temperature indicates that RSS has a warm bias after 1993. In order to identify the significance of the tropospheric linear temperature trends, we determined the natural variability of 30-year trends from a 500-year control integration of the coupled ECHAM5 model. The model exhibits natural unforced variations of the 30-year tropospheric trend that vary within ±0.2 K/decade for the tropical oceans. This general result is supported by similar results from the Geophysical Fluid Dynamics Laboratory (GFDL) coupled climate model. Present MSU observations from UAH for the period 1979–2008 are well within this range, but RSS is close to the upper positive limit of this variability. We have also compared the trend of the vertical lapse rate over the tropical oceans, assuming that the difference between TLT and TMT is an approximate measure of the lapse rate. The TLT–TMT trend is larger in both the measurements and in the JRA25 than in the model runs, by 0.04–0.06 K/decade. Furthermore, all 30-year TLT–TMT trends of the unforced 500-year integration vary within ±0.03 K/decade, suggesting that the models have a minor systematic warm bias in the upper troposphere.
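The sketch below illustrates the general approach of characterising natural variability of 30-year trends from a long unforced control run, using synthetic red noise in place of ECHAM5 output; the window length, noise parameters and percentile summary are illustrative assumptions.

```python
# Distribution of 30-year linear trends along a long synthetic "control run",
# used to gauge how large a trend can arise from unforced variability alone.
import numpy as np

rng = np.random.default_rng(6)

# Synthetic 500-year annual series with persistence (AR(1) red noise).
n_years, phi = 500, 0.6
noise = rng.normal(0.0, 0.1, n_years)
series = np.zeros(n_years)
for t in range(1, n_years):
    series[t] = phi * series[t - 1] + noise[t]

window = 30
trends = []
for start in range(n_years - window + 1):
    seg = series[start:start + window]
    slope = np.polyfit(np.arange(window), seg, 1)[0]   # K per year
    trends.append(10 * slope)                          # K per decade

trends = np.array(trends)
print(f"30-year trends, 2.5-97.5 percentile range: "
      f"[{np.percentile(trends, 2.5):+.2f}, {np.percentile(trends, 97.5):+.2f}] K/decade")
```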