851 results for statistical methods


Relevance: 60.00%

Abstract:

Background. Meta-analyses show that cognitive behaviour therapy for psychosis (CBT-P) improves distressing positive symptoms. However, it is a complex intervention involving a range of techniques. No previous study has assessed the delivery of the different elements of treatment and their effect on outcome. Our aim was to assess the differential effect of type of treatment delivered on the effectiveness of CBT-P, using novel statistical methodology. Method. The Psychological Prevention of Relapse in Psychosis (PRP) trial was a multi-centre randomized controlled trial (RCT) that compared CBT-P with treatment as usual (TAU). Therapy was manualized, and detailed evaluations of therapy delivery and client engagement were made. Follow-up assessments were made at 12 and 24 months. In a planned analysis, we applied principal stratification (involving structural equation modelling with finite mixtures) to estimate intention-to-treat (ITT) effects for subgroups of participants, defined by qualitative and quantitative differences in receipt of therapy, while maintaining the constraints of randomization. Results. Consistent delivery of full therapy, including specific cognitive and behavioural techniques, was associated with clinically and statistically significant increases in months in remission, and decreases in psychotic and affective symptoms. Delivery of partial therapy involving engagement and assessment was not effective. Conclusions. Our analyses suggest that CBT-P is of significant benefit on multiple outcomes to patients able to engage in the full range of therapy procedures. The novel statistical methods illustrated in this report have general application to the evaluation of heterogeneity in the effects of treatment.

Relevance: 60.00%

Abstract:

In recent years, there has been a drive to save development costs and shorten time-to-market of new therapies. Research into novel trial designs to facilitate this goal has led to, amongst other approaches, the development of methodology for seamless phase II/III designs. Such designs allow treatment or dose selection at an interim analysis and comparative evaluation of efficacy with control, in the same study. These methods have gained much attention because of their potential advantages compared to conventional drug development programmes with separate trials for individual phases. In this article, we review the various approaches to seamless phase II/III designs based upon the group-sequential approach, the combination test approach and the adaptive Dunnett method. The objective of this article is to describe the approaches in a unified framework and highlight their similarities and differences, allowing a trialist considering such a trial to choose an appropriate methodology.
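As a concrete illustration of the combination test approach, the inverse-normal combination function merges stage-wise one-sided p-values using pre-specified weights. The sketch below is a minimal, generic version; the function name and the equal weights are illustrative choices, not taken from the article:

```python
from statistics import NormalDist
import math

def inverse_normal_combination(p1, p2, w1=0.5, w2=0.5):
    """Combine two stage-wise one-sided p-values with pre-specified
    weights w1, w2 via the inverse-normal combination function."""
    nd = NormalDist()
    # Map each stage-wise p-value to a standard normal z-score.
    z1, z2 = nd.inv_cdf(1 - p1), nd.inv_cdf(1 - p2)
    # Weighted combination is again standard normal under the null.
    z_comb = (math.sqrt(w1) * z1 + math.sqrt(w2) * z2) / math.sqrt(w1 + w2)
    return 1 - nd.cdf(z_comb)
```

With equally weighted stages, two p-values of 0.05 combine to roughly 0.01, reflecting the accumulation of evidence across stages while the pre-specified weights preserve type I error control even after adaptive changes at the interim analysis.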

Relevance: 60.00%

Abstract:

Elephant poaching and the ivory trade remain high on the agenda at meetings of the Convention on International Trade in Endangered Species of Wild Fauna and Flora (CITES). Well-informed debates require robust estimates of trends, the spatial distribution of poaching, and drivers of poaching. We present an analysis of trends and drivers of an indicator of elephant poaching of all elephant species. The site-based monitoring system known as Monitoring the Illegal Killing of Elephants (MIKE), set up by the 10th Conference of the Parties of CITES in 1997, produces carcass encounter data reported mainly by anti-poaching patrols. Data analyzed were site-by-year totals of 6,337 carcasses from 66 sites in Africa and Asia from 2002–2009. Analysis of these observational data poses a serious challenge to traditional statistical methods because of the opportunistic and non-random nature of patrols, and the heterogeneity across sites. Adopting a Bayesian hierarchical modeling approach, we used the proportion of carcasses that were illegally killed (PIKE) as a poaching index, to estimate the trend and the effects of site- and country-level factors associated with poaching. Important drivers of illegal killing that emerged at country level were poor governance and low levels of human development, and at site level, forest cover and area of the site in regions where human population density is low. After a drop from 2002, PIKE remained fairly constant from 2003 until 2006, after which it increased until 2008. The results for 2009 indicate a decline. Sites with PIKE ranging from the lowest to the highest were identified. The results of the analysis provide a sound information base for scientific evidence-based decision making in the CITES process.

Relevance: 60.00%

Abstract:

Statistical methods of inference typically require the likelihood function to be computable in a reasonable amount of time. The class of “likelihood-free” methods termed Approximate Bayesian Computation (ABC) is able to eliminate this requirement, replacing the evaluation of the likelihood with simulation from it. Likelihood-free methods have gained in efficiency and popularity in the past few years, following their integration with Markov Chain Monte Carlo (MCMC) and Sequential Monte Carlo (SMC) in order to better explore the parameter space. They have been applied primarily to estimating the parameters of a given model, but can also be used to compare models. Here we present novel likelihood-free approaches to model comparison, based upon the independent estimation of the evidence of each model under study. Key advantages of these approaches over previous techniques are that they allow the exploitation of MCMC or SMC algorithms for exploring the parameter space, and that they do not require a sampler able to mix between models. We validate the proposed methods using a simple exponential family problem before providing a realistic problem from human population genetics: the comparison of different demographic models based upon genetic data from the Y chromosome.
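The core idea can be sketched with a toy ABC rejection sampler: with equal prior model probabilities, the acceptance rate of each model approximates its evidence, and the ratio of acceptance rates approximates the Bayes factor. Everything below (the two models, the tolerance, the use of a single draw as summary statistic) is a hypothetical illustration, not the authors' algorithm:

```python
import random

def abc_evidence(simulate, observed, n_sims=20000, eps=0.5):
    """Estimate the model evidence as the acceptance rate of an ABC
    rejection sampler: the fraction of simulated data sets whose
    summary statistic falls within eps of the observed summary."""
    accepted = sum(
        1 for _ in range(n_sims)
        if abs(simulate() - observed) < eps
    )
    return accepted / n_sims

random.seed(1)

# Two hypothetical models for a single observed summary statistic:
# M1 simulates from Exponential(rate=1), M2 from Exponential(rate=1/3).
# A full analysis would also draw model parameters from their priors;
# fixed rates keep the sketch short.
observed = 3.0
ev_m1 = abc_evidence(lambda: random.expovariate(1.0), observed)
ev_m2 = abc_evidence(lambda: random.expovariate(1 / 3), observed)

# With equal prior model probabilities, the Bayes factor of M2 vs M1
# is the ratio of the estimated evidences.
bayes_factor = ev_m2 / ev_m1
```

Here M2, whose mean matches the observation, accepts more often and so accrues higher estimated evidence; the MCMC and SMC variants discussed in the abstract replace this naive rejection step with guided exploration of the parameter space.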

Relevance: 60.00%

Abstract:

The radiation of the mammals provides a 165-million-year test case for evolutionary theories of how species occupy and then fill ecological niches. It is widely assumed that species often diverge rapidly early in their evolution, and that this is followed by a longer, drawn-out period of slower evolutionary fine-tuning as natural selection fits organisms into an increasingly occupied niche space [1,2]. But recent studies have hinted that the process may not be so simple [3–5]. Here we apply statistical methods that automatically detect temporal shifts in the rate of evolution through time to a comprehensive mammalian phylogeny [6] and data set [7] of body sizes of 3,185 extant species. Unexpectedly, the majority of mammal species, including two of the most speciose orders (Rodentia and Chiroptera), have no history of substantial and sustained increases in the rates of evolution. Instead, a subset of the mammals has experienced an explosive increase (between 10- and 52-fold) in the rate of evolution along the single branch leading to the common ancestor of their monophyletic group (for example Chiroptera), followed by a quick return to lower or background levels. The remaining species are a taxonomically diverse assemblage showing a significant, sustained increase or decrease in their rates of evolution. These results necessarily decouple morphological diversification from speciation and suggest that the processes that give rise to the morphological diversity of a class of animals are far more free to vary than previously considered. Niches do not seem to fill up, and diversity seems to arise whenever, wherever and at whatever rate it is advantageous.

Relevance: 60.00%

Abstract:

The growing human population will require a significant increase in agricultural production. This challenge is made more difficult by the fact that changes in the climatic and environmental conditions under which crops are grown have resulted in the appearance of new diseases, whereas genetic changes within the pathogen have resulted in the loss of previously effective sources of resistance. To help meet this challenge, advanced genetic and statistical methods of analysis have been used to identify new resistance genes through global screens, and studies of plant-pathogen interactions have been undertaken to uncover the mechanisms by which disease resistance is achieved. The informed deployment of major, race-specific and partial, race-nonspecific resistance, either by conventional breeding or transgenic approaches, will enable the production of crop varieties with effective resistance without impacting on other agronomically important crop traits. Here, we review these recent advances and progress towards the ultimate goal of developing disease-resistant crops.

Relevance: 60.00%

Abstract:

Background: Association mapping, initially developed in human disease genetics, is now being applied to plant species. The model species Arabidopsis provided some of the first examples of association mapping in plants, identifying previously cloned flowering time genes, despite high population sub-structure. More recently, association genetics has been applied to barley, where breeding activity has resulted in a high degree of population sub-structure. A major genotypic division within barley is that between winter- and spring-sown varieties, which differ in their requirement for vernalization to promote subsequent flowering. To date, all attempts to validate association genetics in barley by identifying major flowering time loci that control vernalization requirement (VRN-H1 and VRN-H2) have failed. Here, we validate the use of association genetics in barley by identifying VRN-H1 and VRN-H2, despite their prominent role in determining population sub-structure. Results: By taking barley as a typical inbreeding crop, and seasonal growth habit as a major partitioning phenotype, we develop an association mapping approach which successfully identifies VRN-H1 and VRN-H2, the underlying loci largely responsible for this agronomic division. We find that a combination of Structured Association followed by Genomic Control, correcting for population structure and inflation of the test statistic, resolved significant associations only with VRN-H1 and the VRN-H2 candidate genes, as well as two genes closely linked to VRN-H1 (HvCSFs1 and HvPHYC). Conclusion: We show that, after employing appropriate statistical methods to correct for population sub-structure, the genome-wide partitioning effect of allelic status at VRN-H1 and VRN-H2 does not result in the high levels of spurious association expected to occur in highly structured samples. Furthermore, we demonstrate that both VRN-H1 and the candidate VRN-H2 genes can be identified using association mapping.
Discrimination between intragenic VRN-H1 markers was achieved, indicating that candidate causative polymorphisms may be discerned and prioritised within a larger set of positive associations. This proof of concept study demonstrates the feasibility of association mapping in barley, even within highly structured populations. A major advantage of this method is that it does not require large numbers of genome-wide markers, and is therefore suitable for fine mapping and candidate gene evaluation, especially in species for which large numbers of genetic markers are either unavailable or too costly.
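The Genomic Control step mentioned above can be sketched briefly: the inflation factor lambda is estimated as the ratio of the median observed association statistic to the theoretical median of a chi-squared distribution with one degree of freedom, and every test statistic is divided by it. The numbers below are hypothetical:

```python
import statistics

CHI2_1DF_MEDIAN = 0.4549  # median of the chi-squared distribution, 1 df

def genomic_control(chi2_stats):
    """Rescale association test statistics by the genomic inflation
    factor lambda, estimated from the median observed statistic.
    Lambda is floored at 1 so statistics are never inflated."""
    lam = max(statistics.median(chi2_stats) / CHI2_1DF_MEDIAN, 1.0)
    return [s / lam for s in chi2_stats], lam

# Hypothetical inflated statistics: null values scaled by a factor
# of 1.5 by population structure, plus one genuine association (9.0).
raw = [0.1, 0.3, 0.6822, 1.2, 9.0]  # median 0.6822 = 1.5 * 0.4549
corrected, lam = genomic_control(raw)
```

Because structure inflates all statistics roughly uniformly while a true association stands far above the bulk, dividing by lambda restores the null statistics to their expected scale without erasing the genuine signal.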

Relevance: 60.00%

Abstract:

Background The persistence of rural-urban disparities in child nutrition outcomes in developing countries alongside rapid urbanisation and increasing incidence of child malnutrition in urban areas raises an important health policy question: whether fundamentally different nutrition policies and interventions are required in rural and urban areas. Addressing this question requires an enhanced understanding of the main drivers of rural-urban disparities in child nutrition outcomes especially for the vulnerable segments of the population. This study applies recently developed statistical methods to quantify the contribution of different socio-economic determinants to rural-urban differences in child nutrition outcomes in two South Asian countries – Bangladesh and Nepal. Methods Using DHS data sets for Bangladesh and Nepal, we apply quantile regression-based counterfactual decomposition methods to quantify the contribution of (1) the differences in levels of socio-economic determinants (covariate effects) and (2) the differences in the strength of association between socio-economic determinants and child nutrition outcomes (coefficient effects) to the observed rural-urban disparities in child HAZ scores. The methodology employed in the study allows the covariate and coefficient effects to vary across the entire distribution of child nutrition outcomes. This is particularly useful in providing specific insights into factors influencing rural-urban disparities at the lower tails of child HAZ score distributions. It also helps assess the importance of individual determinants and how they vary across the distribution of HAZ scores. Results There are no fundamental differences in the characteristics that determine child nutrition outcomes in urban and rural areas.
Differences in the levels of a limited number of socio-economic characteristics – maternal education, spouse’s education and the wealth index (incorporating household asset ownership and access to drinking water and sanitation) – contribute a major share of rural-urban disparities in the lowest quantiles of child nutrition outcomes. Differences in the strength of association between socio-economic characteristics and child nutrition outcomes account for less than a quarter of rural-urban disparities at the lower end of the HAZ score distribution. Conclusions Public health interventions aimed at overcoming rural-urban disparities in child nutrition outcomes need to focus principally on bridging gaps in socio-economic endowments of rural and urban households and improving the quality of rural infrastructure. Improving child nutrition outcomes in developing countries does not call for fundamentally different approaches to public health interventions in rural and urban areas.
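At the mean, the covariate/coefficient split used in this kind of analysis reduces to the classic Oaxaca-Blinder decomposition; the quantile-based version applies the same logic across the outcome distribution. A mean-level sketch with entirely hypothetical data (variable names and effect sizes invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def ols(x, y):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Hypothetical data: HAZ score driven by maternal education, with urban
# households having both more education (a covariate gap) and a stronger
# return to education (a coefficient gap).
edu_urban = rng.normal(8.0, 2.0, 5000)
edu_rural = rng.normal(5.0, 2.0, 5000)
haz_urban = -1.0 + 0.10 * edu_urban + rng.normal(0, 0.5, 5000)
haz_rural = -1.0 + 0.06 * edu_rural + rng.normal(0, 0.5, 5000)

b_u = ols(edu_urban, haz_urban)
b_r = ols(edu_rural, haz_rural)

gap = haz_urban.mean() - haz_rural.mean()
# Covariate effect: the education gap, valued at urban returns.
covariate_effect = b_u[1] * (edu_urban.mean() - edu_rural.mean())
# Coefficient effect: the gap in returns, evaluated at rural education.
coefficient_effect = (b_u[0] - b_r[0]) + (b_u[1] - b_r[1]) * edu_rural.mean()
```

The two components sum exactly to the mean outcome gap; the quantile counterfactual methods in the paper perform an analogous split at each quantile of the HAZ distribution rather than only at the mean.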

Relevance: 60.00%

Abstract:

Airborne high resolution in situ measurements of a large set of trace gases including ozone (O3) and total water (H2O) in the upper troposphere and the lowermost stratosphere (UT/LMS) have been performed above Europe within the SPURT project. SPURT provides an extensive data coverage of the UT/LMS in each season within the time period between November 2001 and July 2003. In the LMS a distinct spring maximum and autumn minimum is observed in O3, whereas its annual cycle in the UT is shifted 2–3 months later, towards the end of the year. The more variable H2O measurements reveal a maximum during summer and a minimum during autumn/winter with no phase shift between the two atmospheric compartments. For a comprehensive insight into trace gas composition and variability in the UT/LMS several statistical methods are applied using chemical, thermal and dynamical vertical coordinates. In particular, 2-dimensional probability distribution functions serve as a tool to transform localised aircraft data to a more comprehensive view of the probed atmospheric region. It appears that both trace gases, O3 and H2O, reveal the most compact arrangement and are best correlated in the view of potential vorticity (PV) and distance to the local tropopause, indicating an advanced mixing state on these surfaces. Thus, strong gradients of PV seem to act as a transport barrier both in the vertical and the horizontal direction. The alignment of trace gas isopleths reflects the existence of a year-round extra-tropical tropopause transition layer. The SPURT measurements reveal that this layer is mainly affected by stratospheric air during winter/spring and by tropospheric air during autumn/summer. Normalised mixing entropy values for O3 and H2O in the LMS appear to be maximal during spring and summer, respectively, indicating highest variability of these trace gases during the respective seasons.

Relevance: 60.00%

Abstract:

The Homeric epics are among the greatest masterpieces of literature, but when they were produced is not known with certainty. Here we apply evolutionary-linguistic phylogenetic statistical methods to differences in Homeric, Modern Greek and ancient Hittite vocabulary items to estimate a date of approximately 710–760 BCE for these great works. Our analysis compared a common set of vocabulary items among the three pairs of languages, recording for each item whether the words in the two languages were cognate – derived from a shared ancestral word – or not. We then used a likelihood-based Markov chain Monte Carlo procedure to estimate the most probable times in years separating these languages given the percentage of words they shared, combined with knowledge of the rates at which different words change. Our date for the epics is in close agreement with historians' and classicists' beliefs derived from historical and archaeological sources.
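A drastically simplified version of the underlying logic: if each vocabulary item is independently replaced at a constant rate on both lineages, the expected shared-cognate fraction decays exponentially with separation time, and this can be inverted for a point estimate. The rate and cognate share below are hypothetical, and the paper's actual likelihood-based MCMC additionally handles variation in replacement rates across words:

```python
import math

def divergence_time(shared_fraction, loss_rate_per_millennium):
    """Point estimate of the time separating two languages, assuming
    each vocabulary item is independently replaced at a constant rate r
    on both lineages, so the expected shared-cognate fraction is
    exp(-2 * r * t)."""
    return -math.log(shared_fraction) / (2 * loss_rate_per_millennium)

# Hypothetical numbers: 60% shared cognates, 10% vocabulary loss per
# lineage per millennium.
t = divergence_time(0.60, 0.10)  # millennia since the common stage
```

Under these invented inputs the estimate is roughly 2.5 millennia of separation; the factor of two in the exponent reflects that replacement happens independently on both lineages after the split.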

Relevance: 60.00%

Abstract:

Market failure can be corrected using different regulatory approaches ranging from high to low intervention. Recently, classic regulations have been criticized as costly and economically irrational and thus policy makers are giving more consideration to soft regulatory techniques such as information remedies. However, despite the plethora of food information conveyed by different media there appears to be a lack of studies exploring how consumers evaluate this information and how trust towards publishers influences their choices for food information. In order to fill such a gap, this study investigates which topics are most relevant to consumers, who should disseminate trustworthy food information, and how communication should be conveyed and segmented. Primary data were collected both through qualitative (in-depth interviews and focus groups) and quantitative research (web and mail surveys). Attitudes, willingness to pay for food information and trust towards public and private sources conveying information through a new food magazine were assessed using both multivariate statistical methods and econometric analysis. The study shows that consumer attitudes towards food information topics can be summarized along three cognitive-affective dimensions: the agro-food system, enjoyment and wellness. Information related to health risks caused by nutritional disorders and food safety issues caused by bacteria and chemical substances is the most important for about 90% of respondents. Food information related to regulations and traditions is also considered important for more than two thirds of respondents, while information about food production and processing techniques, life style and food fads are considered less important by the majority of respondents. Trust towards food information disseminated by public bodies is higher than that observed for private bodies.
This behavior directly affects willingness to pay (WTP) for food information provided by public and private publishers when markets are shocked by a food safety incident. WTP for consumer association (€ 1.80) and the European Food Safety Authority (€ 1.30) are higher than WTP for the independent and food industry publishers which cluster around zero euro. Furthermore, trust towards the type of publisher also plays a key role in food information market segmentation together with socio-demographic and economic variables such as gender, age, presence of children and income. These findings invite policy makers to reflect on the possibility of using information remedies conveyed using trusted sources of information to specific segments of consumers as an interesting soft alternative to the classic way of regulating modern food markets.

Relevance: 60.00%

Abstract:

Human brain imaging techniques, such as Magnetic Resonance Imaging (MRI) or Diffusion Tensor Imaging (DTI), have been established as scientific and diagnostic tools and their adoption is growing in popularity. Statistical methods, machine learning and data mining algorithms have successfully been adopted to extract predictive and descriptive models from neuroimage data. However, the knowledge discovery process typically requires also the adoption of pre-processing, post-processing and visualisation techniques in complex data workflows. Currently, a main problem for the integrated preprocessing and mining of MRI data is the lack of comprehensive platforms able to avoid the manual invocation of preprocessing and mining tools, which makes the process error-prone and inefficient. In this work we present K-Surfer, a novel plug-in of the Konstanz Information Miner (KNIME) workbench, that automates the preprocessing of brain images and leverages the mining capabilities of KNIME in an integrated way. K-Surfer supports the importing, filtering, merging and pre-processing of neuroimage data from FreeSurfer, a tool for human brain MRI feature extraction and interpretation. K-Surfer automates the steps for importing FreeSurfer data, reducing time costs, eliminating human errors and enabling the design of complex analytics workflows for neuroimage data by leveraging the rich functionalities available in the KNIME workbench.

Relevance: 60.00%

Abstract:

This paper presents an approximate closed form sample size formula for determining non-inferiority in active-control trials with binary data. We use the odds-ratio as the measure of the relative treatment effect, derive the sample size formula based on the score test and compare it with a second, well-known formula based on the Wald test. Both closed form formulae are compared with simulations based on the likelihood ratio test. Within the range of parameter values investigated, the score test closed form formula is reasonably accurate when non-inferiority margins are based on odds-ratios of about 0.5 or above and when the magnitude of the odds ratio under the alternative hypothesis lies between about 1 and 2.5. The accuracy generally decreases as the odds ratio under the alternative hypothesis moves upwards from 1. As the non-inferiority margin odds ratio decreases from 0.5, the score test closed form formula increasingly overestimates the sample size irrespective of the magnitude of the odds ratio under the alternative hypothesis. The Wald test closed form formula is also reasonably accurate in the cases where the score test closed form formula works well. Outside these scenarios, the Wald test closed form formula can either underestimate or overestimate the sample size, depending on the magnitude of the non-inferiority margin odds ratio and the odds ratio under the alternative hypothesis. Although neither approximation is accurate for all cases, both approaches lead to satisfactory sample size calculation for non-inferiority trials with binary data where the odds ratio is the parameter of interest.
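A generic Wald-test-based closed form of the kind discussed here sizes the trial from the variance of the log odds ratio. The sketch below is a textbook-style approximation for illustration and not necessarily the paper's exact formula; the parameter values are invented:

```python
from statistics import NormalDist
import math

def n_per_arm_wald(p_control, or_alt, or_margin, alpha=0.025, power=0.9):
    """Approximate per-arm sample size for a non-inferiority test on the
    odds ratio, using the Wald variance of the log odds ratio.
    A generic approximation, not the paper's exact formula."""
    z_a = NormalDist().inv_cdf(1 - alpha)
    z_b = NormalDist().inv_cdf(power)
    # Experimental-arm proportion implied by the alternative odds ratio.
    odds_c = p_control / (1 - p_control)
    p_exp = or_alt * odds_c / (1 + or_alt * odds_c)
    # Wald variance of log(OR) for one subject per arm.
    var = 1 / (p_control * (1 - p_control)) + 1 / (p_exp * (1 - p_exp))
    # Distance between the alternative and the non-inferiority margin
    # on the log odds-ratio scale.
    delta = math.log(or_alt) - math.log(or_margin)
    return math.ceil((z_a + z_b) ** 2 * var / delta ** 2)

# Hypothetical design: 50% control event rate, truly equal treatments
# (OR = 1), non-inferiority margin at OR = 0.5.
n = n_per_arm_wald(p_control=0.5, or_alt=1.0, or_margin=0.5)
```

As the abstract notes, tightening the margin towards 1 shrinks the denominator and rapidly inflates the required sample size, which is where closed-form approximations should be checked against simulation.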

Relevance: 60.00%

Abstract:

Forecasting wind power is an important part of a successful integration of wind power into the power grid. Forecasts with lead times longer than 6 h are generally made by using statistical methods to post-process forecasts from numerical weather prediction systems. Two major problems that complicate this approach are the non-linear relationship between wind speed and power production and the limited range of power production between zero and nominal power of the turbine. In practice, these problems are often tackled by using non-linear non-parametric regression models. However, such an approach ignores valuable and readily available information: the power curve of the turbine's manufacturer. Much of the non-linearity can be directly accounted for by transforming the observed power production into wind speed via the inverse power curve so that simpler linear regression models can be used. Furthermore, the fact that the transformed power production has a limited range can be taken care of by employing censored regression models. In this study, we evaluate quantile forecasts from a range of methods: (i) using parametric and non-parametric models, (ii) with and without the proposed inverse power curve transformation and (iii) with and without censoring. The results show that with our inverse (power-to-wind) transformation, simpler linear regression models with censoring perform equally or better than non-linear models with or without the frequently used wind-to-power transformation.
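The proposed transformation can be sketched as follows: invert the manufacturer's power curve on its monotone section to map observed production back to an effective wind speed, and treat observations at nominal power as censored rather than exact. The power curve values below are hypothetical:

```python
import numpy as np

# Hypothetical manufacturer power curve: wind speed (m/s) -> power (kW).
curve_speed = np.array([0.0, 3.0, 6.0, 9.0, 12.0, 25.0])
curve_power = np.array([0.0, 0.0, 400.0, 1500.0, 2000.0, 2000.0])

def power_to_wind(power):
    """Invert the power curve on its strictly increasing section
    (between cut-in and rated speed) to recover an effective wind speed
    by linear interpolation."""
    rising = slice(1, 5)  # 3-12 m/s, where the curve is monotone
    return np.interp(power, curve_power[rising], curve_speed[rising])

# Observed production mapped back to effective wind speeds. Values at
# nominal power (2000 kW) come out at rated speed (12 m/s), but a
# censored regression model would treat them as ">= 12 m/s": the true
# wind speed behind a flat-rated observation is unknown.
w = power_to_wind(np.array([0.0, 400.0, 1500.0, 2000.0]))
```

After this transformation the relationship to the forecast wind speed is roughly linear, so, as the abstract argues, a simple censored linear regression (fitting the censoring at rated speed explicitly) can replace a non-linear non-parametric model of the raw power.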

Relevance: 60.00%

Abstract:

Recruitment of patients to a clinical trial usually occurs over a period of time, resulting in the steady accumulation of data throughout the trial's duration. Yet, according to traditional statistical methods, the sample size of the trial should be determined in advance, and data collected on all subjects before analysis proceeds. For ethical and economic reasons, the technique of sequential testing has been developed to enable the examination of data at a series of interim analyses. The aim is to stop recruitment to the study as soon as there is sufficient evidence to reach a firm conclusion. In this paper we present the advantages and disadvantages of conducting interim analyses in phase III clinical trials, together with the key steps to enable the successful implementation of sequential methods in this setting. Examples are given of completed trials, which have been carried out sequentially, and references to relevant literature and software are provided.
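The motivation for properly designed sequential methods can be seen in a small simulation: repeatedly testing accumulating data at an unadjusted critical value inflates the type I error well above the nominal 5%, which is exactly what adjusted group-sequential boundaries prevent. This is an illustrative sketch, not a method from the paper:

```python
import random
import math

random.seed(42)

def trial_rejects(n_per_stage, n_stages, crit=1.96):
    """Simulate a trial under the null (no treatment effect), testing
    the accumulated z-statistic after each stage at an unadjusted
    two-sided 5% critical value; return True if any look rejects."""
    total, count = 0.0, 0
    for _ in range(n_stages):
        total += sum(random.gauss(0, 1) for _ in range(n_per_stage))
        count += n_per_stage
        z = total / math.sqrt(count)
        if abs(z) > crit:
            return True
    return False

n_rep = 4000
# Same total sample size, analysed once vs at five interim looks.
one_look = sum(trial_rejects(100, 1) for _ in range(n_rep)) / n_rep
five_looks = sum(trial_rejects(20, 5) for _ in range(n_rep)) / n_rep
```

With a single analysis the empirical rejection rate stays near 5%, while five unadjusted looks push it towards roughly 14%; sequential designs restore the overall 5% level by widening the per-look critical values.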