987 results for statistical methodology
Abstract:
Most statistical methodology for phase III clinical trials focuses on the comparison of a single experimental treatment with a control. An increasing desire to reduce the time before regulatory approval of a new drug is sought has led to the development of two-stage or sequential designs for trials that combine the definitive analysis associated with phase III with the treatment selection element of a phase II study. In this paper we consider a trial in which the most promising of a number of experimental treatments is selected at the first interim analysis. This considerably reduces the computational load associated with the construction of stopping boundaries compared to the approach proposed by Follmann, Proschan and Geller (Biometrics 1994; 50: 325-336). The computational requirement does not exceed that for the sequential comparison of a single experimental treatment with a control. Existing methods are extended in two ways. First, the use of the efficient score as a test statistic makes the analysis of binary, normal or failure-time data, as well as adjustment for covariates or stratification, straightforward. Second, the question of trial power is also considered, enabling the determination of the sample size required to give a specified power. Copyright © 2003 John Wiley & Sons, Ltd.
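The abstract does not spell out the test statistic. As a hedged illustration only, the sketch below computes an efficient score and Fisher information for binary outcomes in a Whitehead-style parameterization of the log-odds ratio, one of the data types mentioned above; it shows the general idea, not the paper's own procedure.

```python
# A minimal sketch (not the paper's code): efficient score Z and Fisher
# information V for a log-odds-ratio comparison of binary outcomes, in a
# form commonly used for sequential trial monitoring.

def score_and_information(s_e, n_e, s_c, n_c):
    """Efficient score Z and information V for experimental vs control
    binary data: s_* successes out of n_* patients."""
    n = n_e + n_c          # total patients
    s = s_e + s_c          # total successes
    z = (n_c * s_e - n_e * s_c) / n          # efficient score for the log-odds ratio
    v = n_e * n_c * s * (n - s) / n**3       # approximate Fisher information
    return z, v

# Example: 40/100 successes on the experimental arm vs 28/100 on control.
z, v = score_and_information(40, 100, 28, 100)
print(z, v, z / v**0.5)   # z / sqrt(v) behaves roughly like a standard normal under H0
```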
Abstract:
This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, and that it also allows us to identify the sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also provides a new understanding of how resampling techniques such as the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested that relative measures, such as the observed-to-hidden ratio or the completeness-of-identification proportion, be used to approach the question of sample size choice.
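For concreteness, the sketch below computes the two estimators named in the abstract from the frequency-of-frequencies counts; the formulas are the standard Chao lower bound and Zelterman estimators, and the example counts are hypothetical.

```python
# A minimal sketch (not the authors' code) of two classical population-size
# estimators used in continuous-time capture-recapture, computed from
# f1 = units identified exactly once, f2 = units identified exactly twice,
# n = total number of distinct units observed.
import math

def chao_lower_bound(n, f1, f2):
    """Chao's lower-bound estimator: N_hat = n + f1^2 / (2 * f2)."""
    return n + f1**2 / (2.0 * f2)

def zelterman(n, f1, f2):
    """Zelterman's estimator: N_hat = n / (1 - exp(-lambda_hat)),
    with the truncated-Poisson rate estimate lambda_hat = 2 * f2 / f1."""
    lam = 2.0 * f2 / f1
    return n / (1.0 - math.exp(-lam))

# Hypothetical counts: 250 distinct units observed, 120 seen once, 60 seen twice.
n, f1, f2 = 250, 120, 60
print(chao_lower_bound(n, f1, f2), zelterman(n, f1, f2))
```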
Abstract:
It is now possible to assay a large number of genetic markers from patients in clinical trials in order to tailor drugs with respect to efficacy. The statistical methodology for analysing such massive data sets is challenging. The most popular type of statistical analysis is to use a univariate test for each genetic marker once all the data from a clinical study have been collected. This paper presents a sequential method for conducting an omnibus test for detecting gene-drug interactions across the genome, thus allowing informed decisions at the earliest opportunity and overcoming the multiple testing problems that arise from conducting many univariate tests. We first propose an omnibus test for a fixed sample size. This test is based on combining F-statistics that test for an interaction between treatment and each individual single nucleotide polymorphism (SNP). As SNPs tend to be correlated, we use permutations to calculate a global p-value. We then extend our omnibus test to the sequential case. In order to control the type I error rate, we propose a sequential method that uses permutations to obtain the stopping boundaries. The results of a simulation study show that the sequential permutation method is more powerful than alternative sequential methods that control the type I error rate, such as the inverse-normal method. The proposed method is flexible, as we do not need to assume a mode of inheritance and can also adjust for confounding factors. An application to real clinical data illustrates that the method is computationally feasible for a large number of SNPs. Copyright (c) 2007 John Wiley & Sons, Ltd.
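The abstract outlines the fixed-sample version of the test: per-SNP interaction F-statistics are combined and a global p-value is obtained by permutation. The sketch below illustrates that idea under simplifying assumptions (a quantitative response, combination of F-statistics by summation, permutation of the treatment labels); the paper's exact construction, and its sequential extension, may differ.

```python
# A hedged illustration (not the paper's code) of a fixed-sample omnibus test:
# per-SNP F-statistics for a treatment-by-SNP interaction are summed, and a
# global p-value is obtained by permuting the treatment labels.
import numpy as np

def interaction_F(y, treat, snp):
    """F-statistic comparing y ~ 1 + treat + snp + treat*snp
    against the reduced model y ~ 1 + treat + snp (one extra parameter)."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return resid @ resid
    ones = np.ones_like(y)
    X_red = np.column_stack([ones, treat, snp])
    X_full = np.column_stack([ones, treat, snp, treat * snp])
    rss_red, rss_full = rss(X_red), rss(X_full)
    df2 = len(y) - X_full.shape[1]
    return (rss_red - rss_full) / (rss_full / df2)

def omnibus_p_value(y, treat, snps, n_perm=1000, seed=0):
    """Global permutation p-value for any treatment-by-SNP interaction."""
    rng = np.random.default_rng(seed)
    observed = sum(interaction_F(y, treat, snps[:, j]) for j in range(snps.shape[1]))
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(treat)          # permute treatment labels
        stat = sum(interaction_F(y, perm, snps[:, j]) for j in range(snps.shape[1]))
        count += stat >= observed
    return (count + 1) / (n_perm + 1)
```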
Abstract:
Imputation is commonly used to compensate for item non-response in sample surveys. If we treat the imputed values as if they are true values, and then compute the variance estimates by using standard methods, such as the jackknife, we can seriously underestimate the true variances. We propose a modified jackknife variance estimator which is defined for any without-replacement unequal probability sampling design in the presence of imputation and non-negligible sampling fraction. Mean, ratio and random-imputation methods will be considered. The practical advantage of the method proposed is its breadth of applicability.
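As a rough illustration of the general idea (not the proposed estimator, which covers arbitrary without-replacement unequal-probability designs), the sketch below shows a delete-one jackknife for the simplest case of mean imputation under simple random sampling, with the imputation redone within each replicate rather than treating imputed values as true values.

```python
# A heavily hedged sketch of adjusted jackknife variance estimation under
# mean imputation, for the simple random sampling case only.
import numpy as np

def jackknife_variance_mean_imputation(y, responded):
    """Delete-one jackknife variance of the imputed sample mean.
    y[i] is observed when responded[i] is True; missing values are
    mean-imputed, and imputed values are recomputed in each replicate
    so that the imputation is redone without the deleted respondent."""
    y = np.asarray(y, dtype=float)
    responded = np.asarray(responded, dtype=bool)
    n = len(y)
    ybar_r = y[responded].mean()                  # respondent mean
    y_imp = np.where(responded, y, ybar_r)        # mean-imputed data

    replicates = []
    for j in range(n):
        keep = np.ones(n, dtype=bool)
        keep[j] = False
        if responded[j]:
            # re-impute with the respondent mean computed without unit j
            ybar_rj = y[responded & keep].mean()
            y_rep = np.where(responded, y, ybar_rj)
        else:
            y_rep = y_imp
        replicates.append(y_rep[keep].mean())
    replicates = np.array(replicates)
    # standard delete-one jackknife variance (variants differ in the centring term)
    return (n - 1) / n * np.sum((replicates - replicates.mean()) ** 2)
```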
Abstract:
Background. Meta-analyses show that cognitive behaviour therapy for psychosis (CBT-P) improves distressing positive symptoms. However, it is a complex intervention involving a range of techniques. No previous study has assessed the delivery of the different elements of treatment and their effect on outcome. Our aim was to assess the differential effect of type of treatment delivered on the effectiveness of CBT-P, using novel statistical methodology. Method. The Psychological Prevention of Relapse in Psychosis (PRP) trial was a multi-centre randomized controlled trial (RCT) that compared CBT-P with treatment as usual (TAU). Therapy was manualized, and detailed evaluations of therapy delivery and client engagement were made. Follow-up assessments were made at 12 and 24 months. In a planned analysis, we applied principal stratification (involving structural equation modelling with finite mixtures) to estimate intention-to-treat (ITT) effects for subgroups of participants, defined by qualitative and quantitative differences in receipt of therapy, while maintaining the constraints of randomization. Results. Consistent delivery of full therapy, including specific cognitive and behavioural techniques, was associated with clinically and statistically significant increases in months in remission, and decreases in psychotic and affective symptoms. Delivery of partial therapy involving engagement and assessment was not effective. Conclusions. Our analyses suggest that CBT-P is of significant benefit on multiple outcomes to patients able to engage in the full range of therapy procedures. The novel statistical methods illustrated in this report have general application to the evaluation of heterogeneity in the effects of treatment.
Abstract:
Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models, but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values, and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data; and we then use these estimates of our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
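The construction described above can be sketched on a toy model: pilot simulations of (parameter, data) pairs, a linear regression of the parameter on the data to estimate the posterior mean, and plain rejection ABC using the fitted predictor as the summary statistic. The toy model and tuning constants below are illustrative only, not the paper's implementation.

```python
# A minimal sketch of the semi-automatic summary-statistics idea on a toy
# model (observations ~ Normal(theta, 1)).
import numpy as np

rng = np.random.default_rng(1)
n_obs = 20
y_obs = rng.normal(2.0, 1.0, n_obs)            # "observed" data

def simulate(theta):
    return rng.normal(theta, 1.0, n_obs)

# Pilot stage: simulate (theta, data) pairs and regress theta on the data.
theta_pilot = rng.uniform(-5, 5, 2000)
X_pilot = np.array([simulate(t) for t in theta_pilot])
A = np.column_stack([np.ones(len(theta_pilot)), X_pilot])
coef, *_ = np.linalg.lstsq(A, theta_pilot, rcond=None)

def summary(x):
    """Fitted linear predictor = estimated posterior mean of theta given x."""
    return coef[0] + x @ coef[1:]

# Rejection ABC using the constructed summary statistic.
s_obs = summary(y_obs)
theta_prop = rng.uniform(-5, 5, 20000)
s_sim = np.array([summary(simulate(t)) for t in theta_prop])
accepted = theta_prop[np.abs(s_sim - s_obs) < 0.1]
print(accepted.mean(), accepted.std())          # approximate posterior mean and sd
```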
Abstract:
Ever since the classic research of Nicholls (1976) and others, effort has been recognized as a double-edged sword: whilst it might enhance achievement, it undermines academic self-concept (ASC). However, there has not been a thorough evaluation of the longitudinal reciprocal effects of effort, ASC and achievement in the context of modern self-concept theory and statistical methodology. Nor have there been developmental equilibrium tests of whether these effects are consistent across the potentially volatile early-to-middle adolescence. Hence, focusing on mathematics, we evaluate reciprocal effects models over the first four years of secondary school, relating effort, achievement (test scores and school grades), ASC, and ASC×effort interactions for a representative sample of 3,421 German students (mean age = 11.75 years at Wave 1). ASC, effort and achievement were positively correlated at each wave, and there was a clear pattern of positive reciprocal effects among ASC, test scores and school grades—each contributing to the others after controlling for the prior effects of all the others. There was an asymmetrical pattern of effects for effort that is consistent with the double-edged-sword premise: prior school grades had positive effects on subsequent effort, but prior effort had non-significant or negative effects on subsequent grades and ASC. However, on the basis of a synergistic application of new theory and methodology, we predicted and found a significant ASC-by-effort interaction, such that prior effort had more positive effects on subsequent ASC and school grades when prior ASC was high—thus providing a key to breaking the double-edged sword.
Abstract:
Financial ratio analysis has been widely used since the end of the last century to evaluate financial statements; by the 1960s, however, it had fallen into discredit. Around that time, Beaver, drawing on the existing literature, empirically tested many beliefs held as true and reached different results. To do so, he used univariate statistical methodology, that is, he analysed each ratio in isolation. Altman challenged the methodology used by Beaver and created a model based on multivariate statistics, in which several ratios are studied together and a cut-off score is established. Other authors, building on these two studies, sometimes criticizing and sometimes comparing them, constructed further models: Deakin, Blum, Libby, Kennedy, Kanitz, Zappa, Collongues, Conan, Holder, C.E.S.A. and others not covered in this work. This study presented the work of each author individually and then discussed the criticisms and comparisons these models received, seeking to show that such criticisms and comparisons stimulated the development of financial analysis, to the point that Altman, prompted by criticism and by his interest in updating the data, reformulated his original model (1968) and created the Zeta model (1977). The second concern of this work was to highlight some points that could be drawn from the available material. It therefore asked which ratios were most often used by the models and concluded that there is no small set of ratios that best discriminates between bankrupt and non-bankrupt firms. Another question was whether these models are suitable for all types and sizes of companies, and the conclusion was that they are, provided the analyst adapts them to his or her particular needs. Finally, it examined the purposes these models can serve, pointing to commercial credit analysis, investment analysis, internal decision-making, the purchase and sale of shares, and so on. It is concluded, therefore, that the various models developed in recent years have indeed brought progress to financial analysis, but that their results must be carefully adapted to different settings and, moreover, that, given the development of other sciences, efforts can be combined in order to achieve further development.
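For reference, the multivariate model of Altman (1968) discussed above is usually quoted as a linear discriminant score with cut-off zones; the coefficients and zones in the sketch below are the commonly cited ones and are not taken from this study.

```python
# A brief illustration of the kind of multivariate score discussed above:
# Altman's original (1968) Z-score, as it is usually quoted.

def altman_z(working_capital, retained_earnings, ebit,
             market_value_equity, sales, total_assets, total_liabilities):
    x1 = working_capital / total_assets
    x2 = retained_earnings / total_assets
    x3 = ebit / total_assets
    x4 = market_value_equity / total_liabilities
    x5 = sales / total_assets
    return 1.2 * x1 + 1.4 * x2 + 3.3 * x3 + 0.6 * x4 + 1.0 * x5

# Commonly cited interpretation: Z below roughly 1.81 signals distress,
# Z above roughly 2.99 signals a 'safe' zone, with a grey area in between.
```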
Abstract:
Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that such problems cannot be solved in polynomial time. Initially, these solutions were based on heuristics; currently, metaheuristics are more often used for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of what is called an 'Operon' heuristic for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses that are adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains that promote an 'intelligent' search of the solution space. The approach is applied to the Traveling Salesman Problem (TSP) through a transgenetic algorithm known as ProtoG. A strategy is also proposed for renewing part of the chromosome population, triggered when the coefficient of variation of the individuals' fitness values, computed over the population, falls below a minimum threshold. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first uses Logistic Regression, based on the probability that the algorithm under test finds an optimal solution for a TSP instance. The second uses Survival Analysis, based on the distribution of the execution time observed until an optimal solution is reached. The third uses a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison with the three other algorithms. For these sixty-one instances, statistical tests provide evidence that ProtoG performs better than the three other algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three experiments, in which performance was evaluated through PES, the average PES obtained with ProtoG was less than 1% in almost half of the instances, reaching its largest value, 3.52%, for one instance of 1,173 cities. ProtoG can therefore be considered a competitive algorithm for solving the TSP, since it is not rare in the literature for average PES values greater than 10% to be reported for instances of this size.
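The PES measure used in the evaluation is simple to state; the sketch below computes it for hypothetical tour lengths.

```python
# A small illustration (hypothetical values) of the Percent Error of the
# Solution (PES) used above: the percentage by which a tour found by an
# algorithm exceeds the best tour length reported in the literature.

def percent_error_of_solution(found_length, best_known_length):
    return 100.0 * (found_length - best_known_length) / best_known_length

# Example: a tour of length 21_350 against a best-known length of 21_282
# (hypothetical numbers) gives a PES of about 0.32%.
print(percent_error_of_solution(21_350, 21_282))
```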
Abstract:
This work studies the strategic management of catering establishments on the tourist route of Natal, by examining the strategic profile of the manager and the level of customer satisfaction with the quality of the services offered. It identifies the strategic profile prevalent in the sector studied, measures the level of customer satisfaction with the services and associates the two constructs in order to distinguish the services by strategic profile. The population consists of 33 restaurants, selected by convenience from a list of establishments associated with the Brazilian Association of Bars and Restaurants (ABRASEL), the Veja magazine food and drink guide for Natal 2011/2012 and information from local residents. The statistical methodology used is descriptive and bivariate analysis of the quantitative data. The quantitative variables show non-normality, as checked by the Shapiro-Wilk test; the Kruskal-Wallis test is therefore used to assess the association between the variables, with the Mann-Whitney test as a post-hoc test. The prevalent strategic profile in the restaurant sector of Natal is the analyzer, although other types were detected. The level of satisfaction with service quality is high, scoring approximately 5 points on a 6-point Likert scale. Clients can distinguish the quality of services between the different strategic profiles: the services provided under the prospector profile are distinguished from those of the other profiles, with the tangible-aspects dimension presenting the most noticeable difference. These variables shape the restaurant environment in the building of the strategic profile and are reflected in the service provided. The study concludes that the quality of the services provided by catering establishments is influenced by the type of establishment and its strategic profile, and that the study of this relationship offers establishments opportunities to develop and improve the quality of their services.
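The test sequence described (Shapiro-Wilk normality check, Kruskal-Wallis across profiles, Mann-Whitney post-test) can be sketched with standard scipy.stats routines; the groups and scores below are hypothetical placeholders, not the study's data.

```python
# A hedged sketch of the nonparametric test sequence described above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
defender   = rng.integers(3, 7, 30)   # hypothetical satisfaction scores, 6-point Likert scale
analyzer   = rng.integers(4, 7, 40)
prospector = rng.integers(4, 7, 25)

# Normality check for each group (non-normality motivates nonparametric tests).
print([stats.shapiro(g).pvalue for g in (defender, analyzer, prospector)])

# Kruskal-Wallis test: does satisfaction differ across strategic profiles?
print(stats.kruskal(defender, analyzer, prospector))

# Mann-Whitney post-test for a specific pair of profiles.
print(stats.mannwhitneyu(prospector, defender, alternative="two-sided"))
```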
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Epidemiological surveys are important for the deployment, implementation and evaluation of health projects and actions in a community. The planning, goals, sampling, team training/calibration, execution and publication of results are extremely important in epidemiological surveys. Thus, care with the sampling and statistical analysis is fundamental for the results to be consistent and trustworthy, so that they can be inferred for the whole population. The aim of this study is to investigate the statistical methodology used in papers on dental caries epidemiological surveys published from 1960 to 2001. A bibliographical survey was carried out in the BBO, MEDLINE and SCIELO databases. The papers found were analysed with regard to the statistical methodology applied throughout the study, from the sampling to the tabulation of the data. Most studies (72.6%) presented only the number of elements that composed the sample, without explaining the planning involved in obtaining it.
Abstract:
Includes a prologue by Ms. Alicia Bárcena
Abstract:
Includes bibliography